Hyper-V - a child of marketing or a real alternative?

    Hello, Habr! Let me ask you a question: what was once hugely popular, and even awe-inspiring for you personally, but today is remembered only out of nostalgia? Someone will surely name the Dendy or the Super Nintendo, and someone still has their pager lying around. Why am I asking? There is a saying that nothing lasts forever. In today's article we'll see whether that holds in our field too: should you abandon VMware in favor of Hyper-V for virtualization? We'll also touch on the advantages of both platforms and the process of moving from one to the other. Read on under the cut!



    I give the floor to the author.

    Disclaimer:


    1. This article is informational; it does not aim to stir up hype, only to share a story that may be useful to someone. Some things are purely individual, and the judgments are my own.
    2. No, I haven't sold out to MS. I simply spent a long time looking for articles of this kind as food for thought, found none, and had to write one myself.
    3. Why the MS blog? I don't have an invite of my own, but my comrades liked the idea of the article and offered to publish it.
    4. There will be no product PR, only a story about live testing and implementation.

    Lyrical digression
    We live in an amazing time. Or maybe a terrible one, depending on how you look at it. What I read about in science fiction books literally 20 years ago is now possible; there, the future was set 200-500-1000 years ahead. Flights to other planets, leaving our solar system, "apple trees blossoming on Mars" - all of it seemed distant and unattainable.

    And now we have (well, almost) a space nuclear engine, a plan to fly to Mars in 2024, and a probe beyond the bounds of our solar system.

    So what am I leading up to? All of this became possible thanks to (or in spite of) rapidly developing computer technology. And it is one such technology we'll talk about now.

    Epigraph


    Once upon a time there was a company. Neither big nor small, neither tall nor short - a perfectly ordinary mid-sized business. It lived quietly with a few racks of equipment, old, inherited from its parent. Then the time came to refresh the whole estate. The comrades priced out new hardware, pondered, and decided to implement virtualization. And the year was an early one: of the glorious family of general-purpose virtualization, only VMware existed then. So VMware it was. Time passed, the tasks changed, and other members of the glorious virtualization family grew up. The time came to choose a representative once again...

    The main question of the IT professional is “Why?”
    (Or “What the hell?”)


    Let me introduce myself. My name is Anton, and I head the infrastructure solutions department at one of the largest Russian retailers. Like any self-respecting organization, we use virtualization and, of course, our "beloved" 1C. We implemented VMware long ago and lived with it reasonably well (though it has contributed its share of my gray hair), but, as with any technology, you have to look around periodically and learn about alternative solutions.

    Our migration story began when I noticed Hyper-V in the same corner of the Gartner Magic Quadrant as VMware. That set me thinking. The result of that thinking is the "for/against" table below. Add to that the famous VMware bugs with CBT (Changed Block Tracking)... just recently, and twice, in two different releases. Straight fire!

    Hype minute
    I immediately recall a joke:

    "How do you know someone is an ardent vegan? You don't. They'll tell you themselves."
    So it is here. How do you recognize an ardent red-eyed Linux zealot? You don't. He'll tell you himself that Linux is God's grace and Windows is the spawn of the prince of darkness.

    2x354 haters will instantly take a fighting stance and, spraying spittle, start recounting how Microsoft updates break the entire OS to hell. Yes, I won't argue there: my comrades have had their share of such merry presents. But on the whole, in my opinion, Microsoft has perfected the process of evolution. Revolution is not their thing, but evolution is their strong suit. And everyone picks whichever is closer to them.

    Let me make a caveat right away: there was a feature-by-feature comparison too, but in real life no one "of sound mind and firm memory" builds a cluster at the limit values. Besides, on paper the two look almost like twin brothers, and I personally see no fundamental difference in how many hundreds of cores can be handed to a single virtual machine.

    A minute of holy war
    Why do I consider so many of VMware's "killer features" to be mere marketing?

    Fault Tolerance. Seriously? Have you read the limitations? Do you actually use it in production? If so, then I sincerely, humanly pity you... In all my years I have never seen anyone genuinely benefit from it.

    USB and PCI device passthrough. Also a highly debatable point. These features deprive a virtual machine of the main advantage of virtualization: free migration between hosts. We used PCI passthrough, but as soon as we could give it up, we breathed a sigh of relief. For USB, both software and hardware passthrough solutions were invented long ago. Much simpler.

    Read caching on local SSDs. Yes, when it came out I was thrilled about this feature. But in practice no gain was visible, not even on synthetic tests. And in the production environment I periodically caught this subsystem hanging hard (I'm not claiming the system was at fault; maybe my crooked hands did something wrong). And the cherry on top: it caches only blocks of a particular size, so you have to spend a lot of time collecting statistics on disk request sizes and deciding which virtual machine deserves priority access to this technology.

    Hyper-V, on the other hand, can shrink a virtual disk out of the box. Do you know how many times I dreamed of that in VMware? Far more than you can imagine.
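    For illustration, here is a minimal PowerShell sketch of such a shrink (the path and sizes are hypothetical; the partition inside the guest must be shrunk first so that the free space sits at the end of the disk):

        # Assumes the guest partition has already been shrunk inside the VM.
        # Online resize works for VHDX disks attached to a SCSI controller.
        Import-Module Hyper-V

        # Shrink the virtual disk to 60 GB (hypothetical path and size):
        Resize-VHD -Path 'C:\VMs\app01\disk0.vhdx' -SizeBytes 60GB

        # Or shrink it to the minimum size the current contents allow:
        Resize-VHD -Path 'C:\VMs\app01\disk0.vhdx' -ToMinimumSize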

    One more point. Switching to another hypervisor is an individual decision, but here is my list of stop factors; if any of them apply to you, then in my opinion moving to Hyper-V is definitely not worth it. Or at least think everything through and test it very carefully.

    1. Most of your servers run Linux as the guest OS.
    2. You need to run exotic operating systems.
    3. You need ready-made virtual appliances from vendors (I think this one is just a matter of time).
    4. You do not like Microsoft.
    5. You got VMware for free with the hardware.

    Reflection table

    | For switching to Hyper-V | Against switching to Hyper-V |
    | Lower VMware licensing costs | The VMware platform is well known |
    | Azure is built on the same platform | Distribution size (spoiler: Nano Server is not an ESXi analog; it is a different ideology and positioning) |
    | Interesting network virtualization | Simple licensing scheme |
    | Replication to other storage systems by built-in means | Support for a large number of guest OSes |
    | Bonuses when buying the virtualization bundle (the CIS suite, which includes Windows Datacenter + System Center) | VMware is already up and running |
    | Various goodies when deploying Windows servers | No support for the hypervisor as a separate product |
    | Disks can be shrunk on the fly | VDI here is fit only for labs/tests, not for production |
    | Better support for new versions of Windows | Interesting turnkey virtualization offerings, where you buy hardware and software from one vendor and get a single management console and a single point of support |
    | This is Microsoft | This is Microsoft |

    Leap of Faith


    I thought and pondered for a long time, but then the stars aligned and we refreshed the server fleet. The old servers remained - decent machines, just slow by today's standards and morally obsolete. So a strategic decision was made: build the development farm on Hyper-V. We hauled the servers to a new site, updated all the firmware, and off we went.

    The test plan was simple:

    1. Take a server.
    2. Install ESXi on it. Change nothing; default settings.
    3. Deploy a virtual machine.
    4. Run the tests 5 times:

      a) for 1C, the Gilev test;

      b) for SQL, a write-load script (a sketch of such a script is given after the plan).
    5. Configure everything according to best practices.
    6. Run the tests 5 times:

      a) for 1C, the Gilev test;

      b) for SQL, the write-load script.
    7. Install Hyper-V. Change nothing; default settings.
    8. Deploy a virtual machine.
    9. Run the tests 5 times:

      a) for 1C, the Gilev test;

      b) for SQL, the write-load script.
    10. Configure everything according to best practices.
    11. Run the tests 5 times:

      a) for 1C, the Gilev test;

      b) for SQL, the write-load script.
    12. Install everything on a physical Windows Server machine, configure it according to best practices, and run the tests.
    13. Compare and think.
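    The article does not include the SQL write script itself, so below is a minimal sketch of the idea, assuming the SqlServer PowerShell module, a hypothetical server name, and a hypothetical test database; the real script and its workload profile may have differed:

        # Hypothetical write-load test: time a series of single-row inserts.
        Import-Module SqlServer   # provides Invoke-Sqlcmd

        $server = 'sql-test01'    # hypothetical SQL Server instance
        $db     = 'LoadTest'      # hypothetical test database

        # Create the test table once.
        Invoke-Sqlcmd -ServerInstance $server -Database $db -Query `
            'IF OBJECT_ID(''dbo.WriteTest'') IS NULL CREATE TABLE dbo.WriteTest (Id INT IDENTITY PRIMARY KEY, Payload CHAR(100));'

        # Time 10,000 inserts; the elapsed time is the test result.
        $elapsed = Measure-Command {
            for ($i = 0; $i -lt 10000; $i++) {
                Invoke-Sqlcmd -ServerInstance $server -Database $db -Query `
                    "INSERT INTO dbo.WriteTest (Payload) VALUES (REPLICATE('x', 100));"
            }
        }
        "Write test took $($elapsed.TotalSeconds) seconds"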

    Hardware: Dell FC630, two Intel Xeon E5-2643 v4 processors (chosen specifically for 1C), 512 GB of RAM.
    Storage: a SAN based on Dell SC200 with read-intensive SSDs.


    We got these results:

    VMware without best practices:

    | Run | Gilev test | SQL test |
    | 1 | 22.42 | 12.2 |
    | 2 | 18.6 | 17.51 |
    | 3 | 18.12 | 7.12 |
    | 4 | 26.74 | 7.18 |
    | 5 | 26.32 | 4.22 |

    VMware with best practices:

    | Run | Gilev test | SQL test |
    | 1 | 26.46 | 4.28 |
    | 2 | 26.6 | 6.38 |
    | 3 | 26.46 | 4.22 |
    | 4 | 26.46 | 6.56 |
    | 5 | 26.6 | 4.2 |

    Hyper-V without best practices:

    | Run | Gilev test | SQL test |
    | 1 | 27.17 | 4.32 |
    | 2 | 26.46 | 6.08 |
    | 3 | 26.04 | 4.24 |
    | 4 | 26.18 | 5.58 |
    | 5 | 25.91 | 6.01 |

    Hyper-V with best practices:

    | Run | Gilev test | SQL test |
    | 1 | 26.18 | 6.02 |
    | 2 | 27.62 | 6.04 |
    | 3 | 26.46 | 6.2 |
    | 4 | 26.74 | 4.23 |
    | 5 | 26.74 | 6.02 |

    Physical machine:

    | Run | Gilev test | SQL test |
    | 1 | 35.97 | 4.06 |
    | 2 | 32.47 | 4.04 |
    | 3 | 31.85 | 6.14 |
    | 4 | 32.47 | 5.55 |
    | 5 | 32.89 | 5.43 |

    Legend


    Gilev test - higher is better; the score is in abstract "parrots".

    SQL test - lower is better; the result is execution time.

    What was configured:

    1. Steps for preparing the Dell PowerEdge FC630 host

    1.1. Configure the host according to the recommendations from Dell:

    1.1.1. Enable Processor Settings -> Virtualization Technology - Enabled.

    1.1.2. Enable Processor Settings -> Logical Processor - Enabled.

    1.1.3. Enable System Profile Settings -> Turbo Boost (called Turbo Mode in the documentation) - enabled.

    1.1.4. Disable Memory Settings -> Node Interleaving (keeping it disabled enables NUMA) - disabled.

    1.1.5. Enable Power Management -> Maximum Performance - appears to be enabled.

    1.1.6. Disable unnecessary devices in Integrated Devices - did not touch these.

    1.1.7. Disable System Profile Settings -> C1E - disabled.

    1.1.8. Enable Processor Settings -> Hardware Prefetcher - Enabled.

    1.1.9. Enable Processor Settings -> Adjacent Cache Line Prefetch - Enabled.

    1.1.10. Enable Processor Settings -> DCU Streamer Prefetcher - Enabled.

    1.1.11. Enable Processor Settings -> Data Reuse - not found.

    1.1.12. Enable Processor Settings -> DRAM Prefetcher - not found.

    1.2. Configure the host according to the recommendations:

    1.2.1. Configure the Fibre Channel HBA.

    1.2.1.1. When the host boots, enter QLogic Fast!UTIL (Ctrl+Q).

    1.2.1.2 Select the first port.

    1.2.1.3 Reset Configuration Settings -> Restore Default Settings.

    1.2.1.4. Enable Configuration Settings -> Adapter Settings -> Host Adapter BIOS -> Enabled.

    1.2.1.5. Set Configuration Settings -> Adapter Settings -> Connection Options -> 1.

    1.2.1.6 Enable Configuration Settings -> Advanced Adapter Settings -> Enable LIP Reset -> Yes.

    1.2.1.7 Enable Configuration Settings -> Advanced Adapter Settings -> Enable LIP Full Login -> Yes.

    1.2.1.8. Set Configuration Settings -> Advanced Adapter Settings -> Login Retry Count -> 60.

    1.2.1.9. Set Configuration Settings -> Advanced Adapter Settings -> Port Down Retry Count -> 60.

    1.2.1.10. Set Configuration Settings -> Advanced Adapter Settings -> Link Down Timeout -> 30.

    1.2.1.11 Configure the second port according to items 1.2.1.3 - 1.2.1.10.

    2. Steps for testing on the VMware platform without best practices.

    2.1. Install VMware ESXi 5.5 with all updates.

    2.2. Make the necessary settings on VMware (we do not include the host in a cluster; we test it separately).

    2.3. Install Windows Server 2016 and all updates.

    2.4 Install "1C: Enterprise". We configure, if necessary, we default so far, version 1C - 8.3.10. (last).

    2.5. On a separate machine, install Windows Server 2016 with SQL Server 2016, with all updates.

    2.6. Run the tests (5 times).

    3. Steps for testing on the VMware platform with best practices.

    3.1.1. Place the swap file on an SSD datastore: Cluster -> Swap file location -> Store the swap file in the same directory as the VM; then Configuration -> VM Swapfile location -> Edit.

    3.1.2. Enabling the vSphere Flash Infrastructure layer is recommended - I am not sure how applicable that is in our reality.

    3.1.3. Configure SAN multipathing: Host -> Configuration -> Storage -> Manage Paths -> Path Selection -> Round Robin (a PowerCLI sketch of items 3.1.3-3.1.4 follows below).

    3.1.4. Enable Host -> Configuration -> Power Management -> Properties -> High Performance.
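    Items 3.1.3 and 3.1.4 can also be applied from PowerCLI instead of clicking through the client. A sketch, assuming a modern PowerCLI module and hypothetical vCenter/host names:

        # Hypothetical names; requires VMware PowerCLI.
        Import-Module VMware.PowerCLI
        Connect-VIServer -Server 'vcenter01'

        $esx = Get-VMHost -Name 'esx-test01'

        # 3.1.3: switch every SAN LUN to Round Robin multipathing.
        Get-ScsiLun -VmHost $esx -LunType disk |
            Set-ScsiLun -MultipathPolicy RoundRobin

        # 3.1.4: set the High Performance power policy (key 1 in the vSphere API).
        (Get-View $esx.ExtensionData.ConfigManager.PowerSystem).ConfigurePowerPolicy(1)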

    3.2. Configure the VM according to the recommendations:

    3.2.1. Use paravirtual disks: VM -> SCSI Controller -> Change Type -> Paravirtual.

    3.2.2. It is advisable to use Thick Provision Eager Zeroed disks.

    3.2.3. Enable VM -> Options -> CPU/MMU Virtualization -> Use Intel VT-x for instruction set and Intel EPT for MMU virtualization.

    3.2.4. Disable VM BIOS -> Legacy Diskette and VM BIOS -> Primary Master CD-ROM.

    4. Steps for testing on a Windows Server platform without best practices:

    4.1. Install Windows Server 2016 Datacenter and all updates on the host.

    4.2. Make the necessary settings on the host (enabling the Hyper-V role itself is a one-liner; see the sketch after this list).

    4.3. Deploy a virtual machine with Windows and all updates.

    4.4. Install "1C:Enterprise". Configure where necessary, leaving defaults for now; 1C version 8.3.10 (the latest at the time).

    4.5 On a separate machine, install Windows Server 2016 with SQL Server 2016 with all updates.
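    As mentioned in step 4.2, enabling the Hyper-V role is trivial. A sketch for a clean Windows Server 2016 host:

        # Enable the Hyper-V role plus management tools; the host reboots afterwards.
        Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

        # After the reboot, verify the role is installed:
        Get-WindowsFeature -Name Hyper-V | Select-Object Name, InstallState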

    5. Steps for testing on the Windows Server platform with best practices.
    Best practices are described here, here, and here.

    5.1. Configure the host according to the recommendations:

    5.1.1. Enable MPIO:

    Enable-WindowsOptionalFeature -Online -FeatureName MultiPathIO
    (Get-WindowsOptionalFeature -Online -FeatureName "MultiPathIO").State

    5.2. Configure the VM according to the recommendations:

    5.2.1. Use a Generation 2 VM.

    5.2.2. Use fixed-size disks in the VM.
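    As a follow-up to the MPIO activation above, the default load-balancing policy can also be set from PowerShell. A sketch; Round Robin here is my assumption to mirror the VMware side, not a requirement from the original checklist:

        # Requires the MultiPathIO feature enabled and a reboot beforehand.
        Import-Module MPIO

        # Set Round Robin as the default MPIO policy for newly claimed LUNs.
        Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
        Get-MSDSMGlobalDefaultLoadBalancePolicy   # should now print RR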








    Is there life on Mars?


    It seemed that life had worked out: the tests showed that the calculations and bets were right, and the long-desired nirvana was about to arrive... So I thought and hoped, right up until we set up a cluster for the developers in test mode.

    I won't lie: the installation really is simple and straightforward. The system itself checks everything it needs to be happy, and if something is missing, it shows a detailed report about what is wrong and even offers advice on how to fix the problem. In this respect I liked the Microsoft product much more.

    I immediately remembered a five-day correspondence with VMware technical support about a problem during the move to 5.5. It turned out to be a funny thing: if you create a separate account on the SQL server for the vSphere connection, the password must be no longer than 14 characters (or 10, I don't remember now), because the system silently truncates the rest of the password as an unneeded part. Perfectly reasonable behavior, indeed.

    But all the fun began later. One server crashed and refused to see its network card (in the end, the OS had nothing to do with it). Then the servers began to lose quorum. Then the servers began to drop out of the cluster at random. VMM barely worked and often simply could not connect to the farm. Then hosts began to go into a paused state in the cluster. Then, during migration, machines began to show up on two hosts at once. In short, the situation looked close to a disaster, or so we thought.

    But we gathered our courage and decided to fight anyway. And you know what? Everything worked out. The network card problems turned out to be hardware; the cluster problem went away once the network was configured correctly; and after we reinstalled the host OS and VMM as English-language versions, everything became downright good. And then I felt sad... It's 2017, and you still have to install English Windows to have fewer problems. That's an epic fail in my opinion. The bonus, though, was that searching the web for error messages became far easier.

    As a result, the cluster came up, VMM works correctly, and we began handing out virtual machines to users.

    By the way, a separate cauldron in hell is reserved for whoever designed the VMM interface and logic... To say it is incomprehensible is to say nothing. On first opening it, I had the distinct feeling I was looking at the dashboard of an alien ship. The shapes seem familiar, but there is no understanding of what is what or why. Maybe in many years I will get used to it. Or just memorize the actions like a monkey.

    So how does it feel once you've finally started the tractor?


    Overall, my emotions and impressions from the transition are positive. The templates and their capabilities for Microsoft OSes bear no comparison with VMware's analogs. They are simply very convenient, with a huge number of bells and whistles, most of which are quite sensible. For now we are running the developer cluster on it and getting used to the new life.
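    To give a feel for it: deploying a VM from a template in VMM comes down to a few cmdlets. A sketch with hypothetical template, host, and VM names (VMM 2016 cmdlets):

        # Hypothetical names; run from a machine with the VMM console installed.
        Import-Module VirtualMachineManager
        Get-SCVMMServer -ComputerName 'vmm01' | Out-Null

        $template = Get-SCVMTemplate -Name 'Win2016-Std-Template'
        $config   = New-SCVMConfiguration -VMTemplate $template -Name 'dev-vm-042'

        # Pin placement to a specific host (placement can also be automatic).
        $vmHost = Get-SCVMHost -ComputerName 'hv-host01'
        Set-SCVMConfiguration -VMConfiguration $config -VMHost $vmHost

        New-SCVirtualMachine -Name 'dev-vm-042' -VMConfiguration $config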

    I was also very pleasantly surprised by the matter of migrating machines from VMware. At first I read forums, hunted for software, and wondered how it would go. It turned out everything had already been invented for me. We connected vCenter to VMM under a couple of accounts and said, straight from VMM: "Dear comrade, kindly migrate this virtual machine to the new hypervisor." And the funny thing is, it migrated. On the first try. Without a tambourine and without errors. In the end, the migration I had planned a week of testing for fit into 40 minutes, of which 20 were the migration itself.
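    For the curious: the same conversion can be scripted with VMM's V2V cmdlet (in the GUI it is the "Convert Virtual Machine" wizard). A sketch; every name below is hypothetical, and vCenter must already be added to VMM:

        # The source VM is visible to VMM through the connected vCenter.
        $sourceVm = Get-SCVirtualMachine -Name 'old-esx-vm01'
        $target   = Get-SCVMHost -ComputerName 'hv-host01'

        # Convert the VMware VM into a Hyper-V VM on the target host.
        New-SCV2V -VM $sourceVm -VMHost $target `
            -Name 'old-esx-vm01' -Path 'C:\ClusterStorage\Volume1'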

    What is missing:

    1. A small distribution tailored specifically for virtualization (an ESXi analog).
    2. A normal management console (the current one is inconvenient, especially after VMware's tooling, but there is hope in the Honolulu project; judging by the technical preview, that product should finally deliver the desired ease of management).
    3. Technical support for the virtualization product itself. Yes, I know Premier Support exists, but that is not what I want.

    To summarize (if you are too lazy to read the article):


    1. Today the performance of the two platforms is about the same.
    2. 1C performance is the same.
    3. In Hyper-V, virtual disks can be both grown and shrunk. Online, too.
    4. Very, really very simple migration from VMware.
    5. Support, in the usual sense of the word, is a sore spot.
    6. VMM is an extremely inconvenient thing, especially after vCenter. On the other hand, VMM is just a graphical shell over PowerShell, so everything can be driven from the familiar PowerShell CLI (see the sketch after this list).
    7. The transition requires retraining and understanding the subtleties of Hyper-V. Many things and ideological approaches differ.
    8. Gorgeous templates for Windows virtual machines. Amazingly convenient.
    9. Money is saved.
    10. The Software-Defined Storage implementation is more interesting, in my opinion, but that is a matter of taste.
    11. Respect for the fact that all of Azure is built on Microsoft's own technologies, which then arrive on-premises.
    12. Simple and very tight integration with the cloud.
    13. Good network virtualization, with many interesting points.
    14. In my opinion, VDI is not Microsoft's and Hyper-V's strong suit. On the other hand, application streaming (RemoteApp) is done very soundly, and for most companies it will be little worse than the same Citrix.
    15. Weak support from third-party vendors for ready-made Hyper-V virtual images (I assume the phenomenon is temporary).
    16. A very strange new licensing policy (per core).
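    On point 6: the console shows the generated PowerShell script for most actions, so day-to-day tasks move to the CLI easily. A sketch with hypothetical names - list the farm's machines and live-migrate one between cluster nodes:

        # Hypothetical names; requires the VirtualMachineManager module.
        Import-Module VirtualMachineManager
        Get-SCVMMServer -ComputerName 'vmm01' | Out-Null

        # A quick inventory of the farm:
        Get-SCVirtualMachine | Select-Object Name, Status, VMHost

        # Live-migrate one VM to another cluster node:
        $vm     = Get-SCVirtualMachine -Name 'dev-vm-042'
        $target = Get-SCVMHost -ComputerName 'hv-host02'
        Move-SCVirtualMachine -VM $vm -VMHost $target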

    About the author


    Anton Litvinov has spent the last 6 years at 585/Zolotoy. He went from network engineer to head of the infrastructure solutions department and, as a result, combines Dr. Jekyll and Mr. Hyde: a full-stack engineer and a manager. In IT for about 20 years.
