Hyper-V 3.0 vs. ... or a Suicidal Holy War
Good day to all!
Today's topic is, as I personally see it, highly relevant, if not outright painful ...
Yes, yes, I decided to run a functional and economic comparison of the two leading hypervisor platforms - Hyper-V 3.0 and VMware ESXi 5.0/5.1 ... and then I decided to throw Citrix XenServer 6 into the comparison as well ...
I can already hear the stones rolling into my garden, but we will hold the line to the end - so take up your position, side, or camp, whichever suits you, and let the battle begin under the cut ...
Comparison criteria
Before starting this fight to the death, I suggest defining the criteria - the domains within which features will be grouped and compared across the platforms. So:
1) Scalability and performance
2) Security and network
3) Infrastructure flexibility
4) Fault tolerance and continuity of business processes
Categories are given - let's get started!
Scalability and performance
To begin with, I suggest taking a look at the scalability comparison table for our test subjects:
As the table shows, the leader here is clear and unambiguous - Hyper-V 3.0. And it is worth mentioning that the performance and scalability of Microsoft's hypervisor do not depend on how much you pay for it. With VMware the situation is the opposite: the more evergreen bills you hand over, the more features you get.
That is why VMware gets two columns in the comparison. XenServer is clearly neither here nor there - it got stuck somewhere in the middle ...
However, if we recall Citrix's numerous statements that "we are not a virtualization company, but a company that builds traffic-optimization solutions", then XenServer is simply a tool for achieving that goal. Still, the fact that Hyper-V and Citrix Xen are distant relatives - and yet differ so much in scalability - is depressing.
What really irritated me about VMware was the constant wobbling of their RAM licensing policy - the so-called vRAM - WHICH COULD FAIRLY BE CALLED EXTORTION. Not only did you have to license the amount of RAM your virtual machines consume, but on upgrade a delta was also calculated against this very vRAM parameter (since no such parameter existed in ESX/ESXi 4.x at all) - and people got hit with serious bills. True, with the release of 5.1 VMware came to its senses and abandoned this delusional scheme - but such harsh pricing convulsions can hardly testify to anything good ...
From a technological point of view, I personally expected a neck-and-neck race - but no, it did not happen, and my interest faded a little ... I think the first comparison block needs no further comments - it is quite transparent.
I propose moving on to the next part of this block - an overview of features related to performance and scalability. Let's consider how each platform interacts with the disk subsystem; see the table below:
Looking at this table, the following points stand out:
1) Native support for 4K sectors in virtual disks is quite critical for demanding applications. High-performance DBMSs are usually placed on disks with large stripe sizes. Official support for this mechanism is present only in Hyper-V - I could not find any mention of it in the official documentation of the competing vendors. (A configuration sketch follows this list.)
2) The maximum size of a LUN connected directly to a VM - here the situation is as follows. For Citrix and VMware this parameter depends directly on the hypervisor itself, while for Hyper-V it depends rather on the guest OS. If you use Windows Server 2012 as the guest OS, you can pass through LUNs larger than 256 TB; for the other hypervisors the ceiling is 64 TB for VMware and no higher than 15 TB for Citrix. (A pass-through sketch follows this list as well.)
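As an illustration of point 1, here is a minimal PowerShell sketch of creating a 4K-native virtual disk in Hyper-V 3.0. The path and size are hypothetical placeholders; the cmdlet and parameters ship with the Windows Server 2012 Hyper-V module, and 4K logical sectors require the newer VHDX format.

# Create a dynamically expanding VHDX with 4K logical and physical sectors
New-VHD -Path "D:\VHDs\sql-data.vhdx" -SizeBytes 500GB -Dynamic `
        -LogicalSectorSizeBytes 4096 -PhysicalSectorSizeBytes 4096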
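And for point 2, a sketch of connecting a physical LUN directly to a VM in Hyper-V. The VM name and disk number are assumptions; the disk must be offline on the host before it can be passed through.

# Find offline physical disks that are candidates for pass-through
Get-Disk | Where-Object { $_.IsOffline }

# Attach physical disk 2 to the VM's virtual SCSI controller
Add-VMHardDiskDrive -VMName "sql01" -ControllerType SCSI -DiskNumber 2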
From a functionality standpoint, XenServer looks very weak here - Fibre Channel virtualization support should already be the norm, at least at the platform level, but in this case the situation is not a positive example. Everyone has long grown used to VMware's "paid features" policy, so the ESXi platform itself can do all of this - the only question is whether you are ready to pay that much for it. The vSphere Enterprise Plus editions are far from cheap, and support must be purchased without fail.
Hardware offload of data transfers inside the SAN is also a very nice bonus. Where this used to be an exclusive VMware feature - vStorage API for Array Integration (VAAI) - Hyper-V 3.0 now has a similar technology in its arsenal: Offloaded Data Transfer (ODX). Most storage vendors have either already released firmware supporting this technology or are about to. Citrix is the sad case here - XenServer has no technologies for optimizing work with SAN systems at all.
Well, we have sorted out the disk subsystem - now let's see what resource management mechanisms our contenders have.
Feature-wise the picture here is more or less even, but there are points worth paying attention to:
1) Data Center Bridging (DCB) - behind this mysterious abbreviation hides nothing more than support for converged network adapters (Converged Network Adapter, CNA). A converged network adapter is one that has hardware support and acceleration both for ordinary Ethernet LANs and for SANs - i.e. the idea is that the transmission medium becomes unified (it can be fiber or twisted pair), while different protocols run over the same physical channel - SMB, FCoE, iSCSI, NFS, HTTP and others. From a management point of view this gives us the ability to dynamically reassign such adapters, redistributing them according to the need and type of load. If you look at the table, Citrix has no DCB support - there is a certain list of supported CNA adapters, but there is not a word about DCB in the official Citrix XenServer 6.0 documentation. (A DCB sketch follows this list.)
2) Quality of Service (QoS) support - essentially mechanisms for shaping traffic and guaranteeing the quality of the data channel. Everything here is basically fine for everyone except the free ESXi. (A QoS sketch also follows this list.)
3) Dynamic Memory - here the story turns out to be extremely interesting. According to Microsoft, the Dynamic Memory mechanisms were substantially reworked, allowing memory usage to be optimized and thereby increasing virtual machine density. On top of that comes a smart swap file configured per VM. Honestly, it is hard for me to judge how effective this mechanism is; however, when comparing the density of identical VMs on Hyper-V 2.0 and Hyper-V 3.0, the difference was almost 2x in favor of the latter (the machines ran Windows Server 2008 R2 as guests). And here a question arises - VMware has a whole arsenal of memory optimization features, including memory page deduplication, which supposedly works far more efficiently. But let's look at this situation more closely. VMware has four memory optimization technologies - memory ballooning, transparent page sharing, compression, and swapping. However, all four kick in only under increased load on host resources, i.e. they are reactive rather than proactive in nature. And if we factor in that most modern server platforms support large pages, 2 MB by default, which improves performance, the picture changes: ESXi cannot deduplicate such large blocks, so it first breaks them into smaller 4 KB blocks, which it then deduplicates - but all of these operations cost resources and time, so the usefulness of these technologies is not as clear-cut as it seems. (A Dynamic Memory sketch follows.)
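Here is the promised Dynamic Memory sketch - a minimal example with an assumed VM name and sizes; all parameters belong to the stock Set-VMMemory cmdlet in Windows Server 2012.

# The VM boots with 1 GB, can shrink to 512 MB and grow to 8 GB.
# Buffer is the percentage of extra memory kept in reserve; Priority
# decides who wins when the host runs low on physical RAM.
Set-VMMemory -VMName "web01" -DynamicMemoryEnabled $true `
    -MinimumBytes 512MB -StartupBytes 1GB -MaximumBytes 8GB `
    -Buffer 20 -Priority 80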
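Back to point 1: a hedged sketch of a basic DCB setup on a Windows Server 2012 host. The adapter name and the choice of priority 3 for SMB traffic are my assumptions; the cmdlets come with the built-in Data-Center-Bridging feature.

# Install DCB, tag SMB traffic with 802.1p priority 3, reserve bandwidth
# for that class via ETS, and make the priority lossless.
Install-WindowsFeature -Name Data-Center-Bridging
New-NetQosPolicy -Name "SMB" -SMB -PriorityValue8021Action 3
New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 40 -Algorithm ETS
Enable-NetQosFlowControl -Priority 3
Enable-NetAdapterQos -Name "CNA-Port1"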
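And for point 2, Hyper-V's network QoS in a single line - again with an assumed VM name. Note that -MaximumBandwidth takes bits per second, and minimum-bandwidth weights only apply if the virtual switch was created with -MinimumBandwidthMode Weight.

# Cap the VM's adapter at roughly 100 Mbit/s and give it a relative
# minimum-bandwidth weight of 50 for congestion periods.
Set-VMNetworkAdapter -VMName "web01" -MaximumBandwidth 100MB -MinimumBandwidthWeight 50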
Security and network
Well, it seems we have sorted out performance and scalability - time to move on to comparing security capabilities and the isolation of virtualized environments.
The following points should be highlighted here:
1) Single Root I/O Virtualization (SR-IOV) - essentially a mechanism for passing a physical adapter through into a virtual machine; however, in my opinion, comparing the VMware and Microsoft technologies in this area is a little incorrect. DirectPath I/O, the technology VMware uses, really does pass through the entire PCI Express device sitting in the slot - but the list of devices compatible with this feature is extremely small, and you effectively pin your VM to the host, depriving it of almost all the goodies in memory optimization, high availability, network traffic monitoring and so on. The exception here may be platforms based on Cisco UCS - and even then only in certain configurations. Citrix has exactly the same situation: despite SR-IOV support, a VM with this feature activated loses all migration capabilities as well as the ability to apply access lists (after all, such a card bypasses the virtual switch) - so it is all extremely limited. Microsoft, by contrast, virtualizes and profiles such an adapter: the VM can be made highly available and can migrate from host to host - provided each host has a card with SR-IOV support, in which case the SR-IOV adapter settings travel with the VM to the new host. (An SR-IOV sketch follows this list.)
2) Disk encryption and IPsec Offload - the picture here is unambiguous: no one except Hyper-V works with such mechanisms. Naturally, IPsec Offload requires a NIC with hardware support for the technology, but if you run a VM that leans heavily on IPsec, you will see a gain from using it. As for disk encryption, the mechanism is implemented with BitLocker; unfortunately, in our country its use is heavily constrained by regulation. The mechanism itself is interesting - encrypt the storage where the VMs live to prevent data leakage - but it will not see wide use until the legal and regulatory questions are resolved.
3) Virtual Machine Queue (VMQ) - a technology for optimizing VM traffic handling at the level of the host and its network interfaces. Honestly, it is hard for me to judge how much Dynamic VMQ (DVMQ) is better than plain VMQ - in practice I have not had the chance to compare them. It is claimed, however, that only Hyper-V has DVMQ. Well, such a thing exists and it is probably good, but I will leave this point without further comment, since for me personally there is no strong difference beyond the letters - I will be glad if someone sheds light on the difference and on the effect of the "dynamic" in the notorious VMQ. (The per-NIC offload sketch after this list shows where both knobs live.)
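The promised SR-IOV sketch, with hypothetical switch, adapter, and VM names. In Hyper-V, SR-IOV is opted into when the virtual switch is created - it cannot be bolted on afterwards.

# Create an SR-IOV-capable external switch, then give the VM's adapter
# a non-zero IOV weight so it is assigned a virtual function.
New-VMSwitch -Name "SRIOV-Switch" -NetAdapterName "10GbE-Port1" -EnableIov $true
Set-VMNetworkAdapter -VMName "web01" -IovWeight 50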
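And the per-NIC knobs from points 2 and 3, again with a hypothetical VM name - both parameters are part of the stock Set-VMNetworkAdapter cmdlet.

# Let the guest offload up to 512 IPsec security associations to the
# physical NIC, and give the adapter a VMQ weight so the host can
# spread its traffic across hardware queues.
Set-VMNetworkAdapter -VMName "vpn01" `
    -IPsecOffloadMaximumSecurityAssociation 512 -VmqWeight 100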
Let's continue - now I suggest taking a look at the virtual switch capabilities our hypervisors have in their arsenal:
1) Extensible switches - first it is worth clarifying what "extensibility" means. In my understanding it is the ability to add to and expand the functionality of an existing solution while preserving its integrity. Hyper-V and XenServer fit this ideology. But in VMware's case I would call their switch not "extensible" but "replaceable" - because as soon as you install the Cisco Nexus 1000V, it replaces the vDS rather than extending its capabilities. (The sketch after this list shows how Hyper-V enumerates switch extensions.)
2) Protection against spoofing and rogue packets - the situation here is simple. Free protection against unwanted packets and attacks is available only in Hyper-V. Citrix, unfortunately, has no such functionality, while with VMware you have to buy either vShield App or a third-party partner solution.
Among the other features it is worth noting that everyone except the free ESXi has access control on virtual network ports - i.e. with VMware this, again, comes only at extra cost. And lastly, I want to mention trunk mode straight into a VM, which only Hyper-V offers. A sketch of these switch-level settings follows.
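A hedged sketch of the Hyper-V switch features from this block - extension enumeration, port-security guards, and a trunk port into a VM. The switch, VM, and VLAN values are assumptions.

# List the extensions plugged into a virtual switch (capture, filtering,
# forwarding) - the "extensible" rather than "replaceable" model.
Get-VMSwitchExtension -VMSwitchName "External-Switch"

# Port-level protection: forbid MAC spoofing and drop rogue DHCP offers
# and router advertisements coming from inside the guest.
Set-VMNetworkAdapter -VMName "web01" -MacAddressSpoofing Off -DhcpGuard On -RouterGuard On

# Trunk mode straight into the VM: VLANs 10-20 arrive tagged, untagged
# traffic falls into native VLAN 10.
Set-VMNetworkAdapterVlan -VMName "fw01" -Trunk -AllowedVlanIdList "10-20" -NativeVlanId 10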
Infrastructure flexibility
Now let's compare the VM migration mechanisms, as well as the availability of VXLAN support.
1) Live migration - supported by everyone except the free ESXi. The number of simultaneous migrations is a very delicate parameter. With VMware it is technically capped at the hypervisor level, while with Hyper-V it is bounded only by common sense and the capabilities of the physical infrastructure. Citrix has no clear official data on how many simultaneous migrations are supported. Although Hyper-V looks more attractive on this point, the approach is also alarming: sure, it is very cool that you can migrate even 1000 VMs at once - but that will simply saturate your network. Out of ignorance this can happen; if you are on good terms with common sense, you set the desired limit yourself - and it helps that the default number of simultaneous migrations is 2, so be careful. As for live migration of VM storage, only Hyper-V and vSphere Enterprise Plus can boast of it. And only Hyper-V can migrate a VM from host to host (or anywhere else) without shared storage at all - the function is genuinely important and useful, and it is a pity only one hypervisor has it. (A migration-settings sketch follows this list.)
2) VXLAN - a mechanism for abstracting and isolating a virtual network with the possibility of dynamically expanding it. Such a mechanism is essential for large cloud environments and very large data centers built on network virtualization. Today only Hyper-V offers such a mechanism out of the box (strictly speaking, its network virtualization is based on NVGRE rather than VXLAN, but it solves the same problem); VMware can provide it too, but only for an additional fee, since you have to buy the third-party extension from Cisco.
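The migration sketch promised in point 1 - the host and VM names and the path are assumptions; the cmdlets are standard in Windows Server 2012 (authentication and migration networks still have to be configured on both hosts).

# Allow live migrations and keep the simultaneous count sane
# (2 is the default - raise it only if your network can take it).
Enable-VMMigration
Set-VMHost -MaximumVirtualMachineMigrations 2 -MaximumStorageMigrations 2

# "Shared-nothing" live migration: move the VM together with its storage
# to a host with which nothing is shared.
Move-VM -Name "web01" -DestinationHost "hv-node2" `
        -IncludeStorage -DestinationStoragePath "D:\VMs\web01"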
Fault tolerance and business continuity
Now let's look at the failover and fault-tolerance mechanisms present in the virtualization platforms.
1) Built-in backup - this mechanism is free only in Hyper-V; in XenServer it starts with the Advanced edition, and with VMware - with Essentials Plus. In my opinion it is a little strange not to build this mechanism in and make it free. Imagine the situation: you are evaluating virtualization, you have deployed your virtual machines - and then something goes wrong and everything is gone !!! Not a great look, I think - if you really want to push a platform into the market, do it with care for the customer (smile).
2) VM replication - a solution for disaster-recovery scenarios, when you need to be able to bring up current VM instances in another data center. VMware has a separate product for this - Site Recovery Manager (SRM) - and it costs money. Citrix has the same story - buy the Platinum edition and you get this functionality. In Hyper-V 3.0 it (Hyper-V Replica) is free. (A replication sketch follows this list.)
3) Guest application monitoring - a mechanism that watches the state of an application or service inside a VM and, based on that data, can take action on the VM, for example restarting it when necessary. Hyper-V has built-in functionality both for collecting these parameters and for acting on them; VMware has an API but no finished solution that uses it; and Citrix has nothing of the kind. (Also sketched after this list.)
4) VM placement rules - a mechanism that distributes VMs across a cluster so that certain VMs either never land on the same host or, on the contrary, never separate from each other. Citrix cannot pull off such tricks, VMware can in all paid editions where HA is available, and Hyper-V, as always - 0 rubles. (The anti-affinity part of the sketch below covers this.)
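The Hyper-V Replica sketch promised in point 2, with hypothetical VM and replica-server names; Kerberos over port 80 is just one of the supported authentication options.

# Point the VM at a replica server and ship the initial copy.
Enable-VMReplication -VMName "web01" -ReplicaServerName "hv-dr.contoso.local" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "web01"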
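And two sketches from the failover-clustering side covering points 3 and 4 - guest service monitoring and anti-affinity. The VM, service, and class names are assumptions; both rely on the FailoverClusters module of Windows Server 2012.

# Guest application monitoring: if the Print Spooler inside "web01"
# keeps failing, the cluster restarts the service and then the VM.
Add-ClusterVMMonitoredItem -VirtualMachine "web01" -Service "Spooler"

# Anti-affinity: cluster groups sharing a class name are kept on
# different hosts whenever possible.
$names = New-Object System.Collections.Specialized.StringCollection
$names.Add("WebTier") | Out-Null
(Get-ClusterGroup -Name "web01").AntiAffinityClassNames = $names
(Get-ClusterGroup -Name "web02").AntiAffinityClassNames = $names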
Otherwise, the platforms are more or less the same in terms of functionality, with the exception of ESXi, of course.
Other questions
During the IT Camps in Moscow and across Russia I talked with very different audiences, so I want to address a few more points right away.
1) "When will Hyper-V learn to pass USB dongles from the host through to a VM?" - Most likely never. There are architectural limitations in the hypervisor on this point, as well as security and service-availability considerations.
2) "VMware has Fault Tolerance technology - when will Hyper-V get something similar?" - The answer is similar to the previous one, but ... I personally consider this technology crippled - judge for yourself: you turned it on, but what will you do with a virtual machine that can have only 1 processor and no more than 4 GB of RAM? Put a domain controller there!? Holy, holy, holy - begone! There should always be at least two domain controllers anyway, and FT will not save you from a failure at the level of the guest OS or the guest application - if a BSOD happens, it happens on both VM instances. They have been promising to raise the processor and RAM limits for a good three releases in a row, if not more - and nothing has budged.
3) "VMware is better suited to Linux environments - how is Hyper-V doing there?" - Yes, indeed, VMware supports Linux guests much better - that is a fact. But Hyper-V does not stand still: both the number of supported distributions and the capabilities of the integration services keep growing.
Conclusion
And so, to sum up.
Functionally all the hypervisors are about the same, but I would single out VMware and Hyper-V as the two leaders. Choosing between them comes down to money and features, as well as your IT infrastructure - what do you run more of, Linux or Windows? With VMware, all the good features cost money; with Microsoft, they don't. If your environment is built on MS, then VMware will be an additional - and serious - cost for you: licenses + support + maintenance ...
If you run mostly Linux, then VMware probably makes sense here; if the environment is mixed, the question remains open.
PS> I heard somewhere that in the latest Gartner Magic Quadrant Hyper-V took the lead and overtook vSphere, but I could not find such a picture, so this time we go without Gartner - if anyone has info on this, please share. I did find a Gartner quadrant from June, but that is not the one ...
PPS> Well, I am ready for the infernal execution))) What to use and how is, as they say, up to you - and as for me, I await the incoming fire!
Sincerely,
Fireman,
George A. Gadzhiev
Information Infrastructure Expert
Microsoft Corporation