Virtualization: recommendations from leading dog breeders
Before you build an infrastructure on top of virtualization, and especially before you put it into production, you need to make sure that system resources are used as efficiently as possible and that performance is as high as it can be. In this series of articles I will give recommendations on how to tune the system for performance, both on the host side and on the side of the virtual machines.
Let's start with the host
Servers that host virtual machines often run at or near peak load, so their performance is critical to the performance of the entire system. The potential bottlenecks are:
- CPU
- Memory
- Disk subsystem
- Network subsystem
Here I will show how to identify bottlenecks in each of these four areas, how to deal with them, and, most importantly, how to avoid them.
The processor is the heart of the computer
The "heart" of any computer is the processor, and in the context of virtualization choosing the right processor becomes even more important. The processor is usually one of the most expensive parts of a server: choosing one that is too powerful leads to unnecessary costs, not only for the processor itself but also, down the road, for electricity and cooling. If the processor is not powerful enough, the system will not be able to deliver the required performance, which may mean buying a new processor and, once again, extra costs.
We need answers to the following key questions:
- How many processors (sockets) should we install?
- How many cores do we need?
- What clock speeds do we need?
Answering these questions is not as easy as it seems. A simple example: should you use dual-processor or four-processor systems? On price alone, dual-processor systems win outright: one four-processor server costs roughly as much as three dual-processor servers. It would seem that the best solution is to buy three dual-processor servers and combine them into a failover cluster, getting a solution that is both faster and more fault tolerant. On the other hand, this approach brings a number of new costs:
- More software licenses are required - both for the OS itself and for management software (SCVMM, SCCM, SCOM, etc.)
- Administration costs increase - three servers instead of one
- Three servers consume more power, which means they generate more heat and take up more rack space than a single server, albeit a more powerful one.
Once all of this is added up, it may turn out that a single four-processor server is the better choice: it costs a little more up front and is less fault tolerant, but with all the overhead taken into account it may still end up cheaper.
However, overall system performance may depend not only, and not even primarily, on the processors. Take a DBMS, for example. In some cases its CPU requirements are modest, but the disk subsystem is used very heavily. If, on the other hand, the DBMS runs a lot of business logic and analytics (OLAP, reports), the demands on the processor and memory can be much higher than on the disk subsystem.
To determine whether the processor is a bottleneck, you need to find out how busy it is. Various system utilities can be used for this. Many system administrators, for example, are used to the standard Windows Task Manager. Unfortunately, because of the Hyper-V architecture, Task Manager will show you neither the weather in Honduras nor the Zimbabwean dollar exchange rate, but only the processor load of the host OS. Virtual machines are not taken into account, because the host OS, just like every virtual machine, runs in its own isolated partition. Therefore, you need to use the Perfmon snap-in. Many administrators, especially those who have passed the MCSA exams, know this utility. For those who do not, it is started easily enough: Start - Administrative Tools - Reliability and Performance. In this snap-in we need the Monitoring Tools - Performance Monitor branch.
This utility lets you view the values of almost any system parameter and watch how they change on a graph. By default only one counter is added (Perfmon calls every such parameter a "counter"): "% Processor Time". It shows the same thing as Task Manager, the processor load of the host OS, so it can simply be removed.
Let's move on to adding counters. Perfmon has many counters related to Hyper-V. Of these, we are currently interested in two:
- Hyper-V Hypervisor Virtual Processor, % Total Run Time - shows the load created by virtual processors. You can display the total load of all virtual processors of the running virtual machines, or select a specific virtual processor of a specific virtual machine.
- Hyper-V Hypervisor Root Virtual Processor, % Total Run Time - shows the load created by the root (parent) partition, that is, by tasks not related to the virtual machines.
Note: what is a logical processor? It is easiest to explain with examples. If you have one single-core processor, you have one logical processor. If the processor is dual-core, there are two logical processors. And if that dual-core processor also supports Hyper-Threading, there are four.
These two counters give a real picture of the host's processor load. Their values are percentages: the closer they are to 100%, the more heavily the processors are loaded, and the more reason there is to consider adding processors or buying more powerful ones.
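The same counters can also be sampled from the command line. A minimal sketch, assuming the standard Get-Counter cmdlet (PowerShell 2.0 and later) is run on the Hyper-V host; counter instance names may differ slightly on your system:

```powershell
# Sample the Hyper-V CPU counters every 5 seconds, 12 times (one minute total)
$cpuCounters = @(
    '\Hyper-V Hypervisor Virtual Processor(_Total)\% Total Run Time',      # load from all virtual machines
    '\Hyper-V Hypervisor Root Virtual Processor(_Total)\% Total Run Time'  # load from the parent partition
)
Get-Counter -Counter $cpuCounters -SampleInterval 5 -MaxSamples 12 |
    ForEach-Object { $_.CounterSamples | Select-Object Path, CookedValue }
```

If both values stay close to 100% over a long observation window, the host is CPU-bound.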
You can never have too much memory
A powerful processor is all well and good, but when memory runs short the system starts paging, and performance drops almost exponentially. As they say on the Internet, "512 megabytes is not memory, it is insanity."
Unfortunately (or, more likely, fortunately), Hyper-V does not let you allocate more memory to virtual machines than is physically present in the system. This is the famous "memory overcommit" that the marketing departments of other virtualization vendors play up so happily. Whether that is good or bad is a topic for a separate article, and plenty of (virtual) lances have already been broken over it.
So the question arises: how much memory do we ultimately need? The answer depends on several factors (a rough sizing sketch follows the list):
- How many virtual machines will be running, and how much memory will they need? The amount of memory required by each virtual machine depends on the tasks that it will perform. The approach is the same as for ordinary servers, but memory can be allocated to virtual machines more flexibly - not 1024 MB, but, for example, 900 MB.
- The host OS also needs memory. It is recommended to leave at least 512 MB of free memory for the hypervisor and the host OS itself. If the amount of free memory drops below 32 MB, the system will not let you start any more virtual machines until memory is freed. In addition, tasks other than virtualization may be running in the host OS. Although this is strongly discouraged, it does happen in practice and must be taken into account.
- Other virtual machines (for Live Migration scenarios). If the infrastructure is built on a failover cluster, additional memory must be provided on each host. Virtual machines can move from one host to another, either during a manual migration (Live Migration) or when one of the hosts fails. If there is not enough memory on a host to run the incoming virtual machines, they simply will not start on it. Therefore, at the design stage you need to set aside an "untouchable reserve" of 50-100% of the memory required by the virtual machines. The situation may improve somewhat with the release of Windows Server 2008 R2 SP1, which introduces dynamic memory allocation, but I will only be able to say for sure once I have tested it myself.
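Purely as an illustration (all numbers below are hypothetical), a rough memory budget for a host can be sketched like this:

```powershell
# Hypothetical sizing example: 8 VMs at 2 GB each, 2 GB reserved for the host OS,
# plus a 50% "untouchable reserve" for VMs that may fail over from another node
$vmCount        = 8
$memoryPerVmGB  = 2
$hostReserveGB  = 2
$failoverFactor = 0.5

$vmMemoryGB    = $vmCount * $memoryPerVmGB
$totalNeededGB = $vmMemoryGB * (1 + $failoverFactor) + $hostReserveGB
"Recommended host RAM: $totalNeededGB GB"   # 8 * 2 * 1.5 + 2 = 26 GB
```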
How do we see what is happening with memory? Fortunately, here the familiar Task Manager can be trusted: unlike the processor load, it shows memory usage quite truthfully. Or you can (and should) turn to the already familiar Perfmon and its Memory\Available MBytes and Memory\Pages/sec counters.
Hard drives: how many do we need?
As a rule, it is quite hard to predict how much disk space the virtual machines will need. That is why situations where there is not enough disk space, or the opposite, where there is far too much of it and the disks sit idle, are quite common.
Besides capacity, there is another very important characteristic: the performance of the disk subsystem. 2 TB of disk space is certainly nice, but if it consists of two SATA disks that are not combined into a RAID array, the throughput may simply not be enough, and that will hurt overall system performance badly.
Storage subsystem planning includes the following aspects:
Controllers. Hard disk controllers can have different bus widths, different cache sizes, and in general their performance can vary greatly. Some controllers are fully "hardware", that is, they process all requests themselves; others are "semi-software", where part of the request processing is done by the computer's own CPU. The performance of the disk subsystem depends first of all on the controller, so the controller must be chosen carefully.
Type of drives. Besides capacity, hard drives have many other characteristics that should not be forgotten: the interface type (IDE, SATA, SCSI, SAS), the spindle speed (7,200, 10,000 or 15,000 rpm), and the size of the drive's own cache. For heavily loaded systems such as virtualization hosts, the difference between a 7,200 and a 10,000 rpm drive (let alone 15,000 rpm), or between 8 and 32 MB of cache, is quite noticeable.
The number of disks and the type of RAID array. As already mentioned, to achieve higher performance and reliability the best solution is often not a single large disk but several smaller disks combined into a RAID array. There are several RAID levels (a usable-capacity comparison is sketched after the list):
- RAID 0 is a striping array. Information is written in blocks ("stripe") simultaneously on several disks. Thanks to this, reading and writing of large volumes of information occurs much faster than from a single disk, and the faster, the more disks in the array. But there is one big drawback: low reliability. Failure of any of the drives will lead to a complete loss of information. Therefore, in practice, RAID 0 is rarely used. One example is the intermediate backup storage in the Disk-to-disk-to-tape model, where reliability is not as important as performance.
- RAID 1 - “mirroring”. With this model, information is recorded simultaneously on several disks, and the contents of all disks are absolutely identical. The speed of writing and reading is not higher than for a single disk, but reliability is much higher: failure of one disk will not lead to loss of information. There is only one drawback: high cost - where one disk is enough - you have to put two or more. It makes sense in cases where reliability is critical.
- RAID 4 and RAID 5 - "striping with parity". A kind of middle ground between RAID 0 and RAID 1. As in RAID 0, data is striped across the disks in blocks, but in addition checksums of the stored data are calculated. If one of the disks fails, the missing data is reconstructed automatically from the remaining data and the checksums. This reduces performance, of course, but no data is lost, and once the failed disk is replaced all information is restored (this process is called an array rebuild). Data is lost only if two or more drives fail. A distinctive feature of such arrays is that write speed is much lower than read speed, because every time a data block is written the checksum has to be recalculated and written to disk as well. RAID 4 and RAID 5 differ in that RAID 4 writes the checksums to a dedicated disk, while RAID 5 spreads them across all disks of the array together with the data. In either case, such an array requires N disks for data plus one more disk, unlike RAID 1 and RAID 10, where the number of disks simply doubles.
- RAID 6 - also known as RAID DP (double parity). The same as RAID 5, but the checksums are calculated twice, using different algorithms. It requires N + 2 disks instead of the N + 1 of RAID 5, but such an array survives even the simultaneous failure of two disks. It is relatively rare and is found mostly in enterprise-class storage systems, for example from NetApp.
- RAID 10 - a "hybrid" of RAID 0 and RAID 1: either a stripe (RAID 0) built over several mirrors, known as RAID 1+0 or simply RAID 10, or, conversely, a mirror built over several stripes (RAID 0+1). It offers the highest performance for both reads and writes, but it is also the most expensive option, since twice as many disks are needed as are required for the data itself.
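As a purely illustrative sketch (8 disks of 600 GB each are assumed; your disk counts and sizes will differ), the usable capacity of the different RAID levels can be compared like this:

```powershell
# Hypothetical example: usable capacity of an array built from N identical disks
$n      = 8      # number of disks
$sizeGB = 600    # size of each disk, GB

$usable = @{
    'RAID 0'  = $n * $sizeGB          # full capacity, no redundancy
    'RAID 1'  = $sizeGB               # a single mirror set: the capacity of one disk
    'RAID 5'  = ($n - 1) * $sizeGB    # one disk's worth of parity
    'RAID 6'  = ($n - 2) * $sizeGB    # two disks' worth of parity
    'RAID 10' = ($n / 2) * $sizeGB    # half of the disks hold mirror copies
}
$usable.GetEnumerator() | Sort-Object Name |
    ForEach-Object { "{0,-8}: {1,5} GB usable" -f $_.Name, $_.Value }
```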
As you can see, choosing disks is not a trivial task: you have to base the choice not only on the required disk space, but also on performance requirements and, of course, on the available budget. Sometimes an external storage system is the more justified choice, for example when the required capacity and/or performance simply cannot be reached with internal drives. And when a highly fault-tolerant infrastructure is planned, there is no getting away from external storage at all. External storage systems should be selected on the same principles as internal drives: interface bandwidth, number and type of disks, supported RAID levels, and extra features such as resizing virtual disks (LUNs) on the fly, and so on.
What about measurements? There are several counters related to disk subsystem performance. The following are of interest:
- Physical Disk,% Disk Read Time
- Physical Disk,% Disk Write Time
- Physical Disk,% Idle Time
The first two counters show what percentage of time the disk spends reading and writing, and the third shows the percentage of time it sits idle. If the read or write time stays above 75% for long periods, the disk subsystem does not have enough performance headroom.
In addition, there are two more counters:
- Physical Disk, Avg. Disk Read Queue Length
- Physical Disk, Avg. Disk Write Queue Length
These two counters show the average disk queue length for reads and writes respectively. High values (above 2) for short periods ("spikes") are quite acceptable and are typical, for example, of DBMS or MS Exchange servers, but sustained high values indicate that the disk subsystem is probably the bottleneck.
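As with the CPU counters, these can be sampled from PowerShell. A minimal sketch, again assuming the standard Get-Counter cmdlet; the (*) wildcard covers all physical disk instances on the host:

```powershell
# Sample the physical disk counters every 10 seconds, 30 times (five minutes total)
$diskCounters = @(
    '\PhysicalDisk(*)\% Disk Read Time',
    '\PhysicalDisk(*)\% Disk Write Time',
    '\PhysicalDisk(*)\% Idle Time',
    '\PhysicalDisk(*)\Avg. Disk Read Queue Length',
    '\PhysicalDisk(*)\Avg. Disk Write Queue Length'
)
Get-Counter -Counter $diskCounters -SampleInterval 10 -MaxSamples 30 |
    ForEach-Object { $_.CounterSamples | Select-Object Path, CookedValue }
```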
Network subsystem
The network subsystem becomes a bottleneck much less often than the processor, memory or disks, but it should not be forgotten either.
As with all the other components, there are several questions that should be answered at the planning stage:
- How many virtual machines will be running at the same time, and what will be the load on the network?
- What is the network bandwidth?
- Are iSCSI storage systems used?
- Does the server have remote management hardware independent of the installed OS (for example, HP iLO or Dell DRAC)?
Depending on the answers, different network configurations are possible. Suppose we have just one server with exactly four network interfaces and only three virtual machines running on it. The server has no out-of-band management controller, which means that if something goes badly wrong, someone will have to run to the server room (which is at the other end of town).
Host level
For servers that have no remote management hardware, it is recommended to leave one of the network interfaces out of all virtual networks and dedicate it to management. This greatly reduces the risk that remote control of the server is lost because of excessive utilization or an incorrect configuration of the network interface. You can do this either when installing the Hyper-V role, by unchecking one of the network interfaces, or after installation, by removing the virtual network bound to the interface that will be used for management.
In addition, at the host level you should simply install the latest network adapter drivers. This is needed to take advantage of the adapters' special features - VLANs, teaming, TCP offloading, VMQ - provided the adapters themselves support them (as a rule, these are specialized server network adapters).
Network loads
Suppose our three virtual machines have been running for some time, and traffic analysis has shown that two of them do not load the network interface much, while the third generates very large volumes of traffic. The best solution is to let the traffic-heavy virtual machine reach the outside world through a separate network interface. To do this, you can create two virtual networks of the External type: one for the virtual machines that barely load the network, and a separate one for the third virtual machine.
In addition, you can create an External virtual network without creating a virtual network adapter in the parent partition. This is done with scripts; I will not go into the details and will just give a link: blogs.msdn.com/b/robertvi/archive/2008/08/27/howto-create-a-virtual-swich-for-external-without-creating-a-virtual-nic-on-the-root.aspx
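On Windows Server 2008 R2 this is exactly what the WMI script at the link above does. For reference, in later Windows Server versions that ship the Hyper-V PowerShell module, the same layout could be sketched roughly like this (the adapter and switch names are hypothetical):

```powershell
# Requires the Hyper-V PowerShell module (Windows Server 2012 and later); names are examples only
# External switch for the "quiet" virtual machines, keeping a virtual NIC in the parent partition
New-VMSwitch -Name 'External-Shared'  -NetAdapterName 'NIC2' -AllowManagementOS $true

# Dedicated external switch for the traffic-heavy VM, with no virtual NIC in the parent partition
New-VMSwitch -Name 'External-HeavyVM' -NetAdapterName 'NIC3' -AllowManagementOS $false
```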
iSCSI
If you plan to use storage systems with an iSCSI interface, it is highly recommended to dedicate a separate network interface to iSCSI, or even two of them for MPIO. If the LUNs will be mounted in the host OS, simply leave one or two interfaces that are not bound to any virtual network. If iSCSI initiators will run inside the virtual machines, create one or two separate virtual networks for them that will be used exclusively for iSCSI traffic.
VLAN tagging
VLAN tagging (IEEE 802.1q) means "marking" network packets with a special tag, which allows a packet to be associated with a specific virtual network (VLAN). Hosts belonging to different VLANs end up in different broadcast domains even though they are physically connected to the same equipment. Virtual network adapters in Hyper-V also support VLAN tagging: open the properties of the virtual adapter in the virtual machine settings and specify the corresponding VLAN ID there.
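On Windows Server 2008 R2 the VLAN ID is set in the GUI as described above. For reference, with the Hyper-V PowerShell module of later Windows Server versions the same setting might look roughly like this (the VM name and VLAN ID are hypothetical):

```powershell
# Requires the Hyper-V PowerShell module (Windows Server 2012 and later); names are examples only
# Put the virtual network adapter of VM "web01" into VLAN 20 (access mode)
Set-VMNetworkAdapterVlan -VMName 'web01' -Access -VlanId 20

# Verify the result
Get-VMNetworkAdapterVlan -VMName 'web01'
```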
Active equipment
So far we have talked about network interfaces and virtual network adapters within a single host. But the bandwidth of the active network equipment, for example the switches our hosts connect to, must also be taken into account. A simple example: with an 8-port 1 Gbps switch whose ports are all fully utilized, a 1 Gbps uplink physically cannot carry that volume of traffic, which leads to performance degradation. This is especially important with iSCSI, where the load can be high and packet latency is critical for performance. Therefore, when using iSCSI it is highly recommended to run iSCSI traffic through separate switches.
Recommendations for the host OS
Now let's move on to recommendations for the host OS. As you know, Windows Server 2008 R2 can be installed in two modes: Full and Server Core. From the hypervisor's point of view these modes are no different. Although Server Core may look more complicated at first glance (especially to inexperienced administrators), it is the recommended mode. Installing the OS in Server Core mode has the following advantages over a full installation (a sketch of enabling the Hyper-V role on Server Core follows the list):
- Fewer updates
- Smaller attack surface for potential attackers
- Less CPU and memory load in the parent partition
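For reference, a minimal sketch of enabling the Hyper-V role on a Server Core installation from the command line (a reboot is required for the hypervisor to load):

```powershell
# Enable the Hyper-V role on a Server Core installation of Windows Server 2008 R2
Dism.exe /Online /Enable-Feature /FeatureName:Microsoft-Hyper-V

# Reboot so the hypervisor starts
shutdown.exe /r /t 0
```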
Running other applications on the host OS
Running third-party applications (unrelated to Hyper-V) in the host OS, as well as installing other server roles alongside Hyper-V, can lead to a significant drop in performance and in stability. Because of the Hyper-V architecture, all interaction between virtual machines and devices goes through the parent partition, so heavy load or a blue-screen crash in the parent partition will inevitably lead to degraded performance, or simply to the crash, of all running virtual machines. Antivirus software belongs in this category too. Whether it is needed at all on a host that does nothing but virtualization is, of course, a separate question. Nevertheless, if an antivirus is installed, the first thing to do is exclude from scanning all folders that may contain virtual machine files. Otherwise scanning can slow the system down, and if something resembling a virus is found inside a VHD file, the antivirus may corrupt the VHD itself while trying to clean it. Similar cases have been seen with MS Exchange databases, which is why the first recommendation for Exchange servers is not to install file-level antivirus at all, and if it is installed, to add the database folders to the exclusions.
Recommendations for virtual machines
The steps you need to take to improve the performance of the virtual machines themselves depend on the applications that will run on them. Microsoft has best practices for each application — Exchange, SQL Server, IIS, and others. Similar recommendations exist for the software of other vendors. Here I will give only general recommendations that are independent of specific software.
I will explain why Integration Services must be installed in the guest OS, how to simplify the deployment of new virtual machines using a library of VHDs, and how to keep those VHDs up to date as new patches are released.
Integration Services
Integration services are a set of drivers working inside a guest OS. They must be installed immediately after installing the OS. At the moment, the list of supported OSs is as follows:
- Windows 2000 Server SP4
- Windows Server 2003 SP2
- Windows Server 2008
- Windows XP SP2, SP3
- Windows Vista SP1
- SUSE Linux Enterprise Server 10 SP3 / 11
- Red Hat Enterprise Linux 5.2 - 5.5
Windows 7 and Windows Server 2008 R2 contain integration services in the installation package, so they do not need to be installed additionally on these OSs.
The installation of integration services allows the use of synthetic devices that have higher performance compared to emulated ones. Read more about the difference between emulated and synthetic devices in my article on Hyper-V architecture.
Here is a list of drivers included with Integration Services:
- IDE controller - replaces the emulated IDE controller, which increases the speed of access to disks
- SCSI controller - a fully synthetic device that requires integration services to be installed. Up to 64 disks can be attached to each SCSI controller, and each virtual machine can have up to 4 such controllers.
- Network adapter - has better performance than the emulated (Legacy Network Adapter), and supports special functions, such as VMQ.
- Video and mouse - enhance the convenience of managing a virtual machine through its console.
In addition to the listed drivers, the following functions are supported when installing integration services:
- Operating System Shutdown - the ability to correctly shut down the guest OS without a login to it. Similar to pressing the Power button on the ATX chassis.
- Time Synchronization - as the name implies - synchronization of system time between the host and guest OS.
- Data Exchange - exchange of registry keys between the guest and host OS. This lets the guest OS determine, for example, the name of the host it is running on (see the sketch after this list). This feature is available only for guest operating systems of the MS Windows family.
- Heartbeat - a service that periodically sends signals indicating that the virtual machine is healthy. If the guest OS hangs for some reason, it stops sending heartbeats, and this can be used as a trigger, for example, for an automatic restart.
- Online Backup - is a VSS Writer, which allows you to get a consistent backup of virtual machine data at any time. When starting a backup through VSS, applications running on a virtual machine automatically flush data to disk, and therefore the backup is consistent.
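As an illustration of Data Exchange, here is a small sketch of reading the host name from inside a Windows guest. The registry path is the one used by the integration services' key-value pair (KVP) exchange; treat the exact value names as an assumption that may differ between versions:

```powershell
# Run inside a Windows guest with integration services installed
$kvpPath = 'HKLM:\SOFTWARE\Microsoft\Virtual Machine\Guest\Parameters'
(Get-ItemProperty -Path $kvpPath).HostName             # name of the physical Hyper-V host
(Get-ItemProperty -Path $kvpPath).VirtualMachineName   # name of this VM as seen by the host
```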
To install integration services on Windows, select Action - Insert Integration Services Setup Disk in the virtual machine connection window. An ISO image with the installation files is automatically mounted in the virtual machine and the installation starts. If Autorun is disabled in the guest OS, the installation will have to be started manually.
Integration components for Linux are not included in the Windows Server distribution - they must be downloaded from the Microsoft website.
Sysprep: creating a master image
If your infrastructure is large enough that you frequently have to create new virtual machines and install operating systems on them, a set of ready-made "master images" of virtual hard disks can save a lot of time. Such a master image, stored as a VHD file, can be copied, and a new virtual machine can then be created using the copy as its hard disk. The new machine will already have the OS and the necessary set of software installed (in particular, the integration services).
To create such a master image, you must:
- Create a new virtual machine
- Install the OS, integration services, all available system updates and additional software, if necessary
- Prepare the installed OS with the Sysprep utility, which removes the user information, the product key and the unique identifier (SID); see the sketch after this list.
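A minimal sketch of the Sysprep step, assuming the default Windows installation path inside the virtual machine:

```powershell
# Generalize the installation and shut the VM down so the VHD can be kept as a master image
C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown
```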
On the first boot of a virtual machine created from this image, a procedure called "mini-setup" starts, prompting you to enter the computer name, the administrator password and a few other details again.
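Deploying a new machine from the master image then comes down to copying the VHD and creating a virtual machine around it. On Windows Server 2008 R2 this is done in Hyper-V Manager or via WMI; with the Hyper-V PowerShell module of later Windows Server versions the same steps could be sketched roughly as follows (all paths and names are hypothetical):

```powershell
# Requires the Hyper-V PowerShell module (Windows Server 2012 and later); paths and names are examples only
Copy-Item 'D:\Library\master-ws2008r2.vhd' 'D:\VMs\app01\app01.vhd'
New-VM -Name 'app01' -MemoryStartupBytes 2GB -VHDPath 'D:\VMs\app01\app01.vhd' -SwitchName 'External-Shared'
Start-VM -Name 'app01'
```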
Offline update installation
We have created a master image, and it will be kept around for a long time. All would be well, but there is one small problem: system updates are released regularly, and when you deploy a virtual machine from the master image you will have to install every update released since the image was created. If the image was made, say, a year or two ago, the volume of updates can be considerable. Moreover, as soon as it is connected to the network, an OS without the latest updates is exposed to all kinds of security risks, including viruses. There is an excellent tool that installs updates directly into the master images of virtual machines: the Offline Virtual Machine Servicing Tool. To use it you must deploy System Center Virtual Machine Manager (SCVMM) and have a WSUS or SCCM server from which the updates will actually be pulled. It works as follows:
- The virtual machine is deployed on a special host selected using SCVMM - the so-called maintenance host.
- The virtual machine starts and all necessary updates are installed on it.
- The virtual machine stops and the .vhd file is returned to the library with the installed updates.
The Offline Virtual Machine Servicing Tool is free. To learn more about it and download it, visit the official website: www.microsoft.com/solutionaccelerators.
Conclusion
I have given some recommendations for tuning hosts and the virtual machines themselves to achieve an optimal level of performance. I hope this information proves useful to someone.