Real virtuality, or “640 kilobytes of memory is enough for everyone”

    With this article, I am starting a series on memory management in Windows Server 2008 R2 SP1 Hyper-V. SP1 brings two interesting Hyper-V-related innovations:
    • RemoteFX - brings full 3D acceleration to the virtual environment, so that demanding graphical applications and games can run inside a virtual machine and be used remotely, even from a thin client
    • Dynamic Memory - allows a higher density of virtual machines per host by allocating memory to virtual machines dynamically, on the fly

    This series of articles (there will be several, so stock up on beer ;) is about dynamic memory. That said, Dynamic Memory in Hyper-V itself will most likely be covered only in the last article of the series :)

    Why was server virtualization invented in the first place? As I already said in the article “Why do we need virtualization?”, one of the reasons is higher consolidation and more rational use of hardware resources. You can either dedicate a separate server to each separate task, in which case it will use at best half of its resources, or consolidate them all as virtual servers on one physical machine, using its resources at nearly 100% and eliminating the need to buy a pile of hardware.
    So what am I getting at? Many customers want to have their cake and eat it too: in more scientific terms, they want to pack as many virtual machines as possible onto one server without losing performance. Unfortunately, many of them go wrong at the very first step - determining the server's resource requirements. A simple example: ask any friend who works in the field how much memory he orders in a new server. Then follow up with specifics:
    • What if it is a web server?
      - An internal web server running a line-of-business application?
      - A public web server with hundreds / thousands / tens of thousands of hits per day?
    • What about a file server?
      - An internal file share for, say, one department of a dozen or two users?
      - A huge corporate file server on which several thousand employees store their documents?
    • How much memory does a domain controller need?
    • And a print server?
    • And <paste your server application here>?

    The questions can be refined further, which is why it is so hard to give a definite answer to “how much memory does the server need”. Estimating the required system resources is called “sizing” and has grown into a whole science. Not surprisingly, in practice people most often follow one of these three approaches:
    • Put 2 (4, 8, 16, 32, and so on) GB of memory into every server, and if users start complaining about slowness - add more
    • Look up the minimum system requirements of the OS and applications and add a “margin” of 25% (50%, 100%, and so on). It does not matter how that memory is actually used, as long as users do not complain.
    • Do what the vendor recommends. If the datasheet says the application needs 4 GB of memory for 100 users - put in 4 GB plus a little extra. Proper calculation and testing are long and tedious, and as a rule there is neither time nor desire for them.

    Of course, with any of these approaches we end up with a not quite (or not at all) optimal solution. Either we buy noticeably more memory than necessary and waste money, or we buy less than necessary and run into performance problems. It would be far more interesting if memory could be distributed flexibly, depending on the load, and thus used as efficiently as possible.

    Memory overcommitment


    Since we are talking about memory and virtualization, we cannot avoid a phrase that regularly sparks heated debate: memory overcommitment. In plain terms, it means allocating more resources (in our case, memory) to virtual machines than physically exist. Roughly speaking, giving three virtual machines 1 GB of memory each when the server physically has only 2.5 GB. The main reason the phrase “memory overcommitment” itself causes such fierce fights between supporters of different virtualization platforms is that Microsoft does not offer such a technology: if a server has 3 GB of memory, then no more than 3 GB can be handed out to its virtual machines. This is widely considered a drawback, because it rules out so-called overselling.
    What is overselling? A prime example is Internet service providers. They sell Internet access at speeds of “up to 8 Mbit/s”, while in reality the speed will most likely hover around 4-6 Mbit/s, because the provider's uplink is somewhat narrower than 8 Mbit/s multiplied by the number of its subscribers. Nevertheless, at the right phase of the moon, at midnight, when Mercury is in Aquarius and nobody is watching their favorite videos online, the speed may well reach the promised 8 Mbit/s. If you want exactly 8 Mbit/s and not a single baud less, there are tariffs with guaranteed bandwidth - at, unsurprisingly, a completely different price. So with Hyper-V there is no overselling: each virtual machine is allocated a fixed amount of RAM, which, much like the guaranteed-bandwidth tariff, it is guaranteed to receive.
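    To make the arithmetic explicit, here is a tiny, purely illustrative Python sketch (the VM sizes and host memory are simply the numbers from the example above, not a real configuration): committing 3 × 1 GB on a host with 2.5 GB of RAM yields an overcommitment ratio of 1.2.

        # Illustrative only: numbers taken from the example above
        vm_memory_gb = [1.0, 1.0, 1.0]   # memory promised to each virtual machine
        host_physical_gb = 2.5           # physical RAM installed in the host

        committed_gb = sum(vm_memory_gb)
        overcommit_ratio = committed_gb / host_physical_gb

        print(f"Committed to VMs  : {committed_gb:.1f} GB")
        print(f"Physically present: {host_physical_gb:.1f} GB")
        print(f"Overcommit ratio  : {overcommit_ratio:.2f}")  # > 1.0 means overcommitted

        # In the fixed-allocation model described above, the sum of the VMs' memory
        # may not exceed host_physical_gb, so this ratio never goes above 1.0.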
    But we are (I hope) engineers and hopeless materialists, so phases of the moon and other bio-energetics are of no interest to us. Let's see what actually lies behind the phrase “memory overcommitment”.
    Oddly enough, there is no single answer: there are as many as three technologies that can be described by these words. They are:
    • Page sharing
    • Second level paging
    • Dynamic memory balancing (a.k.a. ballooning)

    The next article is devoted to page sharing: it examines in detail how memory pages are shared between virtual machines, along with the pros and cons of this approach. You can read it here. In future articles I will try to cover the other technologies listed above, as well as the Dynamic Memory feature in Windows Server 2008 R2 SP1 and why it does not really fit the term “memory overcommitment”.
