The mythical disk slowness of Xen

    Often, when various approaches to virtualization are being discussed, Virtuozzo supporters (usually OpenVZ hosters) recall a statement they once heard somewhere, along the lines of "Xen is slow when working with the disk". This fallacy is rooted in the radically different disk caching mechanisms of Xen virtual machines and Virtuozzo containers, which make the performance characteristics of the disk subsystem very different under different conditions. But once a delusion settles in someone's mind, it stays there firmly and for a long time.

    To close the topic of "Xen disk slowness" and to show with numbers that there is no slowdown, here are the results of unixbench, bonnie++ and archiving the Linux kernel sources, all run on the same machine and on the same disk partition.


    Processor: Intel Core 2 Quad Q6600 @ 2.40GHz. The drive is an ordinary SATA Samsung.

    Native - a measurement on the physical machine: 1 CPU, 256 MB RAM. Kernel: 2.6.18-164.6.1.el5
    Xen PV - a measurement in a Xen virtual machine in paravirtualized mode: 1 CPU, 256 MB RAM. DomU kernel: 2.6.18-164.el5xen. Dom0 kernel: 2.6.18-164.el5xen. The disk partition is passed to the virtual machine as phy (see the config sketch below).
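
    For reference, the phy: export corresponds to a single line in the domU configuration. A minimal sketch of checking it from dom0, assuming the xm toolstack of that era; the config file name and the partition path are assumptions, the article only states that the same partition used for the native run is exported with the phy: backend:

    # config file name and partition path are assumptions
    $ grep disk /etc/xen/CentOS_5_4
    disk = [ 'phy:/dev/sda3,xvda,w' ]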

    Unixbench


    A very synthetic test, especially where the disk is concerned, but it is often brought up in arguments. Here is just the disk-related part of the output:
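
    A sketch of how just the disk-related part of UnixBench can be run; the working directory is an assumption, and the test names (which map to the three "File Copy" lines below) may differ slightly between UnixBench versions:

    $ cd unixbench
    # fstime, fsbuffer and fsdisk are the File Copy 1024/256/4096 tests
    $ ./Run fstime fsbuffer fsdisk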

    Native

    File Copy 1024 bufsize 2000 maxblocks 3960.0 529094.5 1336.1
    File Copy 256 bufsize 500 maxblocks 1655.0 153098.5 925.1
    File Copy 4096 bufsize 8000 maxblocks 5800.0 1208281.0 2083.2


    Xen PV

    File Copy 1024 bufsize 2000 maxblocks 3960.0 542862.3 1370.9
    File Copy 256 bufsize 500 maxblocks 1655.0 153684.5 928.6
    File Copy 4096 bufsize 8000 maxblocks 5800.0 1212533.2 2090.6


    The numbers to compare are the throughput and index figures at the end of each line: the higher, the better. On the physical hardware and in the virtual machine they are almost equal, and in the virtual machine they are even slightly higher. This may look like a violation of the law of conservation of energy, but the explanation is simple: a small part of the load (about one percent) is taken on by the I/O subsystem, which lives outside the virtual machine, in dom0, and runs on another core.
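
    Anyone who wants to verify or control that placement can do so with the xm toolstack shipped with these el5 Xen kernels; a sketch, where the guest name is taken from the bonnie++ output below and the core numbers are arbitrary examples:

    # show the current vCPU-to-core placement for all domains
    $ xm vcpu-list
    # pin dom0 and the guest to different cores
    $ xm vcpu-pin Domain-0 0 0
    $ xm vcpu-pin CentOS_5_4 0 1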

    bonnie++
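
    For reference, an invocation that would produce output of the shape shown below; the target directory and the unprivileged user are assumptions, while -s 1g and -n 256 match the Size and files columns in the results:

    # run against the same partition; -u is required when started as root
    $ bonnie++ -d /mnt/test -s 1g -n 256 -u nobody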


    Native

    Version 1.96 ------Sequential Output------ --Sequential Input- --Random-
    Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
    dev.home 1G 575 99 64203 13 29238 5 1726 96 68316 6 144.5 1
    Latency 14923us 1197ms 483ms 60674us 16858us 541ms
    Version 1.96 ------Sequential Create------ --------Random Create--------
    dev.home -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
    files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
    256 47219 67 304464 100 23795 31 51813 73 378017 100 6970 9
    Latency 575ms 846us 673ms 416ms 22us 1408ms


    Xen PV

    Version 1.96 ------Sequential Output------ --Sequential Input- --Random-
    Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
    CentOS_5_4 1G 815 99 65675 4 29532 0 1739 92 68298 0 134.1 0
    Latency 10028us 200ms 242ms 122ms 15356us 627ms
    Version 1.96 ------Sequential Create------ --------Random Create--------
    CentOS_5_4 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
    files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
    256 53015 61 325596 99 25157 23 58020 68 404162 99 6050 5
    Latency 605ms 771us 686ms 406ms 49us 2121ms


    A more comprehensive assessment, and again a somewhat odd result: in some of the tests Xen PV is once more the faster one.

    Archiving

    Finally, the result of an ordinary, real-world task. Archiving* the Linux kernel source tree is a read-intensive disk workload: about 320 MB in total, almost 24 thousand files. Before each run the disk cache was dropped via vm.drop_caches. The time difference is slightly under 7%, a perfectly normal virtualization overhead. This is the kind of performance loss that applies to most patterns of disk usage: if your task is disk-bound, plus or minus 7% will not change the picture significantly.

    * cpio is used instead of tar because tar is clever enough to notice that its output goes to /dev/null, silently switch to a dry run, and not archive anything at all.
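
    The cache drop mentioned above is the standard mechanism available since kernel 2.6.16; a sketch of what would be run before each measurement (the exact value written to drop_caches is an assumption):

    # flush dirty pages, then drop the page cache, dentries and inodes
    $ sync
    $ echo 3 > /proc/sys/vm/drop_caches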

    Native
    $ time (find linux-2.6.26 | cpio -o > /dev/null)
    530862 blocks

    real 0m30.247s
    user 0m0.605s
    sys 0m2.411s


    Xen PV
    $ time (find linux-2.6.26 | cpio -o > /dev/null)
    530862 blocks

    real 0m32.396s
    user 0m0.052s
    sys 0m0.120s




