Which virtualization system is better?

    Good afternoon.

    If you have experience running virtual machines on different systems, please advise:
    Which open-source virtualization system should we install on a server, given that the guest systems will mostly be doing routing?


    At our company we currently have a zoo of servers (ordinary desktop PCs) handling everything from packet routing to FTP. All of this consumes electricity, needs cooling, and needs parts replaced as they age.
    Putting a new server into operation, for example to replace an old one, means buying hardware, installing it, and configuring it.
    Then both servers have to stay powered on for a while, so that if any problem shows up on the new one (something not tuned yet, an unexpected glitch), everything can be switched back to the old one quickly.

    The obvious solution is to pull all these iron monsters into a single machine.

    Since most of our "servers" lag far behind a single modern Core 2 Duo or Core 2 Quad machine in performance, we estimated that a computer based on a Core 2 Duo E6400 @ 3 GHz with 4 GB of RAM could host at least 4 real machines.

    The most important servers were chosen as candidates for migration: a VPN server (about 500 simultaneous sessions), a couple of software Linux routers, a RADIUS server, and a couple of servers serving the admin web interface.

    Then the selection of a virtualization system began. The candidates were:
    - OpenVZ
    - KVM
    - Xen
    - VMware ESXi

    The selection went as follows:
    - OpenVZ. We already had experience with it. It does not really isolate systems from each other; all containers share the host kernel. In addition, it does not let you create new network interfaces inside a container, which means the VPN server will not run on it. Still fine for web hosting; for routers, no.

    - VMware ESXi could not be run on any of the computers available to us. The installer either simply refused to start, or the system flatly would not boot after installation.

    - Xen was dropped because with paravirtualization every guest machine needs a Xen-aware kernel matching the host. Moreover, I could not get it to work at all; my clumsy hands may be to blame. So we moved on to the last candidate...

    - KVM does not care what guest OS runs inside the host machine: Windows, even OS/2. The machines are, in effect, completely isolated from each other. It also won us over that Red Hat has bet on this very system and recommends it for enterprise use. It matched all our requirements.

    We installed KVM, quickly figured out how to install guest systems, and set up the network and routing between the virtual machines.
    The scheme was as follows. The host machine has 2 network cards, and internally everything is wired up like this:
    eth0 (host) - bridge br0 - [eth0 of guest 1 - eth1 of guest 1] - bridge br1 - [eth0 of guest 2 - eth1 of guest 2] - bridge br2 - eth1 (host)
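    The chain of bridges above could be built roughly like this with bridge-utils; the tap device names (tap0..tap3) are assumptions for illustration, and each tap is the host-side end of one guest NIC:

```shell
# Outer bridge: host eth0 plus the first interface of guest 1
brctl addbr br0
brctl addif br0 eth0
brctl addif br0 tap0      # host end of eth0 in guest 1

# Middle bridge: connects guest 1 to guest 2
brctl addbr br1
brctl addif br1 tap1      # eth1 in guest 1
brctl addif br1 tap2      # eth0 in guest 2

# Inner bridge: second interface of guest 2 plus host eth1
brctl addbr br2
brctl addif br2 tap3      # eth1 in guest 2
brctl addif br2 eth1

# Bring the bridges up
ip link set br0 up
ip link set br1 up
ip link set br2 up
```

    This is a host-configuration sketch, not something to run verbatim: it assumes the tap devices already exist (QEMU/KVM usually creates them when a guest starts) and requires root.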

    We started testing. In tests the scheme behaved remarkably well: almost no load, stable operation.
    We put it into production. And then bam! The load shot up dramatically.

    It turned out that when about 15 Mbit/s of traffic flows through a virtual machine, it eats about 40% of the CPU (according to top) on the host machine. So just 2 such machines consume almost all the processor power of the host, while inside the virtual machines the load is only 1-2%.

    We read that normal people do not live without virtio. These are paravirtualized drivers: instead of the hypervisor emulating a real piece of hardware, the guest talks to the virtual device directly, which is supposed to reduce the load on the host machine.
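    For the KVM of that era, enabling virtio was a matter of command-line options; a hypothetical invocation might look like this (the image path, MAC address and tap name are placeholders, not from our setup):

```shell
# Start a guest with a virtio disk and a virtio NIC attached to tap0
qemu-kvm \
    -m 512 \
    -smp 1 \
    -drive file=/var/vm/router1.img,if=virtio \
    -net nic,model=virtio,macaddr=52:54:00:12:34:56 \
    -net tap,ifname=tap0,script=no \
    -nographic
```

    The key parts are if=virtio on the drive and model=virtio on the NIC; without them QEMU emulates a real IDE disk and a real (e.g. rtl8139/e1000) network card.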

    We tried it. And ran into two puzzling things:
    1) With virtio enabled, a guest machine may simply crash, without giving any reason, after 3-8 hours of operation.
    2) The load on the host machine did not decrease; it stayed at the same level.

    The host machine runs Gentoo with a manually compiled 2.6.30 kernel. Everything needed for virtualization is compiled into it.
    For guest machines we tried Ubuntu and Arch Linux. No difference: they all crash.
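    One sanity check worth doing inside a guest is confirming that the virtio drivers are actually bound to the devices (the interface name eth0 is an assumption):

```shell
# Inside the guest: are the virtio modules loaded at all?
lsmod | grep virtio          # expect virtio_net, virtio_blk, virtio_pci

# Which driver is behind the NIC?
ethtool -i eth0              # should report "driver: virtio_net"
```

    If ethtool reports an emulated driver such as 8139cp or e1000 instead, the guest fell back to full emulation and no host-side CPU savings should be expected.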

    We tried updating the kernel on the host machine, updating KVM, updating the guest machines... so far no luck. Without virtio everything works, but the host is sometimes overloaded.
    Now I keep experimenting with KVM on another machine, but the thought of giving Xen a try has already crept in... And in general, maybe I am doing something wrong?
    For example, maybe for virtio to work you cannot combine the virtual machine's interface with a real network card in a software bridge... Or maybe I am doing everything right, and this is just how it is? Such a load, such problems...

    In general, the help of competent people is needed.

    P.S.: A big request: please do not give advice like "replace everything with Cisco" or "Linux is garbage, install something else". If you think about it a little, you will understand why such tips are quite far from reality.
