Qemu-KVM: working in Debian

This article is a summary of the knowledge I have accumulated while working with the Qemu-KVM hypervisor, and I want to share it as it stands today. I hope it will be useful to those who are just about to start using Qemu-KVM or who already use it. One more thing: the article is not for Linux beginners (elementary things will not be covered here).

Much has been written about this virtualization system online, but when you actually start working with it you run into a lack of information and practical examples. So let's get started.

The initial task: a computer was set aside as a test station for checking backup copies of databases, installed software, built MSI packages and other quite varied jobs. Its configuration:
  • Athlon X2 245 processor
  • 4 GB of RAM
  • 500 GB hard drive
  • ASUS M4N68T-M LE motherboard.

After a little thought, it was decided to use this computer as a platform for working with virtual machines; the processor supports hardware virtualization. With that in mind, one more hard drive was added: after rummaging through the spare-parts bin, an 80 GB disk was found and put into the machine.

That settles the hardware; the hypervisor is next. I wanted to work with a platform that could later be used in a corporate environment, so VirtualBox, Microsoft Virtual PC and VMware Workstation did not fit. The next question was which system to choose:
  1. Microsoft Hyper-V does not fit: it is paid. The company I work for uses only licensed software, and nobody will allocate a server license for my purposes.
  2. VMware ESXi does not recognize the SATA controller on the motherboard (it is developed for server hardware).
  3. Qemu-KVM is a freely developed hypervisor that supports hardware virtualization and can be installed on any modern Linux distribution. This one is for me; we take it.

Choosing a Linux distribution. I use Debian by default, but you can pick any distribution you like (Ubuntu, Fedora, openSUSE, Arch Linux). The qemu-kvm hypervisor is available in all of them, and Google is full of articles on how to install it.

Let's get down to business. I will not describe the installation of the operating system itself; I will only note that during the installation I did not touch the larger hard drive, its time was yet to come. How to install the hypervisor on Debian is described very well here. Personally, I installed qemu-kvm and libvirt-bin.
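On Debian this boils down to roughly the following (a minimal sketch; exact package names may differ slightly between releases):
apt-get update
apt-get install qemu-kvm libvirt-bin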

The hypervisor is in place, now a little about its internal structure. There are two directories worth looking into:
/etc/libvirt/ - configuration files are mostly stored here;
/var/lib/libvirt/ - disk images, system snapshots and much more will be stored here.
Our hypervisor is installed.

Now a little about the settings. Qemu-KVM does not work with the physical network card directly, so you need to configure a bridge. Here is what to do: open the file /etc/network/interfaces and make it look like this:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo br0
iface lo inet loopback
# Set up interfaces manually, avoiding conflicts with, e.g., network manager
iface eth0 inet manual
# Bridge setup
iface br0 inet static
    bridge_ports eth0
    address xxx.xxx.xxx.xxx
    broadcast xxx.xxx.xxx.xxx
    netmask xxx.xxx.xxx.xxx
    gateway xxx.xxx.xxx.xxx
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0

More information here.

Next, save the file and reboot the computer.
And lo and behold, the bridge is up and the hypervisor is ready to go.
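To double-check that the bridge actually came up after the reboot, you can look at it with the usual tools (brctl is part of the bridge-utils package, which the bridge_* options in /etc/network/interfaces rely on):
brctl show
ip addr show br0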
Then the question arises: how do you manage the server? Qemu-KVM can be managed with two programs: virt-manager and virtinst.

Virt-manager.
This program provides a graphical interface. It supports both remote and local management of virtual machines. Its one huge minus: there is simply no equivalent for Windows.
How I personally got around this: I installed the LXDE desktop and an xrdp server, and thanks to this simple pair of programs I did not have to walk over to the machine physically; I simply connected through the standard RDP client that ships with Windows. The downside is that it eats additional computer resources.
If you install virt-manager, it automatically creates:
a storage pool for virtual machine images at /var/lib/libvirt/images;
a default virtual network interface.
Therefore, you need to mount the large hard drive into the /var/lib/libvirt/images directory.

Virtinst.
This is a console utility: no graphics, just the command line, and it saves system resources.
If you decide to use console management, you will have to create the storage pool for virtual machine images by hand. This is done as follows: connect to the server over SSH and log in as root.
You can choose any storage location:
  1. you can mount a hard drive into a directory and specify that directory as storage (see the sketch after this list);
  2. you can simply specify the device itself as storage.
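If you go the directory route, the preparation might look roughly like this (only a sketch: it assumes the 500 GB disk shows up as /dev/sdb with a single partition sdb1, so adjust to your system):
mkfs.ext4 /dev/sdb1
mkdir -p /etc/libvirt/images
mount /dev/sdb1 /etc/libvirt/images
echo '/dev/sdb1  /etc/libvirt/images  ext4  defaults  0  2' >> /etc/fstab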

I was more attracted by the directory option (it is easier to copy virtual machine disks). So this is what I did: I formatted the 500 GB disk as ext4 (once btrfs is mature it will be better to use that), created the directory /etc/libvirt/images and mounted the disk there. Do not forget to add a line to fstab for automatic mounting. Then type virsh in the console and press Enter. It looks like this:
root@kvm:/home/firsachi# virsh
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh #

Now, if you give the help command, you will see a list of shell commands. We are interested in whether there are any ready-made storage pools. To find out, enter the command pool-list --all
virsh # pool-list --all
 Name                 State      Autostart
-------------------------------------------

If you enter just pool-list, only the active pools will be shown, and we need to see all of them.
We create the pool:
virsh # pool-define-as storage dir --target /etc/libvirt/images/

We set the pool to start automatically:
virsh # pool-autostart storage

We start the pool
virsh # pool-start storage

Now we check
virsh # pool-list --all

The output should look like this:
virsh # pool-list --all
 Name                 State      Autostart
-------------------------------------------
 storage              active     yes

I recommend reading a little about the commands here.

If you do not create a storage pool for virtual machine disk images, then to keep the disks in one place you will have to specify the full path to every image when you create it (a lot of typing). With a pool, the image is created inside the pool you specify, and if you have only one pool you do not even have to name it. The pool configuration file is an .xml file in the directory /etc/libvirt/storage/. Exit virsh by typing exit.
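As an illustration of how a pool saves typing, a disk image can be created in it by name alone; something like this (the volume name here is just an example):
virsh # vol-create-as storage test-disk.qcow2 20G --format qcow2
virsh # vol-list storage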

In principle, our hypervisor is set up and ready to use. But there is one more small detail I would like to mention, namely how Qemu-KVM actually works.

A virtual machine running under it sends commands to the physical processor directly through a loadable kernel module (kvm-amd or kvm-intel). The module must match the processor vendor (Intel or AMD).
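To check that the CPU exposes hardware virtualization and that the right module is loaded, something like this is usually enough:
egrep -c '(vmx|svm)' /proc/cpuinfo   # non-zero means the CPU advertises VT-x/AMD-V
lsmod | grep kvm                     # should show kvm plus kvm_intel or kvm_amd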

This module is loaded when the system boots. That, however, is not my way: I rebuild the system kernel so that the required module is built directly into the kernel (it is available as an option during the rebuild).
And that is not the only modification I make. Besides that, I do the following:
  • I turn off sound support (this is a server, not a workstation);
  • I drop the IPv6 protocol (it is not used on my network);
  • I turn off support for Wi-Fi and WiMAX network cards and everything else that can be stripped out.
I do not like it when unnecessary features burden the kernel with extra load.

These two articles (the first and the second) helped me build my own kernel.

Let me run a little ahead. Many people online complained that the virtio network card model does not work correctly. The explanation is quite simple: the drivers for this device are still at the experimental stage. Virtio storage, on the other hand, works fine.

Now we begin working with virtual machines. We create our first virtual machine:
virt-install --connect qemu:///system \
--name comp1 \
--ram 512 \
--vcpus=1 \
--disk pool=storage,cache=none,size=20,format=qcow2 \
--disk /home/firsachi/Winxp.iso,device=cdrom \
--network bridge=br0,model=e1000 \
--os-type=windows \
--os-variant=winxp \
--graphics vnc,port=5901,listen=0.0.0.0


I want to explain some details:
pool=storage specifies the pool in which to create the disk;
cache=none means disk caching is disabled; in my case this is an img image, and with write caching enabled the access time to the virtual machine's disk roughly doubles;
size=20 is the disk size in gigabytes;
format=qcow2 is the disk image format; as far as I understand, it is the only one that supports system snapshots;
model=e1000 is the Intel gigabit network card model (the default is the 100-megabit rtl8139);
port=5901 is the port you can connect to with UltraVNC Viewer;
listen=0.0.0.0 allows connections from any IP (by default only localhost is listened on).
The installation can also be done directly onto a device. It would look like this:
virt-install --connect qemu:///system \
--name comp1 \
--ram 512 \
--vcpus=1 \
--disk /dev/sdb,format=qcow2 \
--disk /home/firsachi/Winxp.iso,device=cdrom \
--network bridge=br0,model=e1000 \
--os-type=windows \
--os-variant=winxp \
--graphics vnc,port=5901,listen=0.0.0.0

Here sdb should be replaced with your own device.

If everything went well, then to connect to the console of our virtual machine you need to install UltraVNC Viewer on your computer. In the connection settings, specify the server's IP address (or domain name) and the port.
How to install Windows, I hope everyone knows.

Now about the virtual machine drivers. To install the drivers you need the following disk images: virtio-win-0.1-30.iso and viostor-0.1-30-floppy.img.
The latter is optional; it is only needed if you are going to install Windows XP or Windows 2003 Server onto virtio storage (by the way, the operating system runs faster that way).

How to attach and detach a cdrom is described here.
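For reference, attaching an ISO to a running guest and detaching it again can be done from virsh roughly like this (the ISO path and the hdc target are only an example):
virsh # attach-disk comp1 /home/firsachi/virtio-win-0.1-30.iso hdc --type cdrom --mode readonly
virsh # detach-disk comp1 hdc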
There is another bonus: you can attach external devices to a virtual machine (for example, a physical hard drive or a USB drive). Go into virsh; the command looks like this:
virsh # attach-disk comp1 /dev/sdc vdv --type disk
Where:
  • comp1 is the name of the virtual machine to which we attach the disk;
  • /dev/sdc is the path to the device on the physical computer;
  • vdv is the device name the disk will appear under inside the virtual machine;
  • --type is the type of disk.

Detach the disk:
virsh # detach-disk comp1 vdv

This article helps a lot. I also recommend looking here.

Where do I use this hypervisor? A domain controller runs on it constantly, and it works stably, without failures; authorization in the domain goes without problems. In parallel, I can run a maximum of two virtual machines.
Recovering a virtual machine on another server.
One sunny and cloudless day I wondered: what happens if the machine has to be moved to another server? The hypervisor has a virtual machine migration mode, but there was nowhere to test it; I had no second computer like this one.
So I had to start from the worst-case scenario. Suppose:
  • the computer the virtual machine was running on (the one with the Athlon X2 245) burned out;
  • once a week the virtual machine is shut down and a backup copy of its configuration file and disk image is made (a sketch of such a backup is shown below).
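The weekly backup itself can be as simple as this (a sketch based on my layout: the domain XML is in /etc/libvirt/qemu/, the image lives in the /etc/libvirt/images pool, and /backup is just a placeholder; wait for the guest to actually power off before copying):
virsh shutdown comp1
cp /etc/libvirt/qemu/comp1.xml /backup/
cp /etc/libvirt/images/comp1.img /backup/
virsh start comp1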

So here is what I did in that scenario. For the experiment I brought my laptop from home: an i5-3210M processor and an openSUSE Linux distribution (it is more compatible with the laptop's hardware).

I installed Qemu-KVM on it and copied over the virtual machine's configuration file and disk image. In the configuration file I edited the path to the virtual machine's disk, then rebooted the laptop. And lo, the hypervisor not only saw my virtual machine but launched it.
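In terms of commands, such a restore looks roughly like this (a sketch; names and paths are from my setup and will differ on yours):
# on the old server, or take the saved copy from the backup
virsh dumpxml comp1 > comp1.xml
# copy comp1.xml and the disk image to the new server, fix the
# <source file='...'/> path inside comp1.xml, then register and start the machine
virsh define comp1.xml
virsh start comp1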

Creating snapshots of virtual machines.
Qemu-KVM supports virtual machine snapshots. The simplest example: as root, go into virsh and run the following command:
virsh # snapshot-create-as name
Here name is the name of the virtual machine.
After the snapshot is taken, backup copies of the configuration files end up in the directory /var/lib/libvirt/qemu/snapshot/. Where the virtual machine's disk data goes, I do not know yet.
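For what it is worth, if the disk is a qcow2 image, its internal snapshots can be listed straight from the image file; something like this (the path is just an example from my layout):
qemu-img snapshot -l /etc/libvirt/images/comp1.img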

You can list the snapshots with the following command:
virsh # snapshot-list name
 Name                 Creation Time             State
------------------------------------------------------
 1360593244           2013-02-11 16:34:04 +0200 running
 1360594479           2013-02-11 16:54:39 +0200 running

You can revert to a snapshot like this:
virsh # snapshot-revert name 1360593244

You can delete an unneeded snapshot like this:
virsh # snapshot-delete name 1360593244

That is how we live now: the Qemu-KVM hypervisor, a virtual domain controller on top of it, and I am pleased with the work done.
Thanks to everyone who read to the end. I hope my notes have been useful.
