Running FreeBSD on Linux KVM

    Task: run FreeBSD under Linux, preferably with a minimum of changes to the Linux system during initial setup and updates, the ability to run both on a workstation and on a server, and minimal loss of performance.

    You can use any common Linux distribution as a VPS farm. In our case, it will be Ubuntu 12.10 with the 3.5.0 kernel for amd64.

    The guest system will be FreeBSD 9.1 for i386. The i386 architecture was chosen due to significantly lower RAM consumption by 32-bit applications compared to 64-bit ones.

    As the virtualization system, we will use Linux KVM ("Kernel-based Virtual Machine").

    A brief comparison of KVM with alternatives

    KVM advantages:
    • it does not require preliminary installation of a hypervisor or resource planning, unlike Xen/ESXi/Hyper-V, and for study and testing it can be run on any Linux distribution, including desktop ones,
    • unlike all other virtualization systems (except LXC and, with caveats, OpenVZ), it is included in the mainline Linux kernel and is developed by key Linux developers (primarily Red Hat),
    • unlike LXC and OpenVZ, it can run an arbitrary OS, including Linux with its own kernel instance and set of drivers.

    KVM disadvantages:
    • the processor must have hardware virtualization support,
    • there are no convenient graphical shells for launching and editing virtual machines,
    • the base system has no transparent access to the files, processes, and consoles of the containers (LXC and OpenVZ have it).

    Environment setting

    Below we assume that all disk images and configuration files are stored in the home directory, in the virt subdirectory:
    mkdir -p ~/virt && cd ~/virt

    Install the necessary software on Linux:
    apt-get update && apt-get -y install kvm

    Virtualization itself is performed by the module within the kernel, but the packages that apt-get installs according to dependencies contain control and auxiliary utilities, KVM-specific settings for basic Linux services (for example, for udev), etc.

    Checking for hardware support:

    Hardware virtualization must (a) be supported by the processor and (b) be enabled in the BIOS. Otherwise KVM cannot work: the guest launch command will either fail with an error (if the "-enable-kvm" switch is given) or fall back to the much slower QEMU software-virtualization mode.
    The standard shell script kvm-ok performs the typical checks and, on failure, suggests how to fix the problem.
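    For a quick manual check, you can look at the CPU flags yourself; this is roughly part of what kvm-ok does (vmx means Intel VT-x, svm means AMD-V):

```shell
# Check CPU flags: vmx = Intel VT-x, svm = AMD-V.
if grep -Eq '(vmx|svm)' /proc/cpuinfo; then
    echo "CPU supports hardware virtualization"
else
    echo "no vmx/svm flag: KVM cannot be used on this CPU"
fi
```

    Even when the flag is present, virtualization may still be disabled in the BIOS; kvm-ok additionally checks whether /dev/kvm exists.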

    Networking for containers

    KVM supports many ways of organizing a guest network (see the brief official overview). At the moment, "man kvm" describes eight variants of the "-net" key with dozens of possible sub-options, and "-net" must often be specified twice in the container launch command with different sets of sub-options: once to create the guest interface and once to create the interface in the base system that communicates with the guest. Network settings are perhaps the least obvious part of KVM when first learning it.

    For more or less serious use, two options make sense:
    1. the base system gives guests transparent access to the external network through a network bridge,
    2. the base system acts as a router between the external network and the guest network.

    Both options require superuser privileges; in our case they share the same set of parameters for "-net ...", but differ in the actions performed by the script given in "-net ...,script=...", which KVM calls when the container starts to configure the network interface created in the base system. The bridge option is slightly simpler, so our script in ~/virt/ will do the following:
    • if there is no bridge, it creates it and adds an external physical interface to it,
    • assigns the bridge the same IP as the physical interface,
    • moves all routes from the physical interface to the bridge,
    • connects a virtual interface to the bridge to communicate with the guest system.

    #!/bin/sh
    # KVM calls this script with the name of the created TAP interface as $1.
    # Constants
    BRIDGE_IFACE="br0"          # bridge name; "br0" is an example value
    # Variables
    iface="$1"
    gwdev="$(ip route list | grep ' via ' | sed -e 's,.* dev ,,' -e 's, .*,,' | head -1)"
    my_ip="$(ip addr list dev "$gwdev" | grep ' inet ' | sed -e 's,.* inet ,,' -e 's, .*,,' | head -1)"
    # Create and configure bridge
    if ! ip link list "$BRIDGE_IFACE" >/dev/null 2>&1
    then
            echo "Create bridge $BRIDGE_IFACE..."
            brctl addbr "$BRIDGE_IFACE"
            brctl addif "$BRIDGE_IFACE" "$gwdev"
            ip link set "$BRIDGE_IFACE" up
            ip addr add "$my_ip" dev "$BRIDGE_IFACE"
    fi
    # Move routes from physical iface to bridge
    if test "$gwdev" != "$BRIDGE_IFACE"
    then
            ip route list dev "$gwdev" | grep -v 'scope link' \
            | while read line; do
                    ip route delete $line dev "$gwdev"
                    ip route add    $line dev "$BRIDGE_IFACE"
            done
    fi
    # Add virtual iface to bridge
    ip link set "$iface" up
    brctl addif "$BRIDGE_IFACE" "$iface"

    Various manuals recommend configuring the bridge in advance by editing /etc/network/interfaces, but for test purposes on a workstation it is easier to create it at the moment it is actually needed, i.e. at the first launch of the first container.

    If additional MAC addresses are unacceptable on the external network, routing and ProxyARP can be used instead of the bridge. If the external network allows exactly one MAC and one IP, the base system has to use routing, an IP address on the internal interfaces, and NAT for the guests' access to the outside world. In both cases you will need either static IPs in the guest systems or a DHCP server in the base system to configure the guests.

    KVM is able to automatically generate MAC addresses for guest network interfaces at startup, but if you plan to release guests to the outside world through a network bridge, it is better to assign them permanent MAC addresses. In particular, if a DHCP server is running on the external network, this will help the guest system get the same IP from it every time it starts. First, “compose” the base MAC address:
    perl -e '$XEN_RESERVED = "00:16:3e"; printf "%s:%02x:%02x:%02x\n", $XEN_RESERVED, int(rand(0x7f)), int(rand(0xff)), int(rand(0xff));'

    For containers, we replace the last byte with their serial number and use the same number for their names and VNC consoles. For example, container number 25 will be called "kvm_25", have MAC 00:16:3e:xx:xx:25, and listen for VNC connections on port 5925. To avoid juggling different number systems, it is recommended to pick numbers from 10 to 99 (they read the same in decimal and hexadecimal). Of course, this approach is not used in VDS hosting, but it is suitable for personal needs.
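    The numbering scheme above can be sketched as a small shell fragment (the MACBASE and VM_ID values here are examples; MACBASE would be the prefix generated by the perl one-liner above):

```shell
# Example values; MACBASE is the prefix produced by the perl one-liner above.
MACBASE="00:16:3e:1a:2b"
VM_ID=25                      # pick 10..99 so decimal and hex read the same

VM_NAME="kvm_${VM_ID}"
VM_MAC="${MACBASE}:${VM_ID}"
VNC_PORT=$((5900 + VM_ID))    # VNC display :25 listens on TCP port 5925

echo "$VM_NAME $VM_MAC $VNC_PORT"   # kvm_25 00:16:3e:1a:2b:25 5925
```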

    Action plan

    1. Boot from the CD image, install the OS onto an empty HDD image, shut down the VM.
    2. Edit the startup script (disable the CD), boot from the HDD image, configure virtio support in the guest OS, shut down the VM.
    3. Edit the startup script (switch the disk and network types from IDE and Realtek to virtio), boot.

    Preparing for the first boot

    Download the ISO image of the FreeBSD installation disc:

    Create a hard disk image:
    kvm-img create -f qcow2 freebsd9.img 8G
    kvm-img info freebsd9.img

    The image format is selected with the "-f" switch: raw (default), qcow2, vdi, vmdk, cloop, etc. Raw is understood by any software but provides a minimum of features and nominally occupies its full declared size at once. Qcow2 is more compact (it grows on demand) and more functional (snapshots, compression, encryption, etc.), but is recognized only by QEMU-based systems.

    First launch and installation

    Startup script ~/virt/freebsd9.start:
    #!/bin/sh
    DIR="$HOME/virt"
    VM_ID="10"                 # container number, 10..99
    MACBASE="00:16:3e:xx:xx"   # substitute the bytes generated above for xx:xx
    sudo kvm \
    -net "nic,model=rtl8139,macaddr=$MACBASE:$VM_ID" \
    -net "tap,ifname=tap$VM_ID,script=$DIR/,downscript=/bin/true" \
    -name "kvm_$VM_ID" \
    -enable-kvm \
    -m 512M \
    -hda "$DIR/freebsd9.img" \
    -cdrom "$DIR/FreeBSD-9.1-RELEASE-i386-disc1.iso" \
    -boot order=d
    ## END ##

    In the window that opens, the CD loader and the FreeBSD installer should start. Perform the installation in the usual way; almost all options can be left at their defaults.

    Launch Command Explanations

    Sudo is necessary because the KVM loader needs superuser privileges to create the TAP interface.

    Two " -net " keys create two network interfaces connected to each other: TAP in the base system and virtual Realtek-8139 in the guest.

    The "-enable-kvm" switch ensures that QEMU does not silently fall back to software emulation if KVM cannot start.

    The " -name " key defines the title of the console window, can be used to search the list of processes, etc.

    The boot disk is the CD ("-boot order=d"). The option applies only when the container is powered on, i.e. on reboot the boot search starts from the first hard disk.

    -m "sets the size of guest RAM. By default - 128 megabytes. This may be enough for the installer to work, but after a successful installation the very first attempt to assemble a large project from the ports with" -m 256M "and a swap partition of 512 megabytes (automatically selected by the installer) called kernel trap.

    The KVM loader runs as a normal user process, so to power off the virtual machine it is enough to press Ctrl+C in its console (naturally, while the guest OS is running it is better not to do this and to use poweroff in the guest console instead). The loader communicates with the in-kernel virtualization subsystem through the character pseudo-device /dev/kvm, so any user with write access to it can start virtual machines; as a rule, such users are added to the "kvm" group.

    To run in the background, the loader has the "-daemonize" switch.

    Second launch and configuration of virtio

    Before this run, comment out the "boot" and "cdrom" lines in the freebsd9.start script. Then launch it and, once FreeBSD has finished booting, log in to its command line with superuser privileges.

    The virtio guest drivers for FreeBSD are not yet included in the core kernel, but are distributed as a port, so we need to install the ports tree:
    portsnap fetch extract

    Building the drivers requires the sources of the running kernel:
    csup /usr/share/examples/cvsup/standard-supfile   # first set a real CVSup host in the supfile (or pass one with "-h hostname")

    After that, we collect and install the drivers themselves:
    make -C /usr/ports/emulators/virtio-kmod install clean

    The following lines must be added to /boot/loader.conf :

    They can be copied from /var/db/pkg/virtio-kmod*/+DISPLAY. If you forget, the FreeBSD kernel will drop to the «mountroot>» prompt at boot, because it cannot see the disk drive with the root FS. You will have to reboot, enter the boot loader command line, and load these modules manually with the "load" command before booting the kernel.
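    For reference, the lines in question as typically listed by the virtio-kmod port's pkg-message (verify them against the +DISPLAY file of your installed version):

```shell
virtio_load="YES"
virtio_pci_load="YES"
virtio_blk_load="YES"
if_vtnet_load="YES"
virtio_balloon_load="YES"
```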

    In /etc/rc.conf you should use one of the two lines:
    ifconfig_vtnet0="DHCP"      # ..the ifconfig_re0 line can be removed
    ifconfig_vtnet0_name="re0"  # ..the ifconfig_re0 line must be kept!

    If a large number of settings are already tied to the old interface name, the second option avoids changing them all, but it makes the overall scheme slightly more confusing.

    In /etc/fstab, replace every /dev/ada with /dev/vtbd. If the disk was partitioned automatically by the installer, fstab will look like this:
    # Device	Mountpoint	FStype	Options	Dump	Pass#
    /dev/vtbd0p2	/		ufs	rw	1	1
    /dev/vtbd0p3	none		swap	sw	0	0

    If you forget, or edit fstab incorrectly, the next boot will drop you to the mountroot prompt, where you will have to type "ufs:/dev/vtbd0p2" manually.
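    The substitution can be done with a sed one-liner; shown here on a demo copy (inside the guest you would run it on /etc/fstab itself, keeping a backup):

```shell
# Demo of the /dev/ada -> /dev/vtbd substitution on a sample fstab line.
# Inside the guest: sed -i.bak 's,/dev/ada,/dev/vtbd,g' /etc/fstab
printf '/dev/ada0p2\t/\tufs\trw\t1\t1\n' > fstab.demo
sed 's,/dev/ada,/dev/vtbd,g' fstab.demo   # prints the line with /dev/vtbd0p2
rm fstab.demo
```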

    What is virtio and why is it needed at all?

    If the container is given a virtual copy of a real device (such as a Realtek network card or a SCSI disk), calls to it first pass through the device driver in the guest system. The driver converts high-level read-write calls into low-level operations with interrupts, registers, I/O ports, etc. The virtualization system intercepts these and does the reverse work, translating them back into high-level calls to the host system (for example, reads and writes of the disk image file).

    If a virtio device is provided in the container, the guest system driver immediately transfers data to and from the external system. The driver is simplified, low-level virtualization of physical resources is not required.

    It is reported that switching to virtio roughly doubles guest disk throughput and speeds up the network by almost an order of magnitude.

    Other interesting features in this area are dynamic allocation of memory for the guest system ("ballooning", via the virtio balloon device) and merging of memory blocks with identical content (KSM, "Kernel Samepage Merging").

    VirtualBox and KVM use a compatible virtio mechanism, so the set of guest drivers is the same for both. On Linux, the guest drivers are already included in the standard kernel; for FreeBSD, they are distributed as a port (see above); for Windows, they are written by the KVM developers.

    Third launch

    Change the lines with the network interface and disk in ~ / virt / freebsd9.start :
    -net "nic,model=rtl8139,macaddr=$MACBASE:$VM_ID" \
    -hda $DIR/freebsd9.img \

    ... to the following:
    -net "nic,model=virtio,macaddr=$MACBASE:$VM_ID" \
    -drive "file=$DIR/freebsd9.img,if=virtio" \

    If FreeBSD booting is successful, you can verify with the following commands that virtual devices are now in use:
    df; swapinfo
    dmesg | grep vt

    Guest console

    By default, KVM renders the guest console in a graphics window using the SDL library. This option is not suitable for running the container in the background, for running on a server without graphics, or for accessing the console over the network.

    To solve this, the KVM container can expose the guest console via the VNC network protocol. In ~/virt/freebsd9.start, add to the startup options:
    -vnc localhost:$VM_ID \

    Now, when the container starts, KVM will open not a graphics window but a network listener. You can see it, for example, with "sudo netstat -ntlp | grep -w kvm".

    Install a client application (e.g. tightvncviewer) and connect to the console:
    apt-get install vncviewer
    vncviewer :10

    Note: if there is no reaction to the keyboard in the VNC window, click on it.

    A VNC connection can be password-protected, but unfortunately the password cannot be set directly from the command line. You will need either to connect to the container's control console through a separate control socket, or to open the control console in the main VNC window by pressing Ctrl+Alt+Shift+2.
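    The control-socket variant can be sketched as follows; the "-monitor" switch with a UNIX socket is standard QEMU/KVM syntax, but the socket path $DIR/monitor.sock is our own choice:

```shell
# Fragment to add to ~/virt/freebsd9.start (assumed socket path):
-monitor "unix:$DIR/monitor.sock,server,nowait" \

# Then attach to the (qemu) prompt from the base system, e.g. with socat,
# and issue monitor commands such as "change vnc password" there:
#   sudo socat - "UNIX-CONNECT:$DIR/monitor.sock"
```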

    In addition to SDL and VNC, a curses-based text interface is supported (the "-curses" or "-display curses" switch). In theory it could be convenient for a background run inside screen; in practice KVM dumps its own diagnostic output into the console it creates, which makes this mode inconvenient.
