OpenVZ, Quagga and Live Migration
- Tutorial

I want to share a convenient way to use OpenVZ containers. Containers are so lightweight that you can spin one up for every little task.
By default, a container uses the venet network interface. For the administrator this looks like simply assigning an address to the container, which is convenient. But for the container to be reachable from the network, you have to use addresses from the same IP network to which the physical server (the hardware node, HN) is connected: the server knows the list of containers running on it and answers ARP requests for container addresses with its own MAC address. We always want more convenience in our work, though, and dynamic routing will give it to us.
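For example, once a container exists, assigning it an address is a single command (CTID 101 and the address here are taken from the vzlist output further below):
# vzctl set 101 --ipadd 198.51.100.2 --save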
We will look at exactly how, with examples using Quagga and OSPF.
In order for this scheme to work, the dynamic routing protocol (OSPF) must already be running on the network. However, if you have many networks connected by more than one router, then most likely you have already configured dynamic routing.
First, install Quagga on the server. This part is simple: Quagga is packaged in almost every distribution. The commands hereinafter are for Debian-based systems:
# sudo apt-get install quagga
Second, create a minimal configuration and start the daemons:
In the /etc/quagga/daemons file, change these lines from "no" to "yes":
zebra=yes
ospfd=yes
Create the file /etc/quagga/zebra.conf:
ip forwarding
line vty
Create the file /etc/quagga/ospfd.conf:
router ospf
redistribute kernel
network 0.0.0.0/0 area 0.0.0.0
line vty
Restart Quagga:
# sudo service quagga restart
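A quick optional check that both daemons actually started (the brackets keep grep from matching itself):
# ps ax | grep -E '[z]ebra|[o]spfd'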
With these settings, the HN will look for routers on all of its interfaces and tell them about all of its running containers.
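This works because for every venet address OpenVZ installs a host route via venet0 in the kernel routing table, and the "redistribute kernel" line above injects those routes into OSPF. You can see them on the HN; illustrative output using addresses from the examples below:
# ip route | grep venet0
198.51.100.2 dev venet0 scope link
203.0.113.4 dev venet0 scope link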
Check that Quagga has connected to the network and that container information is being distributed:
# vtysh -e "show ip ospf nei"
Neighbor ID Pri State Dead Time Address Interface RXmtL RqstL DBsmL
198.51.100.11 128 Full/DR 2.548s 192.0.2.25 vmbr0:192.0.2.26 0 0 0
192.0.2.27 1 2-Way/DROther 2.761s 192.0.2.27 vmbr0:192.0.2.26 0 0 0
192.0.2.28 1 Full/Backup 2.761s 192.0.2.28 vmbr0:192.0.2.26 0 0 0
If the list is not empty, the neighboring routers have been found.
Now check that Quagga has picked up the container information:
# vzlist
CTID NPROC STATUS IP_ADDR HOSTNAME
100 38 running - radius.local
101 26 running 198.51.100.2 dns.local
104 47 running 203.0.113.4 cacti.local
105 56 running 203.0.113.5 host3.local
152 22 running 203.0.113.52 host4.local
249 96 running 203.0.113.149 zabbix.local
# vtysh -e "show ip ospf database external self-originate" | fgrep Link\ State\ ID
Link State ID: 198.51.100.2 (External Network Number)
Link State ID: 203.0.113.4 (External Network Number)
Link State ID: 203.0.113.5 (External Network Number)
Link State ID: 203.0.113.52 (External Network Number)
Link State ID: 203.0.113.149 (External Network Number)
We get two lists of IP addresses, and they must match (container 100 has no venet address, so it produces no route).
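As a final check, on any neighboring router you can confirm that a container route arrived as an OSPF external route (a sketch; the exact output varies by version):
# vtysh -e "show ip route 203.0.113.5"
Routing entry for 203.0.113.5/32
  Known via "ospf", distance 110, metric 20, best
  * 192.0.2.26, via vmbr0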
So you can create a container with any address, and it will be reachable from everywhere.
Now let's add some Quagga configuration:
Thanks to the following lines, information will propagate faster; the interfaces on the neighboring routers must be configured with matching timers.
file /etc/quagga/ospfd.conf:
...
interface vmbr0
ip ospf hello-interval 1
ip ospf dead-interval 3
...
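The hello and dead intervals must match on both ends of a link, otherwise the adjacency will not form. You can verify what an interface actually uses (illustrative output):
# vtysh -e "show ip ospf interface vmbr0"
...
  Timer intervals configured, Hello 1s, Dead 3s, Wait 3s, Retransmit 5
...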
Next, information about containers whose addresses belong to the same network the server itself is in will not be redistributed. Indeed, everyone already knows about that network, so why clutter routing tables with extra entries.
file /etc/quagga/ospfd.conf:
...
ip prefix-list ifvmbr0 seq 100 permit 192.0.2.0/24 le 32
router ospf
 redistribute kernel route-map openvz
route-map openvz deny 100
 match ip address prefix-list ifvmbr0
route-map openvz permit 200
...
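To make sure the filter works, you can give a test container an address from 192.0.2.0/24 and check that no external LSA is originated for it (CTID 300 and the address are hypothetical):
# vzctl set 300 --ipadd 192.0.2.50 --save
# vtysh -e "show ip ospf database external self-originate" | fgrep 192.0.2.50
Empty output means the route was filtered out, as intended.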
Besides the very fact of being able to use any addresses, this scheme brings some other bonuses:
- Live Migration: you can transfer containers between servers on different networks and at different sites without downtime or loss of connectivity (see the sketch after this list).
- Hot spare: a "spare" container with the same address sits on another server and is announced with a higher metric.
- Anycast: as in the previous point, containers on different servers at different points of presence share the same addresses, which are advertised with metric-type 1. Traffic then goes to the "nearest" instance (DHCP / RADIUS / IPTV / proxy, etc.).
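A minimal sketch of the live-migration case: move container 105 from the vzlist above to another hardware node (the name hn2.local is hypothetical). As soon as the container starts there, the destination node originates the /32 route and traffic follows it:
# vzmigrate --online hn2.local 105
For the anycast variant, the external metric type is set right in the redistribute line, e.g. "redistribute kernel metric-type 1 route-map openvz".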
PS
- With Proxmox it also works great; but if you build the cluster over tunnels, the tunnel interfaces must be excluded from OSPF or have their metrics raised, otherwise there is a risk that user traffic will run through the tunnels (see the sketch after this list).
- You may ask what is new here; Quagga has been installed on servers for ages. The advantage is that the daemon does not need to run inside every container, only on the host machine, which can host many dozens of containers.
- I do not recommend using an OSPF NSSA area for this purpose: there are subtleties in how Quagga generates type-7 LSAs for such routes, so it most likely will not work.
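A sketch of raising the metric on a tunnel interface so that OSPF uses it only as a last resort (tun0 is a hypothetical interface name), in /etc/quagga/ospfd.conf:
...
interface tun0
 ip ospf cost 65535
...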
PPS / upd: Do not use this scheme if separate teams are responsible for the network and the servers. It is rather a very convenient "hack" for administrators who manage both.