OpenVZ + venet + vlan / addresses from different networks

This post is about assigning addresses from different networks to OpenVZ containers on the venet interface. I decided to write it after seeing other specialists solve this problem in clumsy ways, or give up on venet altogether.

Environment


We have an OpenVZ hardware node with containers that need to be assigned public addresses and addresses on the internal network. A container may have only public addresses, only internal ones, or both at once. The public addresses and the internal network are reached from the host through different network segments; in my example they sit in different VLANs, and the internal network is the block 192.168.1.0/24.
The host OS is CentOS 6.

Problem


If you simply add the internal and external addresses to a container, its network will not work as it should: for every destination the very first address will always be chosen as the source address, and when the host node picks a route, it will use a routing table that is wrong for the internal (or external) network.
The OpenVZ documentation and mailing lists tend to recommend veth for this kind of network configuration. However, I try to avoid veth for the following reasons:
  • veth has lower performance
  • with veth the host does not impose network restrictions on the container, which is worse from a security standpoint
  • veth essentially requires a bridge. When the set of bridge member interfaces changes (containers start or stop), the bridge takes the numerically lowest MAC address among its member interfaces as its own. If you are unlucky when starting or stopping a container — a new interface appears with a MAC lower than the current one, or you stop the container whose MAC the bridge is currently using — connectivity to the host node on this interface is lost until the switch learns the new MAC, which in some cases can take more than a dozen seconds. I worked around this by assigning deliberately high MAC addresses to the virtual interfaces (a sketch of that workaround follows this list), but I was not thrilled about it.
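For reference, the workaround from the last item boils down to something like this: pinning a locally administered, deliberately high MAC on the container's host-side veth so the bridge never elects it. The interface name veth138.0 is just an example, not taken from the real setup:
[root@pve1 ~]# ip link set dev veth138.0 address FE:FF:FF:FF:FF:01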

So the game is worth the candle, and even in such a configuration I would like the containers to keep their networking on venet interfaces.

Solution


As mentioned above, the problem with venet interfaces is that all traffic arrives on them mixed together, and on the way out the host has to tell the source addresses apart and pick a different route for each. Administrators solve this in various ways: quietly pushing extra routing rules inside the container, or changing the template the vz scripts use when adding addresses to a container. I believe the network configuration should stay outside the container so that migration causes no problems.

Choosing the right route on the host node

Unless stated otherwise, everything below happens in /etc/sysconfig/network-scripts.
Internal interface:
[root@pve1 network-scripts]# cat ifcfg-vmbr0
DEVICE="vmbr0"
BOOTPROTO="static"
IPV6INIT="no"
DOMAIN="int"
DNS1="192.168.1.15"
DNS2="192.168.1.17"
GATEWAY="192.168.1.3"
IPADDR="192.168.1.142"
NETMASK="255.255.255.0"
ONBOOT="yes"
TYPE="Bridge"

External interface:
[root@pve1 network-scripts]# cat ifcfg-vmbr1 
DEVICE="vmbr1"
BOOTPROTO="static"
IPADDR=200.200.100.6
NETMASK=255.255.254.0
IPV6INIT="no"
TYPE="Bridge"

We define two additional routing tables: one for packets emitted by the containers into the internal network and one for packets going to the external network. The last two lines in the file below assign names to two arbitrarily chosen free table numbers:
[root@pve1 network-scripts]# cat /etc/iproute2/rt_tables 
#
# reserved values
#
255     local
254     main
253     default
0       unspec
#
# local
#
#1      inr.ruhep
200     external
210     internal

Define the contents of these tables:
[root@pve1 network-scripts]# cat route-vmbr0
192.168.1.0/24 dev vmbr0 table internal
default via 192.168.1.3 table internal
[root@pve1 network-scripts]# cat route-vmbr1
200.200.100.0/23 dev vmbr1 table external
default via 200.200.100.30 table external
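These route-* files are picked up by the standard ifup scripts when the corresponding bridge comes up; a quick way to check that both tables were actually populated (output is indicative):
[root@pve1 network-scripts]# ip route show table internal
192.168.1.0/24 dev vmbr0  scope link 
default via 192.168.1.3 dev vmbr0 
[root@pve1 network-scripts]# ip route show table external
200.200.100.0/23 dev vmbr1  scope link 
default via 200.200.100.30 dev vmbr1 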

From both the internal and the external addresses, both directly connected subnets and the Internet are reachable, with the internal network going out to the Internet through a NAT gateway (192.168.1.3). Just what we need.
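The NAT gateway itself (192.168.1.3) is outside the scope of this post; just to complete the picture, masquerading the gray network on such a gateway usually comes down to a single rule of this kind (the outgoing interface name is an assumption):
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE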
Now we need to tell the host when to apply which routing rules. The files themselves are hard to read, so I add explanations below.
[root@pve1 network-scripts]# cat rule-vmbr0
from 192.168.1.0/24 iif venet0 lookup internal
from 192.168.1.0/24 to 192.168.1.0/24 iif venet0 lookup main

[root@pve1 network-scripts]# cat rule-vmbr1
from 200.200.100.0/23 iif venet0 lookup external
from 200.200.100.0/23 to 200.200.100.0/23 iif venet0 lookup main

These PBR rules are added bottom-up, so the second line of the file ends up above the first in priority, and the rules are tied to an interface only in the sense that they are installed when that interface comes up. The resulting rule table:
[root@pve1 network-scripts]# ip ru li
0:      from all lookup local 
32762:  from 200.200.100.0/23 to 200.200.100.0/23 iif venet0 lookup main 
32763:  from 200.200.100.0/23 iif venet0 lookup external 
32764:  from 192.168.1.0/24 to 192.168.1.0/24 iif venet0 lookup main 
32765:  from 192.168.1.0/24 iif venet0 lookup internal 
32766:  from all lookup main 
32767:  from all lookup default

Here you can see that everything received through venet from the external network is routed according to the rules for the external network (the external table). However, one or more addresses from the external network 200.200.100.0/23 can live on a neighboring container on the same machine, and then we must talk to it not through the physical interface but through the virtual one. Therefore, for the case of sending from 200.200.100.0/23 to 200.200.100.0/23, I fall back to the main routing table, where OpenVZ adds the corresponding /32 routes, and which also has a route for 200.200.100.0/23 through the physical interface for everything else.
Similarly for the internal network.
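For illustration, the relevant part of the main table on the host might look roughly like this once a container holding an external address is running (here, container 138 from the example further below; the output is indicative):
[root@pve1 network-scripts]# ip route show table main | grep 200.200
200.200.100.12 dev venet0  scope link 
200.200.100.0/23 dev vmbr1  proto kernel  scope link  src 200.200.100.6 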

Now our host node understands that packets from the gray network must not be pushed straight out onto the Internet (they go via the NAT gateway instead), and it gets all the similar cases right as well.

Telling the container how to choose its source address

Everything is simple here: on venet you can assign a container not only /32 addresses but also addresses with an explicit subnet mask. This hints to the kernel that addresses from that block are directly reachable, and that when sending to them the matching src address should be preferred:
[root@pve1 network-scripts]# fgrep IP /etc/vz/conf/138.conf
IP_ADDRESS="200.200.100.12/23 192.168.1.100/24"

[root@pve1 network-scripts]# vzctl exec 138 ip r
192.168.1.0/24 dev venet0  proto kernel  scope link  src 192.168.1.100 
200.200.100.0/23 dev venet0  proto kernel  scope link  src 200.200.100.12 
default dev venet0  scope link

For all other destinations, the first address in the list is selected as the source. So if I swap the addresses in the container config, the container will prefer the internal IP and the host will send its traffic to the Internet through the NAT gateway.
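A convenient way to double-check which source address the container will actually pick for a given destination is ip route get (output is indicative):
[root@pve1 network-scripts]# vzctl exec 138 ip route get 8.8.8.8
8.8.8.8 dev venet0  src 200.200.100.12 
    cache 
[root@pve1 network-scripts]# vzctl exec 138 ip route get 192.168.1.15
192.168.1.15 dev venet0  src 192.168.1.100 
    cache 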

Conclusion


In my opinion, venet is one of OpenVZ's strong points and it is worth using whenever possible. The solution above lets a container use its network addresses without knowing anything about the host's network configuration.
Beyond its main purpose, I also hope the post serves someone as an illustration of policy-based routing in Linux.
