A torrent client on a Linux gateway: myth or reality?

Almost by accident I ended up with a Mini-ITX motherboard based on an Intel Atom, and a thought immediately flashed through my head: “A quiet home server!..” (it runs off a laptop-style power brick, and the soldered-on CPU is passively cooled). No sooner said than done!

I bought a case, memory, one 2.5" HDD (planning to get an identical one for a mirror in a couple of months), as well as a second network card for sharing the Internet with the local network, and soon my almost-server was running a brand new (at the time) Debian Squeeze.

I started thinking about what I wanted on this server, and since I belong to the category of people who try to squeeze out everything and then a bit more, I decided to run web, mail and Jabber servers on it (I have a static public IP), plus, as the “bit more”, set up a torrent box downloading 24/7.

Everything would be fine if it were not for one BUT: a torrent client on the gateway. It sounds more than a little crazy, because the torrent traffic heading to a client on the gateway will clog the entire Internet channel, leaving the local network devices out in the cold, right? But do not rush to conclusions: this nonsense has a right to exist. Moreover, a working solution was found that let the torrent box coexist peacefully with the other services on the same server and not interfere with the local network clients. But first things first…

1. A bit of theory


I have repeatedly come across the statement that “it makes no sense to shape incoming traffic, because it has already arrived, which means it has already taken up part or all of your channel from the upstream sender (for example, from the provider).” This statement is true, but only partially, because it needs clarification:

In the context of TCP/IP

TCP connections use “slow start”. This means that packets between hosts will not start flying at maximum speed right away. Instead, the rate will increase or decrease gradually, based on acknowledgements and detected losses. In other words, the sending rate will “float”, adapting to the bandwidth of the channel between the hosts.

Given this behavior, we can artificially slow down the packets of TCP connections somewhere between the hosts, thereby forcing the endpoints to reduce their transmission rate.
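
For example, a minimal sketch of such an artificial slowdown on an intermediate Linux box (the interface name eth1 and the numbers here are purely illustrative, not taken from the setup described below): a simple token-bucket filter on the interface facing the receiver caps the flow, and TCP senders settle down to roughly that rate on their own.

# Cap everything leaving through eth1 at ~2 Mbit/s; TCP senders will adapt to it
tc qdisc add dev eth1 root tbf rate 2mbit burst 32kbit latency 400ms
# Remove the experiment when done
tc qdisc del dev eth1 root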

But with UDP packets things are somewhat sadder, because the sending side will simply “hurl” them at you, taking into account neither losses nor transit time (UDP is UDP even in Africa: best-effort delivery with all the consequences…). And that is fraught with the following: if UDP traffic eats up the entire band guaranteed to you by, say, the provider, packets will simply start being dropped, which ones get dropped is up to the ISP, and the sending side will keep sending you that traffic anyway.

Sad? Yes, but not so much that I abandoned my idea of putting a torrent box on the only computer in my house that runs 24/7, and here is why: a torrent client does not request the entire contents of a distribution at once (“just pour it all into me in one stream”). Instead, it sends requests for individual pieces ranging in size from 16 kilobytes to 4 megabytes. It follows that by delaying the pieces being downloaded, we can reduce how often our side requests further pieces and, as a result, reduce the channel bandwidth consumed by the torrent client.

So, if we can force the sending side to lower its packet rate, then the share of the bandwidth allocated to us by the provider will be used more rationally.

In the context of packets passing through a Linux router

The complete scheme of a packet's passage through the GNU/Linux network stack is as follows. For simplicity, consider a situation where a packet is sent from the vast expanses of the Internet to a device behind a Linux router, and the packet has already left the provider's equipment, i.e. its next stop is your WAN interface. Before falling into the clutches of mighty iptables, the packet passes through the ingress qdisc, and we could already “hang” classes on this qdisc describing their guaranteed and maximum rates, but this qdisc has two significant limitations:





  1. classes added to this qdisc cannot have child classes, which means child classes will not be able to borrow unused bandwidth from a parent;
  2. there is no way to split traffic into per-IP classes for the local network, because the packet has not reached NAT yet.

In short, the ingress qdisc does not suit us at all for full-fledged dynamic shaping.
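
For completeness, this is roughly what the ingress qdisc does let you do: flat policing, where everything above a fixed rate is simply dropped (a hedged sketch; eth0 stands in for the WAN interface and the numbers are examples):

# Attach the ingress qdisc to the WAN interface
tc qdisc add dev eth0 handle ffff: ingress
# Police all IP traffic: anything above 50 Mbit/s is dropped; no classes, no borrowing
tc filter add dev eth0 parent ffff: protocol ip prio 1 u32 match u32 0 0 \
    police rate 50mbit burst 100k drop flowid :1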

After ingress the packet goes to iptables, where filtering / NAT / marking / … take place, and after that the packet is handed to the egress qdisc of the LAN interface through which it must leave the router into the local network.

I would like to draw attention to the fact that:
  • along the packet's entire path inside the Linux router there is only one ingress qdisc and one egress qdisc: ingress on the interface the packet arrives on, and egress on the interface that sets it free;
  • packets destined for the router itself will never reach an egress qdisc (without tricks involving an ifb device).

Now the statement about shaping incoming traffic can be reformulated: in GNU/Linux, incoming traffic is correctly shaped on the outgoing interface of the device that sits in front of the channel consumers, provided that this device does not consume traffic for its own needs (if it does, that will have to be taken into account when building the shaper classes).
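
In its simplest form this principle looks like the sketch below (a minimal illustration only; eth1 plays the role of the LAN interface and the numbers are examples; the full variant actually used here, built on an ifb device, comes in section 3):

# Shape what leaves the router towards the consumers, not what arrives on the WAN
tc qdisc add dev eth1 root handle 1: htb default 20
tc class add dev eth1 parent 1:  classid 1:1  htb rate 95mbit
tc class add dev eth1 parent 1:1 classid 1:10 htb rate 50mbit ceil 95mbit prio 1
tc class add dev eth1 parent 1:1 classid 1:20 htb rate 45mbit ceil 95mbit prio 2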

2. Working around the limitations


As we have already found out, it is more correct to shape traffic on an egress qdisc, i.e. on the interface that is outgoing with respect to the packet's direction through the router. Which means that if we run the torrent client on the router itself, the packets destined for that client will never reach any egress qdisc, and we will not be able to “delay” them!

Realizing that running a torrent client on a router that distributes the Internet to a local network, without strictly limiting the bandwidth it consumes, is not feng shui, I started thinking about how to shape this traffic exactly as if it were going to the local network, i.e. so that it would first pass through an “outgoing” interface yet still end up in the torrent client itself, and so that all of this would happen inside one and the same box.

At first I looked towards virtualization, but the problem was aggravated by the fact that the hardware was too weak to run a paravirtualized machine and was not capable of hardware virtualization at all. Only after long wanderings across the endless expanses of the Web was a solution found that fit the bill in every respect: OpenVZ.

Having installed an OpenVZ kernel, besides container isolation we get a venet interface on the host system operating at L3 (veth devices can be used for L2), and all IP traffic flying into the containers passes through venet first, i.e. we get an outgoing interface. Just what the doctor ordered!
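
Setting up such a container is a matter of a few vzctl commands (a rough sketch; the container ID, OS template name and IP address below are illustrative, not taken from my setup):

# Create a container for the torrent client from a Debian template
vzctl create 101 --ostemplate debian-6.0-x86
# Give it a hostname, an address in the venet subnet and a resolver
vzctl set 101 --hostname bittorrent --ipadd 192.168.254.10 --nameserver 8.8.8.8 --save
# Start it
vzctl start 101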

Having deployed VZ containers on the host and spread the torrent client and the other services across them, I ended up with the following scheme:
Openvz


WAN - physical interface facing the outside world
venet - virtual L3 interface through which all container traffic flows (subnet 192.168.254.0/24)
LAN - physical interface facing the local network (subnet 192.168.0.0/24)

Almost there, except that only one interface can take part in dynamic traffic shaping. Sensing that questions may arise at this point, I will try to explain in more detail. My channel to the world is 100 Mbit/s, and quite predictably I wanted to use as much of it as possible for my purposes, selfish and otherwise, as follows:


  1. if in a certain period of time something is pulled from outside by a torrent client (BitTorrent container) and only to them, then the entire available band needs to be given to him
  2. if while downloading a torrent client something from the local network or subnet of containers starts downloading / watching via http, then torrent traffic needs to be clamped to, say, 32Kb / s, and the rest of the width should be given under http
  3. most importantly, all devices should be able to use the entire channel width regardless of which subnet they are in: behind the LAN interface or behind the VENET interface

Sounds good, doesn't it? And if you consider that the HTTP-traffic example in point 2 is only a small fraction of the requirements that can be satisfied once the idyll of “torrents are downloading, pings are low, YouTube plays” is reached, the desire to implement all this grows even stronger.

I started thinking how to get around the problem posed by point 3, because borrowing unused bandwidth across different interfaces is impossible… Well, since we cannot achieve dynamic shaping on two interfaces, we will shape on an ifb device, after redirecting all traffic from the egress qdiscs of LAN and venet onto it.

3. Down to business!


Before looking under the hood, I would like to say a few words about the traffic classification principle I chose:
  • the priority of a packet is determined by its protocol and port number;
  • unclassified traffic must not be allowed to “accelerate” to the maximum, because then (in the case of torrent traffic) it is extremely reluctant to give the borrowed bandwidth back to a higher-priority class. I.e. I left a gap for more important packets to squeeze through.

Well, now let's get started.

Define the parameters, clear the qdiscs and load the ifb device
#!/bin/bash
IPT="/sbin/iptables"
TC="/sbin/tc"
IP="/bin/ip"
DEV_LAN="eth1" # Интерфейс, смотрящий в локальную сеть
DEV_VENET="venet0" # Интерфейс, смотрящий в контейнеры
DEV_IFB_LAN="ifb0" # Тут будем шейпить входящий с интернета трафик
CEIL_IN="95mbit" # Максимальная ширина канал минус 5%
CEIL_IN_BULK="90mbit" # Максимальная ширина неклассифицированного трафика
# Очищаем дисциплины интерфейсов
$TC qdisc del dev $DEV_LAN root >/dev/null 2>&1
$TC qdisc del dev $DEV_LAN ingress >/dev/null 2>&1
$TC qdisc del dev $DEV_VENET root >/dev/null 2>&1
$TC qdisc del dev $DEV_VENET ingress >/dev/null 2>&1
# Create the ifb device and bring it up
rmmod ifb >/dev/null 2>&1
modprobe ifb numifbs=1
$IP link set $DEV_IFB_LAN up
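
A quick sanity check (not part of the original script) that ifb0 actually came up before we hang anything on it:

ip link show ifb0
tc qdisc show dev ifb0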


Create the qdiscs, classes and filters
# Add the root htb qdisc to the ifb device
$TC qdisc add dev $DEV_IFB_LAN root handle 1: htb default 80
# Attach the root class to this qdisc, specifying the guaranteed bandwidth
$TC class add dev $DEV_IFB_LAN parent 1: classid 1:1 htb rate $CEIL_IN
    # Child classes that will share the parent's bandwidth among themselves
    $TC class add dev $DEV_IFB_LAN parent 1:1 classid 1:10 htb rate 5mbit  ceil 5mbit          prio 0
    $TC class add dev $DEV_IFB_LAN parent 1:1 classid 1:20 htb rate 10mbit ceil $CEIL_IN       prio 1
    $TC class add dev $DEV_IFB_LAN parent 1:1 classid 1:30 htb rate 10mbit ceil $CEIL_IN       prio 2
    $TC class add dev $DEV_IFB_LAN parent 1:1 classid 1:40 htb rate 10mbit ceil $CEIL_IN       prio 3
    $TC class add dev $DEV_IFB_LAN parent 1:1 classid 1:50 htb rate 10mbit ceil $CEIL_IN       prio 4
    $TC class add dev $DEV_IFB_LAN parent 1:1 classid 1:80 htb rate 50kbit ceil $CEIL_IN_BULK  prio 7
        # Qdiscs of the child classes
        $TC qdisc add dev $DEV_IFB_LAN parent 1:10 handle 10: sfq perturb 10
        $TC qdisc add dev $DEV_IFB_LAN parent 1:20 handle 20: sfq perturb 10
        $TC qdisc add dev $DEV_IFB_LAN parent 1:30 handle 30: sfq perturb 10
        $TC qdisc add dev $DEV_IFB_LAN parent 1:40 handle 40: sfq perturb 10
        $TC qdisc add dev $DEV_IFB_LAN parent 1:50 handle 50: sfq perturb 10
        $TC qdisc add dev $DEV_IFB_LAN parent 1:80 handle 80: sfq perturb 10
            # Share each qdisc's bandwidth between destination IPs rather than between connections
            $TC filter add dev $DEV_IFB_LAN parent 10: protocol ip handle 110 flow hash keys dst divisor 512
            $TC filter add dev $DEV_IFB_LAN parent 20: protocol ip handle 120 flow hash keys dst divisor 512
            $TC filter add dev $DEV_IFB_LAN parent 30: protocol ip handle 130 flow hash keys dst divisor 512
            $TC filter add dev $DEV_IFB_LAN parent 40: protocol ip handle 140 flow hash keys dst divisor 512
            $TC filter add dev $DEV_IFB_LAN parent 50: protocol ip handle 150 flow hash keys dst divisor 512
            $TC filter add dev $DEV_IFB_LAN parent 80: protocol ip handle 180 flow hash keys dst divisor 512
# Distribute traffic into classes
TC_A_F="$TC filter add dev $DEV_IFB_LAN parent 1:" # Shorthand to keep the lines from wrapping
$TC_A_F  prio 10 protocol ip u32 match ip protocol 6 0xff \
    match u8 0x05 0x0f at 0 \
    match u16 0x0000 0xffc0 at 2 \
    match u8 0x10 0xff at 33 \
    flowid 1:10                                                                                 # ack < 64b
$TC_A_F prio 1 protocol ip u32 match ip protocol 1                           0xff   flowid 1:10 # icmp
$TC_A_F prio 1 protocol ip u32 match ip protocol 6  0xff match ip sport 53   0xffff flowid 1:10 # dns
$TC_A_F prio 1 protocol ip u32 match ip protocol 17 0xff match ip sport 53   0xffff flowid 1:10 # dns
$TC_A_F prio 2 protocol ip u32 match ip protocol 17 0xff match ip tos 0x68   0xff   flowid 1:20 # voip
$TC_A_F prio 2 protocol ip u32 match ip protocol 17 0xff match ip tos 0xb8   0xff   flowid 1:20 # voip
$TC_A_F prio 2 protocol ip u32 match ip protocol 6  0xff match ip sport 8000 0xffff flowid 1:20 # icecast
$TC_A_F prio 3 protocol ip u32 match ip protocol 6  0xff match ip sport 22   0xffff flowid 1:30 # ssh
$TC_A_F prio 3 protocol ip u32 match ip protocol 6  0xff match ip sport 3389 0xffff flowid 1:30 # rdp
$TC_A_F prio 3 protocol ip u32 match ip protocol 6  0xff match ip sport 5222 0xffff flowid 1:30 # jabber c2s
$TC_A_F prio 3 protocol ip u32 match ip protocol 6  0xff match ip sport 5223 0xffff flowid 1:30 # jabber c2s
$TC_A_F prio 3 protocol ip u32 match ip protocol 6  0xff match ip sport 5269 0xffff flowid 1:30 # jabber s2s
$TC_A_F prio 4 protocol ip u32 match ip protocol 6  0xff match ip sport 80   0xffff flowid 1:40 # http
$TC_A_F prio 4 protocol ip u32 match ip protocol 6  0xff match ip sport 443  0xffff flowid 1:40 # https
$TC_A_F prio 4 protocol ip u32 match ip protocol 6  0xff match ip sport 143  0xffff flowid 1:40 # imap
$TC_A_F prio 4 protocol ip u32 match ip protocol 6  0xff match ip sport 993  0xffff flowid 1:40 # imaps
$TC_A_F prio 4 protocol ip u32 match ip protocol 6  0xff match ip sport 25   0xffff flowid 1:40 # smtp
$TC_A_F prio 4 protocol ip u32 match ip protocol 6  0xff match ip sport 465  0xffff flowid 1:40 # smtps
$TC_A_F prio 4 protocol ip u32 match ip protocol 6  0xff match ip sport 587  0xffff flowid 1:40 # submission
$TC_A_F prio 4 protocol ip u32 match ip protocol 6  0xff match ip sport 21   0xffff flowid 1:40 # ftp
$TC_A_F prio 5 protocol ip u32 match ip protocol 6  0xff match ip sport 20   0xffff flowid 1:50 # ftp


And the final chord: redirect traffic from the egress qdiscs of the venet0 and eth1 interfaces to the ifb device
   #  DEV_LAN => DEV_IFB_LAN (ifb0)
   $TC qdisc add dev $DEV_LAN root handle 1: prio
    # let these bypass the shaper
    $TC filter add dev $DEV_LAN parent 1: prio 1 protocol ip u32 match ip src 192.168.8.253/32 action pass # from the gateway's IP into the local network
    $TC filter add dev $DEV_LAN parent 1: prio 1 protocol ip u32 match ip src 192.168.254.0/24 action pass # from the VZ containers into the local network
    # everything else going out through DEV_LAN is redirected to the egress of DEV_IFB_LAN (ifb0)
    $TC filter add dev $DEV_LAN parent 1: prio 2 protocol ip u32 match u32 0 0 action mirred egress redirect dev $DEV_IFB_LAN
#  DEV_VENET -> DEV_IFB_LAN (ifb0)
$TC qdisc add dev $DEV_VENET root handle 1: prio
    # let these bypass the shaper
    $TC filter add dev $DEV_VENET parent 1: prio 1 protocol ip u32 match ip src 192.168.8.0/24 action pass # from the local network into the VZ containers
    # everything else going out through DEV_VENET is redirected to the egress of DEV_IFB_LAN (ifb0)
    $TC filter add dev $DEV_VENET parent 1: prio 2 protocol ip u32 match u32 0 0 action mirred egress redirect dev $DEV_IFB_LAN
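
Once everything is in place, a couple of read-only commands are enough to verify that packets land in the intended classes and that the redirect filters actually match (the counters should grow while traffic flows):

# Per-class statistics on the ifb device
tc -s class show dev ifb0
# Redirect filters on the real interfaces
tc -s filter show dev eth1
tc -s filter show dev venet0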



That's all. I would like to believe that the solution presented here will help someone, and that someone will push me to improve it.
