A bit about iptables, iproute2 and network emulation

    Once I needed to monitor packet loss between the master and its replicas in Zabbix (replication does not cope well when the link quality is poor). For this, Zabbix has a built-in icmppingloss item: a series of ICMP packets is sent to the remote host and the result is recorded in the monitoring system. So the item was added and the trigger configured. It would seem the task was done, but, as the saying goes, "trust, but verify." It remained to check that the trigger would actually fire when real losses occurred. So how do you simulate packet loss? That, and a bit more, is discussed below.


    The first thought that came to mind was to use iptables. Indeed, a short search led me to the statistic module; in short, this module matches packets based on a statistical condition.
    The problem was solved with either of the following rules:

    iptables -A INPUT -p icmp -s zabbix_ip -m statistic --mode random --probability 0.1 -j DROP


    iptables -A INPUT -p icmp -s zabbix_ip -m statistic --mode nth --every 10 -j DROP

    As you can see, there are two modes: --mode random --probability 0.1 means that packets are matched at random with a probability of 10%, while --mode nth --every 10 matches (and here drops) every tenth packet. Either way, I got a packet loss of about 10%, the trigger fired, and everything was fine.
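    The counter behavior of the nth mode is easy to sanity-check outside of iptables. Below is a purely illustrative shell sketch (not iptables code) that replays what --mode nth --every 10 does: count matching packets and drop every tenth one.

```shell
# Illustrative only: replay the "nth" counter of the statistic match.
# Out of 100 simulated packets, every 10th one is "dropped".
dropped=0
for i in $(seq 1 100); do
    if [ $((i % 10)) -eq 0 ]; then
        dropped=$((dropped + 1))
    fi
done
echo "dropped $dropped of 100 packets"
```

    This gives a deterministic 10% loss; --mode random instead makes an independent 10% coin flip per packet, so the realized loss only fluctuates around 10%.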

    It would have been possible to stop there, but by chance I learned about the Network Emulator (NetEm) functionality in the kernel's network QoS subsystem. Moreover, NetEm's capabilities are much broader than those of the statistic module:
    • delay - add the specified delay to packets sent on the specified interface;
    • distribution - allows you to specify the statistical distribution of the delay;
    • loss - allows you to specify packet loss (note: the netem keyword is loss, not drop);
    • corrupt - introduces random packet corruption;
    • duplicate - duplicates packets before they are queued;
    • reorder - reorders packets (used in conjunction with delay);
    • limit - limits the effect of the above options to the specified number of packets.

    Now about all this in more detail.
    The network emulator has been supported since the 2.6 kernels and is present in all modern distributions (we are talking about Linux, of course). To check for Network Emulator support, look at the kernel config in /boot (or /proc/config.gz):

    # grep NETEM /boot/config-$(uname -r)

    If the output contains CONFIG_NET_SCH_NETEM=m, support is built as a module, so we load it. We will also need the tc utility from the iproute2 package:

    # modprobe sch_netem

    If grep finds nothing, you need to rebuild the kernel. For anyone familiar with building kernels, a quick tip: the Network emulator lives here:

     Networking -->
       Networking Options -->
         QoS and/or fair queuing -->
            Network emulator

    If you are unfamiliar with building kernels, look for kernel-build articles in your distribution's documentation.

    When everything is ready, you can proceed. I recommend opening a second session and starting a ping to any node on the local network; the effects of the emulation will then be clearly visible.
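    Beyond eyeballing individual replies, the loss figure can be pulled straight out of ping's summary line. The snippet below is a sketch that assumes the iputils-ping summary format (the sample line is made up); adjust the sed pattern if your ping prints a different summary.

```shell
# Extract the packet-loss percentage from a ping summary line.
# The sample line mimics iputils-ping output.
summary='10 packets transmitted, 9 received, 10% packet loss, time 9012ms'
loss=$(echo "$summary" | sed -n 's/.* \([0-9.]*\)% packet loss.*/\1/p')
echo "loss: ${loss}%"
```

    In a live session you would feed the real output instead, e.g. ping -c 10 host | grep 'packet loss'.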

    For the experiments, we need the tc utility from the iproute2 package. The general syntax is as follows:

    tc qdisc ACTION dev DEVICE root netem OPTIONS
           ACTION := [ add | change | delete ]
           LIMIT := limit packets
           DELAY := delay TIME [ JITTER [ CORRELATION ]] [ distribution { uniform | normal | pareto | paretonormal } ]
           LOSS := loss PERCENT [ CORRELATION ]
           CORRUPT := corrupt PERCENT [ CORRELATION ]
           DUPLICATION := duplicate PERCENT [ CORRELATION ]
           REORDER := reorder PERCENT [ CORRELATION ] [ gap DISTANCE ]

    Note that add creates a new qdisc on the interface and fails if one already exists; to adjust an existing qdisc, repeat the command with change instead, and remove it with tc qdisc del dev eth0 root when you are done.

    1) Packet delay.
    Add a delay of 100ms to outgoing packets:
    # tc qdisc add dev eth0 root netem delay 100ms

    Here we also specify jitter: to the existing 100ms delay we add a deviation of ±10ms.
    # tc qdisc add dev eth0 root netem delay 100ms 10ms

    Now we add a correlation, so that the delay of the next packet depends on the delay of the previous one.
    # tc qdisc add dev eth0 root netem delay 100ms 10ms 50%

    2) Delay distribution.
    In the previous examples we got a more or less even spread of delays across the sent packets. In the real world, however, network latency is anything but even. To obtain a more realistic picture, a delay distribution is used (by default, if you do not specify a distribution explicitly, the jitter is distributed uniformly).

    In the example below we specify the pareto distribution; normal and paretonormal are also available. The delay is then drawn from a precomputed distribution table rather than a flat range. You can also create your own distribution tables (iproute2 ships the standard ones as .dist files, typically under /usr/lib/tc). In my opinion this is a rather niche use case, but someone may find it interesting.

    # tc qdisc add dev eth0 root netem delay 100ms 10ms distribution pareto

    3) Packet loss.
    This is where it all started, after all...
    The rule below introduces a packet loss of 20% (netem's option for this is called loss):

    # tc qdisc add dev eth0 root netem loss 20%

    Additionally, you can specify a correlation; the random number generator then becomes less random, and bursts of losses can be observed:

    # tc qdisc add dev eth0 root netem loss 20% 10%
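    To check what a monitoring item such as icmppingloss would report, it is enough to compare transmitted and received counts. The numbers below are hypothetical; with a 20% loss configured, a run of 100 probes will typically lose around 20, give or take.

```shell
# Hypothetical probe counts for a run against a host behind netem.
sent=100
received=78
# Observed loss as an integer percentage.
loss=$(( (sent - received) * 100 / sent ))
echo "observed loss: ${loss}%"
```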

    4) Packet corruption.
    How is intentional packet corruption done? With the specified probability, a wrong bit is written at a random position inside a randomly chosen packet. As a result, the checksum no longer matches and the packet is discarded. As with losses, a correlation can be specified to form bursts.

    # tc qdisc add dev eth0 root netem corrupt 20%

    5) Packet duplication.
    Packet duplication is specified in the same way as loss or corruption, and of course a correlation can be given as well.
    # tc qdisc add dev eth0 root netem duplicate 20%

    6) Packet reordering.
    In the following example, 30% of the packets will be sent immediately, while the rest will be delayed by 100ms:

    # tc qdisc add dev eth0 root netem delay 100ms reorder 30%

    In the example below, the first 4 packets (gap - 1) will be delayed by 100ms; every subsequent fifth packet will be sent immediately with a probability of 25% (with a correlation of 50%) or delayed with a probability of 75%. Once a packet has been reordered, the iteration repeats: the next 4 packets are delayed, and the rest are sent immediately or delayed with the specified probability.

    # tc qdisc add dev eth0 root netem delay 100ms reorder 25% 50% gap 5
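    The gap counter is easier to see in a sketch. The shell loop below is illustrative only (it is not netem internals, and it ignores the counter reset that happens when a packet is actually reordered): within every window of 5 packets, the first 4 are always delayed and only the 5th is a candidate to be sent ahead of its turn.

```shell
# Simplified sketch of "reorder ... gap 5": count packets in windows
# of 5; only every 5th packet is a reorder candidate.
candidates=0
for i in $(seq 1 20); do
    if [ $((i % 5)) -eq 0 ]; then
        candidates=$((candidates + 1))
    fi
done
echo "$candidates of 20 packets were reorder candidates"
```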

    For those too lazy to bother with all this, there is a short demo video.

    That's it. Thank you all for your attention, and happy experimenting.
