Creating point-to-multipoint tunnels based on GRE encapsulation in Linux 2.6

    Linux has built-in support for two types of tunnels: ipip and gre. For the way tunnels are traditionally used on the system, it does not matter which one you pick: both add exactly the same overhead to packets sent into an IPv4-in-IPv4 tunnel (I checked specifically), both can be protected by IPsec in the same way, and both take the same amount of CPU time to process. However, they are different tunnel types, and gre is considerably more capable.
    Unfortunately, one very convenient and wonderful feature of gre tunnels is described almost nowhere on the Internet, and most (if not all) Linux administrators are unaware that such a thing as mGRE tunnels even exists. Fortunately, we intend to make up for this shortcoming :-)



    So, we have three machines, all running Linux 2.6 (2.4 may also work, but I have not checked). We also need the iproute2 package, the standard on modern Linux systems (by the way, it is time to forget the outdated utilities ifconfig, route and friends). The external IP addresses of the systems are 1.1.1.1, 2.2.2.2 and 3.3.3.3, and there is routing between them.

    For now we can do without encryption and packet authentication: both are easy to add to our example by simply setting up IPsec in transport mode between our hosts. That, however, is a topic for another article ;)
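    Just to sketch what that IPsec step could look like (an assumption on my part, not part of the example: the KAME/ipsec-tools userland with setkey, with key management, e.g. racoon, left out entirely), on 1.1.1.1 one might require ESP for GRE traffic like this:

    ```shell
    # Sketch only: require ESP in transport mode for all GRE (IP protocol 47)
    # traffic between 1.1.1.1 and 2.2.2.2. Analogous policies would be
    # needed for the other host pairs.
    setkey -c <<'EOF'
    spdadd 1.1.1.1 2.2.2.2 gre -P out ipsec esp/transport//require;
    spdadd 2.2.2.2 1.1.1.1 gre -P in  ipsec esp/transport//require;
    EOF
    ```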

    1. Create a GRE tunnel:
    (on all three)
    ip tunnel add mgre0 mode gre key 0xfffffffe ttl 255

    Notice that we did not specify a peer address here. This means that, in principle, the peer can be at any address. The key in this case identifies a specific mGRE network: it is a 32-bit number that must be the same on all nodes.

    2. Assign it an address:
    (on 1.1.1.1)
    ip addr add 10.0.0.1/24 dev mgre0

    (on 2.2.2.2)
    ip addr add 10.0.0.2/24 dev mgre0

    (on 3.3.3.3)
    ip addr add 10.0.0.3/24 dev mgre0
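    One step the listings above leave implicit: a freshly created tunnel interface starts out administratively down, so it also needs to be brought up:

    ```shell
    # (on all three) bring the new interface up:
    ip link set dev mgre0 up
    ```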


    For Ethernet interfaces that would be enough. Ethernet has the Address Resolution Protocol (ARP), which lets systems discover a MAC address on their own, knowing the IP address of the destination host. Ethernet is a Broadcast Multiple Access medium, and ARP amounts to sending a request to all stations on the network (to the MAC address FF:FF:FF:FF:FF:FF): "Hey, which one of you has IP address x.x.x.x?". If a station with that IP address is present, it replies privately: "x.x.x.x is at yy:yy:yy:yy:yy:yy".

    Our network (the Internet) has no such mechanism as ARP, and the role of "layer two" addresses, played by MAC addresses in the Ethernet case, is performed here... by the external IP addresses of the systems. We are working in a Non-Broadcast Multiple Access (NBMA) medium: we cannot shout to the entire Internet, as ARP would: "Hey, who in GRE network 0xfffffffe has the address 10.0.0.2?".

    The Next Hop Resolution Protocol (NHRP, an analogue of ARP for NBMA media) is meant to solve this address-resolution problem, but for a start we will do its work by hand, and figure out along the way how Linux networking operates in general :)

    3. So, let us tell each station manually where to look for its neighbors. To do this, execute the following commands:
    (on 1.1.1.1)
    ip neigh add 10.0.0.2 lladdr 2.2.2.2 dev mgre0
    ip neigh add 10.0.0.3 lladdr 3.3.3.3 dev mgre0

    (on 2.2.2.2)
    ip neigh add 10.0.0.1 lladdr 1.1.1.1 dev mgre0
    ip neigh add 10.0.0.3 lladdr 3.3.3.3 dev mgre0

    (on 3.3.3.3)
    ip neigh add 10.0.0.1 lladdr 1.1.1.1 dev mgre0
    ip neigh add 10.0.0.2 lladdr 2.2.2.2 dev mgre0


    Here each command says: "the neighboring station with network-layer IP address x.x.x.x has the physical address (link-layer address, lladdr) y.y.y.y, reachable through device M". If we were configuring static Ethernet (without ARP), y.y.y.y would instead be the MAC address of the corresponding station. (By the way, if you run ip neigh show dev ethN on a working Ethernet network, you will see the results of ARP: dynamically learned neighbor addresses.)
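    Typing these commands by hand on every station quickly gets tedious. A small sketch of a script (the PEERS list format and variable names are my own invention, not part of the setup above) that prints the needed commands for any one station; pipe its output to sh to apply them:

    ```shell
    #!/bin/sh
    # Each PEERS entry maps an overlay (tunnel) address to the external address.
    PEERS="10.0.0.1/1.1.1.1 10.0.0.2/2.2.2.2 10.0.0.3/3.3.3.3"
    ME="10.0.0.2"   # overlay address of the local station (linux2 here)

    for p in $PEERS; do
        inner=${p%/*}   # tunnel-side IP
        outer=${p#*/}   # external IP (the "lladdr")
        [ "$inner" = "$ME" ] && continue   # skip ourselves
        echo "ip neigh add $inner lladdr $outer dev mgre0"
    done
    ```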

    That's it. From this point our tunnel works: each of the stations can ping any other. And if the kernel is built with support for GRE multicast, we get what amounts to a fully functional "LAN": dynamic routing protocols such as RIP and OSPF will run over our virtual network at full strength!

    This is how it looks from the second station (2.2.2.2):
    linux2# ping 10.0.0.1
    PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
    64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=4.41 ms
    64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.429 ms
    ^C
    --- 10.0.0.1 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 1013ms
    rtt min/avg/max/mdev = 0.429/2.419/4.410/1.991 ms
    linux2# ping 10.0.0.2
    PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
    64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.027 ms
    64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.020 ms
    ^C
    --- 10.0.0.2 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 999ms
    rtt min/avg/max/mdev = 0.020/0.023/0.027/0.006 ms
    linux2# ping 10.0.0.3
    PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
    64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=8.47 ms
    64 bytes from 10.0.0.3: icmp_seq=2 ttl=64 time=0.164 ms
    ^C
    --- 10.0.0.3 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 1018ms
    rtt min/avg/max/mdev = 0.164/4.318/8.472/4.154 ms
    linux2# ip addr show dev mgre0
    5: mgre0@NONE: mtu 1472 qdisc noqueue
        link/gre 0.0.0.0 brd 0.0.0.0
        inet 10.0.0.2/24 brd 10.0.0.255 scope global mgre0
    linux2# ip neigh show dev mgre0
    10.0.0.1 lladdr 1.1.1.1 PERMANENT
    10.0.0.3 lladdr 3.3.3.3 PERMANENT


    Of course, with many stations this approach does not scale: you can hardly register every neighbor on every station by hand! But how to solve this problem should already be fairly clear. More about that later.
