P2P - The next stage in the development of information systems

    Let's set aside the bans in various countries and stop thinking of P2P merely as a mechanism for circumventing blocks.

    I offer you an alternative view of P2P: what problems of the present and the future this network architecture can solve.

    What is real P2P?

    Let's introduce the concept of real P2P.

    A true P2P network is a peer-to-peer network in which absolutely all nodes perform the same functions, or can automatically change their set of functions depending on environmental conditions.

    "Changing functions" simply means taking over functions that cannot run on certain nodes of the peer-to-peer network due to restrictions:
    1) Nodes behind NAT
    2) Mobile devices

    Both classes of devices either cannot accept direct connections from the network (NAT), or can but really should not (mobile devices), because a huge number of connections dramatically increases power consumption.

    To resolve this problem, technologies such as TCP relays are used: since most P2P systems use UDP with a huge number of simultaneous connections, the network can elect a node that receives requests from the network over UDP and forwards them to the end device over a single TCP connection. A similar mechanism existed in Skype for a very long time: before the Microsoft acquisition these "supernodes" did exactly this job; later the supernode concept was dropped and Microsoft servers took their place.
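    The relay idea above can be sketched in a few lines. This is a toy illustration, not a real protocol: the framing scheme and all names are invented for the example.

```python
import socket

def frame(datagram: bytes) -> bytes:
    """Length-prefix a UDP datagram so it survives inside a TCP byte stream."""
    return len(datagram).to_bytes(2, "big") + datagram

def run_relay(udp_port: int, peer_host: str, peer_port: int) -> None:
    # The relay node (a peer with a public address) receives datagrams
    # from the swarm over UDP...
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.bind(("0.0.0.0", udp_port))
    # ...and forwards every one of them over a single long-lived TCP
    # connection, so the NAT-ed or mobile peer keeps just one connection.
    tcp = socket.create_connection((peer_host, peer_port))
    while True:
        data, _addr = udp.recvfrom(65535)
        tcp.sendall(frame(data))
```

    The single TCP connection is the whole point: the mobile device pays the power cost of one socket instead of thousands.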

    It is very important not to confuse P2P with mesh networks: P2P is peer-to-peer interaction at OSI layer 3 and above; mesh is at layer 3 and below.

    What problems does a P2P network solve and what technologies will go away with the widespread adoption of P2P?


    Currently, some providers, and almost all mobile operators, cache traffic. This saves resources and uplinks: the same traffic does not have to be pushed through the backbone again and again.

    But why is dedicated caching necessary at all if, with P2P, content that has already entered the operator's network will most likely be served from inside that network on any repeat request?
    Moreover, no new infrastructure needs to be built at all.


    Content delivery networks are mainly used to deliver "heavy" content (music, video, game distribution such as Steam) to reduce the load on the origin servers and to reduce response time: CDN servers are installed in different countries and/or regions and perform load balancing.

    These servers must be maintained: person-hours go into configuring them, and they cannot dynamically increase their throughput. A scenario:
    In Nizhny Novgorod the Giwi.get service has always been popular, letting people watch legal content online; the regional CDN server can serve at most 100,000 users watching movies and shows simultaneously. Suddenly a new series appears on the service which, according to forecasts based on research, should not interest people in this region.

    But it did interest them, and everyone decided to watch it. Naturally the CDN cannot cope; at best the neighboring CDN will pick up the load, but there is no guarantee it is ready for such a load either.

    Lack of communication channels

    Last-mile providers are ready to offer 1 Gbit/s channels, and even the in-city network can carry such a load. The problem is the trunk channel out of the city, which is not designed for that load, and expanding it costs millions (substitute the currency of your choice).

    Naturally, P2P services solve this problem too: as long as there is at least one source of the content inside the city (downloaded earlier over the trunk), everyone gets that content at the full speed of the local (intracity) network.
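    A "local peer first" lookup can be sketched as follows. All names, fields and the peer list format are invented for illustration; real P2P systems use DHTs and richer metadata, but the selection logic is the same: identify content by hash, prefer a holder inside your own network, fall back to the trunk only when no local copy exists.

```python
import hashlib

def content_id(data: bytes) -> str:
    # Content is addressed by its hash, so any copy is as good as the origin.
    return hashlib.sha256(data).hexdigest()

def pick_source(wanted: str, peers: list) -> dict | None:
    """peers: [{"addr": ..., "local": bool, "content": set of hashes}]"""
    holders = [p for p in peers if wanted in p["content"]]
    # Prefer a peer on the local (intracity) network; otherwise any
    # holder reachable over the trunk; otherwise give up.
    local = [p for p in holders if p["local"]]
    return (local or holders or [None])[0]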

    Strengthening internet connectivity

    In the modern world uplinks are everything. Cities have internet exchange points, but a provider would rather buy a couple more gigabits of backbone capacity than expand its channels to the exchange point or peer with neighboring providers.

    Uplink load reduction

    With P2P it is quite logical that wide internal channels become more important to a provider than external ones: why pay for an expensive uplink if the required content can very likely be found on a neighboring provider's network?

    By the way, providers will be happy too: even now they sell tariffs such that their uplink is smaller than the total bandwidth sold to all users.
    In other words, if all users simultaneously used 100% of their tariff, the provider's uplink would be exhausted very quickly.
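    A back-of-the-envelope illustration of this oversubscription (all numbers are invented):

```python
# Hypothetical provider: 10,000 subscribers on 100 Mbit/s tariffs,
# all behind a single 10 Gbit/s uplink.
subscribers = 10_000
tariff_mbps = 100
uplink_mbps = 10_000  # 10 Gbit/s

sold_mbps = subscribers * tariff_mbps       # 1,000,000 Mbit/s sold
oversubscription = sold_mbps / uplink_mbps  # the uplink is oversold 100x
```

    If even 1% of these users saturate their tariff at once, the uplink is full; content served peer-to-peer from inside the network never touches it.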

    Obviously, P2P solutions enable a provider to claim it gives you access to the network at enormous speeds: content on the network is very rarely unique, so a provider that has peering with its neighbors in the city will very likely be able to serve content at the full tariff rate.

    No extra servers on the network

    Today a provider's network usually hosts servers such as Google CDN (/YouTube), Yandex CDN/peering, DPI, plus other region-specific CDN/caching servers.

    Obviously, it becomes possible to eliminate all the CDN servers and the excess peering (with services, not with other providers). DPI will not be needed in this situation either, since there will be no sudden load spikes during the busy hour. Why?

    Busy hour - forget this abbreviation

    The busy hour: traditionally the morning and evening hours. Several distinct peaks are always visible, depending on people's schedules:

    Evening busy-hour peaks:
    1) Schoolchildren returning from school
    2) Students returning from universities
    3) Workers on a 5/2 schedule returning home

    You can see these peaks on any equipment that analyzes per-channel network load.

    P2P solves this problem too: content interesting to schoolchildren is very likely also interesting to students and workers, so by the evening it already exists inside the provider's network, and accordingly there is no busy-hour spike on the trunk.

    Distant future

    We send our devices to the Moon and to Mars, and the ISS has had internet for a long time.

    It is obvious that future technology will allow flights into deep space and long human stays on other planets.

    They should also be connected to a common network. But if we consider the classic client-server model, with the servers on Earth and the clients, say, on Mars, the ping will kill any interaction.
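    How bad is that ping? A back-of-the-envelope estimate using the speed of light and the approximate Earth-Mars distance range:

```python
# One-way light delay between Earth and Mars.
C_KM_S = 299_792            # speed of light, km/s
NEAREST_KM = 54_600_000     # approximate closest approach
FARTHEST_KM = 401_000_000   # approximate farthest separation

min_delay_min = NEAREST_KM / C_KM_S / 60   # ~3 minutes one way
max_delay_min = FARTHEST_KM / C_KM_S / 60  # ~22 minutes one way
# A single request/response round trip therefore takes roughly
# 6 to 45 minutes even in the physically ideal case.
```

    No client-server protocol survives a 45-minute round trip; the only workable model is local interaction with asynchronous replication, which is exactly what P2P provides.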

    And if we assume a growing colony of ours on another planet, its people will use the internet just as we do on Earth, and will clearly need the same tools:
    1) Messengers
    2) Social networks
    And that is the minimum set of services required for exchanging information.

    Logically, the content generated on Mars will be interesting and popular on Mars, not on Earth. But what about social networks?
    Install separate servers that work autonomously and synchronize with Earth from time to time?

    P2P networks solve this problem as well: on Mars a content source has its subscribers, on Earth its own, yet it is the same social network. And if a Martian author has a subscriber on Earth, no problem: as long as there is a channel, the content will fly to the other planet.

    Importantly, there is none of the desynchronization that can happen in traditional systems: no extra servers to install there, nothing to configure. The P2P system itself takes care of keeping content up to date.
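    One way such automatic reconciliation can work is a conflict-free replicated data type (CRDT). The article does not prescribe a mechanism, so this is only a toy sketch: a grow-only set of posts, with all names and data invented.

```python
# Toy sketch of conflict-free synchronization between two replicas.

class PostFeed:
    def __init__(self) -> None:
        self.posts = set()

    def publish(self, post: str) -> None:
        self.posts.add(post)

    def merge(self, other: "PostFeed") -> None:
        # Set union is commutative, associative and idempotent, so the
        # replicas converge no matter how often or in what order they sync.
        self.posts |= other.posts

mars, earth = PostFeed(), PostFeed()
mars.publish("martian food pic")
earth.publish("earth sunset")
# Link restored: both sides exchange state and converge.
mars.merge(earth)
earth.merge(mars)
```

    Deletions and edits need more machinery (tombstones, version vectors), but the principle is the same: merging is a pure function of local state, so no authorization center or master server is required.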

    Channel break

    Let us return to our thought experiment: people live on Mars, people live on Earth, they all exchange content. Then at one point a catastrophe occurs and the link between the planets disappears.

    With traditional client-server systems, we can end up with a completely broken social network or other service.
    Remember that each service has an authorization center. Who will authorize users when the channel is down?
    And Martian teenagers still want to post photos of their Martian food on MarsaGram.

    P2P networks, when the channel breaks, simply switch to offline mode, in which they exist completely autonomously, without any outside interaction.
    And as soon as the connection reappears, all services synchronize automatically.

    But Mars is far away; even here on Earth there can be problems with broken communication channels.

    Remember the recent high-profile Google/Facebook projects to extend internet coverage.
    Some corners of our planet are still not connected to the network; connecting them may be too expensive or economically unjustified.

    If in such a region a local network (an intranet) is built first and only later connected to a very narrow global channel, such as a satellite link, P2P solutions let you use all the services at that initial stage exactly as you will once the networks are joined globally. And later, as we said above, only the content that is actually missing needs to be pumped through the narrow channel.

    Network survival

    If we rely on centralized infrastructure, we have a very specific number of points of failure. Yes, there are backups and standby data centers, but we must understand that if the main data center is damaged by the elements, access to content will slow down significantly, if not stop altogether.

    Recall the situation with Mars: all hardware arrives there from Earth, and one fine day the Uandex or LCQ server breaks down (a RAID controller burns out, or some other failure) and all Martians are again left without MarsiGram, or even worse, cannot exchange simple messages with one another. A new server or its components will arrive from Earth... eventually.

    With a P2P solution - failure of one network member does not affect the network.

    I cannot imagine a future in which our systems remain client-server. That would generate a huge number of unnecessary crutches in the infrastructure, complicate support, add points of failure, prevent scaling exactly when it is needed, and demand enormous effort if we want our solutions to work anywhere beyond our planet.

    So, the future is definitely P2P. Here is how P2P has already changed the world:
    Skype - a small company grew into a huge giant without spending money on servers
    BitTorrent - open-source projects can distribute files without loading their own servers

    These are just two prominent representatives of the information revolution. There are many other programs that will change the world.
