Why constant traffic filtering is a must


A scheme for filtering encrypted traffic without disclosing the encryption keys.

We often hear in discussions that neutralizing distributed denial-of-service (DDoS) attacks with constant traffic filtering is less efficient and more expensive than filtering on demand.

The arguments in these debates barely change over time: the high cost of constant filtering versus the delay needed to bring a specialist or a piece of equipment into the mitigation process on demand.

At Qrator Labs we would like to clarify our position by putting forward a few arguments about how constant filtering differs from on-demand filtering and why the former is, in practice, the only workable option.

One of the key reasons is that modern attacks develop very quickly: they evolve and grow more complex in real time. The protected service evolves too, since the site and the application keep changing, so the "normal" user behavior observed during a previous attack may no longer be representative.

With manual filtering, the mitigation provider's engineers usually need time just to understand what is happening before they can work out the right strategy and the sequence of concrete actions. On top of that, they need to know exactly when and how the attack vector changes in order to neutralize it effectively at the client's request.

Connecting to the mitigation service while already under attack is a difficulty of its own, mainly because availability is already degraded for everyone trying to reach the service. If the attack succeeds and users do not get the requested resource, they try again by simply refreshing the page or restarting the application. This makes the situation worse, because it becomes harder to tell junk traffic from legitimate traffic.

Where the mitigation service is deployed, in the cloud, physically on the client's site, or in a partner's data center, is often treated as a key implementation requirement, since any of these placements allows continuous filtering, automatic or manual, and therefore detection and mitigation of attacks. The real requirement, however, is that the filtering be automatic.

Cloud mitigation services most often filter all incoming traffic, so all of it is available for analysis. Physical equipment installed at the network edge, or fed a mirrored copy of the traffic, provides almost the same capability to monitor and neutralize attacks in real time.

Some vendors recommend using NetFlow or other derived metrics for traffic analysis, which is already a compromise: third-party or derived metrics carry only part of the information about the data and so narrow the options for detecting and neutralizing an attack. Conversely, cloud services are not obliged to analyze 100% of incoming traffic, yet most of them do, because this is the best way to build models and train algorithms.

Another drawback of relying on NetFlow as the main analysis tool is that it provides only a description of the data streams, not the streams themselves. You will certainly notice an attack from the parameters NetFlow exposes, but more sophisticated attacks, the kind that can only be detected by inspecting the content of the stream, will remain invisible. As a result, application-layer (L7) attacks are hard to repel using NetFlow alone, except when the attack is blatantly obvious at the transport layer, because above L4 NetFlow is frankly useless.
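To make that gap concrete, here is a minimal Python sketch contrasting what a flow record exposes with what an application-layer detector actually needs. The field names, thresholds and record layout are illustrative assumptions, not the exact NetFlow v5/v9 or IPFIX format.

```python
# Illustrative sketch only: a NetFlow-style record carries flow metadata
# (addresses, ports, counters), never the payload. Field names are
# assumptions, not the real NetFlow v5/v9/IPFIX layout.
from dataclasses import dataclass

@dataclass
class FlowRecord:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: int        # 6 = TCP, 17 = UDP
    packets: int
    byte_count: int
    duration_ms: int

def looks_like_volumetric_attack(flow: FlowRecord, pps_threshold: float = 10_000) -> bool:
    """Counters alone are enough to flag obvious volumetric anomalies (L3/L4)."""
    pps = flow.packets / max(flow.duration_ms / 1000, 1e-3)
    return pps > pps_threshold

def looks_like_http_flood(flow: FlowRecord) -> bool:
    """An L7 flood of well-formed requests at a modest per-flow rate is
    undecidable here: the record has no URLs, headers or request timing."""
    return False  # cannot be determined from flow metadata alone

flow = FlowRecord("203.0.113.7", "198.51.100.10", 54321, 443, 6,
                  packets=120, byte_count=90_000, duration_ms=30_000)
print(looks_like_volumetric_attack(flow))  # False: the rate is unremarkable
print(looks_like_http_flood(flow))         # False: nothing to inspect above L4
```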


A general scheme of connecting to the filtering network.

1. Why do cloud-based DDoS mitigation providers offer "constant filtering" even when no attack is taking place?


The answer is simple: constant filtering is the most effective way to neutralize attacks. It is worth adding that physical equipment hosted by the client is not fundamentally different from cloud filtering; the only difference is that the box is switched on and off physically somewhere in a data center. Either way the same choice remains, to keep the device running at all times or only when needed, and you will have to make it.

In theory, latency degradation due to filtering is indeed possible, especially if the nearest node is geographically far away while the client's traffic and resources are local. But in most cases we see the opposite: overall latency decreases, because TCP and HTTPS handshakes are accelerated by a well-built filtering network with sensible placement of hosts. Hardly anyone would dispute that a properly built mitigation network has a better topology, faster and more reliable, than the network it protects.
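As a rough way to see this effect on your own connection, you can time the TCP handshake to the origin and to the filtering edge and compare the two. The sketch below uses only the Python standard library; the hostnames are placeholders you would replace with your own endpoints.

```python
# Rough sketch: average TCP three-way handshake time to a host, in ms.
# Hostnames are placeholders; compare the origin against the filtering edge.
import socket
import time

def tcp_handshake_ms(host: str, port: int = 443, attempts: int = 5) -> float:
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            samples.append((time.perf_counter() - start) * 1000)
    return sum(samples) / len(samples)

if __name__ == "__main__":
    print(f"edge:   {tcp_handshake_ms('edge.example.net'):.1f} ms")    # hypothetical edge node
    print(f"origin: {tcp_handshake_ms('origin.example.net'):.1f} ms")  # hypothetical origin
```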

Saying that a reverse proxy restricts filtering to the HTTP and HTTPS (SSL) protocols is only half the truth. HTTP traffic is an integral and critical part of complex filtering systems, and reverse proxying is one of the most effective ways to collect and analyze it.
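To show why a reverse proxy is such a convenient collection point, here is a deliberately minimal sketch (not any vendor's actual implementation): it forwards GET requests to a hypothetical upstream and logs exactly the per-request context that an L7 filter would feed into its models.

```python
# Minimal reverse-proxy sketch for illustration only: forward GET requests
# to a placeholder upstream and record per-request metadata on the way.
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "http://127.0.0.1:8080"  # placeholder origin address

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        started = time.perf_counter()
        with urllib.request.urlopen(UPSTREAM + self.path, timeout=10) as resp:
            body = resp.read()
            self.send_response(resp.status)
            for name, value in resp.getheaders():
                if name.lower() not in ("transfer-encoding", "connection"):
                    self.send_header(name, value)
            self.end_headers()
            self.wfile.write(body)
        # The interesting part: source address, path, timing and full headers
        # are all available at this point for analysis or model training.
        elapsed_ms = (time.perf_counter() - started) * 1000
        print(f"{self.client_address[0]} GET {self.path} {elapsed_ms:.1f} ms")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), ProxyHandler).serve_forever()
```

In a real filtering system this is the point where request rates, URL distributions and client behavior are measured and compared against learned baselines.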

2. As we know, distributed denial-of-service attacks take many forms and mutate, moving away from the HTTP protocol. Why is a cloud better in this case than standalone equipment on the client's side?


Overloading an individual node of the filtering network is about as realistic as overloading a box of equipment sitting in a rack. No single piece of hardware is powerful enough to cope with every attack on its own; that takes a complex, multi-component system.

Moreover, even the largest equipment manufacturers recommend switching to cloud filtering for the most serious attacks, because their clouds consist of the same equipment organized into clusters, each of which is by default more powerful than a standalone appliance in a data center. Besides, your box works only for you, while a large filtering network serves tens or hundreds of customers and was designed from the start to process an order of magnitude more data in order to neutralize attacks successfully.

Before an attack it is impossible to say for sure which is easier to take down: standalone customer premises equipment (CPE) or a filtering network node. But consider this: the failure of a single node is always the vendor's problem, while a piece of equipment that refuses to work as advertised after purchase is yours alone.

3. A network node acting as a proxy server must be able to fetch content and data from the protected resource. Does this mean that anyone can bypass the cloud mitigation solution?


If there is no dedicated physical line between you and the security service provider, then yes.

It is true that without a dedicated channel between the client and the mitigation provider, attackers can target the service's original IP address directly. And not every provider offers leased-line connectivity from its network to the client in the first place.

In the general case, switching to cloud filtering means announcing the appropriate prefixes via BGP. The individual IP addresses of the service under attack are then hidden behind those announcements and unreachable for the attacker.
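A toy illustration of the idea, using only documentation address ranges (RFC 5737): once the provider announces the covering prefix, every protected address falls inside it, so traffic reaches the scrubbing network before it can reach the origin.

```python
# Toy illustration: addresses inside the announced prefix are reached via
# the filtering network. All addresses are RFC 5737 documentation ranges.
import ipaddress

announced_prefix = ipaddress.ip_network("198.51.100.0/24")   # announced by the provider via BGP
protected_hosts = [
    ipaddress.ip_address("198.51.100.10"),   # web frontend (placeholder)
    ipaddress.ip_address("198.51.100.25"),   # API endpoint (placeholder)
    ipaddress.ip_address("203.0.113.5"),     # a host left outside the announcement
]

for host in protected_hosts:
    covered = host in announced_prefix
    print(f"{host}: {'behind the filtering network' if covered else 'exposed directly'}")
```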

4. The ratio between the price of the service and what it costs the provider to deliver it is sometimes used as an argument against cloud filtering. How does this compare with equipment hosted on the client's side?


It is safe to say that no matter how small a denial-of-service attack is, the cloud provider has to process it, even though the internal cost of building such networks always assumes that every attack is intense, large, long and smart. On the other hand, this does not mean the provider loses money by selling protection against everything while in practice handling mostly small and medium-sized attacks. Yes, the filtering network may spend somewhat more resources than in an "ideal" state, but when an attack is successfully neutralized nobody asks questions: both the client and the provider are satisfied with the partnership and will most likely continue it.

Now imagine the same situation with on-premises equipment: the one-time cost is orders of magnitude higher, it requires qualified hands to operate, and it will still end up handling mostly small and infrequent attacks. Did you think about that when planning to buy hardware that is not cheap anywhere?

The thesis that a standalone box, together with contracts for installation and technical support and the salaries of highly qualified engineers, will ultimately be cheaper than a suitable cloud plan is simply wrong. The total cost of the equipment and the hours needed to run it is very high, and this is the main reason DDoS protection and mitigation has become a business of its own and formed an industry; otherwise we would see an attack-mitigation unit in every IT company.

Since attacks are rare events, a mitigation solution has to be built around that premise and still neutralize those rare attacks successfully. It also has to be priced sensibly, because everyone understands that most of the time nothing bad is happening.

Cloud providers design and build their networks efficiently in order to pool their risks and cope with attacks by distributing traffic across filtering points, each of which combines equipment and software, two parts of a system built for a single purpose.

This is the law of large numbers, familiar from probability theory. It is the same reason Internet service providers sell more channel capacity than they actually own. Hypothetically, all clients of an insurance company could get into trouble at the same time, but in practice this never happens; and even though an individual payout can be huge, it does not bankrupt the insurer every time someone has an accident.
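A back-of-the-envelope Monte Carlo run makes the same point in numbers; every figure below is an illustrative assumption, not a measurement of any real network.

```python
# Law-of-large-numbers sketch: with many independent customers, simultaneous
# worst cases are rare, so shared capacity can sit far below the sum of
# per-customer peaks. All parameters are made-up assumptions.
import random

CUSTOMERS = 200        # tenants on the shared filtering network
P_ATTACK = 0.02        # chance a given customer is under attack in a time window
PEAK_GBPS = 100        # assumed worst-case attack size per customer
TRIALS = 100_000

sum_of_peaks = CUSTOMERS * PEAK_GBPS
observed = []
for _ in range(TRIALS):
    concurrent = sum(1 for _ in range(CUSTOMERS) if random.random() < P_ATTACK)
    observed.append(concurrent * PEAK_GBPS)

observed.sort()
p999 = observed[int(0.999 * TRIALS)]
print(f"sum of per-customer peaks:               {sum_of_peaks} Gbps")
print(f"99.9th percentile of simultaneous load:  {p999} Gbps")
```

With these assumptions the simulated 99.9th percentile comes out at a small fraction of the 20,000 Gbps worst case, which is exactly the margin a shared filtering network is built to exploit.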

People who neutralize denial-of-service attacks professionally know that the cheapest, and therefore the most common, attacks rely on amplifiers and can hardly be described as "small".
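A quick calculation shows why. The amplification factors below are commonly cited approximations (see, for example, US-CERT alert TA14-017A), not our own measurements.

```python
# Why amplified attacks are never "small": a modest amount of spoofed request
# traffic turns into a far larger flood at the victim. Factors are commonly
# cited approximations, not measurements of our own.
AMPLIFICATION = {
    "DNS": 50,
    "NTP (monlist)": 550,
    "memcached": 10_000,
}

request_traffic_mbps = 100   # what the attacker actually sends (assumption)

for protocol, factor in AMPLIFICATION.items():
    reflected_gbps = request_traffic_mbps * factor / 1000
    print(f"{protocol:>14}: ~{reflected_gbps:,.0f} Gbps arriving at the victim")
```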

At the same time, while a one-time payment keeps the equipment on site forever, attack methods keep evolving. There is no guarantee that yesterday's hardware will cope with tomorrow's attack; that is merely an assumption. So the sizable investment in such equipment starts losing value from the moment of installation, not to mention the constant maintenance and updates it requires.

In DDoS mitigation, what matters is a highly scalable solution with high connectivity, and that is very difficult to achieve by buying a single box of equipment.

When a serious attack starts, standalone equipment will try to signal the cloud that the attack has begun and to spread the traffic across filtering points. But nobody mentions that when the channel is already clogged with garbage, there is no guarantee the appliance can even deliver that message to its cloud. And again, switching the data stream over takes time.

So the only real price, beyond money, that a client pays for protecting their infrastructure from denial-of-service attacks is latency, and nothing else. And as we have said, properly built clouds reduce latency and improve the global availability of the requested resource.

Keep this in mind when choosing between an iron box and a filtering cloud.
