Constant generation of alternative TLS versions will solve the “ossification” problem of the old protocol



    Work on the new version of the TLS protocol, TLS 1.3, is almost complete. After four years of discussion, in March 2018 the IETF approved the 28th draft as the proposed standard, so it should be the last draft before the final specification is adopted.

    TLS 1.3 noticeably speeds up the establishment of a secure connection by combining several steps of the handshake. In addition, it implements perfect forward secrecy via ephemeral (EC)DH keys. This mode guarantees the protection of session keys even if long-term keys are compromised.
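    For illustration, here is a minimal sketch (not from the article) of opening a TLS 1.3 connection with Python's standard ssl module; it assumes Python 3.7+ built against OpenSSL 1.1.1 or newer, and example.com stands in for any TLS 1.3-capable host:

```python
import socket
import ssl

# Refuse anything older than TLS 1.3 and confirm what was negotiated.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())  # "TLSv1.3" if negotiation succeeded
        print(tls.cipher())   # an AEAD suite, e.g. TLS_AES_128_GCM_SHA256
```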

    Numerous other improvements have been made, including support for the ChaCha20 stream cipher, the Ed25519 and Ed448 digital signature algorithms, and the X25519 and X448 key exchange protocols. Support for the obsolete MD5 and SHA-224 hash functions has been removed, along with weak and rarely used elliptic curves.

    But the most interesting development is a new idea under discussion in the IETF. Experts from Google suggest periodically generating a random new version of the TLS protocol with minor changes. The idea is that frequent updates will act as an antidote to dangerous “ossification”.

    What is this problem?


    The so-called “ossification” is a situation where protocol developers realize that improvements have become difficult to deploy because of the ubiquity of an old version that, for whatever reason, suits users. The old version no longer meets current security requirements or the new specification, but the new one de facto cannot be rolled out.

    Intermediate nodes (middleboxes), and specifically the vendors of “security” products, are believed to have been the main brake on the introduction of TLS 1.3. These vendors advise their clients to configure firewall rules that inspect certificates during the TLS handshake. The new version of the standard encrypts certificates, so the “intermediaries” can no longer inspect them.

    How will constant updates solve the protocol ossification problem?


    A constant update every six weeks is a pattern familiar from the Chrome browser. Under such a regime, implementers are forced to conform to the specification, since an incompatible implementation simply will not work for a large proportion of users.

    On the IETF mailing list, this idea was proposed by David Benjamin of the Chromium project. He writes that TLS 1.3 is an extensible protocol, backward compatible with TLS 1.2, so it can be rolled out gradually by updating existing TLS 1.2 implementations. This is where the problems start: numerous incompatible servers fail at the TLS 1.3 ClientHello stage, forcing the client to fall back and establish the connection using a protocol version the server supports.
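    The fallback pattern looks roughly like the following sketch (a hypothetical helper, not how any browser actually implements it); note that this is exactly the insecure dance the proposal wants to make unnecessary, since it lets an active attacker force a downgrade simply by breaking the first handshake:

```python
import socket
import ssl

def connect_with_fallback(host: str, port: int = 443):
    """Hypothetical sketch: try TLS 1.3 first; if the handshake fails
    (e.g. an intolerant server or middlebox chokes on the ClientHello),
    retry with the maximum version capped at TLS 1.2."""
    for max_version in (ssl.TLSVersion.TLSv1_3, ssl.TLSVersion.TLSv1_2):
        ctx = ssl.create_default_context()
        ctx.maximum_version = max_version
        sock = socket.create_connection((host, port), timeout=5)
        try:
            return ctx.wrap_socket(sock, server_hostname=host)
        except ssl.SSLError:
            sock.close()  # handshake rejected; retry with an older version
    raise ConnectionError("no mutually supported TLS version")
```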

    David Benjamin puts forward an idea for how to avoid this problem in the future. Discussing preventive measures, he points to the protocol invariants listed in clause 9.3 of the specification. All endpoints and intermediate nodes must uphold these invariants. In particular, new clients and new servers must not downgrade negotiated parameters to an old version. Likewise, intermediate nodes must not do so when relaying a connection between an updated client and server, and must not interfere with the handshake. Given uneven update rates, updated clients and servers may still negotiate an older protocol version with legacy peers, but only in the correct manner described in clause 9.3.
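    One concrete mechanism backing the “no downgrade” invariant, specified in RFC 8446 section 4.1.3 and mentioned here only as an illustration, is a sentinel that a TLS 1.3 server embeds in the last eight bytes of ServerHello.random whenever it is forced to negotiate an older version. A sketch of the client-side check:

```python
# Downgrade-protection sentinels from RFC 8446, section 4.1.3:
# "DOWNGRD" followed by 0x01 (TLS 1.2) or 0x00 (TLS 1.1 and below).
DOWNGRADE_TLS12 = bytes.fromhex("444f574e47524401")
DOWNGRADE_TLS11_OR_BELOW = bytes.fromhex("444f574e47524400")

def check_downgrade(server_random: bytes, negotiated_version: int) -> None:
    """server_random: 32-byte ServerHello.random; negotiated_version:
    e.g. 0x0303 for TLS 1.2. Raises if a downgrade sentinel is present."""
    sentinel = server_random[-8:]
    if negotiated_version < 0x0304 and sentinel in (
        DOWNGRADE_TLS12, DOWNGRADE_TLS11_OR_BELOW
    ):
        raise ValueError("illegal_parameter: downgrade detected")

# Example: a ServerHello.random whose tail carries the TLS 1.2 sentinel.
rnd = bytes(24) + DOWNGRADE_TLS12
try:
    check_downgrade(rnd, 0x0303)
except ValueError as e:
    print(e)  # "illegal_parameter: downgrade detected"
```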

    But practice shows that it is not enough to describe the required behavior in a specification. How do you force intermediate nodes to follow the key ClientHello processing rule, namely to ignore unrecognized parameters? The GREASE method is proposed for this.

    GREASE: an antidote to “ossification”


    GREASE (Generate Random Extensions And Sustain Extensibility) is a method of injecting random extensions into TLS, combined with the constant release of new protocol versions. David Benjamin suggests establishing a standard six-week release cycle that coincides with the release of new Chrome versions.
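    The GREASE draft (later published as RFC 8701) reserves sixteen code points of the form 0x?A?A (0x0A0A, 0x1A1A, ..., 0xFAFA) across the TLS registries for versions, cipher suites, extensions, and named groups; a client sprinkles a random one into each handshake, and a compliant peer must simply ignore it. A small sketch of generating them:

```python
import random

# The sixteen reserved GREASE code points: 0x0A0A, 0x1A1A, ..., 0xFAFA.
# Multiplying a byte b by 0x0101 duplicates it into both halves (b << 8 | b).
GREASE_VALUES = [((i << 4) | 0x0A) * 0x0101 for i in range(16)]

def pick_grease() -> int:
    """Pick a random reserved GREASE value to advertise in a handshake."""
    return random.choice(GREASE_VALUES)

assert all(v & 0x0F0F == 0x0A0A for v in GREASE_VALUES)
print(f"0x{pick_grease():04X}")  # e.g. 0x5A5A
```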

    Releasing such “garbage” values in large numbers will force servers to observe the key ClientHello processing rule: ignore unrecognized parameters. The same pressure extends to intermediate nodes. So that they do not interfere with communication between client and server, the key rule for middleboxes is this: if you did not send a parameter in the ClientHello, you have no right to process it in the response. Accordingly, the ecosystem should also be filled with a large number of such responses.
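    The tolerant behavior being enforced looks roughly like this sketch (the handler table and the flattened extension-list format are illustrative assumptions, not a real TLS parser):

```python
# Walk the extensions in a ClientHello and skip anything unrecognized,
# instead of aborting the handshake.
KNOWN_HANDLERS = {
    0x0000: "server_name",
    0x002B: "supported_versions",
    0x0033: "key_share",
}

def process_client_hello_extensions(extensions):
    """extensions: list of (type_code, body_bytes) pairs."""
    for ext_type, body in extensions:
        handler = KNOWN_HANDLERS.get(ext_type)
        if handler is None:
            continue  # the key rule: unknown parameters are ignored, never fatal
        print(f"handling {handler}: {len(body)} bytes")

# A GREASE extension (e.g. 0x3A3A) falls through the `continue` branch:
process_client_hello_extensions([(0x3A3A, b""), (0x002B, b"\x02\x03\x04")])
```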

    “In short, we plan to regularly mint new versions of TLS (and probably other sensitive parameters, such as extensions), approximately every six weeks, in accordance with the Chrome release schedule,” said Benjamin, apparently expressing Google’s point of view. “Then Chrome, Google’s servers, and anyone else who wants to participate will support two (or more) versions of TLS 1.3: the standard stable 0x0304 and a temporary alternative version. Every six weeks, we randomly select a new code point. In all other respects, these versions are identical to TLS 1.3, except perhaps for minor details of key separation and the exercise of allowed syntactic changes. The goal is to pave the way for future versions of TLS by simulating them (draft negative one).”

    Such a scheme carries certain risks, including code point collisions. In addition, precautions need to be developed, because the task is to maintain TLS extensibility, not to hinder the development of the protocol. The precautionary measures include:

    • Detailed documentation of all code points (the version field is 16 bits, giving 65,536 values; if one is consumed every month and a half, they will last more than 7,000 years).
    • Proactive collision avoidance: rejecting values that the IETF might assign (see the sketch after this list).
    • BoringSSL will not enable this option by default. It will only be enabled where it can be disabled again: on servers and in the browser. In practice, only the few most recent code points are likely to be in use at any one time. Since they change rapidly, the ecosystem should not become attached to any such temporary version, and implementations will remain compatible with one another.
    • If for some reason the method does not work, the experiment can be stopped at any time.
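    A hypothetical sketch of how such a rolling code point might be chosen while respecting these precautions (the reserved sets below are illustrative assumptions, not an actual allocation policy):

```python
import random

# Values we must never mint: the reserved GREASE points, plus anything the
# IETF has assigned or is likely to assign (the whole 0x03xx block here,
# as a deliberately conservative, illustrative choice: it covers
# SSL 3.0 = 0x0300 through TLS 1.3 = 0x0304 and room for successors).
GREASE_VALUES = {((i << 4) | 0x0A) * 0x0101 for i in range(16)}
IETF_RESERVED = set(range(0x0300, 0x0400))

def next_experimental_version(already_used: set) -> int:
    """Pick a fresh 16-bit version code point outside the reserved sets.
    `already_used` is the documented history of past picks, which
    prevents reuse."""
    while True:
        candidate = random.randrange(0x10000)
        if candidate in GREASE_VALUES or candidate in IETF_RESERVED:
            continue
        if candidate in already_used:
            continue
        return candidate

print(f"0x{next_experimental_version(set()):04X}")
```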

    The idea is interesting, and if Google starts acting on it, it could indeed save the ecosystem from dangerous “ossification” at the hands of security vendors and other specialized corporate systems. A Cloudflare representative supported the proposal: in any case, he said, we need to do at least something so that in the future we avoid the problems we faced when trying to deploy TLS 1.3. Another member of the IETF working group called it a “fantastic idea” and suggested that Google create a wiki page describing the code points it uses or plans to use.



