Holy Grail on steroids: total synchronization and isomorphic JavaScript with Swarm.js

    Today on Habr we present a replicated-model technology that lets you build collaborative, real-time web applications as easily as local desktop ones. We believe that when developing an application, real-time data synchronization should be available the way a TCP stream, an HTTP request or current from a wall socket is - immediately and without question. HTML5 applications written with Swarm are not inferior to native ones in autonomy, locality or load speed.
    Using the Swarm library, we get more done over a weekend than we used to in a month without it. More importantly, we can do things we simply could not do before. We offer this synchronization library for free.



    Today we are releasing TodoMVC++, a reactive Holy-Grail-on-steroids application written with Swarm + React. Here is a list of the features it demonstrates:

    • Instant loading - the page is rendered on the server and arrives at the client as compressed HTML; then the code and data are pulled in, and the page comes to life. Isomorphic JavaScript in action.
    • Data caching in WebStorage - both speeds up loading and lets you work offline without losing your changes.
    • Offline operation - the data side is already covered, and if we add a cache manifest, the HTML5 application can load and run with no Internet at all (a minimal manifest sketch follows this list).
    • Real-time synchronization - open several tabs (synchronized via WebStorage) or open the same page on your phone / iPad / another browser (via WebSocket).
    • Data structures that are hard to synchronize (yes, hard-to-syn-chro-nize).
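
    A cache manifest, for reference, is a plain text file listing the assets the browser should keep for fully offline startup. A minimal, generic example (the file names here are illustrative, not taken from the actual app):

```
CACHE MANIFEST
# v1 - assets the browser may serve entirely without a network
/index.html
/app.js
/style.css

NETWORK:
*
```

    The page opts in with <html manifest="app.appcache">; after the first visit, the browser can boot the application with no connection at all.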

    In general, the application is written without any reference to the network, as a simple (local) MVC application. Synchronization and caching happen entirely at the level of the Swarm library, while the application works with local Backbone-like model objects.
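
    To give a feel for what that means in code, here is a hypothetical illustration (the names and calls below are ours, not Swarm's documented API): the application mutates a local object and subscribes to its changes, and whether an update came from this tab, another tab or the network is invisible to the view code.

```js
// Hypothetical sketch, not Swarm's actual API.
const list = host.get('/TodoList#mine');     // local replica of a shared object
list.on('change', () => view.render(list));  // re-render on any update, local or remote
list.set({ title: 'buy milk' });             // applied locally at once,
                                             // synchronized in the background
```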

    So, here is the application itself: ppyr.us.
    Here is the code: github.com/gritzko/todomvc-swarm

    A detailed analysis of the application and the library will come in a follow-up article; what follows now is a long read about replicas, CRDT and the theoretical basis.

    Today, synchronization is in high demand both on the client and on the server. In the server farms of the Internet giants there are ever more storage and data-processing systems that need to be kept in sync; the principle of one large database as the single source of truth is hard to scale. But we are talking primarily about the client. An ordinary user now owns a whole pile of devices, and when the data he sees on his laptop screen does not match the data on his iPhone screen, the user gets upset. Replica synchronization is needed everywhere, and we believe the data synchronization system will soon be as much of a commodity as the database.

    The peculiarity of the current moment is that even the industry leaders have only brought their synchronization solutions to the "sort of works" stage. GDocs does not quite work offline, GTalk and Skype systematically lose pieces of chat history, Evernote is famous for a wide variety of bugs. And these are the leaders. The synchronization problem is surprisingly complex and multifaceted. Take Evernote: if it were a local application, a student could write its 80/20 subset - just as Zuckerberg, armed with MySQL and PHP, wrote Facebook.

    What is the fundamental complexity of synchronization? Let's look at how classic replication and synchronization technologies work and how they keep replicas identical. The simplest approach is to do everything through the center: all write operations flow to the central database, and fresh read results flow back from it. This seems reliable and simple, but difficulties soon close in from three sides:
    1. concurrency - while the response to the previous operation was in flight, the client managed to do something else, and it is no longer quite clear how to merge the two;
    2. scaling a scheme in which every operation goes through a single point;
    3. operating over poor Internet, when the center does not answer clients, or answers slowly.


    The first typical scaling step is master-slave replication, as implemented in a typical database. The master writes operations into a linear log and ships that log to the slaves, which apply it to their data replicas in the same linear order and arrive at the same result. This scales reads, but adds an element of eventual consistency, since slaves update with some lag. The write problem remains: all writes still go through the same center. Linearization can be stretched across distributed replicas with a consensus algorithm such as Paxos or Raft, but even there the ordering is produced by the "leader", i.e. still a center. When the center chokes, people scale horizontally and cut the database into "shards" - at which point linearization, and with it the whole of ACID, is torn into a thousand little ACIDs.
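
    To make the scheme concrete, here is a minimal sketch (our own illustration, not any particular database's code) of the slave side: the master assigns a single global order, and every slave replays the log in exactly that order.

```js
// Sketch of master-slave log replication, from the slave's point of view.
class Slave {
  constructor() {
    this.state = {};   // the local data replica
    this.applied = 0;  // sequence number of the last applied log entry
  }
  // Entries must be applied strictly in the master's linear order.
  applyLog(entries) {
    for (const e of entries) {
      if (e.seq !== this.applied + 1) break;  // gap in the log: wait for it
      this.state[e.key] = e.value;            // same ops, same order => same state
      this.applied = e.seq;
    }
  }
}
```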

    And offline work is plain hard to reconcile with a center and linearization. One can, of course, declare that offline "won't happen to us", but the fact is that offline does happen, and happens regularly. If we are tweeting or liking something, that can be tolerated; if it is anything more serious, hardly. If, say, a waiter kicks the cable and the Internet disappears, we cannot drive the customers out of the restaurant until the admin arrives in a car with flashing lights, and we cannot serve them for free either (an example from Max Nalsky, co-founder of IIKO).

    Moreover, all these server-side adventures do nothing for the client side. The client simply waits until the servers agree among themselves and report the result. The well-known Meteor project tried to do client synchronization in real time by effectively caching MongoDB on the client. To make everything feel lively, the wait for the server's response is masked by the "latency compensation" trick: the client applies the operation to its cache and sends it to the server; the server replies whether it was applied successfully, and if not, sends a patch. The approach is more than dubious. "- Lucy, did you put the car in the garage?" "- Yes, partially!"
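
    Roughly, latency compensation looks like this (a sketch of the idea, not Meteor's actual code; cache.apply, cache.patch and server.send are hypothetical helpers):

```js
// Optimistically apply the op locally, then reconcile with the server's verdict.
async function runOp(cache, op, server) {
  const undo = cache.apply(op);                 // instant local apply; returns a rollback
  try {
    const result = await server.send(op);       // ask the center to confirm
    if (!result.ok) cache.patch(result.patch);  // server disagrees: patch the cache
  } catch (err) {
    undo();                                     // network failed: roll back
  }
}
```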

    Such is the complicated story with linearization. All the more interesting to look at popular solutions that gave up on linearization altogether. There are two good examples - Git and CouchDB. Git was written by Linus Torvalds, who among Linux developers was himself that very "center". That is probably why he felt so keenly that the center is slow and the center does not scale. In git, synchronization works master-to-master: the data is represented as a directed graph of versions, and parallel versions have to be merged sooner or later. Scaling - perfect; offline - no problem. CouchDB works in roughly the same spirit. There are attempts to bring CouchDB-like logic to the client - pouchdb and hood.ie.

    The genuinely new thing in this area is CRDT, and that is today's topic - sorry for the long introduction. CRDT is Convergent / Commutative / Cloud Replicated Data Types. The general idea of CRDT is to use partial order instead of linearization. Operations may happen in parallel on many replicas, and some operations are concurrent - i.e. they happened on different replicas without knowing about each other, neither of them is "first", and different replicas apply them in different orders. If the data structures in use can withstand such mild reordering of operations - reordering that does not violate cause-and-effect relationships - then all the problems associated with the center simply evaporate.
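
    Partial order is usually tracked with something like vector clocks. A minimal sketch (a standard textbook technique, not Swarm-specific): each replica counts its own operations, and two operations neither of which "saw" the other are concurrent.

```js
// a and b map replica ids to per-replica operation counters.
function compare(a, b) {
  let aAhead = false, bAhead = false;
  for (const id of new Set([...Object.keys(a), ...Object.keys(b)])) {
    if ((a[id] || 0) > (b[id] || 0)) aAhead = true;
    if ((b[id] || 0) > (a[id] || 0)) bAhead = true;
  }
  if (aAhead && bAhead) return 'concurrent'; // neither one is "first"
  return aAhead ? 'after' : bAhead ? 'before' : 'equal';
}

compare({ laptop: 2, phone: 1 }, { laptop: 1, phone: 2 }); // => 'concurrent'
```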

    Another question is how many CRDT data structures there are. As it turns out, the everyday staples of computing - variables, arrays, associative arrays - are all quite implementable as CRDTs. But what if we are counting money? Surely then we need linearization and ACID guarantees? Alas and alack, the new turned out to be the well-forgotten old: the data structures used in accounting - accounts, balances - are very much CRDTs. Indeed, during the Renaissance, when the traditions of bookkeeping took shape, there was no Internet, so people managed without linearization.
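
    The classic example is the counter. Here is a sketch of a PN-counter (a standard CRDT, written by us for illustration), which is essentially an account balance: credits and debits commute, so replicas may apply them in any order and still converge.

```js
class Balance {
  constructor() { this.plus = {}; this.minus = {}; }  // per-replica running totals
  credit(replica, amount) { this.plus[replica]  = (this.plus[replica]  || 0) + amount; }
  debit(replica, amount)  { this.minus[replica] = (this.minus[replica] || 0) + amount; }
  value() {
    const sum = m => Object.values(m).reduce((s, x) => s + x, 0);
    return sum(this.plus) - sum(this.minus);
  }
  // Merging takes the per-replica maximum, so repeated or reordered merges are harmless.
  merge(other) {
    for (const m of ['plus', 'minus'])
      for (const [id, n] of Object.entries(other[m]))
        this[m][id] = Math.max(this[m][id] || 0, n);
  }
}
```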

    The great shining feature of CRDTs is live replicas that remain fully functional even without a connection to the center - plus the immediate application of every operation, with no round trip to the center. Such autonomy and speed are especially relevant in two cases. First, mobile devices: they are used on the go, over unreliable Internet. CRDT lets you keep the data on the device and work calmly with it locally, with synchronization in the background. Second, applications with collaboration features, especially real-time ones (think Google Docs, Apple iCloud). In such applications the "state" is large and changes rapidly, and every round trip to the server and back is a nail in the coffin.

    There are also non-CRDT technologies for working with data offline: Dropbox offers its own synchronization API, and there is StrongLoop, Firebase and so on - a whole sea of them. All these solutions work on the Last-Write-Wins (LWW) principle: each record carries a timestamp, and a record with a later timestamp overwrites earlier ones. Cassandra is built on the same principle. In our Swarm library, too, the most common primitive is the LWW object. Swarm's advantage lies in the data structures that cannot be resolved through LWW - for example, text under collaborative editing.
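
    LWW itself fits in a few lines. A sketch (our illustration of the general principle, not any particular product's code): the write with the newer timestamp wins, with the replica id as a tie-breaker so that every replica picks the same winner.

```js
function lwwMerge(a, b) {
  if (a.ts !== b.ts) return a.ts > b.ts ? a : b;
  return a.replica > b.replica ? a : b;  // deterministic tie-break
}

const x = { value: 'cat', ts: 1419000001, replica: 'laptop' };
const y = { value: 'dog', ts: 1419000002, replica: 'phone'  };
lwwMerge(x, y); // => the 'dog' write, regardless of arrival order
```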

    In general, in the Looking-Glass world of distributed systems everything is the other way around. In ordinary programming languages the simplest operation is incrementing a variable, ++; working with arrays is a little harder, and objects and associative collections are harder still. In distributed systems it is exactly the opposite: LWW objects and associative containers are not particularly complex, linear structures (arrays, text) are very complex, and counters are extremely complex. You can see this in Cassandra, where LWW objects were done first, while the counters are, shall we say, still being finished.



    Down to business. We decided to write TodoMVC in Swarm + React to show the library in action. Actually, the first Swarm + React TodoMVC was written back in July by Andrei Popp in less than a day, but that code was not "idiomatic". This time we added linear collections (Vector), server-side rendering and a bunch of goodies. Besides, the usual TodoMVC seemed a bit boring and useless to us; looking at the React + Flux TodoMVC, for example, it is very hard to understand why the authors piled all those tricks into such a simple application. So we added one feature - recursion: pressing Tab takes the user into a nested "child" list. We also adapted the interface for real-time synchronization, which makes the application of at least some practical use, and we began reflecting the application's state in the URL, for easy sharing between users. In general, it was hard for us to stop. Compared with our past experience of building real-time projects, with Swarm we felt we had a magic sword in our hands, and they itched all the time to slash at something.

    A detailed analysis of the application and the library will come in a follow-up article.

    Follow updates on the project's Twitter: @swarm_js.
