SockJS Server Performance Study

    Hello everyone!

    As it happens, I work on all sorts of push technologies using Tornado. A while back I described Tornadio2, a server-side implementation of the Socket.io protocol on top of Tornado.

    Now I want to introduce a similar project - sockjs-tornado .

    For those not interested in the project itself, there is other useful information: comparative load testing of PyPy 1.7 against CPython 2.6.6, sockjs-node and socket.io (both on node.js 0.6.5). Everything under the cut :-)

    First, what is SockJS? It is a client library written in JavaScript that mimics the WebSocket API but supports all browsers by falling back to various surrogate transports such as AJAX long polling, JSONP polling and the like. In general, it is very similar to Socket.io, but with some key differences.
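The fallback idea can be sketched roughly as follows. The transport names come from the SockJS documentation, but the selection logic here is an illustrative Python model, not the library's actual (JavaScript) negotiation code:

```python
# Illustrative sketch of transport fallback: try the "best" transport first,
# then walk down the chain. The real SockJS client negotiates this in
# JavaScript inside the browser; this model only shows the idea.

# Ordered from most to least capable; names follow the SockJS documentation.
FALLBACK_CHAIN = [
    "websocket",
    "xhr-streaming",
    "xhr-polling",
    "jsonp-polling",
]

def pick_transport(supported):
    """Return the first transport in the chain that the environment supports."""
    for transport in FALLBACK_CHAIN:
        if transport in supported:
            return transport
    raise RuntimeError("no usable transport")

# An old browser without WebSocket or CORS streaming support:
print(pick_transport({"xhr-polling", "jsonp-polling"}))  # xhr-polling
```

The application code never sees which transport won the negotiation; it just gets the WebSocket-like API either way.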

    A small digression: I have nothing to do with the client part of the library, apart from sending bug reports and generally pestering its developers.

    So why is it needed if Socket.io already exists?
    Here are a couple of reasons that led to the development of SockJS:
    1. The Socket.io developers, starting with version 0.7, went in the wrong direction. Instead of fixing bugs and adding support for more browsers, they decided to make the API more high-level. I don't dispute that the new features are very convenient, but the number of bugs has not decreased. For example, a fairly serious race condition has remained open for more than 3 months. Reconnection after a disconnect still does not work. And so on.
    2. Sometimes you don't want to be tied to a specific library. If you use Socket.io, it will be difficult to abandon it later, since you will have to change every place in your code that is bound to its API.

    So what is SockJS?
    1. As previously noted, it is a drop-in replacement for the browser WebSocket API. Accordingly, moving an existing WebSocket application over will be fairly painless (the server side aside).
    2. SockJS even works in Opera, which Socket.io really does not like. In addition, SockJS behaves correctly with various antiviruses: it falls back to another transport where Socket.io cannot establish a connection at all.
    3. SockJS supports streaming transports: one persistent connection from the server for outgoing data. Socket.io has abandoned streaming transports since version 0.7.
    4. The protocol is much simpler and very well documented. For developers, there is even a set of tests that a server implementation should pass.
    5. Scalability is built into the protocol. For example, the load balancer does not need to inspect cookies to implement sticky sessions - all the necessary information is already in the URL.
    6. The library is very well tested under different conditions; there are even QUnit tests that exercise both the client and the server directly from the browser. For example, there is such a test suite for sockjs-node.
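Point 5 can be illustrated concretely. A SockJS session URL has the shape /&lt;prefix&gt;/&lt;server_id&gt;/&lt;session_id&gt;/&lt;transport&gt; (per the SockJS protocol documentation), so a balancer can route on the server_id path segment alone. The backend list below is made up for illustration:

```python
# The SockJS protocol puts everything a load balancer needs into the URL:
#   /<prefix>/<server_id>/<session_id>/<transport>
# so sticky sessions can be implemented by routing on the server_id path
# segment instead of inspecting cookies. The backend addresses are invented
# for this sketch.

BACKENDS = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]

def parse_sockjs_url(path):
    """Split a SockJS session URL into (prefix, server_id, session_id, transport)."""
    prefix, server_id, session_id, transport = path.strip("/").split("/")
    return prefix, server_id, session_id, transport

def route(path):
    """Pick a backend using only the server_id segment of the URL."""
    _, server_id, _, _ = parse_sockjs_url(path)
    return BACKENDS[int(server_id) % len(BACKENDS)]

print(route("/chat/123/a1b2c3d4/websocket"))  # 123 % 3 == 0 -> 10.0.0.1:8080
```

Every request of a given session carries the same server_id, so all of its polling requests land on the same backend without any cookie machinery.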

    In general, this thing just works.

    Now to the second part of the article, performance.

    After writing sockjs-tornado, I decided to check how it compares with the "native" server written in node.js. node.js is very fashionable these days, and people often praise its performance for all kinds of push technologies. I will say in advance: the test results surprised me a great deal.

    The testing methodology was very simple: we have a chat server with one room. The server simply forwards each incoming message to all chat participants. If you are interested, here is the server code.
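As a rough model of that server's logic: sockjs-tornado connections expose on_open/on_message/on_close callbacks and a broadcast helper, and the classes below mimic that shape in plain Python. This is a self-contained stand-in for illustration, not the actual benchmark code:

```python
# Self-contained model of the one-room chat broadcast logic. sockjs-tornado
# connection classes have on_open/on_message/on_close callbacks and a
# broadcast() helper; this stand-in mimics that shape without the library.

class ChatConnection:
    # One shared "room": the set of all currently open connections.
    participants = set()

    def __init__(self):
        self.received = []  # messages delivered to this client

    def on_open(self):
        self.participants.add(self)

    def on_message(self, message):
        # The server's only job: fan each incoming message out to everyone.
        self.broadcast(self.participants, message)

    def on_close(self):
        self.participants.discard(self)

    def broadcast(self, clients, message):
        for client in clients:
            client.send(message)

    def send(self, message):
        self.received.append(message)

# Two clients join; a message from one reaches both (sender included).
a, b = ChatConnection(), ChatConnection()
a.on_open(); b.on_open()
a.on_message("ping")
print(a.received, b.received)  # ['ping'] ['ping']
```

Because every message is delivered to every participant, the server's outgoing message rate is (incoming rate) × (number of participants), which is exactly what the benchmark stresses.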

    A WebSocket client sends a ping and waits for its own message to come back. On receiving the response, it records the time elapsed between sending and receiving. The results for different concurrency levels and message counts are saved, and a graph is built from them.
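The client's measurement loop can be sketched like this. The echo() function is a placeholder standing in for the real send-and-wait over a WebSocket; everything else (monotonic timestamps around each round trip) is the essence of the method:

```python
import time

# Sketch of the benchmark client's measurement loop: stamp each ping before
# sending, and compute the round trip when the echo comes back.

def echo(message):
    """Placeholder for send + server broadcast + receive over a WebSocket."""
    return message

def measure_round_trip(n=1000):
    """Return per-message round-trip times in seconds."""
    latencies = []
    for i in range(n):
        start = time.monotonic()
        reply = echo(f"ping-{i}")
        assert reply == f"ping-{i}"  # make sure we got our own message back
        latencies.append(time.monotonic() - start)
    return latencies

samples = measure_round_trip(100)
print(f"avg round trip: {sum(samples) / len(samples) * 1000:.3f} ms")
```

time.monotonic() is used rather than wall-clock time so that clock adjustments cannot distort the measured intervals.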

    Some may ask: what exactly is being tested? Here is what:
    - The speed of each server's WebSocket protocol implementation
    - The maximum message rate at which the server begins to choke
    - The overhead of supporting a large number of connections
    - The response time at different load levels

    Another question: why a "dumb" chat with no logic at all? If you have come across a project like the Humble Indie Bundle, you may have seen that the amount of money raised is shown in real time. They use a "broker" that holds a large number of web clients, plus a data source (producer) that periodically sends the broker the current total. The broker must push this information to all of its connected clients. The faster the broker works, the more clients it can serve within a given time.

    The study was written up in English, since the sockjs developers asked for comparative testing against sockjs-node, and you can see it right here. If anyone is interested, I can translate that article into Russian.

    In short, we get the following picture:
    - sockjs-node can send up to 45,000 messages per second with an average response time of 200 ms.
    - sockjs-tornado on CPython 2.6.6 can push up to 55,000 messages per second with a response time of 200 ms.
    - sockjs-tornado on PyPy 1.7 simply blows the others away with its 150,000+ messages per second.

    Of course, the servers can send more messages per second, but the response time grows and the application stops being real-time :-)

    You can see the comparison graph here. The X axis is the number of messages the server sends per second; the Y axis is the response time. Each line is a server (node = sockjs-node, socketio = socket.io on node, cpython = sockjs-tornado on CPython, pypy = sockjs-tornado on PyPy 1.7) at a given number of simultaneous connections. socket.io is included as an example of the performance of another node.js project.

    Even setting aside the node.js vs CPython comparison, PyPy's performance came as a complete surprise to me.

    Well, in conclusion.

    I recommend taking a look at SockJS if you are planning any real-time functionality, even if you have already considered Socket.io. And I hope that sockjs-tornado will be useful to someone else as well.
