Distributed chat on Node.JS and Redis

Spoiler: the result is a long way from doomed "pigeon post".

A short Q&A:

Who is it for? People who have had little or no contact with distributed systems and who are curious to see how they can be built, and what patterns and solutions exist.

Why write it? I became curious about the what and the how myself. Having gathered information from various sources, I decided to lay it out in concentrated form, because at one time I would have liked to find such a write-up myself. In essence, this is a written account of my personal wanderings and reflections. There will also surely be plenty of corrections in the comments from knowledgeable people; in part, that is the point of publishing this as an article.

Problem statement

How do you build a chat? It should be a trivial task; probably every second backend developer has hacked together their own, just as game developers write their own Tetris or Snake. I took it on, but to make it more interesting, it should be ready to take over the world: it should withstand hundreds of billions of active users and generally be incredibly cool. Hence the clear need for a distributed architecture, because fitting that imaginary number of clients onto one machine is not possible with current hardware. Instead of just sitting around waiting for quantum computers to appear, I resolutely set out to study the topic of distributed systems.

It is worth noting that a quick response matters a lot here, the notorious realtime, because this is a chat! And not mail delivery by pigeons.

% random joke about russian mail %

We will use Node.js, which is ideal for prototyping. For sockets we take Socket.IO, and we write in TypeScript.

So, what do we want:

  1. Users should be able to send each other messages
  2. We should know who is online / offline

How we want it:

Single server

There is not much to say here, so straight to the code. Declare the message interface:

interface Message {
    roomId: string, // which room we are writing to
    message: string // what we write there
}

On the server:

io.on('connection', sock => {
    // Join the specified room
    sock.on('join', (roomId: string) =>
        sock.join(roomId))
    // Write to the specified room:
    // everyone who joined it earlier will receive this message
    sock.on('message', (data: Message) =>
        io.to(data.roomId).emit('message', data))
})

On the client, something like:

sock.on('connect', () => {
    const roomId = 'some room'
    // Subscribe to messages from any room
    sock.on('message', (data: Message) =>
        console.log(`Message ${data.message} from ${data.roomId}`))
    // Join one of them
    sock.emit('join', roomId)
    // And write to it
    sock.emit('message', <Message>{roomId: roomId, message: 'Halo!'})
})

You can work with online status like this:

io.on('connection', sock => {
    // On authorization, join the socket to a room named after the user id.
    // Later, if we need to send a message to a specific user,
    // we can drop it straight into that room
    sock.on('auth', (uid: string) =>
        sock.join(uid))
    // Now, to find out whether a user is online,
    // we simply check whether anyone is in the room named after his id
    // and send back the result
    sock.on('isOnline', (uid: string, resp) =>
        resp(io.sockets.clients(uid).length > 0))
})

And on the client:

sock.on('connect', () => {
    const uid = 'im uid, rly'
    // "Authorize"
    sock.emit('auth', uid)
    // Check whether we are online
    sock.emit('isOnline', uid, (isOnline: boolean) =>
        console.log(`User online status is ${isOnline}`))
})

Note: I did not run this code; I am writing it from memory, purely as an example.

Easy as pie: bolt on real authorization and room management (message history, adding / removing participants) here, and profit.

BUT! We are out to seize the world, so this is no time to stop. We move on rapidly:

Node.JS Cluster

Examples of using Socket.IO on multiple nodes are right on the official site. That includes the native Node.js cluster, which seemed inapplicable to my task: it lets us scale our application across a whole machine, BUT not beyond it, so we definitely pass it by. We need to finally go beyond the bounds of a single box!

Distribute and reinvent the wheel

How do we do it? Obviously, we need to somehow link our instances, running not only in our own basement but in the neighbor's basement too. The first thing that comes to mind: add some intermediate link that will serve as a bus between all our nodes:


When a node wants to send a message to another, it makes a request to the bus, which in turn forwards it to the right place. Everything is simple. Our network is ready!


... but isn't everything so simple?)

With this approach we run into the performance of that intermediate link, and in general we would like to address the right nodes directly, because what can be faster than communicating directly? So let's move in that direction!

What do we need first? To connect one instance to another. But how does the first find out that the second exists? We want an arbitrary number of them, spun up and torn down at will! We need a master server whose address is known in advance; everyone connects to it, so it knows all the nodes that exist on the network and kindly shares this information with all of them.


That is, a node comes up, tells the master it has awakened, receives a list of the other active nodes, connects to them, and that's it: the network is ready. Consul or something similar could act as the master, but since we are reinventing wheels here, the master will be self-made.
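If we really did hand-roll such a master, its core is tiny: a registry of node addresses that every newcomer both reports to and reads from. Here is a minimal in-memory sketch; the `Master` class and its method names are invented purely for illustration, and a real version would of course sit behind a socket server:

```typescript
// A toy "master": it only keeps the set of known node addresses.
// In reality register() would be invoked over a network connection.
class Master {
    private nodes = new Set<string>()

    // A node announces itself and receives the list of its peers
    register(address: string): string[] {
        const peers = [...this.nodes]
        this.nodes.add(address)
        return peers
    }

    // A node went down
    unregister(address: string): void {
        this.nodes.delete(address)
    }
}

const master = new Master()
console.log(master.register('node1:3000')) // the first node has no peers yet
console.log(master.register('node2:3000')) // the second learns about node1
console.log(master.register('node3:3000')) // the third learns about both
```

Each new node calls `register`, gets back everyone who came before it, and dials them up.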

Great, now we have our own Skynet! But the current chat implementation no longer fits it. Let's draw up the following requirements:

  1. When a user sends a message, we need to know WHOM he is sending it to, i.e. have access to the room's members.
  2. Once we have the participants, we must deliver the messages to them.
  3. We need to know which users are online right now.
  4. For convenience, let users subscribe to other users' online status, so they learn about changes in real time.

Let's deal with the users. For example, we can have the master know which user is connected to which node. The situation looks like this:


Two users are connected to different nodes. The master knows this, and the nodes know that the master knows. When UserB authorizes, Node2 notifies the master, which "remembers" that UserB is attached to Node2. When UserA wants to send a message to UserB, we get the following picture:


In principle everything works, but I would like to avoid the extra round trip of polling the master; it would be cheaper to go straight to the right node, which is what all this was for. We can do that if every node tells all the others which users are connected to it: each node becomes a self-sufficient analogue of the master, and the master itself becomes unnecessary, because the "User => Node" mapping is duplicated on everyone. When a node starts, it is enough to connect to any node already running, pull its copy of the list, and voila, it too is ready for battle.
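The data each node would have to replicate is really just a "user id -> node" dictionary. A minimal sketch of such a registry; the `UserRegistry` class and its method names are invented here for illustration:

```typescript
// A toy replicated address book: every node keeps a full copy of
// which node each user is connected to.
class UserRegistry {
    private userToNode = new Map<string, string>()

    // Called when any node announces "user X connected to me"
    onUserConnected(uid: string, nodeAddress: string): void {
        this.userToNode.set(uid, nodeAddress)
    }

    onUserDisconnected(uid: string): void {
        this.userToNode.delete(uid)
    }

    // Where should a message for this user go? No master roundtrip needed.
    locate(uid: string): string | undefined {
        return this.userToNode.get(uid)
    }

    // A freshly started node pulls a snapshot from any running peer
    snapshot(): [string, string][] {
        return [...this.userToNode.entries()]
    }
}

const registry = new UserRegistry()
registry.onUserConnected('UserB', 'Node2')
console.log(registry.locate('UserB')) // Node2, resolved locally

// A new node bootstraps from a peer's snapshot
const fresh = new UserRegistry()
for (const [uid, node] of registry.snapshot()) fresh.onUserConnected(uid, node)
console.log(fresh.locate('UserB')) // Node2 as well
```

Lookup is now local on every node, at the cost of every node holding the whole map.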



But as a trade-off we get duplication of the list, which, even though it is just a mapping of "user id -> [node connections]", will still take up a fair amount of memory with enough users. And building this ourselves clearly smells of bicycle-building. The more code, the more potential errors. So let's freeze this option and look at what already exists ready-made:

Message brokers

An entity implementing that same "bus", the "intermediate link" mentioned above. Its job is to receive and deliver messages. We, as its users, can subscribe to messages and send our own. It's simple.

There are the proven RabbitMQ and Kafka: delivering messages is all they do, that is their purpose, and they are packed with all the functionality needed for it. In their world, the message must be delivered, no matter what.

At the same time there is Redis with its pub/sub: the same idea as the guys above, but blunter. It just dumbly receives a message and hands it to the current subscribers, without queues or other overhead. It could not care less about the messages themselves: if one gets lost, or a subscriber has dropped off, Redis throws it away and moves on to the next, as if flinging away a hot poker it wants to be rid of as fast as possible. And if Redis itself goes down, all in-flight messages are lost with it. In other words, there is no question of any delivery guarantee.
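These semantics are easy to feel with Node's own EventEmitter, which behaves the same way: a published message reaches whoever happens to be subscribed at that instant, and anything published while nobody is listening simply evaporates. This is only an in-memory analogy of Redis pub/sub, not its API:

```typescript
import { EventEmitter } from 'events'

// In-memory stand-in for Redis pub/sub: fire-and-forget, no queue, no replay
const bus = new EventEmitter()

const received: string[] = []

// Published before anyone subscribes: lost forever, exactly like pub/sub
bus.emit('news', 'you will never see this')

bus.on('news', msg => received.push(msg))

bus.emit('news', 'hello subscriber')

console.log(received) // only the second message made it
```

No buffer, no acknowledgment, no redelivery; whoever was listening got it, and that is the whole contract.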

... and this is what you need!

Well, really, we are just making a chat. Not some critical payments service or a space mission control center, but... just a chat. The risk that some hypothetical Petya fails to receive one message in a thousand once a year can be neglected if, in exchange, we get a performance boost and, along with it, more users for the same money: a trade-off in all its glory. Moreover, we can keep the message history in some persistent storage anyway, which means Petya will still see that lost message after reloading the page / app. That is why we will settle on Redis pub/sub, or more precisely: let's look at the existing adapter for Socket.IO, which is mentioned in the article on the official site.

So what is it?

Redis adapter


With it, an ordinary application turns into a real distributed chat with a few lines and a minimum of gestures! But how? If you look inside, it is a single file of some hundred and fifty lines.

When we emit a message

io.emit("everyone", "hello")

it goes into Redis, is relayed to all the other instances of our chat, and each of them in turn emits it locally on its own sockets


The message is distributed to all nodes even if we emit it to one specific user. That is, every node receives every message and then works out for itself whether it needs it.

There is also a simple RPC (remote procedure call) implemented there, allowing us not only to send messages but also to receive answers. For example, we can control sockets remotely: "who is in the specified room", "tell this socket to join that room", and so on.

What can we do with this? For example, use the user ID as the room name (user id == room id). On authorization we join the socket to that room, and when we want to send a message to the user, we just shoot it into the room. We can also find out whether a user is online by simply checking whether there are any sockets in that room.

In principle we could stop here, but as always, it is not enough for us:

  1. A bottleneck in the form of a single Redis instance
  2. Redundancy: we would like nodes to receive only the messages they need

Regarding the first item, let's look at such a thing as:

Redis cluster

It joins several Redis instances together, which then work as a single whole. But how does it do that? Like this:


... and we see that a pub/sub message is duplicated to all members of the cluster. That is, it is intended not to increase performance but to increase reliability, which is nice and necessary, but for our case it has no value: it does not save the bottleneck situation, and on top of that it consumes even more resources overall.


I am a newbie and I do not know much; sometimes I have to go back to bicycle-building, which is what we will do now. No, we keep Redis so as not to slide back entirely, but something has to be invented on the architecture side, because the current one will not do.

Wrong turn

What do we need? To increase overall throughput. For example, let's simply try spinning up one more instance. Imagine that socket.io-redis could connect to several of them, choosing a random one when publishing a message and subscribing to all of them. We would get this:


Voila! Overall the problem is solved, Redis is no longer the bottleneck, you can spawn as many instances as you like! But now the nodes are. Yes, our chat instances still digest ALL messages, whoever they are intended for.

It can also be done the other way around: subscribe to one random instance, which reduces the load on the nodes, and publish to all of them:


We see that it has become the opposite: the nodes feel calmer, but the load on each Redis instance has grown. That is no good either. We need slightly smarter bicycle-building.

To pump our system up further, we set the socket.io-redis package aside. It is cool, but we need more freedom. And so, we connect to Redis:

// Separate connections for:
const pub = new RedisClient({host: 'localhost', port: 6379}) // publishing messages
const sub = new RedisClient({host: 'localhost', port: 6379}) // subscribing to them
// Recall this interface as well
interface Message {
    roomId: string, // which room we are writing to
    message: string // what we write there
}

Set up our message system:

// Catch all incoming messages here
sub.on('message', (channel: string, dataRaw: string) => {
    const data = <Message>JSON.parse(dataRaw)
    io.to(data.roomId).emit('message', data)
})
// Subscribe to the channel
sub.subscribe("messagesChannel")
// Join the specified room
sock.on('join', (roomId: string) =>
    sock.join(roomId))
// Write to the room
sock.on('message', (data: Message) => {
    // Publish to the channel
    pub.publish("messagesChannel", JSON.stringify(data))
})

At the moment it works like socket.io-redis: we listen to all messages. Now let's fix that.

We organize subscriptions as follows: recall the "user id == room id" concept, and when a user appears, subscribe to the channel of the same name in Redis. That way our nodes receive only the messages intended for them instead of listening to the whole broadcast.

// Catch all incoming messages here
sub.on('message', (channel: string, message: string) => {
    io.to(channel).emit('message', message)
})
let UID: string|null = null
sock.on('auth', (uid: string) => {
    UID = uid
    // When the user authorizes, subscribe to the channel
    // named after our UID
    sub.subscribe(UID)
    // And join the room of the same name
    sock.join(UID)
})
sock.on('writeYourself', (message: string) => {
    // Write to ourselves, i.e. publish the message to the channel named after our UID
    if (UID) pub.publish(UID, message)
})

Awesome, now we are sure the nodes receive only messages intended for them, nothing extra! It should be noted, though, that there are now far, far more subscriptions, which means they will eat memory like nobody's business, plus there are more subscribe / unsubscribe operations, which are relatively expensive. But in any case this gives us some flexibility; we could even stop at this point and reconsider all the previous options in light of the nodes' new ability to receive messages more selectively. For example, nodes could each subscribe to one of several Redis instances, and publishers could send each message to all instances:


... but either way, these schemes still do not give endless extensibility at reasonable overhead, so we need to give birth to other options. At some point the following scheme came to mind: what if we divide the Redis instances into groups, say A and B, two instances each? When subscribing, a node subscribes to one instance from each group; when publishing, it sends the message to all instances of one randomly chosen group.



Thus we get a structure with truly infinite potential for extension at runtime; the load on an individual node at any point does not depend on the size of the system, because:

  1. The total bandwidth is divided between the groups, i.e. as users / activity grow, we simply add more groups.
  2. User management (subscriptions) is divided within each group, i.e. as users / subscriptions grow, we simply add more instances inside the groups.
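The group scheme can be sketched with the same kind of in-memory stubs: a subscriber holds one subscription in each group, and a publisher pushes to every instance of one group (chosen round-robin here instead of randomly, so the sketch stays deterministic). All the names below are made up for illustration:

```typescript
type Handler = (channel: string, msg: string) => void

// A stub Redis instance: channel -> handlers, plus a message counter
class Instance {
    handled = 0
    private subs = new Map<string, Handler[]>()
    subscribe(channel: string, fn: Handler) {
        const list = this.subs.get(channel) ?? []
        list.push(fn)
        this.subs.set(channel, list)
    }
    publish(channel: string, msg: string) {
        this.handled++;
        (this.subs.get(channel) ?? []).forEach(fn => fn(channel, msg))
    }
}

// Two groups of two instances each
const groups: Instance[][] = [
    [new Instance(), new Instance()],
    [new Instance(), new Instance()],
]

// Subscribing: pick ONE instance from EACH group
// (index 0 here for simplicity; a real node would balance)
const received: string[] = []
const subscribe = (channel: string, fn: Handler) =>
    groups.forEach(group => group[0].subscribe(channel, fn))

// Publishing: push to ALL instances of ONE group (round-robin over groups)
let g = 0
const publish = (channel: string, msg: string) =>
    groups[g++ % groups.length].forEach(inst => inst.publish(channel, msg))

subscribe('user42', (ch, msg) => received.push(msg))
publish('user42', 'hi')    // hits every instance of group A
publish('user42', 'there') // hits every instance of group B

console.log(received) // both messages arrive despite landing in different groups
```

Because the subscriber covers every group, it is guaranteed to intersect with whichever group the publisher picked; meanwhile each instance only ever sees its own group's traffic.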

... and as always there is one "BUT": the bigger it all gets, the more resources each further increase requires, which strikes me as an exorbitant trade-off.

In general, if you think about it, all the snags above stem from not knowing which user is on which node. Indeed, had we had that information, we could have pushed messages exactly where needed, without unnecessary duplication. What have we been trying to do all this time? To make the system infinitely scalable without a clear addressing mechanism, which inevitably drove us either into a dead end or into unjustified redundancy. For example, recall the master acting as an "address book":


A similar story is told by this dude:

To get the user's location we perform an extra roundtrip, which is basically OK, but not in our case. It seems we are digging in the wrong direction; we need something else...

The power of hashing

There is such a thing as a hash. It has a finite range of values, and you can compute it from any data. What if we divide that range between the Redis instances? We take the user ID, compute the hash, and depending on which range it falls into, we subscribe / publish to one specific instance. That is, we do not know in advance where a user lives, but given his ID we can say with confidence that he is on instance n, guaranteed. Now the same thing in code:

// Our hash function returning a number; a simple djb2-style hash as an example
function hash(val: string): number {
    let h = 5381
    for (let i = 0; i < val.length; i++)
        h = ((h * 33) ^ val.charCodeAt(i)) >>> 0
    return h
}
const clients: RedisClient[] = [] // array of Redis clients
const uid = "some uid" // the user's identifier
// Now, with this simple manipulation, we always get the same
// client out of the set for a given user
const selectedClient = clients[hash(uid) % clients.length]

Voila! Now we do not depend on the number of instances at all; we can scale arbitrarily without overhead! Seriously, this is a brilliant option, whose only drawback is the need to fully restart the system when the number of Redis instances changes. There are such things as the Standard Ring and the Partition Ring that can overcome this, but they are hard to apply to the conditions of a messaging system. Sure, I could implement logic for migrating subscriptions between instances, but that is yet another chunk of code of unclear size, and as we know, more code means more bugs; we don't need that, thanks. In our case a bit of downtime is a perfectly acceptable trade-off.

You can also look at RabbitMQ with its consistent-hash exchange plugin, which lets you do the same thing we are doing, and also handles subscription migration (as I said above, it is packed with functionality head to toe). In principle you could take it and sleep peacefully, provided someone who knows its tuning strips it down close to realtime mode, leaving only the hash-ring feature.

I have pushed the repository to GitHub.

It implements the final variant we arrived at. In addition, there is extra logic for working with rooms (dialogues).

Overall, I am satisfied, and we can wrap up.


You can build anything, but there is such a thing as resources, and they are finite, so you have to maneuver.

We went from complete ignorance of how distributed systems can work to more or less tangible, concrete patterns, and that is good.
