Yandex Post Office: how we built a service that analyzes mailing results in real time

    Yandex has a service for bona fide bulk mailers, the Post Office. (For dishonest ones we have Anti-Spam in Mail and the "Unsubscribe" button.) With it, senders can see how many of their letters Yandex.Mail users delete, how many they read, and how long they spend reading them. My name is Anton Kholodkov, and I developed the server side of this system. In this post I will describe exactly how we built it and what difficulties we ran into.



    For the mailer, the Post Office is completely transparent: it is enough to register your domain or email address in the system. The service collects and analyzes data across a variety of parameters: sender name and domain, time, spam/not-spam flag, read/not-read flag. Aggregation by the List-Id field, a special header that identifies a mailing list, is also implemented. We have several data sources.

    First, the system that delivers letters to the metabases: it generates an event whenever a letter enters the system. Second, the web mail interface, where the user changes the state of a message: reads it, marks it as spam, or deletes it. All of these actions have to end up in the storage.
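
    To make the rest of the discussion concrete, here is a rough Java sketch of the kind of record both sources produce. The actual wire format is not described in this post; the field names and the set of actions below are assumptions.

        // Hypothetical shape of a single event; real field names may differ.
        public class MailEvent {
            public enum Action { DELIVERED, READ, DELETED, SPAM }

            public final String senderDomain;  // e.g. "news.example.com"
            public final String listId;        // value of the List-Id header, if any
            public final Action action;        // what happened to the letter
            public final long   timestampMs;   // when it happened

            public MailEvent(String senderDomain, String listId,
                             Action action, long timestampMs) {
                this.senderDomain = senderDomain;
                this.listId = listId;
                this.action = action;
                this.timestampMs = timestampMs;
            }
        }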

    There are no strict latency requirements for storing and modifying information. Adding or changing a record may take quite a while, and until it happens the statistics reflect the previous state of the system. With large volumes of data this is invisible to the user: one or two of their actions that have not yet reached the database cannot shift the statistics by any noticeable amount.

    In the web interface, on the other hand, response speed matters a lot: sluggishness looks ugly. It is also important to prevent the values from "jumping" when the browser window is refreshed, which happens when data is fetched first from one head of the cluster and then from another.

    A few words about choosing a platform. We realized right away that not every solution would cope with our data stream. Initially there were three candidates: MongoDB, Postgres, and our own Lucene + Zookeeper combination. We dropped the first two because of insufficient performance; the biggest problems appeared when inserting large amounts of data. In the end we decided to build on our colleagues' experience and went with the Lucene + Zookeeper combination - the same one Yandex.Mail search uses.

    The standard for communication between components of the system is JSON. Java and JavaScript have convenient tools for working with it; in C++ we use yajl and boost::property_tree. All components expose a REST API.
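
    For illustration, on the Java side turning an event into the JSON that travels between components could look roughly like this. Jackson is assumed here purely as an example; the post does not name the specific Java library, and the payload fields are the same hypothetical ones as above.

        import com.fasterxml.jackson.databind.ObjectMapper;
        import java.util.LinkedHashMap;
        import java.util.Map;

        public class EventToJson {
            public static void main(String[] args) throws Exception {
                // Hypothetical payload for a "message read" event.
                Map<String, Object> event = new LinkedHashMap<>();
                event.put("domain", "news.example.com");
                event.put("list_id", "weekly-digest.example.com");
                event.put("action", "READ");
                event.put("ts", 1386000000000L);

                // Serialize to the JSON sent over the REST API.
                System.out.println(new ObjectMapper().writeValueAsString(event));
                // {"domain":"news.example.com","list_id":"weekly-digest.example.com","action":"READ","ts":1386000000000}
            }
        }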

    The system's data is stored in Apache Lucene. As you know, Lucene is a full-text search library developed by the Apache Software Foundation; it lets you store and retrieve any pre-indexed data. We turned it into a server: taught it to store data, add records to the index, and compact it. Via HTTP requests you can search, add, and modify records, and several kinds of aggregation are available.
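
    As a sketch, adding one message-state record with the plain Lucene API might look like the snippet below. Our server wraps this behind HTTP; the field names and paths are invented, and the classes are from a recent Lucene release, not necessarily the version we ran.

        import org.apache.lucene.analysis.standard.StandardAnalyzer;
        import org.apache.lucene.document.*;
        import org.apache.lucene.index.IndexWriter;
        import org.apache.lucene.index.IndexWriterConfig;
        import org.apache.lucene.store.FSDirectory;
        import java.nio.file.Paths;

        public class IndexOneEvent {
            public static void main(String[] args) throws Exception {
                IndexWriterConfig cfg = new IndexWriterConfig(new StandardAnalyzer());
                try (IndexWriter writer = new IndexWriter(
                        FSDirectory.open(Paths.get("/tmp/postoffice-index")), cfg)) {
                    Document doc = new Document();
                    doc.add(new StringField("domain", "news.example.com", Field.Store.YES));
                    doc.add(new StringField("list_id", "weekly-digest.example.com", Field.Store.YES));
                    doc.add(new StringField("state", "READ", Field.Store.YES));
                    doc.add(new LongPoint("ts", 1386000000000L));         // indexed for range queries
                    doc.add(new StoredField("ts_value", 1386000000000L)); // stored copy for display
                    writer.addDocument(doc);
                }
            }
        }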

    To make sure every state-change record is processed on all the "heads" of the cluster, we use Zookeeper, another Apache Software Foundation product. We extended it, adding the ability to use it as a queue.
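
    Very roughly, a queue on top of ZooKeeper can be built from sequential znodes: producers append nodes, and a consumer always takes the one with the smallest sequence number. The sketch below uses the stock ZooKeeper client with an invented /postoffice/queue path and assumes that parent node already exists; it is not our modified version.

        import org.apache.zookeeper.CreateMode;
        import org.apache.zookeeper.ZooDefs;
        import org.apache.zookeeper.ZooKeeper;
        import java.util.Collections;
        import java.util.List;

        public class ZkQueueSketch {
            public static void main(String[] args) throws Exception {
                ZooKeeper zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181", 15000, event -> {});

                // Producer: every state change becomes a persistent sequential node.
                byte[] payload = "{\"action\":\"READ\"}".getBytes("UTF-8");
                zk.create("/postoffice/queue/event-", payload,
                          ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_SEQUENTIAL);

                // Consumer: process the oldest node, then remove it from the queue.
                List<String> children = zk.getChildren("/postoffice/queue", false);
                Collections.sort(children);              // sequence numbers define FIFO order
                String head = "/postoffice/queue/" + children.get(0);
                byte[] data = zk.getData(head, false, null);
                System.out.println(new String(data, "UTF-8"));
                zk.delete(head, -1);                     // -1 ignores the znode version
            }
        }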

    A dedicated daemon extracts and analyzes data from Lucene. It concentrates all the business logic: calls from the web interface are turned into Lucene HTTP requests, and it also implements the aggregation, sorting, and other processing needed to display data in the web interface.
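
    A typical call from the web interface ultimately reduces to something like "how many letters from this domain are in this state". Against a raw Lucene index (again with invented field names and a recent Lucene API), that is roughly:

        import org.apache.lucene.index.DirectoryReader;
        import org.apache.lucene.index.Term;
        import org.apache.lucene.search.*;
        import org.apache.lucene.store.FSDirectory;
        import java.nio.file.Paths;

        public class CountReadLetters {
            public static void main(String[] args) throws Exception {
                try (DirectoryReader reader = DirectoryReader.open(
                        FSDirectory.open(Paths.get("/tmp/postoffice-index")))) {
                    IndexSearcher searcher = new IndexSearcher(reader);
                    // "Read letters from news.example.com" as a boolean filter.
                    Query q = new BooleanQuery.Builder()
                            .add(new TermQuery(new Term("domain", "news.example.com")),
                                 BooleanClause.Occur.MUST)
                            .add(new TermQuery(new Term("state", "READ")),
                                 BooleanClause.Occur.MUST)
                            .build();
                    System.out.println("read letters: " + searcher.count(q));
                }
            }
        }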



    When users perform actions in the web interface, information about those actions goes through Zookeeper into Lucene. Each action - pressing the "Spam" button, for example - changes the state of the system, and all the data it affects has to be modified carefully. This is the most difficult part of the system; we rewrote and debugged it longer than anything else.

    Our first attempt to solve the problem was, as they say, head-on. We wrote message-status records to Lucene on the fly and aggregated the data in real time at retrieval. This solution worked great with a small number of records: summing hundreds of records took microseconds, and everything looked wonderful. The problems started with larger volumes. Thousands of records already took seconds to process, tens of thousands took tens of seconds. This annoyed both the users and us, so we had to look for ways to speed up data output.

    At the same time, aggregation for ordinary users with dozens, hundreds, or even thousands of letters per day was not a problem. The problem was individual mailers that sent out hundreds of thousands of letters in a very short time; for them it was impossible to compute the data in real time.

    The solution came from analyzing the requests coming from the web interface. There were only a few query types, and they all boiled down to summing data or finding the average of a data series. So we added aggregate records to the database and began updating them whenever a message-status record was added or changed. For example, when a letter arrives, we add one to the total counter. When the user deletes a letter, we subtract one from the total counter and add one to the deleted counter. When the user marks a message as spam, we add one to the "spam letters" counter. The number of records that have to be processed to answer a request became much smaller, which greatly sped up aggregation. Zookeeper makes it easy to keep this data consistent. Updating the aggregate records takes time, but we can afford a slight data lag.
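
    In simplified form, the idea is that each incoming event touches a handful of pre-computed counters instead of forcing a re-scan of individual records at query time. The class below only illustrates that bookkeeping, it is not our actual code; the read counter is an assumption by analogy with the others.

        // Per-sender (or per-list) aggregate record kept alongside the raw events.
        public class CampaignCounters {
            long total;    // letters currently counted as delivered
            long deleted;  // letters the user removed
            long spam;     // letters marked as spam
            long read;     // letters that were opened (assumed counter)

            void apply(String action) {
                switch (action) {
                    case "DELIVERED": total++;            break;
                    case "DELETED":   total--; deleted++; break;
                    case "SPAM":      spam++;             break;
                    case "READ":      read++;             break;
                }
            }
        }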

    What is the result? The system currently runs Lucene on four machines and Zookeeper on three. Input data arrives from 10 machines and is served by six frontend machines. The system handles 4,500 modifying requests and 1,100 read requests per second. The storage currently holds 3.2 terabytes.

    The Lucene + Zookeeper storage has proven to be very stable. A Lucene node can be taken down or added on the fly: Zookeeper keeps the history and replays the required number of events to the new machine, and after a while we get a head with up-to-date information. One machine in the cluster is set aside for backup storage.

    Despite the tight development schedule, the system turned out to be reliable and fast. The architecture makes it easy to scale - both vertically and horizontally - and to add new data analysis capabilities, which we will certainly be adding soon.
