"Tools are not as important as the ability to think about the systems that they create." Great interview with Martin Kleppman



    Martin Kleppmann is a researcher at the University of Cambridge working on CRDTs and the formal verification of distributed algorithms. His book Designing Data-Intensive Applications, published in 2017, has become a bestseller in the field of data storage and processing.

    Kevin Scott (CTO at Microsoft) once said: "This book should be a must-read for software engineers. It is a rare resource that combines theory and practice, helping developers think more deeply about the design and implementation of infrastructure and data processing systems." Jay Kreps, the creator of Apache Kafka and CEO of Confluent, has said something similar.

    Before moving into academic research, Martin worked in industry and co-founded two successful startups: Rapportive (acquired by LinkedIn in 2012) and Go Test It (acquired by RedGate).

    This post is a detailed interview with Martin. Topics discussed include:

    • Moving from industry to academic research;
    • The background behind Designing Data-Intensive Applications;
    • Common sense versus artificial hype and tool marketing;
    • The uselessness of the CAP theorem and other industry mistakes;
    • The benefits of decentralization;
    • Blockchains, Dat, IPFS, Filecoin, WebRTC;
    • New CRDTs and formal verification in Isabelle;
    • Event sourcing, the low-level approach, XA transactions;
    • Apache Kafka, PostgreSQL, Memcached, Redis, Elasticsearch;
    • Using all of this in real life;
    • The entry threshold for Martin's talks and for the Hydra conference.

    The interview was conducted by Vadim Tsesko (@incubos), a lead developer on the Platform team at Odnoklassniki. Vadim's scientific and engineering interests include distributed systems, data storage, and the verification of software systems.

    From business to academic research


    Vadim: I'd like to start with a question that matters to me personally. You founded Go Test It and Rapportive and spent a long time building large systems at LinkedIn, but then decided to leave commercial development for academic research. Could you tell us what pushed you in that direction? What do you gain by working at a university, and what did you have to sacrifice?

    Martin: It was a very interesting transition. As I understand it, you're asking because few people move from commercial development into academic research; movement in the opposite direction is far more common. That is understandable, since pay at universities is considerably lower than in industry. What attracts me personally to the researcher's position is that I can decide for myself which topics to work on, and I make that choice based on what seems interesting and important to me, even if working on a topic promises no profit within the next six months. Everything you work on at a company has to be sellable in one form or another. At the moment I am working on topics that are important for the future of the Internet and of software, but our understanding of them is not yet deep enough to build a finished product. So far we don't even have a general idea of how these technologies should work. Since this research is fundamental, I decided it is better done at a university rather than at a company: a university offers more freedom, and you can work on things that won't bring any profit for another ten years. The planning horizon is much longer.

    The book Designing Data-Intensive Applications


    Vadim: We will definitely come back to the topic of research, but for now let's talk about your book, Designing Data-Intensive Applications. In my opinion, it is one of the best guides to modern distributed systems, almost an encyclopedia: it covers all the most significant achievements that exist today.

    Martin: Thank you, I'm glad it came in handy.

    Vadim: It's unlikely that our readers aren't familiar with it yet, but just in case, let's discuss the most significant achievements in distributed systems that you write about.

    Martin: Actually, when writing the book, my goal was not to describe particular technologies. Rather, I wanted to provide a guide to the whole landscape of systems used to store and process data. There is now a huge number of databases, stream processors, batch processing tools, all kinds of replication tools and the like, so it is very hard to form an overall picture of the field. And if you need a database for a specific problem, it is difficult to choose among the many that exist. Many books written about such systems are simply useless here. For example, a book about Apache Cassandra may describe how wonderful Cassandra is, but will say nothing about the tasks it is not suited for. So in my book I try to identify the main questions you need to ask yourself when building large systems. Answering those questions helps determine which technologies are well suited to the problem at hand and which are not. The main thing is that no technology can do everything well. I try to show the advantages and disadvantages of different technologies in different contexts.

    Vadim: Indeed, many technologies share features and functionality and offer the same data model. At the same time, you cannot trust the marketing, and to understand how a system really works you have to read not only the technical reports and documentation but sometimes even the source code.

    Common sense versus artificial hype and tool advertising


    Martin: Moreover, you often have to read between the lines, because the documentation does not tell you which tasks a database is poorly suited for. In reality every database has its limitations, and you should always know what they are. Often you have to read the deployment guides and reconstruct how the system works internally.

    Vadim: Yes, that's a great example. Don't you think this field lacks a common vocabulary, or a single set of criteria that would make it possible to compare different solutions to the same problem? Right now different names are used for the same things, and many aspects that should be spelled out clearly and explicitly are not mentioned at all, for example, transactional guarantees.

    Martin: Yes, that's true. Unfortunately, our industry very often whips up excessive excitement around various tools. That is understandable, since the tools are created by companies with an interest in promoting their products. So these companies send people to conferences who, in essence, talk about how great those products are. It masquerades as technical talks, but it is essentially advertising. It would not hurt us as an industry to be more honest about the advantages and disadvantages of our products. One prerequisite for that is shared terminology; without it, it is impossible to compare things. Beyond that, we need methods for analyzing the advantages and disadvantages of different technologies.

    The uselessness of the CAP theorem and other industry mistakes


    Vadim: My next question is a sensitive one. Could you tell us about any common mistakes in our industry that you have encountered during your career? For example, an overrated technology, or a widely used solution that we should have gotten rid of long ago? It may not be the best example, but the one that comes to mind is using JSON over HTTP/1.1 instead of, say, gRPC over HTTP/2. Or perhaps you don't share this view?

    Martin: Most often, when building systems, achieving one thing means sacrificing something else, so I prefer not to talk about mistakes here. In the case of choosing between JSON over HTTP/1.1 and, say, Protocol Buffers over HTTP/2, both options have a right to exist. If you decide to use Protocol Buffers, you have to define a schema, and that can be very useful because it helps pin down the behavior of the system precisely. But in some situations such a schema is nothing but an annoyance, especially in the early stages of development when data formats change frequently. Again, to achieve a certain goal you have to sacrifice something, and in some situations that is justified and in others it is not. There are not that many decisions that can truly be called mistakes. But since we are on the subject, let's talk about the CAP theorem: in my opinion, it is of no use whatsoever. When it is invoked in system design, either the meaning of the CAP theorem is being misunderstood, or it is being used to justify self-evident statements. It uses a very narrowly defined consistency model, linearizability, and a very narrowly defined availability model: every replica must be fully available, even if it cannot communicate with any other replica. On the one hand these definitions are perfectly correct, but on the other hand they are too narrow: many applications simply do not need that definition of consistency or availability. And if an application uses a different definition of those words, the CAP theorem is useless for it. So I don't see much point in applying it. By the way, since we started talking about mistakes in our industry, let's be honest and admit that cryptocurrency mining is a complete waste of electricity. I don't understand how anyone can seriously do that.

    Vadim: Besides, most storage technologies are now configurable for a specific task, that is, you can choose how they should behave in the presence of failures.

    Martin: That's right. Moreover, a significant share of technologies fit neither the CAP theorem's strict definition of consistency nor its definition of availability; that is, they are neither CP nor AP nor CA, but merely P. Nobody will say that about their software outright, but it can actually be a perfectly rational design strategy. This is one of the reasons I believe CAP does more harm than good when analyzing software: a significant share of design decisions, which may be entirely rational, simply cannot be expressed in CAP terms.

    The benefits of decentralization


    Vadim: What are the most pressing problems in developing data-intensive applications today? Which topics are being researched most actively? As far as I know, you are a proponent of decentralized computing and decentralized data storage.

    Martin: Yes. One of the points I argue in my research is that at the moment we rely too heavily on servers and centralization. In the early days of the Internet, when it evolved out of the ARPANET, it was designed as a highly resilient network in which packets can be sent along different routes and still reach their destination. If a nuclear strike took out any particular American city, the surviving part of the network would keep working; traffic would simply be routed around the failed sections. That design was a product of the Cold War. But then we decided to put everything in the cloud, and now almost everything passes in one way or another through an AWS data center somewhere in Virginia, in the eastern United States. At some point we abandoned the ideal of using the various parts of the network in a decentralized way and ended up with a handful of services that everything now depends on. I think it is important to return to a decentralized approach, in which more control over data belongs to end users rather than to services.

    When people talk about decentralization, they very often mean things like cryptocurrencies, because those are networks of interacting agents with no single centralized authority such as a bank. But that is not the decentralization I am talking about, because in my view cryptocurrencies are also extremely centralized: if you want to make a Bitcoin transaction, it has to go through the Bitcoin network, so everything is centralized around that network. The network's structure is decentralized in the sense that there is no single organizing entity, but the network as a whole is extremely centralized, since every transaction has to go through this one network and nothing else. I consider that a form of centralization too. In the case of cryptocurrencies it is unavoidable, since double-spending has to be prevented, and that is hard to do without a network that provides consensus on which transactions have taken place, and so on. But there are many applications that do not need anything like a blockchain and can work with a much more flexible data model. It is these decentralized systems that interest me most.

    Vadim: Since you mentioned blockchains, could you tell us about promising or lesser-known technologies in the field of decentralized systems? I've played with IPFS myself, but you have much more experience in this area.

    Martin: Honestly, I don't follow such technologies very closely. I have read a little about IPFS, but I haven't used it myself. We have worked a bit with Dat, which, like IPFS, is a decentralized storage technology. The difference is that IPFS has the Filecoin cryptocurrency attached to it, which is used to pay for data storage, whereas Dat has no blockchain attached. Dat simply lets you replicate data onto multiple machines in a peer-to-peer fashion, and for the project we were working on Dat was a great fit. We wrote software that lets users collaborate on a document, dataset or database, with every change to the data sent to everyone who holds a copy of it. In such a system Dat can handle the peer-to-peer side, taking care of the networking layer, that is, NAT traversal and getting through firewalls, which is quite a difficult problem. On top of it we wrote a CRDT layer with which several people can edit a document or dataset and share their edits quickly and conveniently. I think a similar system could be built on top of IPFS as well.
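
    To make the idea of such a CRDT layer concrete, here is a minimal, hypothetical sketch of a state-based CRDT, a last-writer-wins map, in Python. It only illustrates the merge property Martin describes (replicas converge regardless of the order in which they exchange state); it is not the actual library used in that project, and the timestamp scheme is an assumption.

```python
from typing import Any, Dict, Tuple

Timestamp = Tuple[int, str]  # (Lamport clock, replica id) gives a total order


class LWWMap:
    """Toy last-writer-wins map: a state-based CRDT."""

    def __init__(self, replica_id: str) -> None:
        self.replica_id = replica_id
        self.clock = 0
        self.entries: Dict[str, Tuple[Timestamp, Any]] = {}

    def set(self, key: str, value: Any) -> None:
        self.clock += 1
        self.entries[key] = ((self.clock, self.replica_id), value)

    def merge(self, other: "LWWMap") -> None:
        """Commutative, associative, idempotent merge: keep the newer write."""
        for key, (ts, value) in other.entries.items():
            if key not in self.entries or ts > self.entries[key][0]:
                self.entries[key] = (ts, value)
        self.clock = max(self.clock, other.clock)


# Two replicas edit independently, then exchange state in either order:
a, b = LWWMap("a"), LWWMap("b")
a.set("title", "Draft")
b.set("title", "Final")
a.merge(b); b.merge(a)
assert a.entries == b.entries  # both replicas converge to the same state
```

    A real collaborative editor needs a sequence CRDT (such as RGA, discussed below) rather than a simple map, but the convergence idea is the same.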

    Vadim: But wouldn't such a system be less responsive? After all, WebRTC connects nodes directly to each other, while IPFS is more of a distributed hash table.

    Martin: The thing is, WebRTC sits at a slightly different level of the stack. It is intended mainly for video calls; most likely it is being used by the software we are talking over right now. On top of that, WebRTC gives you a channel over which you can send arbitrary binary data, but building a replication system on top of it can be hard. With Dat and IPFS you don't have to do anything for that.

    You mentioned responsiveness, and that is a really important factor to keep in mind. Suppose we want to build a decentralized Google Docs. In Google Docs, the unit of change is a single keystroke, and each new character can be sent in real time to the other people working on the same document. On the one hand, that enables fast collaboration; on the other hand, it means that while writing a large document you have to send hundreds of thousands of single-character changes, and many existing technologies handle that kind of data poorly. Even if we assume that each keystroke takes only a hundred bytes to transmit, then for a document of 100,000 characters we would have to send 10 MB of data, although such a document normally takes no more than a few tens of kilobytes. Until a clever compression scheme is invented, this kind of data synchronization carries an enormous overhead. Many P2P systems also have no efficient way of creating snapshots of their state that could be used for a system like Google Docs. This is exactly the problem I am working on right now: an algorithm for more efficient document synchronization between several users. It should be an algorithm that does not need to store every single keystroke, because that takes too many resources, and it should make more efficient use of the network.
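
    As a rough, hypothetical illustration of that arithmetic, the sketch below encodes a single-character insert operation the naive way, as a standalone JSON object carrying its own identifiers, and multiplies it out. The field names are invented for the example, not any particular system's format.

```python
import json

# One hypothetical single-keystroke operation, roughly what a naive
# implementation might send over the network for each typed character.
op = {
    "action": "insert",
    "actor": "8f2b6a0c",                               # originating replica id
    "counter": 12345,                                  # per-actor sequence number
    "after": {"actor": "8f2b6a0c", "counter": 12344},  # predecessor element
    "value": "a",                                      # the character itself
}

per_op = len(json.dumps(op).encode())  # on the order of 100 bytes per keystroke
doc_chars = 100_000
print(per_op, "bytes per keystroke")
print(per_op * doc_chars / 1e6, "MB of operations for a 100,000-character document")
print(doc_chars / 1e3, "KB of actual text")
```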

    New CRDTs, formal verification in Isabelle


    Vadim: Could you tell us more about this? Have you managed to achieve more than 100x compression? Are we talking about new compression techniques or special CRDTs?

    Martin: Yes. So far we only have a prototype; it has not been fully implemented yet. More experiments are needed to find out how effective it is in practice, but some of our methods look promising. In my prototype I managed to reduce the size of a single edit from 100 bytes to 1.7 bytes. But again, this is only an experimental version so far, and the figure may still change a little. Either way, there is a lot of room for optimization in this area.
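
    As a toy illustration of where such savings can come from (this is not Martin's actual encoding), consecutive insertions by the same author have highly repetitive metadata, so a run-length style encoding can amortize the per-edit overhead across a whole typing run:

```python
def encode_typing_run(chars: str, actor: str, start_counter: int) -> bytes:
    """Encode a run of consecutive single-character inserts by one actor.

    Instead of repeating (actor, counter, predecessor) for every character,
    store them once in a small header followed by the raw text.
    """
    header = f"{actor}:{start_counter}:{len(chars)}:".encode()
    return header + chars.encode()


text = "hello, world! " * 1000                 # a run of 14,000 keystrokes
run = encode_typing_run(text, "8f2b6a0c", 1)
print(len(run) / len(text), "bytes per edit")  # ~1 byte/edit vs ~100 bytes naively
```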

    Vadim: So your talk at the Hydra conference will be about this?

    Martin: Yes. I will start with a short introduction to CRDTs and collaboration software and the problems that come up in that context. Then I will talk about the research we are doing in this area, which touches on many different problems. On the applied side, we have a JavaScript implementation of these algorithms, and on top of it we build working programs to better understand how the algorithms behave. At the same time, we are also working on formal methods for proving the correctness of these algorithms, because some of them are quite non-obvious, and we want to guarantee that they always converge to a consistent state. Many previously developed algorithms fail to converge in certain edge cases. To avoid that, we turned to formal proofs of correctness.

    Vadim: Do you use a proof assistant such as Coq or Isabelle for this?

    Martin: Yes, Isabelle.

    Editor's note: Martin will give a talk about Isabelle at Strange Loop.

    Vadim: Are you planning to publish these proofs?

    Martin: Yes, we published the first set of proofs a year and a half ago, together with a framework for verifying CRDTs. Using that framework we verified three CRDTs, the most important of which was RGA (Replicated Growable Array), a CRDT for collaborative text editing. The algorithm is not overly complicated, but it is non-obvious: you cannot tell just by looking at it whether it is correct, so a formal proof was necessary. We also worked on proving the correctness of several existing CRDTs, and most recently we have been creating our own CRDTs for new data models.
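
    For readers unfamiliar with RGA, here is a minimal Python sketch of its insertion rule as it is commonly described in the literature. It is only illustrative, it ignores deletions and assumes operations arrive in causal order, and it is not the verified Isabelle formalization.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

Id = Tuple[int, str]  # (Lamport timestamp, replica id): unique, totally ordered


@dataclass
class Elem:
    id: Id
    char: str


class RGA:
    """A toy Replicated Growable Array holding the document's characters in order."""

    def __init__(self) -> None:
        self.elems: List[Elem] = []

    def integrate_insert(self, pred: Optional[Id], new_id: Id, char: str) -> None:
        """Insert `char` with identifier `new_id` after element `pred`
        (None means the head of the document). Concurrent inserts after the
        same predecessor are ordered by descending ID, so replicas that apply
        the same operations end up with the same sequence."""
        i = 0 if pred is None else next(
            k + 1 for k, e in enumerate(self.elems) if e.id == pred)
        while i < len(self.elems) and self.elems[i].id > new_id:
            i += 1  # skip over concurrent inserts with larger IDs
        self.elems.insert(i, Elem(new_id, char))

    def text(self) -> str:
        return "".join(e.char for e in self.elems)


# Two replicas receive the same concurrent inserts in different orders:
r1, r2 = RGA(), RGA()
r1.integrate_insert(None, (1, "a"), "A"); r1.integrate_insert(None, (2, "b"), "B")
r2.integrate_insert(None, (2, "b"), "B"); r2.integrate_insert(None, (1, "a"), "A")
assert r1.text() == r2.text() == "BA"  # same result regardless of arrival order
```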

    Vadim: How much larger is the formal proof than the code of the algorithm itself? That can sometimes be a problem.

    Martin: It is indeed a real difficulty; we have to put a lot of work into the proofs. I just looked at the code: the description of the algorithm takes about 60 lines, quite compact, while the proof runs to more than 800 lines. So the proof is roughly 12 times longer. Unfortunately, that is very typical. On the other hand, having a formal proof gives us confidence that the algorithm is correct. Moreover, working on the proof helped us understand the algorithm better; formalization often has that effect. Ultimately, this lets you build better implementations of the algorithms.

    Vadim: Tell me, what audience is your talk aimed at? What prior knowledge is needed?

    Martin: I try to make my talks as accessible as possible and to bring everyone to the same level. I cover a lot of material, but I start from fairly simple things. It helps if listeners have some experience with distributed systems: sending data over the network via TCP, an understanding of how Git works, and the like. But beyond such basics nothing is really required; with them, our work is not hard to follow. I explain everything through examples and illustrate them with pictures. I hope the talk will be accessible to everyone.

    Event sourcing, low-level approach, XA transactions


    Vadim: I'd like to talk about your recent article on online event processing. As I understand it, you are a proponent of event sourcing. The approach is gaining popularity, and programmers try to apply it everywhere because of the advantages of a globally ordered log of operations. In what situations is event sourcing not the best approach? I'd like to help people avoid becoming disappointed with the technology because they tried to use it everywhere and in some cases it did not work well.

    Martin: This question needs to be discussed at two different levels. Event sourcing, in the form proposed by Greg Young and others, is a data-modeling mechanism. If your database has accumulated too many tables and transactions over those tables and has become too disorganized, event sourcing can help bring order to the data model. Events directly express what is happening at the level of the application logic: what action the user takes, how its consequences update the various tables, and so on. In essence, event sourcing lets you separate an action (an event) from its consequences.

    I came to event sourcing from a lower level. I was building scalable systems with technologies like Apache Kafka. Event sourcing is related to Apache Kafka in that both are based on events, but you don't have to use Kafka for event sourcing: you can do it with an ordinary database, or with a database built specifically for event sourcing. The two approaches are similar, but they are not tied to each other; they just overlap somewhat. A system like Apache Kafka is useful when you need to scale, when the stream of data is too large to be processed by a single-node database. With an event log like Kafka, that load can be spread across multiple machines. Kafka is especially useful if you need to integrate several different storage systems: from the same event you can update not only a relational database but also a full-text search index like Elasticsearch, or a cache like Memcached or Redis.
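
    Here is a rough Python sketch of that integration pattern. It assumes a local Kafka broker, Redis, and Elasticsearch; the topic name, document shape, and client calls are illustrative only (exact client APIs differ between library versions), not something prescribed in the interview.

```python
import json

from kafka import KafkaConsumer            # pip install kafka-python
from redis import Redis                    # pip install redis
from elasticsearch import Elasticsearch    # pip install elasticsearch

consumer = KafkaConsumer(
    "ads",                                 # hypothetical topic of ad events
    bootstrap_servers="localhost:9092",
    group_id="ad-materializers",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
cache = Redis(host="localhost", port=6379)
search = Elasticsearch("http://localhost:9200")

for msg in consumer:
    ad = msg.value                         # e.g. {"id": "42", "title": "...", "text": "..."}
    # Each downstream store is updated from the same event (in a real system
    # each store would typically have its own independent consumer group):
    cache.set(f"ad:{ad['id']}", json.dumps(ad))          # key-value cache
    search.index(index="ads", id=ad["id"], document=ad)  # full-text index
    # ...and another consumer would upsert the same event into the relational DB.
```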

    As for your original question, it is hard for me to say exactly when event sourcing should not be used. As a rule, I prefer the simplest approach that works. If the data model you need maps well onto a relational database with inserts, updates and deletes of rows, then just use that. Relational databases are a perfectly good option; they have served us well for a long time. But if the data model becomes too complex for a relational database, then move to event sourcing. The same principle applies at the lower level: if the data is small enough to fit in PostgreSQL on a single machine, do that; if one machine cannot handle all the data, turn to distributed systems like Kafka. In other words, I'll say it again: use the simplest approach that fits the task.

    Vadim: That's great advice. Besides, most application systems evolve constantly and the direction they will take is not always known in advance, so you can never know ahead of time which queries, access patterns and data flows will appear in them.

    Martin: Yes, and relational databases are especially useful here, because they now usually have JSON support (PostgreSQL, for example, supports it very well), which makes them particularly flexible. If you need to support new queries, you can simply create the missing indexes. You can change the schema and migrate the database. As long as the data is not too large and not too complex, all of this works just fine.
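
    A small sketch of what that flexibility looks like in PostgreSQL. The table, column, and connection details are made up for illustration; it assumes psycopg2 and a local database.

```python
import psycopg2  # pip install psycopg2-binary

conn = psycopg2.connect("dbname=app user=app")  # hypothetical connection string
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS events (
            id      bigserial PRIMARY KEY,
            payload jsonb NOT NULL
        )
    """)
    # A new query pattern appears? Add the missing index, no schema migration needed:
    cur.execute(
        "CREATE INDEX IF NOT EXISTS events_payload_gin ON events USING GIN (payload)"
    )
    # Query into the JSON documents directly:
    cur.execute(
        "SELECT payload->>'user_id' FROM events WHERE payload @> %s::jsonb",
        ('{"type": "click"}',),
    )
    rows = cur.fetchall()
```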

    Vadim: I have one more question about event sourcing. You mentioned an interesting example in which events from one queue are delivered to several consumers. Suppose we create a new document, say an ad, and several systems receive the event: a search engine based on Elasticsearch that makes the ad findable; a caching layer that puts it into a key-value cache based on Memcached; and a database that stores it in tables. These systems work simultaneously and in parallel.

    Martin: So you want to know what to do when some of these consumers have already been updated while others have not?

    Vadim: Yes. In that situation a user comes to the site, types in a search query and sees that, say, an apartment is available in the area, but after clicking on the ad gets a 404, because the database has not yet received the event and the document is not there yet.

    Martin: This is indeed a significant difficulty. Ideally you would want causal consistency across these systems: if one system contains some piece of data, then the data it depends on is also present in the other systems. Unfortunately, that is very hard to achieve across several different storage systems: whatever approach or mechanism you use to send updates to the different systems, there can always be concurrency problems. Even if you write to both systems simultaneously, a small network delay can make one of the writes land slightly earlier or later than the other, and reading from both systems can then reveal an inconsistency. There are research projects that try to achieve this kind of causal consistency, but it is hard to do if you are simply using Elasticsearch or Memcached as they are. The problem is that a proper solution needs a consistent snapshot spanning the search index, the cache, and the database. If you work with just a relational database, you get snapshot isolation: reads from the database behave as if you had a copy of the entire database, and every query returns the data as of the moment the snapshot was taken. Even if the data changes while you are reading, you still see the old values, because they are part of a consistent snapshot. In the case we are discussing, with Memcached and Elasticsearch, the problem could be solved with a consistent snapshot spanning those systems, but unfortunately neither Memcached nor Redis nor Elasticsearch provides an efficient snapshot mechanism that could be coordinated across multiple storage systems. Each system operates independently and, as a rule, simply returns the latest value of each key; there is usually no way to get an earlier but consistent version of the data. So I cannot recommend a best solution to this problem. I'm afraid that one way or another the storage systems' code would have to change. We need a snapshot mechanism fast enough to be used constantly, such snapshots might be needed several times per second rather than once a day, and existing systems do not yet allow snapshots across multiple stores. In general, this is a very interesting research topic. I hope someone takes it on, but so far I have not seen a satisfactory solution.
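
    To illustrate the single-database snapshot isolation Martin contrasts this with, here is a small psycopg2 sketch (the connection string and table name are hypothetical): inside one REPEATABLE READ transaction, PostgreSQL answers every query from the same consistent snapshot, which is exactly what is missing across Memcached, Redis, and Elasticsearch.

```python
import psycopg2  # pip install psycopg2-binary

conn = psycopg2.connect("dbname=app user=app")        # hypothetical connection
conn.set_session(isolation_level="REPEATABLE READ")   # snapshot isolation in PostgreSQL

with conn, conn.cursor() as cur:
    cur.execute("SELECT count(*) FROM ads")
    first = cur.fetchone()[0]
    # ...other transactions may insert or delete ads in the meantime...
    cur.execute("SELECT count(*) FROM ads")
    second = cur.fetchone()[0]
    assert first == second  # both reads see the same snapshot of the database
```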

    Vadim: Yes, we need some kind of uniform distributed multiversion concurrency control.

    Martin: Yes, something like distributed transactions. You could use distributed XA transactions, but unfortunately, in their current state they are not well suited to this purpose. XA only works with lock-based concurrency control, that is, reading a piece of data locks it so that nobody else can change it. That kind of concurrency control has very poor performance, so in practice it is not used much today. But without such locks, XA cannot produce snapshots. Perhaps a new distributed transaction protocol could be derived from this problem, one that would allow snapshot isolation across several different systems, but so far I have not seen anything like that.

    Vadim: Let's hope someone is working on this.

    Martin: It would be very useful, including for microservices. The accepted practice today is for each microservice to have its own storage, its own database, and for a microservice never to access another service's database directly, since that would break encapsulation. Each service manages only its own data; for example, there is a separate service for managing users with its own database, and if you need to know anything about users, you go through that service. This preserves encapsulation, because the details of the database schema stay hidden from other services. But it also makes it very hard to keep different services consistent with one another, because of exactly the problem we just discussed: two services may hold data that depends on each other, and one of them may be updated slightly earlier than the other. Then reads across the two services will return inconsistent results. As far as I know, there is no adequate solution to this problem for microservices yet.

    Vadim: It just struck me that processes in society and government work in a very similar way. They are essentially asynchronous and offer no delivery guarantees. If someone's passport number has changed, they have to prove that it changed and that they are still the same person.

    Martin: That's right. But in society we can deal with these problems: we can recognize that the information in some database is out of date and simply repeat the request another day. In software, all of these mechanisms have to be built explicitly, because the software itself does not think.

    Vadim: At least not yet. I'd also like to talk about the benefits of event sourcing. With this approach, you can stop processing events when a bug is detected and resume once a fixed version is deployed, so the system is always in a consistent state. That is a very useful and popular feature, but in banking software, for example, you cannot use it that way. Imagine a bank running a system that keeps receiving financial transactions while account balances go stale, because event processing is suspended until a bug-free version is deployed. How do you avoid such a situation?

    Martin: I don't think the system would actually be suspended while a new version is deployed. Rather, when a bug is found, the system keeps working while a fixed version of the code is prepared, and that version then runs alongside the old one. The corrected version has to process all events that have arrived since the buggy code was deployed, possibly writing its results to a separate database. After that, you can switch over and retire the old version. That way the system never stops working: developers have time to fix the bug, and they can undo its consequences, because the input events can be reprocessed.
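
    A rough sketch of that recovery pattern with Kafka (the topic, consumer group, and projection names are hypothetical): the patched code joins under a new consumer group, replays the log from the beginning, and builds its projection in a separate store while the old version keeps serving.

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

fixed_consumer = KafkaConsumer(
    "transactions",
    bootstrap_servers="localhost:9092",
    group_id="balance-projector-v2",  # new group => its own, independent offsets
    auto_offset_reset="earliest",     # replay the event log from the start
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

balances_v2 = {}                      # stand-in for a separate output database

for msg in fixed_consumer:
    event = msg.value                 # e.g. {"account": "A1", "amount": 25}
    # The corrected projection logic runs here; once it has caught up with the
    # head of the log, reads are switched to it and the old projection retired.
    balances_v2[event["account"]] = balances_v2.get(event["account"], 0) + event["amount"]
```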

    Vadim: Yes, that's an excellent approach when you control the storage, but it does not account for side effects on external systems.

    Martin: That's right. Once data has been sent to a third-party system, it may not be possible to correct it. But this is the same situation accountants face. When a quarterly report is prepared, the state of all transactions at the end of the quarter is recorded, and income and profit are calculated from those figures. But sometimes information about a transaction arrives late (say, someone forgot to hand in a receipt) even though the transaction itself belongs to the previous quarter. In that case accountants record it in the next quarter as an amendment to the previous one. Usually such corrections amount to a negligible sum, nothing breaks because of them, and in the end the correct totals are accounted for. Accounting has worked this way for centuries, for as long as accounting itself has existed. I think a similar approach can work for software systems as well.

    Vadim: Interesting. That's an unusual way of looking at building systems.

    Editor's note: Vladimir Krasilshchik gave a talk about this two years ago.

    Martin: Yes, it is not the usual way of looking at things, and at first it can be confusing. But I'm afraid there is no way around it: when working with distributed systems, our knowledge of the overall state of the world is inevitably imprecise, and that imprecision inevitably surfaces during data processing.

    Hydra 2019 Conference, Professional Growth and Development


    Vadim: In your opinion, how important are conferences like Hydra? After all, distributed systems differ a lot, and it is hard for me to imagine that after the conference every attendee will immediately start applying the new approaches they learned about.

    Martin: The field really is broad, but in my opinion a significant share of the interesting ideas in distributed systems live at the conceptual level; they are not direct instructions like "use this database" or "use that technology". Rather, they are ways of thinking about systems and software, and such ideas can be applied quite widely. Honestly, it does not matter that much to me whether attendees learn about some new language or tool; what matters far more, in my view, is that they learn how to think about the systems they create.

    Vadim: When it comes to topics as complex as the one you will cover in your talk, what is the advantage of a conference talk over a publication in a scientific journal? After all, a paper can cover the topic in much more detail. Or do you think both are necessary?

    Martin: I think papers and talks serve different purposes. In a paper you can analyze a problem very carefully, precisely and rigorously. A talk is more about getting other people interested in the topic and starting a discussion. I like going to conferences partly because of the conversations after the talks. People from the audience regularly come up to me and say they have tried something similar and ran into such-and-such problems, and by thinking about their problems I learn a lot myself. That is an important reason for me to speak at conferences: I learn from other people's experience and share mine when it is useful to others. But fundamentally, a conference talk is more of an introduction to a problem, while a paper is an in-depth analysis of a rather narrow question. So, in my view, these are very different genres, and both are needed.

    Vadim: And the last question. What do you do for your own professional growth as a researcher and developer? Could you recommend any conferences, blogs or communities for people who want to grow in the field of distributed systems?

    Martin: Great question. There is a lot of interesting material out there. For example, many recordings of conference talks have been posted online. Books, including mine, give you both an introduction to the topic and references to other work for further reading; if something raises questions, you can follow those references. Beyond that, it is very important to build systems yourself and see how they behave in practice, and to share experience with other people. That is partly what conferences are for: talking to people face to face. But you can also communicate in other ways; for example, there is a Slack channel for people interested in distributed systems. And, finally, nothing stops you from learning from colleagues at the company where you work. In general, there is no single right way to learn; you have to choose what works best for you.

    Vadim: Thank you very much for the valuable advice and the interesting discussion.

    Martin: You're welcome, it was a pleasure talking with you.

    Vadim: Great, see you at the conference!

    A reminder: this interview was recorded in advance. When you write comments, keep in mind that Martin will not read them; we can only pass along the most interesting ones to him. If you really want to talk to the author, he will be giving the talk "Syncing data across user devices for distributed collaboration" at the Hydra 2019 conference, which will be held on July 11-12, 2019 in St. Petersburg. Tickets can be purchased on the official website.
