Microservice Architecture = Distributed Computing

Greetings to all readers of Habr! My name is Igor Rybakov, and I am the technical director of DAR, an IT company based in Kazakhstan. Today I want to share how the principles of parallel computing are understood and applied in modern information systems, and to make the case for studying and putting into practice the concepts of parallel and distributed computing when developing them.


Parallel computing, or what the founder of Intel predicted


First, a little history. In 1965, Gordon Moore, one of the founders of Intel, noticed a pattern: new chip models appeared roughly a year after their predecessors, and each time the number of transistors in them approximately doubled. In other words, the number of transistors on an integrated circuit chip doubled about every 24 months. This observation came to be known as Moore's law. Moore predicted that the number of elements on a chip would grow from 2^6 (about 60) in 1965 to 2^16 (65 thousand) by 1975.


To put the additional computing power predicted by Moore's law to practical use, it became necessary to rely on parallel computing. For several decades, processor manufacturers kept increasing clock speeds and instruction-level parallelism, so that old single-threaded applications ran faster on new processors without any changes to the program code.


Since about the mid-2000s, processor manufacturers have favored multi-core architectures, but to get the full benefit of the increased CPU performance, programs have to be rewritten accordingly. Here a problem arises: according to Amdahl's law, not every algorithm can be fully parallelized, which imposes a fundamental limit on how efficiently a computational problem can be solved on a supercomputer.
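As a quick illustration (my own sketch, not from the original article): if a fraction p of a program can be parallelized, Amdahl's law bounds the speedup on n processors by 1 / ((1 - p) + p / n).

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Upper bound on speedup when a fraction `p` of the work
    is parallelizable and runs on `n` processors (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelized, 128 cores give only ~17x speedup,
# and the limit as n grows is 1 / (1 - 0.95) = 20x.
for n in (2, 8, 32, 128):
    print(n, round(amdahl_speedup(0.95, n), 1))
```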


Distributed computing approaches are used to overcome this limit. Distributed computing is a way of solving computationally intensive problems using several computers, most often combined into a parallel computing system. Sequential computations in distributed information systems are performed alongside the simultaneous solution of many computational tasks. A distinguishing feature of distributed multiprocessor computing systems, in contrast to local supercomputers, is that their performance can be increased practically without limit through scaling.


Since around the mid-2000s, computers have been mass-equipped with multi-core processors, which enables parallel computing, and modern network technologies make it possible to combine hundreds and thousands of machines. This led to the emergence of so-called "cloud computing".


The use of parallel computing


Modern information systems, such as e-commerce platforms, urgently need to provide high-quality services to their customers. Companies compete by inventing ever newer services and information products. These services must be designed for high load and high fault tolerance, because their users are no longer a single office or a single country, but the whole world.


At the same time, it is important to keep projects economically viable and not waste money on expensive server hardware if the legacy software running on it uses only a fraction of its computing power.


Application developers thus face a new problem: information systems have to be rewritten both to meet the requirements of modern business and to make better use of server resources in order to reduce the total cost of ownership. The tasks that modern information systems must solve are diverse.


They range from machine learning and big data analytics to keeping the core functionality of an existing system stable during peak periods, such as mass sales in an online store. All these tasks can be solved by combining parallel and distributed computing, for example by implementing a microservice architecture.


Service Quality Control


To measure the actual quality of service delivered to a client, the concept of a service-level agreement (SLA) is used, that is, a set of statistical metrics of system performance.


For example, developers may set themselves the goal that 95% of all user requests are served with a response time not exceeding 200 ms. By the way, this is quite a realistic non-functional requirement, because users do not like to wait.
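Checking such an SLA boils down to comparing the 95th-percentile response time against the target. Here is a minimal sketch in Python (my own illustration, not from the original article), assuming response times are collected in milliseconds:

```python
import statistics  # standard library

def meets_sla(latencies_ms, target_ms=200.0, percentile=95):
    """Return True if the given percentile of response times is within the target."""
    # quantiles() with n=100 yields the 1st..99th percentiles;
    # method="inclusive" interpolates between observed samples.
    p = statistics.quantiles(latencies_ms, n=100, method="inclusive")[percentile - 1]
    return p <= target_ms

# Example: 95 fast requests and 5 slow ones still satisfy the 95% / 200 ms target.
print(meets_sla([120.0] * 95 + [450.0] * 5))  # True
```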


To assess user satisfaction with the service, you can use the Apdex index, which reflects the ratio of satisfactory responses to unsatisfactory ones. For example, if our SLA threshold is T = 1.2 seconds, then requests with a response time of no more than 1.2 seconds are counted as satisfied. Requests that take longer than 1.2 seconds but no more than 4T (4.8 seconds) are counted as tolerating, and requests that exceed 4T, that is, take more than 4.8 seconds, are counted as frustrated (failures).
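To make this concrete, here is a minimal sketch (my own illustration, not from the original article) that computes the Apdex score using the standard formula (satisfied + tolerating / 2) / total:

```python
def apdex(latencies_s, t=1.2):
    """Apdex score for response times in seconds with target threshold `t`:
    satisfied <= t, tolerating in (t, 4t], frustrated > 4t."""
    satisfied = sum(1 for x in latencies_s if x <= t)
    tolerating = sum(1 for x in latencies_s if t < x <= 4 * t)
    return (satisfied + tolerating / 2) / len(latencies_s)

# 8 fast, 1 tolerable and 1 frustrated request -> (8 + 0.5) / 10 = 0.85
print(apdex([0.3] * 8 + [2.0, 6.0]))
```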


Conclusions


In conclusion, I would like to say that developing microservices in practice requires understanding and applying the principles of distributed and parallel computing.


Of course, you will have to sacrifice some things:


  • simplicity of development and maintenance: developers have to put extra effort into implementing distributed modules;
  • when working with databases, you may have to give up strict consistency (ACID) and adopt other approaches;
  • computing power is spent on network communication and serialization;
  • time is spent implementing DevOps practices for more complex monitoring and deployment.

In return, we get, in my opinion, much more:


  • the ability to reuse ready-made modules and, as a result, bring products to market quickly;
  • high scalability of systems, which means serving more customers without violating the SLA;
  • by relying on eventual consistency (in line with the CAP theorem), the ability to manage huge amounts of data, again without violating the SLA;
  • the ability to record every change in the system's state for further analysis and machine learning.
