Time is partial: on the similarity of distributed consistency models and weak memory models

Original authors: Sohum Banerjea, Aldrin Montana, Lindsey Kuper
Hello everyone!

Today we would like to revisit the topic of concurrent and sequential execution in programs, especially in distributed systems. Back in September we published the article “Synchronicity is a myth” on this subject, and now we are publishing a translation of a more serious study, which we hope will help you navigate distributed systems better.

There is only one hard problem in computer science: recognizing that cache invalidation errors are misnamed. They are just off-by-one errors in the time domain.
- Author unknown

Time is a strange thing.

Time is strange precisely because we really, really want to think of it as totally ordered. It seems to us that any event at 3:00 pm happens (as we would put it) before any event at 4:00 pm, without exceptions, arguments, or compromise.

However, computer science knows plenty of cases where this requirement has to be relaxed. It shows up at the level of processors, compilers, and network nodes. Again and again in computing, at different levels of the stack, we find ourselves facing two events without knowing in which order they occurred. Time is evidently not total; it is partial.

Why? Because we do not know, and the level of abstraction we live above does not answer the question. Whether by accident or by design, our computational abstractions offer no guarantees about the order of events. And the freedom to reorder events often allows us to build far more performant and available systems.

A processor may have a memory ordering model; it describes which guarantees the processor declines to give you about the order in which assembly instructions execute, for example, which instruction ran earlier and which later. The processor decides how to pipeline instructions and executes them out of arrival order; that is, it uses its own silicon more effectively than we ever could by hand.

A language may have a memory consistency model (“memory model” for short); it describes which guarantees the language does not give you when it generates assembly, for example, when it distributes instructions across multiple threads. Such reordering is inherited, by definition, from the hardware memory model, and it largely explains why compilers provide such a “weak” notion of time. It is against this memory model, as implemented in the language, that you program when you write lock-free code.

The famous example of a language-level memory model is the strong and weak atomics of the C++11 standard. By default, C++ provides atomic operations with full synchronization (sequential consistency), but it also lets you weaken the memory ordering to improve performance. The behavior thus provided is meant to serve as an abstraction over the mainstream processor architectures in use today (x86, POWER, and ARM).
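A minimal sketch of the difference (the std::atomic API and the std::memory_order names are standard C++; the function and variable names are our own):

#include <atomic>

std::atomic<int> counter{0};

void strong_increment() {
    // Default: sequentially consistent. All threads agree on a single
    // total order over all seq_cst operations, at a synchronization cost.
    counter.fetch_add(1);  // same as fetch_add(1, std::memory_order_seq_cst)
}

void weak_increment() {
    // Relaxed: only per-object coherence is guaranteed. No ordering is
    // implied between this and operations on other atomic objects.
    counter.fetch_add(1, std::memory_order_relaxed);
}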

Finally, a distributed system may have its own consistency model; it describes which guarantees the system is not going to give you about the order of events across clients and replicas on a global network. The reordering here, caused directly by communication latency or lack of synchronization, is the basic reason why, in a distributed system, you cannot avoid this weak notion of time. It is against this consistency model that you program when you write a distributed application.

In practice there is a huge zoo of consistency models one can program a distributed system against. In every case, these models describe the (desired) behavior of the system as observed from outside it. If I, a particular client or a particular thread, write a value and then immediately read it, am I guaranteed to see a write at least as recent as my own? If time were not partial, if we always had a clear picture of the order in which operations occur in our system, the answer would naturally be yes. It would be strange even to ask.

But time is partial, so the question has to be asked.

Consistency models (I mean, memory models) are abstractions


Talking about such a partial order is often difficult and always unpleasant. We would like to assume that at every level of the stack time is absolute, whether we are dealing with ACID transactions or with atomic operations and locks. The stronger the guarantees, the more natural they feel, and the easier it is to program with them!

But we all want speed. Whether it is distributed systems, where strict consistency is sacrificed for availability, or lock-free programming, where a weak memory model is adopted to avoid the cost of synchronization, it is usually worth it for a programmer at any level of the stack to wade into these difficult arguments.

Shared-memory consistency models and distributed consistency models are both abstractions. They describe, to the programmer working with the system, the interface of that system. They make clear which kinds of behavior, permitted by a weak memory model, we should expect now that the ordering properties we take for granted no longer apply. The two kinds of model may look similar, but each community has developed its own discourse for discussing them. The notions they use differ, even where they overlap.

You can imagine how confusing this gets. So what do we do?

Characterizing time as anywhere from two to eight kinds of partial order


In his 2014 book Principles of Eventual Consistency, Sebastian Burckhardt attempts an exhaustive characterization of the many variants of consistency models. Along with other mathematical structures, this characterization uses two logical orderings of events, “visibility” and “arbitration”, which also appear in earlier papers by Burckhardt et al.; see, for example, the paper on specification and verification of replicated data types (2014).

“Visibility” is a partial order capturing potential causality. It tracks which events (possibly on other replicas) are visible to which other events. No requirements are imposed on visibility other than acyclicity: events on one object may be visible to events on another object, and whether an event is a read or a write has no bearing on which events it is visible to.

“Arbitration” is a total order that tracks how a distributed system, wherever a choice arises, will rule on which event counts as earlier and which as later.
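In symbols, paraphrasing Burckhardt's definitions in our own notation: over a set of events $E$,

$$\mathit{vis} \subseteq E \times E \ \text{(acyclic)}, \qquad \mathit{ar} \subseteq E \times E \ \text{(a total order on all of } E\text{)}$$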

Since distributed consistency models resemble memory models, it turns out that visibility and arbitration are also useful for talking about memory models. In particular, in an appendix to the 2014 work, Burckhardt demonstrates how “close” the C++11 weak memory model is to per-object causal consistency, with some interesting deviations. That is what the rest of this post is about.

To begin, let us specialize visibility and arbitration into “reads-from” and “modification order”. “Reads-from” restricts visibility to pairs where a write and a read touch the same object, and each read reads-from at most one write (or none).
This corresponds to the fact that a shared-memory processor holds, at any moment, just one value per memory cell for a given object, even if different threads access it at different causal moments (by contrast, in a distributed system a logical object may be written at many individual replicas at once).

“Modification order” is the analogous specialization of arbitration: it is per-object and contains only writes. Again, this specialization rests on the fact that a weak memory specification gives categorical guarantees only at the level of a single object.
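Schematically, in the same home-made notation: for each object $x$,

$$\mathit{rf} \subseteq \{(w, r) \in \mathit{vis} \mid w \text{ writes and } r \text{ reads the same object}\}, \qquad \mathit{mo}_x = \mathit{ar} \text{ restricted to the writes to } x$$

with every read having at most one incoming $\mathit{rf}$ edge.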

Next, let us walk through the consistency axioms formulated by Burckhardt et al. and see how they apply to a weak memory model. Note: despite the word “axioms”, these are simply properties that a given memory model may or may not provide. Burckhardt's work focuses on the properties that define cross-object causal consistency.

Eventual visibility


For any given event, there cannot be infinitely many events that do not see it. That is, every event is eventually visible to the system.

Stating this condition for a weak memory model is logically a bit trickier: one has to say that for any given write there cannot exist infinitely many read operations that read neither that write nor later writes (in modification order).
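One way to write this down (our notation): for every write $w$,

$$\left|\{\, r \mid r \text{ reads neither } w \text{ nor any } w' \text{ with } w \xrightarrow{\mathit{mo}} w'\,\}\right| < \infty$$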

The C++11 specification does not guarantee this axiom, although in practice it is hard to construct a counterexample.

No circular causality


When we track “potential causality” at the level of threads / client sessions together with visibility / reads-from, we want time not to run backwards. That is why the transitive closure of session order together with reads-from is required to be acyclic. In distributed systems this property is usually taken for granted; in a weak memory model, it is exactly the property that rules out user-visible effects of certain kinds of speculative execution.

Burckhardt et al. point out that this axiom is “not validated” by the C++11 specification, and it is unclear whether “satisfaction cycles” can actually be observed in practice.

Causality axioms


To pin down what causality means in the case of a weak memory model, we must specify exactly which events may influence the outcomes of which other events. Let us start with our standard causality axioms: session guarantees. These are four related properties describing the coherence of reads and writes occurring in different threads, and they are stated per object (see Fig. 23 in Burckhardt et al.); a sketch of how C++11 exposes them follows the list.

  • RYW (Read Your Writes): a read that follows a write to the same cell, within the same thread / replica / session, must read data at least as recent as that write. The distributed-systems version of this property is stated purely in terms of visibility, whereas the weak-memory version must appeal to both reads-from and modification order.
  • MR (Monotonic Reads): later reads (within one thread, on one cell) must see data at least as recent as earlier reads.
  • WFR (Writes Follow Reads): if a write follows a read within a thread, on the same cell, it must be ordered in modification order after the write that the read read from.
  • MW (Monotonic Writes): later writes (within one thread, to one cell) must come later in modification order.
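As an illustration, C++11 guarantees these four coherence properties per atomic object even at the weakest ordering, memory_order_relaxed. A minimal sketch (the names are ours):

#include <atomic>

std::atomic<int> x{0};

void session(int v) {
    x.store(v, std::memory_order_relaxed);      // a write
    int a = x.load(std::memory_order_relaxed);  // RYW: must see v or a later
                                                // write in x's modification order
    int b = x.load(std::memory_order_relaxed);  // MR: must see a value at least
                                                // as recent as a
    x.store(b + 1, std::memory_order_relaxed);  // WFR/MW: ordered after the
                                                // writes observed above in x's
                                                // modification order
    (void)a; (void)b;
}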

The original versions of WFR and MW each come in two variants, one for arbitration and one for visibility; but this only matters when working with data types more complex than integer registers.

These properties capture the common-sense notion of causality; but they miss the most interesting part. In particular, in a weak-memory analysis these notions of causality are confined to a single thread / replica / session and to the single cell / object being written: Burckhardt et al. call the result “per-object causal visibility” and “per-object causal arbitration”; see again Fig. 23. These properties place no constraints at all on the system's behavior when different threads write to different cells.

The cross-object causality axioms, then, describe how cause-and-effect relationships act across different objects / memory cells.

  • COCV (Cross-Object Causal Visibility): the same as RYW, but without the requirement that the final read happen in the same thread / replica / session. Reads of an object that are causally later than writes to that object must return data at least as recent as those writes (see the sketch below).
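A sketch of the shape of this guarantee in C++, using release/acquire to establish the cross-thread causal edge (the names are ours):

#include <atomic>

std::atomic<int> data{0};
std::atomic<bool> ready{false};

void writer() {
    data.store(42, std::memory_order_relaxed);
    ready.store(true, std::memory_order_release);  // publish
}

void reader() {
    // The acquire load is causally later than the release store it reads.
    if (ready.load(std::memory_order_acquire)) {
        // COCV-style guarantee: this read of a *different* object must see
        // data at least as recent as the causally earlier write of 42.
        int d = data.load(std::memory_order_relaxed);
        (void)d;  // d == 42 here
    }
}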

The C++11 specification does validate these properties. Note: they are phrased in such a way that the visibility constraints on writes and the arbitration of modification order play little role in the definitions.

The same cannot be said of the last property.

  • COCA (Cross-Object Causal Arbitration): analogous to Monotonic Writes, but across threads, just as COCV is RYW across threads. However, since modification order constrains only the writes to a single object, the weak-memory formulation allows writes to different objects to be ordered mutually inconsistently across the system: the writes need not agree with reads-from or with per-thread order.

Concretely, COCA in a weak memory model is a much weaker property. That is why, under a weak memory model, the following code can return {x ≡ 0, y ≡ 0}: the order within each thread need not be consistent with the per-object reads-from and modification orders. Note: by RYW, x := 0 → x := 1 cannot be in the modification order (otherwise thread A's read could not return 0), and the same goes for y; so the modification order must contain x := 1 → x := 0 and y := 1 → y := 0. Together with the order of operations inside each thread, the modification order therefore clearly forms a cycle.

Thread A: y := 0; x := 1; return x
Thread B: x := 0; y := 1; return y


y := 0 →(thread A) x := 1 →(mo) x := 0 →(thread B) y := 1 →(mo) y := 0
COCA under a weak memory model permits such a cycle. The point is not that thread order / reads-from contradicts modification order, but that each thread sees a history of writes that is consistent on its own; these histories need only agree with the histories of other threads one object at a time.
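The same litmus test written against C++11 relaxed atomics, where the {0, 0} outcome is permitted (a sketch; the names are ours):

#include <atomic>
#include <thread>

std::atomic<int> x{0}, y{0};
int ra = -1, rb = -1;

void thread_a() {
    y.store(0, std::memory_order_relaxed);
    x.store(1, std::memory_order_relaxed);
    ra = x.load(std::memory_order_relaxed);  // may read 0: coherence only
                                             // forbids reading a write that
                                             // precedes x := 1 in x's
                                             // modification order
}

void thread_b() {
    x.store(0, std::memory_order_relaxed);
    y.store(1, std::memory_order_relaxed);
    rb = y.load(std::memory_order_relaxed);  // may read 0, symmetrically
}

int main() {
    std::thread ta(thread_a), tb(thread_b);
    ta.join(); tb.join();
    // {ra == 0, rb == 0} is an allowed outcome: the modification orders of
    // x and y need not be consistent with each other or with thread order.
}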

What does all this mean?


Time is partial.

Even though time seems to us to flow in perfect order, the study of distributed systems and of weak memory models clearly shows that it does not. That is why, in both settings, our usual over-approximation that time is total costs us performance we cannot afford.
And once we accept that time really is partial, we discover many small but important differences between the varieties of partiality. Even these two fields, so similar at first glance, differ in a host of subtle ways about which kinds of events are allowed to influence one another.

It takes digging into the technical details of all these properties before anyone can express the properties of one field in the language of the other.

Time is partial. Maybe we just need to get used to that.
