The forgotten history of OOP

Original author: Eric Elliott
Most of the programming paradigms we use today were first studied mathematically in the 1930s, using lambda calculus and the Turing machine, two alternative formulations of universal computation (formalized systems capable of performing general-purpose computation). The Church-Turing thesis showed that lambda calculus and Turing machines are functionally equivalent: anything that can be computed with a Turing machine can be computed with lambda calculus, and vice versa.



There is a common misconception that Turing machines can compute anything that can be computed. There are classes of problems (for example, the halting problem) that Turing machines can solve only in some cases. When the word "computable" is used in this text, it means "computable by a Turing machine".

Lambda calculus takes a top-down approach to computation, built on the application of functions. The Turing machine takes a bottom-up, imperative (step-by-step) approach.

Low-level programming languages, such as machine code and assembly, appeared in the 1940s, and by the end of the 1950s the first popular high-level languages had emerged, implementing both the functional and the imperative approach. Lisp dialects are still widely used today, among them Clojure, Scheme, and AutoLisp. The 1950s also produced languages such as FORTRAN and COBOL, examples of imperative high-level languages that are still alive, although it should be noted that the C family of languages has displaced both COBOL and FORTRAN in most domains.

The roots of imperative and functional programming lie in the formal mathematics of computation and predate digital computers. Object-oriented programming (OOP) came later; it grew out of the structured programming revolution of the 1960s and 1970s.

The first objects I know of were used by Ivan Sutherland in his seminal Sketchpad application, created between 1961 and 1962 and described in his 1963 thesis. The objects were graphical figures displayed on an oscilloscope screen (perhaps the first ever use of a graphical computer display), and they supported inheritance through dynamic delegates, which Sutherland called "masters" in his work. Any object could become a master, and additional instances of an object were called "occurrences". This makes Sketchpad the first known example of a programming language with prototypal inheritance.

The first programming language widely known as "object-oriented" was Simula, specified in 1965. Like Sketchpad, Simula worked with objects, but it also included classes, class-based inheritance, subclasses, and virtual methods.

A virtual method is a method defined in a class that is intended to be redefined (overridden) by subclasses. Virtual methods let programs call methods that may not exist at compile time, using dynamic dispatch to determine which concrete method to invoke at run time. JavaScript is dynamically typed and resolves methods through the delegation chain, so it has no need to expose the concept of virtual methods to programmers. In other words, every method in JavaScript is dispatched at run time, so methods in JavaScript do not need to be declared "virtual" to support this feature.
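
To make this concrete, here is a minimal sketch (with illustrative names) of how JavaScript resolves a method call at run time by walking the delegation (prototype) chain, so overriding requires no virtual keyword:

```javascript
// Every JS method call is dispatched at run time by walking the
// prototype (delegation) chain, so "overriding" needs no `virtual`.
const animal = {
  speak() { return 'generic sound'; }
};

// `dog` delegates to `animal`; its own `speak` shadows the parent's.
const dog = Object.create(animal);
dog.speak = function () { return 'woof'; };

// `cat` defines no override, so lookup falls back to `animal`.
const cat = Object.create(animal);

console.log(dog.speak()); // 'woof' - found on `dog` itself
console.log(cat.speak()); // 'generic sound' - resolved via delegation
```

The lookup happens on every call, which is why replacing `dog.speak` later would change the behavior of existing call sites immediately.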

The father of OOP on OOP


"I made up the term 'object-oriented', and I can tell you I did not have C++ in mind." Alan Kay, OOPSLA conference, 1997.

Alan Kay coined the term "object-oriented programming" for the Smalltalk programming language (1972). Smalltalk was developed by Alan Kay, Dan Ingalls, and other researchers at the Xerox PARC research center as part of the Dynabook project. Smalltalk was more object-oriented than Simula: in Smalltalk, everything is an object, including classes, integers, and blocks (closures). The initial implementation of the language, Smalltalk-72, had no way to create subclasses; that feature appeared in Smalltalk-76.

While Smalltalk supported classes, and hence subclassing, those ideas were not its focus. It was a functional language, influenced by Lisp as much as by Simula. According to Alan Kay, treating classes as a code-reuse mechanism is a mistake: the programming industry's heavy focus on subclassing distracts from the real benefits of object-oriented programming.

JavaScript and Smalltalk have a lot in common. I would say that JavaScript is Smalltalk's revenge on the world for misunderstanding OOP. Both languages support the following features:

  • Objects.
  • First-class functions and closures.
  • Dynamic types.
  • Late binding (functions and methods can be replaced at run time).
  • OOP without class-based inheritance.

"I'm sorry that I long ago coined the term 'objects' for this topic, because it gets many people to focus on the lesser idea. The big idea is messaging." Alan Kay

In a 2003 email exchange, Alan Kay clarified what he had in mind when he called Smalltalk an "object-oriented language."

"OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things." Alan Kay

In other words, according to Alan Kay, the essential ingredients of OOP are:

  • Messaging.
  • Encapsulation.
  • Dynamic binding.

It is important to note that Alan Kay, the man who coined the term "OOP" and brought it to the masses, did not consider inheritance and polymorphism to be essential parts of OOP.

The essence of OOP


The combination of messaging and encapsulation serves several important purposes:

  • Avoiding shared mutable state by encapsulating state and isolating other objects from local changes to it. The only way to affect another object's state is to ask it (not command it) to change, by sending it a message. State changes are controlled at a local, cellular level rather than exposed to other objects.
  • Decoupling objects from each other. The sender of a message is only loosely coupled to the receiver, through the messaging API.
  • Adaptability and resilience to change at run time, through late binding. Run-time adaptability provides many significant benefits that Alan Kay considered essential to OOP.
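
As a rough sketch of these ideas, the following hypothetical counter component (all names are illustrative) encapsulates its state in a closure and changes it only in response to messages:

```javascript
// The component's state lives in a closure. Other code cannot mutate
// `count` directly; it can only *ask* for a change by sending a message.
const createCounter = () => {
  let count = 0; // private: visible only inside this closure
  return {
    send(message) {
      switch (message) {
        case 'INCREMENT': count += 1; break;
        case 'RESET':     count = 0;  break;
        // unknown messages are simply ignored
      }
    },
    read: () => count
  };
};

const counter = createCounter();
counter.send('INCREMENT');
counter.send('INCREMENT');
console.log(counter.read()); // 2
```

The sender only depends on the message names, not on how the counter stores or updates its state, so the two sides stay loosely coupled.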

Alan Kay's inspirations for these ideas were his background in biology and what he knew about the ARPANET (an early version of the Internet): specifically, biological cells and individual computers on a network. Even then, Alan Kay imagined software running on huge, distributed computers (the Internet), where individual computers act like biological cells, operating independently on their own isolated state and communicating with other computers by passing messages.

"I realized that the cell/whole-computer metaphor would get rid of data [...]". Alan Kay

In saying "get rid of data," Alan Kay was certainly aware of the problems caused by shared mutable state and the tight coupling caused by shared data, topics that are widely discussed today. But back in the late 1960s, ARPANET programmers were frustrated by having to choose a data model representation for their programs before they started building them. The developers wanted to get away from that practice, because committing in advance to a particular data representation makes programs harder to change later.

The problem was that different representations of data required different code and different syntax to access them in the programming languages of the time. The holy grail would be a universal way to access and manipulate data. If all data looked the same to a program, it would solve many problems of developing and maintaining software.

Alan Kay was trying to "get rid of" the idea that data and programs are, in some sense, separate entities. They are not treated as such in Lisp or in Smalltalk. There is no separation between what you can do with data (values, variables, data structures, and so on) and program constructs like functions. Functions are first-class citizens, and programs are allowed to change as they run. In other words, there is no special, privileged treatment of data in Smalltalk.

Moreover, Alan Kay thought of objects as algebraic structures, which make certain mathematically provable guarantees about their behavior.

"My math background made me realize that each object could have several algebras associated with it, that there could be whole families of these, and that they would be very, very useful." Alan Kay

This has proven to be the case, and it forms the basis for objects such as promises and lenses, both of which were influenced by category theory.

The algebraic nature of Alan Kay's vision for objects would allow objects to afford formal verification, deterministic behavior, and improved testability, because algebras are essentially operations that obey a few rules expressed as equations.

In programmer jargon, "algebras" are abstractions made up of functions (operations) accompanied by specific laws enforced by unit tests that those functions must pass (axioms, equations).
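
For example, the two functor laws can be written down as plain unit-test-style assertions against JavaScript arrays (a sketch; eq is an illustrative deep-equality helper, not a library function):

```javascript
// The functor laws as plain assertions. Arrays satisfy them, which is
// what makes `.map()` safe to reason about algebraically.
const id = x => x;
const compose = (f, g) => x => f(g(x));

const double = n => n * 2;
const inc = n => n + 1;
const xs = [1, 2, 3];

// Illustrative structural-equality helper for these simple values.
const eq = (a, b) => JSON.stringify(a) === JSON.stringify(b);

// Identity law: mapping the identity function changes nothing.
console.assert(eq(xs.map(id), xs));

// Composition law: mapping a composition of two functions is the same
// as mapping one and then the other.
console.assert(eq(xs.map(compose(double, inc)), xs.map(inc).map(double)));
```

Any data structure whose map operation passes these two tests is a lawful functor, regardless of what it contains.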

These ideas were forgotten for decades in most C-family object-oriented languages, including C++, Java, C#, and so on. But they are beginning to find their way back into recent versions of the most widely used object-oriented languages.

You could say that the software world is rediscovering the benefits of functional programming and reasoned thinking in the context of object-oriented languages.

Like JavaScript and Smalltalk before them, most modern object-oriented languages are becoming more and more "multi-paradigm". There is no reason to choose between functional programming and OOP. When we look at the historical essence of each of these approaches, they appear to be not only compatible, but complementary ideas.

What, according to Alan Kay, is most important in OOP?

  • Encapsulation.
  • Messaging.
  • Dynamic binding (the ability of programs to evolve and adapt while they run).

What is insignificant in OOP?

  • Classes.
  • Class-based inheritance.
  • Special treatment of objects, functions, or data.
  • The new keyword.
  • Polymorphism.
  • Static typing.
  • Treating classes as "types".

If you know Java or C#, you might think that static typing and polymorphism are essential ingredients of OOP, but Alan Kay preferred to focus on generic behaviors expressed in algebraic form. Here is an example written in Haskell:

fmap :: (a -> b) -> f a -> f b

This is the signature of a generic functor map that works with unspecified types a and b, applying a function from a to b in the context of a functor of a to produce a functor of b. "Functor" is math jargon that essentially means "supporting a map operation". If you are familiar with [].map() in JavaScript, you already know what that means.

Here are a couple of JavaScript examples:

// isEven = Number => Boolean
const isEven = n => n % 2 === 0;
const nums = [1, 2, 3, 4, 5, 6];

// The map method takes a function `a => b` and an array of `a` values (via `this`).
// It returns an array of `b` values.
// Here the `a` values have type `Number` and the `b` values type `Boolean`.
const results = nums.map(isEven);
console.log(results);
// [false, true, false, true, false, true]

The .map() method is generic in the sense that a and b can be of any type, and .map() handles that gracefully, because arrays are data structures that implement the algebraic functor laws. The types don't matter to .map(), because it never tries to manipulate the values directly. Instead, it applies a function that expects and returns whatever types are correct for the application.

// matches = a => Boolean
// Here `a` can be any type that supports comparison.
const matches = control => input => input === control;
const strings = ['foo', 'bar', 'baz'];
const results = strings.map(matches('bar'));
console.log(results);
// [false, true, false]

Generic type relationships can be difficult to express correctly and completely in languages like TypeScript, but they are quite easy to express in the Hindley-Milner type system used by Haskell, which supports higher-kinded types (types of types).

Most type systems are too restrictive to allow free expression of dynamic and functional ideas such as function composition, free object composition, run-time object extension, combinators, lenses, and so on. In other words, static types often make it harder to build software by composition.

If your type system is too restrictive (as in TypeScript or Java), you are forced to write more convoluted code to accomplish the same goals than in languages with a freer approach to typing. That doesn't mean static types are a bad idea, or that all implementations of static types are equally restrictive. I have run into far fewer problems with Haskell's type system, for example.

If you are a fan of static types and don't mind the restrictions, I wish you smooth sailing. But if you find some of the ideas in this text difficult to implement because it is hard to type functions built by composing other functions, and composite algebraic structures, blame the type system, not the ideas. People like the comforts of an SUV, but nobody complains that it can't fly. For flight, you need a vehicle with more degrees of freedom.

If restrictions simplify your code, great! But if restrictions force you to write more complicated code, perhaps something is wrong with the restrictions.

What is an "object"?


Over time, the word "object" has acquired many extra connotations. What we call "objects" in JavaScript are simply composite data types, with no implication of class-based programming or of Alan Kay's message passing.

In JavaScript, those objects can, and frequently do, support encapsulation, message passing, behavior sharing, and even polymorphism through subclasses (albeit via the delegation chain rather than type-based dispatch).

Alan Kay wanted to do away with the distinction between a program and its data. JavaScript achieves this goal to some extent by keeping an object's methods in the same place as the properties that store its data. Any property can be assigned any function. You can compose an object's behavior dynamically and change its semantics while the program runs.
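
A small sketch of that flexibility (the names are illustrative): methods are just properties holding functions, so an object's behavior can be replaced while the program runs:

```javascript
// Methods and data properties live side by side on the same object.
const greeter = {
  name: 'world',
  greet() { return `Hello, ${this.name}!`; }
};

console.log(greeter.greet()); // 'Hello, world!'

// Replace the behavior at run time: no class, no recompilation.
greeter.greet = function () { return `Goodbye, ${this.name}.`; };
console.log(greeter.greet()); // 'Goodbye, world.'
```

From the caller's point of view nothing changed; the next greeter.greet() call simply dispatches to the new function.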

An object is just a composite data structure; nothing special is required for it to count as an object. But programming with objects does not make code "object-oriented", any more than using functions makes code "functional".

What we call OOP today is not real OOP


Since "object" in modern programming languages means much less than it meant to Alan Kay, I use the word "component" instead of "object" to describe the rules of real OOP. Some objects are owned and manipulated directly by other JavaScript code, but components must encapsulate their own state and control it themselves.

This is what real OOP is:

  • Programming with components (Alan Kay calls them "objects").
  • Component state must be encapsulated.
  • Message passing is used for communication between entities.
  • Components can be added, changed, or replaced at run time.

Most object behaviors can be specified generically using algebraic data structures. Inheritance is simply not needed. Components can reuse behaviors from shared functions and imported modules without making their data public.
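
A minimal sketch of such reuse without inheritance, using illustrative names: independent behaviors are defined as plain functions over an encapsulated state object and composed into a component:

```javascript
// Reusable behaviors: each is a plain function closing over shared state.
const withHealth = state => ({
  damage: amount => { state.hp -= amount; },
  isAlive: () => state.hp > 0
});

const withName = state => ({
  describe: () => `${state.name} (${state.hp} hp)`
});

// Compose the independent behaviors into one component.
// The `state` object itself is never exposed to callers.
const createPlayer = name => {
  const state = { name, hp: 100 };
  return Object.assign({}, withHealth(state), withName(state));
};

const player = createPlayer('Ada');
player.damage(30);
console.log(player.describe()); // 'Ada (70 hp)'
```

Adding or removing a behavior is just adding or removing one function from the composition; no base class has to change.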

Manipulating objects in JavaScript, or using class-based inheritance, does not mean you are doing OOP. Using components in the ways described above does. But well-established terminology is very hard to dislodge, so perhaps we should leave the term "OOP" alone and call this style of programming with "components" something else: "Message-Oriented Programming" (MOP)? Below, we will use the term "MOP" when talking about message-oriented programming.

Fittingly, a mop is a cleaning tool, and mops are known for restoring order.

What does a good MOP look like?


Most modern software has some user interface (UI) responsible for user interaction, some code that manages application state (user data), and some code that works with the system or communicates over the network.

Each of those systems may need long-lived processes, such as event listeners, and application state: something to track network connections, the state of UI controls, and the state of the application itself.

Good MOP means that instead of all those systems reaching into each other's state and manipulating it directly, they communicate through messages. When the user clicks a "Save" button, a "SAVE" message might be dispatched. The component responsible for managing state can interpret that message and forward it to a handler that updates the state (such as a pure reducer function). After the state has been updated, the state component might dispatch a "STATE_UPDATED" message to the user interface component, which in turn interprets the state, decides which parts of the UI need to be refreshed, and passes the updated state on to the subcomponents responsible for specific UI elements.
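
The flow described above can be sketched roughly like this (a simplified, Redux-like illustration; all names are assumptions for the sketch, not a real library API):

```javascript
// A pure reducer: interprets a message and returns the next state.
const reducer = (state, message) => {
  switch (message.type) {
    case 'SAVE': return { ...state, saved: true };
    default:     return state;
  }
};

// A tiny state component: owns the state, accepts messages, and
// notifies subscribers (e.g. the UI component) with "STATE_UPDATED".
const createStore = (reduce, initialState) => {
  let state = initialState;
  const listeners = [];
  return {
    dispatch(message) {
      state = reduce(state, message);
      listeners.forEach(fn => fn({ type: 'STATE_UPDATED', state }));
    },
    subscribe: fn => { listeners.push(fn); },
    getState: () => state
  };
};

const store = createStore(reducer, { saved: false });
store.subscribe(msg => console.log(msg.type)); // the "UI" reacts here
store.dispatch({ type: 'SAVE' });
console.log(store.getState().saved); // true
```

Note that the UI never touches the state directly; it only reacts to the messages the store sends it.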

Meanwhile, the component responsible for network connections can watch for the user's connection to another computer on the network, listen for messages, and dispatch updated views of the state so they can be saved on the remote machine. That component handles the network mechanics, knows whether the connection is up, and so on.

None of these systems needs to know the details of the other parts; each only cares about its own concerns. System components can be taken apart and recombined like building blocks. They implement standardized interfaces, which means they can interoperate. As long as a component satisfies the well-known interface requirements, it can be replaced by another component with the same interface that does the same thing differently, or that accepts the same messages but does something entirely different. Components can even be swapped for one another while the program runs, without breaking it.
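
A toy illustration of such hot swapping (all names are hypothetical): two storage components accept the same messages, so one can replace the other at run time:

```javascript
// Two interchangeable "storage" components with the same message API.
const memoryStorage = () => {
  const data = {};
  return {
    send: ({ type, key, value }) =>
      type === 'PUT' ? (data[key] = value) : data[key] // 'GET' reads
  };
};

// Same interface, different behavior: it logs, then delegates.
const loggingStorage = () => {
  const inner = memoryStorage();
  return {
    send: message => {
      console.log('storage message:', message.type);
      return inner.send(message);
    }
  };
};

let storage = memoryStorage();
storage.send({ type: 'PUT', key: 'a', value: 1 });

storage = loggingStorage(); // hot swap: callers don't notice
storage.send({ type: 'PUT', key: 'a', value: 2 });
console.log(storage.send({ type: 'GET', key: 'a' })); // 2
```

Because callers depend only on the message interface, nothing upstream has to change when the implementation is swapped.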

The components of a software system don't even have to be on the same computer; the system can be decentralized. The network storage component could save data to a decentralized storage system like IPFS, so that the user does not depend on the health of any particular machine, and their data remains safe and protected from attackers.

OOP was partly inspired by the ideas of the ARPANET, and one of the ARPANET's goals was to build a decentralized network that could be resilient to attacks such as a nuclear strike.

A good MOP system can share that kind of resilience by using components that support hot swapping while the application runs. It can keep working if the user is on a cell phone and drives into a tunnel, losing network coverage. It can keep working if a hurricane knocks out power to one of the data centers where its servers live.

It is time for the software world to abandon the failed experiment of class-based inheritance and to embrace the mathematical and scientific principles that defined OOP in the first place.

It is time for us, the developers, to build more flexible, more resilient, more beautiful software using a harmonious combination of MOP and functional programming.

Incidentally, the acronym "MOP" is already used for "Monitoring-Oriented Programming", but unlike OOP, that term may well fade away quietly.

So don't be discouraged if the term "MOP" never catches on in programmer jargon. Just put your OOP in order using the MOP principles described above.

