A reference model for the interaction of computing systems?

    (Please don't be too harsh, these are thoughts before going to bed.)

    At one time the first network protocols had no rigid division into layers: data was "just transmitted" and "just read". Gradually it became clear that inventing a universal all-in-one machine every time (incompatible with all the other all-in-one machines) is expensive and inconvenient.

    So protocols were divided into layers: physical/link, network, transport, and application. Then people tried to bolt the theoretical seven-layer OSI model onto this (practically used) TCP/IP model. It never really caught on (name me five presentation-layer protocols).

    Nobody, however, doubts the value of separating the ups and downs of the physical and link layers from the network layer. Protocols change, hardware changes, but IP is still the same...

    Roughly the same thing is happening now with computers. At first they were universal all-in-one machines that could initialize hardware, draw graphics, and work as a server. But that is expensive. The clearest example is the floppy disk still needed to install the still-on-sale Windows 2003. In 2010! A floppy disk! Why? Because the poor OS is forced to think about which controllers it has inside and which interrupts they sit on. And at the same time it must also provide multitasking, schedule processor time on a multiprocessor system, schedule disk operations, and do other complicated things. Ah yes, and access rights on top of that.


    Emulation has always been a laboratory miracle. Look, we can run a PlayStation game! Look, we can run games for the NES, SNES, the ZX... And here is an IBM/360 emulator. Cool! And here is DOSBox, which runs UFO...

    All of that stayed at the level of test tubes, laboratories, and maybe gaming. And all of it came with a thousandfold slowdown (the price of interpretation)...

    Then came virtualization, which differed from emulation by exactly that factor of a thousand. Although the first virtual machines did not differ much from emulators in functionality (they were merely faster), they already had the important (the most important) property: the host's performance is handed to the guest with only a small overhead. That was the essential property. On top of it, infrastructure then grew (or rather, is only just starting to appear).

    And so, in essence, we now have something like the OSI (or TCP/IP) model for computers. We have singled out the abstraction layer that deals with hardware, storage, network card initialization, resource allocation, and so on. In other words, a piece has been "cut off" from the bottom of the OS, leaving it with the high-level tasks.

    The OS itself, however, still remains an all-in-one machine. Virtualization adapts to it, although it would be far more correct to develop OSes that can work ONLY with the participation of a hypervisor. Such an OS would of course lose its universality, but it would very likely be compact, stable (less code, fewer bugs), and convenient for the hypervisor to work with.

    Some steps in this direction have already been taken by Xen, which has a paravirtualization kernel (de facto there is no virtualization left there at all, just interaction between the OS and the hypervisor, in which the OS outsources all of the low-level work, much as IP hands the low-level work of forming frames, sending them, watching for collisions and so on over to Ethernet).
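
    To make the word "outsourcing" a little more concrete, here is a minimal sketch of the idea in C. All the names in it (hv_call, HV_NET_SEND and so on) are invented for illustration and are not the real Xen hypercall interface, which is of course much richer; the point is only that the guest's "driver" shrinks to a call downstairs.

        /* A purely illustrative sketch: a paravirtualized guest does not program
         * a network card directly, it asks the layer below (the hypervisor) to
         * do the work, much as IP asks Ethernet to deal with frames.
         * All names here are hypothetical, not the real Xen hypercall ABI. */
        #include <stdio.h>
        #include <stddef.h>

        enum hv_op { HV_NET_SEND, HV_BLOCK_READ, HV_CONSOLE_WRITE };

        /* A single entry point "down" into the hypervisor.  In a real system
         * this would be a trap/hypercall; here it is a stub so the sketch compiles. */
        static long hv_call(enum hv_op op, const void *buf, size_t len)
        {
            (void)buf;
            printf("hypercall %d: %zu bytes handed to the layer below\n", (int)op, len);
            return 0;
        }

        /* The guest kernel's network "driver" shrinks to a thin wrapper:
         * no registers, no interrupts, no DMA rings, just a call downstairs. */
        static long guest_net_send(const void *packet, size_t len)
        {
            return hv_call(HV_NET_SEND, packet, len);
        }

        int main(void)
        {
            const char packet[] = "hello from a paravirtualized guest";
            return (int)guest_net_send(packet, sizeof packet);
        }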

    ...And I have to say that, at least in the case of Xen, the old dispute between Torvalds and Tanenbaum is being resolved in an unexpected way: the concept of a "microkernel" is replaced by the concept of a "microhypervisor" (about 1,500 lines, as Xen's authors put it); there is a "pocket virtual machine" (Dom0) for all the non-critical things like disk and network operations; the hypervisor forwards requests from DomU to Dom0 (roughly the way a microkernel is supposed to); and the hypervisor itself is busy with the "real" things, such as managing memory and allocating processor time.
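
    And if the microkernel analogy sounds abstract, this is roughly what the division of labour looks like when drawn as code. Again, everything here (io_request, dom0_handle, hypervisor_dispatch) is hypothetical; real Xen uses shared rings and event channels between frontends and backends, not function calls. The sketch only shows who does what: the hypervisor routes, Dom0 drives the hardware.

        /* Illustration-only model of the "microhypervisor as a router" idea:
         * a DomU's I/O request is not handled by the hypervisor itself but is
         * forwarded to the driver domain (Dom0), the way a microkernel forwards
         * IPC to user-space servers.  All names and structures are hypothetical. */
        #include <stdio.h>

        enum req_kind { REQ_DISK, REQ_NET };

        struct io_request {
            int from_domain;      /* which DomU issued the request       */
            enum req_kind kind;   /* disk or network                     */
            int payload;          /* e.g. a sector number or a packet id */
        };

        /* "Dom0 backend": the pocket VM that actually owns the drivers. */
        static void dom0_handle(const struct io_request *r)
        {
            printf("Dom0: handling %s request %d from DomU %d\n",
                   r->kind == REQ_DISK ? "disk" : "net", r->payload, r->from_domain);
        }

        /* The microhypervisor contains no drivers at all: it only routes the
         * request to Dom0 and goes back to scheduling CPUs and memory. */
        static void hypervisor_dispatch(const struct io_request *r)
        {
            dom0_handle(r);
        }

        int main(void)
        {
            struct io_request req = { 3, REQ_DISK, 42 };
            hypervisor_dispatch(&req);
            return 0;
        }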

    Most likely, the near future will bring exactly such systems, built around a "kind uncle up above" that hands out and distributes resources. Most likely, all hypervisor vendors will converge on a single call format (so that a virtual machine from one hypervisor can easily be launched on another). XenCloud is already taking steps here: there is the xva format, which claims to be a "cross-hypervisor" virtual machine format.
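
    Purely as an illustration of what "a single format" would have to carry, a portable descriptor might look something like this. This is not the xva format and not any existing standard; every field is an assumption, just enough to show that the hypervisor-neutral part is a small description, while the heavy part (the disk image) simply travels alongside it.

        /* Hypothetical sketch of a hypervisor-neutral VM descriptor.
         * Not xva, not any real standard; the fields are illustrative only. */
        #include <stdio.h>

        struct vm_descriptor {
            const char *name;
            unsigned    vcpus;
            unsigned    memory_mb;
            const char *disk_image;   /* path to a hypervisor-agnostic disk image */
            const char *nic_mac;
        };

        int main(void)
        {
            struct vm_descriptor vm = { "demo", 2, 1024, "demo-disk0.img", "de:ad:be:ef:00:01" };
            printf("launch %s: %u vCPU, %u MB RAM, disk %s, MAC %s\n",
                   vm.name, vm.vcpus, vm.memory_mb, vm.disk_image, vm.nic_mac);
            return 0;
        }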

    I see the future in the emergence of a kind of "computer stack" that describes each level as an independent entity with standardized interfaces "up" and "down". The main difference between this stack and the existing layers of hardware abstraction in modern OSes will be the standardization of those interfaces. It will become possible to take the hypervisor from Windows, the intermediate layer from Linux, and the userspace from Solaris. Or the other way around: take Xen, the Linux kernel, and on top of it several different systems quietly coexisting with one another... roughly the way we can now run TCP over both IPv4 and IPv6, while IP itself runs over more link-layer protocols than you can count...
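
    The TCP analogy, by the way, can be shown with perfectly real code: the same client works over IPv4 and IPv6 without a single change, because getaddrinfo() hides the lower layer behind a standardized interface. Nothing here is invented, it is plain POSIX sockets; example.com and port 80 are just placeholders.

        /* A standardized interface between layers in action: the same TCP client
         * code works over IPv4 or IPv6, because getaddrinfo() presents the lower
         * layer through one uniform interface (plain POSIX sockets). */
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/socket.h>
        #include <netdb.h>

        static int tcp_connect(const char *host, const char *port)
        {
            struct addrinfo hints, *res, *p;
            memset(&hints, 0, sizeof hints);
            hints.ai_family   = AF_UNSPEC;    /* v4 or v6: the caller does not care */
            hints.ai_socktype = SOCK_STREAM;  /* TCP on top of whatever IP we get   */

            if (getaddrinfo(host, port, &hints, &res) != 0)
                return -1;

            int fd = -1;
            for (p = res; p != NULL; p = p->ai_next) {
                fd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
                if (fd < 0)
                    continue;
                if (connect(fd, p->ai_addr, p->ai_addrlen) == 0)
                    break;                    /* connected over v4 or v6, whichever */
                close(fd);
                fd = -1;
            }
            freeaddrinfo(res);
            return fd;
        }

        int main(void)
        {
            int fd = tcp_connect("example.com", "80");
            puts(fd >= 0 ? "connected" : "failed");
            if (fd >= 0)
                close(fd);
            return 0;
        }

    That is exactly the property I would like to see between a hypervisor, a kernel, and a userspace.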

    (drifting off into fantasy) And there will also be tunneling of hypervisors (like IP over IP, GRE and so on today): there will be a Xen hypervisor, inside it a VMware hypervisor, which in turn will be able to start yet another hypervisor from inside the guest, ...
