
Virtualization: history and development trends

These days VMware, Microsoft, Citrix, and Red Hat lead the virtualization market, but these companies were not at the forefront of the technology. It all began in the 1960s with the work of specialists at companies such as General Electric (GE), Bell Labs, IBM, and others.
Over more than half a century, virtualization has come a long way. Today we will talk about its history, the current state of the market, and forecasts for the technology's development in the near future.
The early days
Virtualization was born in the 1960s as a means of expanding the effective memory of computers.
In those days, the goal was to make it possible to run several programs at once. The first supercomputer in which operating system processes were separated from user programs was Atlas, a project of the Department of Electrical Engineering at the University of Manchester (funded by Ferranti Limited).
Atlas was the fastest supercomputer of its time. This was achieved in part by separating system processes: a supervisor (the component responsible for managing key resources such as processor time) was isolated from the component that executed user programs.

Atlas supercomputer
Atlas was also the first system to use virtual memory (the "one-level store"), separating system storage from the memory used by user programs. These developments were the first steps toward the layer of abstraction on which all major virtualization technologies were later built.
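The core idea behind a one-level store is address translation through a page table. The sketch below is a minimal Python model of that idea; the 512-word page size matches Atlas, but the page-table mapping itself is hypothetical, chosen only for illustration.

```python
PAGE_SIZE = 512  # words per page (Atlas used 512-word pages)

# Toy page table: virtual page number -> physical frame number.
# The mapping is hypothetical, chosen for the example.
page_table = {0: 3, 1: 7, 2: 1}

def translate(virtual_address: int) -> int:
    """Translate a virtual address to a physical address via the page table."""
    page, offset = divmod(virtual_address, PAGE_SIZE)
    if page not in page_table:
        # On Atlas, a missing page triggered a transfer from drum to core.
        raise KeyError(f"page fault: virtual page {page} is not resident")
    return page_table[page] * PAGE_SIZE + offset
```

A program only ever sees virtual addresses; the supervisor decides which physical frame (core or drum) actually backs each page, which is exactly the abstraction later hypervisors generalized.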
Project M44/44X
The next step in the development of virtualization technologies was a project at IBM. American researchers went further than their British counterparts and developed the concept of a "virtual machine," attempting to divide the computer into separate smaller parts.
The main computer was the "scientific" IBM 7044 (the M44), which hosted virtual machines (the 44Xs) that partially simulated it - at this stage, the virtual machines did not yet fully simulate the hardware.
CP/CMS
IBM was working on the S/360 mainframe, which was planned to replace the corporation's previous products. S/360 was a single-user system that could run multiple processes simultaneously.
The corporation's focus began to change after July 1, 1963, when scientists at the Massachusetts Institute of Technology (MIT) launched Project MAC. Initially the acronym stood for Mathematics and Computation, reflecting the project's direction, but MAC later came to mean Multiple Access Computer.

IBM S/360
Project MAC received a $2 million grant from the US defense research agency ARPA (now DARPA); among its assigned tasks was research in the fields of operating systems, artificial intelligence, and computational theory.
To solve some of these problems, the MIT scientists needed computer hardware that several users could work with simultaneously. Requests for such systems were sent to IBM, General Electric, and several other vendors.
IBM at the time was not interested in building such a computer - management believed there was no market demand for such devices - while MIT, for its part, did not want to base its research on a modified version of the S/360.
Losing the contract was a real blow to IBM, especially once the corporation learned that Bell Labs was also interested in multitasking computers.
To meet the needs of MIT and Bell Labs, the CP-40 mainframe system was created. It was never sold to private customers and was used only by scientists, but it is an extremely important milestone in the history of virtualization: it later evolved into CP-67, the first commercial mainframe with virtualization support.
The CP-67 operating system was called CP/CMS: CP was short for Control Program, and CMS for Console Monitor System.
CMS was a single-user interactive operating system, while CP was the program that created the virtual machines. The scheme worked like this: the CP module ran on the mainframe and launched virtual machines running the CMS operating system, which users then worked with directly.
This project was the first to implement interactivity: whereas earlier IBM systems could only "swallow" the programs submitted to them and print the results of the computation, CMS users could now interact with their programs while they ran.
The public release of CP/CMS took place in 1968. IBM subsequently built multi-user operating environments for the IBM System/370 (1972) and System/390 (VM/ESA) computers.
Other projects of that time
IBM's projects had the greatest impact on the development of virtualization technologies, but they were not the only work in this direction. Other such projects included:
- Livermore Time-Sharing System (LTSS) - a development of the Lawrence Livermore laboratory. Researchers created an operating system for the Control Data CDC 7600 supercomputers, which took the title of fastest supercomputer from the Atlas project.
- Cray Time-Sharing System (CTSS; an earlier MIT system, the Compatible Time-Sharing System, hid behind the same abbreviation - do not confuse them) - a system for the first Cray supercomputers, created by the Los Alamos laboratory in collaboration with the Livermore laboratory. Cray X-MP computers running CTSS were used by the US Department of Energy for nuclear research.
- New Livermore Time-Sharing System (NLTSS) - the successor to LTSS, supporting the most advanced technologies of its time (for example, TCP/IP and LINCS). The project was wound down in the late 1980s.
Virtualization in the USSR
In the USSR, the analogue of the IBM System/370 environment was the SVM ("Virtual Machine System") project, launched in 1969. One of its main objectives was the adaptation of IBM's VM/370 Release 5 (a descendant of CP/CMS). SVM implemented full nested virtualization: another copy of SVM could be launched inside a virtual machine, and so on.

XEDIT text editor screen in SVM PDO. Image: Wikipedia
SoftPC, Virtual PC and VMware
In 1988, Insignia Solutions introduced the SoftPC software emulator, which made it possible to run DOS applications on Unix workstations - functionality previously unavailable. At the time a PC capable of running MS-DOS cost about $1,500, whereas SoftPC gave an existing UNIX workstation the same capability for about $500.
In 1989, a Mac version of SoftPC was released; users of that platform could now run not only DOS applications but Windows programs as well.
The success of SoftPC led other companies to launch similar products. In 1997, Connectix released the Virtual PC program for the Mac. With this product, Mac users could run Windows, which helped alleviate the shortage of software for the Mac.
In 1998, VMware was founded; in 1999 it brought to market a similar product, VMware Workstation. Initially the program ran only on Windows, but support for other operating systems was added later. That same year the company released the first virtualization tool for the x86 platform, called VMware Virtual Platform.
Market development in the 2000s
In 2001, VMware released two new products that allowed the company to enter the corporate market: ESX Server and GSX Server. GSX let users run virtual machines on top of an existing operating system such as MS Windows - the approach known as a Type-2 hypervisor. ESX Server is a Type-1 hypervisor and does not require a host operating system to run virtual machines.
Type-1 hypervisors are considerably more efficient: they have more room for optimization and do not spend resources on starting and maintaining a host operating system.

Differences between Type-1 and Type-2 hypervisors. Image: IBM.com
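The distinction between the two types comes down to where the hypervisor sits in the software stack. A schematic Python sketch of the layering (the function and layer names are illustrative, not any vendor's API):

```python
def layers(hypervisor_type: int) -> list[str]:
    """Return the software stack between hardware and guest OS, bottom-up."""
    if hypervisor_type == 1:
        # Type-1 (bare-metal, e.g. ESX Server): the hypervisor
        # runs directly on the hardware.
        return ["hardware", "hypervisor", "guest OS"]
    if hypervisor_type == 2:
        # Type-2 (hosted, e.g. GSX Server): the hypervisor is an
        # application running inside a host operating system.
        return ["hardware", "host OS", "hypervisor", "guest OS"]
    raise ValueError("hypervisor type must be 1 or 2")
```

The extra "host OS" layer in the Type-2 stack is precisely where the efficiency loss described above occurs: every guest operation passes through, and shares resources with, a general-purpose operating system.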
With the release of ESX Server, VMware rapidly captured the corporate market, outpacing its competitors.
Following VMware, other players entered the market: in 2003 Microsoft bought Connectix and relaunched the Virtual PC product, and in 2005 it released Microsoft Virtual Server for the enterprise.
In 2007, Citrix entered the enterprise virtualization market by acquiring XenSource, the company behind the open-source Xen virtualization platform; the product was subsequently renamed Citrix XenServer.
The overall history of the development of virtualization technologies is presented in an infographic from CloudTweaks:

Current market situation
Currently there are several types of virtualization: server, network, desktop, memory, and application virtualization. The most actively developing segment is server virtualization.
According to the analyst firm IT Candor, the server market in 2013 was worth an estimated $56 billion ($31 billion for physical servers and another $25 billion for virtual ones). VMware's leadership in the virtual server market at that time was not in doubt:

Nevertheless, VMware's flagship product, vSphere Hypervisor, has competitors: Microsoft Hyper-V, Citrix XenServer, Oracle VirtualBox, and Red Hat Enterprise Virtualization Hypervisor (RHEV). Sales of these products are growing, and VMware's market share is declining.
Analysts at the American exchange NASDAQ predict that the company's share of the overall virtualization market will fall to just over 40% by 2020.

Trends
According to analysts at a number of large companies, infrastructure virtualization, storage virtualization, and mobile virtualization will become the key growth points for virtualization technologies, while desktop virtualization will gradually wither away.
Other promising areas related to virtualization technologies include:
Microvirtualization
Corporate servers are usually well protected from external intrusion, so attackers often penetrate corporate networks through employees' workstations. Once compromised, such desktops become a springboard for developing the attack further into the organization.
Bromium has created a desktop protection technology built on virtualization. The tool creates micro-virtual machines "inside" which ordinary user tasks are launched (for example, opening a web page or a document). When the document or browser window is closed, the micro-virtual machine is destroyed.
Because such virtual machines are isolated from the operating system, attackers cannot reach it through malicious files: even if malware is installed inside a micro-virtual machine, it is destroyed along with the machine when the window is inevitably closed.
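The lifecycle described above - create a throwaway environment, run the untrusted task inside it, destroy the environment on close - can be sketched as a toy model. This is only a conceptual illustration using a subprocess and a temporary directory; a real micro-VM like Bromium's isolates at the hardware-virtualization level, not via OS processes.

```python
import shutil
import subprocess
import sys
import tempfile

def run_in_micro_vm(code: str) -> str:
    """Run an untrusted task in a disposable environment, then destroy it.

    Conceptual model of the micro-VM lifecycle only: here the "machine"
    is just a subprocess with its own scratch directory.
    """
    workdir = tempfile.mkdtemp(prefix="micro-vm-")
    try:
        # The "micro-VM": a separate process confined to a fresh directory.
        result = subprocess.run(
            [sys.executable, "-c", code],
            cwd=workdir, capture_output=True, text=True, timeout=10,
        )
        return result.stdout
    finally:
        # On close, the environment is destroyed - along with anything
        # malicious the task may have written into it.
        shutil.rmtree(workdir)
```

The security argument is visible in the structure: whatever state the task accumulates lives only in the disposable environment, which is unconditionally torn down in the `finally` block.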
Virtual Storage Area Network Technology (Virtual SAN)
Storage area networks (SANs) allow organizations to use their infrastructure, including virtual infrastructure, more efficiently. However, such products for connecting external storage devices (optical drives, disk arrays, etc.) are often too expensive for small companies.
With the advent of VMware's Virtual SAN product, the capabilities of a conventional SAN became available to smaller businesses. An advantage of this project is that VMware Virtual SAN is built right into the company's main hypervisor.
P.S. If you notice a typo, mistake, or inaccuracy in the text, send us a private message and we will fix it quickly. Thanks for your attention!