Windows Containers: 10 Years Before Microsoft
In October 2014, Microsoft announced a partnership with Docker under which container virtualization will be brought to a future version of Windows Server, expected in the third quarter of 2015. To support containers, Microsoft will use its own technology developed in the Drawbridge research project. Drawbridge virtualization is similar to the approach taken by the Wine project, which allows Windows applications to run on computers with UNIX-like operating systems. A key feature of both technologies is that no hardware virtualization (processor, memory, I/O devices) is required; only the Windows execution environment is emulated.
Parallels began developing containers for Windows long before Microsoft turned its attention to them. Since Microsoft's announcement has made the topic highly relevant, in this post I will describe the technologies our Windows containers are built on, the features they offer, and the main scenarios in which they are used. In the comments I am ready to answer any questions, including the most popular one: how many containers can be run on a single host.
Parallels Containers
The power of computing systems continues to grow in line with Moore's law, but how can these systems be used efficiently with minimal overhead? One possible answer is containers, which control resources and isolate the applications run by different users. Without false modesty, the pioneer and leader in promoting container technologies is Parallels, which, in addition to its products for Linux-based operating systems, has been offering its own implementation of containers for Windows for almost 10 years. The Parallels approach is based on virtualization of the OS kernel, which, after modification, can run an arbitrary number of Windows user environments.

A user environment lives on a virtual disk that contains links to the Windows OS files plus its own registry; the Windows system files launched from that disk form the environment for users (including the administrator), with support for domain membership, running applications, and networking through a virtual adapter. This is what we call a Parallels Container for Windows.

And what's under the hood?
Parallels containers, both for the Linux kernel and for Windows, rely on kernel modifications. While the Linux community accepted most of Parallels' changes into the mainline kernel source, Microsoft was not as open. While working on the containers, we sent Microsoft a large number of reports about limitations and outright bugs we had found in Windows kernel components. Some of them were taken into account in later versions of Windows, and Microsoft released separate hotfixes to resolve more than a dozen issues. So we can say that we too participate, albeit indirectly, in the development of Windows kernel components. Unfortunately, however, our initiatives to expand this cooperation did not go any further at Microsoft.
So how does it all work if Microsoft neither accepts the changes nor provides access to the Windows kernel source code? Two technologies are central to the development of Parallels containers: reverse engineering and patching of program code at runtime. Reverse engineering is, in essence, the practical application to computer science of the scientific method known as system analysis, while the runtime code-patching technology implemented at Parallels is something we are genuinely proud of as one of the best in the world, and we protect it with patents.
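To give a feel for what patching code at runtime means, here is a minimal user-mode sketch in C. It only illustrates the general technique, not Parallels' patented kernel-level implementation: it assumes an x86/x64 Windows build, a replacement function within ±2 GB of the target, and a compiler that does not fold the patched call away; all function names are invented for the example.

#include <windows.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical target and replacement; names are illustrative only. */
static __declspec(noinline) int original_handler(int x) { return x + 1; }
static __declspec(noinline) int patched_handler(int x)  { return x * 2; }

/* Overwrite the first 5 bytes of `target` with a relative JMP to `hook`.
   This is the classic user-mode hot-patch trick; it assumes the offset
   fits into a signed 32-bit displacement. */
static BOOL patch_function(void *target, void *hook)
{
    uint8_t stub[5];
    DWORD old_protect;
    int32_t rel = (int32_t)((intptr_t)hook - (intptr_t)target - 5);

    stub[0] = 0xE9;                       /* JMP rel32 opcode */
    memcpy(&stub[1], &rel, sizeof(rel));

    if (!VirtualProtect(target, sizeof(stub), PAGE_EXECUTE_READWRITE, &old_protect))
        return FALSE;
    memcpy(target, stub, sizeof(stub));   /* the actual update at runtime */
    VirtualProtect(target, sizeof(stub), old_protect, &old_protect);
    FlushInstructionCache(GetCurrentProcess(), target, sizeof(stub));
    return TRUE;
}

int main(void)
{
    /* Call through a volatile pointer so the compiler cannot fold the result. */
    int (*volatile call)(int) = original_handler;

    printf("before patch: %d\n", call(10));   /* 11 */
    patch_function((void *)original_handler, (void *)patched_handler);
    printf("after patch:  %d\n", call(10));   /* 20 */
    return 0;
}

The real thing is, of course, far harder: the code being patched belongs to the running Windows kernel, and the patch has to be applied safely while other processors may be executing the very instructions being replaced.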

As already noted, all of the virtualization technology works inside the OS kernel, which makes it possible to partition kernel objects between containers and thereby isolate the containers from one another. Each container has its own set of processes, sessions, and drivers, as well as its own registry and its own tree of kernel objects.
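As a rough illustration of what a per-container tree of named objects means, the following toy C model scopes every lookup to the container that issues it. This is not Parallels' kernel code; the structures, names, and fake handles are invented for the example.

#include <stdio.h>
#include <string.h>

#define MAX_OBJECTS 16

/* Toy model of a named kernel object (event, mutex, section, ...). */
struct named_object {
    char name[64];
    int  handle;            /* stands in for a real kernel handle */
};

/* Each container owns a private object directory: the same name can
   exist in several containers and resolve to different objects.     */
struct container {
    int                 id;
    struct named_object objects[MAX_OBJECTS];
    int                 count;
};

static int create_object(struct container *c, const char *name)
{
    struct named_object *o = &c->objects[c->count];
    snprintf(o->name, sizeof(o->name), "%s", name);
    o->handle = c->id * 100 + c->count;      /* fake per-container handle */
    c->count++;
    return o->handle;
}

/* Lookup never leaves the container's own directory: this is the isolation. */
static int open_object(const struct container *c, const char *name)
{
    for (int i = 0; i < c->count; i++)
        if (strcmp(c->objects[i].name, name) == 0)
            return c->objects[i].handle;
    return -1;                                /* not visible from this container */
}

int main(void)
{
    struct container ct101 = { .id = 101 }, ct102 = { .id = 102 };

    create_object(&ct101, "\\BaseNamedObjects\\MyEvent");
    create_object(&ct102, "\\BaseNamedObjects\\MyEvent");

    /* Same name, different object in each container... */
    printf("ct101 -> %d\n", open_object(&ct101, "\\BaseNamedObjects\\MyEvent"));      /* 10100 */
    printf("ct102 -> %d\n", open_object(&ct102, "\\BaseNamedObjects\\MyEvent"));      /* 10200 */

    /* ...and objects created in one container are invisible in another. */
    create_object(&ct101, "\\BaseNamedObjects\\PrivateMutex");
    printf("ct102 -> %d\n", open_object(&ct102, "\\BaseNamedObjects\\PrivateMutex")); /* -1 */
    return 0;
}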

Take a look at the screenshots. First, the user's experience of working with the OS inside a container is no different from working with the Windows operating system itself; second, they show how the isolation works. Containers know nothing about each other or about the host they run on, while the containers themselves are easily accessible from the host, so the host administrator is effectively a super-administrator for all containers. Containers are integrated into Windows in such a way that standard OS tools such as Task Manager, Registry Editor, and Mark Russinovich's Sysinternals utilities can be used to manage and monitor them.
How much to hang in grams?
The question our users ask most often is how many Parallels containers can be run on a single host. In one experiment in the Parallels lab we managed to start about 600 containers that could still accept RDP logins, although by then the delays in UI response were unacceptably long. Further experiments confirmed that the overhead of OS kernel virtualization is relatively small, and significantly lower than in hypervisor-based solutions, so the deciding factors are the applications you plan to run in the containers and the physical capabilities of the host itself.

For real-world workloads, the resource appetite of the applications inside containers has to be limited, so resource control is a key feature of Parallels containers. You can control a container's consumption of CPU power, memory, storage space, and network traffic.
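As a purely hypothetical illustration of what such per-container limits involve (the structure and field names are invented and are not Parallels' actual configuration interface), a resource profile and a simple admission check might look like this in C:

#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-container resource profile; field names are illustrative. */
struct container_limits {
    uint32_t cpu_percent;        /* share of total CPU power */
    uint64_t memory_bytes;       /* private memory ceiling   */
    uint64_t disk_bytes;         /* virtual disk quota       */
    uint64_t net_bytes_per_sec;  /* network bandwidth cap    */
};

struct container_usage {
    uint64_t memory_bytes;
    uint64_t disk_bytes;
};

/* Refuse an allocation that would push the container over its memory limit. */
static bool can_allocate(const struct container_limits *lim,
                         const struct container_usage *use,
                         uint64_t request_bytes)
{
    return use->memory_bytes + request_bytes <= lim->memory_bytes;
}

int main(void)
{
    struct container_limits lim = {
        .cpu_percent       = 25,
        .memory_bytes      = 512ULL * 1024 * 1024,        /* 512 MB */
        .disk_bytes        = 20ULL * 1024 * 1024 * 1024,  /* 20 GB  */
        .net_bytes_per_sec = 10ULL * 1024 * 1024,         /* ~10 MB/s */
    };
    struct container_usage use = { .memory_bytes = 500ULL * 1024 * 1024 };

    printf("grant 8 MB:  %s\n", can_allocate(&lim, &use, 8ULL * 1024 * 1024)  ? "yes" : "no");
    printf("grant 64 MB: %s\n", can_allocate(&lim, &use, 64ULL * 1024 * 1024) ? "yes" : "no");
    return 0;
}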

Application Templates
How are applications deployed inside Parallels containers? You can do it the usual way, by running the application's installer, but with many containers you would have to click the Install button over and over. To automate the process, and to avoid wasting container disk space on application files, we use application templates. Physically, a template is a file that stores the information needed to reproduce the original layout of files, folders, and registry keys. Templates are created with a dedicated tool, the Template Creation Wizard, which tracks every change the application installer makes to the file system and the registry and saves those changes into the application template. The resulting template can be attached to any container, which is equivalent to installing the application there, and the application can be used in that container immediately. A Parallels container user can create a template from any application that can be installed inside a container.
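The core of what such a template records can be shown with a simplified sketch in C (the paths are invented, and there is no real file-system or registry walking here): take a snapshot of paths before and after the installer runs, and whatever appears only in the "after" snapshot becomes the template's content list.

#include <stdio.h>
#include <string.h>

/* Simplified model: a "snapshot" is just a list of file and registry paths.
   Real template creation walks the file system and registry; the entries
   below are invented for illustration.                                     */
static const char *before[] = {
    "C:\\Windows\\System32\\kernel32.dll",
    "HKLM\\SOFTWARE\\Microsoft",
};
static const char *after[] = {
    "C:\\Windows\\System32\\kernel32.dll",
    "HKLM\\SOFTWARE\\Microsoft",
    "C:\\Program Files\\ExampleApp\\app.exe",
    "C:\\Program Files\\ExampleApp\\app.dll",
    "HKLM\\SOFTWARE\\ExampleApp\\Version",
};

static int in_snapshot(const char *path, const char **snap, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (strcmp(snap[i], path) == 0)
            return 1;
    return 0;
}

int main(void)
{
    size_t nb = sizeof(before) / sizeof(before[0]);
    size_t na = sizeof(after)  / sizeof(after[0]);

    /* Everything that appeared after the installer ran goes into the template. */
    printf("application template contents:\n");
    for (size_t i = 0; i < na; i++)
        if (!in_snapshot(after[i], before, nb))
            printf("  %s\n", after[i]);
    return 0;
}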

File system with copy-on-write support
Parallels containers share with each other not only the kernel but also all the files installed on the host from the Windows OS distribution, which saves a significant amount of storage space. Data is deduplicated at the file level using templates and a specialized file system with copy-on-write support. For every supported version of Windows, including language localizations, Parallels releases an OS template. An OS template differs from an application template only in that the contents of its files are not stored in it. From inside the container, access to files from the OS template is completely transparent, creating a coherent view of the Windows file structure made up of shared and container-private files. Copy-on-write support in the file system makes it possible to leave the OS template's files unmodified, saving changes only inside the container.
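Here is a minimal sketch of the copy-on-write idea at the file level, using plain Win32 calls in user mode. The real mechanism lives in a specialized file system driver, and the directory layout and file names below are purely illustrative: reads fall through to the shared OS template, while the first write copies the file into the container's private area.

#include <windows.h>
#include <stdio.h>

/* Illustrative locations: a shared, read-only OS template and the
   container's private area. The real layout is different.          */
#define TEMPLATE_DIR  "C:\\vz\\templates\\win2003\\"
#define PRIVATE_DIR   "C:\\vz\\private\\101\\"

/* Open a file for writing inside the container: if the container has no
   private copy yet, copy the shared template file first (copy-on-write),
   then open the private copy.                                            */
static HANDLE cow_open_for_write(const char *relative_path)
{
    char shared[MAX_PATH], priv[MAX_PATH];
    snprintf(shared, sizeof(shared), "%s%s", TEMPLATE_DIR, relative_path);
    snprintf(priv,   sizeof(priv),   "%s%s", PRIVATE_DIR,  relative_path);

    if (GetFileAttributesA(priv) == INVALID_FILE_ATTRIBUTES) {
        /* First modification: materialize a private copy of the shared file. */
        if (!CopyFileA(shared, priv, TRUE) &&
            GetLastError() != ERROR_FILE_EXISTS)
            return INVALID_HANDLE_VALUE;
    }

    /* All subsequent writes touch only the container-private copy. */
    return CreateFileA(priv, GENERIC_READ | GENERIC_WRITE, 0, NULL,
                       OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
}

int main(void)
{
    HANDLE h = cow_open_for_write("win.ini");
    if (h == INVALID_HANDLE_VALUE) {
        printf("copy-on-write open failed: %lu\n", GetLastError());
        return 1;
    }
    printf("opened private copy for writing\n");
    CloseHandle(h);
    return 0;
}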

Who needs this?
From the very beginning of the project, we at Parallels have offered our containers for hosting web applications, the most important of which is the Parallels Plesk automation system. Hosting products based on Parallels containers are offered by leading global providers such as AT&T, 1&1, GoDaddy, and HostEurope.

Most of all, though, Parallels containers for Windows are suited to deploying many identical environments that are managed and configured in a uniform way. Desktop virtualization is one such scenario. In partnership with the systems integrator IBS, Parallels developed and has offered since 2013 an FSTEC-certified Parallels VDI solution consisting of a connection broker and container-based virtualization. This solution currently underpins a large-scale automation project at the Federal Tax Service of the Russian Federation that involves moving more than 10,000 workplaces to a cloud data center.
Project history
The Windows container project started in May 2002, after successful experiments with Parallels containers based on the Linux kernel. In January 2003 a prototype was demonstrated on Windows 2000 Server that could run 50 isolated copies of Microsoft IIS and Microsoft SQL Server, and in June 2005 the first public release, Parallels Virtuozzo Containers 3.0 for Windows Server 2003, came out.
To date, the project has gone through 7 public releases, nearly 300 updates have been shipped, and the project's code base has grown beyond 1,300,000 SLOC. And quite recently, the total number of containers created with Parallels technologies (including the one described here) passed 1,000,000!
Parallels containers have also earned high praise from Microsoft: in private conversations, its engineers have described the project as the most technically difficult one ever built on the Windows kernel.

Conclusion
Microsoft's entry into the container market will undoubtedly make this approach to virtualization far more popular, and Docker-based solutions will make it simple and convenient to “wrap” applications into containers. From a technical point of view, the advantages of Microsoft's containers are the low cost of virtualization and the ability to implement containers entirely in user mode. The disadvantages include the difficulty of ensuring application compatibility, since this requires emulating the entire Windows API, which today comprises thousands of calls.
Although Parallels and Microsoft take different approaches to implementing containers, the two technologies can complement each other well: Microsoft containers are likely to work inside Parallels containers. We therefore see Microsoft not as a competitor but as the creator of the Windows ecosystem, in which container virtualization occupies an important place.
This hope is supported by the fact that the container virtualization projects for Linux-based operating systems currently on the market are not really competing with one another: most of them are developed jointly by teams from Parallels, Google, IBM, Canonical, Docker, and others. And although the Windows development ecosystem is understandably more closed, we look forward to further cooperation.
Learn more and try Parallels Containers for Windows here.
I will also try to answer all your questions in the comments.