
Xen Cloud Platform in the enterprise [1]
Among enterprise virtualization systems, XCP is the only one that is both free as in freedom and free of charge. XCP traces its history to XenServer, which, although built on the open-source Xen hypervisor, was very much a paid product. Citrix then published the XenServer code under a free license, and from that point XenServer gradually evolved into the Xen Cloud Platform.
In this series of articles I'll talk about using XCP under a single administrative domain, where the virtual machines and the virtualization infrastructure are managed by the same organization (i.e., the typical enterprise server-virtualization scenario). There will be few command-line examples and option listings (the administration guide on the Citrix website covers those well); instead I will focus on the concepts, the terms and the relationships between objects.
From a user's point of view, the main difference between plain Xen (as shipped with most operating systems) and XCP is the installation process and the amount of polishing needed before going into production. XCP comes as an ISO with a ready-made dom0 OS (CentOS-based), adapted to run the hypervisor and to serve as a host in the cloud. Plain Xen usually ships as hypervisor + utilities, and everything else you build yourself. Another bonus for those who deal with Microsoft products is the signed paravirtual drivers for Windows (they can be installed on plain Xen with some tricks, but in XCP they are native).
XCP is a rather peculiar platform. It is not "closed" in the sense that, say, Hyper-V is closed, but it comes as a ready-made OS in which many aspects of configuration are controlled by the platform rather than by the OS. Networking is an example: you can put an IP address on any interface with ifconfig, but the consequences will be sad; networks and interfaces should be managed with the platform's own tools.
XCP consists of several components, each covering a different aspect of the system: xen, xapi, open vswitch, the xe CLI, stunnel and squeezed.
First, the system requirements:
1) If Windows virtualization is planned (that is, HVM domains), processors with Intel VT / AMD-V (Pacifica) support are a prerequisite.
2) If the cloud will have more than one server, network storage (iSCSI or NFS) is mandatory.
3) Hosts (if there is more than one) must be identical: the same processor stepping, motherboard, and so on.
4) Hosts must be in the same layer-2 segment (i.e., connected through a switch, not through a router).
Now, actually, to the point.
XCP terminology
Host - a server that runs virtual machines.
Pool - a group of identical hosts between which migration is possible.
SR - storage repository - the place where virtual machine disks are stored (either a local disk or NFS/iSCSI storage). Strictly speaking, an SR is a record describing the storage. Each host has its own PBD (physical block device) that connects the host to the SR; having a PBD for the SR on every host is a precondition for machine migration.
VDI - virtual disk image; the name, I think, needs no explanation. It can be either a file or an LVM logical volume.
VM - virtual machine.
VBD - virtual block device - an XCP-specific construct, the logical link between a VDI and the block device inside the virtual machine.
network - a network (more precisely, a record describing a network). By analogy with SR, hosts connect to a network via a PIF (physical interface).
VIF - virtual interface - a logical construct connecting a network and a virtual machine. Unlike a VBD it is more "tangible": it shows up in the list of network interfaces while the virtual machine is running.
VLAN - a VLAN is a VLAN. If VLANs are used, they sit between the network and the PIF (one PIF can carry several VLANs; a VLAN belongs to a network). The xe sketch below shows roughly how all of these objects look in practice.
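All of these objects are first-class citizens of the xapi database and can be examined with the xe CLI. A minimal sketch, with commands from memory (see xe help on your host for the exact parameters):

    xe host-list                  # hosts in the pool
    xe sr-list                    # storage repositories
    xe pbd-list sr-uuid=<uuid>    # connectors between hosts and a given SR
    xe vm-list                    # virtual machines, running and halted
    xe vdi-list sr-uuid=<uuid>    # disk images living on a given SR
    xe vbd-list vm-uuid=<uuid>    # block devices of a given machine
    xe network-list               # network records
    xe pif-list                   # physical interfaces (including VLAN PIFs)
    xe vif-list vm-uuid=<uuid>    # virtual interfaces of a given machine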
Pools
A pool is an abstraction that unites hosts. The pool has a configuration (state) that describes nearly every aspect of everything: hosts, the pool itself, networks, SRs, virtual machines, and so on. Every host keeps a complete replica of this state, but only one host is the pool master. Roughly every 15 seconds the master pushes changes to all slaves (that is, to the other hosts and, possibly, to external observers using the XenAPI); in addition, changes to specific objects are announced "in real time". The master can be reassigned on the fly (with practically no interruption to normal operation and no effect on virtual machines). If the master fails, the hosts can be repointed to a new master on the fly. Joining a host to a pool or ejecting it requires a reboot, and in addition all virtual machines located on that host are lost (if the machines lived in a multi-host pool and were stored on a shared SR, they remain available to run on the other hosts of the pool; if they were stored locally, they are destroyed). For maintenance, hosts in a pool can be disabled and enabled without removing them from the pool (in effect this is simply a ban on starting new machines on them).
A single host is "a pool of its own". When a host joins another pool, it "forgets" its own pool and adopts the new one. Hosts always belong to exactly one pool and know nothing about other pools (i.e., from a host's point of view there is always exactly one pool; it does have a unique identifier, but that is a formality).
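To give an idea of what these pool operations look like in practice, a minimal xe sketch (parameter names from memory; verify with xe help pool-join and friends on your build):

    # on a host that is joining an existing pool (it forgets its own pool):
    xe pool-join master-address=<master-ip> master-username=root master-password=<password>

    # reassign the master on the fly:
    xe pool-designate-new-master host-uuid=<uuid-of-new-master>

    # take a host out of service without removing it from the pool
    # (in effect, a ban on starting new machines there), then bring it back:
    xe host-disable uuid=<host-uuid>
    xe host-enable uuid=<host-uuid>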
Virtual machines
Virtual machines come in two kinds: hardware-assisted (HVM) and paravirtualized (PV). Paravirtualized machines are always preferable to HVM, because a PV guest runs a special kernel that "cooperates" with virtualization and issues hypercalls directly, instead of having its privileged instructions trapped by the hypervisor (as happens with HVM). Windows can run only in HVM mode, because Microsoft has not published its kernel code under a license that would allow adapting it for efficient PV operation.
A virtual machine in XCP is a noticeably more complex notion than a domain in plain Xen. The virtual machine "exists" even when it is powered off: it has a large set of attributes used to start and run it (in effect, this configuration record is the "virtual machine").
Associated with a virtual machine are VBDs (virtual block devices) and VIFs (virtual network adapters). There can be many of both (I have not tested the limits thoroughly, but 8 of each definitely work, and judging by the device numbering you are allowed to create them in the hundreds).
Important attributes of a virtual machine include memory quotas, processor quotas, and the number of allowed cores (from 1 to 16 in the current configuration).
An important feature: XCP lets you change the amount of a virtual machine's memory on the fly, but it does not allow any kind of overselling (i.e., telling a virtual machine it has more memory than actually exists). The total memory that can be handed out to virtual machines equals the host's physical memory minus overhead (about 512 MB). Memory can be shifted between machines on the fly, but the total cannot be exceeded. Each machine may have its own swap and use it as much as it likes.
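In practice, shifting memory on the fly means adjusting the machine's dynamic memory range; a minimal sketch (the command as I remember it, the values are just an illustration):

    # let the balloon driver vary this machine between 512 MiB and 2 GiB:
    xe vm-memory-dynamic-range-set uuid=<vm-uuid> min=512MiB max=2GiB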
Processors can be added and removed on the fly (this is a bit of a cheat: in reality specific virtual CPUs are simply allowed or forbidden for use). Not all programs take this well (atop, for example, falls over if a CPU vanishes from under it). You can set a quota for the virtual machine (as a percentage of CPU time) and/or a priority for competitive access to the processor.
For particularly delicate configurations you can dedicate specific cores (processors) to a virtual machine for exclusive use (vCPU pinning).
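Both the quota and the pinning live in the machine's VCPUs-params map; a sketch with key names as I remember them (check xe vm-param-list on your build):

    # cap the machine at half of one physical CPU and keep the default priority (weight 256):
    xe vm-param-set uuid=<vm-uuid> VCPUs-params:cap=50 VCPUs-params:weight=256

    # pin the machine's vCPUs to physical cores 0 and 1:
    xe vm-param-set uuid=<vm-uuid> VCPUs-params:mask=0,1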
Network
Networking is the most complex area of virtualization. XCP uses Open vSwitch and OpenFlow to implement virtual networks. Describing that technology is far beyond the scope of this series; I will only say that it allows the "logic" of the switch to be pulled out into a separate application. A network can be bound to physical adapters, or it can be purely virtual. Unfortunately, purely virtual networks do not survive migration properly (for communication between virtual machines on different hosts you have to use a network attached to the physical switch that connects the hosts). A virtual network adapter, once created, is connected to a virtual network. It can operate in normal (unicast) mode or in promiscuous mode (hearing all traffic on the network). There is essentially no restriction on how many of a virtual machine's network adapters can sit on a single network. The current implementation does not support jumbo frames on such networks; checksum offload for outgoing frames is supported, however (and, with hardware that understands it, TCP segmentation offload as well).
Naturally, a network can be bound not to a physical adapter but to a VLAN; in that case all traffic leaving the host goes out tagged over the trunk.
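A sketch of both cases (commands from memory; the name labels are made up for the example):

    # a purely virtual network, not attached to any physical adapter:
    xe network-create name-label=private-net

    # a network carried as VLAN 42 on top of an existing PIF
    # (network-create prints the uuid of the new network):
    NET=$(xe network-create name-label=vlan42-net)
    xe vlan-create pif-uuid=<pif-uuid> vlan=42 network-uuid=$NET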
SR
One of the fundamental features of XCP is the concept of the SR, the storage repository. An SR holds virtual machine disks (VDIs) and ISOs (future CDs for virtual machines). SRs come in two kinds: local (not particularly interesting, since functionally it is just an ordinary local disk, partition, directory, and so on) and shared. Shared SRs are the main working tool of XCP. The cloud (more precisely, the cloud manager) makes sure that all hosts have access to the SR: if there are several hosts in the cloud, creating a single SR automatically creates all the necessary connectors (PBDs, physical block devices) on all hosts and changes their configuration so that the storage reattaches automatically after a reboot.
A shared SR makes live migration between hosts possible, lets a machine start on any (first available) host, and in general is mandatory when running more than one host in the cloud. Depending on the SR type, different functionality is available: copy-on-write, thin provisioning, fast disk cloning, snapshots, and so on.
I won't list every SR type here; among those available without specialized hardware the options are NFS and iSCSI. NFS uses disk space somewhat more economically, iSCSI is faster.
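Creating a shared SR is essentially one command per backend; a hedged sketch of the NFS and iSCSI variants (device-config keys from memory, the addresses and labels are placeholders):

    # shared NFS SR:
    xe sr-create type=nfs shared=true name-label=nfs-sr \
        device-config:server=10.0.0.10 device-config:serverpath=/export/xcp

    # shared LVM-over-iSCSI SR:
    xe sr-create type=lvmoiscsi shared=true name-label=iscsi-sr \
        device-config:target=10.0.0.20 device-config:targetIQN=<iqn> device-config:SCSIid=<scsi-id>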
PBD
PBD - physical block device. An abstraction describing how a host reaches the storage where virtual machine disks (VDIs) live. It can be an NFS share, an iSCSI LUN, Fibre Channel, or some other solution from storage-array vendors. The main idea of the PBD is uniformity of operation regardless of what it sits on top of (creation and each backend type have their own parameters, but once created, all PBDs behave the same within certain limits and are administered with the same tools). Each host has its own PBD for every SR it is connected to.
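In xe terms that looks roughly like this (a sketch; the point is the one-PBD-per-host-per-SR relationship):

    # one PBD per (host, SR) pair:
    xe pbd-list sr-uuid=<sr-uuid>

    # detach and reattach the storage on a single host without touching the SR itself:
    xe pbd-unplug uuid=<pbd-uuid>
    xe pbd-plug uuid=<pbd-uuid>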
PIF
PIF - physical network interface, used to connect a host to a network. Most often it is a real network interface, but when tagged VLANs are used it is an abstraction tied to a particular VLAN (in that case several VLANs ride on one physical interface, and PIFs are built on top of those VLANs). All PIFs are plugged into the host's internal network, built with Open vSwitch.
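The relationship is easy to see in the PIF listing; a small sketch (the params selection is only there to keep the output readable):

    # which device, VLAN tag and network each PIF belongs to:
    xe pif-list params=uuid,device,VLAN,network-uuid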
VDI
The VDI (virtual disk image) is the most valuable part of a virtual machine: its disk image. It lives on an SR. A VDI is not by itself a property of a virtual machine; it is attached to one via a VBD (see below). VDIs come in several types, among them system (contains nothing valuable and can be thrown away and recreated at will) and user (holds the data and deserves careful protection and care). VDIs can form snapshot chains, which in theory reduces disk usage; in practice this is not recommended, since walking the chain hurts the performance of disk operations.
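Creating a blank VDI on a chosen SR; a minimal sketch (virtual-size takes the usual MiB/GiB suffixes, the label is made up):

    xe vdi-create sr-uuid=<sr-uuid> name-label=data-disk type=user virtual-size=10GiB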
VBD
VBD - an abstract device in a virtual machine that connects a disk inside the machine to a VDI. From the point of view of XCP internals, a VBD is the "VDI access driver". It may exist or not; this has no particular effect on the existence of the VDI (the reverse is not true: a VBD cannot exist without a VDI). VBDs come in several types; in particular, a VBD can emulate a CD-ROM drive (mounting ISOs). During migration a machine's VBDs are recreated from scratch, while the VDI stays lying on the SR exactly where it was.
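Attaching a VDI to a machine is therefore a two-step affair: create the VBD, then plug it. A sketch from memory (device numbers map to xvda, xvdb and so on in PV guests):

    # connect the VDI as the machine's second disk (vbd-create prints the new uuid):
    VBD=$(xe vbd-create vm-uuid=<vm-uuid> vdi-uuid=<vdi-uuid> device=1 mode=RW type=Disk)

    # hot-plug it into a running machine:
    xe vbd-plug uuid=$VBD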
VIF
VIF - a virtual network interface used to give a virtual machine access to a network. From dom0's point of view a VIF is an interface like any other, plugged into the same virtual switch (there may be several switches).
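A VIF is created against a machine and a network in much the same way as a VBD; a sketch (on the builds I have seen, omitting mac yields a generated address, but check xe help vif-create):

    VIF=$(xe vif-create vm-uuid=<vm-uuid> network-uuid=<network-uuid> device=0)
    xe vif-plug uuid=$VIF    # hot-plug into a running machine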
Metrics
Metrics are associated with virtual machines: an RRD database with relative load figures for each tracked resource (memory, disk, processor, network). Metrics stand somewhat apart from all other object types, because they have to be enabled explicitly (due to the overhead).
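On the builds I have worked with, the per-machine counters are handled through data sources; a sketch (command names may differ slightly between versions):

    # what can be collected for this machine:
    xe vm-data-source-list uuid=<vm-uuid>

    # start recording a particular counter into the RRD:
    xe vm-data-source-record uuid=<vm-uuid> data-source=<name>

    # read the current value without recording it:
    xe vm-data-source-query uuid=<vm-uuid> data-source=<name>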
(to be continued; coming up: migration, memory management, the notion of a domain, the difference between HVM and PV, consoles, attaching ISOs, processor and quota management, disk schedulers, monitoring, command-line and graphical management tools, the API)