
Windows Azure Security Overview, Part 1
Good afternoon, dear colleagues.
In this review, I try to explain as simply as possible how the various aspects of security are provided on the Windows Azure platform. The review consists of two parts. The first part covers the basics: confidentiality, identity management, isolation, encryption, integrity, and availability on the Windows Azure platform itself. The second part will cover SQL Databases, physical security, client-side security features, platform certification, and security recommendations.
Security is one of the most important topics when discussing hosting applications in the cloud. The growing popularity of cloud computing has drawn close attention to security issues, especially in light of resource sharing and multi-tenancy. The multi-tenant, virtualized nature of cloud platforms calls for some unique security measures, especially against attacks such as side-channel attacks (attacks based on information leaked by the physical implementation of a system).
Since June 7, 2012, the Windows Azure platform can no longer be described simply as SaaS or PaaS; it is now more of an umbrella term covering many types of services. Microsoft provides a secure runtime environment and provides security at the level of the operating system and infrastructure. Some security aspects implemented at the cloud provider level are actually better than what is available in an on-premises infrastructure. For example, the physical security of the data centers that host Windows Azure is significantly stronger than that of the vast majority of enterprises and organizations. Windows Azure network protection, isolation of the runtime environment, and the approaches to securing the operating system are also considerably stronger than with traditional hosting. Thus, hosting applications in the cloud can improve the security of your applications. In November 2011, the Windows Azure platform and its information security management system were certified by the British Standards Institution as compliant with ISO 27001. The certified functionality covers compute services, storage, the virtual network, and virtual machines. The next step will be certification of the rest of the Windows Azure functionality: SQL Databases, Service Bus, CDN, etc.
In general, any cloud platform should provide three key aspects of client data security: confidentiality, integrity, and availability, and the Microsoft cloud platform is no exception. In this review, I will try to describe in as much detail as possible the technologies and methods used to provide these three aspects of security on the Windows Azure platform.
Confidentiality
Confidentiality assures the client that their data is accessible only to entities that have the appropriate rights to it. On the Windows Azure platform, confidentiality is ensured through the following tools and methods:
- Identity management - determining whether an authenticated principal is allowed to access a given resource.
- Isolation - keeping data separated using both physical and logical "containers".
- Encryption - additional data protection using encryption mechanisms. On the Windows Azure platform, encryption protects communication channels and provides stronger protection for customer data.
Let's look at all three of these technologies in more detail.
Identity Management
To begin with, your subscription is accessed using the secure Windows Live ID system, which is one of the oldest and most trusted authentication systems on the Internet. Access to already deployed services is controlled by subscription.
There are two ways to deploy applications in Windows Azure - from the Windows Azure portal and using the Service Management API (SMAPI). The Service Management API (SMAPI) provides web services using the Representational State Transfer (REST) protocol and is designed for developers. The protocol runs on top of SSL.
SMAPI authentication is based on the user creating a public/private key pair and a self-signed certificate, which is registered on the Windows Azure portal. In this way, all critical application management activities are protected by your own certificates. The certificate is not chained to a trusted root certificate authority (CA); instead it is self-signed, which provides reasonable assurance that only the designated client representatives will have access to the services and data protected in this way.
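As an illustration, here is a minimal sketch of what a management request authenticated with a client certificate might look like. The subscription ID and certificate paths are placeholders, and the resource path and x-ms-version value are illustrative; consult the Service Management API documentation for the exact operations and required headers.

```python
import http.client
import ssl

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"   # placeholder
CERT_FILE = "management-cert.pem"   # self-signed certificate registered on the portal
KEY_FILE = "management-cert.key"    # matching private key

# The management certificate authenticates the caller at the TLS layer.
context = ssl.create_default_context()
context.load_cert_chain(certfile=CERT_FILE, keyfile=KEY_FILE)

conn = http.client.HTTPSConnection("management.core.windows.net", context=context)
conn.request(
    "GET",
    f"/{SUBSCRIPTION_ID}/services/hostedservices",   # illustrative resource path
    headers={"x-ms-version": "2012-03-01"},          # illustrative API version header
)
response = conn.getresponse()
print(response.status, response.read()[:200])
```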
Windows Azure Storage uses its own authentication mechanism based on two Storage Account Keys (SAKs), which are associated with each account and can be reset by the user.
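To make the key-based mechanism concrete, the sketch below shows the general idea behind Shared Key authentication: the client signs a canonicalized representation of the request with one of its storage account keys using HMAC-SHA256 and places the result in the Authorization header. The string-to-sign here is deliberately simplified; the real canonicalization rules are more involved and are defined in the storage service documentation.

```python
import base64
import hashlib
import hmac
from datetime import datetime, timezone

ACCOUNT_NAME = "myaccount"                       # placeholder account name
ACCOUNT_KEY = base64.b64encode(b"\x00" * 32)     # placeholder key; normally taken from the portal

def sign_request(verb: str, canonicalized_resource: str, ms_date: str) -> str:
    """Return an Authorization header value for a (simplified) string-to-sign."""
    # The real Shared Key string-to-sign includes more fields (Content-Length,
    # canonicalized x-ms-* headers, and so on); this only shows the general shape.
    string_to_sign = "\n".join([verb, "", "", ms_date, canonicalized_resource])
    digest = hmac.new(base64.b64decode(ACCOUNT_KEY),
                      string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()
    return f"SharedKey {ACCOUNT_NAME}:{base64.b64encode(digest).decode('ascii')}"

ms_date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
print(sign_request("GET", f"/{ACCOUNT_NAME}/mycontainer", ms_date))
```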
Thus, Windows Azure implements comprehensive protection and authentication, summarized in the following table.
| Subjects | Objects of protection | Authentication mechanism |
| --- | --- | --- |
| Customers | Subscription | Windows Live ID |
| Developers | Windows Azure Portal / SMAPI | Windows Live ID (portal), self-signed certificate (SMAPI) |
| Role instances | Storage | Storage Account Key |
| External applications | Storage | Storage Account Key |
| External applications | Applications | User-defined |
It should be noted that for storage services you can use Shared Access Signatures (SAS) to grant fine-grained user rights (for all storage services from version 1.7; earlier, only for blobs). Shared Access Signatures were previously available only for blobs, allowing storage account owners to issue specially signed URLs that granted access to blobs. Now Shared Access Signatures are available for tables and queues in addition to blobs and containers. Before this feature was introduced, performing any CRUD operation on a table or queue required being the account owner. Now you can give another party a URL signed with a Shared Access Signature and grant exactly the rights you need. This is the essence of Shared Access Signatures: fine-grained control over access to resources, defining which operations the holder of the signature may perform on a resource. The operations that can be granted with a Shared Access Signature include:
- Reading and writing content - for blobs, this also includes their properties, metadata, and block lists.
- Deleting, leasing, and creating snapshots of blobs.
- Listing content items.
- Adding, deleting, and updating messages in queues.
- Retrieving queue metadata, including the number of messages in a queue.
- Reading, adding, updating, and deleting entities in a table.
The Shared Access Signature parameters carry all the information needed to grant access to storage resources: query parameters in the URL define the time window after which the Shared Access Signature expires, the permissions it grants, the resources to which access is granted and, finally, the signature used for authentication. In addition, the Shared Access Signature URL can reference a stored access policy, which provides an additional layer of control.
Naturally, Shared Access Signatures should be distributed over HTTPS and should grant access for the shortest time period required for the operations.
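As an illustration (anticipating the address book scenario described next), here is a minimal sketch of assembling a table Shared Access Signature URL. The query parameter names (sp, st, se, sig) follow the general SAS pattern, but the string-to-sign and parameter set are simplified and the account, table, and key are placeholders; treat this as a conceptual outline rather than a working token generator.

```python
import base64
import hashlib
import hmac
import urllib.parse
from datetime import datetime, timedelta, timezone

ACCOUNT = "myaccount"                      # placeholder storage account
TABLE = "addressbook"                      # placeholder table name
KEY = base64.b64encode(b"\x01" * 32)       # placeholder storage account key

def make_table_sas_url(permissions: str = "r", valid_minutes: int = 15) -> str:
    """Build a simplified SAS URL granting `permissions` on the table for a short window."""
    start = datetime.now(timezone.utc)
    expiry = start + timedelta(minutes=valid_minutes)
    st = start.strftime("%Y-%m-%dT%H:%M:%SZ")
    se = expiry.strftime("%Y-%m-%dT%H:%M:%SZ")

    # Simplified string-to-sign: the real format has more fields in a fixed order.
    string_to_sign = "\n".join([permissions, st, se, f"/{ACCOUNT}/{TABLE}"])
    sig = base64.b64encode(
        hmac.new(base64.b64decode(KEY), string_to_sign.encode(), hashlib.sha256).digest()
    ).decode()

    query = urllib.parse.urlencode({"sp": permissions, "st": st, "se": se, "sig": sig})
    return f"https://{ACCOUNT}.table.core.windows.net/{TABLE}?{query}"

print(make_table_sas_url())
```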
A typical example of using a Shared Access Signature is an address book service that must scale to a large number of users. The service lets users store their address book in the cloud and access it from any device or application; the user subscribes to the service and receives an address book. You can implement this scenario using the Windows Azure role model, with the service acting as a layer between the client application and the platform's storage services: after the client application authenticates, it accesses the address book through the service's web interface, which forwards the client's requests to the table storage service. In this scenario, however, a Shared Access Signature for the table service is a great fit and is quite simple to implement: a SAS can be used to give the application direct access to the address book. This approach improves the scalability of the system and reduces the cost of the solution by removing the request-processing service layer; the role of the service is reduced to handling client subscriptions and generating Shared Access Signature tokens for the client application.
You can read more about Shared Access Signatures in an article dedicated to this topic.
An additional security measure is the principle of least privilege, which is also a generally accepted best practice. Under this principle, clients are denied administrative access to their virtual machines (recall that all services in Windows Azure are virtualized), and the software they run executes under a special restricted account. Thus, anyone who wants broader access to the system has to go through a privilege elevation procedure.
Everything transmitted over the network to Windows Azure and inside the platform is protected with SSL, and in most cases the SSL certificates are self-signed. The exception is traffic that crosses the boundary of the Windows Azure internal network, for example to the storage services or the Fabric Controller, which use certificates issued by Microsoft.
Windows Azure Access Control Service
As for more complex identity management scenarios - for example, not just Live ID sign-in but integration of Windows Azure authentication mechanisms with a cloud (or on-premises) application - Microsoft offers the Windows Azure Access Control Service (ACS). ACS provides federated security and access control for your cloud or on-premises applications. It has built-in support for AD FS 2.0 and any identity provider that supports WS-Federation, and the public providers Live ID, Facebook, and Google come pre-configured on the portal. In addition, ACS supports OAuth, OpenID, and REST-based services.
Windows Azure and the Access Control Service (including Windows Azure Active Directory) use claims-based authentication. Claims can include any information about an entity that the identity provider issuing them is willing to assert. Claims-based authentication is one of the most effective ways to handle complex authentication scenarios, and many web projects use claims - Google, Yahoo, Facebook, and so on. After authenticating with the chosen identity provider, the client receives claims via WS-Federation or the Security Assertion Markup Language (SAML); the claims are carried in a security token (a container for claims) to wherever they are needed. Claims also make it straightforward to implement single sign-on.
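To make the idea concrete, here is a small illustrative sketch of what a set of claims might look like once an application has extracted them from a security token, and how an authorization decision could be made from them. The claim values and the role names are hypothetical examples, not the exact claims any particular provider issues.

```python
# Hypothetical claims extracted from a security token after authentication.
claims = {
    "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name": "alice@example.com",
    "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress": "alice@example.com",
    "http://schemas.microsoft.com/ws/2008/06/identity/claims/role": "AddressBookReader",
    "identityprovider": "Windows Live ID",
}

def can_read_address_book(token_claims: dict) -> bool:
    """The application reasons only about claims, not about how the user authenticated."""
    role_claim = "http://schemas.microsoft.com/ws/2008/06/identity/claims/role"
    return token_claims.get(role_claim) in {"AddressBookReader", "AddressBookAdmin"}

print(can_read_address_book(claims))  # True
```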

For example, a client has an application that authenticates users against a repository of user information located in a local data center, using a dedicated authentication module that implements some standard. At some point it becomes necessary not only to make the authentication mechanism fault-tolerant with a single provider, but also to let users authenticate through public identity providers such as Windows Live ID, Facebook, and so on. The authentication module gains logic for working with each of these identity providers. But whenever even the smallest change occurs in the authentication logic, standard, or syntax of an identity provider, the developer has to code that change by hand, which is an inefficient way of doing business. The problem becomes even more serious if the application is migrated to the cloud. The Windows Azure Access Control Service solves exactly this scenario by offering an elegant infrastructure: when a user opens a page of the web application, he is first redirected to the Access Control Service, where he selects the required identity provider, authenticates with it, and then signs in to the application. The developer can completely ignore the internal authentication mechanisms, tokens, claims, and so on - all of that work is done by Microsoft and the Windows Azure Access Control Service. This makes it possible to implement the common scenario of providing identity management for an existing application that has migrated to the cloud.
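A minimal sketch of the redirect step in this flow is shown below: an unauthenticated request is bounced to the ACS-hosted sign-in page with standard WS-Federation query parameters (wa, wtrealm, wctx). The namespace name and realm URL are placeholders, and a real application would normally rely on identity libraries rather than building this URL by hand.

```python
import urllib.parse

ACS_NAMESPACE = "contoso-sample"                   # placeholder ACS namespace
RELYING_PARTY_REALM = "https://app.example.com/"   # placeholder realm configured in ACS

def build_signin_redirect(return_path: str) -> str:
    """Build the WS-Federation sign-in URL an unauthenticated user is redirected to."""
    params = {
        "wa": "wsignin1.0",             # WS-Federation sign-in action
        "wtrealm": RELYING_PARTY_REALM, # identifies the application to ACS
        "wctx": return_path,            # opaque context returned to the app after sign-in
    }
    return (
        f"https://{ACS_NAMESPACE}.accesscontrol.windows.net/v2/wsfederation"
        f"?{urllib.parse.urlencode(params)}"
    )

print(build_signin_redirect("/addressbook"))
```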

A common question is: "What if there is an existing application that authenticates users with their domain credentials?" The answer is to add Active Directory Federation Services 2.0 to the application + AD + Windows Azure bundle, which gives a working scenario for integrating an application hosted in the cloud with the local Active Directory infrastructure. For the user, authentication remains a transparent process. Moreover, no credentials are transferred to the cloud: AD FS 2.0 acts as the identity provider - it receives the user's domain credentials, produces a set of claims, packages them into a security token, and sends it over a secure channel to the Access Control Service.

Windows Azure Active Directory
The newest service for implementing authentication scenarios in Windows Azure is Windows Azure Active Directory. It is worth saying right away that this service is not a complete analogue of on-premises Active Directory; rather, it extends the local directory into the cloud as its "mirror".
Windows Azure Active Directory consists of three main components: a REST service with which you can create, read, update, and delete information in the directory, as well as use SSO (for example, when integrating with Office 365, Dynamics, or Windows Intune); integration with various identity providers such as Facebook and Google; and a library that simplifies access to the Windows Azure Active Directory functionality. Initially, Windows Azure Active Directory was used for Office 365. Now it provides convenient access to the following information:
Users: passwords, security policies, roles.
Groups: security and distribution groups.
And other basic information (for example, about services). All of this is exposed through Windows Azure AD Graph, a corporate social graph with a REST interface and a browser view for easy discovery of information and relationships. As with the Access Control Service, when you want to integrate with a local infrastructure running AD, you need to install and configure Active Directory Federation Services 2.0. Thus, using Windows Azure Active Directory, you can create both internal and public-facing applications that use, for example, Office 365, and implement federated authentication and synchronization between the on-premises Active Directory infrastructure and Windows Azure.
Isolation

Depending on the number of role instances specified by the client, Windows Azure creates the same number of virtual machines, called role instances (for Cloud Services), and then launches the deployed application on them. These virtual machines, in turn, run on a hypervisor specifically designed for the cloud (the Windows Azure Hypervisor).
Of course, to implement an effective security mechanism, service instances serving individual clients must be properly isolated from one another, as must the data kept in the storage services.

Given that virtually everything on the platform is virtualized, it is critical to isolate the so-called Root VM (a protected system where the Fabric Controller hosts its Fabric Agents, which in turn control the Guest Agents running on client virtual machines) from the guest virtual machines, and the guest virtual machines from one another. Windows Azure uses its own hypervisor, a virtualization layer based on Hyper-V. It runs directly on the hardware and divides each node into a certain number of virtual machines. Each node also has a Root VM running the host operating system. Windows Azure uses a heavily trimmed-down version of Windows Server as the host operating system, on which only the services needed to manage the hosted virtual machines are installed - both to improve performance and to reduce the attack surface. In addition, virtualization in the cloud has given rise to new types of threats, for example:
• Privilege escalation through an attack from a virtual machine on the physical host or on another virtual machine.
• Escaping the virtual machine and executing code in the context of the physical host OS, seizing control of that OS (jailbreaking, hyperjacking).

All network access and disk operations are mediated by the operating system on the Root VM. Filters in the hypervisor's virtual network control traffic to and from the virtual machines, among other things preventing sniffing-based attacks. In addition, other filters block broadcasts and multicasts (except, of course, DHCP leases). Moreover, the connection rules are cumulative: for example, if instances of roles A and B belong to different applications, then A can initiate a connection to B only if A is allowed to open Internet connections (which must be configured) and B is allowed to accept connections from the Internet.
As for packet filters: the controller takes its list of roles, translates it into a list of instances of those roles, and then translates that list into IP addresses, which the agent then uses to configure packet filters allowing intra-application connections to those addresses. Note that the Fabric Controller itself is well protected from potentially compromised Fabric Agents on the hosts. The communication channel between the controller and the agents is bidirectional; the agent implements an SSL-protected service that is called by the controller and can only respond to requests, never initiate a connection to the controller. In addition, any controller or device that cannot speak SSL is placed in a separate VLAN.
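The sketch below illustrates, in purely conceptual form, the translation chain just described: role list, to role instances, to IP addresses, to allow-rules for intra-application traffic. All names and addresses are made up for illustration; the real Fabric Controller logic is of course far more involved.

```python
# Conceptual illustration of the role -> instances -> IP addresses -> filter rules chain.
roles = {
    "WebRole": ["WebRole_IN_0", "WebRole_IN_1"],
    "WorkerRole": ["WorkerRole_IN_0"],
}

# Hypothetical instance-to-IP assignments made by the controller.
instance_ips = {
    "WebRole_IN_0": "10.0.0.4",
    "WebRole_IN_1": "10.0.0.5",
    "WorkerRole_IN_0": "10.0.0.6",
}

def build_packet_filter_rules(application_roles: dict, ips: dict) -> list:
    """Allow traffic only between IP addresses that belong to the same application."""
    app_ips = [ips[i] for instances in application_roles.values() for i in instances]
    rules = []
    for src in app_ips:
        for dst in app_ips:
            if src != dst:
                rules.append({"action": "allow", "src": src, "dst": dst})
    return rules

for rule in build_packet_filter_rules(roles, instance_ips):
    print(rule)
```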

VLANs are used quite actively in Windows Azure. First of all, they are used to isolate the controllers and other devices. VLANs partition the network so that no "conversation" between two VLANs is possible except through a router, which prevents a compromised node from doing harm - for example, spoofing traffic or eavesdropping on it. There are three VLANs in each cluster:
1) The primary VLAN connects untrusted client nodes.
2) The controller VLAN - trusted controllers and supporting systems.
3) The device VLAN - trusted infrastructure devices (for example, network equipment).

Note: communication between the controller VLAN and the primary VLAN is possible, but only the controller can initiate a connection to the primary VLAN, not vice versa. Similarly, communication from the primary VLAN to the device VLAN is blocked.
Encryption
An effective security tool is, of course, data encryption. As already mentioned more than once, everything that can be protected with SSL is protected with SSL. The client can also use the Windows Azure SDK, which extends the base .NET libraries with the .NET Cryptographic Service Providers (CSPs) integrated into Windows Azure, for example:
1) A full set of cryptography-related functionality, for example, MD5 and SHA-2.
2) RNGCryptoServiceProvider - a class for generating cryptographically strong random numbers with enough entropy for cryptographic use.
3) Encryption algorithms (for example, AES), verified by years of real use.
4) etc.
All control messages transmitted over communication channels within the platform are protected by the TLS protocol with cryptographic keys with a minimum length of 128 bits.
All operation calls to Windows Azure are made using standard SOAP-, XML-, and REST-based protocols. The communication channel may or may not be encrypted, depending on the settings.
Another thing to keep in mind when working with data in Windows Azure: at the storage services level, client data is not encrypted by default - it is stored in blob or table storage exactly as it is uploaded. If data must be encrypted, you can do this either on the client side or by using the Trust Services functionality ( http://www.microsoft.com/en-us/sqlazurelabs/labs/trust-services.aspx ), which provides server-side encryption. When using Trust Services, data can be decrypted only by authorized users.
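For the client-side option, the sketch below shows one possible approach, assuming the third-party `cryptography` package is available: data is encrypted with AES-GCM before being uploaded to blob or table storage as an opaque byte string, and the key never leaves the client. This is only an illustration of the idea, not the Trust Services mechanism itself.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

# The key is generated and kept on the client side; the storage service only ever sees ciphertext.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

def encrypt_for_upload(plaintext: bytes) -> bytes:
    """Encrypt data before it is written to blob or table storage."""
    nonce = os.urandom(12)                       # 96-bit nonce, unique per message
    return nonce + aesgcm.encrypt(nonce, plaintext, None)

def decrypt_after_download(blob: bytes) -> bytes:
    """Decrypt data retrieved from storage."""
    nonce, ciphertext = blob[:12], blob[12:]
    return aesgcm.decrypt(nonce, ciphertext, None)

blob_content = encrypt_for_upload(b"sensitive address book entry")
print(decrypt_after_download(blob_content))
```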

Microsoft Codename "Trust Services" is an application-level encryption framework for protecting sensitive data that your cloud applications store in Windows Azure. Data encrypted with the framework can be decrypted only by authorized clients, which makes it possible to distribute encrypted data. At the same time, searching over encrypted data, stream encryption, and separation of the data administration and publication roles are supported.
For particularly critical data, you can use a hybrid solution in which important data is stored on-premises while non-critical data is kept in Windows Azure storage or SQL Databases.
Integrity
When clients work with data in electronic form, they quite naturally expect the data to be protected against both intentional and accidental modification. On Windows Azure, integrity is ensured, first, by the fact that clients do not have administrative privileges on the virtual machines running on the compute nodes and, second, by running code under a Windows account with minimal privileges. There is no durable storage on the VM itself. Each VM is connected to three local virtual hard disks (VHDs):
* Disk D: contains one of several versions of Windows. Windows Azure provides various images and keeps them up to date; the client selects the most suitable version and can switch as soon as a new version of Windows becomes available.
* Disk E: contains an image created by the controller with the content provided by the client - for example, the application.
* Disk C: contains configuration information, swap files, and other overhead.
Disks D: and E: are, of course, virtual disks and are read-only (their ACLs deny access from client processes). However, an exception is made for the operating system: these virtual disks are implemented as a VHD plus a delta file. For example, when the platform updates the D: VHD containing the operating system, the delta file for that disk is wiped and repopulated; the same applies to the other disks. All disks are returned to their initial state if the role instance is migrated to another physical machine.
Availability
For any business or individual client who moves a service to the cloud, the highest possible availability is critically important - both for the service's consumers and for the client itself. The Microsoft cloud platform provides a layer of functionality that implements redundancy and thereby the highest possible availability of customer data.
The most important concept behind the basic availability mechanism in Windows Azure is replication. Let's look at the new mechanisms (and they really are new - the official announcement was on June 7, 2012) in more detail.
Locally Redundant Storage (LRS) provides storage with a high degree of durability and availability within a single geographic location (region). The platform keeps three replicas of each data item in one primary geographic location, which ensures that the data can be recovered after a common failure (for example, a failed disk, node, or rack) without downtime of the storage account and, accordingly, without affecting the availability and durability of the store. All writes to storage are performed synchronously across three replicas in three different fault domains, and only after all three operations complete successfully is the transaction acknowledged as successful. With locally redundant storage, however, if the data center holding all the replicas suffers a catastrophic failure, the data may become unavailable.
Geo Redundant Storage (GRS) provides a much higher degree of durability by placing replicas of the data not only in the primary geographic location but also in an additional location in the same region, hundreds of kilometers away. All data in the blob and table storage services is geo-replicated (queues are not). With geo-redundant storage the platform again keeps three replicas, but in each of two locations, so if one data center stops working, the data remains available from the second location. As with the first redundancy option, write operations in the primary geographic location must be confirmed before a success code is returned; once the operation is confirmed, replication to the other geographic location happens asynchronously. Let's look in more detail at what happens during geo-replication.
When you perform create, delete, or update operations against storage, the transaction is fully replicated to three different storage nodes in three different fault and upgrade domains in the primary geographic location, after which a success code is returned to the client; the confirmed transaction is then asynchronously replicated to the second location, where it is again fully replicated to three different storage nodes in different fault and upgrade domains. Overall performance does not suffer, since the geo-replication is done asynchronously.
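The following sketch models this write path in a purely conceptual way: the write is committed synchronously to three local replicas and acknowledged, while geo-replication to the secondary location is queued on a background thread. The replica objects and the executor are, of course, stand-ins for real storage infrastructure.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-ins for real storage replicas in the primary and secondary locations.
primary_replicas = [dict(), dict(), dict()]     # three fault/upgrade domains in the primary location
secondary_replicas = [dict(), dict(), dict()]   # three replicas in the secondary location

geo_replication_queue = ThreadPoolExecutor(max_workers=1)

def geo_replicate(key: str, value: bytes) -> None:
    """Asynchronously apply a committed transaction to the secondary location."""
    for replica in secondary_replicas:
        replica[key] = value

def write(key: str, value: bytes) -> str:
    """Commit synchronously to all primary replicas, then queue geo-replication."""
    for replica in primary_replicas:
        replica[key] = value                    # must succeed on all three before acknowledging
    geo_replication_queue.submit(geo_replicate, key, value)
    return "201 Created"                        # success is returned without waiting for geo-replication

print(write("contact:42", b"Alice, +1-555-0100"))
geo_replication_queue.shutdown(wait=True)       # in this sketch, wait so the demo exits cleanly
print(secondary_replicas[0]["contact:42"])
```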
As for geographic fault tolerance and how everything is restored after a serious disruption: if a serious failure occurs in the primary geographic location, Microsoft naturally tries to mitigate the consequences as much as possible. But if things go really badly and the data in the primary location is lost, the geo-failover rules may be applied - the client is notified of the disaster in the primary location, after which the corresponding DNS records (account.service.core.windows.net) are repointed from the primary location to the secondary one. During the DNS update some requests will of course fail, but once it completes, existing blobs and tables become available at their usual URLs. After the failover is complete, the secondary geographic location is promoted to primary (until the next data center failure), and immediately afterwards the process of provisioning a new secondary location in the same region and replicating data to it begins.
All of this is managed by the Fabric Controller. If the guest agents (GAs) installed on virtual machines stop responding, the controller moves the workload to another node and reprograms the network configuration to keep the service fully available.
The Windows Azure platform also has mechanisms such as update domains and fault domains, which keep a deployed service available even during operating system updates or hardware failures.
A fault domain is a physical unit of deployment, usually limited to a single rack. Why a rack? Because if instances are placed in different fault domains - different racks - they are laid out so that the probability of them failing together is low. In addition, a failure in one fault domain should not cause failures in other domains. So if something breaks in a fault domain, the whole domain is marked as failed and the deployment is moved to another fault domain. You currently cannot control the number of fault domains - that is handled by the Fabric Controller.
Update domains are a more controllable entity. The user has a certain level of control over update domains and can perform incremental or rolling updates of one group of service instances at a time. Update domains differ from fault domains in that they are logical, while fault domains are physical. Because an update domain groups roles logically, a single application can span several update domains while occupying only, say, two fault domains. Updates are then applied first in update domain #1, then in update domain #2, and so on, as sketched below.
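Here is a small conceptual sketch of that rolling-update order, assuming five instances spread across two fault domains and three update domains. The placement and the "update" step are invented purely for illustration.

```python
from collections import defaultdict
from itertools import cycle

instances = ["IN_0", "IN_1", "IN_2", "IN_3", "IN_4"]

# Hypothetical placement: fault domains are physical (racks), update domains are logical groups.
fault_domains = cycle([0, 1])        # two racks
update_domains = cycle([0, 1, 2])    # three rolling-update groups
placement = {
    name: {"fault_domain": next(fault_domains), "update_domain": next(update_domains)}
    for name in instances
}

def rolling_update(placement: dict) -> None:
    """Update one update domain at a time, so most instances stay online throughout."""
    by_update_domain = defaultdict(list)
    for name, slot in placement.items():
        by_update_domain[slot["update_domain"]].append(name)
    for ud in sorted(by_update_domain):
        print(f"Updating update domain {ud}: {by_update_domain[ud]}")

rolling_update(placement)
```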

Each data center has at least two power sources, including an autonomous power source. The environmental controls are autonomous and will function as long as the systems are connected to the Internet.