8 Goals of Intel Software Guard Extensions

Original author: Matthew Hoekstra
  • Translation

One of the functional innovations that appeared in the sixth-generation Intel Core processors (Skylake) was Intel Software Guard Extensions (Intel SGX). A quick search will confirm that there is still not much information about it on the Internet. We decided to fill this gap, especially since we had at hand an article by one of the developers of this technology, Matthew Hoekstra, in which he describes the goals pursued by Intel SGX. Below is our translation.

In essence, Intel SGX is a set of new processor instructions that applications can use to set aside private regions of code and data. In creating this technology, we pursued the following goals.

Goal 1. Allow application developers to protect sensitive data from unauthorized access or modification by malicious software running with higher privileges.
I would like to highlight several fundamental points in this goal. First, protecting sensitive data means ensuring both its confidentiality (preventing leaks) and its integrity (preventing tampering). Second, it is necessary to protect not only data but also code (for example, an attacker could easily reach the data by modifying or disabling an authorization check). Third, data must be protected not only when it is stored in encrypted form, but also at run time, when it is decrypted and actively used in computations. Finally, it is critical to protect the runtime against malware that bypasses the privilege control system in order to obtain a higher level of rights.
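The pairing of confidentiality and integrity described above can be illustrated with an encrypt-then-MAC sketch. This is only a software analogy for what SGX enforces in hardware; the SHA-256-based stream cipher below is a toy for illustration (a real system would use an authenticated cipher such as AES-GCM), and all function names here are ours, not part of any SGX API:

```python
import hashlib
import hmac
import secrets


def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream from SHA-256 (toy cipher, illustration only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]


def protect(key_enc: bytes, key_mac: bytes, data: bytes):
    """Encrypt-then-MAC: the cipher gives confidentiality, the tag gives integrity."""
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(data, keystream(key_enc, nonce, len(data))))
    tag = hmac.new(key_mac, nonce + ct, hashlib.sha256).digest()
    return nonce, ct, tag


def unprotect(key_enc: bytes, key_mac: bytes, nonce: bytes, ct: bytes, tag: bytes) -> bytes:
    """Verify integrity first, then decrypt; tampering raises an error."""
    expected = hmac.new(key_mac, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("data was tampered with")
    return bytes(a ^ b for a, b in zip(ct, keystream(key_enc, nonce, len(ct))))
```

Note that the integrity check comes before decryption: either property alone is insufficient, which is exactly the point the goal makes about protecting data against both leakage and falsification.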

Goal 2. Allow applications to preserve the confidentiality and integrity of sensitive data and code without interfering with the privileged system software that schedules and controls platform resources.
Sensitive data and code must be protected from malware running with a high privilege level, yet the privileged system software must be able to keep doing its job unimpeded. Protected applications must not take over or disrupt basic system functions such as task scheduling and device management. Operating systems have evolved over the years to perform these tasks well, and creating a parallel entity to duplicate them would be impractical.

Goal 3. Allow users of computing devices to retain control of them, including the freedom to install and remove applications and services.
Running a trusted application should not require any special configuration or limit the user's control over their own computer. A common security practice today is to severely restrict the set of applications that can be loaded onto a platform: game consoles, smartphones, and similar devices usually ship with a specialized operating system that constrains the availability and behavior of applications in order to head off security problems.

Corporate deployments may impose even stricter additional restrictions (for example, on connecting USB drives). There is nothing inherently wrong with these measures, but they should not be a prerequisite for data confidentiality and integrity. This becomes even more obvious on a personal PC, where the need for a trusted environment is as great as the need for personalization.

Goal 4. Allow the platform to measure a trusted application's code and produce, via the processor, a signed attestation that includes this measurement along with other certificates confirming that the code was correctly initialized in a trusted environment.
By giving the user control over the software on the platform, we create the problem of trusted application delivery. How can anyone be sure that the platform has all the primitives needed to support the trusted computation the application requires, or that the installed application has not been tampered with? In other words, how can an application prove that it is trusted?
To determine whether an application was loaded and initialized correctly, one can compare the application's measurement (the cryptographic hash of its memory footprint at a predetermined point of execution) against the expected value obtained from a system considered trusted. To confirm its origin, the measurement is signed with a private key known only to the trusted system that performed it.

Note that developers cannot rely on a measurement taken in software by the system; as mentioned earlier, software can always be virtualized or subverted by malware with a sufficient level of privileges. Thus, the measurement must be performed in hardware, by the same component that creates the trusted environment, loads and initializes the trusted application, and computes over the sensitive data.
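The measure-then-sign flow described above can be sketched in software. This is a conceptual analogy, not the SGX mechanism itself: HMAC with a shared `platform_key` stands in for the asymmetric signature a real processor produces with a hardware-protected key, and all names here are hypothetical:

```python
import hashlib
import hmac


def measure(code_pages: list) -> bytes:
    """Hash the application's memory image at a fixed point: the 'measurement'."""
    h = hashlib.sha256()
    for page in code_pages:
        h.update(page)
    return h.digest()


def quote(measurement: bytes, platform_key: bytes) -> bytes:
    """Sign the measurement. A real CPU signs with a key protected in
    hardware; HMAC stands in for that signature in this sketch."""
    return hmac.new(platform_key, measurement, hashlib.sha256).digest()


def verify(code_pages: list, expected_measurement: bytes,
           signature: bytes, platform_key: bytes) -> bool:
    """A remote party checks both the measurement and its signature."""
    m = measure(code_pages)
    if m != expected_measurement:
        return False  # the loaded code differs from the expected image
    return hmac.compare_digest(signature, quote(m, platform_key))
```

Any modification to the code pages changes the hash and fails verification, which is why the measurement must be taken by the same hardware component that loads the application: software taking its own measurement could simply lie.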

Goal 5. Allow developers to create trusted applications using tools and processes known to them.
The first four goals deliver the benefits of a more closed environment by shrinking the set of entities that must be trusted, while platforms remain open and the user's choice remains free. It does not follow, however, that the software development process can stay unchanged. If developers had to radically overhaul their processes, or were forced to write software for a proprietary security controller, productivity would drop significantly.

Goal 6. Allow the performance of trusted applications to scale with processor performance.
This goal grows out of the idea of minimizing the impact on the existing software development process. One of its driving forces is that developers try to get the most out of ever-increasing processor performance, so trusted applications should not suffer a performance penalty compared to ordinary ones.

Goal 7. Allow software vendors to distribute and update applications using the methods most convenient for them.
If the proposed solution required independent software vendors to work closely with platform manufacturers to preinstall their applications at manufacturing time, or if software updates could only ship as part of a system update, that too would hamper the creation of innovative products.

Goal 8. Allow applications to define protected regions of code and data that remain confidential even when an attacker physically controls the platform and can mount direct attacks on its memory.
An effective solution must protect against various kinds of hardware attacks, including cases where the platform is physically in the adversary's hands. Researchers at Princeton University demonstrated one such attack; other variants are possible using memory bus analyzers or similar techniques.
