Using and protecting legacy systems in the modern world

Original author: Lior Neudorfer

Legacy infrastructure is still an important part of enterprises in many industries: medical organizations still run Windows XP, Oracle databases run on servers with the aging Solaris operating system, various business applications require Linux RHEL 4, and ATMs run versions of Windows that went out of date ten years ago. Many organizations also still operate legacy servers on Windows Server 2008.

Legacy infrastructure is very common in data centers, especially in large enterprises. For example, older AIX machines often perform critical processing of large transaction volumes in banks, while endpoints such as ATMs, medical devices, and point-of-sale terminals frequently run operating systems whose support ended long ago. Upgrading the applications that run on this infrastructure is a slow, difficult process that usually takes years.

Insecure legacy systems put your data center at risk


The risk that improperly protected legacy systems pose to an organization is very high and extends beyond the workloads of those systems themselves. For example, an unpatched device running Windows XP can easily be used as a foothold into the rest of the data center. Earlier this month we received a reminder of exactly this kind of risk, when Microsoft released a security update for a serious vulnerability that allowed remote code execution on older operating systems such as Windows XP and Windows Server 2003.

If attackers gain access to such an unprotected machine (which is much easier than compromising a modern, well-patched server), they can use it to move laterally through the network. As data centers grow more complex, expand into the public cloud, and adopt newer technologies such as containers, the risk of a breach increases. The interdependencies between business applications (legacy and otherwise) become more complex and dynamic, which makes traffic patterns harder to understand and manage from a security standpoint. This gives attackers more freedom to move unnoticed between different parts of the infrastructure.

Old infrastructure, new risk


Legacy systems have been with us for years, but the security risks they pose keep growing. As organizations go through digital transformation, modernize their infrastructure and data centers, and move to hybrid clouds, attackers gain more opportunities to reach mission-critical internal applications.

A business application on a legacy system that was once reached by only a handful of other on-premises applications may now be reached by a large number of applications, both on-premises and in the cloud. Exposing legacy systems to a growing number of applications and environments expands the potential attack surface.

So the question is: how can we reduce this risk? How do we keep legacy but still business-critical components secure, while ensuring that new applications can be deployed quickly on modern infrastructure?

Risk identification


The first step is to correctly identify and quantify the risk. Existing inventory systems and so-called "tribal knowledge" are probably not enough; you should always strive for a complete, accurate, and up-to-date view of your environment. For legacy systems, getting the right information can be particularly difficult, since knowledge of these systems within an organization tends to fade over time.

The security team should use a good analysis and visualization tool to build a map that answers the following questions (a small inventory-scan sketch follows the list):

  1. Which servers and endpoints are running legacy operating systems?
  2. What environments and business applications do these workloads relate to?
  3. How do these workloads interact with other applications and environments? Over which ports, through which processes, and for what business purpose?
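
As a simple illustration of the first two questions, here is a minimal Python sketch that flags legacy operating systems in an inventory export. The CSV layout and column names (hostname, os, environment, application) are hypothetical assumptions for the example, not the format of any particular tool:

```python
# Minimal sketch: flag workloads running legacy operating systems in a
# CMDB/inventory CSV export. Column names are hypothetical examples.
import csv

# OS name fragments treated as "legacy" for this illustration
LEGACY_OS_MARKERS = ("windows xp", "windows server 2003",
                     "windows server 2008", "rhel 4", "solaris", "aix")

def find_legacy_workloads(inventory_path: str):
    """Yield inventory rows whose OS field matches a known legacy marker."""
    with open(inventory_path, newline="") as f:
        for row in csv.DictReader(f):
            os_name = row.get("os", "").lower()
            if any(marker in os_name for marker in LEGACY_OS_MARKERS):
                yield row

if __name__ == "__main__":
    for wl in find_legacy_workloads("inventory.csv"):
        print(f'{wl["hostname"]}: {wl["os"]} '
              f'(env={wl.get("environment", "?")}, app={wl.get("application", "?")})')
```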

Answering these questions is the starting point for reducing your security risk. The answers show which workloads pose the greatest risk to the organization, which business processes could be damaged in an attack, and which network routes attackers could use to move laterally between legacy and non-legacy systems across clouds and data centers. Users are often surprised to see unexpected data flows to and from their legacy machines, which raises further questions about their security posture and risk.
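
To make that last point concrete, here is a minimal sketch of how such unexpected flows could be flagged from an exported flow log. The host names, the allowlist, and the CSV layout (src, dst, dst_port) are all hypothetical assumptions; a real deployment would pull this data from a flow collector or the visualization tool itself:

```python
# Minimal sketch: flag undocumented flows that touch legacy hosts.
# All names and the CSV layout below are invented for illustration.
import csv

LEGACY_HOSTS = {"atm-012", "aix-core-01", "xp-imaging-03"}  # hypothetical

# Flows the application owners have documented as legitimate
ALLOWED_FLOWS = {
    ("aix-core-01", "db-oracle-01", "1521"),  # transaction DB access
    ("atm-012", "atm-gateway", "443"),        # ATM gateway traffic
}

def unexpected_flows(flow_log_path: str):
    """Yield flows that involve a legacy host but are not on the allowlist."""
    with open(flow_log_path, newline="") as f:
        for row in csv.DictReader(f):
            flow = (row["src"], row["dst"], row["dst_port"])
            touches_legacy = row["src"] in LEGACY_HOSTS or row["dst"] in LEGACY_HOSTS
            if touches_legacy and flow not in ALLOWED_FLOWS:
                yield flow

if __name__ == "__main__":
    for src, dst, port in unexpected_flows("flows.csv"):
        print(f"review: {src} -> {dst}:{port}")
```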

A good analysis and visualization tool will also help you identify and analyze systems that should be migrated to other environments. Most importantly, a visual map of information flows lets you easily plan and enforce a strict segmentation policy for these resources. A well-planned policy significantly reduces the risk to which these older machines are exposed.

Reducing risk with microsegmentation


Network segmentation is widely used as a cost-effective way to reduce risk in data centers and clouds. Micro-segmentation in particular lets users create a tight, granular security policy that significantly limits an attacker's ability to move laterally between workloads, applications, and environments.

When working with legacy infrastructure, the value of a good analysis and microsegmentation tool becomes even clearer. Older segmentation methods, such as VLANs, are difficult to operate, and they often put all similar legacy systems in one segment, leaving the entire group exposed if a single machine is compromised. In addition, gateway rules between legacy VLANs and other parts of the data center are difficult to maintain, which leads to over-permissive policies that increase overall risk. With proper visibility into both legacy and modern workloads, the security team can plan a server-level policy that allows only specific, narrowly defined flows to and from legacy systems, blocking everything else.
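
As a rough illustration of what such a server-level policy could look like, the sketch below renders a hypothetical allowlist as default-deny iptables rules for a single legacy Linux server. The addresses, ports, and flow list are invented for the example; on Windows XP-era machines the same policy would have to be enforced by an agent or an upstream firewall instead:

```python
# Minimal sketch: turn an approved flow list into host-level iptables
# rules with a default-deny inbound posture. All values are hypothetical.
ALLOWED_INBOUND = [
    ("10.20.0.15", 1521, "tcp"),  # app server -> Oracle listener
    ("10.20.0.7", 22, "tcp"),     # jump host  -> SSH for maintenance
]

def iptables_rules(flows):
    """Render an allowlist as iptables commands with a default DROP policy."""
    rules = [
        "iptables -P INPUT DROP",
        "iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT",
    ]
    for src, port, proto in flows:
        rules.append(
            f"iptables -A INPUT -p {proto} -s {src} --dport {port} -j ACCEPT")
    return rules

if __name__ == "__main__":
    print("\n".join(iptables_rules(ALLOWED_INBOUND)))
```

The key design choice is the default DROP policy: any flow not explicitly documented as a business need is blocked, which is exactly the property that limits lateral movement through a legacy machine.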

Breadth of coverage is key


When choosing a microsegmentation solution, make sure the one you pick can be deployed easily across your entire infrastructure and covers all workload types in your data centers and clouds. If you segment modern applications but leave legacy systems unattended, you leave a big security hole in your infrastructure.

Personally, I believe security vendors should take on the task of covering the entire infrastructure in order to help their customers cope with this growing threat. While some vendors focus only on modern infrastructure and decline to support older operating systems, I believe a good, mature security platform should cover the entire spectrum of infrastructure.

Overcoming the challenges of legacy systems


Legacy systems present a unique problem for organizations: they are critical to the business, yet harder to maintain and rarely properly protected. As organizations migrate to hybrid clouds and their attack surface grows, special care must be taken to protect legacy applications. To do this, the security team must accurately identify legacy servers, understand their interdependencies with other applications and environments, and control the risk with a strict segmentation policy. Leading microsegmentation vendors should be able to cover legacy systems without sacrificing support for any other type of infrastructure. The Guardicore Centra platform provides the ability to analyze, visualize, and segment workloads across both legacy and modern infrastructure.
