
Corporate Crown Jewels - How to Protect Data at the Database Level
Over the past year, the press has once again been buzzing about database leaks.

While many organizations consider this an unavoidable evil, the reality is that there are many things that enterprises can do right now to prevent unauthorized access to their systems and data.
Point number 1. Understand your IT system's settings - check the configuration
Many data leaks occur because of simple configuration errors - database settings that inadvertently lower the level of system security and so increase the risk of a breach.
They can be as simple as an inadequate password policy, or as complex as poorly configured encryption on the corporate network.
The Security Technical Implementation Guides (STIG) are a good resource here.
It is also worth consulting a public repository of security configuration checklists for your software (the National Checklist Program Repository).
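As a small illustration of this kind of configuration check, here is a query (assuming SQL Server; `sys.sql_logins` is a real system view) that finds logins exempt from the operating system's password policy - exactly the sort of quiet misconfiguration an audit should catch:

```sql
-- Find SQL logins that bypass the OS password policy or expiration rules.
SELECT name,
       is_policy_checked,     -- 0 = password complexity policy not enforced
       is_expiration_checked  -- 0 = password never expires
FROM sys.sql_logins
WHERE is_policy_checked = 0
   OR is_expiration_checked = 0;
```

Any login this query returns should either be fixed or explicitly documented as an accepted exception.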
Point number 2. Encrypt all your data
Data is one of the most valuable business assets, and encryption is an important step in ensuring its security.
While many still believe that encryption affects performance, the advent of cloud computing and the latest encryption technologies often means that this is no longer the case.
All major database vendors offer some form of database encryption, usually calling the feature something like Transparent Data Encryption (TDE).
This type of encryption defends against attacks that try to bypass the database's access control mechanisms: it prevents attackers from reading your data through the operating system, backups, or the storage array, protecting the database from that whole class of external attacks.
Do not get bogged down in continuously analyzing which data is confidential - the cost of constantly classifying data and re-applying encryption is outweighed by the simplicity of just encrypting all application data.
Database-level encryption protects the database and backup files from a raid or from outright physical theft. In the early 2000s, unidentified men broke into a tax office in the province of Ontario (Canada), knocked out the security guards, and carried the server away with them.
If the attacker does not have the master key (which should be kept in a safe place, separate from the server), the data is relatively safe.
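In SQL Server, for example, enabling TDE and safeguarding the key takes only a few statements (a sketch - the database, certificate, and file names are placeholders):

```sql
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE protector';

USE SalesDb;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TdeCert;
ALTER DATABASE SalesDb SET ENCRYPTION ON;

-- Back up the certificate and keep it OFF the server: without it,
-- a stolen database or backup file cannot be restored or read.
USE master;
BACKUP CERTIFICATE TdeCert TO FILE = 'D:\keys\TdeCert.cer'
    WITH PRIVATE KEY (FILE = 'D:\keys\TdeCert.pvk',
                      ENCRYPTION BY PASSWORD = '<another strong password>');
```

The certificate backup is the "master key in a safe place" from the point above - losing it means losing the data along with the attackers.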
Point number 3. Use an SQL database firewall
Encrypting the database is not enough. An attacker with access to the corporate network and the database server can simply query the database and read the data with standard client tools, and a database administrator can install an application directly on the database server and siphon the data off that way.
A properly configured database firewall monitors the database, proactively detecting - and even blocking - unauthorized access, connections that bypass the application, and SQL injection.
When setting up your database firewall, you must define policies that can help you easily identify abnormal activity.
In most cases, database workloads are repetitive - with a well-defined group of application servers and clients using the same consistent set of programs to access the database.
Different database firewall vendors offer their own paradigms for policy development, but almost all of them have some way of flagging exceptions to normal client activity.
In some cases, this profiling of normal activity can be as detailed as defining normal SQL activity for the database, so the database firewall can even block SQL injection attacks.
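As one concrete example of this learn-then-enforce approach, MySQL Enterprise Firewall builds an allow-list of statement patterns per account (the account name here is hypothetical):

```sql
-- Record the application's normal statement patterns for a while...
CALL mysql.sp_set_firewall_mode('app_user@10.0.0.%', 'RECORDING');

-- ...then switch to enforcement: statements that do not match the
-- recorded allow-list (including injected SQL) are rejected.
CALL mysql.sp_set_firewall_mode('app_user@10.0.0.%', 'PROTECTING');
```

Other vendors' products differ in syntax, but the profiling idea - record the repetitive workload, then treat deviations as suspect - is the same.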
Point number 4. Control everything - audit your database
Regular database auditing is one of the best ways to minimize the risk that your data will be exposed to external threats or unauthorized access.
Organizations do not know what they do not know, and without regular audit processes it is impossible to determine where the vulnerabilities are and where, because of improper configuration, confidential data is left unprotected.
This is especially important if you bought or commissioned a product from a third-party organization, which, on discovering a vulnerability, may be in no hurry to publicize it because of the reputational risk.
Keep in mind that a network monitor can only see plain-text commands passing over the network; traffic that is encrypted with SSL/TLS, for example, is invisible to it.
If your database allows direct local connections that are not routed over the network, the database firewall may not see them.
It is also good practice to periodically check the integrity of all objects in the database and review when each object was created or last modified.
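In SQL Server this check is a one-liner against the `sys.objects` system view - an object created or modified recently that nobody remembers deploying deserves a closer look:

```sql
-- List user objects created or changed in the last 7 days.
SELECT name, type_desc, create_date, modify_date
FROM sys.objects
WHERE is_ms_shipped = 0
  AND modify_date > DATEADD(DAY, -7, GETDATE());
```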
Point number 5. Limit visibility - set access rules
There are many ways an organization can restrict access to sensitive data without affecting the work of its employees.
Try to restrict your users and administrators to those privileges that are necessary for their business tasks.
The first step is to determine what data is needed for each business task, and then establish strict rules for accessing certain sets of business data.
This is one of the most important tools to help prevent data abuse within the company.
If your database supports it, use access control mechanisms to separate the duties of the database and system administrators from management of the data itself.
At a minimum, audit privileged users' access to the data.
Some databases (for example, MS SQL Server) do not let you protect the database from its own system administrator.
In that case more drastic measures are needed: split the data into parts, and make sure no single person has physical access to all parts of the database.
A cheaper option is to grant the administrator access to the database only under supervision and within a defined time window.
If this is your internal product, the best approach is a server application that connects through an Application Role in the database.
A third-party application should not have access to the tables at all!
Read data only through views or functions.
Make any data change only through stored procedures.
The views and functions themselves should hold only read permissions on the tables.
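A minimal sketch of this pattern in T-SQL (the table, view, procedure, and role names are hypothetical):

```sql
-- The application connects under a dedicated role; tables stay hidden.
CREATE ROLE app_role;
DENY SELECT, INSERT, UPDATE, DELETE ON dbo.Accounts TO app_role;
GO

-- Reads go through a view exposing only what the application needs.
CREATE VIEW dbo.vAccountSummary AS
    SELECT account_id, owner_name, balance FROM dbo.Accounts;
GO
GRANT SELECT ON dbo.vAccountSummary TO app_role;
GO

-- Writes go through a procedure that enforces the business rules.
CREATE PROCEDURE dbo.usp_Deposit @account_id INT, @amount MONEY AS
BEGIN
    UPDATE dbo.Accounts
    SET balance = balance + @amount
    WHERE account_id = @account_id;
END;
GO
GRANT EXECUTE ON dbo.usp_Deposit TO app_role;
```

This works because of ownership chaining: when the view, procedure, and table share an owner, the table's DENY is not re-checked inside them, so the role can use the curated interfaces but never touch `dbo.Accounts` directly.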
Time-based access restrictions - if a bank operator works from 8 to 5, database access from that workstation outside those hours should be blocked; this can be implemented with a logon trigger that rolls back the connection.
Volume restrictions - if an operator processes 10-20 documents per day, block database access after, say, 50; this prevents someone from dumping all the documents at once.
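The time-based restriction can be sketched as a SQL Server logon trigger (the login name and hours are assumptions):

```sql
-- Reject connections from the operator's login outside business hours.
CREATE TRIGGER trg_BusinessHoursOnly
ON ALL SERVER
FOR LOGON
AS
BEGIN
    IF ORIGINAL_LOGIN() = 'bank_operator'
       AND (DATEPART(HOUR, GETDATE()) < 8
            OR DATEPART(HOUR, GETDATE()) >= 17)
        ROLLBACK;  -- the connection attempt is cancelled
END;
```

Be careful with logon triggers: a bug in one can lock everyone out, so test it on a non-critical login first and keep the dedicated administrator connection available.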
Point number 6. Mask the data to reduce the impact of a successful attack
Sometimes application developers and administrators require a test environment to develop, maintain, and deploy business applications.
In many cases, testing and development require data sets equivalent in size and complexity to production, so many organizations clone the production database to create these lower-level environments.
When this happens, the security risk inherent in the production database suddenly increases, since now there are two (or maybe FIVE) copies of the data.
Reduce risk by masking data - replacing sensitive data with artificially generated or encrypted data that doesn't reveal true data.
The industry term for this is static data masking.
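A static mask can be as simple as an UPDATE run on the cloned copy (the table and column names here are made up for illustration):

```sql
-- Run on the test/dev clone only, never on production.
UPDATE dbo.Customers
SET    full_name   = CONCAT('Customer ', customer_id),
       email       = CONCAT('user', customer_id, '@example.com'),
       card_number = CONCAT('XXXX-XXXX-XXXX-', RIGHT(card_number, 4));
```

The data keeps its realistic shape and volume for testing, but a leaked clone no longer exposes anyone.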
If you are selling a boxed product, you will most likely request a copy of the database from customers.
When I was at RIM, copies of databases from hundreds of organizations, commercial and governmental, passed through my hands.
Not one government organization (whose names I will not say aloud to anyone) ever masked its data.
Some commercial ones encrypted their backups, a couple (Walmart and a certain Japanese security company) sent only the necessary tables, and just one company sent the individual fields in a text file.
In other words, the scale of the problem is truly catastrophic.
There is another way to improve system security: dynamic data masking.
It is non-destructive and does not change the underlying data. For example, in most cases someone with access to a credit card number should not see the entire number - only the last four digits.
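SQL Server's built-in dynamic data masking, for instance, can express that credit-card rule declaratively (the table, column, and role names are hypothetical):

```sql
-- Non-privileged users see 'XXXX-XXXX-XXXX-1234' instead of the real number;
-- the stored value itself is unchanged.
ALTER TABLE dbo.Customers
ALTER COLUMN card_number
    ADD MASKED WITH (FUNCTION = 'partial(0,"XXXX-XXXX-XXXX-",4)');

-- Only explicitly trusted roles may see the unmasked value.
GRANT UNMASK TO fraud_investigators;
```

Note that dynamic masking is a presentation-layer control, not encryption: it complements, rather than replaces, the static masking of clones described above.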
Based on the article: Securing Enterprise Crown Jewels: How to Protect Data at DB Level