PostgreSQL testing with HugePages on Linux
- Translation
The Linux kernel provides a wide range of configuration options that can affect performance, and the key is to choose the right configuration for your application and workload. Like any other database, PostgreSQL relies on proper tuning of the Linux kernel; incorrect settings can result in poor performance, so it is important to benchmark the database after every tuning session. In one of my previous posts, "Tune Linux Kernel Parameters For PostgreSQL Optimization", I described some of the most useful Linux kernel parameters and how they help improve database performance. Here I will share benchmark results obtained after configuring HugePages on Linux under various PostgreSQL workloads. I ran a complete set of tests covering a variety of PostgreSQL workloads with different numbers of parallel clients.
Machine used for testing
- Supermicro Server:
- Intel® Xeon® CPU E5-2683 v3 @ 2.00 GHz
- 2 sockets / 28 cores / 56 threads
- Memory: 256 GB RAM
- Storage: SAMSUNG SM863 1.9TB Enterprise SSD
- File system: ext4 / xfs
- OS: Ubuntu 16.04.4, kernel 4.13.0-36-generic
- PostgreSQL: version 11
Linux kernel settings
I used the default kernel parameters without any additional tuning, except that Transparent HugePages were disabled. Transparent HugePages are enabled by default and allocate pages of a size that is not recommended for databases. Databases generally need HugePages of a fixed size, which Transparent HugePages cannot guarantee, so it is always recommended to disable this feature and use classic HugePages instead.
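The post does not show the exact commands used to disable Transparent HugePages; a minimal sketch of the usual approach on this kernel looks like this:
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
To make the change persistent across reboots, it is typically added to an init script or passed on the kernel command line as transparent_hugepage=never.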
PostgreSQL settings
I used the same PostgreSQL settings for all tests, so that only the Linux HugePages configuration varied across the different PostgreSQL workloads. The following PostgreSQL settings were used for all tests:
shared_buffers = '64GB'
work_mem = '1GB'
random_page_cost = '1'
maintenance_work_mem = '2GB'
synchronous_commit = 'on'
seq_page_cost = '1'
max_wal_size = '100GB'
checkpoint_timeout = '10min'
checkpoint_completion_target = '0.9'
autovacuum_vacuum_scale_factor = '0.4'
effective_cache_size = '200GB'
min_wal_size = '1GB'
wal_compression = 'ON'
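The list above does not show PostgreSQL's huge_pages parameter, which controls whether shared memory is requested with HugePages (off, on, or try; the default is try). As a rough sizing sketch, with shared_buffers = '64GB' and 2 MB pages, the kernel has to provide at least 64 GB / 2 MB = 32768 pages, plus some headroom for the rest of the shared memory segment:
huge_pages = 'try'    # default; set to 'on' to make startup fail if HugePages cannot be allocated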
Testing scheme
The testing scheme plays an important role. All tests were performed three times, with each run lasting 30 minutes, and I averaged the results of the three runs. Testing was performed with the PostgreSQL pgbench tool, which works with a scale factor where each unit of scale corresponds to roughly 16 MB of data.
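The exact pgbench invocations are not given in the post; a sketch of a single run, assuming a database named pgbench_db, a scale of 3000 (roughly 48 GB at about 16 MB per unit of scale), and 32 clients, would look like this:
pgbench -i -s 3000 pgbench_db                  # initialize the test database
pgbench -c 32 -j 32 -T 1800 pgbench_db         # one 30-minute run with 32 clients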
Hugepages
By default, Linux uses 4 KB memory pages, but it also provides HugePages. BSD has a similar feature called Super Pages, and Windows has Large Pages; PostgreSQL supports only HugePages, and only on Linux. When the amount of memory in use is large, small pages hurt performance. With HugePages, each page maps more memory for the application, which reduces the overhead incurred during page allocation and lookup. Thus, HugePages improve performance.
Here are the HugePages settings with a 1 GB page size; this information is available at any time from /proc/meminfo:
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
HugePages_Total: 100
HugePages_Free: 97
HugePages_Rsvd: 63
HugePages_Surp: 0
Hugepagesize: 1048576 kB
I wrote more about HugePages in a previous post:
https://www.percona.com/blog/2018/08/29/tune-linux-kernel-parameters-for-postgresql-optimization/
In general, HugePages come in 2 MB and 1 GB sizes, so it makes sense to use the 1 GB size.
https://kerneltalks.com/services/what-is-huge-pages-in-linux/
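The post does not show how the pages themselves were reserved. As a sketch: 2 MB pages (the usual default HugePage size on x86_64) can be allocated at runtime through vm.nr_hugepages, while 1 GB pages normally have to be reserved at boot via kernel command-line parameters. The hugepages=100 value below matches the HugePages_Total shown above; the nr_hugepages value is an assumption sized for a 64 GB shared buffer plus headroom:
sysctl -w vm.nr_hugepages=33000                        # 2 MB pages, allocated at runtime
default_hugepagesz=1G hugepagesz=1G hugepages=100      # 1 GB pages: add to GRUB_CMDLINE_LINUX, run update-grub, reboot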
Test results
This test shows the overall effect of using HugePages of various sizes. The first set of tests was run with the 4 KB page size that Linux uses by default, without activating HugePages. Let me remind you that Transparent HugePages were turned off for the duration of the tests.
Then a second set of tests was performed for HugePages of 2 MB. Finally, a third set of tests was performed for HugePages of 1 GB.
All comparative tests used PostgreSQL version 11. The test sets cover combinations of different database sizes and different numbers of clients. The graph below shows the results of the performance comparison: TPS (transactions per second) on the Y axis, and database size together with the number of clients for that database size on the X axis.
The graph above shows that the gain from HugePages grows as the number of clients and the size of the database increase, as long as the database still fits within the pre-allocated shared buffer.
This test compares TPS against the number of clients. Here the database size is 48 GB; TPS is shown on the Y axis and the number of connected clients on the X axis. The database is small enough to fit entirely in the 64 GB shared buffer.
With 1 GB HugePages, the relative performance gain grows as the number of clients increases.
The following graph is the same as the previous one, but with a database size of 96 GB, which is larger than the 64 GB shared buffer.
The main thing to note here is that performance with 1 GB HugePages improves as the number of clients grows and ultimately ends up better than with 2 MB HugePages or the standard 4 KB pages.
This test shows TPS as a function of database size. Here the number of connected clients is 32; TPS is shown on the Y axis and database sizes on the X axis.
As expected, when the size of the database exceeds the size of the pre-allocated HugePages, the performance is significantly reduced.
Conclusion
One of my main recommendations is to disable Transparent HugePages. The biggest performance gain comes when the database fits in the shared buffer with HugePages enabled. Finding the optimal HugePages size takes trial and error, but it can lead to a significant TPS gain when the database is large yet still fits in the shared buffer.