
A million WordPress visitors versus one server
The server, which will be the hero of this story, is an ordinary mid-range machine rented from FirstDedic: a dual-core Xeon E3110 at 3.00 GHz, 4 GB of RAM, and a 500 GB hard drive. It ran nginx 1.0.1 as the frontend and Apache 2 as the backend, with scripts executed in CGI mode.
The story concerns a site hosted on my server, or rather not a site but someone else's personal blog. The blog had previously seen traffic peaks of up to 10,000 visitors a day, and the server handled that load effortlessly, with no optimization at all, on stock configuration files.
And so, one fine Women's Day, an SMS arrives in the morning from the site monitoring service: the server is unreachable. News like that wakes me up instantly, and I try to ping the server. It responds, but very sluggishly. An SSH connection cannot be established, because all of the server's resources have been taken by some unknown process or processes.
Connecting via KVM, I sent the server to reboot and logged in over SSH as soon as it came back up. The process list was a terrifying sight: about 1,000 PHP processes running under the blog author's account, and a load average of over a hundred, a very scary figure, since it roughly shows how many processes are queuing for their share of resources.
Naturally, I only had time to see this much with the top command; a minute later the server stopped responding, and I had to reboot it again and disable Apache immediately after the restart. That guaranteed me a server that was not consuming all of its resources, and I could start the analysis. I counted the open connections with netstat and was horrified: there were more than 10,000 established connections to nginx. In other words, in the last minute there had been around ten thousand attempts to reach the client's site, a respectable load.
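For the record, the count came from something along these lines (the exact invocation is from memory and the flags vary between systems):
# count established TCP connections, illustrative one-liner
netstat -an | grep ESTABLISHED | wc -l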
Digging through the WordPress settings, with the client's consent of course, I found that the WP Super Cache caching plugin was active. I disabled it, since it was putting the heaviest load on the file system. With the plugin off, the site began firing a large number of queries at the database, which was hardly surprising. So the first thing I did was enable the query cache in MySQL, since all the load was coming from a single page that everyone was hitting. After the query cache was enabled the database breathed a little easier, though not as much as I would have liked, and the main load was now coming from WordPress itself.
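For reference, enabling the query cache comes down to a couple of lines in my.cnf; the values below are illustrative rather than the exact ones I used:
[mysqld]
query_cache_type  = 1     # cache SELECT results
query_cache_size  = 64M   # memory set aside for cached results (illustrative value)
query_cache_limit = 1M    # do not cache results larger than this
A quick SHOW STATUS LIKE 'Qcache%'; afterwards shows whether the cache is actually being hit.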
Disabling every plugin I could and rewriting the theme to use as few queries as possible did not bring the load down. I had to resort to extreme measures and enable forced caching of proxied requests in nginx. To do this, I added the following line to the http section:
# a cache zone named "wpblog": 10 MB of keys in shared memory, up to 10 MB of cached responses on disk
proxy_cache_path /path/to/cache levels=1:2 keys_zone=wpblog:10m max_size=10m;
In the server block for the site, inside the location that proxies requests to the backend, we write:
location / {
    proxy_cache_valid 200 3m;            # keep successful (200) responses for three minutes
    proxy_cache wpblog;                  # use the cache zone defined above
    proxy_pass http://127.0.0.1:8080;    # hand requests to Apache on the backend
}
As soon as I did this, the server load dropped sharply. It did, however, bring plenty of inconvenience for administration and commenting. Despite that, the immediate problem was solved. Still, common sense told me I would not always have the time or opportunity to enable this kind of caching by hand, and leaving it permanently on as-is was not an option for the blog's author. So caching had to be disabled for logged-in users and for visitors who had left a comment. The result is roughly the following server section:
proxy_cache_valid 200 3m;

location / {
    # caching is allowed by default...
    set $do_not_cache 0;
    # ...but skipped for commenters and logged-in users (WordPress auth cookies)
    if ($http_cookie ~* "comment_author_|wordpress_(?!test_cookie)|wp-postpass_") {
        set $do_not_cache 1;
    }
    proxy_no_cache $do_not_cache;        # do not store such responses
    proxy_cache_bypass $do_not_cache;    # and do not serve them from the cache
    proxy_cache wpblog;
    proxy_pass http://127.0.0.1:8080;
}

# the admin area and wp-*.php scripts always go straight to the backend
location ~* wp\-.*\.php|wp\-admin {
    proxy_pass http://127.0.0.1:8080;
}

# static files are served by nginx itself
location ~* ^.+\.(jpg|jpeg|gif|png|svg|js|css|mp3|ogg|mpe?g|avi|zip|gz|bz2?|rar)$ {
    root /path/to/static;
    access_log off;
    expires max;
    add_header Last-Modified $date_gmt;
}

# feeds are proxied to the backend as well
location ~* \/[^\/]+\/(feed|\.xml)\/? {
    proxy_pass http://127.0.0.1:8080;
}
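As always after edits like these, it is worth checking the configuration and reloading nginx without dropping connections, roughly like this:
nginx -t && nginx -s reload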
After this, the remaining inconveniences in day-to-day work are more than made up for by the site staying online.
As a result of all of the above, the server was unavailable for about two hours, and during that time the flow of traffic dropped significantly. But after this incident there were other, similar holiday traffic spikes, and the site rode them out without putting any noticeable load on the server. Since then I try to apply this configuration to every WordPress site I host.
Once the server finally came back up, Google Analytics on the client's site showed 6,000 visitors online. That figure fell rapidly, because the search query that had put the site at the top of every search engine was losing relevance by the minute. By the end of the day the visitor count had reached seven digits, but the owner of the resource still glares at me, because the number, and his income along with it, could have been several times higher.
With this config in place, I can say with confidence that the server can carry several such sites, especially since my own project, after being added to Google News (and, for that matter, Yandex Blogs), also began to attract a lot of traffic.