LAMP + Nginx on a VPS: stable and headache-free
The task: host several lightly loaded sites on minimal VPS resources, quickly and conveniently, with minimal problems in the future and without falling over at peak load.
Basic principles:
1. OS - Centos-6 x86_64, because it is stable, convenient and easy to update.
2. No self-assembled software. As they say, "the make && make install command turns any distribution into Slackware."
A small clarification: at the moment I use the v256 plan from the hosting provider flynet.pro (256 MB of RAM) and do not expect heavy load, so most of what follows assumes that amount of RAM; in general, though, the solutions are easily portable to virtually any plan from any hosting provider.
One more clarification: this hosting is set up "for yourself." Several points that matter if you give site administration access to strangers are not covered here in enough detail.
Go.
1. Check for updates.
The installation image from the hosting provider may not be very fresh. If there is something to update, we update; if not, we are happy.
[root@test ~]# yum update
2. Connect the EPEL repository (http://fedoraproject.org/wiki/EPEL), from which we will install the missing software.
[root@test ~]# rpm -ihv download.fedora.redhat.com/pub/epel/6/x86_64/epel-release-6-5.noarch.rpm
3. Install the software we need.
[root@test ~]# yum install httpd mysql-server php vsftpd mc phpMyAdmin php-eaccelerator sysstat crontabs tmpwatch
Briefly about the software:
httpd - Apache, the standard version for Centos-6, 2.2.15
mysql-server - MySQL 5.1.52
php - PHP 5.3.2
vsftpd - a fairly convenient FTP server, vsftpd 2.2.2
mc - some things are still more convenient to do in mc than on the command line.
phpMyAdmin - same idea as mc: managing mysql databases is still more convenient in phpMyAdmin.
php-eaccelerator - an accelerator for PHP. It noticeably increases script execution speed and reduces the load on the CPU, and on memory as well.
sysstat - in case we want to see how the system is doing.
crontabs - for scheduled tasks.
tmpwatch - a utility for removing outdated files.
In fact, a few more packages will be installed: everything the requested packages need in order to function will be pulled in as dependencies.
The result is:
Install 44 Package(s)
Upgrade 0 Package(s)
Total download size: 37 M
Installed size: 118 M
4. With the free command, check whether we have a swap. If not, create and enable it; if we do, we rejoice and skip this step.
An important point here: active swap usage is very bad. If the swap is being used actively, something needs to be optimized or trimmed. If you cannot optimize or trim anything, you will have to move to a more expensive plan. Also keep in mind that the hosting provider may take offense at excessive swap usage.
But having no swap at all is not great either - the oom killer is a terrible thing. It may casually kill mysqld, and instead of just slowing down, your sites will go down completely.
Note: there is no need to make the swap larger than the available RAM. It will bring no benefit and will only waste disk space.
We create the swap as follows, enable it, and, so that it is enabled automatically at boot, write the swapon command into /etc/rc.local:
[root@test /]# dd if=/dev/zero of=/swap bs=1M count=256
[root@test /]# mkswap /swap
[root@test /]# swapon /swap
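One way to append it to /etc/rc.local (assuming the swap file is /swap, as above):
[root@test /]# echo "swapon /swap" >> /etc/rc.local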
You can check whether the swap is present and how heavily it is used with the top or free commands.
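For example (swapon -s additionally lists the active swap areas):
[root@test /]# free -m
[root@test /]# swapon -s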
5. Enable and start the daemons.
[root@test /]# chkconfig httpd on
[root@test /]# chkconfig mysqld on
[root@test /]# chkconfig crond on
[root@test /]# service httpd restart
[root@test /]# service mysqld restart
[root@test /]# service crond restart
6. Create users for the sites. I prefer the username to match the site's domain.
[root@test /]# adduser testsite.ru
[root@test /]# adduser mysite.ru
[root@test /]# adduser cfg.testsite.ru
Next, create additional directories for each user: html (which will hold the site's content) and log (where the logs for that site will be written), and set the permissions: the user gets full access, the apache group gets read and directory listing, everyone else gets nothing. The permissions can be set by hand, or with a small script:
cd /home
for dir in `ls -1 `; do
mkdir /home/$dir/log
mkdir /home/$dir/html
chown -R $dir:apache $dir
chmod ug+rX $dir
done;
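For a single site added later, the same steps can be done by hand; a sketch with newsite.ru as a placeholder name:
[root@test /]# adduser newsite.ru
[root@test /]# mkdir /home/newsite.ru/html /home/newsite.ru/log
[root@test /]# chown -R newsite.ru:apache /home/newsite.ru
[root@test /]# chmod ug+rX /home/newsite.ru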
7. Set up the web server. Edit /etc/httpd/conf/httpd.conf
Of the changes that are really needed: we configure the prefork module so that it initially uses less memory and keeps its appetite within limits.
The thing is, Apache is configured out of the box to run up to 256 worker processes, and a single worker easily takes 20-40 MB (256 * 20 MB is about 5 GB); this can easily lead to problems, especially on a modest VPS with only 256 MB of RAM.
Therefore, we limit their number to something reasonable based on the available RAM. For example, 5 Apache processes with an average size of 30 MB will take about 150 MB, which is already bearable.
It was:
StartServers 8
MinSpareServers 5
MaxSpareServers 20
ServerLimit 256
MaxClients 256
MaxRequestsPerChild 4000
It has become:
StartServers 2
MinSpareServers 2
MaxSpareServers 3
ServerLimit 5
MaxClients 5
MaxRequestsPerChild 1000
This setting will not allow Apache to proliferate beyond measure and eat all the RAM. Depending on the actual load, the parameters may be worth revising.
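Before revising these numbers, it is worth checking how much memory your Apache workers actually use; a rough way to do it (ps reports RSS in kilobytes):
[root@test /]# ps -o rss= -C httpd | awk '{sum+=$1; n++} END {print sum/n/1024 " MB average across " n " processes"}'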
Also uncomment the line that lets us host many sites on the same IP address:
NameVirtualHost *:80
Next, go to the /etc/httpd/conf.d/ directory and configure our sites. There you can delete welcome.conf, which disables directory indexes and shows the "Apache 2 Test Page" instead. Note that the virtual host configs in this directory are applied in alphabetical order. So that a user who opens any of our sites by IP address does not land on a completely different one (whichever happens to be first in the list), put a file named, for example, 000-default.conf into conf.d with contents like this, and put an index.html file with whatever greeting you like into /var/www/html/:
<VirtualHost *:80>
ServerName localhost.local
DocumentRoot "/var/www/html"
</VirtualHost>
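The placeholder page itself can be a single line; the text is arbitrary:
[root@test /]# echo "Nothing interesting here" > /var/www/html/index.html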
Next, for each of our virtual hosts we create a config file following roughly this template:
<VirtualHost *:80>
ServerName testsite.ru
ServerAlias www.testsite.ru
ServerAdmin webmaster@testsite.ru
ErrorLog /home/testsite.ru/log/error.log
CustomLog /home/testsite.ru/log/access.log combined
DocumentRoot /home/testsite.ru/html/
<Directory /home/testsite.ru/html/>
Order allow,deny
Allow from all
</Directory>
</VirtualHost>
You can add individual settings for any modules to these files to your taste. Restart apache and check that everything works:
[root@test /]# service httpd restart
apache should start normally. Two log files should appear in each site's log directory. When accessing the server by IP address, the file you put in /var/www/html/ should be displayed; when accessing it by site name, you should see the contents of that site's html directory (most likely empty) and new entries in that site's access.log.
8. Configure mysql. First of all, delete the test database and set the root password for mysql:
[root@test /]# mysql
mysql> DROP DATABASE test;
mysql> USE mysql;
mysql> UPDATE user SET Password=PASSWORD('MyMysqlPassword') WHERE user='root';
mysql> FLUSH PRIVILEGES;
mysql> quit
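To make sure the new password took effect, log in again, this time explicitly with the password:
[root@test /]# mysql -u root -p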
With MySQL the problem is about the same as with Apache: default memory requirements that are quite expensive on a VPS.
To reduce the amount of memory used by the MySQL server, edit /etc/my.cnf as follows.
Add the following to the [mysqld] section:
key_buffer = 16M
max_allowed_packet = 10M
table_cache = 400
sort_buffer_size = 1M
read_buffer_size = 4M
read_rnd_buffer_size = 2M
net_buffer_length = 20K
thread_stack = 640K
tmp_table_size = 10M
query_cache_limit = 1M
query_cache_size = 32M
skip-locking
skip-innodb
skip-networking
And add these lines to the end of the file:
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
no-auto-rehash
[isamchk]
key_buffer = 8M
sort_buffer_size = 8M
[myisamchk]
key_buffer = 8M
sort_buffer_size = 8M
[mysqlhotcopy]
interactive-timeout
Restart mysqld and make sure everything is fine:
[root@test ]# service mysqld restart
Also note the skip-networking option: it makes the server accessible only from the local machine through its socket. If network access to MySQL is required, this option should not be enabled.
These settings minimize the memory used by the mysql process and work fine for a lightly loaded site. But you should, of course, watch the mysql statistics and raise these limits as the need arises.
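One simple way to glance at those statistics from time to time (the variables shown are just examples of what to watch):
[root@test /]# mysql -u root -p
mysql> SHOW STATUS LIKE 'Qcache%';
mysql> SHOW STATUS LIKE 'Max_used_connections';
mysql> SHOW STATUS LIKE 'Created_tmp_disk_tables';
mysql> quit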
Further mysql administration is more convenient to do through phpMyAdmin.
There is one caveat: by default phpMyAdmin is available at the /phpMyAdmin path on all of our sites.
To avoid this, we create a specialized site for management (for example, cfg.testsite.ru) and configure it similarly to the rest.
Then we transfer the entire contents of the /etc/httpd/conf.d/phpMyAdmin.conf file into the config of this site, and delete the phpMyAdmin.conf file itself or move it somewhere outside the conf.d directory.
After that, phpMyAdmin will be available at the /phpMyAdmin/ path only on the dedicated site.
And so that you can actually log in to it, in that site's configuration file change
Order Deny,Allow
Deny from All
Allow from 127.0.0.1
Allow from ::1
to
Order Deny,Allow
Deny from All
Allow from 127.0.0.1
Allow from your.ip.address
Allow from ::1
After that, phpMyAdmin will be accessible from your IP address.
Log in as the root user with the password you set earlier.
To create a user, go to "Privileges" - "Add a new user".
Username - arbitrary; I prefer to use the site name to reduce confusion.
Host - local (we are making it for a site that will run right here, after all).
Password - generate one (do not forget to copy it).
Tick the option "Create database with same name and grant all privileges".
Apply.
As a result we get a user with the chosen name and password, and a database with the same name to which that user has full privileges.
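For reference, the same thing can be done from the mysql console; a sketch with testsite as a placeholder name and password:
mysql> CREATE DATABASE `testsite`;
mysql> GRANT ALL PRIVILEGES ON `testsite`.* TO 'testsite'@'localhost' IDENTIFIED BY 'GeneratedPassword';
mysql> FLUSH PRIVILEGES;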
9. Uploading files to the hosting is often more convenient via ftp. For this we installed vsftpd.
Edit its config /etc/vsftpd/vsftpd.conf: turn off anonymous login by changing
anonymous_enable=YES
to
anonymous_enable=NO
and uncomment
chroot_local_user=YES
Now, to allow ftp access to a particular site, set a password for the corresponding user:
[root@test /]# passwd testsite.ru
And do not forget that by default this user, once a password is set, can also log in via SSH. The easiest way to disable that is to change the user's shell:
[root@test etc]# chsh -s /sbin/nologin testsite.ru
Enable and start vsftpd, then check that everything works:
[root@test /]# chkconfig vsftpd on
[root@test /]# service vsftpd start
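A quick check that the service is up and accepting connections (the ftp client lives in the separate ftp package, if it is not already installed):
[root@test /]# service vsftpd status
[root@test /]# ftp localhost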
And finally, a very simple "operational backup", following the principle that you can never have too many backups.
It would be better to use something more proper, but a bad backup is still better than no backup at all.
Such a backup can serve as a good addition to a full backup of the virtual machine at the hosting provider, but by no means as a replacement for it.
We back up the contents of the sites and the databases, as well as the settings in the /etc/ directory.
Create the /backup/ directory with permissions 700, then create the file backup.sh in /etc/cron.daily/, also with permissions 700:
[root@test /]# mkdir /backup/
[root@test /]# chmod 700 /backup/
[root@test /]# touch /etc/cron.daily/backup.sh
[root@test /]# chmod 700 /etc/cron.daily/backup.sh
The file has the following contents:
#!/bin/sh
# Back up the html directories of all our sites
tar -cf - /home/*/html/ | gzip > /backup/sites-`date +%Y-%m-%d`.tar.gz
# Back up all databases into a single file
mysqldump -u root --password=MyMysqlPassword --all-databases | gzip > /backup/mysql-`date +%Y-%m-%d`.dump.gz
# Back up the configuration files
tar -cf - /etc/ | gzip > /backup/etc-`date +%Y-%m-%d`.tar.gz
# Delete backup files older than 7 days
tmpwatch -m 7d /backup/
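And since a backup is only as good as your ability to restore it, a sketch of restoring one day's archives (the date in the file names is a placeholder; tar stores the paths relative to /, so extracting into / puts everything back in place):
[root@test /]# tar -xzf /backup/sites-2011-11-20.tar.gz -C /
[root@test /]# gunzip < /backup/mysql-2011-11-20.dump.gz | mysql -u root -p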
In principle, instead of backing everything up into one heap it may be better to back things up separately, but then it becomes possible to forget to configure a backup for something and to regret it exactly when you need it.
Or the "separate" backup variant, which requires that the site's username and database name match:
#!/bin/sh
for dir in `ls -1 /home/ `; do
tar -cf - /home/$dir/html/ | gzip > /backup/sites-$dir-`date +%Y-%m-%d`.tar.gz
mysqldump -u root --password=MyMysqlPassword $dir | gzip > /backup/mysql-$dir-`date +%Y-%m-%d`.dump.gz
done;
# Back up the configuration files
tar -cf - /etc/ | gzip > /backup/etc-`date +%Y-%m-%d`.tar.gz
# Delete backup files older than 7 days
tmpwatch -m 7d /backup/
10. Updates. Do not forget to update the system from time to time:
[root@test ~]# yum update
Thanks to the RHEL/Centos software policy, package versions stay the same after an update, so there is little chance of accidentally taking the server down because something changed slightly in a config. The downside of this approach is that in three years Centos-6 will still have the same software versions as now. But if our goal is stability, that suits us.
11. Testing.
I highly recommend testing the site after setting up.
The first testing step is to reboot the server and verify that all the necessary daemons start and everything works as expected. In general, I would recommend not chasing uptime numbers, but rebooting after installing, or changing the version of, any server software that starts automatically.
It is better to discover that Apache is not set to start automatically after a scheduled reboot of your own than to find out after the hoster has had problems, your virtual machine has been rebooted, and the sites on it have already been down for half a day.
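A quick way to review which services are set to start automatically (adjust the grep pattern to whatever you actually run):
[root@test ~]# chkconfig --list | grep -E 'httpd|mysqld|vsftpd|crond|nginx'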
Next is stress testing using the ab utility (Apache HTTP server benchmarking tool).
In this testing we are interested not so much in the raw benchmark numbers as in the server's behavior under load: there should be no dying processes and no active swap usage.
For testing we need a site hosted on this server in working condition, and a "typical" page from that site. Or you can take not a typical page but the heaviest one.
For example, I am testing on a freshly installed Drupal 7.9
Of ab's many command-line options we only need two: -n, the total number of http requests, and -c, the number of concurrent requests (threads).
During the test, in a second ssh session, we use top to watch how the server is doing.
100 requests in 2 threads.
[root@test ~]# ab -n 100 -c 2 http://testsite.ru/
Failed requests: 0
Requests per second: 6.20 [#/sec] (mean)
Time per request: 322.788 [ms] (mean)
Tasks: 62 total, 3 running, 59 sleeping, 0 stopped, 0 zombie
Cpu(s): 19.9%us, 5.3%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.4%si, 74.5%st
Mem: 244856k total, 151624k used, 93232k free, 3752k buffers
Swap: 262136k total, 0k used, 262136k free, 76604k cached
From the ab output I am particularly interested in "Requests per second", "Time per request" and "Failed requests", which give a general idea of server performance. We can see that the server handles a bit over 6 requests per second and spends about 322 milliseconds generating one page. In the top output, memory usage and CPU load are the interesting parts. Swap: 0k used - very good. 93232k free + 76604k cached is effectively about 170 megabytes of free memory.
100 requests in 5 threads.
[root@test ~]# ab -n 100 -c 5 http://testsite.ru/
Failed requests: 0
Requests per second: 6.21 [#/sec] (mean)
Time per request: 804.513 [ms] (mean)
Tasks: 63 total, 5 running, 58 sleeping, 0 stopped, 0 zombie
Cpu(s): 17.5%us, 6.2%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 76.3%st
Mem: 244856k total, 159756k used, 85100k free, 3812k buffers
Swap: 262136k total, 0k used, 262136k free, 76660k cached
The number of requests per second stayed the same, but the generation time more than doubled - we have run into the CPU. And finally, the habraeffect, or something close to it :-)
[root@test ~]# ab -n 500 -c 50 http://testsite.ru/
Failed requests: 0
Requests per second: 6.45 [#/sec] (mean)
Time per request: 7749.972 [ms] (mean)
Tasks: 63 total, 6 running, 57 sleeping, 0 stopped, 0 zombie
Cpu(s): 19.1%us, 5.3%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 75.6%st
Mem: 244856k total, 162740k used, 82116k free, 3884k buffers
Swap: 262136k total, 0k used, 262136k free, 76672k cached
Again, the number of requests per second is relatively stable, but the generation time has become quite sad. At the same time, Failed requests is zero, which means everything works, albeit slowly.
As for memory: at the moment Swap: 0k used, 82116k free, 76672k cached - consumption has not grown much, and in principle some limits could be raised, but given how little content the site has right now I do not think it is worth doing. Later, it is worth re-running the tests on the finished site and adjusting the settings based on the results.
12. Installing nginx as a frontend.
Why is this necessary?
The main problem is how apache handles incoming connections. For each incoming connection a new process is created (or one of the already running ones is taken) and the connection is handed to it for servicing. Until the connection is closed, this process deals only with it.
In principle this looks fine as long as we have plenty of RAM and/or very fast clients (ab run from localhost is one such case), but things get much sadder if a client sits on a slow channel or is simply in no hurry. In that case it effectively ties up one of the processes for the whole time it takes to receive the response, and that process is out of service for that time.
Thus, in theory, with a server on a 100 Mbit channel and one persistent dial-up client we can get something like a DoS: a client working in several threads will tie up almost all of our apache processes, of which we have only a few because of the small amount of RAM.
This problem is solved by putting some lightweight http server in front as a frontend. With a frontend in place, all incoming connections are accepted by it; each request is then passed to apache and the response is received back quickly, freeing the apache process for new requests. The frontend then, at its own pace and without wasting extra resources, delivers the received response to the client that asked for it.
As an additional bonus, the frontend can serve static content itself - pictures, css and so on - relieving the heavier Apache.
So that apache and our scripts see the real client IP address in requests rather than the frontend's address, we will also install mod_realip2. Connect the CentALT repository and install the packages:
[root@test ~]# rpm -ihv centos.alt.ru/pub/repository/centos/6/x86_64/centalt-release-6-1.noarch.rpm
[root@test ~]# yum install mod_realip2 nginx-stable
Edit /etc/httpd/conf.d/mod_realip2.conf and uncomment:
RealIP On
RealIPProxy 127.0.0.1
RealIPHeader X-Real-IP
Edit httpd.conf and the files in /etc/httpd/conf.d/, changing every mention of port 80 to port 8080. There are three directives to change:
Listen 127.0.0.1:8080
NameVirtualHost *:8080
<VirtualHost *:8080>
Next, edit /etc/nginx/nginx.conf. I run nginx under the apache user, since we set all the permissions up with apache in mind:
user apache;
worker_processes 2;
It is also useful to comment out the access_log directive in nginx.conf to avoid double logging. error_log is better left alone - Apache and nginx errors are different things anyway. In the server section, edit the listen directive and set:
listen 80 default;
Change:
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
to:
location / {
proxy_pass http://127.0.0.1:8080/;
}
Then, in the /etc/nginx/conf.d/ directory, create a proxy.conf file with the following contents:
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
Restart apache and nginx and check that everything works:
service httpd restart
service nginx restart
In general, that is it. Now nginx sits in front as a frontend, accepts all incoming connections and proxies them to Apache, which processes them and quickly hands the response back to nginx, freeing the process for new requests.
The next step in increasing performance and reducing resource consumption is serving static content directly through nginx. For this, in addition to the apache virtual hosts, you will have to create nginx virtual hosts and specify what they should serve. In the /etc/nginx/conf.d/ directory, create a file named after our site with the .conf extension and the following contents:
server {
listen 80;
server_name testsite.ru www.testsite.ru;
location / {
proxy_pass http://127.0.0.1:8080/;
}
location ~ /\.ht {
deny all;
}
location /sites/default/files {
root /home/testsite.ru/html;
access_log /home/testsite.ru/log/access_static.log combined;
}
}
In this example, for a site running the Drupal CMS, the static contents of the /sites/default/files directory are served by nginx, and everything else goes to apache.
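During all these edits, before each apache and nginx restart it does not hurt to check both configs for syntax errors; both checks are available out of the box:
[root@test ~]# httpd -t
[root@test ~]# nginx -t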
Another option is to replace that location directive with:
location ~ \.(jpg|gif|png|css|js|ico)$ {
root /home/testsite.ru/html;
access_log /home/testsite.ru/log/access_static.log combined;
}
In this case, all files with the listed extensions will be served by nginx. This variant has a small minus: nginx does not know how to work with .htaccess files, so if you have any content there that is protected from viewing by .htaccess, you should refrain from using it. It is also worth noting that in this setup we get two logs for one site: separately, the log of requests handled by Apache, and separately, the log of content served by nginx.
Alternatively, move the access_log directive from the location section to the server section and disable access_log in the Apache virtual host; in that case only nginx will keep the log.
But to see "how it works", a double log can be interesting: you can immediately see how much of the load falls on each of them.
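A quick way to see that split in action is to request a static file and a regular page and then look at the tail of each log (misc/drupal.js is just an example of a static path on a Drupal site; curl is assumed to be installed):
[root@test ~]# curl -s -o /dev/null http://testsite.ru/misc/drupal.js
[root@test ~]# curl -s -o /dev/null http://testsite.ru/
[root@test ~]# tail -n 1 /home/testsite.ru/log/access_static.log
[root@test ~]# tail -n 1 /home/testsite.ru/log/access.log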
For further optimization, it is worth reading the manuals on tuning the specific components and doing it with an eye to your actual situation.
UPD: Fixed several typos
UPD: Fixed swap connection, thanks AngryAnonymous
UPD: Added description of installing and configuring nginx, thanks masterbo for the kick in the right direction.
Another backup script option from odmin4eg: habrahabr.ru/blogs/s_admin/132302/#comment_4391784
Waiting for criticism.