Moving a XenForo forum to a modern platform

  • Tutorial


Why this was needed


Our product's community platform has long run on the XenForo forum engine. Until recently, the forum lived on a VPS with CentOS 6.8 and the vendor-supplied Apache 2.2.15, MySQL 5.1, and PHP 5.6.

With the upcoming release of XenForo 2.0, which raises the requirements for these components, and a general desire to speed the forum up on a modern stack, we decided to move to a VPS with nginx, the latest version of PHP, and a database running on Percona Server 5.7.

The instructions below do not claim to be a perfect solution with a perfect configuration; treat them as a general plan for running XenForo on nginx hosting. They are aimed primarily at XenForo administrators who are not deeply versed in the intricacies of Linux administration and would like a basic working recipe.

VPS Preparation


CentOS 7.3 was chosen as the operating system simply because this administrator is more at home with rpm-based distributions than deb-based ones :)

The VPS has 25 GB of disk space and 4 GB of RAM, and the command:

# cat /proc/cpuinfo | grep processor | wc -l

reports 8 processors.

First, remove packages you do not need, such as Samba and httpd. Then install all available updates from the official repository:

# yum update

Next, connect the required third-party repositories and install the components we need. Start with the Percona database server: add its repository and install the packages:

# yum install http://www.percona.com/downloads/percona-release/redhat/0.1-4/percona-release-0.1-4.noarch.rpm
# yum install Percona-Server-server-57

One subtlety here: during installation a temporary root password is generated, which you can find with:

# grep 'temporary password' /var/log/mysqld.log

You will need it to harden Percona Server with:

# /usr/bin/mysql_secure_installation

After that, you will have set a permanent password for your database server.
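With the server secured, you can pre-create the database and user that the forum dump will later be imported into. The names and password below are placeholders, not values from the original setup:

```sql
-- Run in the mysql client as root (MySQL/Percona 5.7 syntax).
-- Database, user, and password names are placeholders.
CREATE DATABASE xenforo CHARACTER SET utf8;
CREATE USER 'xf_user'@'localhost' IDENTIFIED BY 'change-this-password';
GRANT ALL PRIVILEGES ON xenforo.* TO 'xf_user'@'localhost';
FLUSH PRIVILEGES;
```

Whatever names you choose here must match the database settings in XenForo's library/config.php after the move.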

Next, add the EPEL repository and install the nginx package itself:

# yum install epel-release
# yum install nginx

After that, install the latest version of PHP with all the necessary components from the IUS repository:

# cd /tmp
# curl 'https://setup.ius.io/' -o setup-ius.sh
# bash setup-ius.sh
# yum install php71u-fpm-nginx php71u-cli php71u-mysqlnd php71u-pecl-memcached php71u-opcache php71u-gd memcached

Turn on all the necessary services with

# systemctl enable nginx
# systemctl enable memcached
# systemctl enable mysqld
# systemctl enable php-fpm

so that they all start automatically after a server reboot. That completes the preparation; now we proceed to the hardest part - tuning this whole stack for the optimal operation of our XenForo forum.

Service Setup


This section should not be taken as the ultimate truth. Experienced administrators can no doubt tune things much better; for the less experienced, these are general recommendations that can be used one-to-one, as a genuinely working configuration, or as a template for an individual setup.

So, for starters, set cgi.fix_pathinfo = 0 in the /etc/php.ini file. Then open /etc/php-fpm.d/www.conf, comment out the line listen = 127.0.0.1:9000 and uncomment listen = /run/php-fpm/www.sock. Additionally, enable listen.acl_users = nginx. The result should look like this:

;listen = 127.0.0.1:9000
listen = /run/php-fpm/www.sock
listen.acl_users = nginx

In the /etc/nginx/conf.d/php-fpm.conf file, switch the upstream to the socket as well:

#server 127.0.0.1:9000;
server unix:/run/php-fpm/www.sock;

Restart php-fpm:

# systemctl restart php-fpm

For security reasons, bind the memcached service to 127.0.0.1 only:

# cat /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="2048"
CACHESIZE="1024"
OPTIONS="-l 127.0.0.1"

Run it with:

# systemctl start memcached

then make sure port 11211 accepts connections and configure the cache backend in the XenForo config in accordance with the official XenForo documentation. One subtlety, though: instead of the line:

$config['cache']['backend'] = 'Memcached';

the line that actually worked for me was:

$config['cache']['backend']='Libmemcached';
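For completeness, the surrounding cache section of library/config.php might look roughly like the following. This is a sketch based on the XenForo 1.x caching documentation; the lifetime, prefix, and compression values are illustrative, not taken from the original setup:

```php
<?php
// library/config.php - cache section (sketch; verify against the
// official XenForo 1.x caching documentation)
$config['cache']['enabled'] = true;
$config['cache']['frontend'] = 'Core';
$config['cache']['frontendOptions'] = array(
    'caching'                 => true,
    'automatic_serialization' => true,
    'lifetime'                => 1800,   // illustrative value
    'cache_id_prefix'         => 'xf_'   // any short prefix
);
$config['cache']['backend'] = 'Libmemcached';
$config['cache']['backendOptions'] = array(
    'compression' => false,
    'servers'     => array(
        array('host' => '127.0.0.1', 'port' => 11211)
    )
);
```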

You can try to optimize Percona Server using their configuration wizard, or with the well-known mysqltuner.pl script. Everything here is at your discretion and depends on the resources of your hardware.

Keep in mind that the configuration file is located at /etc/percona-server.conf.d/mysqld.cnf.
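A minimal starting point for that file on a 4 GB VPS might look like the following. Every value here is an assumption to be validated against mysqltuner output and your own workload, not a recommendation from the original setup:

```ini
# /etc/percona-server.conf.d/mysqld.cnf - illustrative values only
[mysqld]
# Largest single memory consumer; leave room for PHP-FPM and the
# 1 GB memcached cache configured above.
innodb_buffer_pool_size = 1G
innodb_log_file_size    = 256M
innodb_flush_method     = O_DIRECT
max_connections         = 150
table_open_cache        = 2000
```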

The most difficult part of this story is the nginx configuration. In the main settings there is nothing special: just set worker_processes correctly (the number of processors, as reported by cat /proc/cpuinfo | grep processor | wc -l) and worker_connections (worker_processes * 1024):

user nginx;
worker_processes 8;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 8192;
    use epoll;
    multi_accept on;
}
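The arithmetic above can be scripted; a small sketch (the 1024-connections-per-worker multiplier is this article's rule of thumb, not an nginx requirement):

```shell
# Derive worker_processes and worker_connections from the CPU count,
# following the article's rule of thumb (1024 connections per worker).
cpus=$(grep -c ^processor /proc/cpuinfo)
conns=$((cpus * 1024))
echo "worker_processes ${cpus};"
echo "worker_connections ${conns};"
```

On the 8-core VPS above this prints the values used in the config (8 and 8192).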

Next is the http block. Here, too, there are no particular subtleties, with one very important exception: we use FastCGI caching, and that requires extra configuration in two different places in nginx.conf. To start, here is how it looks in the http block:

http {
    access_log  off;
    server_tokens off;
    charset utf-8;
    reset_timedout_connection on;
    send_timeout 15;
    client_max_body_size 5m;
    client_header_buffer_size    1k;
    client_header_timeout 15;
    client_body_timeout 30;
    large_client_header_buffers  2 1k;
    open_file_cache max=2000 inactive=20s;
    open_file_cache_min_uses 5;
    open_file_cache_valid 30s;
    open_file_cache_errors off;
    output_buffers      1 32k;
    postpone_output     1460;
    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    keepalive_requests  100000;
    types_hash_max_size 2048;
    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;
    ### FastCGI Cache ################
    map $http_cookie $nocachecookie {
        default         0;
        ~xf_fbUid       1;
        ~xf_user        1;
        ~xf_logged_in   1;
    }
    map $request_uri $nocacheuri {
        default                 0;
        ~^/register             1;
        ~^/login                1;
        ~^/validate-field       1;
        ~^/captcha              1;
        ~^/lost-password        1;
        ~^/two-step             1;
    }
    fastcgi_cache_path      /tmp/nginx_fastcgi_cache levels=1:2 keys_zone=fastcgicache:200m inactive=30m;
    fastcgi_cache_key       $scheme$request_method$host$request_uri;
    fastcgi_cache_lock      on;
    fastcgi_cache_use_stale error timeout invalid_header updating http_500;
    fastcgi_ignore_headers  Cache-Control Expires Set-Cookie;
    ### FastCGI Cache ################

We will return to the second part of the FastCGI caching setup in another block; for now, let's look at the server blocks:

server {
    listen  80 reuseport;
    server_name  domain.com;
    return 301 https://domain.com$request_uri;
}
server {
    listen 443 ssl reuseport http2;
    server_name  domain.com;
    root  /var/www/html;
    ssl_certificate "/etc/nginx/ssls/ssl-bundle.crt";
    ssl_certificate_key "/etc/nginx/ssls/domain_com.key";
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    ssl_ciphers "EECDH:+AES256:-3DES:RSA+AES:RSA+3DES:!NULL:!RC4";
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 8.8.4.4 77.88.8.8 valid=300s;
    resolver_timeout 5s;
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    add_header Strict-Transport-Security max-age=31536000;

Configuring the SSL certificate is important here. In our case a certificate from Comodo is used; instructions for installing it can be found on their website. To generate /etc/ssl/certs/dhparam.pem, we use the command:

# openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048
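If you want a quick local sanity check that a generated dhparam file parses, openssl can verify it. A 512-bit file is generated below only so the demo finishes quickly; keep 2048 bits, as in the command above, for real deployments:

```shell
# Generate a small demo dhparam file and verify it parses.
# 512 bits is for demo speed only - use 2048 bits in production.
openssl dhparam -out /tmp/dhparam-demo.pem 512 2>/dev/null
# -check validates the parameters; -noout suppresses re-printing the PEM.
openssl dhparam -in /tmp/dhparam-demo.pem -check -noout 2>/dev/null
```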

Afterwards, you can verify the correctness of the SSL certificate settings with an online SSL test service.

And finally, the last important nginx config blocks:

location / {
    index  index.php index.html;
    try_files $uri /index.php?$uri&$args;
}
location ~ /(internal_data|library) {
    internal;
}
location ~ /wp-content/ { return 444; }
location ~ /wp-includes/ { return 444; }
# define error page
error_page 404 = @notfound;
# error page location redirect 301
location @notfound {
    return 301 /;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
location ~ \.php$ {
    fastcgi_max_temp_file_size 1M;
    fastcgi_cache_use_stale updating;
    fastcgi_pass_header Set-Cookie;
    fastcgi_pass_header Cookie;
    fastcgi_pass  unix:/run/php-fpm/www.sock;
    fastcgi_index index.php;
    fastcgi_intercept_errors on;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_buffer_size 128k;
    fastcgi_buffers 256 16k;
    fastcgi_busy_buffers_size 256k;
    fastcgi_temp_file_write_size 256k;
    fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
    proxy_buffer_size 8k;
    include fastcgi_params;
    ### fastcgi_cache ###
    fastcgi_cache           fastcgicache;
    fastcgi_cache_bypass    $nocachecookie $nocacheuri;
    fastcgi_no_cache        $nocachecookie $nocacheuri;
    fastcgi_cache_valid     200 202 302 404 403 5m;
    fastcgi_cache_valid     301 1h;
    fastcgi_cache_valid     any 1m;
    add_header X-Cache      $upstream_cache_status;
    ### fastcgi_cache end ###
        }
    gzip                   on;
    gzip_http_version      1.1;
    gzip_vary              on;
    gzip_min_length        1100;
    gzip_buffers           64 8k;
    gzip_comp_level        6;
    gzip_proxied           any;
    gzip_types             image/png image/gif image/svg+xml image/jpeg image/jpg text/xml text/javascript text/plain text/css application/json application/javascript application/x-javascript application/vnd.ms-fontobject;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";
location ~* \.(ico|css|js|gif|jpeg|jpg|png|woff|ttf|svg)$ {
    add_header "Access-Control-Allow-Origin" "*";
    root /var/www/html;
    expires 30d;
    add_header Pragma public;
    add_header Cache-Control "public";
}
}
}

The location parameters here are very important for the correct operation of friendly (SEO) URLs and PHP scripts, and for blocking access to the important internal directories internal_data and library. In addition, gzip compression and caching of static media files are enabled here, along with the second part of the FastCGI caching setup.

Moving the forum content itself consisted of transferring a database dump and a tar.gz archive of the forum's root directory, then deploying them on the new server.
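Condensed into commands, the move looks roughly like this. Database names and paths are placeholders; the mysqldump/mysql half needs live servers, so it is sketched as comments, while the tar round-trip below actually runs on stand-in temporary directories:

```shell
# Database half (placeholders, needs live servers):
#   mysqldump -u root -p xenforo > xenforo.sql    # on the old server
#   mysql     -u root -p xenforo < xenforo.sql    # on the new server
set -e
src=$(mktemp -d)    # stands in for the old forum root
dst=$(mktemp -d)    # stands in for /var/www/html on the new VPS
echo '<?php // XenForo front controller' > "$src/index.php"
tar -czf /tmp/forum_root.tar.gz -C "$src" .   # archive on the old server
tar -xzf /tmp/forum_root.tar.gz -C "$dst"     # unpack on the new one
test -f "$dst/index.php" && echo "round-trip ok"
```

After unpacking, remember to fix file ownership so that php-fpm (running as nginx here) can write to internal_data.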

Additional caching information in nginx


At first, I tried nginx microcaching. To start, I created a directory for storing the cache:

# mkdir /var/cache/nginx2

Created the file /etc/nginx/conf.d/microcache.conf with the contents:

fastcgi_cache_path /var/cache/nginx2 levels=1:2 keys_zone=microcache:5m max_size=1000m;
map $http_cookie $cache_uid {
  default nil; # homage to Lisp
  ~SESS[[:alnum:]]+=(?<session_id>[[:alnum:]]+) $session_id;
}
map $request_method $no_cache {
  default 1;
  HEAD 0;
  GET 0;
}

and in the nginx config, in the PHP location, did this:

location ~ \.php$ {
  fastcgi_cache microcache;
  fastcgi_cache_key $server_name|$request_uri;
  fastcgi_cache_valid 404 30m;
  fastcgi_cache_valid 200 10s;

In principle, everything worked perfectly and the forum became very fast, with the exception of one problem: sessions of registered, logged-in users began behaving strangely. You would suddenly find you were no longer logged in and had to log in again.

It turned out that the problem lies deep in the XenForo engine and is solved by installing the Logged In Cookie add-on and making a one-line replacement in each of the XenForo templates helper_login_form and login_bar_form.

I only learned all this later, after I had configured the FastCGI caching described above, with which everything now works fine. So I believe the session problem under nginx microcaching would also be solved this way, but I have not checked. You can still try this caching option.

Conclusion


After testing the forum with Google PageSpeed and doing the corresponding additional optimization, the speedup was impossible to miss: the forum now scores 86 points out of 100, versus 78 previously on Apache. There is still work to do on code optimization, especially for the mobile version.

In addition, I compared the old Apache forum and the new nginx one by load-testing a PHP script with ab: 1000 requests in total, 300 concurrent connections. The results speak for themselves:

Apache:


# ab -n 1000 -c 300 talk6.plesk.com/admin.php
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, www.zeustech.net
Licensed to The Apache Software Foundation, www.apache.org

Benchmarking talk6.plesk.com (be patient)
Completed 100 requests
SSL handshake failed (5).
SSL handshake failed (5).
Completed 200 requests
SSL handshake failed (5).
SSL handshake failed (5).
SSL handshake failed (5).
SSL handshake failed (5).
SSL handshake failed (5).
SSL handshake failed (5).
SSL handshake failed (5).
SSL handshake failed (5).
SSL handshake failed (5).
SSL handshake failed (5).
SSL handshake failed (5).
SSL handshake failed (5).
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software:        Apache/2.2.15
Server Hostname:        talk6.plesk.com
Server Port:            443
SSL/TLS Protocol:       TLSv1/SSLv3,ECDHE-RSA-AES256-GCM-SHA384,2048,256

Document Path: /admin.php
Document Length: 3438 bytes

Concurrency Level: 300
Time taken for tests: 9.056 seconds
Complete requests: 1000
Failed requests: 44
(Connect: 0, Receive: 0, Length: 44, Exceptions: 0)

Write errors: 0
Total transferred: 3734136 bytes
HTML transferred: 3286728 bytes
Requests per second:    110.43 [#/sec] (mean)
Time per request:       2716.714 [ms] (mean)
Time per request:       9.056 [ms] (mean, across all concurrent requests)
Transfer rate:          402.69 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0 1987 1940.1   1223    8748
Processing:    59  257  800.3     76    4254
Waiting:        0   79   31.4     72     211
Total:        234 2244 1926.3   1472    8811

Percentage of the requests served within a certain time (ms)
50% 1472
66% 2019
75% 2683
80% 3068
90% 4278
95% 8313
98% 8625
99% 8787
100% 8811 (longest request)

nginx:


# ab -n 1000 -c 300 talk.plesk.com/admin.php
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, www.zeustech.net
Licensed to The Apache Software Foundation, www.apache.org

Benchmarking talk.plesk.com (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software:        nginx
Server Hostname:        talk.plesk.com
Server Port:            443
SSL/TLS Protocol:       TLSv1/SSLv3,ECDHE-RSA-AES128-GCM-SHA256,2048,128

Document Path: /admin.php
Document Length: 3437 bytes

Concurrency Level: 300
Time taken for tests: 5.585 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 3932790 bytes
HTML transferred: 3474807 bytes
Requests per second:    179.05 [#/sec] (mean)
Time per request:       1675.541 [ms] (mean)
Time per request:       5.585 [ms] (mean, across all concurrent requests)
Transfer rate:          687.65 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:      182 1089  298.9   1185    1450
Processing:    55  261  279.5    159    1092
Waiting:       55  243  267.6    139     943
Total:        253 1350   81.5   1323    1510

Percentage of the requests served within a certain time (ms)
50% 1323
66% 1347
75% 1422
80% 1451
90% 1467
95% 1477
98% 1486
99% 1498
100% 1510 (longest request)

Monitoring of VPS resource consumption under peak forum load also shows it to be very modest. The forum interface was recently completely redesigned to match the new corporate standard, and combined with its snappy responsiveness it has become an additional draw for new members of our community.

P.S. I would be very grateful to nginx connoisseurs and experts for pointing out mistakes and advising on further configuration optimization.
