Speed up Nginx in 5 minutes


    Typically, a properly configured Nginx server on Linux can handle 500,000-600,000 requests per second, and this figure can be pushed significantly higher. Keep in mind that the settings described below were applied in a test environment, and they may well not suit your production servers.

    First, a moment of banality: install Nginx.

    yum -y install nginx
    

    Just in case, make a backup of the original config.

    cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.orig
    vim /etc/nginx/nginx.conf
    

    And now the tweaking can begin!

    Let's start with the worker_processes directive. If Nginx is doing CPU-bound work (for example, SSL or gzipping), it is optimal to set this directive to the number of CPU cores. A higher value pays off only when a very large amount of static content is being served.

    # This number should be, at maximum, the number of CPU cores on your system. 
    worker_processes 24;
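
    Note that newer nginx versions can also pick this value themselves (older packages may not support it):

    # Let nginx detect the number of CPU cores itself (newer nginx versions).
    worker_processes auto;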
    

    Also, the worker_processes directive multiplied by worker_connections from the events section gives the maximum possible number of clients: with the values used here, 24 × 4000 = 96,000 simultaneous connections.

    # Determines how many clients will be served by each worker process.
    worker_connections 4000;
    

    The last worker-related directive I want to touch on is worker_rlimit_nofile. It specifies how many file descriptors Nginx may use. Two descriptors are needed for every connection, even when serving static files (images/JS/CSS): one for the connection to the client and one for the static file being opened. So worker_rlimit_nofile should be about twice the maximum number of clients: with the 96,000 clients from the calculation above, that is roughly 192,000 descriptors, hence the round 200,000 below. The same limit also has to be raised at the OS level, either with ulimit -n 200000 or via /etc/security/limits.conf.

    # Number of file descriptors used for Nginx.
    worker_rlimit_nofile 200000;
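
    On the OS side, the limit can be raised like this (a quick sketch; the "nginx" user name here is an assumption, use whatever account your worker processes actually run under):

    # Temporarily, for the current shell:
    ulimit -n 200000
    # Persistently, in /etc/security/limits.conf:
    nginx    soft    nofile    200000
    nginx    hard    nofile    200000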
    

    Now let's deal with logging. First, leave only critical errors in the error log.

    # Only log critical errors.
    error_log /var/log/nginx/error.log crit;
    

    If you are completely fearless and want to turn off error logging entirely, remember that error_log off will not help you: you will simply get the whole log written to a file named off. To really disable error logging, do this:

    # Fully disable log errors.
    error_log /dev/null crit;
    

    Access logs, on the other hand, can be disabled completely with less fear.

    # Disable access log altogether.
    access_log off;
    

    Or at least enable write buffering for them.

    # Buffer log writes to speed up IO.
    access_log /var/log/nginx/access.log main buffer=16k;
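
    The main parameter here refers to a named log_format, which the stock nginx.conf usually defines already; if yours does not, a minimal definition along these lines will do (the exact fields are just an illustration):

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" "$http_user_agent"';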
    

    Nginx supports a number of connection processing methods. The most efficient one on Linux is epoll.

    # The effective method, used on Linux 2.6+, optimized to serve many clients with each thread.
    use epoll;
    

    To make Nginx accept as many connections as possible, enable the multi_accept directive. Keep in mind, though, that if worker_connections is set too low, the connection limit can be exhausted very quickly.

    # Accept as many connections as possible, after nginx gets notification about a new connection.
    multi_accept on;
    

    Of course, we cannot do without caching information about:
    • descriptors of recently opened files: their size and modification date;
    • the existence of directories;
    • errors when searching for files: lack of the file itself, lack of read permissions, etc.

    I advise you not to copy these cache values blindly, but to play with them, picking the ones that are optimal for your particular environment.

    # Caches information about open FDs, frequently accessed files.
    # Keep up to 200,000 cache entries; drop entries not accessed for 20 seconds.
    open_file_cache max=200000 inactive=20s;
    # Revalidate cached entries every 30 seconds.
    open_file_cache_valid 30s;
    # Keep an entry only if it was accessed at least twice during the inactive period.
    open_file_cache_min_uses 2;
    # Cache file lookup errors (missing file, no read permission) as well.
    open_file_cache_errors on;
    

    The sendfile directive enables copying data between file descriptors inside the kernel, which is much more efficient than the read() + write() combination, which has to shuffle data through user space.

    # Sendfile copies data between one FD and another from within the kernel.
    sendfile on; 
    

    After enabling sendfile, you can tell Nginx to send the HTTP response headers in a single packet rather than in separate parts.

    # Causes nginx to attempt to send its HTTP response headers in one packet, instead of using partial frames.
    tcp_nopush on;
    

    For keep-alive connections, you can disable buffering (the Nagle algorithm). This is useful when small amounts of data are sent frequently in real time and timely delivery matters more than saving packets. A classic example is mouseover events.

    # Don't buffer data-sends (disable Nagle algorithm).
    tcp_nodelay on; 
    

    It is worth paying attention to two more directives for keep-alive connections. Their purpose seems obvious.

    # Timeout for keep-alive connections. Server will close connections after this time.
    keepalive_timeout 30;
    # Number of requests a client can make over the keep-alive connection.
    keepalive_requests 1000;
    

    To free up the additional memory allocated for sockets, enable the reset_timedout_connection directive. It lets the server close connections from clients that have stopped responding.

    # Allow the server to close the connection after a client stops responding. 
    reset_timedout_connection on;
    

    You can also significantly reduce the timeouts in the client_body_timeout and send_timeout directives (both default to 60 seconds). The first limits the time for reading the request body from the client. The second limits the time for sending the response to the client: if the client does not read any data within this period, Nginx closes the connection.

    # Send the client a "request timed out" if the body is not loaded by this time.
    client_body_timeout 10;
    # If the client stops reading data, free up the stale client connection after this much time.
    send_timeout 2;
    

    And, of course, data compression. The one obvious plus: the amount of transferred traffic shrinks. The one obvious minus: it does not work in MSIE 6 and below. You can disable compression for those browsers with the gzip_disable directive by specifying the special mask "msie6" as the value; it matches the same browsers as the regular expression "MSIE [4-6]\." but works faster (thanks to hell0w0rd for pointing this out in the comments).

    # Compression.
    gzip on;
    gzip_min_length 10240;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
    gzip_disable "msie6";
    

    That is probably everything I wanted to cover. Let me repeat: do not copy the settings above one-to-one. Apply them one at a time, each time running a load-testing tool (for example, Tsung), because it is very important to understand which settings really speed up your web server. Being methodical in testing will save you a lot of time.
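
    A full Tsung scenario takes a bit of setup, so for a quick first pass even a plain ApacheBench run (just a simpler stand-in, not the tool from the article) gives a rough idea; the URL and numbers below are placeholders:

    # 100,000 requests, 500 concurrent, with keep-alive.
    ab -k -c 500 -n 100000 http://your-server/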

    P.S. All the settings in one piece, for the fearless and lazy:
    # This number should be, at maximum, the number of CPU cores on your system. 
    worker_processes 24;
    # Number of file descriptors used for Nginx.
    worker_rlimit_nofile 200000;
    # Only log critical errors.
    error_log /var/log/nginx/error.log crit;
    events {
        # Determines how many clients will be served by each worker process.
        worker_connections 4000;
        # The effective method, used on Linux 2.6+, optimized to serve many clients with each thread.
        use epoll;
        # Accept as many connections as possible, after nginx gets notification about a new connection.
        multi_accept on;
    }
    http {
        # Caches information about open FDs, frequently accessed files.
        open_file_cache max=200000 inactive=20s; 
        open_file_cache_valid 30s; 
        open_file_cache_min_uses 2;
        open_file_cache_errors on;
        # Disable access log altogether.
        access_log off;
        # Sendfile copies data between one FD and another from within the kernel.
        sendfile on; 
        # Causes nginx to attempt to send its HTTP response headers in one packet, instead of using partial frames.
        tcp_nopush on;
        # Don't buffer data-sends (disable Nagle algorithm).
        tcp_nodelay on; 
        # Timeout for keep-alive connections. Server will close connections after this time.
        keepalive_timeout 30;
        # Number of requests a client can make over the keep-alive connection.
        keepalive_requests 1000;
        # Allow the server to close the connection after a client stops responding. 
        reset_timedout_connection on;
        # Send the client a "request timed out" if the body is not loaded by this time.
        client_body_timeout 10;
        # If the client stops reading data, free up the stale client connection after this much time.
        send_timeout 2;
        # Compression.
        gzip on;
        gzip_min_length 10240;
        gzip_proxied expired no-cache no-store private auth;
        gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
        gzip_disable "msie6";
    }
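
    After pasting the config, check the syntax and reload Nginx:

    nginx -t
    nginx -s reload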
    

