Battle of the Balancers
Battle of the Balancers is a load test of load balancers and proxies that support WebSockets. These technologies are indispensable for scaling infrastructure.
The following technologies were tested:
- http-proxy , version: 0.10.0
- HAProxy , version: 1.5-dev18 (development release)
- a bare-bones echo server, used as the control.
There were doubts about including hipache. The reason it was excluded is simple: it is built on top of http-proxy. It currently uses a fork of that project which simply lacks the performance-related patches.
For testing, 3 separate, independent servers were used, all hosted on Joyent.
1. Proxy, 512 MB, Ubuntu server. All proxies were installed on this server. Image: sdc:jpc:ubuntu-12.04:2.4.0
2. WebSocket server, a 512 MB Node.js "smart machine" running our WebSocket echo server. The server is written in Node.js and runs on several cores using the cluster module. Image: sdc:sdc:nodejs:1.4.0
3. Thor, 512 MB, another Node.js "smart machine" with the same specifications as the previous one. From this server we generated the load. Thor is the WebSocket load-generation tool we developed; it is open source and available at http://github.com/observing/thor .
Proxy settings
Our proxy server was a clean Ubuntu 12.04 machine. The following steps were taken to configure it and install all the dependencies. To make sure we are working with the latest packages, run:
apt-get update
apt-get upgrade
The following dependencies were installed on the system:
- git, to clone the GitHub repositories
- build-essential, to compile the proxies from source, since most of them gained WebSocket or HTTPS support only recently
- libssl-dev, needed for HTTPS support
- libev-dev, required to build stud (which is simply awesome)
apt-get install git build-essential libssl-dev libev-dev
Node.js
Node.js is needed for http-proxy. Although http-proxy works with the latest Node.js, these tests were run on version 0.8.19 to ensure compatibility of all dependencies. Node.js was cloned from GitHub:
git clone git://github.com/joyent/node.git
cd node
git checkout v0.8.19
./configure
make
make install
This also installs the npm binary, so we can install this project's dependencies. Run npm install in the root of this repository, and http-proxy and all its dependencies will be installed automatically.
Nginx
Nginx is already a widely deployed server. It supports proxying to various backends, but until recently did not support WebSockets; support was added in the development branch. So we installed the latest development version and compiled it from source.
Note: since this testing and writing, nginx 1.4.0 has been released, which includes WebSocket support. So if you are reading this article and plan to deploy to production, my advice is to use version 1.4.0 instead of the development versions.
wget http://nginx.org/download/nginx-1.3.15.tar.gz
tar xzvf nginx-1.3.15.tar.gz
cd nginx-1.3.15
./configure --with-http_spdy_module --with-http_ssl_module \
--pid-path=/var/run/nginx.pid --conf-path=/etc/nginx/nginx.conf \
--sbin-path=/usr/local/sbin --http-log-path=/var/log/nginx/access.log \
--error-log-path=/var/log/nginx/error.log --without-http_rewrite_module
As you can see from these options, we enabled SSL and SPDY and set a few other options. The resulting configuration summary:
Configuration summary
+ PCRE library is not used
+ using system OpenSSL library
+ md5: using OpenSSL library
+ sha1: using OpenSSL library
+ using system zlib library
nginx path prefix: "/usr/local/nginx"
nginx binary file: "/usr/local/sbin"
nginx configuration prefix: "/etc/nginx"
nginx configuration file: "/etc/nginx/nginx.conf"
nginx pid file: "/var/run/nginx.pid"
nginx error log file: "/var/log/nginx/error.log"
nginx http access log file: "/var/log/nginx/access.log"
nginx http client request body temporary files: "client_body_temp"
nginx http proxy temporary files: "proxy_temp"
nginx http fastcgi temporary files: "fastcgi_temp"
nginx http uwsgi temporary files: "uwsgi_temp"
nginx http scgi temporary files: "scgi_temp"
After that:
make
make install
HAProxy
HAProxy could previously proxy WebSockets in tcp mode, and now also in http mode. HAProxy has also gained support for HTTPS termination. So once again we need to install the development branch:
wget http://haproxy.1wt.eu/download/1.5/src/devel/haproxy-1.5-dev18.tar.gz
tar xzvf haproxy-1.5-dev18.tar.gz
cd haproxy-1.5-dev18
make TARGET=linux26 USE_OPENSSL=1
make install
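For reference, a WebSocket-capable HAProxy setup in http mode can be sketched roughly as follows (a hypothetical minimal config; all addresses, ports, limits and names here are illustrative, and the repository's actual haproxy.cfg may differ):

```
# Hypothetical minimal haproxy.cfg for WebSocket proxying in http mode
global
    maxconn 16384

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s
    timeout tunnel  1h    # applies once the connection is upgraded to a WebSocket tunnel

frontend ws_in
    bind *:8080
    default_backend ws_servers

backend ws_servers
    server ws1 127.0.0.1:8081 maxconn 10000
```

The long tunnel timeout matters because, after the HTTP Upgrade handshake, WebSocket connections are long-lived and must not be cut by the ordinary client/server timeouts.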
Stud
Although HAProxy can terminate SSL itself, stud is commonly placed in front of HAProxy for SSL termination, and we wanted to test that setup as well:
git clone git://github.com/bumptech/stud.git
cd stud
make
make install
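A stud configuration for this setup can be sketched like so (a hypothetical stud.conf; paths, addresses and worker count are illustrative):

```
# Hypothetical stud.conf sketch
frontend = "[*]:8443"             # accept TLS connections here
backend  = "[127.0.0.1]:8080"     # forward decrypted traffic to HAProxy
pem-file = "/etc/stud/server.pem" # private key + certificate chain in one file
workers  = 2                      # one process per core is a common choice
```

stud only terminates TLS; everything after the handshake is passed through as plain TCP, which is why it composes naturally with HAProxy behind it.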
Now that everything is installed, the configuration files need to be set up. For Nginx, copy nginx.conf from the root of this repository to /etc/nginx/nginx.conf. The other proxies can be configured on the fly.
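For reference, the WebSocket-relevant part of such an nginx.conf typically looks like this (a minimal sketch for nginx >= 1.3.13; the upstream address is illustrative and the repository's actual config may differ):

```
worker_processes 1;
events { worker_connections 16384; }

http {
    # Pass the Upgrade header through only when the client actually asks for it
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    upstream websockets {
        server 127.0.0.1:8080;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://websockets;
            proxy_http_version 1.1;                         # Upgrade requires HTTP/1.1
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_read_timeout 3600s;                       # keep idle WebSocket connections open
        }
    }
}
```

The map/Upgrade/Connection trio is what actually makes nginx forward the WebSocket handshake instead of treating it as a plain HTTP request.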
Kernel tuning
After installing all the proxies, some kernel socket tuning is required. I gathered these settings from various sources online:
vim /etc/sysctl.conf
And the following values are set:
# General gigabit tuning:
net.core.somaxconn = 16384
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_syncookies = 1
# this gives the kernel more memory for tcp
# which you need with many (100k+) open socket connections
net.ipv4.tcp_mem = 50576 64768 98152
net.core.netdev_max_backlog = 2500
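Once saved, the new values can be applied without a reboot (requires root):

```
sysctl -p
```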
Benchmarking
Two different tests were run:
1. A load test of the proxy servers without SSL. Here we test pure WebSocket proxying performance.
2. A load test of the proxy servers with SSL. You should not use unencrypted WebSockets in production, since they have very poor connection success rates in browsers; but SSL termination adds extra load on the proxy server.
In addition to our two tests, we tried different numbers of connections:
- 2k
- 5k
- 10k
And, for selected configurations, also:
- 20k
- 30k
Before each test, all WebSocket servers are restarted and the proxies are reinitialized. Thor hits each proxy with the target number of connections, 100 of them running concurrently. Over each established connection one UTF-8 message is sent and echoed back; once the message is received, the connection is closed.
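A run against a single proxy can be sketched like this (a hypothetical invocation; the flag names are taken from Thor's README at the time and the endpoint is illustrative, so verify with thor --help):

```
# 10k total connections, 100 at a time, against the proxy under test
thor --amount 10000 --concurrent 100 ws://localhost:8080
```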
Launch
Stud
stud --config stud.conf
HAProxy
haproxy -f ./haproxy.cfg
Nginx
nginx
http-proxy
FLAVOR=http node http-proxy.js
WebSocket server
FLAVOR=http node index.js
Results
http-proxy lives up to its name: it proxies requests, and does so fast enough. But since it is built on Node.js, it eats a lot of memory. Even the simplest Node process needs 12+ MB; for 10k connections it took about 70 MB. Compared to the control test, the HTTP proxying took 5 seconds longer. HTTPS, as expected, was the slowest, since Node.js performs poorly at SSL. And that is not to mention that under heavy SSL load it can completely stall your main event loop.
There is a pull request for http-proxy that significantly reduces memory usage. I applied the patch manually, and memory consumption was halved as a result. Still, even after the patch it uses more memory than Nginx, which is easily explained by the latter being written in pure C.
I had high hopes for Nginx, and it did not let me down. It used no more than 10 MB of memory and really was very fast. The first time I tested Nginx, however, it showed terrible performance: Node was faster even over SSL, and I felt something had to be wrong, that I must have misconfigured Nginx. After a couple of tips from friends I indeed changed one line in the config: the cipher settings were wrong. A little tuning, verified with openssl s_client -connect server:port, fixed everything (the much faster RC4 cipher is now used by default).
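The cipher a server actually negotiates can be checked from any client machine, for example (host and port are illustrative):

```
# Inspect the negotiated TLS cipher; a slow one here explains poor HTTPS numbers
openssl s_client -connect example.com:443 < /dev/null 2>/dev/null | grep -i cipher
```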
Next was HAProxy, which showed the same performance as Nginx while needing even less memory (7 MB). The biggest difference showed up in the HTTPS test: it was very slow, not even close to Nginx. We hope this will be fixed, since so far we have only tested a development branch. I had also made the same mistake as with Nginx, misconfiguring the ciphers, as was rightly pointed out on Hacker News. In addition to testing HAProxy's own HTTPS termination, we put stud in front of it to compare the resulting performance.
Conclusions
http-proxy is a great, flexible proxy that is easy to extend and build upon. For production use, I would advise running stud in front of it for SSL termination.
nginx and haproxy showed very close results; it is hard to say that either is faster or better. From an administration standpoint, though, it is easier to deploy and operate a single nginx than stud plus haproxy.
Winner: Nginx and HAProxy are both really fast, and their results are close.
HTTP
Proxy | Connections | Handshake (mean) | Latency (mean) | Total
---|---|---|---|---
http-proxy | 10k | 293 ms | 44 ms | 30168 ms
nginx | 10k | 252 ms | 16 ms | 28433 ms
haproxy | 10k | 209 ms | 18 ms | 26974 ms
control | 10k | 189 ms | 16 ms | 25310 ms
HTTPS
Proxy | Connections | Handshake (mean) | Latency (mean) | Total
---|---|---|---|---
http-proxy | 10k | 679 ms | 62 ms | 68670 ms
nginx | 10k | 470 ms | 30 ms | 50180 ms
haproxy | 10k | 464 ms | 25 ms | 50058 ms
haproxy + stud | 10k | 492 ms | 42 ms | 52403 ms
control | 10k | 703 ms | 65 ms | 71500 ms
All test results are available at: https://github.com/observing/balancerbattle/tree/master/results
Contributions
All configurations are in the repository. I would be very glad if someone could check whether we can squeeze even better performance out of these servers.