NginX load balancing for Apache servers

Hello, dear Habr community.

I want to warn you right away that my grammar may be shaky: Russian is not my native language and I have been speaking and writing it for a relatively short time, so I apologize in advance.

Two years ago at work we started a project based on HTML + PHP + MSSQL, and of course the web part of it all runs on the famous Apache web server. Over time the load began to grow, and it was time to distribute it across several web servers. After much discussion and debate we came to the following decision; please turn your attention to the scheme below:


What we need to turn this scheme into reality:

1) NginX as a proxy server for traffic balancing
2) Two Apache servers

In this article we will look at installing the traffic-balancing server based on NginX.
As the operating system we will use CentOS 6.4.

Our server has two network cards, one of which has Internet access. Let's configure that interface first.

cd /etc/sysconfig/network-scripts

vi ifcfg-eth0

DEVICE=eth0
HWADDR=00:0C:33:6a:90:F8
TYPE=Ethernet
UUID=de83281a-sa20-4791-b588-5621718adf4d
ONBOOT=yes
BOOTPROTO=static
DNS1=

Now we have Internet access; next, configure the local network interface:

cd /etc/sysconfig/network-scripts

vi ifcfg-eth1

DEVICE=eth1
HWADDR=00:0B:35:6a:90:F3
TYPE=Ethernet
UUID=de83281a-sa10-4791-b577-5621718adf4d
ONBOOT=yes
BOOTPROTO=static
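Before restarting the network service (on CentOS 6 that would be `service network restart`), it is worth sanity-checking the file you just wrote. Here is a minimal sketch that checks the keys we set above; it uses a temporary copy instead of the real ifcfg-eth1, so it is safe to run anywhere:

```shell
# Sanity-check sketch: verify an ifcfg-style file contains the keys we set.
# Uses a temporary copy rather than the real /etc/sysconfig/network-scripts file.
cfg=$(mktemp)
printf 'DEVICE=eth1\nONBOOT=yes\nBOOTPROTO=static\n' > "$cfg"
if grep -q '^ONBOOT=yes' "$cfg" && grep -q '^BOOTPROTO=static' "$cfg"; then
  status="ifcfg OK"
else
  status="ifcfg BAD"
fi
echo "$status"
rm -f "$cfg"
```

On the real machine you would point `grep` at /etc/sysconfig/network-scripts/ifcfg-eth1 instead of a temp file.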

Now let's install and configure our proxy server to balance the traffic.

# Update packages on CentOS
yum update -y

# Add the nginx repository (the package URL was lost in the original)
wget
rpm -ivh nginx-release-rhel-6-0.el6.ngx.noarch.rpm

# Install Nginx
yum install nginx -y

After installation, the NginX configuration files appear in the directory
/etc/nginx

First back up the existing file:
cp nginx.conf /etc/nginx/nginx.conf.backup

then replace the config file:
rm nginx.conf

vi nginx.conf

Now add our new config

# user and group the worker processes run as
user nginx;

# 3 worker processes
worker_processes 3;

# error log
error_log /var/log/nginx/error.log debug;

events {
    # maximum connections per worker
    worker_connections 1024;
}

http {
    # include the mime table
    include mime.types;
    # default mime type
    default_type application/octet-stream;

    client_header_timeout 3m;
    client_body_timeout 3m;
    send_timeout 3m;
    sendfile on;

    # This module lets you describe groups of servers that can be used
    # in the proxy_pass and fastcgi_pass directives.
    upstream web {
        # Hash sessions on the first three octets of the client IP address,
        # which helps a lot if someone sends asynchronous requests
        ip_hash;
        # The server directive sets the name and parameters of a backend.
        # Note that we use the group name "web" in the proxy_pass directive.
        # (the backend addresses were lost in the original)
        server weight=2; # max_fails=60 fail_timeout=2s;
        server weight=2; # max_fails=60 fail_timeout=2s;
    }

    server {
        # listen on port 80
        listen 80;
        location / {
            # the proxy_pass directive discussed earlier
            proxy_pass http://web;
            # include the proxy settings
            include /etc/nginx/proxy.conf;
        }
    }
}
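The ip_hash directive keys on the first three octets of the client's IPv4 address, so all requests from the same /24 subnet stick to the same backend. A quick illustration of the key it effectively uses, with made-up client addresses:

```shell
# Illustrative only: ip_hash keys on the first three octets of an IPv4 address,
# so these two (made-up) clients land on the same upstream server.
ip1=192.168.10.55
ip2=192.168.10.99
key1=$(echo "$ip1" | cut -d. -f1-3)   # first three octets of ip1
key2=$(echo "$ip2" | cut -d. -f1-3)   # first three octets of ip2
echo "key1=$key1 key2=$key2"
```

Since both keys are 192.168.10, both clients are hashed to the same backend.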


In our config we included the file proxy.conf, in which we specify the proxy settings:

proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffers 32 4k;
proxy_cache_bypass http;
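The $proxy_add_x_forwarded_for variable appends the client's $remote_addr to any X-Forwarded-For header already present, so each proxy in the chain records its caller. A sketch of the resulting header, using made-up addresses:

```shell
# Sketch of how $proxy_add_x_forwarded_for grows the header:
# nginx appends $remote_addr (comma-separated) to the incoming value.
existing="10.0.0.5"      # X-Forwarded-For received from an earlier proxy (made up)
client="203.0.113.7"     # $remote_addr of the direct client (made up)
header="X-Forwarded-For: ${existing:+$existing, }$client"
echo "$header"
```

If there was no incoming X-Forwarded-For header, the value is just the client's address. After saving both files, `nginx -t` will check the configuration syntax before you start the service.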

Thanks for your attention.
