Failover Cluster for Java Applications

Some time has passed since the project launched, and it is time to increase the computing power available to the application. We decided to build a cluster that can easily be scaled later, so we need to configure it to distribute requests between the servers.

For this we will use four servers running Linux CentOS 5.5, together with Apache, Tomcat 7, mod_jk, and Heartbeat.
The web1 and web2 servers distribute requests with Apache and provide fault tolerance with Heartbeat; the tomcat1 and tomcat2 servers run Tomcat for the Java application.

Software installation

Install Apache and Heartbeat on the web servers:

[root@web1 opt]$ yum -y install httpd heartbeat
[root@web2 opt]$ yum -y install httpd heartbeat




Since the repository does not have the latest stable version of Tomcat, I prefer to download it from a mirror:

[root@tomcat1 opt]$ wget apache.vc.ukrtel.net/tomcat/tomcat-7/v7.0.21/bin/apache-tomcat-7.0.21.tar.gz
[root@tomcat1 opt]$ tar xvfz apache-tomcat-7.0.21.tar.gz
[root@tomcat1 opt]$ mkdir tomcat && mv apache-tomcat-7.0.21/* tomcat/
[root@tomcat1 opt]$ rmdir apache-tomcat-7.0.21
[root@tomcat1 opt]$ ln -s /opt/tomcat/bin/catalina.sh /etc/init.d/tomcat

[root@tomcat2 opt]$ wget apache.vc.ukrtel.net/tomcat/tomcat-7/v7.0.21/bin/apache-tomcat-7.0.21.tar.gz
[root@tomcat2 opt]$ tar xvfz apache-tomcat-7.0.21.tar.gz
[root@tomcat2 opt]$ mkdir tomcat && mv apache-tomcat-7.0.21/* tomcat/
[root@tomcat2 opt]$ rmdir apache-tomcat-7.0.21
[root@tomcat2 opt]$ ln -s /opt/tomcat/bin/catalina.sh /etc/init.d/tomcat
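
With the symlink in place, Tomcat can be started on each node; a usage sketch, assuming JAVA_HOME is already set in root's environment (catalina.sh resolves the symlink to find CATALINA_HOME):

[root@tomcat1 opt]$ /etc/init.d/tomcat start
[root@tomcat2 opt]$ /etc/init.d/tomcat start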


In order for Apache on the web1 and web2 servers to distribute the load between the tomcat1 and tomcat2 servers, the mod_jk module has to be connected to Apache.
Download mod_jk for your version of Apache, rename it, and move it to the /etc/httpd/modules directory. Do the same on the web2 server:

[root@web1 opt]$ wget archive.apache.org/dist/tomcat/tomcat-connectors/jk/binaries/linux/jk-1.2.31/i386/mod_jk-1.2.31-httpd-2.2.x.so
[root@web1 opt]$ mv mod_jk-1.2.31-httpd-2.2.x.so /etc/httpd/modules/mod_jk.so

[root@web2 opt]$ wget archive.apache.org/dist/tomcat/tomcat-connectors/jk/binaries/linux/jk-1.2.31/i386/mod_jk-1.2.31-httpd-2.2.x.so
[root@web2 opt]$ mv mod_jk-1.2.31-httpd-2.2.x.so /etc/httpd/modules/mod_jk.so






Setting up Heartbeat

Create the Heartbeat configuration files on web1 (do the same on web2):

[root@web1 opt]$ touch /etc/ha.d/authkeys
[root@web1 opt]$ touch /etc/ha.d/ha.cf
[root@web1 opt]$ touch /etc/ha.d/haresources




Set read-only permission for the root user on /etc/ha.d/authkeys, otherwise Heartbeat will not start:

[root@web1 ha.d]$ chmod 600 /etc/ha.d/authkeys



Add the following two lines to authkeys. The file must be identical on both nodes, so do the same on web2:

[root@web1 ha.d]$ nano authkeys

auth 2
2 sha1 your-password






Edit the file /etc/ha.d/ha.cf. The file must be identical on both nodes; do the same on web2:

[root@web1 ha.d]$ nano ha.cf

logfacility local0
keepalive 2
deadtime 10
initdead 120
bcast eth0
udpport 694
auto_failback on
node web1
node web2
respawn hacluster /usr/lib/heartbeat/ipfail
use_logd yes
logfile /var/log/ha.log
debugfile /var/log/ha-debug.log




Edit the file /etc/ha.d/haresources. The file must be identical on both nodes; do the same on web2:

[root@web1 ha.d]$ nano haresources

web1 192.168.0.1 httpd # shared IP used to reach the cluster from a browser
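
With authkeys, ha.cf, and haresources in place on both nodes, Heartbeat can be enabled and started; a usage sketch with the stock CentOS service tools (these commands are not part of the original listing):

[root@web1 ha.d]$ chkconfig heartbeat on
[root@web1 ha.d]$ service heartbeat start
[root@web2 ha.d]$ chkconfig heartbeat on
[root@web2 ha.d]$ service heartbeat start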








Balancing settings

Add the following lines to the /etc/httpd/conf/httpd.conf file on both web servers:

LoadModule jk_module modules/mod_jk.so

JkWorkersFile conf/workers.properties
JkLogFile logs/mod_jk.log
JkLogLevel info
JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
JkRequestLogFormat "%w %V %T"




In the DocumentRoot section, add the following two lines:

JkMount /*.jsp loadbalancer
JkMount /servlet/* loadbalancer




In the /etc/httpd/conf folder of both web servers, create the workers.properties file:

[root@web1 conf]$ touch workers.properties
[root@web2 conf]$ touch workers.properties




Add the following lines to both files:

worker.list=tomcat1, tomcat2, loadbalancer

worker.tomcat1.port=10010
worker.tomcat1.host=192.168.1.1
worker.tomcat1.type=ajp13
worker.tomcat1.lbfactor=1

worker.tomcat2.port=10020
worker.tomcat2.host=192.168.1.2
worker.tomcat2.type=ajp13
worker.tomcat2.lbfactor=1

worker.loadbalancer.type=lb
worker.loadbalancer.balanced_workers=tomcat1, tomcat2
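
After httpd.conf and workers.properties are edited, check and reload the Apache configuration on both web servers; a usage sketch, not part of the original listing (keep in mind that Heartbeat starts httpd on the active node):

[root@web1 conf]$ httpd -t
[root@web1 conf]$ service httpd reload
[root@web2 conf]$ httpd -t
[root@web2 conf]$ service httpd reload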




In /opt/tomcat/conf/server.xml on both Tomcat servers, configure the ports (all ports must be different):

[root@tomcat1 conf]$ nano server.xml
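
A minimal sketch of the relevant port settings on tomcat1, assuming only the AJP port 10010 from workers.properties above; the shutdown and HTTP port numbers here are example values:

<!-- shutdown port of this instance -->
<Server port="8105" shutdown="SHUTDOWN">
...
<!-- AJP connector that mod_jk connects to; must match worker.tomcat1.port -->
<Connector port="10010" protocol="AJP/1.3" redirectPort="8443" />
<!-- plain HTTP connector, handy for testing the node directly -->
<Connector port="8180" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />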







[root@tomcat2 conf]$ nano server.xml
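
The same on tomcat2, again assuming only the AJP port 10020 from workers.properties and example values for the rest:

<Server port="8205" shutdown="SHUTDOWN">
...
<Connector port="10020" protocol="AJP/1.3" redirectPort="8443" />
<Connector port="8280" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />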










Configuring session replication

To prevent a user session from being lost when one of the Tomcat servers crashes, it makes sense to configure session replication between them. To do this, add the following lines to /opt/tomcat/conf/server.xml inside the <Engine name="Catalina" defaultHost="localhost"> section on every Tomcat:

[root@tomcat1 conf]$ nano server.xml

<Engine name="Catalina" defaultHost="localhost" debug="0" jvmRoute="tomcat1">

<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster">
    <Membership
        className="org.apache.catalina.cluster.mcast.McastService"
        mcastAddr="228.0.0.4"
        mcastBindAddress="127.0.0.1"
        mcastPort="45564"
        mcastFrequency="500"
        mcastDropTime="3000"/>
    <Receiver
        className="org.apache.catalina.cluster.tcp.ReplicationListener"
        tcpListenAddress="auto"
        tcpListenPort="4001"
        tcpSelectorTimeout="100"
        tcpThreadCount="6"/>
</Cluster>

[root@tomcat2 conf]$ nano server.xml

<Engine name="Catalina" defaultHost="localhost" debug="0" jvmRoute="tomcat2">

<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster">
    <Membership
        className="org.apache.catalina.cluster.mcast.McastService"
        mcastAddr="228.0.0.4"
        mcastBindAddress="127.0.0.1"
        mcastPort="45564"
        mcastFrequency="500"
        mcastDropTime="3000"/>
    <Receiver
        className="org.apache.catalina.cluster.tcp.ReplicationListener"
        tcpListenAddress="auto"
        tcpListenPort="4002"
        tcpSelectorTimeout="100"
        tcpThreadCount="6"/>
</Cluster>
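
Tomcat only replicates sessions of web applications that are marked as distributable, so it is worth checking that the application's WEB-INF/web.xml contains the corresponding element (a reminder; this step is not shown in the original setup):

<web-app>
    ...
    <!-- allow this application's sessions to be replicated across the cluster -->
    <distributable/>
</web-app>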
This completes the setup of the failover and performance cluster for Java servlets. We have achieved fault tolerance and scalable performance, which will allow us to easily add new nodes to the cluster if we run short of capacity.