An “ideal” cluster. Part 2.1: a virtual cluster on Hetzner



A quick note: this guide was born while we were evaluating various virtualization systems at Acronis.
Proxmox showed itself well, and perhaps our experience will be useful for solving your own problems.


When renting yet another server in a data center, everyone thinks about how to use it rationally.
After all, it is no secret that a well-tuned server should not be too busy: it should have enough spare resources to do other work. On top of that, fault tolerance matters, so keeping several copies of the same server as a hot standby looks like a great idea.
Virtualization is what solves these problems.

Now I will show how to quickly turn a single physical server into a small cluster of Linux- and Windows-based virtual servers.
In further articles I will try to explain how to bring up a secure web cluster on top of it and enjoy all the benefits of modern virtualization technologies.
This guide focuses on the Proxmox virtualization system: it is freely available and open source, while support and the enterprise repository require a paid subscription. We will do without the support and the commercial repository of Proxmox. Here is what Wikipedia says about this product.

Proxmox Virtual Environment (Proxmox VE) is an open-source virtualization system based on Debian GNU/Linux. It is developed by the Austrian company Proxmox Server Solutions GmbH and sponsored by the Internet Foundation Austria.
It uses KVM and OpenVZ as hypervisors. Accordingly, it can run any OS supported by KVM (Linux, *BSD, Windows and others) with minimal performance loss, and Linux with no loss at all.
Virtual machines and the host server itself are managed through the web interface or through the standard Linux command line.
Many options are available for the created virtual machines: the hypervisor used, the storage type (image file or LVM), the type of emulated disk subsystem (IDE, SCSI or VirtIO), the type of emulated network card, the number of available processors, and others.

Key features

  • Simple management through the web interface;
  • Real-time load monitoring;
  • Library of installation images (in local or remote storage);
  • Connection to the “physical” console of guest systems directly from the browser (via VNC);
  • Combining servers in a cluster with the possibility of live migration of virtual machines (without stopping the guest system);
  • Quick deployment of guest systems from templates (available only for OpenVZ);
  • Automatic backup of virtual machines.





First of all, order a server with Debian 7 (64-bit) on board; the more memory, the better! Take care of the safety of your data: RAID 1 will not be superfluous at all, although it carries a few risks of its own. We are optimists, so we take RAID 1.
As soon as we have root access to our new server, we get to work:

# Before installing Proxmox itself, you need to decide on a hostname and set it

 nano /etc/hosts 


127.0.0.1 localhost
x.x.x.x  test.xxxx.info test
#
# IPv6
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
x:x:x:4105::2  test.xxxx.info


nano /etc/hostname

test
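
A quick sanity check that the name was picked up correctly (after a reboot, or after running hostname test once), assuming the FQDN above is the one you actually put into /etc/hosts:

hostname
# should print: test
hostname -f
# should print: test.xxxx.info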


# Change the time zone

echo "Europe/Moscow" > /etc/timezone
dpkg-reconfigure -f noninteractive tzdata


# Create a folder for repositories

mkdir -p /etc/apt/sources.list.d/


# Download the repository lists

cd /etc/apt/sources.list.d/ 
wget http://sycraft.info/share/debian7/sources.list.d/debian7.list
wget http://sycraft.info/share/debian7/sources.list.d/dotdeb7.list
wget http://sycraft.info/share/debian7/sources.list.d/neurodebian.sources7.list
wget http://sycraft.info/share/debian7/sources.list.d/proxmox7.list
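
These list files come from a third-party mirror; if you prefer, you can write the Proxmox one yourself, since for Debian 7 (wheezy) the no-subscription repository is just one line (the other files hold the usual Debian, dotdeb and NeuroDebian entries):

echo "deb http://download.proxmox.com/debian wheezy pve-no-subscription" > /etc/apt/sources.list.d/proxmox7.list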


# Install the keys

cd /root/
wget http://www.dotdeb.org/dotdeb.gpg 
cat dotdeb.gpg | apt-key add -
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys A040830F7FAC5991
apt-key adv --recv-keys --keyserver pgp.mit.edu 2649A5A9
wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -
rm *.gpg


# Update system

apt-get update && apt-get upgrade -f -y


# Install the necessary minimum of packages

apt-get install ntp screen mc git ntpdate sudo zip unzip pigz locales tzdata nano aptitude htop iotop sysstat rkhunter chkrootkit nscd lsof strace subversion multitail -y -f


# Install the kernel from proxmox

apt-get install pve-firmware pve-kernel-2.6.32-26-pve -y -f
apt-get install pve-headers-2.6.32-26-pve -y -f


# Clean the system of the old kernels

apt-get remove linux-image-amd64 linux-image-3.2.0-4-amd64 -y -f


# Regenerate the GRUB configuration

update-grub


# Reboot

reboot


# If we were lucky, the server booted into the new kernel and we can now install Proxmox itself

apt-get install proxmox-ve-2.6.32 ntp ssh lvm2 postfix ksm-control-daemon vzprocps open-iscsi bootlogd -y
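
At this point it does not hurt to verify that the machine is really running the PVE kernel and that the Proxmox packages are in place (exact version numbers will differ):

uname -r
# expect something like 2.6.32-26-pve
pveversion -v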


# Remove the repository for the paid Proxmox edition

rm -fr /etc/apt/sources.list.d/pve-enterprise.list


# Allow iptables modules inside containers, a generous set for all occasions

nano /etc/vz/vz.conf


IPTABLES="ipt_owner ipt_REDIRECT ipt_recent ip_tables iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length ip_conntrack ip_conntrack_ftp ipt_state iptable_nat ip_nat_ftp"


# Also load the modules at kernel boot time (an extra step, but just in case)

nano /etc/modules


ipt_MASQUERADE
ipt_helper
ipt_REDIRECT
ipt_state
ipt_TCPMSS
ipt_LOG
ipt_TOS
tun
iptable_nat
ipt_length
ipt_tcpmss
iptable_mangle
ipt_limit
ipt_tos
iptable_filter
ipt_ttl
ipt_REJECT
loop
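
After the next reboot you can check that the netfilter modules were actually picked up, for example:

lsmod | grep -E 'ipt|nat|conntrack'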


Now a few words about the proposed architecture:

  • Together with the server we order 2 external (public) IP addresses. The first one carries the service ports: the Proxmox web panel, SSH, MySQL and other service ports that nobody else should know about.
  • The second address serves the ports that must be reachable by everyone, for example 80 and 443 and nothing else. This address is brought up on an empty virtual machine stripped of unnecessary services; everything else is handled by port forwarding.


# Save current iptables rules

iptables-save > /etc/iptables.up.rules


# Add the rules to the *nat section for our external service address

nano /etc/iptables.up.rules


*nat
:PREROUTING ACCEPT [2164:136969]
:POSTROUTING ACCEPT [58:3659]
:OUTPUT ACCEPT [0:0]
-A PREROUTING -d x.x.16.182/32 -p tcp -m tcp --dport 22 -j DNAT --to-destination 192.168.8.2:22
-A POSTROUTING -o vmbr0 -j MASQUERADE
-A POSTROUTING -d x.x.16.182 -p tcp -s 192.168.8.0/24 --dport 22 -j SNAT --to-source x.x.16.182
COMMIT


# Check the rules, there should be no errors

iptables-restore < /etc/iptables.up.rules 


The POSTROUTING rule is very important. If you want one of the virtual machines to reach a forwarded port of the external address from inside the internal network, nothing will work without this rule!
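
To watch the NAT rules in action (and see their packet counters grow while you test a forwarded port), this is enough:

iptables -t nat -L -n -v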


# Download openvz container images

cd /var/lib/vz/template/cache/
wget http://download.openvz.org/template/precreated/debian-7.0-x86_64.tar.gz
wget http://download.openvz.org/template/precreated/centos-6-x86_64.tar.gz
wget http://download.openvz.org/template/precreated/ubuntu-13.10-x86_64.tar.gz
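
If you prefer the command line to the web panel, a container can be created from one of these templates roughly like this; this is a sketch only, the CT ID 101, hostname and IP are placeholders, and it uses plain venet networking, while for the gw machine below we will want a network device (veth) instead:

vzctl create 101 --ostemplate debian-7.0-x86_64
vzctl set 101 --hostname web1 --ipadd 192.168.8.5 --nameserver 8.8.8.8 --save
vzctl start 101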


# Drivers in case we need Windows

cd /var/lib/vz/template/iso/
wget http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/virtio-win-0.1-74.iso


# Next, open the web panel at the external address of our server: https://x.x.16.182:8006/
After logging in we see a message about using the free version; that is what we need. Either get used to clicking OK, or buy a subscription.


Network Setup for Hetzner



# Let's start reconfiguring the network. The bridge setup below may look strange, but Hetzner limits the number of MAC addresses per switch port, so all external addresses are served from a single MAC.
This setup works just as well in data centers without such restrictions; it is simply a universal option.
There is also a private network, 192.168.8.x with netmask 255.255.0.0, which we use as the internal network between our virtual machines.




# Next, we edit the network settings and reboot the server; afterwards they look like this

cat /etc/network/interfaces


auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
        address x.x.16.182
        netmask 255.255.255.224
        pointopoint x.x.16.129
        gateway x.x.16.129
        dns-nameservers 8.8.8.8
auto vmbr0
iface vmbr0 inet static
        address x.x.16.182
        netmask 255.255.255.224
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        pre-up iptables-restore < /etc/iptables.up.rules
        up ip route add x.x.x.150/32 dev vmbr0
#
auto vmbr1
iface vmbr1 inet static
        address 192.168.8.100
        netmask 255.255.0.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0


# The gateway of the external service IP goes into pointopoint and gateway on eth0; vmbr0 gets the same address but without a gateway. The 'up ip route add' route and the 'pre-up iptables-restore' firewall rules must be tied to the second address, the one the public ports will live on.

For reference, here is an example of network settings for data centers without a limit on the number of MAC addresses:



nano /etc/network/interfaces
# network interface settings
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet manual
auto vmbr0
iface vmbr0 inet static
        address  x.x.16.182
        netmask  255.255.255.0
        gateway  x.x.16.1
        dns-nameservers 8.8.8.8
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        pre-up iptables-restore < /etc/iptables.up.rules
auto vmbr1
iface vmbr1 inet static
        address  192.168.8.100
        netmask  255.255.0.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
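
Whichever variant you use, after a reboot (or service networking restart) it is worth making sure the bridges actually came up:

brctl show
ip addr show vmbr0
ip addr show vmbr1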


# Here is an example of how to install Windows, should we need it




In the VM's display settings I choose SPICE; the client for it is available at www.spice-space.org/download.html
Network and disk are VirtIO; I attach the downloaded VirtIO ISO as a second CD-ROM and install the drivers right away.
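
For reference, such a KVM guest can also be defined from the host's command line with qm; this is a rough sketch only, where the VM ID 200, the name, the disk size and the 'local' storage are assumptions, and you would additionally attach your own Windows installation ISO:

qm create 200 --name win-test --memory 4096 --sockets 1 --cores 2 \
  --ostype win7 --vga qxl \
  --net0 virtio,bridge=vmbr1 \
  --virtio0 local:32 \
  --ide2 local:iso/virtio-win-0.1-74.iso,media=cdrom
# attach the Windows installer as another CD-ROM, e.g. --ide0 local:iso/<your-windows>.iso,media=cdrom
qm start 200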


The last thing in this article is the configuration of gw, the virtual machine that will forward the public ports to us. No SSH or other services will listen on the network in this virtual machine: it is purely a firewall node.
Create a CT from the Debian template, choosing a network device (veth) for its networking.



In the container itself, it will look like this:

nano /etc/network/interfaces


auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
        address x.x.x.150
        netmask 255.255.255.255
        pointopoint x.x.16.182
        gateway x.x.16.182
        pre-up iptables-restore < /etc/iptables.up.rules
auto eth1
iface eth1 inet static
        address 192.168.8.1
        netmask 255.255.0.0


# Pay attention to the mask, and note that the gateway and pointopoint of this interface are the service address of the host.

# Add the rules to the *nat section for our external public address

nano /etc/iptables.up.rules


*nat
:PREROUTING ACCEPT [2164:136969]
:POSTROUTING ACCEPT [58:3659]
:OUTPUT ACCEPT [0:0]
-A PREROUTING -d x.x.x.150/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.8.5:80
-A POSTROUTING -o eth0 -j MASQUERADE
-A POSTROUTING -d x.x.x.150 -p tcp -s 192.168.8.0/24 --dport 80 -j SNAT --to-source x.x.x.150
COMMIT


# Enable forwarding of the masqueraded traffic

echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p
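
To double-check that the setting took effect:

sysctl net.ipv4.ip_forward
# net.ipv4.ip_forward = 1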


# In this container, remove everything unnecessary!

apt-get purge -y -f openssh-server postfix ssh samba bind9 sendmail apache2*
apt-get autoremove -y
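
After the purge it is worth making sure nothing is left listening inside this gateway container:

netstat -tulpn
# ideally the list contains nothing you did not consciously leave there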


If you run into any difficulties, or need someone to make this story come true for you, I am always happy to help; you are welcome to contact me.

This continues the topic of my article “Ideal” www cluster. Part 1. Frontend: NGINX + Keepalived (vrrp) on CentOS.
I hope there will be many, many more articles on this topic! Thanks for your attention.
