How I did my internship at the university and set up Nextcloud

So, what is this text actually about?


How often do our expectations diverge from harsh reality?

When I enrolled at the country's best technical university, in the Department of Information Security, I was counting on fascinating coursework, a fun student life and, of course, an interesting internship. However, instead of deciphering ciphers (hi, Alan) and opening cryptexes (good evening, Robert), I had to set up a Nextcloud cloud on several clustered servers. And it turned out to be interesting!

“So what's the big deal?” you might think. Indeed, the task is quite trivial. However, when I searched the web for information on the subject, I still could not find a tutorial that brought all the parts of the setup together. So, to save time for fellow dummies of network administration, I am writing this article, with your permission.

So, what will we get in the end? A small Nextcloud cloud based on a master server (yes, yes, of course there should be a proxy. And several master servers. And more polish. If you were expecting all of that, I apologize.) and several (in this case, two) servers playing the role of a single, fault-tolerant data storage.

A nice little cloud for a small company.

Let's go


Initial conditions


First of all, we need to prepare the hardware (in my case, VirtualBox with three virtual machines created in it).

Master - 192.168.0.105
Server1 - 192.168.0.108
Server2 - 192.168.0.109

1. Operating system - Ubuntu Server 16.04.
The machine specs do not play a special role: the power of an average (very average) computer is enough to deploy our small cloud. Overestimated virtual machine parameters can play a trick on you, though. When I demonstrated my work on a university computer with four gigabytes of RAM, launching the third machine crashed the whole computer. Although maybe the custom Unix build installed on it was to blame?

2. A working network.
If you, like me, use VirtualBox, I advise you to select "Bridged Adapter" as the connection type in the network settings (see the example after this list). That way you can ping the servers in the VMs both from the host system and from the other machines.

3. A good playlist (optional).
After the n-th hour of dancing with a tambourine in search of the optimal solution (maybe I am still in the process), I became completely unbearable without one.
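
If you prefer the command line, bridged networking in VirtualBox can also be set with VBoxManage; a minimal sketch, assuming a VM named Server1 and a host interface eth0 (both are placeholders - substitute your own, which the first command will list):

VBoxManage list bridgedifs
VBoxManage modifyvm "Server1" --nic1 bridged --bridgeadapter1 eth0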

Cluster setup


To get fault-tolerant storage spanning several servers, we need a cluster. I chose GlusterFS because I managed to find an excellent manual for it in Russian (!!).

Let's start with the installation.

Open the console of our server and install the package that provides the add-apt-repository command (on Ubuntu 16.04 that is software-properties-common):

sudo apt-get install software-properties-common

Then, connect to the repository:

sudo add-apt-repository ppa:gluster/glusterfs-3.8
sudo apt-get update

And finally, install:

sudo apt-get install glusterfs-server

We perform the same actions on the second (and every subsequent) server.
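
To make sure the installation succeeded, you can check the client version and the state of the daemon on each server (on Ubuntu the service is usually called glusterfs-server; adjust the name if your packaging differs):

gluster --version
sudo service glusterfs-server status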

Advice! (for dummies)
When I first encountered configuring servers in a VM, I was unpleasantly surprised by the lack of a shared clipboard between the guest and the host OS (although this was expected). Therefore, I used PuTTY to communicate with the machines. Download and run putty.exe. Before using it, do not forget to install openssh-server on the machines we will be connecting to:

sudo apt-get install openssh-server

I think the same client can be used to communicate with real machines. Or maybe there is something better?
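
On a Linux or macOS host you can do without PuTTY and use the stock ssh client instead; a usage sketch (user here is a placeholder for the account created during the Ubuntu installation):

ssh user@192.168.0.108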

So, Gluster is installed. Now let's connect the servers to each other. But first, find out the IP addresses of our servers:

ifconfig

Now we create the peer connection by entering the following on the first server (having logged in as root beforehand):

gluster peer probe 192.168.0.109

peer probe: success.

Great, now we have two servers in the cluster. We repeat this operation for every server we want to join to the cluster, changing only the IP address.
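
You can check from the first server that the peers really joined; gluster peer status should list the other servers as connected:

gluster peer status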

Storage created with GlusterFS supports several volume types (for details, see the link above). We are interested in the replicated type: content is mirrored across all servers in the cluster.

So far we have nowhere to store the data, so we create folders on our servers.
On the first:

mkdir /mnt/server1

and on the second:

mkdir /mnt/server2

The final part of the cluster setup is creating the volume:

gluster volume create nextcloud replica 2 transport tcp 192.168.0.108:/mnt/server1 192.168.0.109:/mnt/server2 force

where nextcloud is the name of our volume, and 2 is the number of replicas - here, the number of servers in the cluster.

Do not forget the word force at the end: GlusterFS refuses to create bricks on the root partition without it, and you can otherwise get an error and puzzle for a long time over what is wrong.

We launch:

gluster volume start nextcloud
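
At this point it is worth checking that the volume actually started; both commands below are standard GlusterFS tooling:

gluster volume info nextcloud
gluster volume status nextcloud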

The work with the cluster is almost complete. The rest comes after installing the cloud.

Installing Nextcloud


For this we need our master server. Log in with root rights and enjoy.
You can use this article for the installation. We follow it up to Step 5 and stop there.
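
A quick way to get a root shell on Ubuntu, if you are not already in one:

sudo -i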

Download the archive with the cloud:

cd ~
wget --no-check-certificate https://download.nextcloud.com/server/releases/nextcloud-11.0.0.tar.bz2
sudo tar -C /var/www -xvjf ~/nextcloud-11.0.0.tar.bz2 
rm ~/nextcloud-11.0.0.tar.bz2

Create a couple of folders:

sudo mkdir /var/www/nextcloud/data
sudo mkdir /var/www/nextcloud/assets


The most crucial moment. We remember our cluster and mount it on the master server:

mount.glusterfs 192.168.0.108:/nextcloud /var/www/nextcloud/data/
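
Two notes of my own here, not from the original walkthrough: if the mount.glusterfs command is not found, the master needs the GlusterFS client package; and to make the mount survive a reboot, you can add a line in the standard GlusterFS fstab format. Nextcloud will also want its directories owned by the web server user (www-data on Ubuntu):

sudo apt-get install glusterfs-client
echo "192.168.0.108:/nextcloud /var/www/nextcloud/data glusterfs defaults,_netdev 0 0" | sudo tee -a /etc/fstab
sudo chown -R www-data:www-data /var/www/nextcloud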

Now all files that land in our cloud will be copied to all the servers.
We complete the installation by following the tips from the article.

What is the result?


So, what have we built? A small cloud that is fault tolerant to the failure of the servers storing the data. Try taking one of them down deliberately: the master will think for a moment, and then everything will work again. When the crashed server recovers, the data on it will be updated automatically.
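
A minimal sketch of such a test, assuming the same paths and addresses as above (test.txt is just an example name; self-heal may take a short while to catch up):

sudo poweroff                              # on the second server: take it down
touch /var/www/nextcloud/data/test.txt     # on the master: the cloud keeps working
ls /mnt/server2                            # after the second server returns: the file shows up in its brick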

Of course, the bottleneck is the master server. Having several masters working with the same database under the control of, for example, Galera, plus a proxy server distributing traffic between them, would significantly increase the fault tolerance of the system (although it can hardly be called fault tolerant right now). Maybe in the next article?

If you have read up to this point, you are a hero - thank you very much for your attention.
It is a pleasure to write for such a pleasant community.
