Expanding Your Cloud into the Cloud: Installing CoreOS

I've always been interested in cloud technology, especially its most trending aspects: decentralization, clustering, optimization, and distribution of everything (computing resources, data, donuts, and power). So I could not pass by CoreOS, which gets a lot of attention in the IT community and which became the starting point for my experiments.

To combine business with pleasure, I started looking for an application that would, on the one hand, be interesting to run on cloud technology and, on the other, be useful later. I settled on deploying an OwnCloud installation on top of CoreOS.
Now I will tell you what came of it, providing links along the way so that interested readers can dig deeper into the subject. If you have questions, feel free to ask them in the comments.

So, I set myself the following tasks:
  1. Install CoreOS on a bare-metal server
  2. Set up distributed data storage
  3. Write my own Dockerfile and run the application in the cluster
  4. Set up automatic updating and registration of containers
  5. Look at the technologies left unused and find an application for them (*)

In this article I will talk about installing CoreOS. Configuration and further experiments will follow in subsequent articles.

Install CoreOS on a bare-metal server


To install OwnCloud on a server, you first need a server.
Installing on paid virtual servers, or on virtual machines on a local box, is not as interesting as installing on live hardware. But at the current dollar exchange rate, renting a server is expensive, so I raided Google looking for a provider offering cheap bare-metal machines.

Google offered me two companies, both French: Kimsufi and Online.
Kimsufi is a subsidiary of OVH, one of the largest hosting providers.
Online is a subsidiary of Iliad, one of the largest telecommunications companies.

Both companies offer cheap yet powerful machines. Although reviews say online.net has the better network, my choice fell on Kimsufi for two subjective reasons: 1) the server with the VIA Nano U2250 is too slow, and the next one in the line costs 16 euros, which is more than I was willing to pay; 2) I already had a verified OVH/Kimsufi account.

Registration


Much has already been said about registration, account confirmation, and VAT removal at Kimsufi (geek magazine). The only thing worth warning about is the waiting time: support at Kimsufi seems to work on a leftover basis, and customers' problems get solved only when there are no tasks from "Big Brother" (OVH). Keep that in mind if you plan to host production there.

Buying a server


I ordered three servers. Why three? Because only three servers can guarantee fault tolerance and rule out split-brain.
Brief explanation
The products tested later use the formula (n-1)/2 to calculate fault tolerance, which shows that the minimum useful value of n is three. In our case, proof can be found in the etcd docs and in Percona XtraDB Cluster, discussed in the comments.
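The quorum formula above is easy to check with a line of shell arithmetic (integer division): with two nodes you tolerate zero failures, so two servers buy you nothing over one, and three is the first useful cluster size.

```shell
# (n - 1) / 2 = how many nodes a quorum-based cluster of n can lose.
for n in 1 2 3 4 5; do
  echo "n=$n tolerates $(( (n - 1) / 2 )) failure(s)"
done
```

The jump from 0 to 1 tolerated failure happens exactly at n=3, which is why the article buys three servers.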

Since there are a lot of people who want a server at this price, it is quite difficult to become the happy owner of this hardware the "honest" way. Personally, I used this script to grab them.
The differences between the servers in the KS-2 line
KS-2a - Atom N2800 CPU, 2TB HDD (or maybe I just got lucky ^_^)
KS-2b - Atom D425 CPU, 1TB HDD
KS-2c - Atom N2800 CPU, 40GB SSD


Install CoreOS


As soon as the servers are paid for and pass the pre-check, you will receive an email saying the server is ready. Right after that, the servers become available through the web admin panel.
How to get rid of the annoying popup in the admin panel, and a nuance of the refund policy
After a server appears in the Kimsufi admin panel, it keeps prompting you to install an OS until you install at least something through the web interface. Keep in mind that once you do, you can no longer get a refund for the server.


First we need to boot into recovery; after that, the installation follows the official instructions for installing to disk.
To boot into Rescue, click Netboot -> Rescue in the web interface. The server then needs to be rebooted; the easiest way is the Restart button. The password for logging in will arrive by mail.
Once logged in to the server via SSH, download the installation script:
wget https://raw.github.com/coreos/init/master/bin/coreos-install
chmod +x coreos-install
./coreos-install --help

We write our cloud-init file and start the installation with coreos-install -C stable -c /path/to/cloud-init -d /dev/sda.

After the installation is complete, you can make changes manually: add an SSH key or edit cloud-init. To do this, you need to mount the ROOT partition, which is number nine. For instance:
mount /dev/sda9 /mnt
echo 'ssh-rsa AAAAB... user@domain' > /mnt/home/core/.ssh/authorized_keys
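A note on this approach: sshd is picky about the mode and ownership of ~/.ssh, and the echo above, run as root, overwrites any key already there. A minimal sketch of a safer variant (demonstrated in a throwaway temp directory so it can be run anywhere; on the real server the prefix would be /mnt and the key is your own):

```shell
# Sketch: append the key instead of overwriting, and set the modes sshd
# expects (700 on .ssh, 600 on authorized_keys). $root stands in for the
# mounted ROOT partition (/mnt on the real server).
root=$(mktemp -d)
mkdir -p "$root/home/core/.ssh"
chmod 700 "$root/home/core/.ssh"
echo 'ssh-rsa AAAAB... user@domain' >> "$root/home/core/.ssh/authorized_keys"
chmod 600 "$root/home/core/.ssh/authorized_keys"
# On the real server, also: chown -R core:core /mnt/home/core/.ssh
stat -c '%a' "$root/home/core/.ssh/authorized_keys"
```

The final stat prints the mode so you can confirm it is 600 before unmounting.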

Or you can put the key through cloud-init:
cloud-init option one
#cloud-config
hostname: core1
write_files:
  - path: /home/core/.ssh/authorized_keys
    permissions: 0600
    owner: core
    content: |
      ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC+jxun+xn31x4tP7NdM6nMFI5b00bbk+VK4JM5mdyS+30/lIhhArMWnhla7NTw0BINdvutErZRFzhIqf5yaR/+O7/Oqc9J53dWJiEnz0si9hutbVSYA/Peo0Z9nFBm6Aep3816AzJYNzKIZg17JwqTKpEnV/ArXOmbCek9hi50R7yuZvtehWmJMNqTxKhqb5aD1joARd2iTMfS39pFsLsrxn8b2mGfcQH9v0+HwmNEiCGpq+HCMFTpt9Z1SOukeTpKOWOiBEzQPqaeaIeqXTDHHj2zWHv0/elIuRBFpxgC00DvoshlAzmB6CwCttBkigGQP2Mlcnovuo0RyuJRAlw1 user@domain


cloud-init option two
#cloud-config
hostname: core1
write_files:
  - path: /etc/ssh/sshd_config
    owner: root
    content: |
      # Use most defaults for sshd configuration.
      UsePrivilegeSeparation sandbox
      Subsystem sftp internal-sftp
      ClientAliveInterval 180
      UseDNS no
      AuthorizedKeysFile  %h/.ssh/authorized_keys.d/coreos-cloudinit
ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC+jxun+xn31x4tP7NdM6nMFI5b00bbk+VK4JM5mdyS+30/lIhhArMWnhla7NTw0BINdvutErZRFzhIqf5yaR/+O7/Oqc9J53dWJiEnz0si9hutbVSYA/Peo0Z9nFBm6Aep3816AzJYNzKIZg17JwqTKpEnV/ArXOmbCek9hi50R7yuZvtehWmJMNqTxKhqb5aD1joARd2iTMfS39pFsLsrxn8b2mGfcQH9v0+HwmNEiCGpq+HCMFTpt9Z1SOukeTpKOWOiBEzQPqaeaIeqXTDHHj2zWHv0/elIuRBFpxgC00DvoshlAzmB6CwCttBkigGQP2Mlcnovuo0RyuJRAlw1 user@domain
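One gotcha with either variant: coreos-cloudinit only treats a file as a cloud-config if its very first line is exactly #cloud-config. A quick self-check before pointing coreos-install -c at the file (the temp file here is just for illustration; substitute your real cloud-init path):

```shell
# Sketch: sanity-check a cloud-config file before installing with it.
# The "#cloud-config" header must be the first line, or the file is ignored.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
#cloud-config
hostname: core1
EOF
if [ "$(head -n 1 "$cfg")" = '#cloud-config' ]; then
  echo 'ok: valid cloud-config header'
else
  echo 'error: missing #cloud-config on the first line' >&2
fi
```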



During the first boot of the system, scripts run that do some magic (repair the GPT, resize the root filesystem /dev/sda9, and so on).
Note
Read more about CoreOS disk partitioning in the docs, in the mailing list, or on GitHub.

To boot into the freshly installed OS, change the boot order in the web interface (Netboot -> boot from the hard drive) and send the server to reboot (using the reboot command in the terminal or the Restart button in the web admin panel).
If you didn't forget to add an SSH key or specify your user in cloud-init, you should be able to log in. If you succeed, congratulations! If not, something went wrong.

Repartitioning the disk


Once the system is installed, you can begin to explore it. There is a lot of interesting stuff: etcd, fleet, systemd, and related technologies: kubernetes, confd, and much more!
But before moving on, I decided to create two additional partitions: one for user data (distributed storage) and one for containers and system applications (btrfs).
Why choose btrfs if it is experimental? Because the goal of my experiment is to experiment with new technologies. And although btrfs has been working stably on my desktops and laptops for a couple of years now, I have never used it in production.
Historical note
Initially, CoreOS created a btrfs partition for the root. Recently, ext4 with AUFS/OverlayFS has been used for the root instead. The move away from btrfs is related to two unpleasant bugs that, the developers swear, should be fixed as of kernel 3.18. Nevertheless, btrfs may still have problems with a very large number (several thousand) of layers (snapshots), but discussing that is beyond the scope of this article. Write comments!

To allocate something, you first have to free something up! To do that, go back to Rescue.
If you cannot boot into Rescue
I ran into a problem: after a fresh CoreOS install, the machine refused to boot via PXE, including into Rescue. If this trouble happens to you, there is a one-time hack: block all ICMP traffic with iptables (CoreOS loads rules through iptables-restore.service) and hit Restart in the web interface. The automation will decide the server failed to boot, and an engineer will manually boot it into Rescue. A proper fix is only possible by replacing the motherboard and having an engineer re-register the MAC address on the switch.

I reduced the filesystem size, then shrank the partition, and after that created new ones.
The root lives on the ninth partition, /dev/sda9. Let's proceed:
e2fsck -yf /dev/sda9     # check the FS for errors
resize2fs /dev/sda9 100G # shrink the root FS to 100GB
gdisk /dev/sda           # resize the partition
resize2fs /dev/sda9 100G # re-run to make sure everything is OK

In gdisk, you need to delete the partition and create a new, smaller one in its place.
If memory serves, the keystrokes are: d -> 9 -> n -> 9 -> <Enter> -> +100G -> <Enter> -> c -> 9 -> ROOT -> w -> Y
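If you would rather not type the answers interactively, the same sequence can be written to a file and fed to gdisk on stdin. A hedged sketch, shown here as the answer file only; review it carefully before actually piping it in, since writing the table is destructive:

```shell
# Sketch: the keystroke sequence above as a gdisk answer file.
# Blank lines accept gdisk's defaults (e.g. first free sector, type 8300).
answers=$(mktemp)
cat <<'EOF' > "$answers"
d
9
n
9

+100G

c
9
ROOT
w
Y
EOF
wc -l < "$answers"   # 12 answers, one per gdisk prompt
# Usage (destructive, double-check first!): gdisk /dev/sda < "$answers"
```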
If you did everything right, you can safely boot from the hard drive and see that /dev/sda9 is now 100GB, with 93G available:
core3 ~ # df -h /dev/sda9
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda9        97G  128M   93G   1% /
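Why does a 100G partition show up as 97G with only 93G available? Filesystem metadata accounts for the first gap, and ext4's default reserved-blocks quota (5% kept for root, tunable with tune2fs -m) accounts for the second. Rough integer arithmetic:

```shell
# Sketch: ext4 reserves 5% of blocks for root by default, which is why
# df on the nearly empty 97G filesystem shows about 93G available.
size_g=97
reserved_g=$(( size_g * 5 / 100 ))          # integer math: 4G reserved
echo "available ~ $(( size_g - reserved_g ))G"
```

This matches the df output above to within the 128M already in use.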

Although I am installing CoreOS on Kimsufi servers, these instructions work with other providers too. If you run into nuances during installation, write and we will discuss them.

This completes the story about installing CoreOS on a bare-metal Kimsufi server.

What's next?


In the next article I will show how to create partitions from the freed-up space, how to set up RTM (Real Time Monitoring, a monitoring script that draws nice graphs in the OVH web admin panel), etcd, and fleet, and how to deploy a distributed FS.
