Migrating a working Debian 7 system to software RAID1/10, using a Leaseweb dedicated server as an example



Today, dear Habr readers and guests, I will share the story of how I built my own lunapark with blackjack and girls, that is, how I migrated my dedicated server from Leaseweb to software RAID1/10 and LVM.

Why is this article needed?
It so happened that Leaseweb is probably one of the most inconvenient hosters in this respect: you cannot have the system installed with RAID already configured, and as far as I know, technical support will not do it for you either. KVM access is available only for some server series and costs quite a bit of money. So, driven by a desire to know Linux more deeply, I set out to figure this out myself.
I found many articles on the Internet describing how to do this, but they had two main problems: they either require access to the BIOS to change the boot order (which we do not have), or they rely on GRUB Legacy (versions before 1.0), which is not used in Debian 7.
The result of two weeks of living in Google, poring over manuals and Wikipedia, and many experiments is this guide "for dummies."

This article is in no way an advertisement for Leaseweb hosting. Why didn't I get the server from a reseller with Russian-language support? Or why didn't I use another company where RAID can be selected when ordering, or where KVM is available? It just happened historically, and something had to be done about it.

Since my own Linux knowledge was extremely modest at the time I was solving this problem, in this article I will try to describe the process of converting a working system to RAID 1/10 + LVM in as much detail as possible. The guide uses MBR disk partitioning; it has not been tested with GPT.

So, the starting conditions: there is a dedicated server from Leaseweb with 4 HDDs and Debian 7.0 64-bit installed, partitioned with the "Custom" scheme:

Partition 1 - boot (100 megabytes)
Partition 2 - swap (4 gigabytes)
Partition 3 - main - the remaining space, which we will put into LVM.

What we need:
Turn the boot partition into a RAID1 array across all 4 disks.
Two 4 GB swap partitions on two RAID1 arrays (if swap is not placed on RAID1 and a disk fails, you can end up with a crashed system: the system still running from another disk in the array may try to write to swap on the failed disk, which can cause an error). Swap can be made smaller or left off the RAID arrays, but then I recommend reading up on what swap is and how it is used; the short answer is that it is better not to.
Organize the remaining space into a RAID10 array of 4 partitions on different disks and put LVM on top of it so that volumes can be resized later (see the layout sketch below).
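
A sketch of the target layout we will build (the md device names and the vg0 volume group are the ones used later in this guide):

# boot partitions, all 4 disks  -> /dev/md0 (RAID1)  -> /boot
# swap partitions, disks 1 + 2  -> /dev/md1 (RAID1)  -> swap
# swap partitions, disks 3 + 4  -> /dev/md2 (RAID1)  -> swap
# main partitions, all 4 disks  -> /dev/md3 (RAID10) -> LVM (vg0) -> / and other volumes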

ATTENTION! Any of the operations below may result in data loss! Before performing these steps, make sure that you have backup copies of any data you need and that those copies are intact.

0. Preparation

THIS STEP IS PERFORMED ONLY WHEN SETTING UP A NEW SERVER! If you are converting a working server, go to step 1.

I received the server with disks that had already seen some use (no more than 1000 hours each). Since sda (the first disk) held the installed system with an MBR partition table, while the second (sdb), third (sdc) and fourth (sdd) disks carried GPT, I decided to wipe all information from the disks completely.

To do this, we need to boot into recovery mode (I used Rescue amd64). We start recovery mode through the SSC and connect to the server with an SSH client (I use PuTTY).

Then we zero out the disks with the commands below (the operation takes time; on 500 GB disks it is roughly one hour per disk).
For those who argue that wiping the partition table (the first 512 bytes of the disk) is enough: I personally ran into a situation where, after earlier experiments, I created a new partition table identical to one used in a previous experiment and got the entire contents of the disk back. So I zeroed the disks completely:

dd if=/dev/zero of=/dev/sda bs=4k
dd if=/dev/zero of=/dev/sdb bs=4k
dd if=/dev/zero of=/dev/sdc bs=4k
dd if=/dev/zero of=/dev/sdd bs=4k


When each command finishes, we get output of this kind:

dd: writing `/dev/sdd': No space left on device
122096647+0 records in
122096646+0 records out
500107862016 bytes (500 GB) copied, 4648.04 s, 108 MB/s


For those not willing to wait that long, the process can be run in parallel by opening 4 SSH sessions, one per disk.
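
A small aside: the dd in Debian 7 has no progress option, but GNU dd prints its current statistics when it receives the USR1 signal, so from another SSH session you can nudge it (a sketch; pidof will signal every running dd at once):

# the statistics appear in the session where dd is running, not here
kill -USR1 $(pidof dd)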

A useful addition from Meklon:

you can instead use

nohup dd if=/dev/zero of=/dev/sda bs=4k &

Put simply, nohup detaches the command's output from the screen and sends everything it prints to the file /home/username/nohup.out, or /root/nohup.out if we run it as root. This way the command will not be stopped if the connection drops. The ampersand (&) at the end starts the command in the background, letting you continue working with the system without waiting for it to finish.

Now we need to create a clean MBR on the disks. To do this, simply run the partitioning tool and exit it, saving the result:

fdisk /dev/sda

then press w to save the result.

Repeat the operation for the sdb, sdc and sdd disks.

Then reboot the server:

reboot (or shutdown -r now)

1. Installation of the system.

Now we perform a clean installation of the operating system. Go into the SSC, in the server management section click "Reinstall" and select the Debian 7.0 (x86_64) operating system. If you do not plan to use more than 3 GB of RAM, you can install the x86 version. Next, select "Custom partition" for the hard drive partitioning.



The tmp partition is removed completely; if needed, we will set it up separately later, but as an LVM volume.

Click install, wait for the installation to complete and go into the system.

2. Installing the necessary packages

Install mdadm, the software RAID management tool.

apt-get install mdadm

Install lvm.

apt-get install lvm2

3. Copy the partition table to the second, third and fourth drives

At this point we already have a partition structure, created automatically on the first disk (sda). You can view it with the command:

fdisk -l /dev/sda

Its output looks like this:

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00095cbf
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048        4095        1024   83  Linux
/dev/sda2   *        4096      616447      306176   83  Linux
/dev/sda3          618494   976771071   488076289    5  Extended
/dev/sda5          618496     9003007     4192256   82  Linux swap / Solaris
/dev/sda6         9005056   976771071   483883008   83  Linux


From this partition structure we need:
- the partition marked with an asterisk - this will be /boot
- the sda5 partition with type 82 (Linux swap) - this is, accordingly, swap
- the sda6 partition - the main partition.

Since we are building mirrored arrays, the second, third and fourth disks need an identical partition structure:

sfdisk -d /dev/sda | sfdisk --force /dev/sdb

Repeat the procedure, replacing sdb with sdc and sdd.
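
As an optional sanity check, you can confirm that all four disks now carry the same table (the Start, End and Id columns should match /dev/sda on every disk):

for d in sda sdb sdc sdd; do fdisk -l /dev/$d; done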

With the GRUB version used in Debian 7.0 there is no need to pass any extra options such as --metadata=0.9 when creating the RAID arrays; everything works fine with the 1.2 superblock, and consequently there is no need to change the partition type to fd (Linux raid autodetect) either.

4. Creating raid arrays

We create (-C) an array named md0 of type RAID1 across 4 partitions, with one member marked as missing - this will be the boot array (partition 2 on each disk). We will add the missing member later.

mdadm -C /dev/md0 --level=1 --raid-devices=4 missing /dev/sdb2 /dev/sdc2 /dev/sdd2

The first array for swap (as a reminder, I will use two of them):
mdadm -C /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb5

The second array for swap:
mdadm -C /dev/md2 --level=1 --raid-devices=2 /dev/sdc5 /dev/sdd5

Now we create the main RAID10 array, on which we will then set up LVM:
mdadm -C /dev/md3 --level=10 --raid-devices=4 missing /dev/sdb6 /dev/sdc6 /dev/sdd6
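
At this point it is worth glancing at the kernel's RAID status: all four arrays should exist, most of them in a degraded state with the missing member shown as "_" (for example [4/3] [_UUU]), while md2 was created complete:

cat /proc/mdstat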

5. Create an lvm partition.

First, initialize the resulting main md3 array as an LVM Physical Volume:

pvcreate /dev/md3

Now create a volume group named vg0 (you can use any name) covering the entire md3 array:

vgcreate vg0 /dev/md3

Now we create the root volume we need (/).
If you create several volumes, do not use all the space at once: it is much easier to grow a volume later than to shrink existing ones.

lvcreate -L50G -n root vg0

-L50G sets the volume size to 50 gigabytes; you can also use the letters K and M for kilobytes and megabytes respectively.
-n root gives the volume a name, in this case root, so we can refer to it as /dev/vg0/root.
The volume is created in the vg0 volume group (if you used a different name in the previous command, put it here instead of vg0).

If you want separate volumes for /tmp, /var, /home and so on, create them in the same way, as in the sketch below.
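
A purely illustrative sketch (the names and sizes here are my own example, pick whatever fits your server):

lvcreate -L10G -n tmp vg0
lvcreate -L20G -n var vg0
lvcreate -L100G -n home vg0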

6. Creating file systems on partitions

On the boot partition (the md0 array) we will use the ext2 file system.

mkfs.ext2 /dev/md0

Create the swap areas

mkswap /dev/md1
mkswap /dev/md2


and enable them with equal usage priorities (the -p option sets the priority number; if the priorities differ, one swap will be used while the second sits idle until the first one overflows, which is inefficient):

swapon -p1 /dev/md1
swapon -p1 /dev/md2
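
A quick check that both swap areas are active with the same priority (both /dev/md1 and /dev/md2 should be listed with priority 1):

swapon -s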


On the other volumes I use ext4. It allows a partition to be grown on the fly, without stopping the server; shrinking a partition, however, requires unmounting it first.

mkfs.ext4 /dev/vg0/root

7. Updating information on created raid arrays in the system

Save the original array configuration file; we will need it later.

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig

Now append the currently relevant information to it.

mdadm --examine --scan >> /etc/mdadm/mdadm.conf
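
It does not hurt to look at what was appended: you should see one ARRAY line per array (md0 through md3), each with metadata=1.2 and its UUID (the exact values will differ on your system):

grep '^ARRAY' /etc/mdadm/mdadm.conf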

8. Setting up automatic mounting (connection) of disks at system startup

In Linux there are two ways to refer to a disk or partition: by a symbolic name such as /dev/sda6, or by UUID. I chose the second method; in theory it should help avoid a number of problems when disk or partition names change, and it is common practice these days.

We get the UUIDs of the "partitions" we need - md0 (boot), md1 (swap), md2 (swap), vg0/root (root) - using the command

blkid /dev/md0

We get output like this:

/dev/md0: UUID="420cb376-70f1-4bf6-be45-ef1e4b3e1646" TYPE="ext2"

Here we are interested in the UUID value 420cb376-70f1-4bf6-be45-ef1e4b3e1646 (without quotes) and the file system type, ext2.

Run this command for /dev/md1, /dev/md2 and /dev/vg0/root as well and save the values you get (in PuTTY you can copy by selecting text with the mouse and paste with a single right-click; or, for true masochists, copy it down by hand :)
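
blkid also accepts several devices at once, so the remaining UUIDs can be collected with a single command:

blkid /dev/md1 /dev/md2 /dev/vg0/root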

Next, open the fstab file for editing

nano /etc/fstab

and bring it to the following form, substituting your own UUIDs:

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
#
# / was on /dev/sda6 during installation
UUID=fe931aaf-2b9f-4fd7-b23b-27f3ebb66719 /     ext4 errors=remount-ro 0 1
# /boot was on /dev/sda2 during installation
UUID=420cb376-70f1-4bf6-be45-ef1e4b3e1646 /boot ext2 defaults          0 2
# swap was on /dev/sda5 during installation
UUID=80000936-d0b7-45ad-a648-88dad7f85361 none  swap sw                0 0
UUID=3504ad07-9987-40bb-8098-4d267dc872d6 none  swap sw                0 0


If you mount other volumes as well, the line format is:

UUID=<uuid>  <mount point>  <file system type>  <mount options>  0 2
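
For example, for a hypothetical separate home volume (substitute the UUID reported by blkid for your own volume):

# /home on /dev/vg0/home (hypothetical example)
UUID=<uuid-of-vg0-home> /home ext4 defaults 0 2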

If you want to learn more about mount options and the dump and pass values, you will find a link in the appendix at the end of the article.

To save the file, press Ctrl+X, then Y, then Enter.

9. Partition mounting

Create the directories in which we will mount the root and boot partitions

mkdir /mnt/boot
mkdir /mnt/root


mount /dev/md0 /mnt/boot
mount /dev/vg0/root /mnt/root


10. Updating the bootloader and boot image

Update the GRUB2 bootloader. During this update the bootloader gathers information about the newly mounted partitions and updates its configuration files.

update-grub

If everything goes well, we get output like this on the screen:

Generating grub.cfg ...
Found linux image: /boot/vmlinuz-3.2.0-4-amd64
Found initrd image: /boot/initrd.img-3.2.0-4-amd64
Found Debian GNU/Linux (7.4) on /dev/mapper/vg0-root
done


And we update the boot image (initramfs) to match the changed setup

update-initramfs -u

11. Copying the contents of the first disk to raid

Copy the contents of the installed system's root partition (/) to the root volume located on LVM

cp -dpRx / /mnt/root

Next, go to the boot directory and copy its contents to the separately mounted boot partition located on the /dev/md0 RAID array

cd /boot
cp -dpRx . /mnt/boot


12. Installing the updated bootloader on all disks of the array

With the following command we install the updated bootloader on the first disk, sda:

grub-install /dev/sda

Run the same command for the sdb, sdc and sdd disks (or loop over all four, as in the sketch below).
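
If you prefer, a small loop does the same as running grub-install four times by hand:

for d in sda sdb sdc sdd; do grub-install /dev/$d; done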

Then we reboot the server.

reboot

Wait about 10 minutes and start the Rescue (x64) recovery mode via the SelfServiceCenter (or the corresponding Rescue version if you installed the 32-bit system).
If it does not start, reboot the server through the SSC and try again; mine did not start on the first attempt.

13. Mount disks to update the server configuration

With these commands we mount the root and boot partitions from the RAID arrays, as well as the service file systems /dev, /sys and /proc

mount /dev/vg0/root /mnt
mount /dev/md0 /mnt/boot
mount --bind /dev /mnt/dev
mount --bind /sys /mnt/sys
mount --bind /proc /mnt/proc


14. Changing the shell and the root user environment

In Linux we can tell the system that the root user should now work inside the newly installed (but not currently running) system.
To do this we need the chroot command. However, recovery mode starts with the zsh shell by default, and I could not get chroot to run from it (at least I did not find how). So we first change the shell in use and then run chroot:

SHELL=/bin/bash
chroot /mnt

15. Adding the first sda drive to the created raid arrays

Add the boot partition to the corresponding array:

mdadm --add /dev/md0 /dev/sda2

Add the swap partition to the first swap array:
mdadm --add /dev/md1 /dev/sda5

Add the main partition to the main array:
mdadm --add /dev/md3 /dev/sda6

After these commands, synchronization and rebuilding of the arrays begins; this takes a long time. Adding the partitions of a 500 GB disk took me about 1.5 hours.

You can watch the progress with the command:

watch cat /proc/mdstat

After the synchronization of arrays is completed, we get the following output:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md3 : active raid10 sda6[4] sdb6[1] sdd6[3] sdc6[2]
      967502848 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
md2 : active raid1 sdc5[0] sdd5[1]
      4190144 blocks super 1.2 [2/2] [UU]
md0 : active raid1 sda2[4] sdb2[1] sdd2[3] sdc2[2]
      305856 blocks super 1.2 [4/4] [UUUU]
md1 : active raid1 sda5[2] sdb5[1]
      4190144 blocks super 1.2 [2/2] [UU]
unused devices: <none>


You can return to the command line by pressing Ctrl+C.

16. Updating information on created raid arrays in the system

We have already performed a similar operation. Now we restore the original configuration file and append the current information about the RAID arrays to it

cp /etc/mdadm/mdadm.conf_orig /etc/mdadm/mdadm.conf
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

17. Updating the bootloader and boot image

We update the bootloader again after the changes

update-grub

and the boot image of the system

update-initramfs -u

18. Installing the updated bootloader and completing updating the system settings

Install the bootloader on disks
grub-install /dev/sda
grub-install /dev/sdb
grub-install /dev/sdc
grub-install /dev/sdd


Then we exit the current environment

exit

and reboot the server

reboot

Hooray! After the reboot, we have a working server running on RAID and LVM.

Appendix:

Learn more about using the mdadm raid array manager commands.

Learn more about using the lvm logical volume manager commands.

Read more about the fstab file and mount options.

P.S. If in places I have reinvented the wheel, or the terminology is not quite right, please make allowances for my modest Linux knowledge and point out the right way.
I proofread the text for errors, but in an article of this size I may well have missed something.
