Another Proxmox migration to softRAID1, this time 3.2 on GPT partitions, plus installing FreeNAS 9.2 in a virtual machine and passing a physical disk through to it

Hello!

Once again I needed a Proxmox server. The hardware: an AMD FX-4300, 4 GB of RAM, two 500 GB disks for Proxmox itself and two more for storage. The plan: one FreeNAS VM, into which I wanted to pass several disks (preferably physical ones) to hold the storage, plus a few more VMs that have nothing to do with this article.

I have a habit of always trying the latest versions rather than the old, proven ones, and this time was no exception.
I downloaded Proxmox VE 3.2 and FreeNAS 9.2. What came of it is under the cut.

Installing Proxmox once again (the latest version, 3.2, at the moment), I decided to move it onto softRAID1. But I found that, unlike 3.0, it now partitions the disk as GPT. Accordingly, the recommendations in the article I had been following are not entirely applicable. In addition, all the articles on moving Proxmox to softRAID deal with only two partitions (boot and LVM). In my case there were three: first a GRUB (bios_grub) partition, and then the usual boot and LVM.

This should not stop us.

Moving Proxmox to softRAID on GPT partitions


We go the standard route and install all the necessary software. And here another surprise awaits us: starting with version 3.1, the Proxmox repository requires a paid subscription. So before installing the packages you need to disable it (it would probably be more correct to add the free no-subscription repository instead, but I got by with simply commenting out the paid one). Open it in any editor

# nano /etc/apt/sources.list.d/pve-enterprise.list
and comment out a single line.
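If you prefer to do it non-interactively, a one-liner like the following should also work (my addition, assuming the file contains just that single deb line):

# sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pve-enterprise.list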

If you still want to add a free repository, then run the command:
echo "deb http://download.proxmox.com/debian wheezy pve pve-no-subscription" >> /etc/apt/sources.list.d/proxmox.list
Thanks to heathen for his comment.

Now we put the necessary packages:

# aptitude update && aptitude install mdadm initramfs-tools screen
The latter is needed if you are doing this remotely: moving LVM onto RAID takes a long time, and it is best to run it inside screen (a quick reminder of the basics follows below).
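If you have not used screen before, a minimal workflow looks roughly like this (the session name is arbitrary):

# screen -S raid
(press Ctrl-A, then D to detach and leave the session running)
# screen -r raid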

Check that RAID1 support is available by loading the module:

# modprobe raid1
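If it loaded without complaint, you can additionally confirm it (my addition):

# lsmod | grep raid1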
Next we copy the partition table from sda to sdb. This is where the differences between MBR and GPT begin. For GPT it is done like this (note the argument order: the target disk /dev/sdb comes first, then the source /dev/sda):
# sgdisk -R /dev/sdb /dev/sda
The operation has completed successfully.
Then assign new random GUIDs to the copy, so they do not clash with the original disk:
# sgdisk -G /dev/sdb
The operation has completed successfully.
# sgdisk --randomize-guids --move-second-header /dev/sdb
The operation has completed successfully.


Check that the partitions were created as we wanted:

# parted -s /dev/sda print
Model: ATA WDC WD5000AAKS-0 (scsi)
Disk /dev/sda: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number  Start   End     Size    File system  Name     Flags
 1      1049kB  2097kB  1049kB               primary  bios_grub
 2      2097kB  537MB   535MB   ext3         primary  boot
 3      537MB   500GB   500GB                primary  lvm
# parted -s /dev/sdb print
Model: ATA ST3500320NS (scsi)
Disk /dev/sdb: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number  Start   End     Size    File system  Name     Flags
 1      1049kB  2097kB  1049kB               primary  bios_grub
 2      2097kB  537MB   535MB                primary  boot
 3      537MB   500GB   500GB                primary  lvm


Change the flags of the sdb2 and sdb3 partitions to raid:

# parted -s /dev/sdb set 2 "raid" on
# parted -s /dev/sdb set 3 "raid" on
# parted -s /dev/sdb print
Model: ATA ST3500320NS (scsi)
Disk /dev/sdb: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number  Start   End     Size    File system  Name     Flags
 1      1049kB  2097kB  1049kB               primary  bios_grub
 2      2097kB  537MB   535MB                primary  raid
 3      537MB   500GB   500GB                primary  raid
Everything turned out right.

Next, just in case, clear the superblocks:

# mdadm --zero-superblock /dev/sdb2
mdadm: Unrecognised md component device - /dev/sdb2
# mdadm --zero-superblock /dev/sdb3
mdadm: Unrecognised md component device - /dev/sdb3
The message “mdadm: Unrecognised md component device - /dev/sdb3” simply means that the partition has never been part of a RAID array.
Actually, it's time to create arrays:
# mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb2
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
# mdadm --create /dev/md2 --level=1 --raid-disks=2 missing /dev/sdb3
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md2 started.


We answer the “Continue creating array?” prompt in the affirmative.

Let's see what we got:

# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb3[1]
      487731008 blocks super 1.2 [2/1] [_U]
md1 : active raid1 sdb2[1]
      521920 blocks super 1.2 [2/1] [_U]


The output shows the state of the arrays: [_U]. This means each array currently contains only one disk, which is as it should be, since we have not yet added the second (actually the first, sda) disk; it is the “missing” member.
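To keep an eye on the arrays later (for example during the rebuild further down), these two commands are handy; they are my addition, not part of the original walkthrough:

# watch -n 10 cat /proc/mdstat
# mdadm --detail /dev/md2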

Add information about the arrays to the configuration file:

# cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
# mdadm --examine --scan >> /etc/mdadm/mdadm.conf


Copy the boot partition onto the corresponding array. (I have added the unmount commands here; thanks to user skazkin for the tip. His experience showed that in some cases, without these steps, the boot partition may turn out to be empty after a reboot.)

# mkfs.ext3 /dev/md1
mke2fs 1.42.5 (29-Jul-2012)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
130560 inodes, 521920 blocks
26096 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67633152
64 block groups
8192 blocks per group, 8192 fragments per group
2040 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
# mkdir /mnt/md1
# mount /dev/md1 /mnt/md1
# cp -ax /boot/* /mnt/md1
# umount /mnt/md1
# rmdir /mnt/md1
Next, in /etc/fstab we need to comment out the line that mounts the boot partition by UUID and add a mount entry for the corresponding array:

# nano /etc/fstab


# 
/dev/pve/root / ext3 errors=remount-ro 0 1
/dev/pve/data /var/lib/vz ext3 defaults 0 1
# UUID=d097457f-cac5-4c7f-9caa-5939785c6f36 /boot ext3 defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
/dev/md1 /boot ext3 defaults 0 1


It should look something like the above.
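Before rebooting, you can sanity-check the new entry (my addition; skip it if /boot refuses to unmount because something is using it):

# umount /boot
# mount /boot
# mount | grep boot

The last command should show /dev/md1 mounted on /boot.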
Reboot:

# reboot


We configure GRUB (exactly the same way as in the original article):
# echo 'GRUB_DISABLE_LINUX_UUID=true' >> /etc/default/grub
# echo 'GRUB_PRELOAD_MODULES="raid dmraid"' >> /etc/default/grub
# echo 'GRUB_TERMINAL=console' >> /etc/default/grub
# echo raid1 >> /etc/modules
# echo raid1 >> /etc/initramfs-tools/modules


Reinstall GRUB:

# grub-install /dev/sda --recheck
Installation finished. No error reported.
# grub-install /dev/sdb --recheck
Installation finished. No error reported.
# update-grub
Generating grub.cfg ...
Found linux image: /boot/vmlinuz-2.6.32-27-pve
Found initrd image: /boot/initrd.img-2.6.32-27-pve
Found memtest86+ image: /memtest86+.bin
Found memtest86+ multiboot image: /memtest86+_multiboot.bin
done
# update-initramfs -u
update-initramfs: Generating /boot/initrd.img-2.6.32-27-pve


Now add the boot partition from the first (sda) drive to the array. First, mark it with the “raid” flag:

# parted -s /dev/sda set 2 "raid" on


And then add:
# mdadm --add /dev/md1 /dev/sda2
mdadm: added /dev/sda2


If we now look at the state of the arrays:

# cat /proc/mdstat
Personalities : [raid1]
md2 : active (auto-read-only) raid1 sdb3[1]
      487731008 blocks super 1.2 [2/1] [_U]
md1 : active raid1 sda2[2] sdb2[1]
      521920 blocks super 1.2 [2/2] [UU]
unused devices: 


we will see that md1 now has both disks in place: [UU].

Now we need to move the main partition, the LVM one. There is nothing different here from the “original” article, apart from the partition numbering and running everything inside screen:

# screen bash
# pvcreate /dev/md2
  Writing physical volume data to disk "/dev/md2"
  Physical volume "/dev/md2" successfully created
# vgextend pve /dev/md2
  Volume group "pve" successfully extended
# pvmove /dev/sda3 /dev/md2
  /dev/sda3: Moved: 2.0%
 ...
  /dev/sda3: Moved: 100.0%
# vgreduce pve /dev/sda3
  Removed "/dev/sda3" from volume group "pve"
# pvremove /dev/sda3

Here, as skazkin recommended, I added the pvremove command. Without it (again, not always) another problem can appear: the system does not understand what has happened to the disks and will not boot past the initramfs console.
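Before moving on, it does not hurt to confirm that the volume group now lives entirely on the array (my addition, not from the original article):

# pvs
# vgs

/dev/sda3 should no longer be listed as a physical volume, and the pve volume group should be sitting on /dev/md2.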


Add the sda3 partition to the array:

# parted -s /dev/sda set 3 "raid" on
# mdadm --add /dev/md2 /dev/sda3
mdadm: added /dev/sda3
# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sda3[2] sdb3[1]
      487731008 blocks super 1.2 [2/1] [_U]
      [>....................]  recovery =  0.3% (1923072/487731008) finish=155.4min speed=52070K/sec
md1 : active raid1 sda2[2] sdb2[1]
      521920 blocks super 1.2 [2/2] [UU]
unused devices: 


and see that it is being synchronised.

Since I was following the original article, I went to pour myself some coffee.

Once the array has finished rebuilding (also not a quick process), this part can be considered done.

For those who, like me, did not immediately understand why that is all there is to it: since we have actually moved the LVM physical volume from one block device to another, there is nothing extra to register anywhere (unlike with boot, where we had to edit fstab). I was stuck at this point for a while.

FreeNAS 9.2 on AMD processors


My next step was installing FreeNAS 9.2 on Proxmox. I struggled with it for a long time, until I tried installing from the same image (FreeNAS 9.2) on another Proxmox server. That server differs somewhat from the one described here: firstly, it runs on a Core i7, and secondly, it is Proxmox 3.1. There the installation went through on the first try. So the problem was either AMD (no, surely it cannot be) or Proxmox 3.2 breaking FreeBSD 9 support (brrr). I dug around for a long time, then started experimenting myself. In the end it was AMD after all. Whatever the issue is on their side, as soon as I set the processor type to Core 2 Duo in the VM properties, FreeNAS 9.2 installed without any problems.
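For reference, the processor type can be set either in the web GUI (the VM's Hardware tab) or from the console. A sketch, assuming the VM ID is 100 and that this Proxmox version accepts the core2duo type:

# qm set 100 --cpu core2duo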

Passing a physical disk through to a VM in KVM (Proxmox)


I searched the Web for an answer to this question for a long time, but found only fragments. Maybe someone could work out what to do from them straight away, but not me.
In short, it is done like this (from the console):

# nano /etc/pve/nodes/proxmox/qemu-server/100.conf


and add the line at the end:

virtio0: /dev/sdc


where sdc is your device. After it you can specify other parameters separated by commas (the full list is in the Proxmox wiki).
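Two notes from my later reading, not from the original article: the same thing can be done with qm instead of editing the file by hand, and a /dev/disk/by-id/ path is safer than /dev/sdc, since device letters can change between reboots. A sketch (the VM ID 100, the virtio1 slot and the by-id name are only examples; pick a free virtioN slot on your VM):

# ls -l /dev/disk/by-id/
# qm set 100 --virtio1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL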

That's all. Truth be told, I do not know how much this kind of attachment raises (or lowers) disk performance; the tests are still ahead.
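When I do get to testing, the crudest check from the FreeNAS shell would be something like the dd pair below (a rough sketch, not a proper benchmark; the dataset path is just an example, and with ZFS compression enabled a /dev/zero write will look unrealistically fast):

# dd if=/dev/zero of=/mnt/tank/testfile bs=1m count=1024
# dd if=/mnt/tank/testfile of=/dev/null bs=1m
# rm /mnt/tank/testfile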
