Installing PROXMOX 4.3 on software RAID 10 with GPT

Good afternoon, friends. Today I'd like to share my personal experience of setting up Proxmox on software RAID 10.

What we have:
  • HP ProLiant DL120 G6 server (10 GB RAM)
  • 4 x 1000GB SATA hard drives - no hardware RAID controller on board
  • A flash drive with PROXMOX 4.3 (more on that below)

What we want:
  • Get PROXMOX 4.3 installed entirely on software RAID 10 with GPT, so that the system keeps running if any one drive fails.
  • Receive an e-mail notification when a drive fails.

The general plan of action:
  • Install PROXMOX 4.3
  • Raise and test RAID10
  • Set up email notifications

Below the cut is a step-by-step walkthrough of the whole quest.

Now, step by step.

First point:
I plugged in the USB flash drive - in short, the installer could not find the installation disk and refused to mount it.


I didn't dig into the why. I burned the image to a CD and hooked up a USB CD-ROM drive (luckily one was nearby).

Second point:
I plugged a keyboard and mouse into the server's front ports (it has two of them), and the first thing I discovered was that you cannot get past the first Proxmox welcome screen without a mouse: Tab does not move the focus between the controls. Since the server was in a rack and climbing around the back of it was a pain, I ended up plugging the keyboard and mouse in turns: clicking "Next" with the mouse and entering the data with the keyboard.

Installation consists of several steps:

  • Agree to the license terms
  • Choose the hard drive where the system will be installed
  • Choose the country and time zone
  • Specify the server name and network addressing
  • And then wait a bit while the image is deployed to the server.

PROXMOX installed itself on the first drive, which it named /dev/sda. I connect from my laptop to the address I specified during installation:

root@pve1:~#ssh root@192.168.1.3

Updating the system:

root@pve1:~#apt-get update 

I see the following output:
Ign http://ftp.debian.org jessie InRelease
Get:1 http://ftp.debian.org jessie Release.gpg [2,373 B]
Get:2 http://security.debian.org jessie/updates InRelease [63.1 kB]
Get:3 http://ftp.debian.org jessie Release [148 kB]
Get:4 https://enterprise.proxmox.com jessie InRelease [401 B]
Ign https://enterprise.proxmox.com jessie InRelease
Get:5 https://enterprise.proxmox.com jessie Release.gpg [401 B]
Ign https://enterprise.proxmox.com jessie Release.gpg
Get:6 http://ftp.debian.org jessie/main amd64 Packages [6,787 kB]
Get:7 https://enterprise.proxmox.com jessie Release [401 B]
Ign https://enterprise.proxmox.com jessie Release
Get:8 http://security.debian.org jessie/updates/main amd64 Packages [313 kB]
Get:9 https://enterprise.proxmox.com jessie/pve-enterprise amd64 Packages [401 B]
Get:10 https://enterprise.proxmox.com jessie/pve-enterprise Translation-en_US [401 B]
Get:11 https://enterprise.proxmox.com jessie/pve-enterprise Translation-en [401 B]
Get:12 https://enterprise.proxmox.com jessie/pve-enterprise amd64 Packages [401 B]
Get:13 https://enterprise.proxmox.com jessie/pve-enterprise Translation-en_US [401 B]
Get:14 https://enterprise.proxmox.com jessie/pve-enterprise Translation-en [401 B]
Get:15 https://enterprise.proxmox.com jessie/pve-enterprise amd64 Packages [401 B]
Get:16 https://enterprise.proxmox.com jessie/pve-enterprise Translation-en_US [401 B]
Get:17 https://enterprise.proxmox.com jessie/pve-enterprise Translation-en [401 B]
Get:18 https://enterprise.proxmox.com jessie/pve-enterprise amd64 Packages [401 B]
Get:19 https://enterprise.proxmox.com jessie/pve-enterprise Translation-en_US [401 B]
Get:20 http://security.debian.org jessie/updates/contrib amd64 Packages [2,506 B]
Get:21 https://enterprise.proxmox.com jessie/pve-enterprise Translation-en [401 B]
Get:22 https://enterprise.proxmox.com jessie/pve-enterprise amd64 Packages [401 B]
Err https://enterprise.proxmox.com jessie/pve-enterprise amd64 Packages
  HttpError 401
Get:23 https://enterprise.proxmox.com jessie/pve-enterprise Translation-en_US [401 B]
Get:24 http://security.debian.org jessie/updates/contrib Translation-en [1,211 B]
Ign https://enterprise.proxmox.com jessie/pve-enterprise Translation-en_US
Get:25 https://enterprise.proxmox.com jessie/pve-enterprise Translation-en [401 B]
Ign https://enterprise.proxmox.com jessie/pve-enterprise Translation-en
Get:26 http://security.debian.org jessie/updates/main Translation-en [169 kB]
Get:27 http://ftp.debian.org jessie/contrib amd64 Packages [50.2 kB]
Get:28 http://ftp.debian.org jessie/contrib Translation-en [38.5 kB]
Get:29 http://ftp.debian.org jessie/main Translation-en [4,583 kB]
Fetched 12.2 MB in 15s (778 kB/s)
W: Failed to fetch https://enterprise.proxmox.com/debian/dists/jessie/pve-enterprise/binary-amd64/Packages  HttpError 401
E: Some index files failed to download. They have been ignored, or old ones used instead.


That won't do. I don't plan to buy a support subscription yet, so I swap the official enterprise repository for their "free" one.

root@pve1:~#nano /etc/apt/sources.list.d/pve-enterprise.list

I see there:

deb https://enterprise.proxmox.com/debian jessie pve-enterprise

Change to:

deb http://download.proxmox.com/debian jessie pve-no-subscription
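
Alternatively, if you prefer not to open an editor, the same effect can be achieved from the shell (just a sketch: it comments out whatever is in pve-enterprise.list and adds the no-subscription repo in a separate file, so adjust if you keep anything else there):

sed -i 's|^deb|#deb|' /etc/apt/sources.list.d/pve-enterprise.list
echo "deb http://download.proxmox.com/debian jessie pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list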

Then I update again and install the available upgrades:

root@pve1:~#apt-get update && apt-get upgrade

This time everything updated without complaint and the system is fully up to date. Next, install the packages for working with software RAID:

root@pve1:~#apt-get install -y mdadm initramfs-tools parted

Now let's find out the exact size of the first disk - it will come in handy later:

root@pve1:~#parted /dev/sda print

Model: ATA MB1000EBNCF (scsi)
Disk /dev/sda: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 
Number  Start   End     Size    Filesystem  Name     Flags
 1      1049kB  10.5MB  9437kB              primary  bios_grub
 2      10.5MB  1000MB  990MB   ext4        primary
 3      1000MB  1000GB  999GB               primary

We can see it is exactly 1000GB - remember that. Now we partition the remaining disks for our array. First, wipe the partition table on the three empty disks and give them a GPT label:

root@pve1:~#dd if=/dev/zero of=/dev/sdb bs=512 count=1
root@pve1:~#dd if=/dev/zero of=/dev/sdc bs=512 count=1
root@pve1:~#dd if=/dev/zero of=/dev/sdd bs=512 count=1

Each run reports:

1+0 records in
1+0 records out
512 bytes (512 B) copied, 7.8537e-05 s, 6.5 MB/s

Now label them as GPT.

Second disk:

root@pve1:~#parted /dev/sdb mklabel gpt

Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
Information: You may need to update /etc/fstab.

Third disk:

root@pve1:~#parted /dev/sdc mklabel gpt

Warning: The existing disk label on /dev/sdc will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
Information: You may need to update /etc/fstab.

Fourth disk:

root@pve1:~#parted /dev/sdd mklabel gpt

Warning: The existing disk label on /dev/sdd will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
Information: You may need to update /etc/fstab.

Now we recreate the partitions in the same way as on the original first disk:

1.
root@pve1:~#parted /dev/sdb mkpart primary 1M 10M

Information: You may need to update /etc/fstab.

2.
root@pve1:~#parted /dev/sdb set 1 bios_grub on

Information: You may need to update /etc/fstab.

3.
root@pve1:~#parted /dev/sdb mkpart primary 10M 1G

Information: You may need to update /etc/fstab.

This is where knowledge of the size of the original first disk comes in handy.

4.
root@pve1:~#parted /dev/sdb mkpart primary 1G 1000GB

Information: You may need to update /etc/fstab.
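
To spare yourself the typing, the remaining two blank disks can be partitioned in a loop instead of repeating the commands by hand (a sketch; it assumes sdc and sdd are the same size and already carry an empty GPT label):

for d in /dev/sdc /dev/sdd; do
    parted -s "$d" mkpart primary 1M 10M      # bios_grub area
    parted -s "$d" set 1 bios_grub on
    parted -s "$d" mkpart primary 10M 1G      # future /boot (md0)
    parted -s "$d" mkpart primary 1G 1000GB   # future LVM (md1)
done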

We repeat all four of these steps for each of our drives: sdb, sdc, sdd - either by hand or with a loop like the one above. Here's what I got:

This is the original:

root@pve1:~#parted /dev/sda print

Model: ATA MB1000EBNCF (scsi)
Disk /dev/sda: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 
Number  Start   End     Size    Filesystem  Name  Flags
 1      17.4kB  1049kB  1031kB                    bios_grub
 2      1049kB  134MB   133MB   fat32             boot, esp
 3      134MB   1000GB  1000GB                    lvm

And this is the second, third and fourth (with a difference in the drive letter).

root@pve1:~#parted /dev/sdb print

Model: ATA MB1000EBNCF (scsi)
Disk /dev/sdd: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 
Number  Start   End     Size    Filesystem  Name     Flags
 1      1049kB  10.5MB  9437kB              primary  bios_grub
 2      10.5MB  1000MB  990MB               primary
 3      1000MB  1000GB  999GB               primary

A clarification is needed here: if these drives have never seen any RAID before, you can skip this step. But if something didn't work out above, RAID was probably configured on them at some point, and the old superblocks left on the drives need to be removed.

Check like this:

root@pve1:~#mdadm --misc --examine /dev/sda

/dev/sda:
   MBR Magic : aa55
Partition[0] :   1953525167 sectors at 1 (type ee)

You need to check all four drives.
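
If old superblocks do turn up, they are usually wiped per partition with mdadm itself (a sketch - run it only on partitions that are not part of an array you still need):

mdadm --zero-superblock /dev/sdb2 /dev/sdb3
mdadm --zero-superblock /dev/sdc2 /dev/sdc3
mdadm --zero-superblock /dev/sdd2 /dev/sdd3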

Now configure mdadm.

Back up the stock config first:

root@pve1:~#cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.orig

Empty it out:

root@pve1:~#echo "" > /etc/mdadm/mdadm.conf

Open:

root@pve1:~#nano /etc/mdadm/mdadm.conf

Enter and save:

CREATE owner=root group=disk mode=0660 auto=yes
MAILADDR user@mail.domain

Let’s leave the mail as is, then we’ll return to it.

Now we bring up our RAID in degraded mode (leaving out the first drive, which is still in use).

  • /dev/md0 will hold /boot
  • /dev/md1 will hold the LVM partition with the system

root@pve1:~#mdadm --create /dev/md0 --metadata=0.90 --level=10 --chunk=2048 --raid-devices=4 missing /dev/sd[bcd]2

mdadm: array /dev/md0 started.

And the second one:

root@pve1:~#mdadm --create /dev/md1 --metadata=0.90 --level=10 --chunk=2048 --raid-devices=4 missing /dev/sd[bcd]3

mdadm: array /dev/md1 started.

Here it is necessary to explain the keys:

  • --level=10 - makes this a RAID 10 array
  • --chunk=2048 - the chunk size of the array
  • --raid-devices=4 - four devices take part in the array
  • missing /dev/sd[bcd]2 - the first partition (still in use) is marked as missing for now; the other three are added to the array

UPD: after many comments, one important point came up.
When creating the arrays I deliberately set the chunk size to 2048 instead of omitting the flag and leaving the default. This significantly reduces performance, which is especially noticeable on Windows virtual machines.

That is, the correct creation command should look like this:
root@pve1:~#mdadm --create /dev/md0 --metadata=0.90 --level=10 --raid-devices=4 missing /dev/sd[bcd]2

and
root@pve1:~#mdadm --create /dev/md1 --metadata=0.90 --level=10 --raid-devices=4 missing /dev/sd[bcd]3


Save the configuration:
root@pve1:~#mdadm --detail --scan >> /etc/mdadm/mdadm.conf


Check the content:
root@pve1:~# cat /etc/mdadm/mdadm.conf

CREATE owner=root group=disk mode=0660 auto=yes
MAILADDR user@mail.domain
ARRAY /dev/md0 metadata=0.90 UUID=4df20dfa:4480524a:f7703943:85f444d5
ARRAY /dev/md1 metadata=0.90 UUID=432e3654:e288eae2:f7703943:85f444d5

Now we need to move the existing LVM data onto the three empty disks. First, initialize the new md1 device as an LVM physical volume:

root@pve1:~#pvcreate /dev/md1 -ff

Physical volume "/dev/md1" successfully created

And add it to the pve group:

root@pve1:~#vgextend pve /dev/md1

Volume group"pve" successfully extended

Now we transfer the data from the original LVM to the newly created:

root@pve1:~#pvmove /dev/sda3 /dev/md1

/dev/sda3: Moved: 0.0%

The process is long - it took me about 10 hours. Amusingly, I started it out of habit over SSH and only realized at around 1.3% that sitting at work with a laptop for that long was not exactly convenient. I cancelled the operation with Ctrl+C, went to the physical server and tried to run the move command there, but the clever thing replied that the process was already running and the command would not be started a second time, and then began drawing the progress percentages on the real screen. Thanks for that, at least :)
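
With hindsight, a detached session would have saved the walk to the server room. A possible sketch (assumes the screen package is installed; tmux or nohup would work just as well):

apt-get install -y screen
screen -S pvmove            # open a detachable session
pvmove /dev/sda3 /dev/md1
# detach with Ctrl+A then D, reattach later with: screen -r pvmove
# an interrupted move can also be resumed by running pvmove with no arguments:
pvmove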

The process eventually finished, printing 100% a couple of times. Now we remove the first disk from LVM:

root@pve1:~#vgreduce pve /dev/sda3

  Removed "/dev/sda3"from volume group"pve"

Now we move /boot to our new array /dev/md0, but first we format and mount the array itself.

root@pve1:~#mkfs.ext4 /dev/md0

mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 482304 4k blocks and 120720 inodes
Filesystem UUID: 6b75c86a-0501-447c-8ef5-386224e48538
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912
Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

Create a directory and mount the raid there:

root@pve1:~#mkdir /mnt/md0

root@pve1:~#mount /dev/md0 /mnt/md0

Copy the contents of the live /boot:

root@pve1:~#cp -ax /boot/* /mnt/md0

Unmount the raid and delete the temporary directory:

root@pve1:~#umount /mnt/md0

root@pve1:~#rmdir /mnt/md0

Find the UUID of the array partition where /boot now lives - we need it to write a correct entry into /etc/fstab:

root@pve1:~#blkid |grep md0

/dev/md0: UUID="6b75c86a-0501-447c-8ef5-386224e48538" TYPE="ext4"

Open the table and add an entry for /boot at the end:

root@pve1:~#nano /etc/fstab

Add this line and save:

UUID="6b75c86a-0501-447c-8ef5-386224e48538" /boot ext4 defaults 01

Now mount /boot:

root@pve1:~#mount /boot

Let the OS boot, even if the status is BOOT_DEGRADED (that is, the raid is degraded due to disk failure):

root@pve1:~#echo "BOOT_DEGRADED=true" > /etc/initramfs-tools/conf.d/mdadm

Rebuild the initramfs:

root@pve1:~#mkinitramfs -o /boot/initrd.img-`uname -r`

Disable bootloader graphic mode:

root@pve1:~#echo "GRUB_TERMINAL=console" >> /etc/default/grub

Install the bootloader on all three disks:

root@pve1:~#grub-install /dev/sdb

Installing for i386-pc platform.
Installation finished. No error reported.

root@pve1:~#grub-install /dev/sdc

Installing for i386-pc platform.
Installation finished. No error reported.

root@pve1:~#grub-install /dev/sdd

Installing for i386-pc platform.
Installation finished. No error reported.
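
The three calls can of course be collapsed into a loop - same result, purely a convenience:

for d in /dev/sdb /dev/sdc /dev/sdd; do
    grub-install "$d"
done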

Now a very important point. We take the second disk, /dev/sdb, which now carries the system, the bootloader and GRUB, as our reference, and bring the first disk, /dev/sda, to the same state so that it can later become part of our raid. To do that, we treat the first disk as blank and partition it exactly like the others at the beginning of this article.

Zero it out and give it a GPT label:

root@pve1:~#dd if=/dev/zero of=/dev/sda bs=512 count=1

1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.0157829 s, 32.4 kB/s

root@pve1:~#parted /dev/sda mklabel gpt

Information: You may need to update /etc/fstab.

We break it into sections exactly like the other three:

root@pve1:~#parted /dev/sda mkpart primary 1M 10M

Information: You may need to update /etc/fstab.

root@pve1:~#parted /dev/sda set 1 bios_grub on

Information: You may need to update /etc/fstab.

root@pve1:~#parted /dev/sda mkpart primary 10M 1G

Information: You may need to update /etc/fstab.

Here we again need the exact disk size. Recall that we obtained it earlier with the print command, which in this case should be run against /dev/sdb:

root@pve1:~#parted /dev/sdb print

Since the drives are identical, the size is the same - 1000GB. Create the main partition:

root@pve1:~#parted /dev/sda mkpart primary 1G 1000Gb

Information: You may need to update /etc/fstab.

It should be like this:

root@pve1:~#parted /dev/sda print

Model: ATA MB1000EBNCF (scsi)
Disk /dev/sda: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 
Number  Start   End     Size    Filesystem  Name     Flags
 1      1049kB  10.5MB  9437kB  fat32       primary  bios_grub
 2      10.5MB  1000MB  990MB               primary
 3      1000MB  1000GB  999GB               primary

It remains to add this disk to the arrays: the second partition goes into /dev/md0 and the third into /dev/md1:

root@pve1:~#mdadm --add /dev/md0 /dev/sda2

mdadm: added /dev/sda2

root@pve1:~#mdadm --add /dev/md1 /dev/sda3

mdadm: added /dev/sda3

We are waiting for synchronization ...

root@pve1:~#watch cat /proc/mdstat

This command in real time shows the synchronization process:

Every 2.0s: cat /proc/mdstat                                                                              Fri Nov 11 10:09:18 2016
Personalities : [raid10]
md1 : active raid10 sda3[4] sdd3[3] sdc3[2] sdb3[1]
      1951567872 blocks 2048K chunks 2 near-copies [4/3] [_UUU]
      [>....................]  recovery =  0.5% (5080064/975783936) finish=284.8min speed=56796K/sec
      bitmap: 15/15 pages [60KB], 65536KB chunk
md0 : active raid10 sda2[0] sdd2[3] sdc2[2] sdb2[1]
      1929216 blocks 2048K chunks 2 near-copies [4/4] [UUUU]

And while the first array with /boot synchronized almost instantly, the second one required some patience (around 5 hours).

It remains to install the bootloader on the added disk (note that this must only be done after the arrays have fully synchronized).

root@pve1:~#dpkg-reconfigure grub-pc

Press Enter a couple of times without changing anything, and at the last step tick the checkboxes for all four disks and the md0/md1 devices in the list.

It remains to reboot the system and verify that everything is in order:

root@pve1:~#shutdown -r now

The system booted normally (I even changed the drive boot order in the BIOS a few times - it boots correctly either way).

Checking arrays:

root@pve1:~#cat /proc/mdstat

Personalities : [raid10] 
md1 : active raid10 sda3[0] sdd3[3] sdc3[2] sdb3[1]
      1951567872 blocks 2048K chunks 2 near-copies [4/4] [UUUU]
      bitmap: 2/15 pages [8KB], 65536KB chunk
md0 : active raid10 sda2[0] sdd2[3] sdc2[2] sdb2[1]
      1929216 blocks 2048K chunks 2 near-copies [4/4] [UUUU]

Four U's in each array mean that all four disks are up and running. Let's look at the details of an array (taking the first one, or rather the zeroth, as an example):

root@pve1:~#mdadm --detail /dev/md0

/dev/md0:
        Version : 0.90
  Creation Time : Thu Nov 10 15:12:21 2016
     Raid Level : raid10
     Array Size : 1929216 (1884.32 MiB 1975.52 MB)
  Used Dev Size : 964608 (942.16 MiB 987.76 MB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Fri Nov 11 10:07:47 2016
          State : active 
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 2048K

           UUID : 4df20dfa:4480524a:f7703943:85f444d5 (local to host pve1)
         Events : 0.27

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync set-A   /dev/sda2
       1       8       18        1      active sync set-B   /dev/sdb2
       2       8       34        2      active sync set-A   /dev/sdc2
       3       8       50        3      active sync set-B   /dev/sdd2

We see that the array is of type RAID10, all disks are in place, active and synchronized.

Now we could play with pulling disks and changing the boot device in the BIOS, but before that, let's set up a notification for the administrator when a disk - and therefore the raid itself - fails. Without notification, a raid dies slowly and painfully, and nobody ever finds out.
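
For reference, when you do get to that testing, a failure is usually simulated with mdadm itself rather than by physically pulling a drive - a sketch, using /dev/sdd3 purely as an example:

mdadm --manage /dev/md1 --fail /dev/sdd3     # mark the member as failed
mdadm --manage /dev/md1 --remove /dev/sdd3   # remove it from the array
mdadm --manage /dev/md1 --add /dev/sdd3      # add it back and let it resync
cat /proc/mdstat                             # watch the recovery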

Proxmox already ships with postfix installed by default; I didn't remove it, although I realize other MTAs would probably be easier to configure.

Install the SASL modules (I need them to authenticate against our external mail server):

root@pve1:/etc#apt-get install libsasl2-modules

Create a file with the credentials used to authenticate against our remote mail server:

root@pve1:~#touch /etc/postfix/sasl_passwd

We write the line there:

[mail.domain.ru] pve1@domain.ru:password
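
Since this file holds the password in plain text, it is common practice (not something this setup strictly requires) to restrict its permissions; the same is worth doing for the sasl_passwd.db that postmap generates later on:

chmod 600 /etc/postfix/sasl_passwd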

Now create the transport file:

root@pve1:~#touch /etc/postfix/transport

We write there:

domain.ru smtp:[mail.domain.ru]

Create generic_map:

root@pve1:~#touch /etc/postfix/generic

Here we write (we indicate from whom the mail will be sent):

root pve1@domain.ru

Create sender_relay (essentially, a route to an external server):

root@pve1:~#touch /etc/postfix/sender_relay

And we write there:

pve1@domain.ru smtp.domain.ru

Hash files:

root@pve1:~#postmap /etc/postfix/transport

root@pve1:~#postmap /etc/postfix/sasl_passwd

root@pve1:~#postmap /etc/postfix/generic

root@pve1:~#postmap /etc/postfix/sender_relay

In the /etc/postfix/main.cf file, I got the following working configuration:

main.cf
# See /usr/share/postfix/main.cf.dist for a commented, more complete version

myhostname=domain.ru

smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
biff = no
# appending .domain is the MUA's job.
append_dot_mydomain = no
# Uncomment the next line to generate «delayed mail» warnings
#delay_warning_time = 4h

alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
mydestination = $myhostname, localhost.$mydomain, localhost
mynetworks = 127.0.0.0/8,192.168.1.0/24
inet_interfaces = loopback-only
recipient_delimiter = +

smtp_tls_loglevel = 1
smtp_tls_session_cache_database = btree:/var/lib/postfix/smtp_tls_session_cache
smtp_use_tls = no
tls_random_source = dev:/dev/urandom

## SASL Settings
smtpd_sasl_auth_enable = no
smtp_sasl_auth_enable = yes
smtpd_use_pw_server=yes
enable_server_options=yes
smtpd_pw_server_security_options=plain, login
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sender_dependent_authentication = yes
sender_dependent_relayhost_maps = hash:/etc/postfix/sender_relay
smtpd_sasl_local_domain = $myhostname
smtp_sasl_security_options = noanonymous
smtp_sasl_tls_security_options = noanonymous
smtpd_sasl_application_name = smtpd
smtp_always_send_ehlo = yes
relayhost =
transport_maps = hash:/etc/postfix/transport
smtp_generic_maps = hash:/etc/postfix/generic
disable_dns_lookups = yes

Restart postfix:

root@pve1:~#/etc/init.d/postfix restart
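
Before wiring the alerts into mdadm, it may be worth checking that postfix can actually deliver mail to the outside world. A quick sketch using the sendmail binary that postfix provides (the address here is just the example one used below):

printf 'Subject: postfix test\n\nTest message from pve1\n' | sendmail info@domain.ru
tail -f /var/log/mail.log    # watch the delivery status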

Now we need to go back to the mdadm config and finish it off: specify who should receive the letters of happiness and what address they will come from.

root@pve1:~#nano /etc/mdadm/mdadm.conf

Mine looks like this:

CREATE owner=root group=disk mode=0660 auto=yes
MAILADDR info@domain.ru
MAILFROM pve1@domain.ru
ARRAY /dev/md0 metadata=0.90 UUID=4df20dfa:4480524a:f7703943:85f444d5
ARRAY /dev/md1 metadata=0.90 UUID=432e3654:e288eae2:f7703943:85f444d5

Restart mdadm to re-read the settings:

root@pve1:~#/etc/init.d/mdadm restart

Now we test from the console that the raid check runs and the mail actually gets sent:

root@pve1:~#mdadm --monitor --scan -1 --test --oneshot

I received two letters with information on both arrays I created. It remains to add this task to cron and remove the --test switch, so that mail only arrives when something has actually happened:

root@pve1:~#crontab -e

Add a task (do not forget to press Enter after the line and move the cursor down so that an empty line appears):

0 5 * * * mdadm --monitor --scan -1 --oneshot

Every morning at 5 a.m. testing will be performed and if problems arise, mail will be sent.

That's all. Perhaps I over-engineered the postfix config - while I was trying to get sending through our external server to work, I added quite a lot. I'd be grateful for corrections (or simplifications).

In the next article I want to share the experience of moving virtual machines from our Esxi-6 hypervisor to this new Proxmox. I think it will be interesting.

UPD.
A separate note is needed about the physical space on the data volume (/dev/pve/data) - the main volume, created as LVM-Thin.
When Proxmox was installed, it automatically partitioned /dev/sda so that the root partition (where the system, ISOs, dumps and containers live) got about 10% of the capacity, i.e. 100GB. In the remaining space it created an LVM-Thin volume, which is essentially not mounted anywhere (another subtlety of versions above 4.2, after the move to GPT). As you would expect, that volume came out at around 900GB. When we built RAID10 out of four 1TB drives, we ended up with (allowing for the RAID1+0 redundancy) about 2TB of capacity.
But when we moved the LVM onto the raid, we moved it as a container, keeping its 900GB size.

On the first login to the Proxmox admin panel, an attentive viewer will notice that clicking the local-lvm (pve1) storage shows a little over 800GB.

So, to grow the LVM-Thin volume to the full 1.9TB, we only need to run a single command:
lvextend /dev/pve/data -l +100%FREE
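
To confirm the new size without rebooting, the LVM tools themselves can be used (just a quick check):

lvs /dev/pve/data    # LSize should now show roughly 1.9T
vgs pve              # VFree should be back to (almost) zero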

After that, the system does not even need to be restarted.
There is no need to run resize2fs - in fact it isn't even possible, because the system will complain:
root@pve1:~# resize2fs /dev/mapper/pve-data
resize2fs 1.42.12 (29-Aug-2014)
resize2fs: MMP: invalid magic number while trying to open /dev/mapper/pve-data
Couldn't find valid filesystem superblock.  

And rightly so - this volume isn't mounted via fstab anyway.

In fact, while I was trying to figure out how to grow the volume and reading the Proxmox forum, the system was already showing the full new size, both in the table and on the usage bar.
