We set up booting Arch Linux over the network

In the previous article, we prepared the basic system. We will finish the setup in the next article.

Here we will create a new Arch Linux system that can boot over the network and automatically launch the Firefox browser, and along the way we will work out what functionality the boot server needs. Then we configure the server itself and try to boot from it. Everything will happen exactly as in the picture that Google returns for the query “PXE”:

Install Linux again

Arch Linux compares favorably with ready-made distributions in that installing a new system from a working machine works the same way as installing from the official image, and in both cases you get the most current version of the system. All you need is the small set of installation scripts:
pacman -S arch-install-scripts

A perfectly predictable beginning:
export root=/srv/nfs/diskless
mkdir -p $root

Install only the basic packages, therefore:
pacstrap -d -c -i $root base

At this point I should have written: “We strive to minimize the size of the installed system, because network throughput is much lower than that of even the slowest hard drive!” But I know the footprint can be shrunk even further by picking individual packages out of the base group, and I leave that to you.

Next, repeat all the steps from the previous article up to the bootloader installation. Here is the checklist:
  • set up Russian localization (internationalization);
  • set the time zone and enable the NTP service at startup;
  • add the user username and lock their password against changes.

Compare disk booting and network booting

In the previous article, we looked at the Linux boot process from the point of view of the internal storage. Now let us picture what happens through the eyes of the network card. The picture in the header illustrates the events well, except that in our case all the servers will run on the same computer.

Immediately after the computer is turned on, the PXE code (Preboot eXecution Environment, pronounced “pixie”, thanks to the wiki) runs directly from the ROM of the network card. Its task is to find a bootloader and hand control over to it.
From here on, we rely on the capabilities of the network card built into almost any motherboard on sale (although some boards with Socket 775 shipped weak ones). If there is no PXE item in the BIOS boot device list, try enabling it among the network card settings in the integrated devices section. If that fails, consult your motherboard manual and make sure the board is suitable for the experiments ahead.

The network adapter has no idea which network it is currently in, so it assigns itself the address 0.0.0.0 and broadcasts a DHCPDISCOVER message. Attached to the message are identifying data that we will definitely need:
  • ARCH Option 93 - the PXE client architecture (UEFI, BIOS);
  • Vendor-Class Option 60 - an identifier that for all PXE clients looks like “PXEClient:Arch:xxxxx:UNDI:yyyzzz”, where xxxxx is the client architecture and yyyzzz is the major and minor version of the UNDI driver (Universal Network Driver Interface).
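As a toy illustration (the value below is a made-up sample, not captured from a real card), the Vendor-Class identifier can be split into its fields with bash:

```shell
#!/bin/bash
# Split a sample PXE Vendor-Class (option 60) identifier into its fields
# using IFS; the value is a hypothetical example in the format above.
vendor='PXEClient:Arch:00007:UNDI:003016'

IFS=':' read -r client tag arch undi undi_ver <<< "$vendor"
echo "client=$client arch=$arch undi_version=$undi_ver"
```

The architecture field is exactly what the DHCP server will branch on later to decide between the BIOS and UEFI bootloader files.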

The adapter expects a response from a DHCP server via the BOOTP protocol (Bootstrap Protocol) carrying, in addition to the desired IP address, subnet mask and gateway address, the address of a TFTP server and the name of the bootloader file to fetch from it. The TFTP server, in turn, simply serves whatever files are requested to anyone who asks.

After the response is received and the network settings applied, control of the boot passes to the downloaded file, whose size cannot exceed 32 kB, which is why a two-stage boot is used. Everything needed to display the boot menu on the screen is fetched next over the same TFTP protocol. The vast majority of network boot guides use the pxelinux bootloader, but GRUB can do the same and more: it ships bootloaders for different architectures, including UEFI.

Next, the boot pauses while the menu is displayed, after which the selected vmlinuz and initramfs files are fetched, again over TFTP, and control passes to them. From this point on there is no difference between the boot mechanism over the network and from an internal drive.

Configuring network boot using GRUB

Since GRUB is already installed on our server, we use it to create the folder structure for the network client:

grub-mknetdir --net-directory=$root/boot --subdir=grub

The grub folder and several others will appear inside $root/boot. We will “give away” this entire file structure over TFTP. We use 64-bit Arch Linux here because a 32-bit system lacks the /grub/x86_64-efi/ folder required to boot UEFI systems. You can take this folder from our 64-bit server and transfer it unchanged to a 32-bit one, and UEFI support will appear there too.

Create the bootloader configuration file with the following contents:
cat $root/boot/grub/grub.cfg
function load_video {
    if [ x$feature_all_video_module = xy ]; then
        insmod all_video
    else
        insmod efi_gop
        insmod efi_uga
        insmod ieee1275_fb
        insmod vbe
        insmod vga
        insmod video_bochs
        insmod video_cirrus
    fi
}

if [ x$feature_default_font_path = xy ] ; then
    font=unicode
fi

if loadfont $font ; then
    set gfxmode=auto
    load_video
    insmod gfxterm
    set locale_dir=locale
    set lang=ru_RU
    insmod gettext
fi

terminal_input console
terminal_output gfxterm
set timeout=5
set default=0

menuentry "Autostart Firefox" {
    set gfxpayload=keep
    insmod gzio
    echo "Loading the kernel..."
    linux /vmlinuz-linux \
         add_efi_memmap \
         ip="$net_default_ip":"$net_default_server": \
         nfsroot=${net_default_server}:/diskless
    echo "Loading the initial file system..."
    initrd /initramfs-linux.img
}

I took the grub.cfg file from the server and removed everything that is not involved in displaying the GRUB boot menu or is somehow connected with disks.

Pay attention to the familiar string with kernel parameters:
linux /vmlinuz-linux add_efi_memmap ip="$net_default_ip":"$net_default_server": nfsroot=${net_default_server}:/diskless

As last time, we assign a value to the "ip" variable. I remind you that it is used by the "net" hook, which we adapted to configure the network card on the boot server. There, a static IP address and the fixed network card name eth0 were specified. The values of $net_default_ip and $net_default_server are substituted by GRUB itself from the data received in that very first DHCP exchange: $net_default_ip is the IP address allocated to our machine, and $net_default_server is the IP address of the boot server.

Most network boot manuals (among those found across the Internet) suggest setting the variable as "ip=:::::eth0:dhcp", which forces the net hook to send another DHCPDISCOVER request to obtain the network settings.
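The ip= parameter is just a colon-separated list of fields. A small bash sketch (field order as in the kernel's nfsroot documentation: client, server, gateway, netmask, hostname, device, autoconf; the addresses are made-up examples) makes the two variants easier to compare:

```shell
#!/bin/bash
# Name the colon-separated fields of an ip= kernel parameter.
# Field order per the kernel nfsroot docs; values below are examples.
parse_ip() {
    IFS=':' read -r client server gw netmask hostname device autoconf <<< "${1#ip=}"
    echo "client=$client server=$server device=$device autoconf=$autoconf"
}

parse_ip 'ip=192.168.1.50:192.168.1.1:::::'   # static form, like our grub.cfg
parse_ip 'ip=:::::eth0:dhcp'                  # DHCP form from most guides
```

In the static form only the first two fields are filled in, which is exactly what GRUB's $net_default_ip and $net_default_server give us for free.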

There is no objective reason to “spam” the DHCP server yet again and wait for it to respond, so we once more use static settings and do not forget to specify the DNS servers. We already solved this problem, so we just copy the necessary files and add the service to startup:
cp {,$root}/etc/systemd/system/update_dns@.service && cp {,$root}/etc/default/dns@eth0 && arch-chroot $root systemctl enable update_dns@eth0

Back to the line with kernel parameters. The still unfamiliar add_efi_memmap parameter adds the EFI memory map to the available RAM. Last time we deliberately skipped it because of the relatively complicated preliminary partitioning of the media required to support UEFI. Now there is nothing to partition: the file system on the boot server already exists and will be used unchanged.

The nfsroot kernel parameter tells where exactly on the network to look for the root file system. It performs the same function as the root parameter on the boot server. It specifies the address of the NFS server, which in our case coincides with the TFTP server, though that is entirely optional.

Preparing initramfs

The net hook is responsible for mounting the root file system over NFS. Last time we stripped this functionality out of it, but now we need it back, albeit in a slightly modified form. The thing is, out of the box the net hook only supports mounting over NFS version 3. Fortunately, adding version 4 support is very simple.

First, install the package, which includes the net handler we need, as well as the utility package for working with NFS (the nfsv4 module and the mount.nfs4 program):

pacman --root $root --dbpath $root/var/lib/pacman -S mkinitcpio-nfs-utils nfs-utils

We fix the net hook from the hooks folder (instead of the nfsmount command, we will now use mount.nfs4):
sed s/nfsmount/mount.nfs4/ "$root/usr/lib/initcpio/hooks/net" > "$root/etc/initcpio/hooks/net_nfs4"
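The substitution itself is trivial; here it is shown on a made-up line shaped like the mount call inside the net hook (the real hook's wording may differ slightly):

```shell
#!/bin/bash
# Demonstrate the sed substitution on a hypothetical sample line that
# resembles the NFS mount call inside the net hook.
sample='nfsmount "${nfs_server}:${nfs_path}" /new_root'
echo "$sample" | sed 's/nfsmount/mount.nfs4/'
```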

Using the hook installer from the install folder, we add the nfsv4 module and the mount.nfs4 program to the initramfs. First, copy and rename the template:
cp $root/usr/lib/initcpio/install/net $root/etc/initcpio/install/net_nfs4

Now we fix only the build() function, leaving the rest untouched:
nano $root/etc/initcpio/install/net_nfs4
build() {
    add_checked_modules '/drivers/net/'
    add_module nfsv4?
    add_binary "/usr/lib/initcpio/ipconfig" "/bin/ipconfig"
    add_binary "/usr/bin/mount.nfs4" "/bin/mount.nfs4"
}

Add a handler to initramfs by correcting the line in the mkinitcpio.conf file:
nano $root/etc/mkinitcpio.conf
HOOKS="base udev net_nfs4"

If you touch nothing, the fast gzip archiver is usually used to compress the initramfs file. We are in no hurry, but we do want the best compression, so we will use xz. Uncomment this line in the mkinitcpio.conf file:
COMPRESSION="xz"

xz compression takes much longer, but the initramfs file shrinks at least twofold, so the TFTP server transfers it over the network much faster. Copy the preset from our server so that only one initramfs file is generated, and then run mkinitcpio:

cp /etc/mkinitcpio.d/habr.preset $root/etc/mkinitcpio.d/habr.preset && arch-chroot $root mkinitcpio -p habr
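A toy size comparison on repetitive sample data (not a real initramfs) shows why xz is worth the extra CPU time:

```shell
#!/bin/bash
# Compress the same sample data with gzip -9 and xz -9 and compare sizes.
# The data is artificial and highly repetitive; real initramfs ratios differ.
sample=$(mktemp)
for i in $(seq 1 20000); do
    echo "kernel module placeholder line"
done > "$sample"

gz_size=$(gzip -9 -c "$sample" | wc -c)
xz_size=$(xz -9 -c "$sample" | wc -c)
echo "gzip: ${gz_size} bytes, xz: ${xz_size} bytes"
rm -f "$sample"
```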

Finally, edit fstab. Here you could tune the mount options of the root file system to optimize its operation, but we will not touch anything:
echo " / nfs defaults 0 0" >> $root/etc/fstab

The basic installation of the client system is now complete. But we want to add a graphical environment and automatic launch of Firefox.

Boot into Firefox

To reduce the amount of memory our system occupies, we forgo a display manager and settle on the simplest window manager, for example openbox, with automatic authorization of the user username. Using “lightweight” components will let the system boot remarkably fast and run even on truly ancient hardware.

Install the VirtualBox support modules, the X server, a nice TTF font, openbox and firefox (all other packages will come in as dependencies):
pacman --root $root --dbpath $root/var/lib/pacman -S virtualbox-guest-modules virtualbox-guest-utils xorg-xinit ttf-dejavu openbox firefox

Enable the virtualbox service at startup:
arch-chroot $root systemctl enable vboxservice

Add automatic login of username without a password prompt; for that we change the agetty launch line:
mkdir $root/etc/systemd/system/getty@tty1.service.d && \
echo -e "[Service]\nExecStart=\nExecStart=-/usr/bin/agetty --autologin username --noclear %I 38400 linux\nType=simple" > $root/etc/systemd/system/getty@tty1.service.d/autologin.conf
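The same drop-in can be written with a heredoc, which is easier to read than echo -e; in this sketch it goes to a scratch directory so the result can be inspected before placing it under $root/etc/systemd/system/:

```shell
#!/bin/bash
# Write the autologin drop-in with a heredoc into a temporary directory
# (a sketch; in the article the file lives under $root/etc/systemd/system/).
dropin=$(mktemp -d)/getty@tty1.service.d
mkdir -p "$dropin"
cat > "$dropin/autologin.conf" <<'EOF'
[Service]
ExecStart=
ExecStart=-/usr/bin/agetty --autologin username --noclear %I 38400 linux
Type=simple
EOF
cat "$dropin/autologin.conf"
```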

Immediately after the user logs in, the ~/.bash_profile file from their home folder is executed; there we add automatic launch of the X server:
echo '[[ -z $DISPLAY && $XDG_VTNR -eq 1 ]] && exec startx &> /dev/null' >> $root/home/username/.bash_profile
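The guard in that one-liner has exactly two conditions: no X display yet, and we are on virtual terminal 1. Pulled out as a function (a sketch, not part of the real .bash_profile), it reads:

```shell
#!/bin/bash
# The startx guard from .bash_profile as a function: start X only when
# DISPLAY is empty (no X running) and we are on virtual terminal 1.
should_startx() {
    local display=$1 vtnr=$2
    [[ -z $display && $vtnr -eq 1 ]]
}

should_startx ""   1 && echo "tty1, no X: start it"
should_startx ":0" 1 || echo "X already running: skip"
should_startx ""   2 || echo "not tty1: skip"
```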

When the X server starts, openbox should start too:
cp $root/etc/X11/xinit/xinitrc $root/home/username/.xinitrc && echo 'exec openbox-session' >> $root/home/username/.xinitrc

Comment out the following lines at the very end of the file (from the twm line down to, but not including, the openbox line we added):
cat $root/home/username/.xinitrc
# twm &
# xclock -geometry 50x50-1+1 &
# xterm -geometry 80x50+494+51 &
# xterm -geometry 80x20+494-0 &
# exec xterm -geometry 80x66+0+0 -name login
exec openbox-session

Copy the openbox configuration files:
mkdir -p $root/home/username/.config/openbox && cp -R $root/etc/xdg/openbox/* $root/home/username/.config/openbox

Add firefox to autostart in the openbox environment:
echo -e 'exec firefox habrahabr.ru/post/253573/' >> $root/home/username/.config/openbox/autostart

Since we have just been writing into the user's home folder as the superuser, we need to return to username the ownership of all files in it:
chown -R username $root/home/username

The system's preparation for booting over the network is complete, and it is time to move on to setting up the boot server. Now we know that booting requires:
  • a DHCP server with BOOTP support to configure the network card;
  • a TFTP server to transfer the bootloader and the vmlinuz and initramfs files located under $root/boot;
  • an NFS server to host the root file system, which lives in our $root folder.

Configure the boot server

The further steps repeat, with minor changes, the corresponding article from the wiki, so a minimum of commentary from me.

Install a DHCP server

Download the package:
pacman -S dhcp

and bring the contents of the configuration file /etc/dhcpd.conf to the following form:
mv /etc/dhcpd.conf /etc/dhcpd.conf.old 

nano /etc/dhcpd.conf

# Allow the BOOTP protocol
allow booting;
allow bootp;
# Declare this server authoritative (routers are usually either not
# authoritative or lack BOOTP support, so PXE will not listen to them)
authoritative;
# Obtain the client architecture (discussed above)
option architecture code 93 = unsigned integer 16;
# We serve this subnet (example values, adjust for your network)
subnet 192.168.1.0 netmask 255.255.255.0 {
    # Everyone who tries to boot over the network falls into this class
    class "pxe_client" {
        match if exists architecture;
    }
    pool {
        # Serve different files to different architectures:
        if option architecture = 00:07 {
            filename "/grub/x86_64-efi/core.efi";
        } else {
            filename "/grub/i386-pc/core.0";
        }
        # I recommend specifying the TFTP server address, even though it is
        # optional when it lives on the same machine as DHCP
        next-server 192.168.1.1;
        # The usual settings (do not forget to adjust for yourself)
        default-lease-time 600;
        max-lease-time 7200;
        option domain-name-servers 192.168.1.1;
        option routers 192.168.1.1;
        # Serve only the requests of those who are booting
        allow members of "pxe_client";
    }
}

As you can see, the DHCP server will only respond to those DHCPDISCOVER requests that come from PXE clients, and the rest will simply be ignored.

We start the DHCP server:
systemctl start dhcpd4

Install TFTP server

Download and install the necessary package:
pacman -S tftp-hpa

We need the TFTP server to provide access to the bootloader files we placed in the $root/boot folder. To do this, we modify the service startup in the proven way:
mkdir -p /etc/systemd/system/tftpd.service.d && echo -e '[Service]\nExecStart=\nExecStart=/usr/bin/in.tftpd -s /srv/nfs/diskless/boot' > /etc/systemd/system/tftpd.service.d/directory.conf

The first "ExecStart=" line cancels the command specified in the original /usr/lib/systemd/system/tftpd.service file, and "/usr/bin/in.tftpd -s /srv/nfs/diskless/boot" is executed instead. Only when a service is of the one-shot type (Type=oneshot) can several ExecStart= lines be used to run commands one after another. That is not our case, so we cancel one command and run another.

We start the TFTP server:
systemctl start tftpd.socket tftpd.service

Install NFS server

Download the package:
pacman -S nfs-utils

Add the folder in which we installed the system to the list of exported folders:
echo -e "/srv/nfs *(rw,fsid=root,no_subtree_check,no_root_squash)\n$root *(rw,no_subtree_check,no_root_squash)" >> /etc/exports

Do not forget the NFS v4 syntax: the client specifies paths relative to the folder exported with fsid=root (the root of all other exported folders, without which nothing will work).
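A small sketch of that path mapping (the server address is a made-up example): with fsid=root on /srv/nfs, the client addresses /srv/nfs/diskless simply as ":/diskless", which is exactly what our nfsroot parameter says.

```shell
#!/bin/bash
# How an NFSv4 client addresses an export: the absolute path on the server
# minus the fsid=root export root. Server address is an assumed example.
server=192.168.1.1
export_root=/srv/nfs
abs_path=/srv/nfs/diskless

nfs4_path=${abs_path#"$export_root"}
echo "mount.nfs4 $server:$nfs4_path /new_root"
```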

We start the services providing operation of the NFS server:
systemctl start rpcbind nfs-server

With that, the boot server is ready to go.

We try to boot over the network

We will follow the boot process on the server side with the tcpdump program:
pacman -S tcpdump
tcpdump -v '( \
    src host and udp[247:4] = 0x63350101) or ( \
    dst host HabraBoot and dst port tftp) or ( \
    dst host HabraBoot and tcp[tcpflags] == tcp-syn)'

The first filter line “catches” the DHCPDISCOVER request from the PXE client. The output matching the second line will list the names of all files requested over TFTP. The third line shows the two tcp-syn packets sent at the very start of the NFS connection (the first connection is made by the net hook, and the second, a reconnection, happens while fstab is processed).

We create a new virtual machine, which for brevity we will call the “client”. In the network settings we again choose the “Network Bridge” connection type and power the machine on. Immediately press F12 to choose the boot device, then the l key to boot over the network.

Wait for the boot to finish. If everything is in order, enable the services we used on the server at startup:

systemctl enable tftpd.socket tftpd.service dhcpd4 rpcbind nfs-server

We launched all of the DHCP, TFTP and NFS servers on the same boot machine, but that is optional. For example, Mikrotik routers support BOOTP and can act as the TFTP server themselves: just upload all the necessary files there and check the network settings.

For now, the graphical environment will only work in VirtualBox, because we did not install drivers for real video cards. We will solve the problem of automatically selecting the right drivers in the next article, and at the same time speed up system boot and turn the system into a “live image”.
