Puppet, a configuration management system. Part II

    R2-D2 and C-3PO
    In the first part, I talked about the main features of the Puppet configuration management system. In this second part, we will set up two machines in order to try out the basics.

    For the hostnames, I decided to use the names of the droids from George Lucas's Star Wars saga: r2d2 and c3po. Since R2-D2 is the smarter one, he will manage C-3PO.

    I decided to use Ubuntu Server 8.04.1 LTS as the OS for the experiments. Debian, CentOS, or FreeBSD would have worked just as well; the choice is not fundamental. I use Ubuntu Server because it is simple to configure and I personally find it friendly. If you prefer something else, no objections.

    Management server


    So, let's start with r2d2, the control machine. Install the puppetmaster package on it:

    sudo apt-get install puppetmaster

    After this command completes, the management server is installed and running under the puppet account.

    Now let's create a configuration file for the management server; in Puppet terminology, it is called a manifest. We will create the manifest site.pp in the /etc/puppet/manifests directory. Note right away that since we have not defined any nodes, the parameters in this manifest apply to all client hosts. Thus, every machine that contacts r2d2 for its configuration will check the permissions and ownership of its /etc/passwd file. The server listens on port 8140, so in case of problems check your network settings: client machines must be able to reach port 8140 on the management server. The content of site.pp is as follows:

    file { "/etc/passwd":
        owner => "root",
        group => "root",
        mode  => 644,
    }
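
    For comparison, here is a sketch of how the same resource could be scoped to a single node instead of every host. This is a hypothetical example (the node name c3po.localdomain is assumed); in this article we deliberately use the node-less form so the settings apply to all clients:

    # hypothetical example: apply the resource to one specific client
    # instead of to every host that contacts the server
    node "c3po.localdomain" {
        file { "/etc/passwd":
            owner => "root",
            group => "root",
            mode  => 644,
        }
    }

    # clients without a definition of their own fall back to this node
    node default {
    }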






    Client


    Install the puppet package on the client:

    sudo apt-get install puppet

    The client, unlike the server, runs under the root account so that it can make any changes to the system. On the first connection, the client generates a certificate and asks the server to sign it. Once the certificate is signed, the client receives the current config and applies it to the machine. From then on, the client checks every half hour whether the configuration has changed.

    Add the following lines at the end of the /etc/puppet/puppet.conf config:

    [puppetd]
    server=r2d2.localdomain

    This tells the client which server to work with. You can specify an IP address; I have the IP of r2d2 registered in /etc/hosts. It is VERY IMPORTANT that the server name match exactly the name under which the management server signs certificates. You can check the server name in the certificates using openssl:

    openssl s_client -showcerts -connect r2d2.localdomain:8140

    Also make sure the pluginsync line is commented out:

    #pluginsync=true

    This option enables synchronization of plugins with the server; we don't need it yet, so it is better left commented out.

    Now run the puppet client so that it generates a certificate, sends it to the management server, and asks for it to be signed:

    spanasik@c3po:~$ sudo puppetd --verbose --test
    info: Creating a new certificate request for c3po.localdomain
    info: Creating a new SSL key at /var/lib/puppet/ssl/private_keys/c3po.localdomain.pem
    warning: peer certificate won't be verified in this SSL session
    notice: No certificates; exiting

    The c3po certificate request should now be on r2d2. Check that it is there and, if so, sign it:

    spanasik@r2d2:~$ sudo puppetca --list
    c3po.localdomain
    spanasik@r2d2:~$ sudo puppetca --sign c3po.localdomain
    Signed c3po.localdomain

    The certificate is signed. Repeat the test run of the client:

    spanasik@c3po:~$ sudo puppetd --verbose --test
    warning: peer certificate won't be verified in this SSL session
    notice: Got signed certificate
    info: No classes to store
    info: Caching catalog at /var/lib/puppet/state/localconfig.yaml
    notice: Starting catalog run
    info: Creating state file /var/lib/puppet/state/state.yaml
    notice: Finished catalog run in 0.04 seconds

    Everything works. Now let's check what happens if we change the owner of the /etc/passwd file :-) My account is spanasik, so I will make myself the owner and set mode 777:

    spanasik@c3po:~$ sudo chown spanasik:users /etc/passwd
    spanasik@c3po:~$ sudo chmod 777 /etc/passwd
    spanasik@c3po:~$ ls -la /etc/passwd
    -rwxrwxrwx 1 spanasik users 1084 2009-09-01 12:01 /etc/passwd

    Now run the puppet client again:

    spanasik@c3po:~$ sudo puppetd --verbose --test
    notice: Ignoring cache
    info: No classes to store
    info: Caching catalog at /var/lib/puppet/state/localconfig.yaml
    notice: Starting catalog run
    notice: //File[/etc/passwd]/owner: owner changed 'spanasik' to 'root'
    notice: //File[/etc/passwd]/group: group changed 'users' to 'root'
    notice: //File[/etc/passwd]/mode: mode changed '777' to '644'
    notice: Finished catalog run in 0.03 seconds

    Voilà! The owner is root again, and the permissions are back to 644, as expected:

    spanasik@c3po:~$ ls -la /etc/passwd
    -rw-r--r-- 1 root root 1084 2009-09-01 12:01 /etc/passwd

    Finally, let's start the client daemon:

    spanasik@c3po:~$ sudo /etc/init.d/puppet start
    * Starting puppet configuration management tool [ OK ]
    spanasik@c3po:~$ ps auxw | grep puppet | grep -v grep
    root 6959 1.3 7.3 29584 18856 ? Ssl 13:46 0:00 ruby /usr/sbin/puppetd -w 0

    Everything works: from now on, every half hour c3po will check r2d2 for configuration updates and apply any changes to the system.
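
    The half-hour check interval is just Puppet's default. It can be changed with the runinterval setting (given in seconds) in puppet.conf; a sketch, assuming the same [puppetd] section as before:

    [puppetd]
    server=r2d2.localdomain
    # poll the management server every 10 minutes instead of the
    # default 30 (the value is in seconds)
    runinterval=600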

    One machine, automatic deployment?


    If you have only one machine, install both packages on it and configure them exactly as described above. I covered the advantages of using the system on a single machine in the previous article; the main one is being able to bring a new server up quickly after a crash.
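
    A minimal sketch of the client config for that single-machine case (the hostname is assumed): the server setting must still match the name on the server certificate, so point it at the machine's own name rather than localhost:

    [puppetd]
    # single-machine setup: the client talks to the puppetmaster on
    # this same host; the name must match the server certificate
    server=r2d2.localdomain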

    You can see that in this article I did everything manually. Of course, that is not an option when you have hundreds of machines. When you have many machines, you can deploy the system automatically: you prepare an installation image and write it to the hard drives. On first boot, the client system connects to the management server, and from there you can use the default config or work with each machine separately. I note that I have not done this myself, because I do not administer a fleet of machines.

    In the comments, pingeee describes a possible way to roll out images over the network, for which many thanks to him. The respected stasikos tells about the FAI tool for Debian-like distributions, for which we are equally grateful.

    In the following articles we will talk about more complex and interesting things that puppet allows you to do.
