SALT - Python configuration management software
Dear colleagues, I would like to bring to your attention a configuration management system written entirely in Python. It is quite new, but it deserves attention. If you are interested in how to manage a fleet of servers and workstations as a single system with this tool, the details are under the cut.


Why Salt?
Some time ago I realized that the number of servers I manage had grown from a handful to more than 20 and kept growing. Questions arose about centralized software updates, password changes, quickly spinning up new virtual machines, and similar routine tasks familiar to any IT specialist.
Naturally, I began to survey the market. Wikipedia, for example, has a list of free management systems: Comparison_of_open_source_configuration_management_software , which is what I relied on when choosing a system to study. I used the following criteria:
- At least some Windows support, even if weak
- Good Linux support
- Open source
- Not Ruby (I know, that is my own shortcoming, but Ruby and I never got along)
As you might guess, the choice fell on Salt.
Concept and basic terms
As I understand it, Salt solves two problems:
- Centralized command execution on groups of machines
- Keeping systems in predefined states
The system consists of clients (minions) and servers (masters). The connection is initiated by the minion, so NAT and the like are not a problem for us.
Grains
Machine groups are formed on the basis of so-called grains: system parameters collected by the minion service at startup.
For example, if we want to check the availability of all the nodes on which CentOS is installed, we just need to write:
salt -G 'os:CentOS' test.ping
or we want to know the number of cores in all our 64-bit systems:
salt -G 'cpuarch:x86_64' grains.item num_cpus
Along with the standard grains, we can add our own (for example, in the minion configuration file):
grains:
  roles:
    - webserver
    - memcache
  deployment: datacenter4
  cabinet: 13
  cab_u: 14-15
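To illustrate the idea, grain-based targeting like -G 'os:CentOS' can be sketched in plain Python. This is a simplified illustration, not Salt's actual matching code, and the grains dict below is made up:

```python
from fnmatch import fnmatch

# Example grains, as a minion might report them (values here are made up).
grains = {
    "os": "CentOS",
    "cpuarch": "x86_64",
    "roles": ["webserver", "memcache"],
}

def matches_grain(grains, target):
    """Check a 'key:pattern' target against a grains dict.

    The pattern part is a shell-style glob; list-valued grains
    match if any element matches.
    """
    key, _, pattern = target.partition(":")
    value = grains.get(key)
    if value is None:
        return False
    values = value if isinstance(value, list) else [value]
    return any(fnmatch(str(v), pattern) for v in values)

print(matches_grain(grains, "os:CentOS"))        # True
print(matches_grain(grains, "roles:webserver"))  # True
print(matches_grain(grains, "os:Debian"))        # False
```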
States
The second component is state files (SLS), which describe the required state of the system; based on these files, the minion brings the client system's parameters to the state we need.
That sounds complicated in the abstract, so here is an example.
First, check in the master's configuration file where the SLS files should be stored:
vim /etc/salt/master
looking for something similar to:
file_roots:
  base:
    - /srv/salt
So our SLS files will live in this folder (/srv/salt/).
The top.sls file in this folder is required.
Here is an example of the contents of this file on my test machine:
base:
  'web2':
    - fail2ban
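The way a top file maps minions to state files can be sketched as follows. This is a simplified illustration, not Salt's implementation: targets are shell-style globs matched against the minion ID, and the 'web*' entry is a hypothetical addition for demonstration:

```python
from fnmatch import fnmatch

# A top-file-like structure: environment -> target glob -> list of SLS names.
# The 'web*' entry is hypothetical, added to show glob matching.
top = {
    "base": {
        "web2": ["fail2ban"],
        "web*": ["nginx"],
    }
}

def states_for(minion_id, env="base"):
    """Collect the SLS names whose target glob matches this minion ID."""
    matched = []
    for target, sls_list in top[env].items():
        if fnmatch(minion_id, target):
            matched.extend(sls_list)
    return matched

print(states_for("web2"))  # ['fail2ban', 'nginx'] - both targets match
print(states_for("db1"))   # [] - no target matches
```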
fail2ban (the third line) can be either an SLS file or a folder containing an init.sls file. The second option seems more convenient to me, so that is what I did:
The contents of my fail2ban state file:
cat fail2ban/init.sls

fail2ban.conf:
  file.managed:
    - name: /etc/fail2ban/fail2ban.conf
    - source: salt://fail2ban/fail2ban.conf

jail.conf:
  file.managed:
    - name: /etc/fail2ban/jail.conf
    - source: salt://fail2ban/jail.conf

fail2ban:
  pkg:
    - installed
  service.running:
    - enable: True
    - watch:
      - file: fail2ban.conf
      - file: jail.conf
There are three entities in this SLS file:
- The fail2ban.conf file
- The jail.conf file
- The fail2ban package itself
According to this SLS file, Salt will:
- Compare the managed files with those on the client and update the client's copies if they differ
- Check that the fail2ban package is installed
- Check that the service is enabled
- Check that the service is running
- Restart the service if either of the files has changed
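The watch behaviour in the last step can be sketched like this. It is a toy model that assumes change detection by content hashing; Salt's real implementation differs:

```python
import hashlib

def file_hash(data: bytes) -> str:
    """Hash of a file's contents, used to detect changes."""
    return hashlib.sha256(data).hexdigest()

def apply_watch(old_hashes, files):
    """Return (new_hashes, restart_needed) for the watched files.

    `files` maps file name -> current contents; a restart is needed
    whenever any watched file's hash differs from the previous run.
    """
    new_hashes = {name: file_hash(data) for name, data in files.items()}
    restart = any(old_hashes.get(name) != h for name, h in new_hashes.items())
    return new_hashes, restart

hashes, restart = apply_watch({}, {"jail.conf": b"[DEFAULT]\nbantime = 600\n"})
print(restart)  # True: first run, no previous hash recorded
hashes, restart = apply_watch(hashes, {"jail.conf": b"[DEFAULT]\nbantime = 600\n"})
print(restart)  # False: contents unchanged, no restart needed
```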
Installation
First of all, I strongly recommend against using the bootstrap script offered by the project's authors, since it performs a number of, in my opinion, questionable actions (for example, yum -y update and enabling the epel-testing repository).
Actually, at least on CentOS, nothing complicated is required.
The salt-master and salt-minion packages are in epel.
The service names on CentOS are the same.
Service configuration files are located in /etc/salt/
Master
By default, the master opens two ports that should be allowed in iptables:
iptables -I INPUT -j ACCEPT -p tcp --dport 4505:4506
4505 - for minion-to-master communication
4506 - for file transfer
Minion
The basic minion setup comes down to specifying the master's address; personally, I also set an explicit id. If you do not, the FQDN is used.
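For reference, a minimal /etc/salt/minion could look like this (the master hostname below is a placeholder):

```yaml
# /etc/salt/minion
master: salt.example.com   # address of the master to connect to
id: web2                   # explicit minion ID; the FQDN is used if omitted
```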
When launched, the minion connects to the master and gives it its public key.
Then, on the master, run the salt-key utility:
[root@test salt]# salt-key
Accepted Keys:
Unaccepted Keys:
web2
Rejected Keys:
The key we need has appeared in the Unaccepted Keys list; to add it to the accepted ones:
[root@test salt]# salt-key -a web2
Key for minion web2 accepted.
This completes the connection of the first minion. You can check its availability:
[root@test salt]# salt 'web2' test.ping
web2: True
and if we want to apply our fail2ban state:
[root@test salt]# salt 'web2' state.highstate

What can be done with minions?
The list of modules for remote control: salt.readthedocs.org/en/latest/ref/states/all/salt.states.pkg.html#module-salt.states.pkg
The list of modules for state management: salt.readthedocs.org/en/latest/ref/states/all/salt.states.file.html#module-salt.states.file
Questions
There are two points I have not yet figured out:
- How can key acceptance on the server be automated, for full automation? (Most likely I will write my own module using HTTP requests.)
- What happens to a minion if it is switched to a different master? Will it obey the new one? Thanks to my colleague and good friend Jan for this question.
I will try to answer these questions in the following articles about Salt, after I have tried everything myself.
Thank you for your attention. I look forward to your comments and remarks.