Ansible: testing playbooks (part 1)

  • Tutorial


I think any system administrator who uses Ansible to manage their zoo of servers has wondered how to verify that the server configuration is described correctly. How do you stop being afraid of making changes to a server's configuration?
In this series of articles on DevOps, we will talk about exactly that.



Here are the conditions under which we will perform configuration testing:

1. The entire configuration is stored in a git repository.
2. Jenkins (the CI service) periodically polls the repository with our roles/playbooks for changes.
3. When changes appear, Jenkins starts a configuration build and covers it with tests. Testing consists of two stages:
3.1 Test-kitchen takes the updated code from the repository, launches completely fresh Docker containers, deploys the updated playbooks from the repository into them, and runs Ansible locally, inside the Docker container.
3.2 If the first stage succeeds, Serverspec starts inside the Docker container and checks whether the new configuration came up correctly.
4. If all the tests in test-kitchen pass, Jenkins initiates the rollout of the new configuration.

Of course, you can run every playbook/role in Vagrant (fortunately, there is such a handy thing as provisioning) to verify that the configuration turns out as expected, but performing that many manual steps every time you need to test a new or changed configuration is a dubious pleasure. Why bother, when everything can be automated? This is where such wonderful tools as Test-kitchen, Serverspec and, of course, Docker come in.

First, let's look at how we can test code in Test-kitchen, using a couple of idealized ("spherical in a vacuum") roles as an example.

Ansible.



I built the latest Ansible from source; I prefer building it by hand. (If you are feeling lazy, you can use omnibus-ansible.)
git clone git://github.com/ansible/ansible.git --recursive
cd ./ansible


Build and install the deb package (we will test the playbooks on Debian):
make deb
dpkg -i deb-build/unstable/ansible_2.1.0-0.git201604031531.d358a22.devel~unstable_all.deb


Ansible is installed; let's check:
ansible --version
ansible 2.1.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = Default w/o overrides


Excellent! So it's time to get down to business.

Now we need to create a git repository.
mkdir /srv/ansible && cd /srv/ansible
git init
mkdir base && cd base # Create the project folder with the configuration


The repository architecture is roughly the following:
├── ansible.cfg
├── inventory
│   ├── group_vars
│   ├── hosts.ini
│   └── host_vars
├── logs
├── roles
│   ├── common
│   │   ├── defaults
│   │   │   └── main.yml
│   │   ├── files
│   │   ├── handlers
│   │   │   └── main.yml
│   │   ├── tasks
│   │   │   ├── install_packages.yml
│   │   │   └── main.yml
│   │   ├── templates
│   │   └── vars
│   └── nginx
│       ├── defaults
│       ├── files
│       ├── handlers
│       │   └── main.yml
│       ├── tasks
│       │   ├── configure.yml
│       │   ├── install.yml
│       │   └── main.yml
│       ├── templates
│       │   └── nginx.conf.j2
│       └── vars
├── site.yml
├── Vagrantfile
└── vars
    └── nginx.yml


We will not touch the default configuration file; instead, we keep our own settings in the project's configuration file.

ansible.cfg:
[defaults]
roles_path          = ./roles/              # Folder with roles
retry_files_enabled = False                 # Disable retry files on failed task runs
become              = yes                   # Equivalent to invoking sudo
log_path            = ./logs/ansible.log    # Logs
inventory           = ./inventory/          # Path to inventory files


Next, we need an inventory file where we need to specify a list of hosts with which we will work.
mkdir inventory
cd inventory
mkdir host_vars
mkdir group_vars


Inventory file:
127.0.0.1 ansible_connection=local


All the hosts that Ansible will manage are listed here.
host_vars is the folder holding variables that may differ from the base values defined in a role.
Example: Ansible's Jinja2 template engine is useful when working with files and configs.
Suppose we have a resolv.conf template, templates/resolv.conf.j2:
nameserver {{ nameserver }}


The default variables file (roles/common/defaults/main.yml) says:
nameserver: 8.8.8.8


But on host 1.1.2.2, we need resolv.conf filled with a different nameserver value.
We do this via host_vars/1.1.2.2.yml:
nameserver: 8.8.4.4


In this case, when the playbook runs, the standard resolv.conf (with the value 8.8.8.8) will be deployed to all hosts, while host 1.1.2.2 gets one with the value 8.8.4.4.
You can read more about this in the Ansible documentation.
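group_vars works the same way, but at the level of whole inventory groups. As a hedged illustration (the group name and value here are hypothetical, not taken from the original project), a file inventory/group_vars/dns-clients.yml could override the same variable for every host in a dns-clients group:

```yaml
# inventory/group_vars/dns-clients.yml -- hypothetical group-level override.
# Applies to every host in the [dns-clients] inventory group;
# host_vars still takes precedence over group_vars for individual hosts.
nameserver: 10.0.0.53
```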

common-role



This is the role that performs the standard tasks that must be run on all hosts: installing basic packages, creating users, and so on.
I described the structure a bit above; let's go through the details.

Role structure:
./roles/common/
├── defaults
│   └── main.yml
├── files
├── handlers
│   └── main.yml
├── tasks
│   ├── install_packages.yml
│   └── main.yml
├── templates
└── vars


The roles/common/defaults/main.yml file contains the default variables.
---
deb_packages:
  - curl
  - fail2ban
  - git
  - vim 
rh_packages:
  - curl
  - epel-release
  - git
  - vim


The files folder contains files that should be copied to the remote host.
The tasks folder lists all the tasks that must be completed when assigning a role to the host.
roles/common/tasks/
├── install_packages.yml
└── main.yml


roles/common/tasks/install_packages.yml
---
- name: installing Debian/Ubuntu pkgs
  apt: pkg={{ item }} update_cache=yes
  with_items: "{{deb_packages}}"
  when: (ansible_os_family == "Debian")
- name: install RHEL/CentOS packages
  yum: pkg={{ item }}
  with_items: "{{rh_packages}}"
  when: (ansible_os_family == "RedHat")


The with_items loop and the when condition are used here. If the distribution belongs to the Debian family, the packages from the deb_packages list are installed with the apt module; if it belongs to the RedHat family, the packages from the rh_packages list are installed with the yum module.
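As a side note, the apt and yum modules can also accept the whole list in one call, which avoids looping over the items (exact behaviour may vary between Ansible versions; this is a sketch of the equivalent Debian task, not the article's original code):

```yaml
# Equivalent task without with_items: pass the whole list to the module,
# so the package manager installs everything in a single transaction.
- name: installing Debian/Ubuntu pkgs
  apt:
    name: "{{ deb_packages }}"
    update_cache: yes
  when: ansible_os_family == "Debian"
```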

roles/common/tasks/main.yml
---
- include: install_packages.yml


(Yes, I really like splitting roles into separate files, one per group of tasks.)

The main.yml file simply includes the YAML files in the tasks folder that describe the individual tasks.
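In the converge output later in the article, main.yml also includes create_users.yml and delete_users.yml, which are not listed here. A minimal sketch of what such a create_users.yml might look like (the admin_users variable and its structure are assumptions, not from the original project):

```yaml
# roles/common/tasks/create_users.yml -- hypothetical sketch.
# admin_users is assumed to be a list of dicts like {name: ..., groups: ...}
# defined in defaults or vars; it is not shown in the original article.
- name: Create admin users
  user:
    name: "{{ item.name }}"
    groups: "{{ item.groups | default('sudo') }}"
    shell: /bin/bash
    state: present
  with_items: "{{ admin_users | default([]) }}"
```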

The templates folder contains templates in the Jinja2 format (the example with resolv.conf was considered above).

The handlers folder lists actions that can be triggered after certain tasks. Example: we have this task:
- name: installing Debian packages
  apt: pkg=fail2ban update_cache=yes
  when: (ansible_os_family == "Debian")
  notify:
    - restart fail2ban


and the handler roles/common/handlers/main.yml:
---
- name: restart fail2ban
  service: name=fail2ban state=restarted


In this case, after the apt: pkg=fail2ban update_cache=yes task runs, the restart fail2ban handler will fire. In other words, fail2ban restarts as soon as it is installed. If fail2ban is already installed on the system, the task reports no change, so the notification is not sent and the handler does not run.
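The nginx role from the directory tree above follows the same pattern, although its contents are never shown in the article. A hedged sketch of roles/nginx/tasks/configure.yml could render the nginx.conf.j2 template and notify a restart handler (the handler name and file paths here are assumptions):

```yaml
# roles/nginx/tasks/configure.yml -- hypothetical sketch, not from the article.
- name: Deploy nginx configuration
  template:
    src: nginx.conf.j2
    dest: /etc/nginx/nginx.conf
    owner: root
    mode: "0644"
  notify:
    - restart nginx   # assumed handler in roles/nginx/handlers/main.yml
```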

The vars folder holds variables that should take precedence over the defaults.
vars/common.yml:
---
deb_packages:
  - curl
  - fail2ban
  - vim
  - git
  - htop
  - atop
  - python-pycurl
  - sudo
rh_packages:
  - curl
  - epel-release
  - vim
  - git
  - fail2ban
  - htop
  - atop
  - python-pycurl
  - sudo
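The directory tree also shows a top-level site.yml and vars/nginx.yml that the article never lists. A hedged sketch of how such a site.yml might tie the roles and the variable file together (the host group names are assumptions):

```yaml
# site.yml -- hypothetical sketch; group names are assumptions.
- hosts: all
  become: yes
  roles:
    - common

- hosts: webservers
  become: yes
  vars_files:
    - vars/nginx.yml
  roles:
    - nginx
```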


Test-kitchen + serverspec.



Resources that were used:

serverspec.org/resource_types.html

github.com/test-kitchen/test-kitchen
github.com/portertech/kitchen-docker
github.com/neillturner/kitchen-verifier-serverspec
github.com/neillturner/kitchen-ansible
github.com/neillturner/omnibus-ansible

Test-kitchen is an integration-testing tool. It prepares the test environment and lets you quickly launch a container or virtual machine and test a playbook or role.
It can work with Vagrant, but we will use Docker as the driver.
It is installed as a gem — you can simply run gem install test-kitchen — but I prefer bundler. To use it, create a Gemfile in the project folder and list all the gems and their versions in it.
source 'https://rubygems.org'
gem 'net-ssh','~> 2.9'
gem 'serverspec'
gem 'test-kitchen'
gem 'kitchen-docker'
gem 'kitchen-ansible'
gem 'kitchen-verifier-serverspec'


It is very important to pin the version of the net-ssh gem, since test-kitchen will probably not work with a newer one.
Now you need to run bundle install and wait until all gems with dependencies are installed.
In the project folder, run kitchen init. A .kitchen.yml file appears in the folder, which should be reduced to something like this:
---
driver:
  name: docker
provisioner:
  name: ansible_playbook
  hosts: localhost
  require_chef_for_busser: false
  require_ansible_omnibus: true
  use_sudo: true
platforms:
  - name: ubuntu-14.04
    driver_config:
      image: vbatuev/ubuntu-rvm
  - name: debian-8
    driver_config:
      image: vbatuev/debian-rvm
verifier:
  name: serverspec
  additional_serverspec_command: source $HOME/.rvm/scripts/rvm
suites:
  - name: Common
    provisioner:
      name: ansible_playbook
      playbook: test/integration/default.yml
    verifier:
      patterns:
      - roles/common/spec/common_spec.rb


At this point, I had trouble running serverspec inside the container, so I had to apply a small workaround.
I built all the images myself and uploaded them to Docker Hub; each image has a kitchen user (the one the tests run as) and rvm with Ruby 2.3 installed.
The additional_serverspec_command parameter indicates that we will use rvm. This spares us the usual song and dance around Ruby versions in the standard repositories, gem dependencies, and launching rspec; without it, getting the serverspec tests to run takes some sweat.
The fact is that kitchen-verifier-serverspec is still pretty raw: while writing this article, I had to send the author several bug reports and PRs.

In the suites section, we specify a playbook with the role we want to check.
playbook: test/integration/default.yml
---
- hosts: localhost
  sudo: yes
  roles:
    - common


and the patterns for the serverspec tests:
    verifier:
      patterns:
      - roles/common/spec/common_spec.rb


What the test looks like:
common_spec.rb
require '/tmp/kitchen/roles/common/spec/spec_helper.rb'
describe package( 'curl' ) do
    it { should be_installed }
end


It is also very important to specify exactly this path in the require header; otherwise the file will not be found and the test will not run.

spec_helper.rb
require 'serverspec'
set :backend, :exec


A complete list of what serverspec can verify is listed here .

Commands:

kitchen test - runs all stages of the tests.
kitchen converge - launches a playbook in a container.
kitchen verify - launches serverspec.

The results should be something like this:

When converging the playbook:
       Going to invoke ansible-playbook with: ANSIBLE_ROLES_PATH=/tmp/kitchen/roles sudo -Es  ansible-playbook -i /tmp/kitchen/hosts  -c local -M /tmp/kitchen/modules         /tmp/kitchen/default.yml
       [WARNING]: log file at ./logs/ansible.log is not writeable and we cannot create it, aborting
       [DEPRECATION WARNING]: Instead of sudo/sudo_user, use become/become_user and 
       make sure become_method is 'sudo' (default). This feature will be removed in a 
       future release. Deprecation warnings can be disabled by setting 
       deprecation_warnings=False in ansible.cfg.
       PLAY ***************************************************************************
       TASK [setup] *******************************************************************
       ok: [localhost]
       TASK [common : include] ********************************************************
       included: /tmp/kitchen/roles/common/tasks/install_packages.yml for localhost
       TASK [common : install {{ item }} pkgs] ****************************************
       changed: [localhost] => (item=[u'curl', u'fail2ban', u'git', u'vim'])
       TASK [common : install {{ item }} packages] ************************************
       skipping: [localhost] => (item=[]) 
       TASK [common : include] ********************************************************
       included: /tmp/kitchen/roles/common/tasks/create_users.yml for localhost
       TASK [common : Create admin users] *********************************************
       TASK [common : include] ********************************************************
       included: /tmp/kitchen/roles/common/tasks/delete_users.yml for localhost
       TASK [common : Delete users] ***************************************************
       ok: [localhost] => (item={u'name': u'testuser'})
       RUNNING HANDLER [common : start fail2ban] **************************************
       changed: [localhost]
       PLAY RECAP *********************************************************************
       localhost                  : ok=7    changed=2    unreachable=0    failed=0   
       Finished converging  (3m58.17s).


When starting serverspec:
       Running Serverspec
       Package "curl"
         should be installed
       Package "vim"
         should be installed
       Package "fail2ban"
         should be installed
       Package "git"
         should be installed
       Finished in 0.12682 seconds (files took 0.40257 seconds to load)
       4 examples, 0 failures
       Finished verifying  (0m0.93s).


If everything went well, we have just prepared the first stage of testing Ansible playbooks and roles. In the next part, we will look at how to add even more automation and test Ansible infrastructure code with a great tool like Jenkins.

How do you check your playbooks?

Posted by Victor Batuev, DevOps administrator at Southbridge
