Amazon + Ansible


    In this article I want to talk about some approaches to solving problems with the ansible + amazon combination; perhaps it will be useful to someone, or someone will suggest better solutions. There is already plenty of information about installing and configuring ansible, so I will skip that. I can't add anything original about working with amazon either. So let's get started.

    Introduction


    I was asked to help with a one-off project setup, specifically the ansible + amazon part, as well as the configuration of ubuntu-based servers and services. The project requirements were as follows:
    • Load balancer servers on nginx
    • N nginx + nodejs backends
    • A redis master server
    • N redis slaves
    • N windows servers (something for cryptography, with the condition that a ready-made template is used)
    • Create SQS queues
    • Create security groups


    Note

    The customer wished that there be no need to patch servers that are already running: if something needs to change, you change the creation scripts, tear down the old servers and create new ones... It's the customer's call.

    Work

    For most tasks you can use the ec2 module. The module is wonderful :), with a lot of "submodules"; you can read their descriptions in /usr/share/ansible/cloud (on ubuntu/debian).
    In the ansible inventory file (hosts) you only need to list a single host:

    [local]
    localhost
    


    I started with the simplest part, the windows server. Here is an example playbook:

    - hosts: localhost
      connection: local
      gather_facts: False
      vars:
        hostname: Windows
        ec2_access_key: "Secret"
        ec2_secret_key: "Secret_key"
        instance_type: "t2.micro"
        image: "ami-xxxxxxxx"
        group: "launch-wizard-1"
        region: "us-west-2"
      tasks:
        - name: make one instance
          ec2: image={{ image }}
               instance_type={{ instance_type }}
               aws_access_key={{ ec2_access_key }}
               aws_secret_key={{ ec2_secret_key }}
               instance_tags='{ "Name":"{{ hostname }}" }'
               region={{ region }}
               group={{ group }}
               wait=true
    


    It's simple: using your aws_access_key and aws_secret_key, ansible sends amazon a request to create a machine named hostname = "Windows" of type instance_type = "t2.micro" from the previously prepared image image = "ami-xxxxxxxx" in the region region = "us-west-2", assigns it the security group group = "launch-wizard-1", and nothing more.
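    For illustration, roughly the same request can be made directly with the boto library. This is just a sketch, not part of the original setup: the helper function name is mine, and the credentials, AMI id and group name are placeholders mirroring the playbook variables.

```python
# Sketch of the same request made directly with the boto library.
# The connection setup is commented out; credentials, AMI id and
# group name below are placeholders mirroring the playbook variables.

def launch_instance(conn, image, instance_type, group, name):
    """Run one instance and tag it with a Name, like the playbook does."""
    reservation = conn.run_instances(
        image,
        instance_type=instance_type,
        security_groups=[group],
    )
    instance = reservation.instances[0]
    instance.add_tag("Name", name)
    return instance

# Usage with real credentials would look like:
# import boto.ec2
# conn = boto.ec2.connect_to_region(
#     "us-west-2",
#     aws_access_key_id="Secret",
#     aws_secret_access_key="Secret_key",
# )
# launch_instance(conn, "ami-xxxxxxxx", "t2.micro", "launch-wizard-1", "Windows")
```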

    Next it gets more complicated: let's do the nginx backends. It starts out the same, we create a machine, but then we need to work with it in the same playbook.
    Create an amazon keypair in your account, name it, for example, aws_ansible. Download the key and copy it to ~/.ssh/id_rsa of the user you run playbooks as.

    I’ll give an example of a backend creation playbook:

    - hosts: localhost
      connection: local
      gather_facts: False
      vars:
        hostname: Nginx_nodejs
        ec2_access_key: "Secret"
        ec2_secret_key: "Secret_key"
        keypair: "aws_ansible"
        instance_type: "t2.micro"
        image: "ami-33db9803"
        group: "launch-wizard-1"
        region: "us-west-2"
      tasks:
        - name: make one instance
          ec2: image={{ image }}
               instance_type={{ instance_type }}
               aws_access_key={{ ec2_access_key }}
               aws_secret_key={{ ec2_secret_key }}
               keypair={{ keypair }}
               instance_tags='{ "Name":"{{ hostname }}" , "Group":"nginx_backend" }'
               region={{ region }}
               group={{ group }}
               wait=true
          register: ec2_info
        - debug: var=ec2_info
        - debug: var=item
          with_items: ec2_info.instance_ids
        - add_host: hostname={{ item.public_ip }} groupname=ec2hosts
          with_items: ec2_info.instances
        - name: wait for instances to listen on port:22
          wait_for:
            state=started
            host={{ item.public_dns_name }}
            port=22
          with_items: ec2_info.instances
    - hosts: ec2hosts
      gather_facts: True
      user: ubuntu
      sudo: True
      vars:
         connections : "4096"
      tasks:
         - include: nginx/tasks/setup.yml
      handlers:
         - name: restart nginx
           action: service name=nginx state=restarted
    - hosts: ec2hosts
      gather_facts: True
      user: ubuntu
      sudo: True
      tasks:
         - include: nodejs/tasks/setup.yml
    


    Now, what has changed:
    We specified that this machine should use our keypair key: "aws_ansible", and pointed at a clean ubuntu image: "ami-33db9803".
    Using register and debug we obtained the public_ip of the new machine and, with add_host, recorded it in a temporary inventory under the ec2hosts group; it is not possible to write to the hosts file from a playbook (at least, I did not find a way).
    The next task, "wait for instances to listen on port:22", waits until ssh becomes available.
    And after all that we run the usual plays against an ordinary server, in my case installing and configuring nginx and nodejs.
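    The wait_for step boils down to polling the port until a TCP connection succeeds. A minimal sketch of that logic in python (the function name and timeout values are my own, not from the playbook):

```python
import socket
import time

def wait_for_port(host, port, timeout=300, delay=2):
    """Poll host:port until a TCP connect succeeds or the timeout expires.

    Returns True once the port accepts connections, False on timeout.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            sock = socket.create_connection((host, port), timeout=5)
            sock.close()
            return True
        except OSError:
            time.sleep(delay)
    return False
```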

    I also added the tag "Group": "nginx_backend"; it is needed in order to work with all the backends at once. How? There is a dynamic inventory script for amazon servers in ansible. You can read about it, and download it, here: docs.ansible.com/intro_dynamic_inventory.html#id6 .
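    The ec2.py inventory script prints JSON that maps group names (including tag-based groups such as tag_Group_nginx_backend) to host lists. A sketch of picking one group's hosts out of that output (the helper name is my own):

```python
import json

def hosts_in_group(inventory_json, group):
    """Extract the host list for one group from ec2.py-style JSON output.

    Depending on the version, a group entry may be a plain list of hosts
    or a dict with a "hosts" key; handle both.
    """
    data = json.loads(inventory_json)
    entry = data.get(group, [])
    if isinstance(entry, dict):
        return entry.get("hosts", [])
    return entry

# Usage: feed it the output of `./ec2.py --list`.
```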

    Great, but my situation is slightly different: I need an upstream in nginx with a number of backends that is not known in advance. After wandering through the expanses of the ansible documentation, I did not find how to make dynamic lists. That is, substituting the backend ips dynamically is fine, but changing their number... As usual, the old-school way came to the rescue: I reinvented the wheel in python, a small script that is called from the playbook before configuring nginx and generates a config with the upstream.

    Listing:

    #!/usr/bin/env python
    # python 2 script: the commands module was removed in python 3
    from commands import getoutput

    group = '"tag_Group_nginx_backend": ['
    template = "/etc/ansible/playbooks/nginx/templates/balance.conf.j2"
    list_ip = []

    # Refresh the ec2.py inventory cache and capture its output
    data = getoutput("/etc/ansible/ec2.py --refresh-cache")

    # Collect the ips listed under the tag_Group_nginx_backend group
    flag = 0
    for line in data.split("\n"):
        if flag:
            if line.strip() != "],":
                list_ip.append(line.strip().strip(",").strip('"'))
            else:
                break
        if line.strip() == group:
            flag = 1

    # Write the nginx upstream config, opening the template file once
    f = open(template, 'w')
    f.write('# upstream list\nupstream backend {')
    for ip in list_ip:
        f.write('\n    server ' + ip + ':80 weight=3 fail_timeout=15s;')
    f.write('\n}\n')
    f.close()
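    To make the result concrete, here is a small pure function sketching what the script writes for a given list of ips (my own helper, for illustration only):

```python
def render_upstream(ips):
    """Render an nginx upstream block like the script above produces."""
    lines = ["# upstream list", "upstream backend {"]
    for ip in ips:
        lines.append("    server %s:80 weight=3 fail_timeout=15s;" % ip)
    lines.append("}")
    return "\n".join(lines)

# For ips ["10.0.0.1", "10.0.0.2"] this yields:
# # upstream list
# upstream backend {
#     server 10.0.0.1:80 weight=3 fail_timeout=15s;
#     server 10.0.0.2:80 weight=3 fail_timeout=15s;
# }
```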
    


    Another problem was with the redis master: its ip had to be handed to every slave. I decided to do it with include_vars.

    When creating the master, before the ssh availability check, I do this:

        - replace: dest={{ redis_master_ip }} regexp='^(\s+)(master\:)\s(.*)$' replace='\1\2 {{ item.public_ip }}'
          with_items: ec2_info.instances
    

    In the variables I specified:

      redis_master_ip: "/etc/ansible/playbooks/redis/files/master_ip.yml"
    

    The file itself must exist beforehand and looks something like this:

     master: 1.2.3.4
    

    Then, in the redis slave setup playbook, add:

    - name: Get master IP
      include_vars: "{{ redis_master_ip }}"
    


    We use the variable {{master}} in the template.
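    The replace module task above is essentially a regexp substitution over the vars file. A sketch of the same edit in python, using the exact regexp from the task (the helper name is mine; file I/O is omitted):

```python
import re

def set_master_ip(text, new_ip):
    """Rewrite the `master: x.x.x.x` line in the vars file, like the replace task."""
    return re.sub(r'^(\s+)(master\:)\s(.*)$', r'\1\2 ' + new_ip, text, flags=re.M)
```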

    A security group is created simply, using the ec2_group module:

    - hosts: localhost
      connection: local
      tasks:
        - name: nginx  ec2 group
          local_action:
            module: ec2_group
            name: nginx
            description: an nginx EC2 group
            region: us-west-2
            aws_secret_key: "Secret"
            aws_access_key: "Secret"
            rules:
              - proto: tcp
                from_port: 80
                to_port: 80
                cidr_ip: 192.168.0.0/24
              - proto: tcp
                from_port: 22
                to_port: 22
                cidr_ip: 0.0.0.0/0
            rules_egress:
              - proto: all
                cidr_ip: 0.0.0.0/0
    


    Things turned out harder with the queues: there was no module for them, and I even tried to finish someone else's attempt at writing one. But I quickly came to my senses and did it through cloudformation.

    This is the playbook:

    - hosts: localhost
      connection: local
      gather_facts: False
      vars:
        sqs_access_key: "Secret"
        sqs_secret_key: "Secret"
        region: "us-west-2"
      tasks:
      - name: launch some aws services
        cloudformation: >
          stack_name="TEST"
          region={{ region }}
          template=files/cloudformation.json
    And the template itself:
    
    {
      "AWSTemplateFormatVersion" : "2010-09-09",
      "Description" : "AWS CloudFormation SQS”
      "Resources" : {
        "MyQueue" : {
          "Type" : "AWS::SQS::Queue"
        }
      },
      "Outputs" : {
        "QueueURL" : {
          "Description" : "URL of newly created SQS Queue",
          "Value" : { "Ref" : "MyQueue" }
        },
        "QueueARN" : {
          "Description" : "ARN of newly created SQS Queue",
          "Value" : { "Fn::GetAtt" : ["MyQueue", "Arn"]}
        }
      }
    }
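    A quick sanity check of such a template before handing it to cloudformation is just parsing the JSON (this helper and the check it performs are my own, for illustration; the file path in the usage comment is the one from the playbook):

```python
import json

def check_sqs_template(text):
    """Parse a CloudFormation template and confirm it declares an SQS queue."""
    tpl = json.loads(text)
    types = [r["Type"] for r in tpl.get("Resources", {}).values()]
    return "AWS::SQS::Queue" in types

# Usage:
# with open("files/cloudformation.json") as fh:
#     assert check_sqs_template(fh.read())
```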
    


    Summary

    The popularity of cloud solutions and configuration management systems keeps growing, but I spent a fair amount of time hunting for ways to solve one problem or another and gathering information. Hence the idea of writing this article.

    Author: Roman Burnashev, lead system administrator at centos-admin.ru
