New network automation features in Red Hat Ansible

    Given the significant improvements in Ansible Engine 2.6, the release of Ansible Tower 3.3, and the recent release of Ansible Engine 2.7, let's take a closer look at what they mean for network automation.

    In this post:

    • The httpapi connection plugin.
      • Support for Arista eAPI and Cisco NX-API.
    • New network automation modules.
      • net_get and net_put.
      • netconf_get, netconf_rpc, and netconf_config.
      • cli_command and cli_config.
    • The improved Ansible Tower web interface.
    • Credential management in Ansible Tower when working with network devices.
    • Simultaneous use of different versions of Ansible in Tower.

    Keep in mind that each new release of Ansible is accompanied by an updated porting guide, which will greatly ease your transition to the new version.

    HTTPAPI connection plugin

    Connection plugins are what allow Ansible to connect to a target host in order to run tasks on it. Starting with Ansible 2.5, a new plugin of this type called network_cli became available. It eliminates the need for the provider parameter and standardizes how network modules are executed, so playbooks for network devices are now written, run, and behave exactly the same way as playbooks for Linux hosts. In turn, Ansible Tower no longer sees any difference between network devices and other hosts, and no longer needs to distinguish between their usernames and passwords. You can read more about it here, but in short, credentials for a Linux server and for an Arista EOS switch or a Cisco router can now be stored and used in the same way.

    In Ansible 2.5, connecting via eAPI or NX-API was only possible with the old provider method. Ansible 2.6 removes this restriction, and you can freely use the high-level httpapi connection method. Let's see how this is done.

    First, eAPI or NX-API needs to be enabled on the network device so that the httpapi method can then be used. This is easily done with the corresponding Ansible module; for example, here's how to enable eAPI on an Arista EOS switch:

    [user@rhel7]$ ansible -m eos_eapi -c network_cli leaf01
    leaf01 | SUCCESS => {
        "ansible_facts": {
            "eos_eapi_urls": {
                "Ethernet1": [
                    ...
                ],
                "Management1": [
                    ...
                ]
            }
        },
        "changed": false,
        "commands": []
    }

    When connected to a real Arista EOS switch, the show management api http-commands command will show that the API is enabled:

    leaf01# show management api http-commands
    Enabled:      Yes
    HTTPS server: running, set to use port 443
    <<<rest of output removed for brevity>>>

    The following Ansible Playbook simply executes the show version command and then, in a debug task, returns only the version field from the JSON output of the task.

    - hosts: leaf01
      connection: httpapi
      gather_facts: false
      tasks:
        - name: type a simple arista command
          eos_command:
            commands:
              - show version | json
          register: command_output
        - name: print command output to terminal window
          debug:
            var: command_output.stdout[0]["version"]

    Running this script will produce the following result:

    [user@rhel7]$ ansible-playbook playbook.yml
    PLAY [leaf01]********************************************************
    TASK [type a simple arista command] *********************************
    ok: [leaf01]
    TASK [print command output to terminal window] **********************
    ok: [leaf01] => {
         "command_output.stdout[0][\"version\"]": "4.20.1F"
    }
    PLAY RECAP***********************************************************
    leaf01 : ok=2 changed=0 unreachable=0 failed=0

    NOTE: Arista eAPI does not support abbreviated versions of commands (such as “show ver” instead of “show version”), so you have to write them out in full. More information about the httpapi plugin can be found in the documentation.
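    For reference, here is a minimal sketch of the inventory variables that drive an httpapi connection. The file name and credentials are placeholders; the variable names follow the Ansible 2.6 httpapi documentation:

```yaml
# group_vars/eos.yml - hypothetical group variables for Arista EOS over eAPI
ansible_connection: httpapi             # use the httpapi connection plugin
ansible_network_os: eos                 # selects the eos httpapi plugin
ansible_user: admin                     # placeholder credentials
ansible_password: changeme
ansible_httpapi_use_ssl: true           # talk to eAPI over HTTPS (port 443)
ansible_httpapi_validate_certs: false   # often disabled in lab environments
```

    With these variables set for a group, playbooks only need connection: httpapi (or nothing at all) and the right hosts line.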

    New network automation modules

    Ansible 2.6 and the October release of Ansible 2.7 introduced seven new network automation modules.

    • net_get - copies a file from a network device to the Ansible controller.
    • net_put - copies a file from the Ansible controller to a network device.
    • netconf_get - retrieves configuration/state information from a NETCONF-enabled network device.
    • netconf_rpc - performs operations on a NETCONF-enabled network device.
    • netconf_config - configures a NETCONF device; the module lets the user send an XML file to the device and checks whether the configuration changed.
    • cli_command - runs a command on network devices that have a command-line interface (CLI).
    • cli_config - pushes a text configuration to network devices via network_cli.

    net_get and net_put

    • net_get - copies a file from a network device to the Ansible controller.
    • net_put - copies a file from the Ansible controller to a network device.

    The net_get and net_put modules are vendor-agnostic and simply copy files to and from network devices using the standard SCP or SFTP transfer protocols (chosen via the protocol parameter). Both modules require the network_cli connection method, and they work only if scp is installed on the controller (pip install scp) and SCP or SFTP is enabled on the network device.

    For the playbook example below, assume we have already run the following command on the leaf01 device:

    leaf01#copy running-config flash:running_cfg_eos1.txt
    Copy completed successfully.

    Here’s what a playbook with two tasks looks like (the first one copies the file from the Leaf01 device, and the second one copies the file to the Leaf01 device):

    - hosts: leaf01
      connection: network_cli
      gather_facts: false
      tasks:
        - name: COPY FILE FROM THE NETWORK DEVICE
          net_get:
            src: running_cfg_eos1.txt
        - name: COPY FILE TO THE NETWORK DEVICE
          net_put:
            src: temp.txt
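    Both modules also accept an optional dest path and the protocol parameter mentioned above. As a hedged sketch (the file and directory names are illustrative), the same copy could pin the transfer to SFTP like this:

```yaml
# Hypothetical variant: choose SFTP explicitly and control the destination path
- name: COPY FILE FROM THE NETWORK DEVICE OVER SFTP
  net_get:
    src: running_cfg_eos1.txt      # file on the device
    dest: ./backups/leaf01.txt     # where to store it on the controller
    protocol: sftp                 # default is scp
```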

    netconf_get, netconf_rpc and netconf_config

    • netconf_get - retrieves configuration/state information from a NETCONF-enabled network device.
    • netconf_rpc - performs operations on a NETCONF-enabled network device.
    • netconf_config - configures a NETCONF device; the module lets the user send an XML file to the device and checks whether the configuration changed.

    The NETCONF (Network Configuration Protocol) network management protocol was developed and standardized by the IETF. According to RFC 6241, NETCONF can be used to install, manipulate, and delete the configuration of network devices. NETCONF is an alternative to the SSH command line (network_cli) and to APIs like Cisco NX-API or Arista eAPI (httpapi).

    To demonstrate the new netconf modules, we first enable NETCONF on Juniper routers using the junos_netconf module. Not all network platforms support NETCONF, so consult the documentation before using it.

    [user@rhel7 ~]$ ansible -m junos_netconf juniper -c network_cli
    rtr4 | CHANGED => {
        "changed": true,
        "commands": [
             "set system services netconf ssh port 830"
        ]
    }
    rtr3 | CHANGED => {
        "changed": true,
        "commands": [
             "set system services netconf ssh port 830"
        ]
    }

    Juniper Networks provides a Junos XML API Explorer for operational tags and for configuration tags. Consider the RPC example that Juniper uses in its documentation to request information about a specific interface:

    <rpc>
      <get-interface-information>
        <interface-name>em1.0</interface-name>
      </get-interface-information>
    </rpc>

    This translates easily into an Ansible Playbook: get-interface-information is the RPC call, and additional parameters are specified as key-value pairs. In this example there is only one parameter, interface-name, and on our network device we just want to look at em1.0. We use the task's register parameter to save the results, and then the debug module to print them to the terminal. The netconf_rpc module can also translate the XML output directly into JSON.

    - hosts: juniper
      gather_facts: no
      connection: netconf
      tasks:
        - name: GET INTERFACE INFO
          netconf_rpc:
            display: json
            rpc: get-interface-information
            content:
              interface-name: "em1.0"
          register: netconf_info
        - name: PRINT OUTPUT TO TERMINAL WINDOW
          debug:
            var: netconf_info

    After launching the playbook, we’ll get this:

    ok: [rtr4] => {
        "netconf_info": {
            "changed": false,
            "failed": false,
            "output": {
                "rpc-reply": {
                    "interface-information": {
                        "logical-interface": {
                            "address-family": [
                                {
                                    "address-family-flags": {
                                        "ifff-is-primary": ""
                                    },
                                    "address-family-name": "inet",
                                    "interface-address": [
                                        {
                                            "ifa-broadcast": "",
                                            "ifa-destination": "10/8",
                                            "ifa-flags": {
                                                "ifaf-current-preferred": ""
                                            },
                                            "ifa-local": ""
    <<<rest of output removed for brevity>>>

    Additional information can be found in the Juniper platform guide and in the NETCONF documentation.
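    To round out the trio, netconf_config can push an XML fragment to the device and report whether the configuration changed. A hedged sketch, assuming a Junos device (the hostname value is purely illustrative):

```yaml
# Hypothetical example: set the hostname on a Junos device via NETCONF
- name: UPDATE HOSTNAME VIA NETCONF
  netconf_config:
    content: |
      <config>
        <configuration>
          <system>
            <host-name>rtr4-lab</host-name>
          </system>
        </configuration>
      </config>
```

    Because netconf_config reports changed status, reruns of the same content should come back with changed=false.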

    cli_command and cli_config

    • cli_command - runs a command on network devices via their command-line interface (if they have one).
    • cli_config - pushes a text configuration to network devices via network_cli.

    The cli_command and cli_config modules, which appeared in Ansible Engine 2.7, are likewise vendor-agnostic and can be used to automate various network platforms. They rely on the value of the ansible_network_os variable (set in the inventory file or in the group_vars folder) to pick the right cliconf plugin. A list of all possible values of ansible_network_os is provided in the documentation. In this version of Ansible you can still use the old platform-specific modules, so there is no rush to rewrite your existing playbooks. Additional information can be found in the official porting guides.

    Let's see how these modules are used in an Ansible Playbook. This playbook runs against two Cisco Cloud Services Router (CSR) systems running IOS-XE. The ansible_network_os variable in the inventory file is set to ios.

    <config snippet from inventory>
    rtr1 ansible_host= 
    rtr2 ansible_host=
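    A minimal sketch of the corresponding group variables (the file name and credentials are illustrative; ansible_connection and ansible_network_os are the documented variable names):

```yaml
# group_vars/cisco.yml - hypothetical variables for the IOS-XE routers
ansible_connection: network_cli   # SSH CLI connection plugin
ansible_network_os: ios           # selects the ios cliconf plugin
ansible_user: admin               # placeholder credentials
ansible_password: changeme
```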

    Here's how cli_config and cli_command are used:

    - name: AGNOSTIC PLAYBOOK
      hosts: cisco
      gather_facts: no
      connection: network_cli
      tasks:
        - name: CONFIGURE DNS
          cli_config:
            config: ip name-server
        - name: CHECK CONFIGURATION
          cli_command:
            command: show run | i ip name-server
          register: cisco_output
        - name: PRINT OUTPUT TO SCREEN
          debug:
            var: cisco_output.stdout

    And here is the output of this playbook:

    [user@rhel7 ~]$ ansible-playbook cli.yml
    PLAY [AGNOSTIC PLAYBOOK] *********************************************
    TASK [CONFIGURE DNS] *************************************************
    ok: [rtr1]
    ok: [rtr2]
    TASK [CHECK CONFIGURATION] *******************************************
    ok: [rtr1]
    ok: [rtr2]
    TASK [PRINT OUTPUT TO SCREEN] ****************************************
    ok: [rtr1] => {
        "cisco_output.stdout": "ip name-server"
    }
    ok: [rtr2] => {
        "cisco_output.stdout": "ip name-server"
    }
    PLAY RECAP **********************************************************
    rtr1 : ok=3 changed=0 unreachable=0 failed=0
    rtr2 : ok=3 changed=0 unreachable=0 failed=0
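    cli_command can also handle interactive prompts through its prompt and answer parameters. A hedged sketch (the exact prompt text varies by platform, so treat the values as placeholders):

```yaml
# Hypothetical example: answer an interactive confirmation prompt
- name: RELOAD THE DEVICE
  cli_command:
    command: reload
    prompt:
      - confirm        # regex matched against the device's prompt
    answer:
      - y
```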

    Please note that these modules are idempotent as long as you use the appropriate syntax for the corresponding network devices.

    Improved Ansible Tower Interface

    The web interface in Red Hat Ansible Tower 3.3 has become more convenient and more functional, reducing the number of clicks required for common tasks.

    Credential management, the job scheduler, inventory scripts, role-based access control, notifications, and more are now gathered in the main menu on the left side of the screen and available in one click. The Jobs view immediately displays important additional information: who started the job and when, which inventory it ran against, and which project the playbook came from.

    Managing credentials for network devices in Ansible Tower

    With Red Hat Ansible Tower 3.3, managing logins and passwords for network devices is much easier. In the new web interface, the Credentials item sits right in the main menu, in the Resources group.

    Ansible Tower 3.3 still has the special “network” credential type (Network), which sets the ANSIBLE_NET_USERNAME and ANSIBLE_NET_PASSWORD environment variables used by the old provider method. All of this still works, as described in the documentation, so existing Ansible Playbooks can continue to be used. However, for the new high-level connection methods httpapi and network_cli, the network credential type is no longer needed, since Ansible now handles credentials for network devices exactly the same way as for Linux hosts. So for network devices you now simply select the Machine credential type, enter a username and password, or provide an SSH key.

    NOTE: Ansible Tower encrypts the password immediately after you save credentials.

    Thanks to encryption, credentials can be safely delegated to other users and groups - they simply won't see or know the password. More about how credentials work, what kind of encryption is used, and what other credential types are available (for example, Amazon AWS, Microsoft Azure, and Google GCE) can be found in the Ansible Tower documentation.

    A more detailed description of Red Hat Ansible Tower 3.3 can be found here.

    Simultaneous use of different versions of Ansible in Tower

    Suppose we want some playbooks to run under Ansible Engine 2.4.2 and others under Ansible Engine 2.6.4. Tower uses the virtualenv tool to create isolated Python environments and avoid problems with conflicting dependencies and differing versions. Ansible Tower 3.3 lets you assign a virtualenv at different levels: Organization, Project, or Job Template. Here is the Job Template we created in Ansible Tower for backing up our network configuration.

    When you have at least two virtual environments, an Ansible Environment drop-down menu appears in the Ansible Tower interface, where you can easily specify which version of Ansible should be used for a job.

    So if you have a mix of network automation playbooks, some of which use the old provider method (Ansible 2.4 and earlier) and some the new httpapi or network_cli plugins (Ansible 2.5 and later), you can easily assign each job its own Ansible version. This feature is also useful if development and production run different versions of Ansible. In other words, you can safely upgrade Tower without being forced onto a single version of Ansible Engine, which greatly increases flexibility across network platforms, environments, and use cases.

    Additional information on using virtualenv can be found in the documentation .
