Configuring ssh callback on the server in conjunction with Ansible

    Everyone knows that ssh can do port forwarding (create tunnels). You may also have learned from the ssh manual that OpenSSH can dynamically allocate ports for remote forwarding and can execute strictly defined commands. Also, everyone knows that for Ansible (not counting Tower) there is no such thing as a server and a client (in the sense of ansible-server / ansible-agent): there is a playbook that can be executed either locally or remotely over an ssh connection. There is also ansible-pull, a script that checks a git repository with your playbooks and, if there are changes, runs a playbook to apply the updates. Where you cannot push, in most cases you can pull, but there are exceptions.

    In this article I will try to show how you can use dynamic port allocation for ssh tunnels to implement a poor man's version of the provisioning-callback feature on any server with OpenSSH and Ansible, and how I arrived at it.

    So, what if you still need a (central) server on which the Ansible project is stored, perhaps even one with secret keys and access to the entire infrastructure? A server to which, for example, new hosts can connect for initial configuration (initialization). Sometimes this initialization also affects other hosts, such as an http balancer. This is where ssh remote port forwarding and an ssh back-connect can help a lot.

    In principle, using tunnels, you can do various useful things, in particular, the reverse connection is useful for:
    • Supporting a remote machine behind NAT (helping in a shell, configuring the environment, repairing something, etc.);
    • Performing backups (I don’t know why, but it’s possible);
    • Setting up access to your workstation in the office.

    You can probably come up with other uses here as well; it all depends on your imagination. For now, let’s dwell on what a reverse ssh connection actually is.

    A quick reference on how a reverse connection works


    No magic, only OpenSSH-Client and OpenSSH-Server. The whole process of creating a tunnel by a client in one command:
    ssh -f -N -T -R22222:localhost:22 server.example.com

    -f - go to background mode (this flag and the two that follow are optional);

    -N - do not execute remote commands;

    -T - disable pseudo-terminal (pty) allocation; useful if you need to run this command from cron.

    -R [bind_address:]port - by default the server binds to 127.0.0.1, and the port can be arbitrary (pick one from the high range); we set 22222. Accordingly, on the server you can connect back to the client via 127.0.0.1 port 22222, which is forwarded to the client's port 22.

    After that, on the server you can simply do:
    ssh localhost -p22222

    And start performing actions for remote support / configuration / backups / running other commands.
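    If you need to verify on the server that the tunnel is really listening, something like this will do (netstat -tlnp is the older equivalent; the output below is approximate):
    ansible@server:~$ ss -ltn 'sport = :22222'
    State   Recv-Q  Send-Q  Local Address:Port   Peer Address:Port
    LISTEN  0       128        127.0.0.1:22222              *:*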

    A little more about setting up and authorization


    If you know all about this, then you can skip this part.

    Read if you do not know how to configure key access
    Suppose we have an ansible user on a central server (SCM / Backup / CI / etc.) and the same kind of user on a client machine (the names are not important, they may differ). OpenSSH server and client are installed on both.

    On the client machine (as on the server) we generate an ssh key (rsa, for example):
    ssh-keygen -b 4096 -t rsa -f $HOME/.ssh/id_rsa

    The client and server administrators exchange public keys. The central server administrator should add something like this to authorized_keys:
    $ cat  $HOME/.ssh/authorized_keys
    command="echo 'Connect successful!'; countdown 3600",no-agent-forwarding,no-X11-forwarding ssh-rsa AAAAB3NzaC1...JhPWP ansible@dev.example.com

    Read more about the options in "man sshd" (the AUTHORIZED_KEYS FILE FORMAT section). In my case, a function is triggered here that runs a countdown and closes the connection after an hour (the -f / -N switches are not used in that case).
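    The countdown helper is not a standard command; here is a minimal sketch of what it might look like (the name and output format come from the example below, the implementation itself is my assumption), placed somewhere the forced command can see it, e.g. a script on PATH:

    # Hypothetical countdown helper: prints HH:MM:SS once per second
    # and returns when the time is up, which ends the forced command.
    countdown () {
        local secs="$1"
        while [ "$secs" -gt 0 ]; do
            secs=$((secs - 1))
            printf '\r%02d:%02d:%02d' $((secs / 3600)) $((secs % 3600 / 60)) $((secs % 60))
            sleep 1
        done
        printf '\n'
    }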

    After that, the client will be able to make a back-connect to the server and will see something like this:
    ansible@dev:~$ ssh -T -R22222:localhost:22 server.example.com
    Connect successful!
    00:59:59
    

    Seeing the countdown, the user (a developer / accountant?!) happily informs the server administrator that the tunnel is up (if he doesn’t already know) and that the machine can now be worked on.

    The administrator only needs to connect using the ssh client and start shamanism:
    ansible@server:~$ ssh localhost -p22222

    Everything is simple, clear and accessible.

    But what if you need to make such a connection without any user or administrator involvement, for example for backups, or to automate server configuration when scaling a web project with dynamically growing computing power? More on that below.


    From idea to turnkey solution


    The idea of automating routine processes and keeping everything under control is quite persistent and familiar (probably) to any system administrator.

    And even if your servers are now in complete order, you usually know little about the developer's working environment. In my case, the developer's working environment involves a virtual machine (VM) that mirrors production almost completely.

    People come and go, and the base VM image we hand out to newcomers keeps changing. To keep the local dev environment in sync with stage / production and to cut down on manual work, a playbook was written that applied the same roles as in the production environment, and a corresponding cron job was set up.
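    That pull mode can be as simple as a cron job around ansible-pull; a sketch (the repository URL and playbook name are made up):
    # Check the repo every 30 minutes; -o applies the playbook only if it changed
    */30 * * * * ansible-pull -o -U https://git.example.com/infra/ansible.git local.yml >> /var/log/ansible-pull.log 2>&1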
    In principle this all worked: the VMs received updates in pull mode. But the moment came when we began to store some important keys and passwords in the repository (encrypted, of course), and it became obvious that this "secrecy" would lose all meaning if we handed out our vault password to everyone. So it was decided to push changes to the VMs over ssh tunnels.

    At first there was the simple idea of hardcoding everything so that each client connected on a predefined port on the server. In principle that is fine if you have 3-5 people, even 10-15. But what if in six months there are 50-100 of them? You could probably invent yet another "playbook" to manage all of this for us, but that is not our way. I started thinking, reading man pages, and googling.

    If you look at the manual (man ssh), you can find the following lines there:
         -R [bind_address:]port:host:hostport
    ...
    If the port argument is '0', the listen port will be dynamically allocated on the server and reported to the client at run time. When used together with -O forward the allocated port will be printed to the standard output.

    That is, the ssh server can allocate ports dynamically, but only the client will know about them. On the server you can see the list of listening ports (netstat / lsof), but since there can be several simultaneous connections, this information is fairly useless: it is not clear which port belongs to which client.
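    This is easy to see by hand: requesting port 0 makes the client print the allocated port (the number differs from run to run):
    ansible@dev:~$ ssh -N -T -R0:localhost:22 server.example.com
    Allocated port 43022 for remote forward to localhost:22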

    Then I stumbled upon an article in which the author said he had written a patch for OpenSSH that adds an SSH_REMOTE_FORWARDING_PORTS variable containing the local ports assigned when the reverse tunnel is initialized. Unfortunately, the patch was never accepted: the OpenSSH developers are very conservative. Judging by the correspondence, they did their best to push back and propose alternative solutions. Perhaps not without reason. :)
    After some thought, I came up with a simple crutch for telling the server which port it had allocated. When connecting to the server, the client can execute commands on it by passing them as a command-line argument; with a forced command in place, the server sees that argument in the SSH_ORIGINAL_COMMAND variable. Nothing prevents us from creating the tunnel in the background, saving the output that contains the port, parsing out just the port number, and passing it to the server with a second command. On the server, a wrapper script then substitutes SSH_ORIGINAL_COMMAND as the port for ansible-playbook to connect to.
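    On the server side this is wired up with a forced command in authorized_keys, roughly like this (the path to the initial_run script, shown further below, is my assumption):
    command="/home/ansible/initial_run",no-agent-forwarding,no-X11-forwarding ssh-rsa AAAAB3NzaC1...JhPWP ansible@dev.example.com

    When the client runs ssh server.example.com "$port", sshd does not execute the requested command itself; it exports it as SSH_ORIGINAL_COMMAND and runs initial_run instead.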

    What does it look like?


    On the client (a fragment of a script with a connection function):
    ansible@client:~$ cat ssh-tunnel

    #!/usr/bin/env bash
    SERVER="server.example.com"
    REMOTE_PORT="22"
    BACKCONNECT_PORT="0"
    KEY="/home/ansible/.ssh/id_rsa"
    ...
    # Connect and update function
    exec_update ()
    {
            tunnel_args="-o ControlMaster=auto -o ControlPersist=600 -o ControlPath=/tmp/%u%r@%h-%p"
            out_file="/tmp/ssh_tunnel_$USER.out"
    # Check log file exists and clean
    	touch $out_file
    	truncate -s 0 $out_file
    # Start connection
    	echo "Initializing ssh backconnect to remote address: $SERVER"
    	echo "Pulling updates from $SERVER"
    	echo "Press ctrl+c to interrupt connection"
    	ssh -f -N -T -R0:localhost:22 ansible@"$SERVER" -p"$REMOTE_PORT" -i"$KEY" $tunnel_args -E "$out_file"
    # Wait for port allocation
    	sleep 5
    # Get the port number
    	port=$(awk '{print $3}' "$out_file")
    # Print port to stdout
    	echo "Port open on $SERVER: $port"
    # Connect again to initialize the update process
    	ssh $SERVER "$port"
    # Close tunnel
    	if ssh -T -O "exit" -o ControlPath=/tmp/%u%r@%h-%p $SERVER; then
    		echo "Done"
    	else
    		echo "Ssh-tunnel connection can't be closed. Command failed!"
    		echo "Please add folowing lines to $HOME/.ssh/config: "
    		echo 'Host * '
    		echo 'ControlMaster auto '
    		echo 'ControlPath /tmp/%u%r@%h-%p '
    		echo 'ControlPersist 600 '
    		exit 1
    	fi
    }
    ...

    The function works in two passes: the first creates a persistent tunnel with multiplexing, and the second passes the received port value to the server and triggers the reverse connection. After the script has finished, the connection to the server is closed through the control socket.
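    A run of the script then looks roughly like this (the port number will differ; the output lines come from the echo statements above):
    ansible@client:~$ ./ssh-tunnel
    Initializing ssh backconnect to remote address: server.example.com
    Pulling updates from server.example.com
    Press ctrl+c to interrupt connection
    Port open on server.example.com: 43022
    Done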

    Here I had to play with the options a little so that everything would start both manually from a terminal and from cron.
    For cron, you need to explicitly set, in the cron file, the variables that have to be passed to the script.
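    A sketch of such a cron file (the schedule and file name are illustrative):
    ansible@client:~$ cat /etc/cron.d/ansible-tunnel
    # cron provides almost no environment, so set what the script expects
    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    HOME=/home/ansible
    USER=ansible
    # Pull updates every 4 hours as the ansible user
    0 */4 * * * ansible /home/ansible/ssh-tunnel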

    On server:
    ansible@server:~$ cat initial_run

    #!/bin/bash
    # Play vars
    # Set ansible ssh-port 
    REMOTE_PORT="$SSH_ORIGINAL_COMMAND"
    INVENTORY="$HOME/ansible/remote"
    PLAY_DIR="$HOME/ansible/playbooks"
    PLAY="remote.yml"
    TAGS=""
    # Send notification 
    notify () {
        MAILTO="admin@server.example.com"
        CLIENT=`echo $SSH_CLIENT | awk '{print $1}'`
        echo -e "

    The system update process is started for $CLIENT

    " | mail -a "Content-Type: text/html" -s "Notice from '$HOSTNAME': Playbook run - '$CLIENT'" $MAILTO } # Run playbooks with all args run_playbooks () { cd $HOME/ansible # Run tasks ansible-playbook -i "$INVENTORY" "$PLAY_DIR"/"$PLAY" --tags "$TAGS" -e ansible_ssh_port="$REMOTE_PORT" } # Main function main () { run_playbooks } # Do it main "$@"


    The key point here is getting the port the server needs to connect back to from the SSH_ORIGINAL_COMMAND variable. In principle you could assign it directly to ansible_ssh_port, but I decided that for tidiness it was worth having a separate REMOTE_PORT variable.
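    For completeness: the inventory referenced above ($HOME/ansible/remote) then has to point at the server's end of the tunnel. A guess at what it might contain (the group name is made up):
    [remote]
    localhost ansible_connection=ssh ansible_ssh_user=ansible

    ansible_connection=ssh matters here because Ansible treats localhost as a local connection by default, while the port itself arrives via -e ansible_ssh_port.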
    The content of the playbooks / roles is not important here, although I have added examples to my repository on github.com.
    That’s probably all. What to do with this and how it can come in handy is up to you.

    I would point out a couple of interesting use cases:
    • Dynamic server allocation and automatic configuration (load-balancer / app-server bundle);
    • Consistent maintenance of geographically dispersed servers to which there is no direct access (different branches, offices, etc.).

    Suggest your own options in the comments, and tell us about more interesting implementations of this kind of functionality.

    I would be grateful if you let me know about any typos you find via PM.
