Why you do not need sshd in your Docker container

Original author: Jerome Petazzoni
When people launch their first Docker container, they often ask: "How do I get inside the container?" And the knee-jerk answer is, of course: "Run an SSH server in it and connect!" The purpose of this article is to show that you do not actually need sshd inside your container (except, of course, when your container is specifically designed to encapsulate an SSH server).

Running an SSH server is a tempting idea, as it provides quick and easy access "inside" the container. Everyone knows how to use an SSH client, we do it every day, we are familiar with password and key-based authentication and with port forwarding; in general, SSH access is a familiar thing that will definitely work.

But let's think again.

Let's imagine that you are building a Docker image for Redis or a Java web service. I would like to ask you some questions:

Why do you need SSH?
Most likely you want to make backups, check logs, maybe restart processes, edit settings, or debug something with gdb, strace, or similar utilities. All of this can be done without SSH.

How will you manage your keys and passwords?
There are not many options: either you bake them into the image, or you put them on an external volume. Think about what you will need to do to update a key or password. If they are baked in, you will have to rebuild the image, redeploy it, and restart the containers. Not the end of the world, but hardly elegant. A far better solution is to put the data on an external volume and control access to it. This works, but it is important to verify that the container does not have write access to the volume: otherwise the container could corrupt the data, and then you could not connect via SSH at all. Even worse, if one volume serves as the authentication store for several containers, you would lose access to all of them at once. But only if you use SSH access everywhere.
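As an illustration, a read-only bind mount keeps the container from modifying shared credentials. This is only a sketch; the paths and image name are hypothetical:

```shell
# Credentials live in one place on the host (hypothetical path).
# The :ro suffix mounts them read-only, so the container
# cannot corrupt or overwrite them.
docker run -d \
    -v /opt/secrets/authorized_keys:/root/.ssh/authorized_keys:ro \
    my-sshd-image
```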

How will you manage security updates?
An SSH server is actually a fairly reliable piece of software. But it is still a window into the outside world. So we will need to install updates and monitor security advisories. In other words, every otherwise-harmless container now contains an area that is potentially vulnerable to attack from the outside and needs attention. We have created a problem for ourselves.

Is "just adding an SSH server" enough for everything to work?
No. Docker manages and monitors one process. If you want to run several processes inside a container, you will need something like Monit or Supervisor, and they also have to be added to the container. Thus we turn the simple concept of "one container, one task" into something complex that needs to be built, updated, managed, and maintained.

You are responsible for building the container image, but are you also responsible for managing container access policies?
In a small company this does not matter: most likely you perform both roles. But in a large infrastructure, one person will most likely build images, while entirely different people manage access rights. So "embedding" an SSH server into the container is not the best way.

But how can I ...



Do backups?

Your data should live on an external volume. Then you can start another container with the --volumes-from option that has access to that same volume. This new container will be dedicated to the backup task. An added benefit: when you update or replace your backup and recovery tools, you do not need to rebuild all your containers, only the one dedicated to that task.
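For example, a throwaway backup container might look like this (the image name and paths here are made up for the sketch):

```shell
# The service container keeps its state on a /data volume
CID=$(docker run -d -v /data my-redis-image)

# One-off backup container: sees the same /data via --volumes-from,
# and writes the archive to a host directory mounted at /backups
docker run --rm --volumes-from "$CID" \
    -v /backups:/backups \
    busybox tar czf /backups/data-$(date +%F).tgz /data
```

When the backup tooling changes, only this one command (or the image it uses) needs updating.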

Check logs?

Use an external volume! Yes, the same solution again; well, what can you do if it fits? If you write all your logs to a specific directory on an external volume, you can create a separate "log inspector" container and do whatever you need in it. Again, if you need special log-analysis tools, you can install them in this separate container without cluttering the original one.
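A sketch of such a "log inspector" container, assuming the service writes its logs to a /var/log/app volume (the image names and path are hypothetical):

```shell
# The service container exposes its log directory as a volume
CID=$(docker run -d -v /var/log/app my-service-image)

# Throwaway inspector container sharing the same volume;
# install any analysis tools here, not in the service image
docker run --rm -it --volumes-from "$CID" ubuntu \
    sh -c 'tail -n 50 /var/log/app/*.log'
```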

Restart my service?

Any properly designed service can be restarted with signals. When you execute foo restart, it almost always just sends a specific signal to the process. You can send a signal with docker kill -s <signal>. Some services do not respond to signals but instead accept commands, for example over a TCP socket or a UNIX socket. You can connect to a TCP socket from the outside, and for a UNIX socket, again, use an external volume.
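For example, nginx reloads its configuration when it receives SIGHUP, so a running container can be told to reload without any console access (the container name here is hypothetical):

```shell
# Ask the container's main process (its PID 1) to reload
docker kill -s SIGHUP my-nginx-container
```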

"But this is all so complicated!" Not really. Let's imagine that your foo service creates a socket in /var/run/foo.sock and requires you to run fooctl restart to restart cleanly. Just start the service with -v /var/run (or add VOLUME /var/run to the Dockerfile). When you want to restart the service, run the same image with the --volumes-from option. It will look something like this:

# start the service
CID=$(docker run -d -v /var/run fooservice)
# restart the service from an external container
docker run --volumes-from $CID fooservice fooctl restart


Edit configuration?

First, operational configuration changes should be distinguished from fundamental ones. If you want to change something significant that should affect all future containers launched from this image, the change must be baked into the image itself. In that case you do not need an SSH server; you need to edit the image. "But what about operational changes?" you ask. "After all, I may need to change the configuration while my service is running, for example to add virtual hosts to the web server config." In this case you need to use... wait for it... an external volume! The configuration should live on it. You can even spin up a dedicated "config editor" container; if you like, install your favorite editor and its plugins there, whatever you want. None of this will affect the base container.
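A sketch of such a "config editor" container, assuming the configuration lives on an /etc/myapp volume (the image names and paths are made up for the example):

```shell
# The service container exports its configuration directory as a volume
CID=$(docker run -d -v /etc/myapp my-web-image)

# Editor container: same volume, plus whatever editor you prefer;
# edits land on the volume without touching the service image
docker run --rm -it --volumes-from "$CID" ubuntu \
    vi /etc/myapp/vhosts.conf
```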
"But I just make temporary edits, experimenting with different values and looking at the result!" OK, read the next section for the answer to that one.

Debug my service?

So we have finally reached the case where you really do need real console access "inside" your container: you need to run gdb or strace, edit the configuration in place, and so on. And in this case you need nsenter.

What is nsenter?


nsenter is a small utility that allows you to enter namespaces. Strictly speaking, it can both enter existing namespaces and start processes in new ones. "What namespaces are we talking about?" They are an important concept underlying Docker containers, allowing them to be isolated from one another and from the host operating system. Without going into details: with nsenter you can get console access to a running container even if there is no SSH server inside it.

Where to get nsenter?

From GitHub: jpetazzo/nsenter. You can run:
docker run -v /usr/local/bin:/target jpetazzo/nsenter


This will install nsenter into /usr/local/bin, and you can use it right away. In addition, nsenter is already packaged in some distributions.

How to use it?

First, find out the PID of the container you want to get inside:
PID=$(docker inspect --format '{{.State.Pid}}' <container_name_or_id>)


Now go into the container:
nsenter --target $PID --mount --uts --ipc --net --pid


You will get console access "inside" the container. If you want to run a script or program immediately, pass it as arguments to nsenter. It works a bit like chroot, except that it operates on containers (namespaces) rather than just directories.
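For example, to run a single command inside the container instead of an interactive shell (the container name here is hypothetical):

```shell
# Look up the container's PID, then run ps inside its namespaces
PID=$(docker inspect --format '{{.State.Pid}}' my-container)
nsenter --target "$PID" --mount --uts --ipc --net --pid -- ps aux
```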

What about remote access?


If you need remote access to the docker container, you have at least two ways to do this:
  • SSH to the host machine, and then use nsenter
  • SSH to the host machine with a special key that allows you to run a specific command (in our case, nsenter )


The first way is fairly simple, but it requires root rights on the host machine (which is not great from a security standpoint). The second way uses the "command" feature of SSH authorized keys. You have probably seen a "classic" authorized_keys entry like this:

ssh-rsa AAAAB3N…QOID== jpetazzo@tarrasque

(Of course, the real key is much longer.) You can specify a forced command for the key. If you want to let a specific user check the amount of free RAM on your machine over SSH, but do not want to give them full console access, you can put the following in authorized_keys:

command="free" ssh-rsa AAAAB3N…QOID== jpetazzo@tarrasque


Now, when the user connects with this key, the free command runs immediately, and nothing else can be started. (Technically, you probably also want to add no-port-forwarding; see the authorized_keys section of the sshd manpage for details.) The idea behind this mechanism is separation of duties: Alice builds container images but has no access to production servers; Betty may connect remotely for debugging; Charlotte only views the logs; and so on.
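Putting the two ideas together, a forced command can confine a key to nsenter-ing one specific container. A sketch of such an authorized_keys entry (key shortened as in the example above, container name hypothetical; sshd runs the forced command through the user's shell, so the command substitution is evaluated at login time):

```shell
# One line in ~/.ssh/authorized_keys on the host:
command="nsenter --target $(docker inspect --format '{{.State.Pid}}' my-container) --mount --uts --ipc --net --pid",no-port-forwarding ssh-rsa AAAAB3N…QOID== betty@laptop
```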

Conclusions


So, is running an SSH server in every Docker container really such a terrible thing? Let's be honest: it is not a disaster. It may even be the only option when you have no access to the host system but absolutely need access to the container itself. But as we have seen, there are many ways to get all the functionality you need without an SSH server in the container, while keeping a much more elegant architecture. Yes, Docker can do it. But before you turn your Docker container into a sort of "mini-VPS", make sure you really need to.
