Docker Remote API with certificate authentication and revocation checking

    Description of the problem


    For remote control, Docker can expose a web API.
    This API can either require no authentication at all (which is highly discouraged) or use certificate authentication.


    The problem is that the native certificate authentication does not check client certificates for revocation. And this can have serious consequences.


    In this article I want to describe how I solved this problem.


    Solution to the problem


    First, I should say that this article is about Docker for Windows. Maybe things are not so bad on Linux, but that is not the topic here.


    What do we have? We have Docker with the following config:


    {
        "hosts": ["tcp://0.0.0.0:2376", "npipe://"],
        "tlsverify": true,
        "tlscacert": "C:\\ssl\\ca.cer",
        "tlscert": "C:\\ssl\\server.cer",
        "tlskey": "C:\\ssl\\server.key"
    }

    Clients can connect with their certificates, but these certificates are not checked for revocation.
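

    For example, a client whose certificate was issued by our CA connects like this (the host name here is illustrative):

    docker --tlsverify --tlscacert=ca.cer --tlscert=client.cer --tlskey=client.key -H tcp://myserver:2376 version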


    The idea of the solution is to write our own proxy service that acts as an intermediary. The service will be installed on the same server as Docker, take over port 2376 for itself, and communicate with Docker via //./pipe/docker_engine.


    Without much thought, I created an ASP.NET Core project and implemented the simplest possible proxying:


    Simple proxy code
    // Requires the Docker.DotNet package: ManagedHandler lives in the
    // Microsoft.Net.Http.Client namespace, DockerPipeStream in Docker.DotNet.
    app.Run(async (context) =>
    {
        // Log who is connecting; Kestrel has already completed the TLS handshake.
        var certificate = context.Connection.ClientCertificate;
        if (certificate != null)
        {
            logger.LogInformation($"Certificate subject: {certificate.Subject}, serial: {certificate.SerialNumber}");
        }
        // Dial the local Docker engine over its named pipe instead of TCP.
        var handler = new ManagedHandler(async (host, port, cancellationToken) =>
        {
            var stream = new NamedPipeClientStream(".", "docker_engine", PipeDirection.InOut, PipeOptions.Asynchronous);
            var dockerStream = new DockerPipeStream(stream);
            // Use the total timeout, not the milliseconds component of the TimeSpan.
            await stream.ConnectAsync((int)NamedPipeConnectTimeout.TotalMilliseconds, cancellationToken);
            return dockerStream;
        });
        using (var client = new HttpClient(handler, true))
        {
            // Rebuild the incoming request against a dummy host name; ManagedHandler
            // ignores it and uses the pipe connection opened above.
            var method = new HttpMethod(context.Request.Method);
            var builder = new UriBuilder("http://dockerengine")
            {
                Path = context.Request.Path,
                Query = context.Request.QueryString.ToUriComponent()
            };
            using (var request = new HttpRequestMessage(method, builder.Uri))
            {
                request.Version = new Version(1, 11);
                request.Headers.Add("User-Agent", "proxy");
                if (method != HttpMethod.Get)
                {
                    request.Content = new StreamContent(context.Request.Body);
                    request.Content.Headers.ContentType = new MediaTypeHeaderValue(context.Request.ContentType);
                }
                // Stream the response back as soon as headers arrive;
                // Docker responses can be long-lived.
                using (var response = await client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead, context.RequestAborted))
                {
                    context.Response.ContentType = response.Content.Headers.ContentType?.ToString();
                    var output = await response.Content.ReadAsStreamAsync();
                    await output.CopyToAsync(context.Response.Body, 4096, context.RequestAborted);
                }
            }
        }
    });

    That was enough for simple GET and POST requests to the Docker API. But it is not enough, because for more complex operations (those requiring user input) Docker uses something similar to WebSocket. The catch was that Kestrel flatly refused to accept such requests from the Docker client, arguing that a request with a Connection: Upgrade header cannot have a body. And it did.
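

    For reference, such a hijacked request from the Docker client looks roughly like this (the exact path depends on the API version and operation):

    POST /v1.24/containers/{id}/attach?stream=1&stdin=1&stdout=1 HTTP/1.1
    Connection: Upgrade
    Upgrade: tcp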


    I had to abandon Kestrel and write a bit more code: in effect, my own web server. It independently opens a port, creates a TLS connection, parses the HTTP headers, establishes an internal connection with Docker, and wires the I/O streams together. And it worked.
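

    Schematically, the core of such a server might look like the sketch below. This is only an illustration of the approach, not the actual project source; the paths and the password literal are assumptions.

    // A minimal TLS listener that demands a client certificate and lets
    // Windows check it for revocation during the handshake.
    using System.Net;
    using System.Net.Security;
    using System.Net.Sockets;
    using System.Security.Authentication;
    using System.Security.Cryptography.X509Certificates;
    using System.Threading.Tasks;

    class TlsProxy
    {
        static async Task Main()
        {
            var serverCert = new X509Certificate2(@"c:\data\certificate.pfx", "p@ssw0rd"); // illustrative
            var listener = new TcpListener(IPAddress.Any, 2376);
            listener.Start();
            while (true)
            {
                var client = await listener.AcceptTcpClientAsync();
                _ = Task.Run(async () =>
                {
                    using (client)
                    using (var ssl = new SslStream(client.GetStream()))
                    {
                        // checkCertificateRevocation: true is the whole point:
                        // a revoked client certificate fails the handshake.
                        await ssl.AuthenticateAsServerAsync(serverCert,
                            clientCertificateRequired: true,
                            enabledSslProtocols: SslProtocols.Tls12,
                            checkCertificateRevocation: true);
                        // ...parse the HTTP request line and headers here and
                        // relay the streams to //./pipe/docker_engine...
                    }
                });
            }
        }
    }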


    The source can be found here.


    So, the application is written and now it needs to be run somehow. The idea is to build a container with our application, forward npipe:// inside it, and publish port 2376.


    Build Docker Image


    To build the image, we need the public certificate of the certification authority (ca.cer) that will issue certificates to users.


    This certificate will be installed into the trusted root certification authorities store of the container in which our proxy will run.


    Installing it is necessary for the certificate validation procedure to work.
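

    The check itself can be done with the standard X509Chain machinery. A minimal sketch of such a validation (illustrative, not the project's exact code):

    using System.Security.Cryptography.X509Certificates;

    static bool IsAllowed(X509Certificate2 certificate)
    {
        using (var chain = new X509Chain())
        {
            // Check revocation online (CRL/OCSP) for the entire chain.
            chain.ChainPolicy.RevocationMode = X509RevocationMode.Online;
            chain.ChainPolicy.RevocationFlag = X509RevocationFlag.EntireChain;
            // Building consults the Windows certificate stores, which is why
            // ca.cer must be in the container's trusted roots.
            return chain.Build(certificate);
        }
    }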


    I did not bother writing a Dockerfile that would also build the application itself.
    Therefore, the application has to be built separately. From the folder with the Dockerfile, run:


    dotnet publish -c Release -o ..\publish .\DockerTLS\DockerTLS.csproj

    Now we should have: Dockerfile, publish, ca.cer. Building and pushing the image:


    docker build -t vitaliyorg.azurecr.io/docker/proxy:1809 .
    docker push vitaliyorg.azurecr.io/docker/proxy:1809

    Of course, the image name can be anything.
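

    The Dockerfile itself is not shown in the article; a sketch of what it might look like is below. The base image and the certutil import are assumptions, and the entry point assumes a self-contained publish (with the framework-dependent publish shown above, you would instead use a base image that includes the .NET Core runtime and ENTRYPOINT ["dotnet", "DockerTLS.dll"]).

    # escape=`
    FROM mcr.microsoft.com/windows/servercore:1809

    # Trust the CA that issues client certificates; certificate chains
    # (and therefore revocation) cannot be validated without it.
    COPY ca.cer C:\ca.cer
    RUN certutil -addstore Root C:\ca.cer

    # The published application and the port the proxy listens on.
    COPY publish C:\app
    WORKDIR C:\app
    EXPOSE 2376
    ENTRYPOINT ["C:\\app\\DockerTLS.exe"]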


    Launch


    To run the container, we need the server certificate certificate.pfx and a password file password.txt. The entire contents of the file are treated as the password, so there must be no extra line feeds.


    Let all of this be in the folder c:\data on the server where Docker is installed.
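

    Presumably the proxy reads the password file verbatim, roughly like this (a sketch, not the actual source), which is why a trailing newline would become part of the password:

    // The whole file is the password: nothing is trimmed.
    var password = File.ReadAllText(@"c:\data\password.txt");
    var serverCertificate = new X509Certificate2(@"c:\data\certificate.pfx", password);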


    On the same server, run:


    docker run --name docker-proxy -d -v "c:/data:c:/data" -v \\.\pipe\docker_engine:\\.\pipe\docker_engine --restart always -p 2376:2376 vitaliyorg.azurecr.io/docker/proxy:1809

    Logging


    With the help of docker logs you can see who did what. There you can also see connection attempts that failed.
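

    For example:

    docker logs docker-proxy

    Judging by the log statement in the proxy code above, successful requests produce lines containing the client certificate's subject and serial number.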

