Secure deployment of an ElasticSearch server

    After a successful transition from MongoDB full-text search to ElasticSearch, we managed to launch several new services running on Elastic, including a browser extension, and in general I was extremely pleased with the migration.

    But there was one fly in the ointment. About a month after setup and smooth operation, LogEntries / NewRelic unanimously shouted that the search server was not responding. After logging in to the DigitalOcean dashboard, I saw a message from support saying that the server had been suspended due to heavy outgoing UDP traffic, which most likely meant the server had been compromised.

    DigitalOcean provided a link to instructions on what to do in this case. But the most interesting part was in the comments: almost everyone who had suffered from the recent attacks had a deployed ElasticSearch cluster with an open port 9200. Attackers took advantage of Java and ES vulnerabilities, gained access to the server, and turned it into part of a botnet.

    I had to rebuild the server from scratch, but this time I would not be so naive: the server would be reliably protected. Below I describe my setup, which uses Node.js, Dokku/Docker, and SSL.

    Why is that?


    For all its power, ElasticSearch provides no built-in means of protection or authorization; you have to do everything yourself. There is a good article on this topic.

    Attackers (most likely) exploit the dynamic-scripting vulnerability in Elastic, so if dynamic scripts are not used (as in my case), it is recommended to disable them.

    And finally, an open port 9200 is bait for attackers; it needs to be closed.

    What's the plan?


    My plan was this: spin up a "clean" DigitalOcean droplet, deploy ElasticSearch inside a Docker container (so that even if the instance is compromised, all it takes is restarting the container), close ports 9200/9300 to outside access, and serve all traffic to Elastic through a Node.js proxy server with a simple authorization model based on a shared secret.
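    Schematically, the setup looks like this:

    client --HTTPS + access_token--> Node.js proxy --> 127.0.0.1:9200 (ElasticSearch in Docker; 9200/9300 closed to the outside)
    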

    Raising a new droplet


    DigitalOcean provides a pre-built image with Dokku/Docker on board, based on Ubuntu 14, so it makes sense to pick it right away. As usual, spinning up a new machine takes a couple dozen seconds and we are ready to go.


    Deploying ElasticSearch in a container


    The first thing we need is a Docker image with ElasticSearch. Although there are several ElasticSearch plugins for Dokku, I decided to install it myself; that seemed easier configuration-wise.

    A ready-made image for Elastic already exists, along with good instructions for using it.

    $ docker pull dockerfile/elasticsearch
    

    Once the image is pulled, we need to prepare a volume that will live outside the running container (so that even if the container stops or restarts, the data is kept on the host file system).

    $ mkdir /elastic
    $ cd /elastic
    

    In this folder we will create a configuration file, elasticsearch.yml. In my case it is very simple: the cluster consists of a single machine, so the default settings suit me. But, as mentioned above, dynamic scripts must be disabled.

    $ nano elasticsearch.yml
    

    It will consist of just one line:

    script.disable_dynamic: true
    

    After that, you can start the server. I put the command into a simple script, since during configuration and debugging you may need to restart it several times:

    docker run --name elastic -d -p 127.0.0.1:9200:9200 -p 127.0.0.1:9300:9300 -v /elastic:/data dockerfile/elasticsearch /elasticsearch/bin/elasticsearch -Des.config=/data/elasticsearch.yml
    

    Pay attention to -p 127.0.0.1:9200:9200: here we bind port 9200 to localhost only. I spent several hours trying to configure iptables to close ports 9200/9300, to no avail (Docker publishes ports through its own NAT rules, so ordinary INPUT-chain rules don't catch that traffic). Thanks to the help of @darkproger and @kkdoo, everything finally worked as it should.

    -v /elastic:/data maps the container's /data volume to the local /elastic folder.
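    To double-check that the container is up and the port really is closed to the outside, a couple of quick checks help (the droplet IP is a placeholder):

    $ docker ps                       # the "elastic" container should be listed as running
    $ curl http://127.0.0.1:9200      # from the droplet itself: responds with the ES banner
    $ curl http://<droplet-ip>:9200   # from any other machine: connection refused
    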

    Proxy Node.js server


    Now we need to start the Node.js proxy server, which will safely serve traffic between localhost:9200 and the outside world. I made a small project based on http-proxy called elastic-proxy; it is very simple and can easily be reused in other projects.

    $ git clone https://github.com/likeastore/elastic-proxy
    $ cd elastic-proxy
    

    The server code itself:

    var http = require('http');
    var httpProxy = require('http-proxy');
    var url = require('url');
    var config = require('./config');
    var logger = require('./source/utils/logger');

    var port = process.env.PORT || 3010;
    var proxy = httpProxy.createProxyServer();

    var parseAccessToken = function (req) {
        var request = url.parse(req.url, true).query;
        var referer = url.parse(req.headers.referer || '', true).query;

        return request.access_token || referer.access_token;
    };

    var server = http.createServer(function (req, res) {
        var accessToken = parseAccessToken(req);

        logger.info('request: ' + req.url + ' accessToken: ' + accessToken + ' referer: ' + req.headers.referer);

        if (!accessToken || accessToken !== config.accessToken) {
            res.statusCode = 401;
            return res.end('Missing access_token query parameter');
        }

        proxy.web(req, res, {target: config.target});
    });

    server.listen(port, function () {
        logger.info('Likeastore Elastic-Proxy started at: ' + port);
    });
    

    It proxies all requests, letting through only those that include an access_token query parameter. The token itself is configured on the server through the PROXY_ACCESS_TOKEN environment variable.
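    The config module is not shown here; as a minimal sketch it might look something like this (the actual file in the elastic-proxy repository may differ, and the ELASTIC_URL variable is an assumption):

    // config.js - hypothetical sketch, not the actual file from the repository
    module.exports = {
        // shared secret that every request must present as access_token
        accessToken: process.env.PROXY_ACCESS_TOKEN,
        // where requests get forwarded; Elastic listens on localhost only
        target: process.env.ELASTIC_URL || 'http://localhost:9200'
    };
    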

    Since the project is already set up for Dokku, all that remains is to push the sources and Dokku will deploy the new service:

    $ git push production master
    

    After the deployment, log in to the server and configure the access token:

    $ dokku config:set proxy PROXY_ACCESS_TOKEN="your_secret_value"
    

    I also like everything to run over SSL, and Dokku makes this very easy: copy server.crt and server.key into /home/dokku/proxy/tls.
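    Assuming the certificate and key are in the current directory, that boils down to:

    $ mkdir -p /home/dokku/proxy/tls
    $ cp server.crt server.key /home/dokku/proxy/tls/
    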

    Restart the proxy to apply the latest changes, then make sure everything is OK by opening https://search.likeastore.com. If everything is fine, it will respond with:

    Missing access_token query parameter
    

    Linking the proxy and ElasticSearch containers


    We need to connect the two containers to each other: the first with the Node.js proxy, the second with ElasticSearch itself. I really liked the dokku-link plugin, which does exactly that. Install it:

    $ cd /var/lib/dokku/plugins
    $ git clone https://github.com/rlaneve/dokku-link
    

    And after installation, link the proxy with Elastic:

    $ dokku link proxy elastic
    

    After this, the proxy needs to be restarted again. If all is well, then following the link proxy.yourserver.com?access_token=your_secret_value, we will see a response from ElasticSearch:

    {
      status: 200,
      name: "Tundra",
      version: {
          number: "1.2.1",
          build_hash: "6c95b759f9e7ef0f8e17f77d850da43ce8a4b364",
          build_timestamp: "2014-06-03T15:02:52Z",
          build_snapshot: false,
          lucene_version: "4.8"
      },
      tagline: "You Know, for Search"
    }
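
    The same check works from the command line with curl (the hostname and token are placeholders):

    $ curl 'https://proxy.yourserver.com/?access_token=your_secret_value'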
    

    Configuring the client


    It remains to configure the client so that it passes the access_token with every request to the server. For a Node.js application, it looks like this:

    var elasticsearch = require('elasticsearch');

    var client = new elasticsearch.Client({
        host: {
            protocol: 'https',
            host: 'search.likeastore.com',
            port: 443,
            query: {
                access_token: process.env.ELASTIC_ACCESS_TOKEN
            }
        },
        requestTimeout: 5000
    });
    

    Now you can restart the application, make sure everything works as it should... and exhale.
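    A quick way to double-check is the client's ping method; a minimal sketch, assuming the standard elasticsearch npm client used above:

    // end-to-end check: the ping travels over HTTPS through the proxy with the token attached
    client.ping({ requestTimeout: 5000 }, function (error) {
        if (error) {
            console.error('search server is unreachable: ' + error.message);
        } else {
            console.log('proxy and ElasticSearch are up');
        }
    });
    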

    Afterword


    This setup has worked (and still works) perfectly for Likeastore. However, over time I came to see a certain overhead in this approach. Most likely, you could get rid of the proxy server and instead configure nginx with basic authorization, using the Docker container as an upstream, also with SSL support.
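    As a sketch, such an nginx config could look roughly like this (the server name, certificate paths, and htpasswd file are assumptions for illustration):

    # hypothetical sketch: nginx terminates SSL and applies basic auth in front of local ES
    server {
        listen 443 ssl;
        server_name search.example.com;

        ssl_certificate     /etc/nginx/ssl/server.crt;
        ssl_certificate_key /etc/nginx/ssl/server.key;

        location / {
            auth_basic           "ElasticSearch";
            auth_basic_user_file /etc/nginx/.htpasswd;   # created with the htpasswd utility
            proxy_pass           http://127.0.0.1:9200;
        }
    }
    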

    Also, Elastic would probably be better off inside a private network, with all requests to it going through the application API. This may be less convenient from a development point of view, but it is more reliable from a security point of view.

    P.S. This is a retelling of a post from my personal blog.
