NGINX and gRPC are now true friends

    A few days ago a new version of Nginx was released: 1.13.10. The main feature of this release is native support for proxying HTTP/2 and, as a consequence, gRPC.

    Now that the world is flooded with microservices and heterogeneous technology stacks, almost everyone knows what gRPC is. If not, think of it as Protocol Buffers (which gRPC can use for serialization) or Apache Thrift on steroids. The framework lets many services talk to each other in an extremely efficient manner.

    gRPC's high performance comes from several things: HTTP/2 multiplexing and data compression, among others. In addition, the framework encourages developers to write their services in a non-blocking style (a la NIO), using libraries such as Netty under the hood.


    Image taken from https://www.slideshare.net/borisovalex/enabling-googley-microservices-with-grpc-at-jdkio-2017

    Another important feature of gRPC is native backpressure support. This is implemented through the deadline abstraction: the client's initial timeout is propagated through the whole chain of services. If a call cannot fit into the specified deadline (timeout), the entire call chain is cancelled. This protects the system from chain reactions. Alexander Borisov from Google covers this in more detail; you can watch his talk on YouTube.
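    The mechanics can be sketched in plain Java. This is a toy model only, not the real gRPC API (which propagates deadlines via CallOptions.withDeadline and io.grpc.Context): each hop consumes part of the remaining time budget, and the chain fails fast as soon as one hop would overrun it.

```java
// Toy model of gRPC deadline propagation (illustrative only; real gRPC
// propagates deadlines via CallOptions.withDeadline and io.grpc.Context).
public class DeadlineChain {
    /**
     * Walks a chain of hops, each consuming part of the time budget.
     * Returns "OK" if every hop fits, or "DEADLINE_EXCEEDED" as soon as
     * one hop would overrun it - the rest of the chain never executes.
     */
    public static String call(long budgetMillis, long[] hopCostsMillis) {
        long remaining = budgetMillis;
        for (long cost : hopCostsMillis) {
            if (cost > remaining) {
                // Fail fast instead of burning time on a doomed request.
                return "DEADLINE_EXCEEDED";
            }
            remaining -= cost; // the next hop sees a smaller deadline
        }
        return "OK";
    }

    public static void main(String[] args) {
        System.out.println(call(100, new long[]{30, 30, 30})); // OK
        System.out.println(call(100, new long[]{60, 60}));     // DEADLINE_EXCEEDED
    }
}
```

    The key point is that the budget shrinks at every hop, so a slow intermediate service cannot drag the whole chain past the client's original timeout.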

    Back to the topic at hand: Nginx and gRPC. At first glance these might seem like two incompatible technologies: Nginx is used as the entry point to a system, while gRPC is a tool for communication between microservices inside it. However, this is not always the case.

    Consider a company that is developing an API. The same company may also have mobile applications consuming that API. Applications usually cannot talk directly to microservices that are not reachable from the public network. Therefore, some kind of Gateway is required that accepts requests from outside and proxies them to internal microservices.

    Several classes of systems can play the Gateway role. First, it can be a full-fledged application written in some programming language. The advantage of this approach is great flexibility; the downside is often reduced performance. Besides, when writing your own Gateway it is quite easy to introduce bugs that affect the security of the system.

    Another way to implement a Gateway is to use a ready-made solution from the Reverse Proxy class. The most familiar is Nginx, but there are modern alternatives: Envoy, Træfik, Caddy. The advantages of proxies are probably clear to everyone: they are fast and reliable, we get load balancing out of the box, we get SSL termination out of the box. In addition, virtually every proxy implements a very flexible routing system that lets you send traffic to different applications based on the URL.

    So, we have established that sometimes we need to expose gRPC outside the system, preferably through some kind of Reverse Proxy. But here is the catch: we run Nginx, we do not want anything newfangled, yet the old man has a gap - no way to proxy HTTP/2. The solution is to upgrade to 1.13.10! The team finally shipped native support for HTTP/2 proxying, and with it gRPC.


    Out of the box you get a whole package of goodies: TLS termination, load balancing across nodes, powerful routing, plus the rest of the Nginx features you already know.
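    As a sketch of how those pieces fit together (upstream names and certificate paths here are hypothetical placeholders, not from the demo), a config that terminates TLS and balances gRPC traffic across two backends could look like this:

```nginx
# Hypothetical sketch: TLS termination + load balancing for gRPC.
upstream movie_backends {
    server movie1:6565;   # placeholder backend hosts
    server movie2:6565;
}

server {
    listen 443 ssl http2;                               # gRPC requires HTTP/2
    ssl_certificate     /etc/nginx/certs/example.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/certs/example.key;

    location / {
        grpc_pass grpc://movie_backends;  # plaintext gRPC to the upstream group
    }
}
```

    Nginx terminates TLS at the edge and speaks plaintext gRPC to the upstream group; use grpcs:// in grpc_pass if the backends themselves require TLS.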

    All you need to do to start proxying gRPC traffic is to tweak the config (and possibly rebuild Nginx with a couple of new modules, if you build the proxy yourself). A HelloWorld config looks like this:

    server {
        listen 80 http2;              # gRPC runs over HTTP/2
        charset utf-8;
        access_log logs/access.log;
        location / {
            grpc_pass grpc://movie:6565;   # plaintext gRPC upstream
        }
    }
    

    I am a simple man myself: until I see it, I will not believe it. So I put together a demo with a Server that returns a list of the best movies (a fixed set of strings) and a Client that reads them. The client and server talk through Nginx.

    The movies are served like this:

    @Override
    public void getRating(Moviesrating.GetRatingRequest request,
                          StreamObserver<Moviesrating.GetRatingResponse> responseObserver) {
        log.info("getRating(): request={}", request);
        List<String> bestMovies = Arrays.asList(
                "The Shawshank Redemption",
                "The Godfather",
                "The Dark Knight",
                "Interstellar"
        );
        responseObserver.onNext(Moviesrating.GetRatingResponse.newBuilder()
                .addAllMovie(bestMovies)
                .build());
        responseObserver.onCompleted();
    }
    

    And this is how we read them:

    @GetMapping("/top")
    Mono<List<Movie>> top() {
        log.info("top()");
        ListenableFuture<Moviesrating.GetRatingResponse> ratingFuture
                = moviesRatingStub.getRating(
                        Moviesrating.GetRatingRequest.newBuilder().build());
        // Bridge Guava's ListenableFuture to a CompletableFuture so that
        // cancelling the Mono also cancels the underlying gRPC call.
        CompletableFuture<List<Movie>> completable = new CompletableFuture<List<Movie>>() {
            @Override
            public boolean cancel(boolean mayInterruptIfRunning) {
                boolean result = ratingFuture.cancel(mayInterruptIfRunning);
                super.cancel(mayInterruptIfRunning);
                return result;
            }
        };
        ratingFuture.addListener(() -> {
            try {
                completable.complete(ratingFuture.get().getMovieList().stream()
                        .map(Movie::new)
                        .collect(Collectors.toList()));
            } catch (InterruptedException | ExecutionException e) {
                completable.completeExceptionally(e); // don't leave the Mono hanging
            }
        }, executor);
        return Mono.fromFuture(completable);
    }
    

    Everything works; the Nginx folks did not deceive us, you can trust them. And if you do not, check it yourself: https://github.com/Hixon10/grpc-nginx
