gRPC - a framework from Google for remote procedure calls


With remote procedure calls, things have long looked just like in the famous "14 standards" comic strip: what haven't people invented already: DCOM and the ancient CORBA, the strange SOAP and .NET Remoting, REST and the modern AMQP. (Yes, I know that some of these are not formally RPC; we even recently started a separate topic just to argue about the terminology. Nevertheless, all of them are used as RPC, and if something looks like a duck and swims like a duck, well, you know.)

And of course, in full accordance with the comic's scenario, Google came to the market and announced that it had finally created one more, the last and most correct, RPC standard. You can understand Google: continuing to push petabytes of data over the old and inefficient HTTP + REST in the 21st century, losing money on every byte, is simply silly. At the same time, taking someone else's standard and saying "we couldn't come up with anything better" is completely not their style.

So, meet gRPC, which stands for "gRPC Remote Procedure Calls": a new framework for remote procedure calls from Google. In this article we will talk about why, unlike the previous "14 standards", it may yet capture the world (or at least part of it), walk through building gRPC under Windows + Visual Studio (and don't tell me the instructions aren't needed: the official documentation is missing about 5 important steps without which nothing will build), and also write a simple service and client exchanging requests and responses.

Why do we need another standard?


First of all, let's look around. What do we see? We see REST + HTTP/1.1. It is not the only thing out there, but this is the cloud that covers a good three-quarters of the client-server communication sky. Looking closer, we see that in 95% of cases REST degenerates into CRUD.

As a result, we have:
  • HTTP/1.1 protocol inefficiency: uncompressed headers, no full two-way communication, wasteful use of OS resources, extra traffic, extra latency.
  • The need to stretch our data and event model over REST + CRUD, which often fits like a balloon stretched over a globe and forces Yandex to write undoubtedly good articles that would not be needed at all if people didn't have to wonder: "Which spell summons the elemental, PUT or POST? And which HTTP code means 'go 3 squares forward and draw a new card'?"


This is where gRPC comes in. Out of the box we get:
  • Protobuf as the tool for describing data types and serialization. A very cool thing that has proven itself well in practice. In fact, those who cared about performance took Protobuf before too, and then fussed over the transport separately. Now everything comes as a complete package.
  • HTTP/2 as the transport. And this is an incredibly strong move! All the benefits of full header compression, flow control, server push, and multiplexing several parallel requests over one socket.
  • Static paths: no more "service/collection/resource/request?parameter=value". Now there is only "service", and what is inside it you describe in terms of your own model and its events.
  • No mapping of your methods onto HTTP methods, no mapping of return values onto HTTP statuses. Write what you want.
  • SSL/TLS, OAuth 2.0, authentication through Google services, plus you can plug in your own (for example, two-factor).
  • Support for 10 languages: C, C++, Java, Go, Node.js, Python, Ruby, Objective-C, PHP, C#; plus, of course, no one forbids you from implementing your own version, even for Brainfuck.
  • Support for gRPC in Google's public APIs. It already works for some services. REST versions will of course remain too. But judge for yourself: if you have a choice between, say, a REST version of a mobile API that returns data in 1 second and, at the same development cost, a gRPC version that does it in 0.5 seconds, which will you choose? And which will your competitor choose?


Building gRPC


We need:
  • Git
  • Visual Studio 2013 + Nuget
  • CMake


Getting the code


  1. Clone the gRPC repository from GitHub.
  2. Execute the command
    git submodule update --init
    

    This is needed to download the dependencies (protobuf, openssl, etc.).


Building Protobuf


  • Go to the grpc\third_party\protobuf\cmake folder, create a build folder there, and go into it.
  • Execute the command
    cmake -G "Visual Studio 12 2013" -DBUILD_TESTING=OFF ..
  • Open the protobuf.sln file created in the previous step in Visual Studio and build it (F7).
    At this stage we get valuable artifacts: the protoc.exe utility, which we will need to generate the serialization/deserialization code, and the .lib files that will be needed when linking gRPC.
  • Copy the grpc\third_party\protobuf\cmake\build\Debug folder into the grpc\third_party\protobuf\cmake folder.
    Once again: the Debug folder needs to be copied one level up. This is an inconsistency between the gRPC and Protobuf docs. Protobuf says to build everything in the build folder, but the gRPC project files know nothing about that folder and look for the Protobuf libraries directly in grpc\third_party\protobuf\cmake\Debug.


Build gRPC


  1. Open the file grpc\vsprojects\grpc_protoc_plugins.sln and build it.
    If you completed the Protobuf build correctly in the previous step, everything should go smoothly. You now have plugins for protoc.exe that let it not only generate serialization/deserialization code but also add gRPC functionality to it (the actual remote procedure calls). The plugins and protoc.exe need to be put in one folder, for example grpc\vsprojects\Debug.
  2. Open the file grpc\vsprojects\grpc.sln and build it.
    During the build, NuGet should start and download the necessary dependencies (openssl, zlib). If you don't have NuGet, or it fails to download the dependencies for some reason, there will be problems.
    At the end of the build we have all the libraries we need for communicating over gRPC from our own project.


Our project


Let's write an API for Habrahabr using gRPC. We will have the following methods:
  • GetKarma takes a string with the username and returns a floating-point number with the value of that user's karma
  • PostArticle takes a request to create a new article with all its metadata and returns the result of the publication: a structure with a link to the article, the time of publication, and an error text if the publication failed


We need to describe all of this in gRPC terms. It will look something like this (the type descriptions can be found in the protobuf documentation):

syntax = "proto3";
package HabrahabrApi;
message KarmaRequest {
  string username = 1;
}
message KarmaResponse {
  string username = 1;
  float karma = 2;
}
message PostArticleRequest {
  string title = 1;
  string body = 2;
  repeated string tag = 3; 
  repeated string hub = 4; 
}
message PostArticleResponse {
  bool posted = 1;
  string url = 2;
  string time = 3;
  string error_code = 4;
}
service HabrApi {
  rpc GetKarma(KarmaRequest) returns (KarmaResponse) {}
  rpc PostArticle(PostArticleRequest) returns (PostArticleResponse) {}
}


Go to the grpc\vsprojects\Debug folder and run 2 commands there (by the way, note that the official documentation has an error in this place: incorrect arguments):
protoc --grpc_out=. --plugin=protoc-gen-grpc=grpc_cpp_plugin.exe habr.proto
protoc --cpp_out=. habr.proto

At the output we get 4 files:
  • habr.pb.h
  • habr.pb.cc
  • habr.grpc.pb.h
  • habr.grpc.pb.cc


This, as you can easily guess, is the skeleton of our future client and server, which will be able to exchange messages using the protocol described above.

Let's create a project!


  1. Create a new solution in Visual Studio and call it HabrAPI.
  2. Add two console applications to it: HabrServer and HabrClient.
  3. Add the .h and .cc files generated in the previous step to them. The server needs all 4; the client needs all 4 as well, since it uses the stub from habr.grpc.pb.h.
  4. In the project settings, add the paths to the folders grpc\third_party\protobuf\src and grpc\include to Additional Include Directories.
  5. In the project settings, add the path to grpc\third_party\protobuf\cmake\Debug to Additional Library Directories.
  6. In the project settings, add the libprotobuf.lib library to Additional Dependencies.
  7. Set the runtime linkage to the same one Protobuf was built with (the Runtime Library property on the Code Generation tab). At this point it may turn out that you did not build Protobuf in the configuration you need, and you will have to go back and rebuild it. I chose /MTd in both places.
  8. Add dependencies on zlib and openssl through NuGet.


Now everything compiles. True, nothing works yet.

Client

Everything is simple here. First, we create a class that wraps the stub generated in habr.grpc.pb.h. Second, we implement GetKarma and PostArticle methods in it. Third, we call them and, for example, print the results to the console. It comes out something like this:

#include <iostream>
#include <memory>
#include <string>
#include <grpc/grpc.h>
#include <grpc++/channel_arguments.h>
#include <grpc++/client_context.h>
#include <grpc++/create_channel.h>
#include <grpc++/credentials.h>
#include <grpc++/status.h>
#include "habr.grpc.pb.h"
using grpc::Channel;
using grpc::ChannelArguments;
using grpc::ClientContext;
using grpc::Status;
using HabrahabrApi::KarmaRequest;
using HabrahabrApi::KarmaResponse;
using HabrahabrApi::PostArticleRequest;
using HabrahabrApi::PostArticleResponse;
using HabrahabrApi::HabrApi;
class HabrahabrClient {
 public:
  HabrahabrClient(std::shared_ptr<Channel> channel)
      : stub_(HabrApi::NewStub(channel)) {}
  float GetKarma(const std::string& username) {
    KarmaRequest request;
    request.set_username(username);
    KarmaResponse reply;
    ClientContext context;
    Status status = stub_->GetKarma(&context, request, &reply);
    if (status.ok()) {
      return reply.karma();
    } else {
      return 0;
    }
  }
  bool PostArticle(const std::string& username) {
    PostArticleRequest request;
    request.set_title("Article about gRPC");
    request.set_body("bla-bla-bla");
    request.add_tag("UFO");
    request.add_hub("Infopulse");
    PostArticleResponse reply;
    ClientContext context;
    Status status = stub_->PostArticle(&context, request, &reply);
    return status.ok() && reply.posted();
  }
 private:
  std::unique_ptr<HabrApi::Stub> stub_;
};
int main(int argc, char** argv) {
  HabrahabrClient client(
      grpc::CreateChannel("localhost:50051", grpc::InsecureCredentials(),
                          ChannelArguments()));
  std::string user("tangro");
  float karma = client.GetKarma(user);
  std::cout << "Karma received: " << karma << std::endl;
  return 0;
}

Server

The server is a similar story: we inherit from the service class generated in habr.grpc.pb.h and implement its methods. Then we start a listener on a specific port and wait for clients. Something like this:

#include <iostream>
#include <memory>
#include <string>
#include <grpc/grpc.h>
#include <grpc++/server.h>
#include <grpc++/server_builder.h>
#include <grpc++/server_context.h>
#include <grpc++/server_credentials.h>
#include <grpc++/status.h>
#include "habr.grpc.pb.h"
using grpc::Server;
using grpc::ServerBuilder;
using grpc::ServerContext;
using grpc::Status;
using HabrahabrApi::KarmaRequest;
using HabrahabrApi::KarmaResponse;
using HabrahabrApi::PostArticleRequest;
using HabrahabrApi::PostArticleResponse;
using HabrahabrApi::HabrApi;
class HabrahabrServiceImpl final : public HabrApi::Service {
  Status GetKarma(ServerContext* context, const KarmaRequest* request,
                  KarmaResponse* reply) override {
    reply->set_karma(42);
    return Status::OK;
  }
  Status PostArticle(ServerContext* context, const PostArticleRequest* request,
                  PostArticleResponse* reply) override {
    reply->set_posted(true);
    reply->set_url("some_url");
    return Status::OK;
  }
};
void RunServer() {
  std::string server_address("0.0.0.0:50051");
  HabrahabrServiceImpl service;
  ServerBuilder builder;
  builder.AddListeningPort(server_address, grpc::InsecureServerCredentials());
  builder.RegisterService(&service);
  std::unique_ptr<Server> server(builder.BuildAndStart());
  std::cout << "Server listening on " << server_address << std::endl;
  server->Wait();
}
int main(int argc, char** argv) {
  RunServer();
  return 0;
}


Good luck using gRPC.
