Building a microservice architecture on Golang and gRPC, part 1

Introduction to microservice architecture


Part 1 of 10


An adaptation of articles by Ewan Valentine.


This is a ten-part series; I will try to publish one part a month on building microservices in Golang, using protobuf and gRPC as the main transport protocol.


The stack I will use: Golang, MongoDB, gRPC, Docker, Google Cloud, Kubernetes, NATS, CircleCI, Terraform, and go-micro.


Why write this? Because it took me a long time to figure all of this out and to solve the problems that accumulated along the way. I also want to share what I learned about creating, testing, and deploying microservices in Go and other modern technologies.


In this part, I want to show the basic concepts and technologies for building microservices, and we will write a simple implementation. The project will include the following entities:


  • consignments
  • inventory
  • ships
  • users
  • roles
  • authentication



To follow along, you need to install Golang and the necessary libraries, and create a git repository (I hope this will not be difficult for you; if it is, I recommend covering the basics first).


Theory


What is microservice architecture?


A microservice isolates a piece of functionality into a service that is self-sufficient with respect to the function it performs. For compatibility with other services, it exposes a known, predefined interface.
Microservices communicate with each other using messages passed through some intermediary, such as a message broker.



Thanks to microservice architecture, the application can be scaled in parts rather than as a whole. For example, if the authorization service is stressed more often than the others, we can increase the number of its instances. This idea dovetails with cloud computing and containerization in general.


Why Golang?


Microservices can be built in almost any language; after all, microservices are a concept, not a specific framework or tool. Nevertheless, some languages are better suited to microservices, and have better support for them, than others. One language with great support is Golang.


Let's get acquainted with protobuf / gRPC


As mentioned earlier, microservices are split into separate code bases, so one of the important problems they introduce is communication. In a monolith this is not an issue: you simply call the code you need directly from another place in your program.


To solve the communication problem, we could use the traditional REST approach and transmit data as JSON or XML over HTTP. But this approach has its drawbacks: before sending a message you must encode your data, and the receiving side must decode it again. This overhead adds to the complexity of the code.


There is a solution: gRPC. gRPC is a lightweight, binary protocol, which avoids transferring bulky textual HTTP headers and thus saves us a certain number of bytes. gRPC is also built around HTTP/2, which uses binary framing, and this again speaks in favor of gRPC. HTTP/2 allows bidirectional communication, and that is awesome!


gRPC also lets you define the interface to your service in a friendly format: protobuf.


Practice


Create a file /project/consignment.proto.
Official protobuf documentation


consignment.proto
// consignment.proto
syntax = "proto3";
package go.micro.srv.consignment;
service ShippingService {
  rpc CreateConsignment(Consignment) returns (Response) {}
}
message Consignment {
  int32 id = 1;
  string description = 2;
  int32 weight = 3;
  repeated Container containers = 4;
  string vessel_id = 5;
}
message Container {
  int32 id = 1;
  string customer_id = 2;
  string origin = 3;
  string user_id = 4;
}
message Response {
  bool created = 1;
  Consignment consignment = 2;
}

This is a simple example that contains the service you want to expose to other services, service ShippingService, followed by our message definitions. Protobuf is a statically typed protocol, and we can create custom types (similar to structs in Go). Here, Container is nested inside Consignment.


Install the libraries, the compiler and compile our protocol:


$ go get -u google.golang.org/grpc
$ go get -u github.com/golang/protobuf/protoc-gen-go
$ sudo apt install protobuf-compiler
$ mkdir consignment && cd consignment
$ protoc -I=. --go_out=plugins=grpc:. consignment.proto

The output should be a file:


consignment.pb.go
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: consignment.proto

package consignment

import (
    fmt "fmt"
    proto "github.com/golang/protobuf/proto"
    context "golang.org/x/net/context"
    grpc "google.golang.org/grpc"
    math "math"
)

// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf

// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package

type Consignment struct {
    Id                   int32        `protobuf:"varint,1,opt,name=id,proto3" json:"id,omitempty"`
    Description          string       `protobuf:"bytes,2,opt,name=description,proto3" json:"description,omitempty"`
    Weight               int32        `protobuf:"varint,3,opt,name=weight,proto3" json:"weight,omitempty"`
    Containers           []*Container `protobuf:"bytes,4,rep,name=containers,proto3" json:"containers,omitempty"`
    VesselId             string       `protobuf:"bytes,5,opt,name=vessel_id,json=vesselId,proto3" json:"vessel_id,omitempty"`
    XXX_NoUnkeyedLiteral struct{}     `json:"-"`
    XXX_unrecognized     []byte       `json:"-"`
    XXX_sizecache        int32        `json:"-"`
}
func (m *Consignment) Reset()         { *m = Consignment{} }
func (m *Consignment) String() string { return proto.CompactTextString(m) }
func (*Consignment) ProtoMessage()    {}
func (*Consignment) Descriptor() ([]byte, []int) {
    return fileDescriptor_3804bf87090b51a9, []int{0}
}
func (m *Consignment) XXX_Unmarshal(b []byte) error {
    return xxx_messageInfo_Consignment.Unmarshal(m, b)
}
func (m *Consignment) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
    return xxx_messageInfo_Consignment.Marshal(b, m, deterministic)
}
func (m *Consignment) XXX_Merge(src proto.Message) {
    xxx_messageInfo_Consignment.Merge(m, src)
}
func (m *Consignment) XXX_Size() int {
    return xxx_messageInfo_Consignment.Size(m)
}
func (m *Consignment) XXX_DiscardUnknown() {
    xxx_messageInfo_Consignment.DiscardUnknown(m)
}

var xxx_messageInfo_Consignment proto.InternalMessageInfo

func (m *Consignment) GetId() int32 {
    if m != nil {
        return m.Id
    }
    return 0
}
func (m *Consignment) GetDescription() string {
    if m != nil {
        return m.Description
    }
    return ""
}
func (m *Consignment) GetWeight() int32 {
    if m != nil {
        return m.Weight
    }
    return 0
}
func (m *Consignment) GetContainers() []*Container {
    if m != nil {
        return m.Containers
    }
    return nil
}
func (m *Consignment) GetVesselId() string {
    if m != nil {
        return m.VesselId
    }
    return ""
}
type Container struct {
    Id                   int32    `protobuf:"varint,1,opt,name=id,proto3" json:"id,omitempty"`
    CustomerId           string   `protobuf:"bytes,2,opt,name=customer_id,json=customerId,proto3" json:"customer_id,omitempty"`
    Origin               string   `protobuf:"bytes,3,opt,name=origin,proto3" json:"origin,omitempty"`
    UserId               string   `protobuf:"bytes,4,opt,name=user_id,json=userId,proto3" json:"user_id,omitempty"`
    XXX_NoUnkeyedLiteral struct{} `json:"-"`
    XXX_unrecognized     []byte   `json:"-"`
    XXX_sizecache        int32    `json:"-"`
}

func (m *Container) Reset()         { *m = Container{} }
func (m *Container) String() string { return proto.CompactTextString(m) }
func (*Container) ProtoMessage()    {}
func (*Container) Descriptor() ([]byte, []int) {
    return fileDescriptor_3804bf87090b51a9, []int{1}
}
func (m *Container) XXX_Unmarshal(b []byte) error {
    return xxx_messageInfo_Container.Unmarshal(m, b)
}
func (m *Container) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
    return xxx_messageInfo_Container.Marshal(b, m, deterministic)
}
func (m *Container) XXX_Merge(src proto.Message) {
    xxx_messageInfo_Container.Merge(m, src)
}
func (m *Container) XXX_Size() int {
    return xxx_messageInfo_Container.Size(m)
}
func (m *Container) XXX_DiscardUnknown() {
    xxx_messageInfo_Container.DiscardUnknown(m)
}

var xxx_messageInfo_Container proto.InternalMessageInfo

func (m *Container) GetId() int32 {
    if m != nil {
        return m.Id
    }
    return 0
}
func (m *Container) GetCustomerId() string {
    if m != nil {
        return m.CustomerId
    }
    return ""
}
func (m *Container) GetOrigin() string {
    if m != nil {
        return m.Origin
    }
    return ""
}
func (m *Container) GetUserId() string {
    if m != nil {
        return m.UserId
    }
    return ""
}
type Response struct {
    Created              bool         `protobuf:"varint,1,opt,name=created,proto3" json:"created,omitempty"`
    Consignment          *Consignment `protobuf:"bytes,2,opt,name=consignment,proto3" json:"consignment,omitempty"`
    XXX_NoUnkeyedLiteral struct{}     `json:"-"`
    XXX_unrecognized     []byte       `json:"-"`
    XXX_sizecache        int32        `json:"-"`
}

func (m *Response) Reset()         { *m = Response{} }
func (m *Response) String() string { return proto.CompactTextString(m) }
func (*Response) ProtoMessage()    {}
func (*Response) Descriptor() ([]byte, []int) {
    return fileDescriptor_3804bf87090b51a9, []int{2}
}
func (m *Response) XXX_Unmarshal(b []byte) error {
    return xxx_messageInfo_Response.Unmarshal(m, b)
}
func (m *Response) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
    return xxx_messageInfo_Response.Marshal(b, m, deterministic)
}
func (m *Response) XXX_Merge(src proto.Message) {
    xxx_messageInfo_Response.Merge(m, src)
}
func (m *Response) XXX_Size() int {
    return xxx_messageInfo_Response.Size(m)
}
func (m *Response) XXX_DiscardUnknown() {
    xxx_messageInfo_Response.DiscardUnknown(m)
}

var xxx_messageInfo_Response proto.InternalMessageInfo

func (m *Response) GetCreated() bool {
    if m != nil {
        return m.Created
    }
    return false
}
func (m *Response) GetConsignment() *Consignment {
    if m != nil {
        return m.Consignment
    }
    return nil
}

func init() {
    proto.RegisterType((*Consignment)(nil), "Consignment")
    proto.RegisterType((*Container)(nil), "Container")
    proto.RegisterType((*Response)(nil), "Response")
}
func init() { proto.RegisterFile("consignment.proto", fileDescriptor_3804bf87090b51a9) }
var fileDescriptor_3804bf87090b51a9 = []byte{
    // 281 bytes of a gzipped FileDescriptorProto
    0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x64, 0x91, 0xbf, 0x4e, 0x33, 0x31,
    0x10, 0xc4, 0xbf, 0xcb, 0xff, 0x5b, 0x7f, 0x02, 0xc5, 0x05, 0x58, 0x50, 0x70, 0xba, 0x2a, 0xa2,
    0x70, 0x11, 0x9e, 0x00, 0xa5, 0x4a, 0xeb, 0xd0, 0xa3, 0x60, 0xaf, 0x2e, 0x2b, 0x11, 0xfb, 0x64,
    0x3b, 0xe1, 0x75, 0x78, 0x54, 0x74, 0xbe, 0x1c, 0x18, 0x51, 0xce, 0xac, 0x67, 0xf7, 0xa7, 0x31,
    0x2c, 0xb5, 0xb3, 0x81, 0x1a, 0x7b, 0x44, 0x1b, 0x65, 0xeb, 0x5d, 0x74, 0xf5, 0x67, 0x01, 0x6c,
    0xf3, 0xe3, 0xf2, 0x2b, 0x18, 0x91, 0x11, 0x45, 0x55, 0xac, 0xa6, 0x6a, 0x44, 0x86, 0x57, 0xc0,
    0x0c, 0x06, 0xed, 0xa9, 0x8d, 0xe4, 0xac, 0x18, 0x55, 0xc5, 0xaa, 0x54, 0xb9, 0xc5, 0x6f, 0x60,
    0xf6, 0x81, 0xd4, 0x1c, 0xa2, 0x18, 0xa7, 0xd4, 0x45, 0xf1, 0x47, 0x00, 0xed, 0x6c, 0xdc, 0x93,
    0x45, 0x1f, 0xc4, 0xa4, 0x1a, 0xaf, 0xd8, 0x1a, 0xe4, 0x66, 0xb0, 0x54, 0x36, 0xe5, 0xf7, 0x50,
    0x9e, 0x31, 0x04, 0x7c, 0x7f, 0x25, 0x23, 0xa6, 0xe9, 0xc6, 0xa2, 0x37, 0xb6, 0xa6, 0x3e, 0x42,
    0xf9, 0x9d, 0xfa, 0xc3, 0xf7, 0x00, 0x4c, 0x9f, 0x42, 0x74, 0x47, 0xf4, 0x5d, 0xb6, 0xe7, 0x83,
    0xc1, 0xda, 0x9a, 0x0e, 0xcf, 0x79, 0x6a, 0xc8, 0x26, 0xbc, 0x52, 0x5d, 0x14, 0xbf, 0x85, 0xf9,
    0x29, 0xf4, 0xa1, 0x49, 0x3f, 0xe8, 0xe4, 0xd6, 0xd4, 0x2f, 0xb0, 0x50, 0x18, 0x5a, 0x67, 0x03,
    0x72, 0x01, 0x73, 0xed, 0x71, 0x1f, 0xb1, 0x3f, 0xb9, 0x50, 0x83, 0xe4, 0x12, 0x58, 0x56, 0x66,
    0xba, 0xcb, 0xd6, 0xff, 0x65, 0x56, 0xa5, 0xca, 0x1f, 0xac, 0x9f, 0xe1, 0x7a, 0x77, 0xa0, 0xb6,
    0x25, 0xdb, 0xec, 0xd0, 0x9f, 0x49, 0x23, 0x97, 0xb0, 0xdc, 0xa4, 0x6d, 0x79, 0xff, 0xbf, 0x56,
    0xdc, 0x95, 0x72, 0x40, 0xa9, 0xff, 0xbd, 0xcd, 0xd2, 0x8f, 0x3d, 0x7d, 0x05, 0x00, 0x00, 0xff,
    0xff, 0x84, 0x5c, 0xa4, 0x06, 0xc6, 0x01, 0x00, 0x00,
}
// Reference imports to suppress errors if they are not otherwise used.
var _ context.Context
var _ grpc.ClientConn

// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
const _ = grpc.SupportPackageIsVersion4

// ShippingServiceClient is the client API for ShippingService service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
type ShippingServiceClient interface {
    CreateConsignment(ctx context.Context, in *Consignment, opts ...grpc.CallOption) (*Response, error)
}

type shippingServiceClient struct {
    cc *grpc.ClientConn
}

func NewShippingServiceClient(cc *grpc.ClientConn) ShippingServiceClient {
    return &shippingServiceClient{cc}
}

func (c *shippingServiceClient) CreateConsignment(ctx context.Context, in *Consignment, opts ...grpc.CallOption) (*Response, error) {
    out := new(Response)
    err := c.cc.Invoke(ctx, "/ShippingService/CreateConsignment", in, out, opts...)
    if err != nil {
        return nil, err
    }
    return out, nil
}

// ShippingServiceServer is the server API for ShippingService service.
type ShippingServiceServer interface {
    CreateConsignment(context.Context, *Consignment) (*Response, error)
}

func RegisterShippingServiceServer(s *grpc.Server, srv ShippingServiceServer) {
    s.RegisterService(&_ShippingService_serviceDesc, srv)
}

func _ShippingService_CreateConsignment_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
    in := new(Consignment)
    if err := dec(in); err != nil {
        return nil, err
    }
    if interceptor == nil {
        return srv.(ShippingServiceServer).CreateConsignment(ctx, in)
    }
    info := &grpc.UnaryServerInfo{
        Server:     srv,
        FullMethod: "/ShippingService/CreateConsignment",
    }
    handler := func(ctx context.Context, req interface{}) (interface{}, error) {
        return srv.(ShippingServiceServer).CreateConsignment(ctx, req.(*Consignment))
    }
    return interceptor(ctx, in, info, handler)
}

var _ShippingService_serviceDesc = grpc.ServiceDesc{
    ServiceName: "ShippingService",
    HandlerType: (*ShippingServiceServer)(nil),
    Methods: []grpc.MethodDesc{
        {
            MethodName: "CreateConsignment",
            Handler:    _ShippingService_CreateConsignment_Handler,
        },
    },
    Streams:  []grpc.StreamDesc{},
    Metadata: "consignment.proto",
}

If something went wrong, pay attention to the arguments: -I is the path where the compiler looks for files, and --go_out is where the generated file will be created. Help is always available:


$ protoc -h 

This is the code automatically generated by the gRPC/protobuf libraries; it ties your protobuf definition to your own code.


Let's write main.go


main.go
package main

import (
    "log"
    "net"

    // Import the generated protobuf code
    pbf "seaport/consignment"

    "golang.org/x/net/context"
    "google.golang.org/grpc"
    "google.golang.org/grpc/reflection"
)

const (
    port = ":50051"
)

// IRepository - the repository interface
type IRepository interface {
    Create(*pbf.Consignment) (*pbf.Consignment, error)
}

// Repository - a structure that simulates a datastore;
// later we will replace it with a real one
type Repository struct {
    consignments []*pbf.Consignment
}

// Create - store a new consignment
func (repo *Repository) Create(consignment *pbf.Consignment) (*pbf.Consignment, error) {
    updated := append(repo.consignments, consignment)
    repo.consignments = updated
    return consignment, nil
}

// The service must implement all of the methods defined
// in our protobuf definition. You can check the generated
// code for the exact method signatures.
type service struct {
    repo IRepository
}

// CreateConsignment - we created just one method on our service,
// the create method, which takes a context and a request;
// they are then handled by the gRPC server.
func (s *service) CreateConsignment(ctx context.Context, req *pbf.Consignment) (*pbf.Response, error) {
    // Save our consignment in the repository
    consignment, err := s.repo.Create(req)
    if err != nil {
        return nil, err
    }
    // Return the `Response` message we defined
    // in our protobuf definition
    return &pbf.Response{Created: true, Consignment: consignment}, nil
}

func main() {
    repo := &Repository{}
    // Start our gRPC server listening on tcp
    lis, err := net.Listen("tcp", port)
    if err != nil {
        log.Fatalf("failed to listen: %v", err)
    }
    s := grpc.NewServer()
    // Register our service with the gRPC server; this ties our
    // implementation to the auto-generated interface code for
    // the messages we defined in our protobuf
    pbf.RegisterShippingServiceServer(s, &service{repo})
    // Register reflection on the gRPC server.
    reflection.Register(s)
    if err := s.Serve(lis); err != nil {
        log.Fatalf("failed to serve: %v", err)
    }
}

Please read the comments left in the code carefully. Here we create the implementation logic, in which our gRPC methods interact using the generated types, and start a new gRPC server on port 50051; our gRPC service will live there.
You can run it with $ go run main.go, but you will not see anything and you will not be able to use it yet... so let's create a client to see it in action.


Let's create a command line interface that takes a JSON file and interacts with our gRPC service.


In the root directory, create a new subdirectory: $ mkdir consignment-cli. In that directory, create a file cli.go with the following contents:


cli.go
package main

import (
    "encoding/json"
    "io/ioutil"
    "log"
    "os"

    pbf "seaport/consignment"

    "golang.org/x/net/context"
    "google.golang.org/grpc"
)

const (
    address         = "localhost:50051"
    defaultFilename = "consignment.json"
)

// parseFile parses the given file into a consignment
func parseFile(file string) (*pbf.Consignment, error) {
    var consignment *pbf.Consignment
    data, err := ioutil.ReadFile(file)
    if err != nil {
        return nil, err
    }
    if err := json.Unmarshal(data, &consignment); err != nil {
        return nil, err
    }
    return consignment, nil
}

func main() {
    // Set up a connection to the server
    conn, err := grpc.Dial(address, grpc.WithInsecure())
    if err != nil {
        log.Fatalf("could not connect: %v", err)
    }
    defer conn.Close()
    client := pbf.NewShippingServiceClient(conn)
    // Use consignment.json by default,
    // otherwise take the file path from the command-line arguments
    file := defaultFilename
    if len(os.Args) > 1 {
        file = os.Args[1]
    }
    consignment, err := parseFile(file)
    if err != nil {
        log.Fatalf("could not parse file: %v", err)
    }
    r, err := client.CreateConsignment(context.Background(), consignment)
    if err != nil {
        log.Fatalf("could not create: %v", err)
    }
    log.Printf("Created: %t", r.Created)
}

Now create a consignment (consignment-cli/consignment.json):


{
  "description": "Test consignment",
  "weight": 100,
  "containers": [
    {
      "customer_id": "customer_001",
      "user_id": "user_001",
      "origin": "Port of Nakhodka"
    }
  ],
  "vessel_id": "vessel_001"
}

Now run $ go run main.go from the seaport package, then run $ go run cli.go in a separate terminal. You should see the message "Created: true".
But how can we check what exactly was created? Let's extend our service with a GetConsignments method so that we can view all of our created consignments.


consignment.proto
// consignment.proto
syntax = "proto3";
service ShippingService {
  rpc CreateConsignment(Consignment) returns (Response) {}
  // Create a new method
  rpc GetConsignments(GetRequest) returns (Response) {}
}
message Consignment {
  int32 id = 1;
  string description = 2;
  int32 weight = 3;
  repeated Container containers = 4;
  string vessel_id = 5;
}
message Container {
  int32 id = 1;
  string customer_id = 2;
  string origin = 3;
  string user_id = 4;
}
// Create an empty request
message GetRequest {}
message Response {
  bool created = 1;
  Consignment consignment = 2;
  // Add a repeated list of consignments
  // to our response message
  repeated Consignment consignments = 3;
}

So, here we created a new method on our service called GetConsignments, along with a new GetRequest message, which for now contains nothing. We also added a consignments field to our Response message. You will notice the keyword repeated before this field's type. This, as you probably guessed, simply means treating the field as an array of that type.


Do not rush to run the program. The implementation of our gRPC methods must match the interface generated by the protobuf library, so we need to make sure our implementation fits our proto definition. Regenerate the code with protoc, then update the service:


// seaport/main.go

// IRepository - the repository interface
type IRepository interface {
    Create(*pbf.Consignment) (*pbf.Consignment, error)
    GetAll() []*pbf.Consignment
}

// GetAll - fetch all consignments from the repository
func (repo *Repository) GetAll() []*pbf.Consignment {
    return repo.consignments
}

// GetConsignments - the service method that returns all consignments
func (s *service) GetConsignments(ctx context.Context, req *pbf.GetRequest) (*pbf.Response, error) {
    consignments := s.repo.GetAll()
    return &pbf.Response{Consignments: consignments}, nil
}

Here we added our new GetConsignments method and updated our repository and interface to match the definition in consignment.proto. If you run $ go run main.go again, the program should work again.


Let's update our cli tool to include the ability to call this method and the opportunity to list our batches:


cli.go
package main

import (
    "encoding/json"
    "io/ioutil"
    "log"
    "os"

    pbf "seaport/consignment"

    "golang.org/x/net/context"
    "google.golang.org/grpc"
)

const (
    address         = "localhost:50051"
    defaultFilename = "consignment.json"
)

// parseFile parses the given file into a consignment
func parseFile(file string) (*pbf.Consignment, error) {
    var consignment *pbf.Consignment
    data, err := ioutil.ReadFile(file)
    if err != nil {
        return nil, err
    }
    if err := json.Unmarshal(data, &consignment); err != nil {
        return nil, err
    }
    return consignment, nil
}

func main() {
    // Set up a connection to the server
    conn, err := grpc.Dial(address, grpc.WithInsecure())
    if err != nil {
        log.Fatalf("could not connect: %v", err)
    }
    defer conn.Close()
    client := pbf.NewShippingServiceClient(conn)
    // Use consignment.json by default,
    // otherwise take the file path from the command-line arguments
    file := defaultFilename
    if len(os.Args) > 1 {
        file = os.Args[1]
    }
    consignment, err := parseFile(file)
    if err != nil {
        log.Fatalf("could not parse file: %v", err)
    }
    r, err := client.CreateConsignment(context.Background(), consignment)
    if err != nil {
        log.Fatalf("could not create: %v", err)
    }
    log.Printf("Created: %t", r.Created)
    getAll, err := client.GetConsignments(context.Background(), &pbf.GetRequest{})
    if err != nil {
        log.Fatalf("could not list consignments: %v", err)
    }
    for _, v := range getAll.Consignments {
        log.Println(v)
    }
}

Add the code above to cli.go and run $ go run cli.go again. The client will call CreateConsignment and then GetConsignments, and you should see the list of created consignments in the response.


Thus, we have our first microservice and a client that interacts with it using protobuf and gRPC.


The next part of this series will cover the integration of go-micro, a powerful framework for building gRPC-based microservices. We will also create our second service, and we will look at how our services run in Docker containers.

