Building a Microservice Architecture on Golang and gRPC, Part 2 (docker)
- Tutorial
It's time to tackle the containers
First of all, we'll use the latest Alpine Linux image. Alpine Linux is a lightweight Linux distribution designed and optimised for running web applications in Docker. In other words, it ships with just enough dependencies and functionality to run most applications, which keeps the image size down to about 8 MB!
Compare that with, say, an Ubuntu virtual machine weighing in at around 1 GB, and you can see why Docker images are a more natural fit for microservices and cloud computing.
So, now that I hope you can see the value in containerisation, we can start "Dockerising" our first service. Let's create a Dockerfile: $ touch consignment-service/Dockerfile.

First part
Original EwanValentine repository
Original article
In the Dockerfile, add the following:
FROM alpine:latest
RUN mkdir /app
WORKDIR /app
ADD consignment-service /app/consignment-service
CMD ["./consignment-service"]
Here we create a new directory to host our application, add our compiled binary to the Docker container, and run it.
Now let's update the build entry of our Makefile to create a Docker image.
build:
	...
	GOOS=linux GOARCH=amd64 go build
	docker build -t shippy-service-consignment .
We've added two more steps here, and I'd like to explain them in a little more detail. First of all, we build our Go binary. Notice the two environment variables set before $ go build runs: GOOS and GOARCH allow you to cross-compile your binary for another operating system. Since I'm developing on a Macbook, I can't compile a Go executable natively and then run it in a Docker container, which uses Linux. The binary would be completely meaningless in your Docker container and would throw an error.
The second step I added is the docker build process. Docker reads your Dockerfile and creates an image named shippy-service-consignment; the trailing dot denotes the build context, so here we're simply telling the build process to look at the current directory.
I am going to add a new entry to our Makefile:
run:
	docker run -p 50051:50051 shippy-service-consignment
Here we run our docker image, exposing port 50051. Since Docker operates on its own separate network layer, you need to forward the port. For example, if you wanted to serve this service on port 8080 of the host, you would change the -p argument to 8080:50051. You can also run the container in the background by including the -d flag, for example docker run -d -p 50051:50051 shippy-service-consignment.
Run $ make run, then, in a separate terminal panel, run $ go run main.go again and check that the service still works.
When you run $ docker build, you bundle your code and runtime environment into an image. Docker images are portable snapshots of your environment and its dependencies. You can share Docker images by publishing them to Docker Hub, which is a bit like npm or a yum repository, but for docker images. When you define FROM in your Dockerfile, you tell Docker to pull that image from the Docker repository to use as your base. You can then extend and override parts of that base file as you wish. We won't be publishing our docker images, but feel free to browse the Docker repository and note that just about any piece of software has already been packaged into containers. Some really remarkable things have been Dockerised.
Each instruction in a Dockerfile is cached the first time it's built. This removes the need to rebuild the entire runtime every time you make a change. Docker is smart enough to work out which layers have changed and which need rebuilding. This makes the build process incredibly fast.
Enough about the containers! Let's get back to our code.
When creating a gRPC service, there's a lot of boilerplate for setting up connections, and you have to hard-code a service's address into any client or other service that needs to connect to it. This is tricky because, when you run services in the cloud, they may not share the same host, and an address or IP may change after a service is redeployed.
This is where service discovery comes into play. A discovery service keeps an up-to-date directory of all your services and their locations. Each service registers itself at runtime and deregisters itself on shutdown. Each service is assigned a name or identifier, so even if it gets a new IP address or host, as long as the service name stays the same, you don't need to update calls to that service from other services.
As usual, there are many approaches to this problem, but, like most things in programming, if someone has already dealt with it, there's no sense in reinventing the wheel. @chuhnk (Asim Aslam), creator of Go-micro, solves these problems with fantastic clarity and ease of use. He single-handedly produces fantastic software. Please consider supporting him if you like what you see!
Go micro
Go-micro is a powerful microservice framework written in Go, intended for the most part for use with Go. However, you can use Sidecar to interact with other languages.
Go-micro has useful features for building microservices in Go, but we'll start with probably the most common problem it solves: service discovery.
We'll need to make a few updates to our service in order to work with go-micro. Go-micro integrates as a protoc plugin, in this case replacing the standard gRPC plugin we're currently using. So let's start by replacing that in our Makefile.
Be sure to install the go-micro dependencies:
go get -u github.com/micro/protobuf/{proto,protoc-gen-go}
Update our Makefile to use the go-micro plugin instead of the gRPC plugin:
build:
	protoc -I. --go_out=plugins=micro:. \
		proto/consignment/consignment.proto
	GOOS=linux GOARCH=amd64 go build
	docker build -t shippy-service-consignment .
run:
	docker run -p 50051:50051 shippy-service-consignment
Now we need to update shippy-service-consignment/main.go to use go-micro. This abstracts away most of our previous gRPC code. It handles registration for us and speeds up writing a service.
shippy-service-consignment / main.go
// shippy-service-consignment/main.go
package main

import (
	"context"
	"fmt"
	"log"

	// Import the generated protobuf code
	pb "github.com/EwanValentine/shippy/consignment-service/proto/consignment"
	"github.com/micro/go-micro"
)

// repository - the interface for our datastore
type repository interface {
	Create(*pb.Consignment) (*pb.Consignment, error)
	GetAll() []*pb.Consignment
}

// Repository - a struct simulating a datastore,
// we'll swap this out for a real datastore later
type Repository struct {
	consignments []*pb.Consignment
}

func (repo *Repository) Create(consignment *pb.Consignment) (*pb.Consignment, error) {
	updated := append(repo.consignments, consignment)
	repo.consignments = updated
	return consignment, nil
}

func (repo *Repository) GetAll() []*pb.Consignment {
	return repo.consignments
}

// The service must implement all of the methods to satisfy the service
// we defined in our proto definition. You can check the interfaces in
// the generated code for the exact method signatures.
type service struct {
	repo repository
}

// CreateConsignment - we created just one method on our service,
// the create method, which takes a context and a request, which are
// then handled by the gRPC server.
func (s *service) CreateConsignment(ctx context.Context, req *pb.Consignment, res *pb.Response) error {
	// Save our consignment
	consignment, err := s.repo.Create(req)
	if err != nil {
		return err
	}
	// Return matching the `Response` message we created in our
	// protobuf definition.
	res.Created = true
	res.Consignment = consignment
	return nil
}

// GetConsignments - a method to fetch all consignments from the server
func (s *service) GetConsignments(ctx context.Context, req *pb.GetRequest, res *pb.Response) error {
	consignments := s.repo.GetAll()
	res.Consignments = consignments
	return nil
}

func main() {
	repo := &Repository{}

	// Create a new service with go-micro
	srv := micro.NewService(
		// This name must match the package name declared in your proto file
		micro.Name("shippy.service.consignment"),
	)

	// Init will parse the command line flags.
	srv.Init()

	// Register the handler
	pb.RegisterShippingServiceHandler(srv.Server(), &service{repo})

	// Run the server
	log.Println("Starting server")
	if err := srv.Run(); err != nil {
		fmt.Println(err)
	}
}
The main change here is the way we create our gRPC server, which has been neatly abstracted behind micro.NewService(), which handles registering our service. And finally, the srv.Run() function, which handles the connection itself. As before, we register our implementation, but this time with a slightly different method.
The second biggest change concerns the service methods themselves: the arguments and response types are slightly modified to take both the request and the response structs as arguments, and to return only an error. Within our methods, we set the response, which go-micro then handles.
Finally, we no longer hard-code the port. Go-micro should be configured through environment variables or command line arguments. To set the address, use MICRO_SERVER_ADDRESS=:50051. By default, Micro uses mdns (multicast dns) as the service discovery broker for local use. You wouldn't normally use mdns for service discovery in a production environment, but we want to avoid having to run something like Consul or etcd locally for testing. More on this later.
Let's update our Makefile to reflect this.
build:
	protoc -I. --go_out=plugins=micro:. \
		proto/consignment/consignment.proto
	GOOS=linux GOARCH=amd64 go build
	docker build -t shippy-service-consignment .
run:
	docker run -p 50051:50051 \
		-e MICRO_SERVER_ADDRESS=:50051 \
		shippy-service-consignment
-e is the environment variable flag; it lets you pass environment variables into your Docker container. You need one flag per variable, for example -e ENV=staging -e DB_HOST=localhost, etc.
Now, if you run $ make run, you'll have a Dockerised service with service discovery. So let's update our CLI tool to use it.
consignment-cli
package main
import (
	"context"
	"encoding/json"
	"io/ioutil"
	"log"
	"os"

	pb "github.com/EwanValentine/shippy-service-consignment/proto/consignment"
	micro "github.com/micro/go-micro"
)
const (
	defaultFilename = "consignment.json"
)
func parseFile(file string) (*pb.Consignment, error) {
	var consignment *pb.Consignment
	data, err := ioutil.ReadFile(file)
	if err != nil {
		return nil, err
	}
	if err := json.Unmarshal(data, &consignment); err != nil {
		return nil, err
	}
	return consignment, nil
}
func main() {
	service := micro.NewService(micro.Name("shippy.cli.consignment"))
	service.Init()

	client := pb.NewShippingServiceClient("shippy.service.consignment", service.Client())

	// Contact the server and print out its response.
	file := defaultFilename
	if len(os.Args) > 1 {
		file = os.Args[1]
	}

	consignment, err := parseFile(file)
	if err != nil {
		log.Fatalf("Could not parse file: %v", err)
	}

	r, err := client.CreateConsignment(context.Background(), consignment)
	if err != nil {
		log.Fatalf("Could not create consignment: %v", err)
	}
	log.Printf("Created: %t", r.Created)

	getAll, err := client.GetConsignments(context.Background(), &pb.GetRequest{})
	if err != nil {
		log.Fatalf("Could not list consignments: %v", err)
	}
	for _, v := range getAll.Consignments {
		log.Println(v)
	}
}
Here, we've imported the go-micro libraries for creating clients and replaced the existing connection code with go-micro client code, which uses service resolution instead of connecting directly to an address.
However, if you run this, it won't work. This is because we're now running our service in a Docker container, which has its own mdns, separate from the host mdns we're currently using. The easiest way to fix this is to make sure that both the service and the client are running in "dockerland", so that they both run on the same host and use the same network layer. So let's create consignment-cli/Makefile and add some entries.
build:
	GOOS=linux GOARCH=amd64 go build
	docker build -t shippy-cli-consignment .
run:
	docker run shippy-cli-consignment
As before, we want to build our binary for Linux. When we run our docker image, go-micro will default to mdns for service discovery, so no extra configuration is needed here.
Now let's create a Dockerfile for our CLI tool:
FROM alpine:latest
RUN mkdir -p /app
WORKDIR /app
ADD consignment.json /app/consignment.json
ADD consignment-cli /app/consignment-cli
CMD ["./consignment-cli"]
This is very similar to our service's Dockerfile, except that it also pulls in our JSON data file.
Now when you run $ make run in consignment-cli, you should see Created: true, just as before.
Now, it seems time to take a look at the new Docker feature: multi-stage builds. This allows us to use multiple Docker images in one Dockerfile.
This is especially useful in our case, since we can use one image to create our binary file with all the correct dependencies. And then use the second image to launch it. Let's try this, I will leave detailed comments along with the code:
consignment-service / Dockerfile
# consignment-service/Dockerfile
# We use the official golang image, which contains all the
# correct build tools and libraries. Note the `as builder`:
# this gives this container a name that we can refer to later on.
FROM golang:alpine as builder
RUN apk --no-cache add git
# Set the working directory to our current service in the gopath
WORKDIR /app/shippy-service-consignment
# Copy the current code into the working directory
COPY . .
RUN go mod download
# Build the binary, with flags which allow
# us to run it in Alpine.
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o shippy-service-consignment
# Here we use a second FROM statement; this tells Docker
# to start a new build process with this image.
FROM alpine:latest
# A security-related package - good to have
RUN apk --no-cache add ca-certificates
# As before, create a directory for our app.
RUN mkdir /app
WORKDIR /app
# Here, instead of copying the binary from our host,
# we pull the binary from the container named `builder`.
# This reaches into our previous image, finds the binary
# we built, and places it into this container. Amazing!
COPY --from=builder /app/shippy-service-consignment/shippy-service-consignment .
# Run the binary as usual! This time with the binary built in
# a separate container, with all of the correct dependencies and
# runtime libraries.
CMD ["./shippy-service-consignment"]
I'll now go through the other Dockerfiles and apply this new approach. Oh, and don't forget to remove $ go build from your Makefiles!
Ship service
Let's create our second service. We have a service (shippy-service-consignment) that handles matching a consignment of containers to a vessel best suited for that consignment. In order to match a consignment, we must send the weight and container count to our new vessel service, which will then find a vessel capable of handling that consignment.
Create a new directory in your root: $ mkdir shippy-service-vessel. Now create a subdirectory for our new protobuf service definition: $ mkdir -p shippy-service-vessel/proto/vessel. And create the new protobuf file itself: $ touch shippy-service-vessel/proto/vessel/vessel.proto.
Since the definition of protobuf is indeed the core of our software design, let's start with it.
vessel / vessel.proto
// shippy-service-vessel/proto/vessel/vessel.proto
syntax = "proto3";
package vessel;
service VesselService {
rpc FindAvailable(Specification) returns (Response) {}
}
message Vessel {
string id = 1;
int32 capacity = 2;
int32 max_weight = 3;
string name = 4;
bool available = 5;
string owner_id = 6;
}
message Specification {
int32 capacity = 1;
int32 max_weight = 2;
}
message Response {
Vessel vessel = 1;
repeated Vessel vessels = 2;
}
As you can see, this is very similar to our first service. We create a service with a single rpc method called FindAvailable. It takes a Specification type and returns a Response type. The Response type returns either a single Vessel or multiple vessels, using a repeated field.
Now we need to create a Makefile to handle our build logic and our run script: $ touch shippy-service-vessel/Makefile. Open this file and add the following:
# shippy-service-vessel/Makefile
build:
	protoc -I. --go_out=plugins=micro:. \
		proto/vessel/vessel.proto
	docker build -t shippy-service-vessel .
run:
	docker run -p 50052:50051 -e MICRO_SERVER_ADDRESS=:50051 shippy-service-vessel
This is almost identical to the first Makefile we created for our consignment service, but note that the service name and ports have changed. We can't run two docker containers on the same port, so we make use of Docker's port forwarding to map the container's port 50051 to port 50052 on the host network.
Now we need a Dockerfile using our new multi-stage format:
# vessel-service/Dockerfile
FROM golang:alpine as builder
RUN apk --no-cache add git
WORKDIR /app/shippy-service-vessel
COPY . .
RUN go mod download
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o shippy-service-vessel
FROM alpine:latest
RUN apk --no-cache add ca-certificates
RUN mkdir /app
WORKDIR /app
COPY --from=builder /app/shippy-service-vessel .
CMD ["./shippy-service-vessel"]
Finally, we can write our implementation:
vessel-service / main.go
// vessel-service/main.go
package main
import (
	"context"
	"errors"
	"fmt"

	pb "github.com/EwanValentine/shippy/vessel-service/proto/vessel"
	"github.com/micro/go-micro"
)

// Repository - the storage interface
type Repository interface {
	FindAvailable(*pb.Specification) (*pb.Vessel, error)
}

type VesselRepository struct {
	vessels []*pb.Vessel
}

// FindAvailable - checks a specification against our list of vessels;
// if the capacity and max weight are below a vessel's capacity and
// max weight, then return that vessel in the response.
func (repo *VesselRepository) FindAvailable(spec *pb.Specification) (*pb.Vessel, error) {
	for _, vessel := range repo.vessels {
		if spec.Capacity <= vessel.Capacity && spec.MaxWeight <= vessel.MaxWeight {
			return vessel, nil
		}
	}
	// If no suitable vessel was found
	return nil, errors.New("no vessel found with the given specification")
}

// Our gRPC service handler
type service struct {
	repo Repository
}

func (s *service) FindAvailable(ctx context.Context, req *pb.Specification, res *pb.Response) error {
	// Find the next available vessel
	vessel, err := s.repo.FindAvailable(req)
	if err != nil {
		return err
	}
	// Set the vessel as part of the response message type
	res.Vessel = vessel
	return nil
}

func main() {
	vessels := []*pb.Vessel{
		&pb.Vessel{Id: "vessel001", Name: "Boaty McBoatface", MaxWeight: 200000, Capacity: 500},
	}
	repo := &VesselRepository{vessels}

	srv := micro.NewService(
		micro.Name("shippy.service.vessel"),
	)
	srv.Init()

	// Register our service
	pb.RegisterVesselServiceHandler(srv.Server(), &service{repo})

	if err := srv.Run(); err != nil {
		fmt.Println(err)
	}
}
Now let's move on to the interesting part. When we create a consignment, our consignment service needs to contact the vessel service, find a suitable vessel, and update the vessel_id in the created consignment:
shippy / consignment-service / main.go
package main

import (
	"context"
	"fmt"
	"log"
	"sync"

	pb "github.com/EwanValentine/shippy-service-consignment/proto/consignment"
	vesselProto "github.com/EwanValentine/shippy-service-vessel/proto/vessel"
	"github.com/micro/go-micro"
)

type repository interface {
	Create(*pb.Consignment) (*pb.Consignment, error)
	GetAll() []*pb.Consignment
}

// Repository - a struct simulating a datastore,
// we'll swap this out for a real datastore later
type Repository struct {
	mu           sync.RWMutex
	consignments []*pb.Consignment
}

// Create - creates a new consignment
func (repo *Repository) Create(consignment *pb.Consignment) (*pb.Consignment, error) {
	repo.mu.Lock()
	updated := append(repo.consignments, consignment)
	repo.consignments = updated
	repo.mu.Unlock()
	return consignment, nil
}

// GetAll - fetches all consignments from the datastore
func (repo *Repository) GetAll() []*pb.Consignment {
	return repo.consignments
}

// The service must implement all of the methods to satisfy the service
// we defined in our proto definition. You can check the interfaces in
// the generated code for the exact method signatures.
type service struct {
	repo         repository
	vesselClient vesselProto.VesselServiceClient
}

// CreateConsignment - we created just one method on our service, create,
// which takes a context and a request, which is then handled by the gRPC server.
func (s *service) CreateConsignment(ctx context.Context, req *pb.Consignment, res *pb.Response) error {
	// Here we determine a vessel based on our consignment weight
	// and the number of containers
	vesselResponse, err := s.vesselClient.FindAvailable(context.Background(), &vesselProto.Specification{
		MaxWeight: req.Weight,
		Capacity:  int32(len(req.Containers)),
	})
	if err != nil {
		return err
	}
	log.Printf("Found vessel: %s \n", vesselResponse.Vessel.Name)

	// We set the vessel id in the consignment
	req.VesselId = vesselResponse.Vessel.Id

	// Save our consignment in the repository
	consignment, err := s.repo.Create(req)
	if err != nil {
		return err
	}
	res.Created = true
	res.Consignment = consignment
	return nil
}

// GetConsignments - a method to fetch all consignments from the server
func (s *service) GetConsignments(ctx context.Context, req *pb.GetRequest, res *pb.Response) error {
	consignments := s.repo.GetAll()
	res.Consignments = consignments
	return nil
}

func main() {
	// Create an empty datastore
	repo := &Repository{}

	// Create a new micro instance
	srv := micro.NewService(
		micro.Name("shippy.service.consignment"),
	)
	srv.Init()

	vesselClient := vesselProto.NewVesselServiceClient("shippy.service.vessel", srv.Client())

	// Register the handler on the gRPC server.
	pb.RegisterShippingServiceHandler(srv.Server(), &service{repo, vesselClient})

	// Run the server
	if err := srv.Run(); err != nil {
		fmt.Println(err)
	}
}
Here we've created a client instance for our vessel service, which lets us use the service name, i.e. shippy.service.vessel, to call the vessel service as a client and interact with its methods — in this case just one method, FindAvailable. We send our consignment weight, along with the number of containers we want to ship, as the specification to the vessel service, which returns a vessel matching that specification.
Update the consignment-cli/consignment.json file: remove the hard-coded vessel_id, because we want to confirm that our vessel service is working. Let's also add a few more containers and increase the weight. For example:
{
  "description": "Test consignment",
  "weight": 55000,
  "containers": [
    {
      "customer_id": "Customer_001",
      "user_id": "User_001",
      "origin": "Rostov-on-Don"
    },
    {
      "customer_id": "Customer_002",
      "user_id": "User_001",
      "origin": "Novorossiysk"
    },
    {
      "customer_id": "Customer_003",
      "user_id": "User_001",
      "origin": "Tuapse"
    }
  ]
}
Now run $ make build && make run in consignment-cli. You should see a response with a list of the created consignments, and in that response the vessel_id field should now be set.
So, we have two interconnected microservices and a command line interface!
In the next part of this series, we will consider saving some of this data using MongoDB. We will also add a third service and use docker-compose to locally manage our growing container ecosystem.