Getting acquainted with the Moleculer microservice framework

Hi, %habrauser%!

Today I want to tell you about a microservice framework that is, in my opinion, great: Moleculer.



Initially, this framework was written for Node.js, but it has since been ported to other languages such as Java, Go, Python, and .NET, and more implementations will most likely appear in the near future. We have been using it in production in several products for about a year, and it is hard to put into words what a relief it was after Seneca and our own homegrown solutions. We got everything we need out of the box: metrics collection, caching, load balancing, fault tolerance, pluggable transports, parameter validation, logging, concise method declaration, several ways for services to interact, mixins, and much more. Now, let's take it in order.

Introduction


The framework essentially consists of three components (actually more, but you will learn about that below).

Transporter


Responsible for service discovery and communication between services. It is an interface that you can implement yourself if you really want to, or you can use one of the ready-made implementations that ship with the framework. Seven transports are available out of the box: TCP, Redis, AMQP, MQTT, NATS, NATS Streaming, and Kafka; see the documentation for details. We use the Redis transport, but we plan to switch to TCP once it leaves its experimental state.

In practice, we do not interact with this component when writing code; you just need to know what it is. The transport to use is specified in the config, so switching from one transport to another is simply a matter of changing the config. That's it. Like this:

// ./moleculer.config.js
module.exports = {
  transporter: 'redis://:pa$$w0rd@127.0.0.1:6379',
  // ... other parameters
};

By default, data travels in JSON format, but you can use anything: Avro, MsgPack, Notepack, ProtoBuf, Thrift, etc.
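
Switching the serializer is, again, just a line in the broker config (a sketch; the corresponding npm package, e.g. msgpack5 for MsgPack, has to be installed separately):

// ./moleculer.config.js (a sketch)
module.exports = {
  transporter: 'redis://:pa$$w0rd@127.0.0.1:6379',
  serializer: 'MsgPack', // JSON is the default
};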

Service


The class from which we inherit when writing our microservices.

Here is the simplest service, without any methods, which will nevertheless be discovered by other services:

// ./services/telemetry/telemetry.service.js
const { Service } = require('moleculer');

module.exports = class TelemetryService extends Service {
  constructor(broker) {
    super(broker);
    this.parseServiceSchema({
      name: 'telemetry',
    });
  }
};

ServiceBroker


To simplify a bit, this is a layer between the transport and the services. When one service wants to interact with another, it does so through the broker (examples below). The broker handles load balancing (several strategies are supported, including custom ones; round-robin by default), keeping track of which services are alive, which methods they expose, and so on. Under the hood, the ServiceBroker uses another component for this, the Registry, but I will not dwell on it; we do not need it for a first acquaintance.
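
The balancing strategy, as far as I remember, is chosen in the broker config through the registry section; a hedged sketch:

// ./moleculer.config.js (a sketch; option names as in the docs of that time)
module.exports = {
  transporter: 'redis://:pa$$w0rd@127.0.0.1:6379',
  registry: {
    strategy: 'RoundRobin', // or 'Random', 'CpuUsage', a custom strategy, etc.
    preferLocal: true,      // prefer an instance living on the same node, if any
  },
};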

Having a broker gives us an extremely convenient thing. I will try to explain, but I have to step aside a little first. In the context of the framework there is such a notion as a node. In simple terms, a node is a process in the operating system (i.e. what appears when we type "node index.js" in the console, for example). Each node is a ServiceBroker with a set of one or more microservices. Yes, you read that right: we can compose our stack of services however we like. Why is this convenient? For development we start a single node in which all microservices run at once (one instance of each): just one process in the system, with the ability to very easily hook up hot reload, for example. In production there is a separate node for each instance of a service. Or a mix, where some services live in one node and the rest in another, and so on (though I don't know why you would want that; I mention it just so you understand that it is possible).

Here is our index.js:
// ./index.js
const { resolve } = require('path');
const { ServiceBroker } = require('moleculer');
const config = require('./moleculer.config.js');

const {
  SERVICES,
  NODE_ENV,
} = process.env;

const broker = new ServiceBroker(config);

// Load all services from ./services, or only those listed in SERVICES
broker.loadServices(
  resolve(__dirname, 'services'),
  SERVICES
    ? `*/@(${SERVICES.split(',').map(i => i.trim()).join('|')}).service.js`
    : '*/*.service.js',
);

broker.start().then(() => {
  if (NODE_ENV === 'development') {
    broker.repl(); // interactive console for calling actions during development
  }
});


If the environment variable is not set, all services from the directory are loaded; otherwise only those matching the mask. By the way, broker.repl() is another handy feature of the framework. When starting in development mode we immediately get, right in the console, an interface for calling methods (what you would otherwise do through Postman, for example, with a microservice that talks over HTTP), only here it is much more convenient: the interface lives in the same console where npm start was run.

Interservice interaction


There are three ways to do it:

call


The most commonly used one: you make a request and get back a response (or an error).

// Метод сервиса "report", который вызывает метод сервиса "csv".async getCsvReport({ jobId }) {
  const rows = [];
  // ...returnthis.broker.call('csv.stringify', { rows });
}

emit


Used when we just want to notify other services about an event and do not need a result.

// Метод сервиса "user" триггерит событие о регистрации.async registerUser({ email, password }) {
  // ...this.broker.emit('user_registered', { email });
  returntrue;
}

Other services can subscribe to this event and react accordingly. Optionally, with the third argument you can explicitly list the services (groups) that should receive the event.

The important point is that only one instance of each service type will receive the event: if we have 10 "mail" and 5 "subscription" services subscribed to this event, in fact only two instances will get it, one "mail" and one "subscription".

broadcast


The same as emit, but without that restriction: all 10 "mail" and all 5 "subscription" services will receive the event.
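
For completeness, here is roughly how a service subscribes to such an event via the events section of its schema (a sketch; note that the handler signature differs between framework versions, in newer releases the handler receives a context object instead of the raw payload):

// ./services/mail/mail.service.js (a hedged sketch of an event subscriber)
const { Service } = require('moleculer');

module.exports = class MailService extends Service {
  constructor(broker) {
    super(broker);
    this.parseServiceSchema({
      name: 'mail',
      events: {
        // Fires when some service emits or broadcasts 'user_registered'
        user_registered(payload) {
          this.logger.info(`Sending a welcome email to ${payload.email}`);
        },
      },
    });
  }
};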

Validation of parameters


By default, parameters are validated with fastest-validator, which, as the name suggests, is very fast. But nothing prevents you from plugging in any other validator, for example the familiar joi, if you need more advanced validation.

When we write a service we inherit from the base Service class and declare business-logic methods in it, but these methods are "private": they cannot be called from the outside (from another service) until we explicitly expose them by declaring them in the special actions section during service initialization (public service methods are called actions in the framework's terminology).

Example method declaration with validation
const { Service } = require('moleculer');

module.exports = class JobService extends Service {
  constructor(broker) {
    super(broker);
    this.parseServiceSchema({
      name: 'job',
      actions: {
        update: {
          params: {
            id: { type: 'number', convert: true },
            name: { type: 'string', empty: false, optional: true },
            data: { type: 'object', optional: true },
          },
          async handler(ctx) {
            return this.update(ctx.params);
          },
        },
      },
    });
  }

  async update({ id, name, data }) {
    // ...
  }
};
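
To illustrate what this buys us, a couple of hypothetical calls (say, from another service or from the REPL): convert: true coerces a string id into a number, while a call without an id is rejected with a validation error before our handler runs:

// Hypothetical callers, just to show the behaviour
await broker.call('job.update', { id: '42', name: 'retry failed rows' }); // id arrives as the number 42
await broker.call('job.update', { name: 'no id at all' });                // rejects with a validation error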


Mixins


Mixins are used, for example, to initialize database connections, and help avoid duplicating code from service to service.

Mixin example for initializing connection to Redis
const Redis = require('ioredis');
module.exports = ({ key = 'redis', options } = {}) => ({
  settings: {
    [key]: options,
  },
  // Lifecycle hooks: create the client on service creation,
  // connect on start and disconnect on stop
  created() {
    this[key] = new Redis(this.settings[key]);
  },
  async started() {
    await this[key].connect();
  },
  stopped() {
    this[key].disconnect();
  },
});


Using the mixins in a service
const { Service, Errors } = require('moleculer');
const redis = require('../../mixins/redis');
const server = require('../../mixins/server');
const router = require('./router');
const {
  REDIS_HOST,
  REDIS_PORT,
  REDIS_PASSWORD,
} = process.env;
const redisOpts = {
  host: REDIS_HOST,
  port: REDIS_PORT,
  password: REDIS_PASSWORD,
  lazyConnect: true,
};
module.exports = class AuthService extends Service {
  constructor(broker) {
    super(broker);
    this.parseServiceSchema({
      name:   'auth',
      mixins: [redis({ options: redisOpts }), server({ router })],
    });
  }
}


Caching


Method (action) calls can be cached in one of several ways: Memory, LRU memory, or Redis. Optionally you can specify which parameters make up the cache key (by default a hash of the params object is used) and what TTL to apply.

Example of cached method declaration
const { Service } = require('moleculer');

module.exports = class InventoryService extends Service {
  constructor(broker) {
    super(broker);
    this.parseServiceSchema({
      name: 'inventory',
      actions: {
        getInventory: {
          params: {
            steamId: { type: 'string', pattern: /^76\d{15}$/ },
            appId: { type: 'number', integer: true },
            contextId: { type: 'number', integer: true },
          },
          cache: {
            keys: ['steamId', 'appId', 'contextId'],
            ttl:  15,
          },
          async handler(ctx) {
            return true;
          },
        },
      },
    });
  }
 // ...
}


Which cacher to use is set via the ServiceBroker config.
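
For example, a minimal sketch of switching the cacher on in the broker config:

// ./moleculer.config.js (a sketch)
module.exports = {
  transporter: 'redis://:pa$$w0rd@127.0.0.1:6379',
  cacher: 'redis://:pa$$w0rd@127.0.0.1:6379', // or simply 'Memory' for an in-process cache
};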

Logging


Here, too, everything is quite simple. There is a decent built-in logger that writes to the console, and custom formatting can be configured. Nothing prevents you from plugging in any other popular logger, be it winston or bunyan; there is a detailed guide in the documentation. We personally use the built-in logger; in production the custom formatter is just a couple of lines of code that spams JSON into the console, from where it gets into Graylog with the help of the log driver.
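
As an illustration, such a JSON-spamming formatter might look roughly like this (a sketch assuming the logFormatter broker option available in the releases we were using; the exact fields on bindings are an assumption on my part, so check the logging docs for your version):

// ./moleculer.config.js (a hedged sketch of a custom log formatter)
module.exports = {
  // ...
  logFormatter(level, args, bindings) {
    // Emit one JSON line per log record; a log driver can then ship it to Graylog
    return JSON.stringify({
      time: new Date().toISOString(),
      level,
      node: bindings.nodeID,   // assumed binding field
      module: bindings.mod,    // assumed binding field
      message: args
        .map(a => (typeof a === 'string' ? a : JSON.stringify(a)))
        .join(' '),
    });
  },
};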

Metrics


If you wish, you can collect metrics for each method and then trace it all in something like Zipkin. Like caching, this is configured when a method (action) is declared.
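
On the broker side, turning metrics on is a single flag; where exactly the traces end up (e.g. a Zipkin exporter) and the per-action options depend on the framework version, so treat this as a sketch:

// ./moleculer.config.js (a sketch; flag names as in the docs of that time)
module.exports = {
  // ...
  metrics: true,   // emit metric/tracing events for action calls
  metricsRate: 1,  // sample every call; lower this in production if needed
};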

Fault tolerance


The framework has a built-in circuit breaker, controlled through the ServiceBroker settings. If some service starts failing and the number of failures exceeds a certain threshold, it is marked as unhealthy and requests to it are severely limited until it stops throwing errors.
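
The knobs sit in the broker config; a hedged sketch (option names as I recall them from the docs, double-check them for your version):

// ./moleculer.config.js (a sketch)
module.exports = {
  // ...
  circuitBreaker: {
    enabled: true,
    threshold: 0.5,          // open the breaker when more than 50% of recent calls fail...
    minRequestCount: 20,     // ...but only after at least 20 calls in the window
    windowTime: 60,          // sliding window size, in seconds
    halfOpenTime: 10 * 1000, // after 10 seconds let a trial request through
  },
};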

As a bonus, there is also a fallback that can be configured individually for each method (action) if we expect it may fail: for example, to return cached data or a stub.
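
On the calling side this looks roughly like passing fallbackResponse in the call options; a hedged sketch (getCachedInventory here is a hypothetical helper of ours, and a plain value can be used instead of a function):

// Somewhere inside a service method (a sketch)
const inventory = await this.broker.call(
  'inventory.getInventory',
  { steamId, appId, contextId },
  {
    timeout: 5000,
    // Used instead of throwing if the call fails or the circuit is open
    fallbackResponse: (ctx, err) => this.getCachedInventory(ctx.params),
  },
);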

Conclusion


Adopting this framework was a breath of fresh air for me: it saved us from a huge amount of pain (although microservice architecture is one big pain in itself) and from reinventing the wheel, and it made writing the next microservice simple and transparent. There is nothing superfluous in it, it is simple and very flexible, and you can write your first service an hour or two after reading the documentation. I will be glad if this material turns out to be useful to you and you decide to try this gem in your next project, as we did (and have never regretted it). All the best!
