You need to grow into microservices, not start with them

Original author: Nick Janetakis
  • Translation

Let's talk about when microservices are needed and when they're not. Spoiler: it depends on the project.

We software developers have a rather interesting profession. We can code away happily all day, then read an article about something, and it calls our entire work into question because some Netflix said XYZ.

Just because of one person's or one company's opinion, you begin to doubt everything you have been doing for years, even if it all worked perfectly.

You are not Google (unless you are Google)

When we read Hacker News and other news sites, we often see engineering posts from Google, Netflix, Amazon and Facebook, and they love to talk about the hundreds or thousands of services they run. They talk about the benefits of doing everything their own way. This has been the trend of the past few years.

But let's face it: you probably don't have 1,000 developers working on a massive project with more than 10 years of history.

Just because Google does this does not mean that you should do it too.

We work in a completely different galaxy. Google faces challenges that we’ll probably never encounter, but at the same time we can do things that Google cannot.

How do most software projects start?

Many projects start with one person doing all the work. There are a million examples, but let's look at Shopify. Initially, the service was written by Tobias Lütke (it was built on Ruby on Rails and, by the way, still runs on Rails).

Do you think Tobias sat around agonizing over the ideal microservice architecture before writing the first line of code?

Hell no. I wasn't there when the first version of Shopify was developed (originally it was just an online snowboard shop), but if Tobias is anything like me (a typical developer), then the process looked like this:

  1. Learn new technologies while writing the initial product.
  2. Write rather messy but fully working code.
  3. See everything work together and get excited.
  4. Refactor and improve the code in firefighting mode as problems arise.
  5. Repeat this cycle while adding new features and running in production.

This may seem like a very simple cycle, but it took me about 20 years of programming to understand how deep it runs.

You don't become a better programmer by theorizing about the optimal setup before writing the first line of code. You become better by writing a lot of code with the absolute, explicit intention of replacing almost everything you write with better code as soon as you run into real problems.

The original code you replaced is not wasted time or effort. Over time, it is precisely what raises your level. That is the secret ingredient.

Let's talk about code abstractions.

As developers, we have all heard the phrase "DRY: don't repeat yourself", and in general it is sensible advice. But sometimes repeating yourself is worth it.

It is worth repeating yourself when the alternative is abstracting something you don't yet fully understand, which produces what is called a leaky abstraction.

I usually write the same thing three times before even THINKING about refactoring the code to remove duplication. Often it's only on the 4th or 5th time that I take any action.

You really need to see the code duplicated in several different situations before it becomes clear what should become a parameter and what should be extracted from the places where it originally lived.
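As a toy illustration of that cycle (all names and numbers here are hypothetical, not from the article): the same tax calculation is first written inline in two places, and only once the duplication is visible does it become obvious what varies and therefore what should be a parameter.

```python
# Hypothetical example of "duplicate first, extract later".

# Round 1 and 2: the same logic written inline in two places.
order_total = 100.0
order_total_with_tax = order_total + order_total * 0.20

invoice_total = 250.0
invoice_total_with_tax = invoice_total + invoice_total * 0.20

# Only after seeing the duplication is it clear what varies
# (the amount and the rate), so those become parameters.
def with_tax(amount, rate=0.20):
    return amount + amount * rate

# The extracted function replaces both inline computations
# without changing behavior.
assert with_tax(100.0) == order_total_with_tax
assert with_tax(250.0) == invoice_total_with_tax
```

Extracting any earlier, before the second or third copy exists, would have meant guessing at the parameters instead of reading them off real usage.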

Levels of abstraction, from embedded code to external libraries:

  1. Write inline code.
  2. Duplicate that code in several different places.
  3. Extract the duplicated code into functions, etc.
  4. Use those abstractions for a while.
  5. See how this code interacts with other code.
  6. Extract the shared functionality into an internal library.
  7. Use the internal library for a long time.
  8. Really understand how all the pieces fit together.
  9. Create an external library (open source, etc.), if that makes sense.

The point is that you cannot "invent" a good library or framework. Almost all the truly successful tools we use today came from real projects: our favorite tools were extracted from real internal use cases.

Rails is a great example. DHH (the author of Rails) didn't just wake up one day and say: "Aha! Time to create models/, controllers/ and views/ directories!"

No. He built Basecamp (a real product), certain patterns emerged, and those patterns were generalized and then extracted from Basecamp into Rails. That process is still ongoing today, and in my opinion it is the only reason Rails remains so successful.

It is a perfect storm of well-proven (read: not dreamed up in theory) abstractions combined with a programming language that lets you write pleasant code. It also explains why nearly all the "Rails, but in XYZ" frameworks fail: they skip key links in the chain of abstractions and assume they can simply clone Rails.

From abstractions to microservices

For me, microservices are just another level of abstraction. It's not necessarily step 10 in the list above, because not every library is destined to become a microservice, but conceptually it's similar.

Microservices are not something you start with, just as you wouldn't try to create the perfect open-source library before writing a single line of code. At that point you don't even know what exactly you are building.

A microservice architecture is something a project can evolve into over time, once you run into real problems.

You may never encounter those problems, and many of them can be solved in other ways. Look at Basecamp and Shopify: both work fine as monolithic applications.

I don't think anyone would call them small, even though they don't operate at Google's scale.

Shopify earns $17 million a month on a monolith

As of mid-2018, Shopify publicly reported that over 600,000 online stores run on its platform.

Shopify is a SaaS application whose cheapest plan is $29 per month, and my feeling is that many companies choose the $79-per-month plan. Either way, even if all 600,000 customers were on the cheapest $29 plan, that would be $17.4 million in monthly revenue from the SaaS line of their business alone.
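The lower-bound estimate above is simple arithmetic, and it checks out:

```python
# Sanity check of the revenue estimate: 600,000 stores, all assumed
# to be on the cheapest $29/month plan (a deliberate lower bound).
stores = 600_000
cheapest_plan_usd = 29
monthly_revenue_usd = stores * cheapest_plan_usd
assert monthly_revenue_usd == 17_400_000  # i.e. $17.4 million per month
```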

Basecamp is another great monolithic application. What's interesting about Basecamp is that they have only about 50 employees, and only a fraction of them are developers working on the main product.

My point is that you can get VERY far without going down the microservices rabbit hole. Don't create microservices just because.

When should I use microservices?

It's a judgment call. This is one of those things you can't settle by googling "microservices vs. monolith". If you really need microservices, you'll already know it.

But you might be in a situation where you have a lot of developers who are best deployed on separate parts of the application. Having dozens of teams working on different components of a product in isolation is one legitimate reason to use microservices.

Keep in mind that if you have a small team that is slowly growing, you can start with one or two microservices. In that situation you probably shouldn't break the monolith into 100 microservices right off the bat.

Is the game worth the candle?

It is also worth mentioning that the move to microservices brings its own set of problems. You trade one set of problems for another, so you need to weigh the pros and cons and decide whether the game is worth the candle for your project specifically.

One of the main problems is monitoring. Suddenly you have a bunch of services, possibly written in different technology stacks and running on multiple machines, and you need a way to observe them in detail.

This is a hard problem, because ideally you'd want all your microservices to report to a single monitoring service.
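To make the problem concrete, here is a toy sketch of one basic technique real tracing systems rely on: propagating a correlation ID across services so their log lines can be stitched back together. Everything here (the service names, the `X-Correlation-ID` header, the in-memory log store) is invented for illustration; it is not a real monitoring stack.

```python
import uuid

LOG = []  # stands in for a centralized log store

def log(service, correlation_id, message):
    LOG.append({"service": service,
                "correlation_id": correlation_id,
                "msg": message})

def gateway_handler(headers):
    # The edge service creates the ID if the caller did not send one.
    cid = headers.get("X-Correlation-ID") or str(uuid.uuid4())
    log("gateway", cid, "received request")
    billing_handler({"X-Correlation-ID": cid})  # simulated downstream call
    return cid

def billing_handler(headers):
    # Downstream services reuse the ID instead of minting their own.
    cid = headers["X-Correlation-ID"]
    log("billing", cid, "charged customer")

cid = gateway_handler({})
# Every log line for this request carries the same ID, so the request
# can be traced across both "services".
assert all(entry["correlation_id"] == cid for entry in LOG)
```

Even this trivial version has to be implemented consistently in every service, across every stack, which is exactly why hand-rolling it rarely stays trivial.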

You probably don't want to build your own tooling, because that can turn into a full-time job in itself. That's one reason companies like LightStep succeed. It's one of the most interesting monitoring services I've come across.

Their product is geared more toward large-scale applications (you can see why), but it works for small projects too. I heard about them recently because they presented at Cloud Field Day 4.

In any case, monitoring is only one of many difficulties, but I decided to mention it because it is one of the most painful problems.

Final thoughts

Basically, I wrote this article for two reasons:

First, two weeks ago I attended Cloud Field Day 4 and ended up taking part in a group podcast on a related topic. It should come out in a few months, but I wanted to expand on a few points here.

Second, as an author of online courses, I get a lot of questions about how to structure applications.

Many developers get hung up trying to split their application into isolated services before they've written the first line of code. It goes as far as trying to use separate databases for different parts of the application from the very start.

This keeps them from moving forward, and as a fellow developer I know how hard it is to be stuck in indecision (I've been there!).

By the way, I am currently working on a fairly large SaaS application, a course hosting platform. Right now I'm working on the project alone, and you can be sure I just opened my editor and started writing code on day one.

I'm going to keep the project a completely monolithic application until it makes sense to introduce microservices, though my gut tells me that moment may never come.

What do you think about it? Let me know in the comments.
