TypeScript library automation

Original author: Vladimir Repin
  • Translation
Let me say up front: this article is not a ready-to-use recipe. It is rather the story of my journey into the world of TypeScript and NodeJS, along with the results of my experiments. Nevertheless, at the end of the article there is a link to a GitLab repository that you can explore, perhaps borrowing something you like, or even using my experience to build your own automated solution.

Why is it necessary

So, why create libraries at all, or in this particular case, NPM packages?

  1. Reusing code between projects.

    It all started when I noticed my habit of creating a /tools folder in projects, and taking most of that folder with me whenever I moved to a new project. At some point I asked myself: why not make an NPM package instead of copy-pasting, and simply add it as a dependency to any project?
  2. Different life cycles. One of the applications depended on a large corporate component bundle. It could only be updated as a whole, even if only a single component had changed. Changes in the other components could break something, and we did not always have enough estimated time to retest everything. This model is very inconvenient. When each package serves one purpose, or a small set of related purposes, it can be updated only when that is actually needed. New versions of a package are then released only when it has real changes, not just to keep up with the rest.
  3. Separating minor code from core business logic. DDD has the principle of domain distillation: identifying pieces of code that do not belong to the core domain and isolating the domain from them. And what better way to isolate than to move that code into a separate project?
    By the way, domain distillation is very similar to the SRP principle, only at a different level.
  4. Separate code coverage. In one project I worked on, code coverage was about 30%, while the library I extracted from it has coverage of about 100%. The project lost a few coverage points, but it was in the red zone before the extraction and stayed there. The library, meanwhile, has kept those enviable numbers to this day, almost a year and four major versions later.
  5. Open source. Code that contains no business logic is the first candidate for extraction from a project, so it can be made open.

Launching new libraries is “expensive”

There is a problem: to create a library, it is not enough to get a Git repository for it. You also need to configure tasks so that the project can be built, statically checked (lint), and tested. Beyond testing, it is advisable to collect code coverage as well. On top of that, you have to publish the package manually every time. And you still need to write a readme, though with the readme I cannot help.

So, what can be done with all these boring, uninteresting tasks?

First step: Seed

I started by creating a seed project. It is a kind of starter kit, with the same structure as the first project I extracted a package from. In it I created gulp tasks and scripts that build, test, and collect coverage for the package in a single action. Now, to create another project, I only needed to clone the seed into a new folder and change origin to point to a freshly created repository on GitHub (back then I still used GitHub).

This way of creating projects has another advantage. Changes to how a project is built or tested are now made once, in the seed project, and there is no longer any need to copy-paste them. Instead, the next time I work on a downstream project, I add a second remote called seed and pull those changes from there.

And this worked for me for a while, until I used the seed in a project with several developers. I wrote a three-step instruction: take the latest master, build, publish. At some point, one of the developers somehow performed the first step and then the third. How is that even possible?

Second Step: Auto Publish

Even though it was a one-off mistake, manual actions like publishing are tedious, so I decided they had to be automated. I also needed CI to keep red commits out of master. At first I tried Travis CI, but ran into the following limitation: it treats a pull request into master the same as a commit to master, while I needed them to do different things.

One of my colleagues suggested I look at GitLab and its CI, and everything I wanted worked there.
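The distinction that mattered, running one set of scripts for merge requests and another for commits landing on master, can be sketched in a .gitlab-ci.yml along these lines. This is an illustrative fragment, not the author's actual configuration; the `release` npm script is a hypothetical stand-in for the tagging-and-publishing scripts described below:

```yaml
image: node:latest

stages:
  - test
  - publish

# Runs on merge request pipelines only: build-and-test gate before review/merge.
test:
  stage: test
  script:
    - npm ci
    - npm test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

# Runs only on commits that land on master: rebuild, retest, tag, publish.
publish:
  stage: publish
  script:
    - npm ci
    - npm run build
    - npm test
    - npm run release   # hypothetical script: creates the version tag and publishes to NPM
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
```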

I settled on the following workflow, used whenever I need to fix a bug, add new functionality, or release a new version:

  1. I create a branch from master. I write code and tests in it.
  2. I create a merge request.
  3. GitLab CI automatically runs the tests in a node:latest container.
  4. The request passes Code Review.
  5. After the request is merged, GitLab runs a second set of scripts. This set creates a tag on the commit with the version number. The version is taken from package.json if it was manually increased there; otherwise, the latest published version is taken and auto-incremented.
  6. The script builds the project and runs the tests again.
  7. In the final steps, the version tag is pushed to the repository and the package is published to NPM.

Thus, the version in the tag always matches the version of the package published from that commit. For these operations to work, you need to set environment variables with access keys for the repository and NPM in the GitLab project settings.
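The version-selection rule from step 5 can be sketched as a small function. This is a hedged illustration of the rule as described (a manual bump in package.json wins; otherwise the latest published patch version is incremented); the function names are mine, not the seed project's:

```typescript
// Compares two plain "major.minor.patch" versions (prerelease tags are
// deliberately ignored in this sketch). Returns <0, 0, or >0.
function compareSemver(a: string, b: string): number {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < 3; i++) {
    if (pa[i] !== pb[i]) return pa[i] - pb[i];
  }
  return 0;
}

// Hypothetical version rule: if package.json was bumped manually past the
// latest published version, use it as-is; otherwise auto-increment the
// patch component of the latest published version.
function resolveNextVersion(
  packageJsonVersion: string,
  latestPublished: string | null
): string {
  if (latestPublished === null) return packageJsonVersion; // first release
  if (compareSemver(packageJsonVersion, latestPublished) > 0) {
    return packageJsonVersion; // manual bump wins
  }
  const [major, minor, patch] = latestPublished.split(".").map(Number);
  return `${major}.${minor}.${patch + 1}`;
}
```

For example, with "1.2.0" in package.json and "1.2.0" already published, the next version would be "1.2.1"; a manual bump to "2.0.0" would be used verbatim.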

Last Step: Automate Everything

By this point I had automated a lot, but creating a project still required quite a few manual actions. This was progress, of course, since those actions were performed once per project rather than on every version. Still, the instruction consisted of 11 steps, and I myself got them wrong a couple of times. So I decided that since I had started automating, I should see it through to the end.

For this full automation to work, the machine needs to have three files in the .ssh folder. I reasoned that this folder is reasonably well protected, since the id_rsa private key is already stored there. That key file is also used to let GitLab CI push tags to the repository.

The second file I put there is “gitlab”, containing an access key for the GitLab API, and the third is “npm”, an access key for publishing the package.

And then the magic begins. All you need to create a new package is to run one command in the seed folder: "gulp startNewLib -n [npmName]/[libName]". Done: the package is created, ready for development and auto-publishing.

For example, the package "vlr/validity" was created this way.

This command does the following:

  1. Creates a project on GitLab.
  2. Clones the seed into a local folder next to the one the command is run from.
  3. Changes origin to the project created in step 1.
  4. Pushes the master branch.
  5. Creates environment variables in the project from the files in the .ssh folder.
  6. Creates a firstImplementation branch.
  7. Changes the library name in package.json, then commits and pushes the branch.

All you need after this is to put the code there and create a merge request.
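As a sketch of what such a command has to derive from its single argument, here is a hypothetical helper that splits the "[npmName]/[libName]" argument into the names used in the steps above. The interface and field names are my assumptions, not the seed project's actual code; npm scoped package names take the form @scope/name:

```typescript
// Names a hypothetical startNewLib task would derive from its "-n" argument.
interface NewLibNames {
  packageName: string; // scoped name for package.json, e.g. "@vlr/validity"
  projectName: string; // GitLab project name, which must match the npm name
  cloneDir: string;    // local folder next to the seed to clone into
}

// Parses "<npmName>/<libName>" into the pieces above; extra "/" segments
// are rejected along with missing ones.
function parseLibArg(arg: string): NewLibNames {
  const parts = arg.split("/");
  if (parts.length !== 2 || !parts[0] || !parts[1]) {
    throw new Error(`expected "<npmName>/<libName>", got "${arg}"`);
  }
  const [scope, name] = parts;
  return {
    packageName: `@${scope}/${name}`,
    projectName: name,
    cloneDir: `../${name}`,
  };
}
```

For instance, `parseLibArg("vlr/validity")` would yield the package name "@vlr/validity" and the GitLab project name "validity".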

As a result, and this is something to be proud of, it takes about five minutes from the moment you decide to extract some code into a separate project until the first version is published. Four of those minutes are taken up by two GitLab CI pipelines, and one minute is spent running the command above, dragging the code over, and clicking the buttons in the GitLab interface to create and then merge the request.

There are some limitations: the GitLab name must match the name on npm. Also, this command, unlike the rest of the functionality in the seed project, works only on Windows.

If this seed project interests you, you can study it at the following link.
