Improving Front-End Stability

Continuing the previous article on interface testing at Tinkoff Bank, I will describe how we write unit tests for JavaScript.



There are plenty of articles about the TDD and BDD approaches to testing, so I will not go over their features again. This article is aimed primarily at beginners and at developers who just want to start writing tests, although more experienced developers may also find something useful here.

A few words about development


First, a few words about how we develop the front end at Tinkoff Bank, so you know which tools make our life easier.

Development process steps

  1. Formulation of the problem
  2. Writing technical specifications
  3. Design development
  4. Development of code and unit tests
  5. QA Testing and Debugging
  6. Release to production

Before a task reaches the developer, it goes through the specification stage. In the ideal case, the output is a task in JIRA, a description in the WIKI, and ready-made designs. The task then goes to the developer, and once development is complete it is handed over to the testing department. If testing succeeds, the release goes public.

In our work we use the following tools (chosen, among other things, because they simplify the development process and the interaction with managers):
  1. Atlassian Jira;
  2. Atlassian Stash;
  3. Atlassian Confluence;
  4. JetBrains TeamCity;
  5. JetBrains IntelliJ Idea.

All Atlassian products integrate seamlessly with each other and with TeamCity.

As our Git branch workflow, we decided to use the familiar Gitflow Workflow, which you can read more about here.

In a few words, it comes down to the following:
  1. there are two main branches: master, which corresponds to the latest release, and develop, which contains all the latest changes;
  2. for each release, a release branch is created from develop, for example release-1.0.0;
  3. further fixes for the release are merged into the release branch;
  4. after a successful release, release-1.0.0 is merged into master and can be deleted.

Atlassian Stash lets you set up such a workflow in a couple of clicks and work with it comfortably, allowing you to:
  1. validate branch names;
  2. prohibit merging directly into parent branches;
  3. automatically merge pull requests from the release branch into develop and, if conflicts arise, automatically create a branch to resolve the conflict;
  4. prohibit merging a pull request if the JIRA task is in the wrong status, for example "In Progress" instead of "Ready".

The integration of Atlassian Stash with TeamCity is also very convenient. We configured it so that when a new pull request is created or an existing one is updated, TeamCity automatically builds and tests the code for that pull request, and in Stash we forbid merging until the build and tests succeed. This keeps the code in the parent branches working.

A bit of theory


Front-end testing at Tinkoff Bank covers only the critical pieces of code: business logic, calculations, and shared components. The visual part of the UI is tested by our QA department. When writing tests, we follow these principles:
  1. the code should be modular, not monolithic, since tests are written per unit;
  2. components should be loosely coupled;
  3. each unit should solve one problem rather than try to be universal.

If any of these principles is violated, the code needs to be reworked to make it easier to test.

It is best if components are independent of each other, but that is not always the case. When it isn't, we use decomposition:
  1. we test each component individually and make sure that its tests pass and that it works correctly;
  2. we test the dependent component separately from the other modules using mocks.

Since we test behavior by describing how the code should ideally work, we need to define a standard for the code's behavior and also provide for the situations in which the code can break. That is, a test should describe the correct behavior of the code and react to error situations. This approach produces a code specification as a by-product and removes the risk of breaking things during refactoring.

With this approach, development comes down to three steps:
  1. write a test and watch it fail;
  2. write code to pass the test successfully;
  3. refactor code.



Developer Toolkit


To write tests, you need to choose a test runner and a test framework. The following technology stack is used in our development process:
  1. Jasmine BDD Testing framework;
  2. SinonJS;
  3. Karma;
  4. PhantomJS or any other browser;
  5. NodeJS;
  6. Gulp.

We run tests both locally and in CI (TeamCity). In CI, tests run in PhantomJS, and reports are generated with karma-teamcity-reporter.
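For orientation, a minimal karma.conf.js for such a setup might look roughly like this (the file paths and plugin list are assumptions, not our actual configuration):

// karma.conf.js - a minimal sketch, not the project's real config
module.exports = function(config) {
	config.set({
		frameworks: ['jasmine', 'sinon'],//requires the karma-jasmine and karma-sinon plugins
		files: ['src/**/*.js', 'test/**/*.spec.js'],//hypothetical paths
		browsers: ['PhantomJS'],//karma-phantomjs-launcher
		reporters: ['progress', 'teamcity'],//karma-teamcity-reporter for CI builds
		singleRun: true
	});
};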

Practice


So, let's get down to practice. I have already put together a small draft project, whose code can be found here. What to do with it should, I think, be clear to everyone.

I will not describe how to configure Karma and Gulp; everything is covered in the official documentation on the project sites.

We will run Karma together with Gulp. We'll write two simple tasks: test, to run the tests, and watch, to watch for changes and re-run the tests automatically.
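Roughly, these tasks might look like this (a minimal sketch assuming Karma 0.13+ and a karma.conf.js in the project root; the actual gulpfile in the demo project may differ):

var gulp = require('gulp');
var Server = require('karma').Server;

//run the tests once and exit
gulp.task('test', function(done) {
	new Server({
		configFile: __dirname + '/karma.conf.js',
		singleRun: true
	}, done).start();
});

//keep Karma running and re-run the tests on every change
gulp.task('watch', function(done) {
	new Server({
		configFile: __dirname + '/karma.conf.js',
		singleRun: false,
		autoWatch: true
	}, done).start();
});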

JasmineBDD

Jasmine has practically everything you might need for testing the UI: matchers, spies, setUp/tearDown, stubs, timers.

Let's look at the matchers in a bit more detail:
toBe - identity (===)
toEqual - equality of values
toMatch - regular expression match
toBeDefined / toBeUndefined - check whether a value is defined
toBeNull - null check
toBeTruthy / toBeFalsy - truthy or falsy
toContain - a substring in a string or an element in an array
toBeLessThan / toBeGreaterThan - comparison
toBeCloseTo - comparison of fractional values
toThrow - catching exceptions

Every matcher can be negated with not, for example:
expect(false).not.toBeTruthy()

Consider a simple example: suppose you want to implement a function that returns the sum of two numbers.
The first thing to do is write a test:
describe('Matchers spec', function() {
	it("should return sum of 2 and 3", function() {
		expect(sum(2, 3)).toEqual(5);
	});
})


Now let's make the test pass:
function sum(a, b) {
    return a + b;
}


Now a slightly more complicated example: a function that calculates the area of a circle. As last time, we write the test first and then the code.
describe('Matchers spec', function() {
	it("should return area of circle with radius 5", function() {
		expect(circleArea(5)).toBeCloseTo(78.5, 1);
	});
})


function circleArea(r) {
	return Math.PI * r * r;
}


Since we have tests, we can refactor without fear and use the Math.pow function:
function circleArea(r) {
	return Math.PI * Math.pow(r, 2);
}


The tests pass again, so the code still works.

Matchers are quite easy to use, and there’s no point in dwelling on them in more detail. Let's move on to more advanced functionality.

In most situations you need to test functionality that requires preliminary initialization, for example of environment variables. To avoid repeating this initialization in every spec and to get rid of code duplication, Jasmine provides setUp and tearDown hooks.

beforeEach - runs before each spec
afterEach - runs after each spec
beforeAll - runs once before all the specs
afterAll - runs once after all the specs

There are two ways to share resources between test cases:
  1. use a local variable declared in the describe block (a small sketch follows this list);
  2. use this.

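A minimal sketch of the first approach, sharing a plain local variable (the names here are illustrative):

describe('Share resources via a local variable', function() {
	var testObj;//visible to every spec in this describe block
	beforeEach(function() {
		testObj = { counter: 0 };
	});
	it('should start from zero', function() {
		expect(testObj.counter).toEqual(0);
	});
});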
To better understand how you can use setUp and tearDown, I’ll immediately give an example using Spies.
The code
describe('Learn Spies, setUp and tearDown', function() {
	beforeEach(function(){
		this.testObj = {//use this to share resources between specs
			myfunc: function(x) {
				someValue = x;
			}
		};
		spyOn(this.testObj, 'myfunc');//create the spy
	});
	it('should call myfunc', function(){
		this.testObj.myfunc('test');//call the function
		expect(this.testObj.myfunc).toHaveBeenCalled();//check that myfunc was called
	});
	it('should call myfunc with value \'Hello\'', function(){
		this.testObj.myfunc('Hello');
		expect(this.testObj.myfunc).toHaveBeenCalledWith('Hello');//check that myfunc was called with 'Hello'
	});
});


spyOn essentially wraps our method with a spy that records the call arguments and the fact that the method was called (by default the spy does not invoke the original implementation; in Jasmine 2.x you can opt in with and.callThrough()).
These are not all the features of Spies. You can read more in the official documentation.
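For example, a spy can be told to return a canned value or to call through to the real implementation (a small sketch using Jasmine 2.x syntax; the calc object is illustrative):

describe('More Spies features', function() {
	var calc = {
		double: function(x) { return x * 2; }
	};
	it('can return a stubbed value', function() {
		spyOn(calc, 'double').and.returnValue(42);//replace the result
		expect(calc.double(3)).toEqual(42);
		expect(calc.double.calls.count()).toEqual(1);//how many times the spy was called
	});
	it('can call the original implementation', function() {
		spyOn(calc, 'double').and.callThrough();//keep the real behavior, but record calls
		expect(calc.double(3)).toEqual(6);
	});
});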
JavaScript is asynchronous by nature, so it is hard to imagine code that needs testing without asynchronous calls. It all comes down to the following:
  1. beforeEach, it, and afterEach accept an optional done callback, which must be called once the asynchronous call completes;
  2. the following specs will not run until done is called or until DEFAULT_TIMEOUT_INTERVAL expires.

The code
describe('Try async Specs', function() {
	var val = 0;
	it('should call async', function(done) {
		setTimeout(function(){
			val++;
			done();
		}, 1000);
	});
	it('val should equal 1', function(){
		expect(val).toEqual(1);//runs only after done has been called, or after DEFAULT_TIMEOUT_INTERVAL expires
	});
});


SinonJS

We use SinonJS mainly for testing functionality that makes AJAX requests to an API. There are several ways to test AJAX with SinonJS:
  1. create a stub for the function that performs the AJAX call using sinon.stub (a small sketch of this approach follows the list);
  2. use the fake XMLHttpRequest, which replaces the native XMLHttpRequest with a fake one;
  3. create a more flexible fakeServer that will respond to all AJAX requests.

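A minimal sketch of the first approach, stubbing a hypothetical api.getUser wrapper instead of going through XMLHttpRequest (the api object and its method are illustrative, not part of the demo project):

describe('Stub an AJAX wrapper with sinon.stub', function() {
	var api = {
		getUser: function(id, callback) {
			//in real code this would perform an AJAX request
		}
	};
	it('should pass the canned response to the callback', function() {
		var stub = sinon.stub(api, 'getUser');
		stub.yields({ "status" : "success" });//make the stub call its callback argument
		var callback = jasmine.createSpy('callback');
		api.getUser(1, callback);
		expect(callback).toHaveBeenCalledWith({ "status" : "success" });
		stub.restore();//put the original method back
	});
});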
We use the more flexible fakeServer approach, which allows us to respond to AJAX requests with pre-prepared JSON mocks, so the logic of working with the API can be tested in more detail.
The code
describe('Use SinonJS fakeServer', function() {
	var fakeServer, spy, response = JSON.stringify({ "status" : "success"});
	beforeEach(function(){
		fakeServer = sinon.fakeServer.create();//create the fake server
	});
	afterEach(function(){
		fakeServer.restore();//reset the fake server
	});
	it('should call AJAX request', function(done){
		var request = new XMLHttpRequest();
		spy = jasmine.createSpy('spy');//create a spy
		request.open('GET', 'https://some-fake-server.com/', true);
		request.onreadystatechange = function() {
			if(request.readyState == 4 && request.status == 200) {
			    spy(request.responseText);//request completed
				done();
		    }
		};
		request.send(null);
		//respond to the first request
		fakeServer.requests[0].respond(
	        200,
	        { "Content-Type": "application/json" },
	        response
	    );
	});
	it('should respond with JSON', function(){
		expect(spy).toHaveBeenCalledWith(response);//check the response
	});
});


In this example the simplest way of responding to requests was used, but SinonJS lets you configure a more flexible fakeServer with URL, method and response mappings; in other words, it makes it possible to fully emulate the behavior of the API.
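For instance, responses can be registered per method and URL in advance with respondWith (a small sketch; the URL and payload are illustrative):

var fakeServer = sinon.fakeServer.create();
fakeServer.autoRespond = true;//respond automatically, without calling respond() by hand

//answer GET requests to this URL; unmatched requests get a 404 by default
fakeServer.respondWith('GET', 'https://some-fake-server.com/user/1',
	[200, { "Content-Type": "application/json" }, JSON.stringify({ "id": 1, "name": "Test" })]);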

PS


Writing tests is cool and fun. Do not think that this approach makes development more complicated or drags it out.

Testing code has several advantages:
  1. code covered by tests can be refactored without fear of breaking it;
  2. the output is a code specification expressed as tests;
  3. development goes faster, since there is no need to check manually that the code works: the tests and test cases have already been written for that.

The most important thing: remember that tests are code too, so you must be just as careful when writing them. A test that works incorrectly will not be able to signal an error in the code.

Resources


  1. JasmineBDD;
  2. SinonJS;
  3. Karma;
  4. Testable JavaScript (book);
  5. Test-Driven JavaScript Development (book);
  6. Gitflow Workflow;
  7. Code.


What testing framework are you using?

  • 37.3% Jasmine 93
  • 38.5% Mocha 96
  • 10% QUnit 25
  • 0.4% Buster.JS 1
  • 13.6% Other 34
