Secrets of interface testing at TCS Bank

    I’ll try to give a general outline of what the interface-testing process looks like at TCS Bank.


    The murky past


    Once, everything was simple: a task came in, the task was done, a tester checked it by hand, and the result went out for users to see. But then everything got more complicated: there were more and more tasks, more developers joined, and testing, at times, ground to a halt.

    The charming present


    Our team has changed a lot: the small web development department has grown many times over. The process itself has changed too: our interfaces are now covered by tests both from the inside (the code) and from the outside. And yes, we do code review, develop tasks in branches, and carefully maintain documentation in the wiki and generate API docs with JSDoc.



    Code testing


    Obviously, wherever there is data processing or any kind of calculation, there should be unit tests. Actually, why be modest: wherever there is code, there should be tests.

    There are various flavors of development through testing: TDD, BDD, and so on. We won’t go into how they differ from one another; instead, let’s look at our own testing process.

    Grunt is responsible for building static assets and running the tests. Our stack is Grunt + Karma + PhantomJS + Jasmine + Sinon + CoffeeScript. Yes, you heard right: CoffeeScript.
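    To give a feel for the setup, here is a minimal sketch of a Gruntfile wired up through the grunt-karma plugin; the file names and task names are illustrative, not our actual build config:

        # Gruntfile.coffee - a minimal sketch, not our real build config
        module.exports = (grunt) ->
          grunt.initConfig
            karma:
              unit:
                configFile: 'karma.conf.coffee'  # shared Karma config
                singleRun: true                  # exit after one pass (CI mode)

          grunt.loadNpmTasks 'grunt-karma'

          # `grunt test` runs the unit tests through Karma
          grunt.registerTask 'test', ['karma:unit']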

    We used to have heated debates about CS being beautiful, fashionable, and a great accelerator of development; nevertheless, for a number of reasons we abandoned the bad idea of writing all our code in CS. But! We do write our tests in CS, for one main reason: writing and reading a wall of callbacks is much more pleasant in CS than in JS. The code comes out more compact and enjoyable.
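    A contrived illustration of what we mean: even a nested-callback test stays readable in CS (the service and its methods are made up for the example, written as a Jasmine 2-style async spec):

        # A made-up async flow; the names are illustrative, not from our codebase
        describe 'account service', ->
          it 'loads the balance, then the history', (done) ->
            service.getBalance (balance) ->
              expect(balance).toBeGreaterThan 0
              service.getHistory (history) ->
                expect(history.length).toBeGreaterThan 0
                done()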

    Jasmine was chosen for its simplicity, Sinon for emulating API requests, Karma because it’s simply a great test runner, and PhantomJS for running the autotests from TeamCity.
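    Emulating the API with Sinon typically comes down to its fake server; a rough sketch (the api client and its fetchUser method are hypothetical):

        # Stubbing XHR with Sinon's fake server; `api.fetchUser` is hypothetical
        describe 'API client', ->
          beforeEach -> @server = sinon.fakeServer.create()
          afterEach  -> @server.restore()

          it 'parses the user from the response', ->
            user = null
            api.fetchUser 42, (u) -> user = u

            # Answer the captured request with canned JSON
            @server.requests[0].respond 200,
              { 'Content-Type': 'application/json' },
              JSON.stringify(id: 42, name: 'Test')

            expect(user.id).toBe 42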

    I’ll note right away that we didn’t get fanatical and cover absolutely everything with unit tests - only the shared components and the places where data is processed. Yes, people will say this is bad and that all code should be covered by tests, but we simply haven’t seen the need, especially since DOM manipulation can be covered with tests, but doing so is tedious and largely pointless.

    We have TeamCity, which, as we’ve configured it, automatically starts a build and the tests for every branch submitted for code review; if something goes wrong, the developer finds out about it and the broken code never reaches master.
    All our tests are divided into modules. A module is a test case plus a config to run it. This approach lets us run just the tests we need separately, or, using a common configuration file, run everything at once.
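    Schematically, a module’s config is just an ordinary Karma config that lists that module’s sources and specs (the paths below are illustrative):

        # karma.conf.coffee for a single module - a schematic sketch
        module.exports = (config) ->
          config.set
            frameworks: ['jasmine']
            files: [
              'src/common/**/*.js'        # the module under test
              'test/common/**/*.coffee'   # its specs
            ]
            preprocessors:
              '**/*.coffee': ['coffee']   # compile specs on the fly
            browsers: ['PhantomJS']
            singleRun: true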

    There are moments when you do want to cover DOM code with unit tests, and here CS helps us again with its wonderful multi-line (block) strings: you simply write the HTML you need right in the test case, or in a separate file, and plug it in wherever it’s needed.
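    Roughly like this (the widget and markup are invented for the example):

        # An HTML fixture as a CS block string; the widget is invented
        describe 'tooltip widget', ->
          fixture = """
            <div class="host">
              <span class="tooltip">hint</span>
            </div>
          """

          beforeEach ->
            @container = document.createElement 'div'
            @container.innerHTML = fixture
            document.body.appendChild @container

          afterEach ->
            document.body.removeChild @container

          it 'finds the hint in the fixture', ->
            expect(@container.querySelector('.tooltip')).not.toBeNull()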

    GUI Testing


    As I wrote earlier, the developers don’t cover the DOM with unit tests, considering it a pointless undertaking. That’s what TCS Bank’s testing department is for: it handles testing the visual part of the interface.

    There are two types of testing:
    • Manual
    • Automated

    The first one needs no explanation: we click until the mouse breaks and the keys pop out of the keyboard. The second is a bit more complicated...

    For interface testing we have designated not just browsers but specific browser versions; there is a pile of test cases written for them, plus test data that has to be used when testing in a particular browser and its various versions. Naturally, all of this is quite hard to verify manually, and besides, we wanted to spare the manual testers the routine, boring, tiring work. In short, test automation is all but indispensable today, and the rapidly evolving commercial and open-source tools and solutions make us look at automation from a new angle - and make those who still have doubts look in that direction more often.

    Our autotests use Selenium WebDriver. On top of a set of popular and proven solutions we have built our own framework for testing the front end, one that lets us write tests that are as clean and transparent as possible, eliminates code duplication, and imposes a strict structure on test design and construction; this gives us flexible final tests that are simple to keep green.
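    I can’t show the framework itself here, but the underlying idea is the familiar page-object pattern. A toy sketch with the Node selenium-webdriver bindings (the class, selectors, and URL are placeholders; our real framework is far richer):

        # A toy page object; everything here is a placeholder, not our real framework
        webdriver = require 'selenium-webdriver'
        By = webdriver.By

        class LoginPage
          constructor: (@driver) ->

          open: ->
            @driver.get 'https://example.com/login'

          loginAs: (user, pass) ->
            @driver.findElement(By.css('#login')).sendKeys user
            @driver.findElement(By.css('#password')).sendKeys pass
            @driver.findElement(By.css('button[type=submit]')).click()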

    Testing itself takes place in a distributed network deployed as a Selenium Grid, where machines with a particular OS and a set of browser versions stand by, waiting in the wings for their moment of glory (which, in reality, comes around rather quickly). The tests are launched from TeamCity: some automatically, by a build trigger - for example, the smoke tests that run after each deployment to a test bench; some manually, on demand - for example, the heavier suites from the regression set that help catch newly introduced bugs. Speaking of bugs: the autotests don’t just skim the surface of the portal’s GUI - most of them are end-to-end and also cover testing at the database and web-service level. So when an autotest fails, the tester gets not just a screenshot that says “it’s broken here, figure out the rest yourself”, but also details about what went wrong at the database or web-service level.
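    Connecting to the grid is the standard WebDriver routine: point the builder at the hub and request the browser you need (the hub address and capabilities below are examples):

        # Asking the grid hub for a specific browser; address and caps are examples
        webdriver = require 'selenium-webdriver'

        driver = new webdriver.Builder()
          .usingServer('http://grid-hub.local:4444/wd/hub')
          .withCapabilities(webdriver.Capabilities.ie())
          .build()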

    In addition to the test environment, we also have smoke tests for production; they are fewer in number and cover only the critical functionality, in case of unforeseen failures.

    I’d be grateful for comments and questions on the substance - they’ll help shape the next articles, where we’ll describe in more detail how everything is arranged.
