One day in the life of a QA automation engineer

    How I briefly had to play QA automation engineer: what I felt and experienced, and the wonderful architecture and infrastructure of automated testing through the eyes of a developer who just happened to be passing by.


    As they say, I'm not a gynecologist, I just walked by and decided to take a look. So, for starters, a word about why I'm writing this. A specialist would surely have dealt with the problem an order of magnitude faster and would not have stepped on as many rakes along the way. But he would not have had as much fun as I did, and you would have had nothing to read.

    Another reason that pushed me to write: for quite a while I did not understand what was so special about autotest infrastructure. Moreover, many managers, from PMs on up, also did not fully grasp what is so rocket-science about it: why developers dash off unit tests for themselves in no time, while for feature specs separate people are hired who write tests more slowly, are constantly fixing something, and the chance of all the feature specs passing is strictly less than 100%, provided the sample is large enough.


    The other day, a spec on our Rails project started failing. About a week earlier, our only automation engineer had decided to take a job in Chicago, and we had not yet found a replacement. So I had to roll up my sleeves and pretend to be QA. This is my attempt to tell how that went.

    The problem looks pretty harmless. But first, a little background and a description of the environment. Our platform has a lot of address selectors; honestly, they are one of the platform's core entities. The selectors query the Google API for data. In autotests, all these requests are stubbed to save money and speed the tests up. A bit of logic is also added so that roughly the same address line that was requested comes back, without any calls to external services.

    What broke: we type the desired address into the address field, a dropdown appears with several options, we select the one we want and ... the value of the neighboring option is inserted into the input. Every single time.

    A long way to the truth

    First hypotheses and naive approach

    Without further ado, I took the nearest failing test, found the line where the address was selected, and began carefully examining it and its neighbors. The line itself looks innocuous. But we all know that behind beautiful code there is always a pile of weird little constructs and unpleasant innards.

    I quickly recalled that this whole economy runs on SitePrism. It lets you wrap a page and the elements on it in a class and class methods, respectively. Clicks and other actions are handled by Capybara and RSpec. There are no doubts about those: they are reliable, like the entire civilian fleet. And if so, the first hypothesis immediately suggests itself: either someone wrote the SitePrism selectors badly, or someone fiddled with the markup on the front end.

    The first half of the hypothesis quickly evaporated: the selectors are written perfectly. No xpath picking the third li inside an element was found, and the code itself had not changed in the past year.

    However, around the select method there is a pile of logic with regexps for choosing the right option from the dropdown. Naturally, I get angry at the regexps and go check them. I spend half an hour and realize that everything works fine: exactly the right line is selected, click is called on it, and everything should work. So the second half of the hypothesis, the markup one, also falls apart. But a thought appears about crooked js. After all, the element on the page is custom, there is js all around it, and, moreover, we had been tinkering with that very js quite recently.
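    The matching part of that helper boils down to something like the following (a simplified stand-in, not the project's actual code; all names here are invented):

```ruby
# Simplified sketch of the option-matching logic inside the select helper.
# The real helper lives in SitePrism page objects; this stand-in only
# illustrates the regexp part.
def matching_option(option_texts, query)
  # Escape the query so it is matched literally, anchored at the start,
  # case-insensitively -- the way an autocomplete usually behaves.
  pattern = /\A#{Regexp.escape(query)}/i
  option_texts.find { |text| text.match?(pattern) }
end

matching_option(["Baker Street, 221b", "Baker Street, 10"], "baker street, 2")
# => "Baker Street, 221b"
```

    And indeed, this part behaved: the right option came back every time, so the bug had to live elsewhere.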

    Js is to blame

    This is the standard explanation for all obscure issues. Something like "blame js, for it is shell-shocked anyway." And it seems that in my case js really was involved. Anyway, without thinking twice, I run to the front-end team and point my finger at the failing tests, declaring "everything works on our side, please fix yours."

    But the front-end guys are no slouches: they poke around for a couple of hours, find a couple of bugs unrelated to our story, and say js is not to blame! At the same time they toss in an interesting detail: one request is not enough for the select, it makes a second one, and the response to that one exactly matches the incorrect contents of the input.

    I did not expect such a turn. Time to go back in and figure out how request mocking works for us.

    Comprehensive approach

    So, no help is coming, Moscow is behind us, and all that.

    First of all, we look at how requests to Google are mocked. More precisely, first we look for where this happens, which is not a trivial task. It turns out we have a whole module, MockServices::Base, responsible for mocking various requests using VCR. It is cunningly mixed into the base controller, and since its name has nothing in common with the service responsible for external requests, you cannot find it by searching.

    Okay, the mocks are found. Now let's look at their implementation. The first request is mocked simply: information is taken from params and substituted into a response template. Just in case, I checked the contents of params and, as expected, everything arrives there as it should.
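    The shape of that first mock is roughly this (a sketch with invented names and response structure, not the real MockServices code):

```ruby
# Sketch of the "simple" mock: build a canned Google-style response
# directly from the incoming params. Names and structure are invented
# for illustration; the real template lives in MockServices::Base.
def mock_first_autocomplete(params)
  {
    "status"      => "OK",
    "predictions" => [{ "description" => params[:input] }]
  }
end

mock_first_autocomplete(input: "Baker Street, 221b")
```

    Nothing to break here: whatever arrives in params comes straight back out.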

    The mock for the next request is more interesting. A certain mock_data appears there. It is not params, so we need to figure out where this data comes from. After diving five levels deep, it turns out the data is taken from RequestStore by the key x_mock_data. Now it's getting interesting.

    We return to the original test and notice a thing called set_mock_header which, on closer inspection, adds some data to that same RequestStore. Even more interesting!
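    The mechanics are roughly as follows. RequestStore is, at its core, a hash scoped to the current request; below is a minimal stand-in (Thread.current instead of the real gem, helper bodies simplified and invented):

```ruby
# Minimal stand-in for RequestStore: a hash scoped to the current thread.
module MiniRequestStore
  def self.store
    Thread.current[:mini_request_store] ||= {}
  end
end

# Roughly what set_mock_header does on the spec side...
def set_mock_header(data)
  MiniRequestStore.store[:x_mock_data] = data
end

# ...and roughly what the second mock reads on the server side.
def mock_data
  MiniRequestStore.store[:x_mock_data]
end

set_mock_header("address" => "Baker Street, 221b")
mock_data  # works within one process -- which is exactly the catch below
```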

    Somewhere around this moment cognitive dissonance sets in. On the one hand, here is the probable cause of the problem: the global variables that the second request relies on got broken. But there is a nuance: the server for the feature specs and the feature specs themselves are two independent processes (in fact, the server is at least three processes), so the debit cannot possibly reconcile with the credit, because global variables shared between processes have not yet been brought into this world. And with a multi-threaded web server it would be a fierce mess that physically could not work. Which means I've misread something, and I need to keep digging.
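    The "no globals between processes" point is easy to demonstrate: whatever a forked child writes to a global stays in the child (sketch assumes a platform with fork, i.e. not Windows).

```ruby
# Globals do not cross process boundaries: a write in the child process
# never becomes visible to the parent -- just as nothing the spec process
# writes can be seen by the web server process.
$x_mock_data = nil

pid = fork do
  $x_mock_data = "set in the child process"
  exit!(0)
end
Process.wait(pid)

$x_mock_data  # => nil -- the parent never sees the child's write
```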

    Looking further, we find a certain bm that sets the headers. We dig on and realize that bm is BrowserMob. At this point I twitched a little, for it is a Java proxy in a Ruby wrapper. A piano in the bushes, no less.

    We keep digging and understand that for the "global" variables shared between the rspec client and the server running the application (puma, for example), those very X-Mock-Data request headers are used. The catch is that the application itself should know nothing about these headers. That is exactly why a proxy is needed: all requests fly through it, and it takes care of setting the headers. Cunning, you can't deny it.
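    Conceptually the proxy's contribution is tiny: merge the serialized store contents into every outgoing request's headers so the app server can rebuild its own RequestStore on arrival. A sketch of the idea (not the real BrowserMob API; names are illustrative):

```ruby
require "json"

MOCK_HEADER = "X-Mock-Data"

# What the proxy effectively does for each request passing through it:
# serialize the mock data and attach it as a header. If there is nothing
# to attach, the request goes through untouched.
def inject_mock_header(request_headers, store)
  data = store[:x_mock_data]
  return request_headers unless data
  request_headers.merge(MOCK_HEADER => JSON.generate(data))
end

inject_mock_header(
  { "Accept" => "application/json" },
  { x_mock_data: { "address" => "Baker Street, 221b" } }
)
```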

    We go test it and discover that this exact thing does not work. The headers are nowhere to be seen: neither in requests nor in responses. And RequestStore is filled on the rspec side and empty on the web server side. So it's definitely the proxy.

    Right about then it also turns out that it is not just the address tests that are down, but everything that uses the aforementioned set_mock_header.

    Excellent. It remains to understand how to fix this.

    We deal with a proxy

    We will skip the excavations around launching the jar file and then managing it from Ruby, and instead look at how the proxy is passed to the browser. We use Chrome and pass the proxy information in one of the many command-line arguments at startup. The peculiarity of our setup is a pac file, generated from a template, so that WebSocket traffic is not routed through the proxy.
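    The pac file generation can be sketched like this (ERB from the standard library; the template below is illustrative, not our real one):

```ruby
require "erb"

# Illustrative PAC template: send WebSocket traffic DIRECT, everything
# else through the proxy. The real template is more involved.
PAC_TEMPLATE = <<~PAC
  function FindProxyForURL(url, host) {
    if (url.substring(0, 3) == "ws:" || url.substring(0, 4) == "wss:") {
      return "DIRECT";
    }
    return "PROXY <%= host_port %>";
  }
PAC

def render_pac(host_port)
  ERB.new(PAC_TEMPLATE).result_with_hash(host_port: host_port)
end

puts render_pac("127.0.0.1:8080")
```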

    Somewhere around here the urge arises to go and google what is up with Chrome and proxy configs. It turns out you do not have to look far: in version 72+ the guys "improved" how it works. A separate bug was even filed on the occasion. My favorite comment:

    "Can you please stop REMOVING functionality?"

    The sad part is that this is considered a feature, and for the future they promise even more hardcore moves in the name of "privacy".

    In short, Chrome no longer supports the file: protocol in the proxy-pac-url argument. The workarounds are each better than the last:

    • pass as the argument the pac file's js, read and encoded into base64 as a data: URL: --proxy-pac-url='data:application/x-javascript-config;base64,'$(base64 -w0 /path/to/pac/script);
    • spin up your own web server (in python, say) solely to serve that one file over a more "correct" protocol that the pac proxy argument supports;
    • turn off NetworkService, after which the file: protocol works again, though they promise that in the future it will also be "fixed".

    The first two options certainly did not inspire me, but the third, oddly enough, helped.
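    For completeness, here is what building the data: URL from the first option looks like in Ruby, together with the flag the third option boils down to (treat the flag name as my assumption about Chrome ~72, and the pac script as illustrative):

```ruby
require "base64"

# Option 1: wrap the pac script into a data: URL (no file: protocol needed).
pac_script = 'function FindProxyForURL(url, host) { return "PROXY 127.0.0.1:8080"; }'
data_url   = "data:application/x-javascript-config;base64," +
             Base64.strict_encode64(pac_script)

chrome_args = [
  "--proxy-pac-url=#{data_url}",
  # Option 3, the one that actually helped in our case: switch the network
  # service off so the old proxy code path is used (flag name is my
  # assumption; in practice you would pick option 1 OR option 3, not both).
  "--disable-features=NetworkService"
]
```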

    Short-lived joy

    Having rejoiced that a tricky link was found between the idle dropdowns and the updated Chrome, I was not happy for long. It turns out our CI updated not only Chrome but all the adjacent packages too, and now even more tests are failing with the obscure error Selenium::WebDriver::Error::NoSuchDriverError, which, oddly enough, has nothing to do with chromedriver but everything to do with the Chrome config, library versions, and parallel execution of the specs.

    But this is the task for the next working day ...

    Looking ahead: the --disable-dev-shm-usage argument helped.


    Do not scold your automation engineer. He probably suffers most of all from external circumstances beyond his control.

    It’s better to make friends between the automation engineer and the devs, so that together they set up their own infrastructure, with preference and courtesans: pinned versions and a controlled test environment. To me that beats suffering with proprietary CIs, each of which has its own very sophisticated crutches and underwater stools, which you discover only after tightly integrating your application and tests with someone else's environment.
