Test it all: how Heisenbug 2018 Piter went



    If you had to describe the past Heisenbug in one word, it would be "diversity": speakers from giant companies and from young startups, topics from testing mobile games to testing blockchain, talks packed with code and talks without a single line of it; and finally, there were sessions that weren't talks at all, but "birds of a feather" discussions.

    Probably the best way to talk about such an event is not to look for a "common denominator" but to give several different examples of what you could hear at the conference. That's what we've done under the cut.

    Fuzz testing and compilers




    Do you think you have a very difficult and responsible job? We thought the same about ours, until we listened to Maxim Kazantsev's (Azul Systems) talk and realized what life is like when you test compilers.

    Firstly, users there believe that "it should always work correctly": while people are ready to understand and forgive a bug in a mobile app, the situation "the compiler made a mistake in my program" is puzzling at best. Secondly, users' faith that "there can't be any bugs" doesn't make bugs any less likely to appear; on the contrary, in a project of that scale and complexity they are even harder to deal with than usual. And thirdly, the Falcon JIT compiler Maxim works on is being developed very actively, which means that with such complexity and such quality requirements, very large changes have to be tested in a very short time.

    But the talk wasn't about the hardships of people whose work we usually don't even think about. It was about fuzz testing, which can help both when working on compilers and in entirely different projects. Its essence is to automatically generate millions of different tests with an element of randomness, "firing blindly in all directions": under certain conditions, the chance of hitting something turns out to be higher than with slow, aimed shots.

    It's unclear how long a million monkeys would need to write "War and Peace", but with proper supervision they can test a compiler in a reasonable amount of time.
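    To give a rough sense of the approach (this is our own minimal sketch, not code from Maxim's talk; all names in it are made up), a fuzzer can generate random inputs and compare the system under test against a trusted reference implementation, flagging any mismatch or crash:

```python
import random

# Minimal differential fuzzing sketch: generate random arithmetic expressions
# and check that a "reference" evaluator and the evaluator under test agree.
# Any mismatch or unexpected exception is a candidate bug report.

OPS = ["+", "-", "*"]

def random_expr(depth=3):
    """Build a random arithmetic expression as a string."""
    if depth == 0 or random.random() < 0.3:
        return str(random.randint(-100, 100))
    return f"({random_expr(depth - 1)} {random.choice(OPS)} {random_expr(depth - 1)})"

def reference_eval(expr):
    """Slow but trusted implementation (here simply Python's eval)."""
    return eval(expr)

def eval_under_test(expr):
    """Stand-in for the system being fuzzed, e.g. an optimizing compiler pipeline."""
    return eval(expr)  # replace with the real pipeline in an actual setup

def fuzz(iterations=100_000, seed=42):
    random.seed(seed)  # fixed seed makes any failure reproducible
    for _ in range(iterations):
        expr = random_expr()
        try:
            expected = reference_eval(expr)
            actual = eval_under_test(expr)
        except Exception as e:
            print(f"crash on {expr}: {e!r}")
            continue
        if expected != actual:
            print(f"mismatch on {expr}: {expected} != {actual}")

if __name__ == "__main__":
    fuzz()
```

    The randomness does the "blind firing", while the reference implementation plays the role of supervision: it decides automatically which of the millions of shots actually hit something.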



    Zeptolab and game autotests




    A "mobile game" may not sound as serious as a "JIT compiler", but from the testing point of view it is quite a task too. While "ordinary" mobile testing relies on a whole set of automated tools, many of them are unsuitable for games: interface elements don't have standard view IDs, everything may load at completely different speeds on different devices, and in general games have plenty of tricky specifics of their own.

    Unsurprisingly, from the outside game dev may seem unsuited to automated testing. And while creating its super-hit Cut the Rope, Zeptolab got no help from the idea of logging the actions of manual testers: yes, you can record at which moment a tap or swipe happened and with which coordinates, but you can't replay that log on a device with a different screen resolution or a less powerful processor.

    However, Zeptolab didn't bury the idea of automation there: while working on the King of Thieves game, they returned to it, this time abstracting away both the exact coordinates ("which pixel was poked") and the exact time intervals, and learning to capture the essence of each tap instead. That is what Dmitry Alekseev and Evgeny Shumakov talked about. It is curious that at one of the previous Heisenbugs Philip Keks spoke on "How to teach robots to play games", but that was about a game with very straightforward gameplay (drag racing), while King of Thieves has different specifics. It's also interesting that the Appium project came in handy: its creator Dan Cuellar has already spoken at Heisenbug as well.
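    To illustrate the general idea of such an abstraction (this is a hypothetical sketch of ours, not Zeptolab's actual framework; all names are made up), a recorded action can store what was tapped rather than where and when, and be replayed by asking the game where that object is on the current device:

```python
from dataclasses import dataclass
import time

# Hypothetical "semantic tap" sketch: instead of absolute pixel coordinates and
# fixed delays, a test step records which game object was tapped, and replay
# resolves the object's position and readiness on the current device.

@dataclass
class Tap:
    target: str  # logical name of a game object, e.g. "play_button"

class GameDriver:
    """Thin wrapper over whatever channel the game exposes to tests."""

    def find_object(self, name, timeout=10.0):
        """Poll the game until the named object is on screen; return its center."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            pos = self._query_game_for_object(name)  # game-specific stub
            if pos is not None:
                return pos
            time.sleep(0.1)
        raise TimeoutError(f"object {name!r} did not appear within {timeout}s")

    def tap_at(self, x, y):
        """Send a tap in the current device's own coordinate space (stub)."""

    def _query_game_for_object(self, name):
        """Return (x, y) of the object's center, or None if not visible (stub)."""

def replay(driver: GameDriver, script: list[Tap]):
    # No hardcoded coordinates or sleeps: positions and readiness are resolved
    # at replay time, so one script survives different resolutions and device speeds.
    for step in script:
        x, y = driver.find_object(step.target)
        driver.tap_at(x, y)

script = [Tap("play_button"), Tap("first_level"), Tap("pause_button")]
```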



    Configuration Testing and Developers




    Behind ambitious tasks like "automating the non-automatable" it is easy to forget about things that are less spectacular but no less necessary. Fortunately, at Heisenbug there was someone to remind us of them.

    For example, everyone remembers the "main" code from developers and realizes the importance of testing business logic, but everything related to configuration can easily escape attention as "an insignificant auxiliary entity". Ruslan Cheremin reminded us that this part is in some sense even trickier: it can also lead to errors that cost the business money, and it usually depends on the environment, which means that something seemingly tested "on my machine" may spring surprises in production.
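    As a toy illustration of treating configuration as something to test (a sketch of our own, not code from Ruslan's talk; the keys and environments are made up), even a few cheap checks over every deployment environment catch exactly this class of mistakes:

```python
import pytest  # assuming a pytest-based suite; purely illustrative

# Treat configuration as code under test: instead of trusting "works on my
# machine", assert invariants that must hold in every target environment.

CONFIGS = {
    "dev":        {"db_url": "postgres://localhost/app",  "cache_ttl_seconds": 5,   "debug": True},
    "staging":    {"db_url": "postgres://staging-db/app", "cache_ttl_seconds": 60,  "debug": False},
    "production": {"db_url": "postgres://prod-db/app",    "cache_ttl_seconds": 600, "debug": False},
}

REQUIRED_KEYS = {"db_url", "cache_ttl_seconds", "debug"}

@pytest.mark.parametrize("env", sorted(CONFIGS))
def test_config_is_complete(env):
    missing = REQUIRED_KEYS - CONFIGS[env].keys()
    assert not missing, f"{env}: missing keys {missing}"

@pytest.mark.parametrize("env", sorted(CONFIGS))
def test_cache_ttl_is_positive(env):
    assert CONFIGS[env]["cache_ttl_seconds"] > 0

def test_debug_is_off_outside_dev():
    # debug=True accidentally reaching production is exactly the kind of
    # "auxiliary" mistake that quietly costs the business money.
    for env, cfg in CONFIGS.items():
        if env != "dev":
            assert cfg["debug"] is False, f"{env} must not run with debug enabled"
```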

    The talk, in fact, developed the theme of Andrey Satarin's talk from the previous Heisenbug (moving from the general to the more particular) and nicely illustrated Heisenbug's slogan "about testing not only for testers". Visitors to our Java conferences have long known Ruslan (we organized a JUG meetup with him back in 2012), and here he gave exactly the view "from the developer's side"; his talk assumed the audience was not hearing the word "Java" for the first time.



    Yandex, VK and crowdsourcing




    Two large companies at Heisenbug shared how they involve not only regular full-time employees but also much wider circles in testing: Olga Megorskaya talked about working with the "assessors" of the Yandex.Toloka project, and Anastasia Semenyuk about the VK Testers program.

    What are the advantages of this approach, what are the difficulties, and how do you overcome them? For example, Olga said that it allowed expanding the "throughput" of testing and saving on QA outsourcing, so many large Yandex projects are already actively using the new process, though difficulties did arise. Involving thousands of "non-testers" even in routine tasks means those tasks have to be described in detail in a language they understand, and developers were not always ready to do that. What helps is an intermediate layer of "experienced assessors" who receive test cases in any form and translate them into an understandable, detailed algorithm: they simply know what questions other assessors will have.

    As many Heisenbug attendees noted in their feedback, these talks were interesting but not very applicable to "ordinary" companies: when you have neither an army of enthusiastic loyal users, like VK, nor a large-scale crowdsourcing project, like Yandex, attempts to do something similar may be impractical. But it was precisely the uniqueness of these situations that made the talks themselves unique: you can hear about "typical experience" from plenty of people, but about this kind of thing only from these speakers. As Olga noted, when building its processes Yandex couldn't even learn from others' experience and had to collect all the bumps on its own.



    Vitaly Friedman and UX




    Vitaly Friedman is widely known, but not in testing circles: Smashing Magazine, the website for web designers and web developers that he founded, is highly regarded in that industry. His talks are also greeted with a bang, but usually at very different conferences. However, topics as important to Vitaly as UI/UX also need testing, and so he spoke to an audience atypical for him.

    What does the carousel element look like on Turkish sites? Why is it better not to use it, and if you do use it anyway, then how? Which site contains probably the longest product comparison table in the world, and what could be done to make it convenient to use? Why do such comparisons need the ability to swap columns? What rating is ideal for a product, if "5.0" feels fraudulent and deceitful? Which checklist should be kept in mind when implementing the accordion pattern?

    None of this looks like questions for a testing conference, and the talk really did turn out to be a kind of "offtopic". However, audience feedback showed that including it in the program was the right decision: many responded in the spirit of "okay, not about testing, but it was amazing".

    (And if you are interested in the "accordion" question: Vitaly has a big article about this pattern, with the mentioned checklist at the end.)



    BoF and format experiments




    There was plenty more of interest among the talks, but everything about the talk format is already clear. Heisenbug, however, also featured a format that had not been held at this conference before. On the evening of the first day, in addition to the party and the competitive "What? Where? When?" quiz, BoF sessions took place (the format's name comes from the English proverb "birds of a feather flock together", roughly equivalent to the Russian "two boots make a pair").

    What did they look like? Chairs are arranged in a circle, some seats are taken by speakers, others by attendees, and a discussion of a predetermined topic begins. How often do you get to see Michael Bolton and Simon Stewart taking part in the same conversation?

    There were two sessions: the Russian-language one was about testing in production, the English-language one about the eternal dichotomy of "use a ready-made tool or reinvent the wheel". The Russian-language session gathered more people, but both went quite lively.



    Michael Bolton (period)




    In principle, we could limit ourselves to a name that speaks for itself in the testing community. However, there are cases when a person with a well-deserved reputation is not the most vivid speaker. As organizers we are guided by attendee feedback, and when, after the very first Heisenbug, the feedback on Rex Black's talk was not very enthusiastic, we took note of it. We are pleased to report that with Bolton everything is different: his closing keynote "Testers are their worst enemies" collected very enthusiastic reviews.

    This is probably because Bolton turned out to be very much "alive": he hasn't turned into a bronze monument to his own regalia, he jokes on and off the stage, carries two glasses of beer at once to the BoF session ("I'm very interested in how many glasses one can carry effectively at the same time") and instantly creates an informal atmosphere.

    But "informal" doesn't mean "unprofessional": in his talk he thoughtfully went over what he considers serious problems. "People confuse testing with simple build checks. There is a definition of a program as a 'set of instructions for a computer', and I see a problem with it. It's the same as defining the word 'home' as 'building materials assembled in a certain way'. It makes more sense to define a house as a place where people live, and a program as something that people use." "We are fixated on testing tools, and I'm not against the tools themselves, but we use them as a way to avoid contact with people."



    There was much more of interest, from Artyom Eroshenko's story about the next version of the Allure Framework to Alexey Rodionov's talk about how Petri nets can help in testing. But we could go on for so long that it is better to stop here. If we forgot to mention something important or got something wrong, we accept bug reports in private messages. And now we start waiting for the next Heisenbug, which will be held in Moscow in December!

