Practice writing tests. Yandex lecture

    Happy holidays, friends! If you don't mind learning something new during the holidays, read this lecture by Kirill Borisov, a developer of Yandex's authorization systems. Kirill explains how to set up the testing process for Android applications, introduces modern tools, and covers the specifics of using them.

    - Before we move on, let's take a short poll. How many of you know what tests are? Who writes tests? And who knows why they write tests? About the same number of people.

    For those who know about tests only secondhand and have never laid hands on them, I want to show an example of a simple test.

    As you can see, there is nothing scary here. This is the simplest test: it checks that the laws of mathematics have not changed and 2 + 2 is still 4. That's all. You have a complete test before your eyes.
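    The slide itself is not reproduced in this transcript, but such a test might look roughly like this. This is a sketch: in real JUnit code the method would be annotated with @Test and assertEquals would come from org.junit.Assert; here a tiny local stand-in is used so the example runs on its own, and all names are illustrative.

```java
// A sketch of the simplest possible test, in the spirit of the lecture's slide.
public class SimplestTest {

    // Minimal stand-in for JUnit's Assert.assertEquals.
    static void assertEquals(int expected, int actual) {
        if (expected != actual) {
            throw new AssertionError("expected " + expected + " but was " + actual);
        }
    }

    // The whole test: checks that the laws of mathematics still hold.
    public static void twoPlusTwoIsFour() {
        assertEquals(4, 2 + 2);
    }

    public static void main(String[] args) {
        twoPlusTwoIsFour();
        System.out.println("twoPlusTwoIsFour: OK");
    }
}
```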

    In fact, a test is just a function in some programming language. In our case, it is more likely to be Java, although it may be Kotlin, etc.

    A test is launched by a software package, a test framework, which takes on all the dirty work: discovering tests, running them, processing results, etc. The most common is JUnit, which we will discuss further in this lecture, but nothing stops you from using other packages or writing your own.

    Tests are grouped into classes — a decent function has to live in a class. These classes are then divided by semantics into the various categories of tests that we will consider later.

    Tests are a blessing and a benefit. First, they free your testers from the routine task of re-checking already-fixed bugs, so-called regression testing, and help them reproduce complex cases that require a long sequence of actions and lend themselves to automation.

    But you say: wait, what testers? I'm a simple independent developer, I write an application alone, I publish it alone, I earn money alone. I don't want to upset you, but you are that very tester, simply because sooner or later you still have to check how the application works, poke through the main scenarios of its use, etc. And autotests will be a great help to this poor tester.

    Secondly, they increase your confidence, as a developer, in your own code. Knowing that some soulless computer verifies that your code works as expected will help you stop worrying about it and develop freely. As soon as something changes in an unpredictable way, a red flag immediately pops up and you understand that you need to fix it.

    In effect, this leads to improved code quality. As you cover your code with tests and rework it so that it can be tested, you will suddenly notice that the code becomes more and more pleasing to the eye, easier and easier to read and understand. Other programmers will also be able to read this code and see that it is a wonderful piece of programming art.

    But most importantly, tests help you maintain compatibility. As your application grows and flourishes, you will inevitably touch other sections of the code that other people may rely on. Imagine that you are not alone on the team: you have fellow developers who write their modules relying on how your modules work. If you are covered by tests, if you have assured yourself that everything still works the way you intended, then as soon as you break something, you will know about it right away. By doing so, you spare your colleagues from thinking, "did he break something today? I'll go check again." You put all the worries on a soulless computer, leaving only the creative work for yourself.

    Unfortunately, as with everything in this world, nothing comes for free. Everything has a price. I think it is obvious that development will take you more time. There is no escaping this: you will have to write tests, think about them, test them, and all of this takes precious time. But like any investment in something good, it will pay off sooner or later.

    Secondly, you have to improve your skills. We all have room for growth, including in the area of ​​technical skills. As you write tests, you will have to learn new tools and techniques. But all this benefits you.

    And the scariest thing, which can frighten not only you but also your manager: you may need refactoring. A terrible word that inspires horror in anyone planning a software release. Unfortunately, tests are not a magic wand. For them to work, you will have to rework your program code — for example, to use programming interfaces that were previously unavailable, or to make it more modular so that it is easier to test. All of this takes time, money and effort: the most valuable things in our industry.

    And finally, all this complicates releasing the code. Where earlier you could simply compile the code, send it to the Play Store and go eat pizza, now that won't work. Most likely you will acquire processes such as running the tests that check your code, reviewing the test report, sending the code to a Continuous Integration server, etc. This is the price you will have to pay, but like all the previous ones, it will pay off in the future.

    The trickiest question related to tests, which I hear most often when I try to push this idea: how do I convince my manager? These people do not understand why we need tests. They think: these programmers have come up with something again, they want some kind of tests — but we need to build features and release them.

    Unfortunately, there are few arguments for this, but there is a proven list that will always help you.

    First, managers are very happy when there are fewer bugs in the final product. As you cover your code with tests, the number of errors — including stupid ones that slip in by accident — will decrease, because you will detect and correct them in a timely manner. This also speeds up finding the causes of errors: as soon as managers come running to you shouting, "Chief, all is lost," you look at the list of passed and failed tests, understand where and what broke, fix it and save the day.

    All this leads to saving money and time, which pleases almost everyone. Since it takes less effort to find errors in your code and less effort to fix them, you can spend more time on really interesting things: developing new features and bringing in more money over shorter periods of time. In theory. I don't know how this works in your specific case, but it should work something like this.

    And most importantly, think about it: you come to a new company and ask, do you have tests? They say: "Yes, we have 100% coverage, everything is checked." And you think: I have come to the right company. Agree, very cool. And when young, proud developers come to you, you say: we have tests here, 97% coverage, we test everything. And they will look and understand: yes, this is a cool company, a cool development team, I want to stay with them.

    Let's move on to some concrete theory and look at the tests we have already mentioned in cross-section.

    Here is the skeleton of almost any test, a spherical test in a vacuum. It consists of several blocks: the mandatory ones are marked in yellow, the optional ones in gray.

    The most important part is the name of the test. I have met a lot of people who think: why give tests meaningful names? Test1, Test2, Test3 — they are distinct, and that's good enough.

    Actually, the name of a test is like the title of a book: it should fit as much meaning as possible into very little space. It is what you will see in your report and what will light up in your code editor. From the name alone you should get an approximate idea of what the test checks and what is happening in it. So it is worth making the effort to fit the meaning of what you are testing into one sentence of three or four words.

    The next mandatory block is performing the action. To check something, we need to do something. This block exerts some kind of influence on your system: for example, you call a function, start a service, open a window. After receiving the result of this action, you move to the main, sweetest part of any test — checking the results. This is the heart of the test. It is here that you verify that the world has changed in the way you expected: that the window opened, rather than all files being deleted from the device; that you started the video but did not wipe the memory, etc.

    And what about the blocks marked in gray — preparing the environment and freeing up resources? These are the boring, repetitive pieces that will sooner or later start to appear in your code.

    If all your tests work, say, with files — if in each test you create the same file, open the same files, then close and delete them — why drag all this from test to test? You can simply use the tools of your test framework and put these steps in separate small functions that are called before and after each test.

    Admittedly, this may not always be needed; your test may well do without these optional blocks. But if it comes to that, know that it's no trouble: you just move them into a separate function, and everything works.

    Thus, your test consists exclusively of the required parts — simple and elegant.
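    The four-block skeleton can be sketched like this. In real JUnit code the helper blocks would be marked @Before and @After and the framework would call them around every @Test method; here they are invoked by hand so the sketch runs standalone, and all class and method names are illustrative.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// The skeleton: prepare the environment (setUp), perform the action,
// check the result, free resources (tearDown).
public class FileTestSkeleton {
    private Path file;

    // Preparing the environment: runs before each test.
    void setUp() throws IOException {
        file = Files.createTempFile("demo", ".txt");
    }

    // Freeing resources: runs after each test, even a failed one.
    void tearDown() throws IOException {
        Files.deleteIfExists(file);
    }

    // The test itself: only the action and the check remain.
    void fileStartsEmpty() throws IOException {
        long size = Files.size(file);            // the action
        if (size != 0) {                         // the check
            throw new AssertionError("new file should be empty, size=" + size);
        }
    }

    public static void main(String[] args) throws IOException {
        FileTestSkeleton t = new FileTestSkeleton();
        t.setUp();
        try {
            t.fileStartsEmpty();
            System.out.println("fileStartsEmpty: OK");
        } finally {
            t.tearDown();
        }
    }
}
```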

    We understood what a test is and how to write it. And what will we test?

    There are several theories about what should be tested with autotests. Almost everyone agrees that there are a number of typical cases. The first is the happy path, the typical way your code executes. This is what you already know: I click the button, and a window appears. You check that you really pressed the button and the window appeared. Or the user entered their name, and it was highlighted in a special way. You know how this should work, you expect it to work that way, but just in case, write a test for it. Because if it suddenly breaks, it will be sad.

    Then you check all possible edge cases. For example, what if a person enters their name in Japanese characters, or enters emoji in the age field? What will my code do in that case? Will it handle this?

    You write a separate test for each such case, checking that your application behaves in a defined way — for example, shows an error window, or simply exits and refuses to start again. It's your choice.

    Then you move on to the most mundane thing: what happens if I start stuffing null into my code wherever I can? Aha, I have a function — send it null. I have an optional argument — send null. And so on. Your code, ideally, should be resilient to one of its arguments unexpectedly arriving empty. And last but not least, it is worth touching on the scenario where nothing works at all. Your application, designed to send your photos to Instagram every five seconds, suddenly discovers that there is no network. And no camera. What to do? Ideally, your application should make it clear to the user in some meaningful way: sorry, I will not work. This is what you should test — that even when everything has gone wrong, your application still behaves as expected. Nothing upsets a user like an Android error window with a NullPointerException, or worse — it's scary to think.

    When should you test all this? As soon as you start development, or once you have already finished? There is no consensus on this, but there is a set of established ideas. First, it makes no sense to write tests while your application code changes literally every hour. If you and your friends are in the grip of the muse and the concept changes hourly — you change the entire structure of the program, the architecture shifts — then if you write tests for this, you will spend most of your time constantly rewriting those tests to chase your creative thought.

    Suppose your code has more or less settled down, but your designer has also fallen into the grip of that terrible lady (the muse) and starts tugging at your UI every minute: oh my God, new design, we're adopting a new concept.

    Writing tests that interact with the UI is also not a good idea at this point, because you will have to rewrite them from scratch as well.

    Once your application code has stabilized and the UI no longer jumps around the screen, is it worth writing tests? Yes — if only because sooner or later your application will have so-called regressions, ordinary bugs. A regression is something that indicates a malfunction: for example, the user's name is displayed right-to-left because the code accidentally decided we are in a country with Arabic script, etc. That is a regression, a bug, and you should write a test verifying that in the future, under these conditions, your application will still work as expected.

    So here are three markers for when you should definitely write tests: when the code has stopped shifting, when the UI has stopped shifting, and when you already have some kind of bug. With these you can begin your journey.

    Tests are divided into several categories. The test pyramid shows the approximate order in which you should start writing tests for your application. At the base of this pyramid are the so-called unit tests. This is the lowest level, the salt of the earth: low-level tests in which you test individual units in isolation from each other.

    But what is a unit?

    Scientists are still arguing about this and writing papers. Everyone decides for themselves what a unit is in their application. Most often a class is chosen as the unit, and the functions of this class, its methods and the various conditions of interacting with it are tested. We assume the class is a self-contained entity — for example, a class that calculates the length of a string, or an encryption class. Usually it is not explicitly connected to any other classes, and it can be tested on its own.

    You will have more of these tests than any other kind, and you will run them most often. If you are in the habit of running them every five minutes, that's fine. We do that, and everything goes well.

    Unit tests are designed, first of all, to keep you in check while you write your code — precisely because they are the lowest level and should run as fast as possible.

    Say you press Ctrl + S, your tests run, and you immediately notice that something is broken. Agree, it is better to detect an error before it has time to spread anywhere else.

    Consider an example of such a unit test, for everyone's favorite class of static utilities. There is a class containing exactly one function, which checks — in our hypothetical application — whether the user has entered a strong password: will hackers crack it or not? This simple function checks three basic conditions, three invariants: a strong password must be no shorter than seven characters, must contain at least one uppercase Latin letter, and at least one digit. Anything else is simply laughable.

    If all three conditions pass, we return that everything is fine, register the user and move on.
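    Reconstructed from the description above, the function might look like this. The class and method names are assumptions on my part; the slide's actual code may differ.

```java
// A strong password must be at least seven characters long, contain at
// least one uppercase Latin letter and at least one digit.
public class PasswordUtils {

    public static boolean isStrongPassword(String password) {
        if (password == null || password.length() < 7) {
            return false;                       // condition 1: length
        }
        if (!password.matches(".*[A-Z].*")) {
            return false;                       // condition 2: uppercase Latin letter
        }
        if (!password.matches(".*[0-9].*")) {
            return false;                       // condition 3: digit
        }
        return true;                            // all three conditions passed
    }
}
```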

    How will we test this? We select the isStrongPassword function as our unit, and we will test each of the three cases separately.

    We begin with the first condition: to be recognized as strong, the string passed to our function must be longer than six characters. In our first test case, we check that if we pass a string of fewer than seven characters, our function returns false. The assertFalse function is responsible for this: it will throw up its hands in a panic and stop the whole test if it suddenly receives true as an argument instead of false.

    In the same spirit we check our main cases, plus one counterexample: if we pass the function a string longer than six characters (that meets the other conditions), it returns true. That is a test case in a vacuum: we checked the conditions that cause our function to reject the password, and we checked that when we pass it the expected parameters, it responds as expected. And in the same spirit we test everything else.

    We have a separate test case checking that the password has at least one digit, and a separate test case verifying there is at least one uppercase letter. And you ask: where is the fourth test case? We had four ways to exit the function. In fact, across the previous three test cases we have already checked that if we pass a password meeting all the conditions, it returns true one way or another.
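    The three test cases described above might be sketched like this. In real code these would be @Test methods using JUnit's org.junit.Assert; here local stand-ins for assertTrue/assertFalse (and a copy of the function under test) keep the sketch self-contained, and all names are illustrative.

```java
// One test per condition, plus the counterexample inside the first.
public class PasswordUtilsTest {

    // Copy of the function under test, so the sketch runs on its own.
    static boolean isStrongPassword(String p) {
        return p != null && p.length() >= 7
                && p.matches(".*[A-Z].*") && p.matches(".*[0-9].*");
    }

    static void assertTrue(boolean v)  { if (!v) throw new AssertionError("expected true"); }
    static void assertFalse(boolean v) { if (v)  throw new AssertionError("expected false"); }

    // Condition 1: fewer than seven characters is rejected,
    // and a valid long-enough password is accepted (the counterexample).
    static void rejectsShortPasswords() {
        assertFalse(isStrongPassword("Ab1"));
        assertTrue(isStrongPassword("Abcdef1"));
    }

    // Condition 2: at least one digit is required.
    static void rejectsPasswordsWithoutDigits() {
        assertFalse(isStrongPassword("Abcdefg"));
    }

    // Condition 3: at least one uppercase Latin letter is required.
    static void rejectsPasswordsWithoutUppercase() {
        assertFalse(isStrongPassword("abcdef1"));
    }

    public static void main(String[] args) {
        rejectsShortPasswords();
        rejectsPasswordsWithoutDigits();
        rejectsPasswordsWithoutUppercase();
        System.out.println("all password tests: OK");
    }
}
```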

    Let's look at the main stars of these tests — the functions starting with the word assert. They belong to a class of functions called assertions. These are auxiliary tools provided by the JUnit test framework that help you express your intentions. For example, by calling assertEquals you say: I expect these two parameters to be equal; if not, everything is broken, everything is lost, stop the check. Similarly for assertNull, assertNotNull, etc.

    These functions are your instruments inside tests. As long as they receive the expected condition at the input, the test continues. As soon as a test is interrupted, you know you have a problem.

    If it seems to you that the assertions from the previous slides are difficult to read, special tools written by our community are at your service.

    One of the most popular is AssertJ.

    It is an attempt to make checking the results of your code more readable, closer to plain English. The simplest example is assertThat(count).isGreaterThan(original). This reads far more easily than assertTrue(count > original). You get almost English text; Shakespeare would be glad to see it.

    If you need to check something more complex, AssertJ will help here too. Imagine you have an array of objects, each with a count field — some counters. You want to check that the array of counters returned from your function contains only the numbers 1, 3, 4 — nothing was counted twice. With AssertJ you can write this in a fairly simple declarative way: assertThat(counters).extracting("count").contains(1, 3, 4).doesNotContain(2). I think this reads much more easily than a complex loop that pulls out elements, stuffs them into another array and checks them for compliance. The easier a test is to read, the clearer it is.
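    To show how such fluent chains read, here is a toy imitation — emphatically NOT the real AssertJ library. Real code would import org.assertj.core.api.Assertions.assertThat; this sketch merely mimics the chained style (with the "extracting" step done implicitly inside the list assertion) so it can run standalone.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// A toy imitation of AssertJ's fluent assertion style.
public class FluentAssertDemo {

    static IntAssert assertThat(int actual) { return new IntAssert(actual); }

    static class IntAssert {
        final int actual;
        IntAssert(int actual) { this.actual = actual; }
        IntAssert isGreaterThan(int other) {
            if (actual <= other)
                throw new AssertionError(actual + " is not greater than " + other);
            return this;
        }
    }

    // Counter objects with a "count" field, as in the lecture's example.
    static class Counter {
        final int count;
        Counter(int count) { this.count = count; }
    }

    static ListAssert assertThat(List<Counter> actual) { return new ListAssert(actual); }

    static class ListAssert {
        final List<Integer> counts;
        ListAssert(List<Counter> counters) {
            // The "extracting" step: pull the count field out of each object.
            this.counts = counters.stream().map(c -> c.count).collect(Collectors.toList());
        }
        ListAssert contains(Integer... values) {
            for (Integer v : values)
                if (!counts.contains(v))
                    throw new AssertionError("missing " + v + " in " + counts);
            return this;
        }
        ListAssert doesNotContain(Integer... values) {
            for (Integer v : values)
                if (counts.contains(v))
                    throw new AssertionError("unexpected " + v + " in " + counts);
            return this;
        }
    }

    public static void main(String[] args) {
        assertThat(5).isGreaterThan(3);
        List<Counter> counters =
                Arrays.asList(new Counter(1), new Counter(3), new Counter(4));
        assertThat(counters).contains(1, 3, 4).doesNotContain(2);
        System.out.println("fluent assertions: OK");
    }
}
```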

    If you don’t like the style of AssertJ, then there is another tool made in this spirit.

    Hamcrest is a favorite of many of my acquaintances. It performs roughly the same function — it tries to make your code more readable — but does it differently. Unlike AssertJ, where the code is written as a chain of builder calls, here a hierarchical tree of matchers is used. Behind this scary name is simply the fact that your verification functions can be nested inside each other to express a more or less complex condition, and the result still reads as text.

    The same example — checking that some counter is less than the original value — reads just as well. The counters example is the same, though a little less readable. It expresses about the same thing and, most importantly, is about as readable.

    The next level of the pyramid is integration tests.

    From a code point of view, these are roughly the same unit tests, but slightly different.

    Unlike unit tests, which are aimed at verifying the operation of one particular isolated component, integration tests are designed to check the interaction of several components. You ask: why? Class A is tested, class B is tested, there are unit tests for everything — they should work fine together.

    In fact, life springs surprises on us: the first time you use two components together in the application, you may suddenly find that class A starts a thread that waits on something, and class B does the same. Separately they worked fine, but together they suddenly begin to break, hang your application and give you a headache.

    That is why you need integration tests, which verify how different components integrate. There should be fewer of these than unit tests. Unit tests should make up the bulk of your tests, because they run faster, are quicker to launch, and are more useful during day-to-day development.

    An integration test is the kind of test that runs when you submit your code to the shared source repository. Before you send off and immortalize what you have done, it would be nice to verify that the whole thing actually works — that your giant colossus of components can interact with itself.

    Unfortunately, they can be significantly harder to implement, because these tests sometimes approach your main code in complexity. While before you just tested separate functions in isolation, here you have to set up resources: open a database connection, establish a connection to the server, create files, etc. But it's worth it. For the same reason they are usually run less often than unit tests — they simply take longer.

    And at the very top of our pyramid is the most noble class of tests — UI tests. They are so noble that, unlike unit and integration tests, they know nothing at all about how our application works on the inside. To them it is a black box, just as it is to the average user. After all, they are meant to check the basic scenarios of working with our application.

    Think about it: you have well-tested code, components working side by side, elbow to elbow, and it would seem everything is fine. You send the application to the Play Store — and receive your first report: "There's nothing on my screen, not a single button. How do I use this?" And you suddenly realize that all this time you enthusiastically tested the code, knowing how it works, your tests passed — and you forgot to add the button to the interface that starts the whole process. Sad. What to do?

    This is what UI tests are for: verifying, from the user's point of view, the main scenarios of interaction with your application. Your application is worth little if the code can send photos to the Instagram server but the user cannot actually make it happen. The main thing is that the user can.

    Unlike the previous tests, which climbed into the guts of your application, these work just like a regular user, with the same tools: they enter text, press buttons, scroll the screen, etc. They run much less often — usually just before a release, as acceptance tests — if only because they require preparation and launching on a device, and they take a lot of time. If you ran them the way you run unit tests, on every file save, you could go out for tea many times a day, which affects both health and development speed.

    We have figured out what tests are and how they look from the inside. Now let's see what bad tests look like.

    There is such a thing as test smells.

    What is the use of a test that says there is an error but gives no clue how it happened? What is the use of a test that sometimes passes and sometimes fails? And so on. All this complicates maintaining the code, because foul-smelling tests give you no desire to touch them.

    So what should tests be like to be considered good?

    First, repeated code should be extracted from tests. If you perform the same operations over and over, there is no sense keeping them in every single test. It hampers understanding — your tests turn into giant lumps of repeated code, you get copy-paste. Yuck.

    Also, a test should not check everything at once. It is advisable to split your checks so that each test verifies one specific thing. You don't need a test called "testEverything" or "fullTest" that checks everything from how you save files to how they are encrypted when there is a full moon in the sky.

    The smaller your tests, the easier they are to understand. The essence of a test should be clear from its code. It is very difficult to find the cause of an error — why the test failed — when the test is a pile of strange things, a mass of obscure operations that may or may not be related to what is being tested. A little later we will look at an example of such a test.

    Now the main postulate of a good test: it must be reproducible. A test exists precisely so that, under a certain set of conditions and data, if it runs and passes, everything works as expected.

    If your test depends on the day of the month or the temperature outside — seriously, I have seen such tests — sometimes passing, sometimes failing, it will not give you any confidence. Does my code work? Seems so. The test is red. It doesn't work. Sadness. Wait, it's green again.

    The test should have a reproducible result. It should not be fragile, it should not break from external independent conditions.

    Let's consider the three main test smells that I and my colleagues encounter very often — the ones that sometimes make us tear our hair out and launch into lengthy explanations.

    First: conditionals in tests. At first glance this looks like a completely normal test. There is an action, a check that the file exists, and if an error is returned — assertThatNoFileDownloaded.

    What's so bad about that? It seems harmless, but think: do you know which path the test will take at any given moment? How will it behave if you run it twice in a row? Conditionals in tests are redundant because your test is already a check of some condition. Your test should run from top to bottom, preferably along the only possible path, so that when it breaks, you understand clearly: this sequence of steps is what broke my test.

    When a condition suddenly appears in your test, you have to keep yet another variable in your head. "Wait a minute, today is Wednesday, it's 2017, it will probably go down this branch, so it must have broken here." And what happens if a different error code arrives? What is even being checked here?

    Correctly it should look something like this.

    We still check the same two specific cases — when an error comes from the server and when a normal response comes — but now they are split into two short tests, each checking one specific case, each with its own name describing what it checks. It is much easier to understand, much faster to run, and when you fix a bug you can run just one of these tests without worrying about the rest.
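    A condition-free version of that test might be sketched like this. The Downloader logic here is a hypothetical stand-in invented for illustration; the point is the shape — two small tests, each pinning down exactly one case, instead of one test with an if over the result.

```java
// Two tests instead of one test with a conditional inside.
public class DownloadTests {

    // Hypothetical unit under test: "downloads" a file unless the server errs.
    static String download(boolean serverError) {
        return serverError ? null : "file.bin";
    }

    // Case 1: a normal response — the file must be there.
    static void fileIsSavedOnSuccess() {
        String file = download(false);
        if (file == null) throw new AssertionError("expected a downloaded file");
    }

    // Case 2: a server error — no file must be downloaded.
    static void noFileOnServerError() {
        String file = download(true);
        if (file != null) throw new AssertionError("expected no file, got " + file);
    }

    public static void main(String[] args) {
        fileIsSavedOnSuccess();
        noFileOnServerError();
        System.out.println("download tests: OK");
    }
}
```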

    Another smell, slightly less harmful but very dear to any programmer's heart: loops in test code. You say, what's wrong with that? They're just loops, I only wanted less copy-paste. I have an array with several elements; if I write an assert for each one, it takes up a lot of space and a lot of time.

    In fact it is not as scary as it seems. But think: you have a for loop performing some check. Unexpectedly, the check fails on the second element. Fine — the test did its job and found an error. But can you be sure that all the remaining elements the loop never reached would not also cause errors?

    You fix this error, run the test again — it fails again, on the next element. And so you sit running this test over and over, waiting for it to finally reach the end of the array.

    It is more correct to use special tools that let you declaratively write roughly the same thing, but in a more understandable way and, most importantly, with a more correct picture of what is happening.

    In this example, the for loop is replaced with a construct from the already-mentioned Hamcrest package, which reads roughly as: check that all the names in this array correspond to packages installed in the system. The check runs over all the elements, is performed for each of them, and in the end you see exactly which elements failed. You fix them all at once, rerun once — everything works. Agree, it is much simpler, more pleasant and, most importantly, easy on the eye.
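    The key property of that replacement can be sketched without the library. Real code would use Hamcrest, something like assertThat(names, everyItem(...)); the toy helper below — all names are my own — shows the behavior that matters: it inspects all elements and reports every offender at once instead of stopping at the first failure.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// A loop-free check that reports all failing elements at once.
public class InstalledPackagesCheck {

    static void assertAllInstalled(List<String> names, Set<String> installed) {
        List<String> missing = new ArrayList<>();
        for (String name : names) {
            if (!installed.contains(name)) missing.add(name); // collect, don't stop
        }
        if (!missing.isEmpty()) {
            throw new AssertionError("not installed: " + missing);
        }
    }

    public static void main(String[] args) {
        Set<String> installed = new HashSet<>(
                Arrays.asList("com.example.app", "com.example.widget"));
        assertAllInstalled(
                Arrays.asList("com.example.app", "com.example.widget"), installed);
        System.out.println("all packages installed: OK");
    }
}
```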

    And the most terrible and indecent smell of all: "fat" tests.

    These are tests that contain a bit of everything. Someone decided to follow the path of maximum explicitness, spelled out every operation of the test and stuffed it all into one function.

    Offhand — what is going on here at first sight?

    We have a conveyor belt class onto which parcels with certain numbers are loaded. After loading, something is written to the log. Then we check that after running the parcel-processing function, parcels with numbers less than 10 remain on the belt. This is not obvious at a glance, simply because your eye glazes over and slips — there is too much code. In such cases you should extract the code that has no particular significance for this specific test. Here I would extract the code for creating the conveyor belt and loading parcels onto it — because that is not what we are checking. We are checking how the parcels are sorted, not how they are loaded.

    So I created a separate function, "create a loaded conveyor," which takes on all the work of creating the object, loading parcels onto it, and so on. My test becomes very simple: I have an array of parcel numbers, I create a conveyor with them, I start it and I check.

    Now we can see that we are processing parcels, and that the result of this processing should be that parcels with numbers 1 and 3 — those less than 10 — are left unprocessed on the belt, just as the test's name says.
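    The refactored shape might look like this. The ConveyorBelt class and its behavior are a hypothetical reconstruction from the description above; the point is the structure — boring setup hidden behind one named helper, leaving a three-line create–act–check test.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// A "fat" test slimmed down by extracting the setup into a helper.
public class ConveyorTest {

    // Hypothetical unit under test: processes parcels with numbers >= 10,
    // leaving smaller ones on the belt.
    static class ConveyorBelt {
        final List<Integer> parcels = new ArrayList<>();
        void load(int number) { parcels.add(number); }
        void processParcels() { parcels.removeIf(n -> n >= 10); }
    }

    // The extracted helper: creating the belt and loading parcels is not
    // what this test checks, so it is hidden behind one named function.
    static ConveyorBelt createLoadedConveyor(int... numbers) {
        ConveyorBelt belt = new ConveyorBelt();
        for (int n : numbers) belt.load(n);
        return belt;
    }

    // The test itself is now three lines: create, act, check.
    static void parcelsBelowTenStayOnTheBelt() {
        ConveyorBelt belt = createLoadedConveyor(1, 3, 10, 25);
        belt.processParcels();
        if (!belt.parcels.equals(Arrays.asList(1, 3))) {
            throw new AssertionError("expected [1, 3], got " + belt.parcels);
        }
    }

    public static void main(String[] args) {
        parcelsBelowTenStayOnTheBelt();
        System.out.println("conveyor test: OK");
    }
}
```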

    We figured out the theory of tests in a vacuum. Let's move on to the specifics of testing for Android.

    It's not so simple. Android is an environment that makes its own adjustments to the usual process of writing Java code. One of these adjustments is the addition of two more categories of tests, orthogonal to the pyramid we saw earlier.

    This is the division into local tests and instrumented tests — I reserve the right to call them that.

    Local tests are tests that can run on the developer's computer. Typically this is Java code that does not interact with anything Android-specific — for example, code that computes pi to the millionth decimal place. It can run equally well under Windows, Linux or Android. It is much easier and faster to run it on your local machine, straight from the IDE, and it will work.

    Instrumented tests must run on some Android device: an emulator or a live phone. This is because such tests call into specific Android APIs. They cannot run on a regular OS, simply because it contains nothing for them to interact with. If you run such a test on a regular computer without marking it accordingly, an exception will be thrown: why are you calling Android OS methods? They are not here.

    This is a very real problem. Instrumented tests are all well and good — you can run them on your phone or emulator — but they take much more time than regular local tests. While a local test simply runs on your machine, instrumented tests have to be packed into a separate APK, installed on the device and run there; then the results are collected, transferred back to your computer, processed and displayed.

    If you have many such tests, your test suite can take a very long time to run. That is a rather big problem, and it causes complications beyond this. Imagine you have the mythical Continuous Integration: somewhere in your company there is a server that runs these tests every time a release is made, code is pushed to the repository, etc.

    This is where the problem appears. The server runs somewhere on, say, Windows or Linux — how will these tests run there? The developer at least has a phone to plug in and launch.

    Something needs to be done about this situation. There are various approaches, some quite extreme: for example, a hundred phones are bought, connected to the server, and the tests run on them. I have heard this happens, and you may face it someday. But it may well be that you can do without it.

    Let's try to see how.

    One of the most popular solutions is mocks — imitators. A mock is a dummy, an imitation of the public interface of a class. Those who know Python know the mantra: if it walks like a duck and quacks like a duck, it is probably a duck. Though it could be a drake — I don't know.

    If a class and its public interface meet the expectations of some function — if that function is ready to accept as input any class with a certain interface — then nothing prevents you from making a class that looks the same from the outside, but instead of running the real code and computing real results returns predetermined values. For example, instead of computing a million decimal places in your test, you simply hand back a precomputed value.

    In our case, we can simply take the Android OS class we need — PackageManager is one of the most common examples — and replace it with our dummy, which will produce the expected result.
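    To make the idea concrete, here is a minimal hand-rolled mock in plain Java. The PiCalculator, FakePiCalculator and CircleAreaComputer names are my own illustration, not from the lecture:

```java
// A dependency that is slow or impossible to run in a unit test.
interface PiCalculator {
    double calculatePi(int digits); // imagine this takes minutes for real
}

// A hand-rolled mock: same public interface, canned answer.
class FakePiCalculator implements PiCalculator {
    @Override
    public double calculatePi(int digits) {
        return 3.14159; // precomputed value instead of real work
    }
}

// The code under test only cares that its input "quacks" like a PiCalculator.
class CircleAreaComputer {
    private final PiCalculator pi;

    CircleAreaComputer(PiCalculator pi) {
        this.pi = pi;
    }

    double area(double radius) {
        return pi.calculatePi(5) * radius * radius;
    }
}
```

    CircleAreaComputer never learns that it was handed a fake — it walks like a PiCalculator, so it is one.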

    Mocks also have a useful property: they let you record calls to the class's methods. Sometimes we want to know: did the code under test even use the object it was given? Maybe the programmer accidentally forgot to use it, or forgot to remove a stub that always returns the same value, and by chance that value matched our expectation. Honestly, this happens — I have seen it, suffered from it, and fixed it.

    In that case it would not hurt to sometimes check that the methods of our class were called at all.

    The popular Mockito tool will help us with this — it is all about mocks. It lets you create "skeletons" of classes, automating what you would otherwise do by hand: give it some class, and it returns another class — reinforced, augmented, perhaps trickier. The library lets you define responses for individual methods. For example, you say: I have method A — when it is called, return 42. Method B — return null. And so on.

    It also lets you verify the fact that method A was called, and so on.

    As usual, all this has a price: the complexity of your code grows.

    Let's see how it really looks. Here is the simplest example of a simple test. There is some abstract Restaurant class with two methods — getFreeSeats and getOccupiedSeats — which return the number of free and occupied seats.

    The code of this class gets those seat counts from external sources — say, it reads them from preferences where some other application, Airbnb for example, wrote them. If we tested it the usual way, we would have to run it on a phone so that it could access that data in those methods. But it does not matter where it gets the data from. What matters is that the code interacting with the Restaurant class — an informer class — received the data we expected, so that we can later verify that the results of processing that data meet our expectations.

    We create a mock and say: when getFreeSeats is called, return 42; when getOccupiedSeats is called, return 56. Then we check: when this dummy is fed to our informer class and we call the informer method under test, the result of that function is a line that actually contains our numbers — 42 and 56. And in case the programmer was lazy, in a hurry, or did not think, we check whether those methods were actually called.

    Using the verify function, we verify that they were called. The example is simplified, but you can also verify that these methods were called with the values you expected, if they accept parameters.
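    Reconstructed from the description above, the test might look roughly like this. Mockito and JUnit 4 on the classpath are assumed; the Informer class, its makeReport method and the exact report string are my guesses at what the slide showed:

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import static org.mockito.Mockito.verify;
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class InformerTest {
    @Test
    public void reportContainsSeatCounts() {
        // A dummy with the same public interface as the real Restaurant.
        Restaurant restaurant = mock(Restaurant.class);
        when(restaurant.getFreeSeats()).thenReturn(42);
        when(restaurant.getOccupiedSeats()).thenReturn(56);

        // The class under test does not notice the substitution.
        Informer informer = new Informer(restaurant);
        assertEquals("Free seats: 42, occupied: 56", informer.makeReport());

        // In case the programmer was lazy: were the methods actually called?
        verify(restaurant).getFreeSeats();
        verify(restaurant).getOccupiedSeats();
    }
}
```

    The when(...).thenReturn(...) calls define the canned answers; verify(...) fails the test if a method was never invoked.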

    A case closer to reality: the simplest class, YandexPackageCounter, which counts how many Yandex applications are on the device. It does this with a very sophisticated heuristic — it checks that "yandex" occurs in the package name. In a normal situation, I think, that is enough. This class suits our purposes; it is taken from a small program of mine. It calls methods of the PackageManager class, which it expects as input, and normally we would have to run it on a device, because PackageManager does not exist under Windows.

    In our case, we are cleverer: we make a mock of the PackageManager class.

    First we prepare the result of the function: if someone asks for the list of packages, we must return something. We prepare some dummy ApplicationInfo objects, set the names of our packages there — they are stored in an array somewhere — and then in the test we make the mock of PackageManager and ask it to return that specific array when called.

    Here we check that we return this list only when the getInstalledApplications method is requested with the GET_META_DATA flag.

    The test itself does not differ much from what we saw earlier. YandexPackageCounter is called, the dummy is passed to it, and it does not notice anything: it believes it has received the list of packages, the complex heuristic fires, and we get the result — two Yandex packages in the system.

    Just in case, we check that the list of packages was requested at all. You never know. Programmers.
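    A sketch of what this might look like, again with Mockito assumed; the counter's countYandexPackages method name and the concrete package names are illustrative guesses, not the lecture's actual code:

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import static org.mockito.Mockito.verify;
import static org.junit.Assert.assertEquals;

import android.content.pm.ApplicationInfo;
import android.content.pm.PackageManager;

import org.junit.Test;

import java.util.ArrayList;
import java.util.List;

public class YandexPackageCounterTest {
    private static final String[] PACKAGES = {
            "ru.yandex.searchplugin", "ru.yandex.maps", "com.example.other"};

    @Test
    public void countsYandexPackages() {
        // Prepare dummy ApplicationInfo objects with our package names.
        List<ApplicationInfo> apps = new ArrayList<>();
        for (String name : PACKAGES) {
            ApplicationInfo info = new ApplicationInfo();
            info.packageName = name;
            apps.add(info);
        }

        // The mock returns our list only for the GET_META_DATA flag.
        PackageManager pm = mock(PackageManager.class);
        when(pm.getInstalledApplications(PackageManager.GET_META_DATA))
                .thenReturn(apps);

        YandexPackageCounter counter = new YandexPackageCounter();
        assertEquals(2, counter.countYandexPackages(pm));

        // Just in case: the list of packages was actually requested.
        verify(pm).getInstalledApplications(PackageManager.GET_META_DATA);
    }
}
```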

    The Android OS is vast, its SDK is vast — there are many interfaces and classes you can call. If you hand-craft a dummy for every such class, if you cook up mocks by hand every time, sooner or later you will lose the will to live and to write tests, and will run off to become a manager or an artist.

    Here fellow programmers have not left you in trouble: they prepared an interesting tool called Robolectric. It is a good simulation of the Android API — in effect a whole forest of ready-made mocks, or rather not quite mocks but "shadows" of existing classes from the Android SDK, which try, in the minimum necessary way, to pretend that you have an Android OS. And it even more or less works.

    It is built so that when you, for example, call the package installation functions, Robolectric will of course not install anything anywhere, but will record for itself that someone tried to install a package with that name.

    When you then ask Robolectric's fake PackageManager for the list of packages, it will see that such-and-such a package has been installed and return it.

    This saves a lot of routine and makes it easy to move instrumented tests into the category of local tests, because you can run them on your computer.

    But as usual, nothing comes for free. The flip side is that initializing all this splendor of shadows and prepared SDKs takes quite a lot of time — initializing a single test wrapped in Robolectric can take 100 ms or more. It must be used wisely: it is not a silver bullet, but a tool that gets you out of certain otherwise difficult situations.

    It also requires fairly non-obvious configuration. You will see this if you decide to dig deeper; it is hard to convey within one lecture.

    Consider the simplest case.

    It is the same class, but now it is wrapped in the RobolectricTestRunner class, which initializes everything, prepares the internal structures and mocks, and intercepts the calls. When you simply call RuntimeEnvironment.application.getPackageManager in your code, it returns a ShadowPackageManager object, ShadowPM. Behind this cool name hides the same PackageManager mock, but wrapped in a slightly more convenient set of functions that lets you, for example, manually create an ApplicationInfo for a package, and so on.

    We again prepare a list of packages, and our test is no different from what it would look like as a simple instrumented test. We get the PackageManager from our RuntimeEnvironment, hand it to the counter object along with the list of packages, and so on. The only thing missing is the check that the function fetching the package list was actually called — let's leave that as homework.
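    A Robolectric version of the same test might be sketched like this. The shadow API shown follows Robolectric's documented pattern, but method names vary between Robolectric versions, and the countYandexPackages name is again my guess — treat this as an outline:

```java
import android.content.pm.PackageInfo;
import android.content.pm.PackageManager;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.robolectric.RobolectricTestRunner;
import org.robolectric.RuntimeEnvironment;
import org.robolectric.Shadows;
import org.robolectric.shadows.ShadowPackageManager;

import static org.junit.Assert.assertEquals;

@RunWith(RobolectricTestRunner.class)
public class YandexPackageCounterRobolectricTest {

    @Test
    public void countsYandexPackages() {
        PackageManager pm = RuntimeEnvironment.application.getPackageManager();
        ShadowPackageManager shadowPm = Shadows.shadowOf(pm);

        // "Install" fake packages; Robolectric only records them.
        for (String name : new String[] {"ru.yandex.maps", "ru.yandex.mail"}) {
            PackageInfo info = new PackageInfo();
            info.packageName = name;
            shadowPm.installPackage(info);
        }

        // The class under test runs locally, no device needed.
        YandexPackageCounter counter = new YandexPackageCounter();
        assertEquals(2, counter.countYandexPackages(pm));
    }
}
```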

    The next topical issue on Android is the already mentioned testing of application UIs. What tools are there?

    First, UI-related tests are very fragile. Before, I knew that I was calling a function: there is a lock, I wait on it until the function finishes, and everything is fine. Here you have none of that. You are an ordinary user: you clicked a button and got a spinner, and your test thinks an input field is about to appear so it can type data there. But the input field did not appear, because your phone or emulator decided that right now was the moment to free up memory or re-index all its files. The input field appears literally a second later, but your test did not expect that: aha, the input field did not appear — I'm failing, master, error! As a result, the code seems to work, the test crashes, and what are you supposed to do?

    This is because UI operations are asynchronous. You cannot simply rely on something appearing within a certain interval, or on something taking a strictly defined amount of time. And not least, you need to figure out how to imitate user actions at all — I poke with my hands, but code is not a person.

    The Espresso tool attempts to solve all of this. It is familiar firsthand to anyone who has stepped onto this slippery path. Part of the Android Testing Support Library, it presents an API for interacting with the interface, hiding the whole ugly kitchen of waiting, asynchrony and the rest under the hood of fairly simple functions. Importantly, it also extends very well: a UI is not limited to simple buttons and input fields — your application might send intents or open a WebView — and there are many add-ons that extend Espresso's functionality accordingly. We will consider only the most basic usage here, but if you decide to use it, know that many adventures lie ahead.

    For now, let's get down to the basics.

    There is a simple application — our old friend for checking password strength, the same isStrongPassword function — only now we cannot call it directly and check true, false, values and so on. We have an interface. We want a test: when we enter a password value that is considered weak, this is shown in the UI — we click the button, and the word "Weak" appears in a terrible red color. We will not test the color; that is a matter of taste.

    What does such a test look like? First, there are many additional imports: we statically import the extra tools we need from the Espresso package. The static imports are there so we do not have to spell out the full path every time, since we need a lot from this package.

    The main meat is here. This is the CheckerUITest class wrapped in the AndroidJUnit4 runner. Then come the basic constants we expect to see in our application. My apologies — they are not moved out into the strings.xml file, as all the canons demand, but for our purposes this will do.

    Next, one of the most essential parts: the Rule annotation and a certain mActivityRule variable. Rules are JUnit framework tools designed to make your life easier. As mentioned earlier, preparing the environment and freeing resources can be quite painful. An ActivityRule, in particular, takes on the whole tedious job of launching your Activity, showing it on screen, then tearing it down and releasing the resources associated with it. Why should you think about that? You just want to test.

    We have a fairly simple test — badPasswordDisplayed. It lets us describe a sequence of actions. First we say: there is a certain view that we know about — please perform the following set of actions on it. Then we list, with special operators, what we want to do: enter the text of a weak password, then close the on-screen keyboard, because it gets in the way of clicking the button, for example. Not a very well-thought-out interface, I agree.

    Then you perform the action: click the "check password" button. It has the id check_strength, and you say: execute the click method. Next is the main verification block. You say: there is such-and-such a view — please check that its text meets our expectations. Here we use a matcher from the already mentioned Hamcrest and check matches(withText(WEAK_STATUS)). If it suddenly says "Strong", the test will fail.

    This lets you describe basic interaction with the interface concisely and, most importantly, conveniently. No "find me an element with such-and-such a name in this interface, and if you did not find it, or it is null, then..."

    And most importantly, Espresso takes care of the whole waiting process: it will check whether the element has appeared or whether it needs to wait a bit longer, and so on.
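    Putting the pieces together, the test described above might look like this. The imports are from the support-library-era Espresso; the activity class and the view ids other than check_strength are my assumptions about the slide:

```java
import android.support.test.rule.ActivityTestRule;
import android.support.test.runner.AndroidJUnit4;

import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

import static android.support.test.espresso.Espresso.onView;
import static android.support.test.espresso.action.ViewActions.click;
import static android.support.test.espresso.action.ViewActions.closeSoftKeyboard;
import static android.support.test.espresso.action.ViewActions.typeText;
import static android.support.test.espresso.assertion.ViewAssertions.matches;
import static android.support.test.espresso.matcher.ViewMatchers.withId;
import static android.support.test.espresso.matcher.ViewMatchers.withText;

@RunWith(AndroidJUnit4.class)
public class CheckerUITest {
    private static final String WEAK_PASSWORD = "12345";
    private static final String WEAK_STATUS = "Weak";

    // The Rule launches the Activity before each test and tears it down after.
    @Rule
    public ActivityTestRule<CheckerActivity> mActivityRule =
            new ActivityTestRule<>(CheckerActivity.class);

    @Test
    public void badPasswordDisplayed() {
        // Type a weak password, then hide the keyboard covering the button.
        onView(withId(R.id.password_input))
                .perform(typeText(WEAK_PASSWORD), closeSoftKeyboard());

        // Click the "check password" button.
        onView(withId(R.id.check_strength)).perform(click());

        // Verify the status label shows the expected text.
        onView(withId(R.id.strength_status)).check(matches(withText(WEAK_STATUS)));
    }
}
```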

    If you do not have time to study Espresso right now, if you need a UI test this minute to check the most basic functionality, then for the laziest, most dexterous and skillful there is the Espresso Test Recorder tool.

    A relative newcomer to our toolkit, it is part of Android Studio and records your interactions with the application. Where you used to write everything out scrupulously by hand, here you just launch the emulator, click around with the mouse, and it writes the test for you. It lives in the Run menu, next to your favorite Run and Debug items, and when you start it, something like this appears.

    In it you can see the result: I clicked into the input field, entered certain values, clicked the button, and at the bottom there is an assert. What code will our robot assistant generate?

    This test we will not be analyzing for the following reason:

    As you can see, automated test generation is a good thing, but in the end you get something hard to maintain, simply because a soulless machine lacks the talent for writing capacious, expressive code. It tries to play it safe in every respect, writing a pile of additional verification code and so on.

    Such code is fine if you need to test something right here and now, but you should not adopt it as a long-term strategy for generating maintained tests. You will shoot yourself in the foot and then rub salt in the wound. Do not do that.

    At the same time, it remains a good tool in your box. When something unexpected needs checking quickly, it will most likely come in handy.

    How deep does this rabbit hole go? Another question pops up when we set out to test Android code: but I have a service! The UI is clear now, unit tests are clear — but how do I test that my service works properly? After all, we need to take care of starting the service, stopping it, binding to it, handling the case when that fails, and so on. All of this requires instrumented tests.

    Unfortunately, with the last point conventional tools are unlikely to help you — Robolectric may be able to, but it requires a certain immersion in that deep environment. With the first two points, ServiceTestRule will help. It is another Rule from the standard Android Testing Support Library, and it takes care of the whole dark kitchen of starting the service, stopping it, and getting a connection to it. But it, too, is limited in its capabilities: if your service is not an ordinary service playing music in the background but, say, an IntentService that responds to broadcasts and the like, it will no longer help, and you will have to manage on your own.

    So how can this rather convenient tool brighten up your hard life at least a little?

    Here is an example of a service test, taken from the page dedicated to it in the Android Testing Support Library documentation. It tests a service that receives a certain number and returns a different number as output — a very common pattern for writing services: you pass something in and get something back. That is what will be tested here.

    To begin with, our old friend Rule — in our case ServiceTestRule, which is instantiated and from then on says: that's it, don't worry, I'll take care of everything.

    In the test itself, we simply create the Intent with which we want to start our service, put a specific argument into it, and get a reference to the service.

    When we call the getService function, we get an instance of our service, already connected, and call its method directly. Using the matchers mentioned earlier, we check that the answer may be any random value but, most importantly, that its class is Integer.

    Five lines, and we already have a tested service. Other service-testing cases are more involved; let's leave them for independent study.
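    The documentation example the lecture refers to looks roughly like this. LocalService, its LocalBinder, SEED_KEY and getRandomInt come from the Android Testing Support Library sample; details may differ in your version:

```java
import android.content.Intent;
import android.os.IBinder;
import android.support.test.InstrumentationRegistry;
import android.support.test.rule.ServiceTestRule;
import android.support.test.runner.AndroidJUnit4;

import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

import java.util.concurrent.TimeoutException;

import static org.hamcrest.CoreMatchers.any;
import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertThat;

@RunWith(AndroidJUnit4.class)
public class LocalServiceTest {

    // The Rule takes care of starting, binding to and stopping the service.
    @Rule
    public final ServiceTestRule mServiceRule = new ServiceTestRule();

    @Test
    public void testWithBoundService() throws TimeoutException {
        // The Intent we want to start the service with, plus an argument.
        Intent serviceIntent = new Intent(
                InstrumentationRegistry.getTargetContext(), LocalService.class);
        serviceIntent.putExtra(LocalService.SEED_KEY, 42L);

        // Bind and get a reference to the running service instance.
        IBinder binder = mServiceRule.bindService(serviceIntent);
        LocalService service = ((LocalService.LocalBinder) binder).getService();

        // Any value will do, as long as it is an Integer.
        assertThat(service.getRandomInt(), is(any(Integer.class)));
    }
}
```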

    So, we have understood the specifics of tests on Android, and learned how to test and what to test. Now let's look at an even more important thing: how to live with tests in general? How and when to write them? How to incorporate them into your normal development process?

    The first answer to this is the familiar Continuous Integration. It is not just a thing, a program or a server — it is an approach to developing your software. In our case, we will single out one aspect of it.

    You pick a dedicated server and install software on it — Jenkins, TeamCity, it does not matter — and configure it so that as soon as a pull request arrives at your GitHub or wherever, it runs the tests and checks that everything passes: the tests work, you may merge, permission granted.

    This is done so that a negligent programmer, hurrying off to drink beer with friends, does not give in to the temptation to simply push the code into the master branch and live with it — it will probably work.

    Continuous Integration is the stern beast that does not let you spoil your code in such an obvious and illogical way: it simply will not let the code be merged if the tests fail. Of course, we all understand there are situations when you must merge even though the tests do not pass — just merge it, everything is on fire, we have been hacked, all is lost. For those cases there are usually red buttons that let your Continuous Integration server wave its hand and say: all right, so be it, today we merge — but afterwards, no, no.

    All of this needs to be integrated into your core development processes. If the Continuous Integration server just stands there doing nothing, even fully configured, there will be no benefit from it. Your programmers — that is, first of all you yourself — must get used to the idea that the freebie is over: you will have to think, do, and answer for your code.

    One development approach can help you with this — the parable of all who like to develop with tests: Test Driven Development. It is one of the most popular approaches, though not the single dominant one. Some teams at Yandex use it, with very interesting — I would even say positive — results. The development procedure is curious. You used to take an idea, start writing code, develop it, and then write the tests — honestly, later, I promise. Here you write the tests first. It looks very strange: we have no code, so what will we test? That is the highlight of the approach. First, with the tests, you describe what you expect from your code — as if writing down that there is the aforementioned function isStrongPassword, and in this situation, with this string, it must return this, and here it must return that. Then you run these tests, and they immediately fail, because there is no code yet. Realizing there is no code, you begin to write it: just enough code to pass the first test, then enough to pass the second. Suddenly, all the tests pass. Excellent.
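    A toy first iteration of this cycle in plain Java. The strength rule shown — at least 8 characters with at least one digit — is purely an assumption for illustration, not the lecture's actual heuristic:

```java
// Step 1: write the test first; it describes what we expect from the code.
class PasswordCheckerTest {
    static void run() {
        assert !PasswordChecker.isStrongPassword("12345") : "too short";
        assert !PasswordChecker.isStrongPassword("longbutnodigits") : "no digit";
        assert PasswordChecker.isStrongPassword("horse7battery") : "long, has a digit";
    }
}

// Step 2: write just enough code to make the tests pass.
class PasswordChecker {
    static boolean isStrongPassword(String password) {
        return password.length() >= 8
                && password.chars().anyMatch(Character::isDigit);
    }
}
```

    Run the tests (with assertions enabled), watch them fail, write PasswordChecker, watch them pass — then loop back and write the next test.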

    You loop around and go back to writing tests — after all, your application still has more features. You write tests for them, write code for them, the tests pass. More tests, more code... There is no special exit from this cycle. Or rather, there is: at some point you say, that's it — the development process stops here, basta, make a release.

    This matters because it is very easy to get carried away: writing tests in order to write code for them, then tests again... Someone stop me.

    The main thing is to choose the scale of coverage for yourself — a certain set of features that you will cover with tests — and then iteratively produce exactly as much as you allotted for this cycle.

    After that you do other things: make a release, test it manually, and so on. Then comes the next sprint, the next development cycle, and you start again: write tests, the tests fail, code, tests, code, tests... until your application becomes as close as possible to the abstract ideal.

    Now I have to upset you. Although you, the developers, are the true heroes of this work, on whom the application depends, without whom there would be nothing — unfortunately, on your own you will not be able to write tests properly. You may say: how so? I know how the application works — I wrote it.

    Besides you, the team most likely includes people such as managers and manual testers. Why do we need them? I hate to upset you, but managers often know better than you what the user needs from your application — and therefore they often understand better how it actually gets used. A common problem in software development — not only mobile, but desktop, web and so on — is that the programmer thinks: I know how to use this application. He uses it in a certain way and verifies that it works that way. Then a naive user comes along and, instead of clicking the logical "enter" button with the OAuth token, clicks "remember password" — and the application crashes. What password is there to remember? You have not even logged in yet. A programmer would never think of such a thing. The manager knows this: he sets the tasks, he talks to users, he has the picture of requirements in front of him. He will help you write test scenarios that may be worth covering — not necessarily first in line, but worth covering.

    A manual tester — who, it would seem, ought to be replaced by the soulless robot running your autotests — is often a person with special training and a special mindset. While your autotests verify that already-found errors stay fixed and that your code works in a certain way, manual testers are very good at something often unavailable to you: they know how to break your application.

    I remember we had a great tester. I would send a release for testing, step out for tea, come back — 14 open tickets. But how? "I just went in, clicked around, and everything broke." She could explain how, but why bother? In that time she had found 4 more bugs. Wonderful woman; I hope she is doing well.

    It is these people who find new errors for you so that you can cover them with autotests — and thereby make their work easier. Freed from re-checking old errors, they hunt even more actively for ways to break your application and find more new errors. It is in this bundle, in tandem with such people, that you can write tests most effectively, cover your application, and make it beautiful, good, convenient and, most importantly, reliable. Using tests.

    When you have tests, you can be sure that this piece of code has not started behaving differently overnight, and keep working with it in the expected manner.

    The more time we can spend adding new features that bring in money, the more we earn. In theory. The economy works something like that, I am told.

    The less time we spend hunting for errors, the more time is left for things far more valuable than money — free time, hobbies, family and friends, walks, and so on. Agree: it is worth spending a little time on tests to get so much good in return.

    As soon as you start writing tests, you will find that as your skill grows it becomes easier: the tests almost write themselves. You wrote some code, had a cup of tea — the tests are done. You wrote more code and, before you could look around, the tests for it were already written. Once this process starts, it is hard to stop: you find yourself in a positive feedback loop. It works for us; let's hope it helps you too.
