An Android app in memory. A memory optimization report for Yandex.Launcher

    The lightweight Android Go system imposes stricter requirements on pre-installed applications, both in app size and in memory use. We faced the challenge of meeting these requirements: we carried out a series of optimizations and ultimately decided to substantially change the architecture of our graphical shell, Yandex.Launcher. Alexander Starchenko, head of the mobile application development team, shared this experience.

    - My name is Alexander, I am from St. Petersburg, from the team that develops Yandex.Launcher and Yandex.Phone. Today I will tell you how we optimized memory in Launcher. First, I'll briefly explain what Launcher is. Then we'll discuss why we needed to optimize memory. After that, we'll look at how to measure memory correctly and what it consists of. Then we'll move on to practice: I'll describe how we optimized memory in Launcher and how we arrived at a radical solution to the problem. Finally, I'll talk about how we monitor memory use and keep it under control.

    "Launcher" can be pronounced in different ways in Russian, but that is not so important. At Yandex we have always called it Launcher, and that is the word I will use in this talk.

    Another important point: Launcher is quite widely distributed through presets. That is, when you buy a new phone, Yandex.Launcher can quite often turn out to be the default and only home screen manager on it.

    Now for the reasons to optimize memory. I'll start with ours. In short: Android Go. Now in more detail. At the end of 2017, Google introduced Android Oreo along with a special version of it, the Android Oreo (Go edition). What is special about it? This version is designed for low-end, inexpensive phones with up to one gigabyte of RAM. What else? For applications preinstalled on this version of Android, Google puts forward additional requirements, in particular requirements on RAM consumption. Roughly speaking, some time after launch the application's memory usage is measured, and for a launcher it must not exceed 30-50 megabytes depending on the phone's screen size: 30 on the smallest screens, 50 on large ones.

    It is also worth noting that Google continues to develop this line, and there is already an Android Pie (Go edition).

    What other reasons could there be for optimizing memory usage? First, your application will be less likely to be killed by the system and unloaded from memory. Second, it will work faster, since the garbage collector will run less often and memory will be allocated less often: extra objects will not be created, extra views will not be inflated, and so on. Indirectly, judging by our experience, this also leads to a smaller APK size. All of this together will get you more installs and better ratings on Google Play.

    Ok, now we know why to optimize memory. Let's look at the tools for measuring it and at what it consists of.


    Probably many of you have seen this picture. It is a screenshot of the Memory view in the Android Studio Profiler. This tool is described in sufficient detail in the official documentation, and many of you have probably used it. If you haven't, give it a try.

    What is good about it? It is always at hand and convenient to use during development. However, it has some disadvantages. Not all of your application's allocations are visible in it; for example, loaded fonts are not shown. It is also inconvenient for seeing which classes are loaded into memory, and you cannot use it in automatic mode, that is, you cannot build an automated test on top of the Android Studio Profiler.


    The next tool has existed since the days when Android development was done in Eclipse: the Memory Analyzer Tool, MAT for short. It is provided as a standalone application and is compatible with the memory dumps you can save from Android Studio.

    To open such a dump in MAT, you will need a small utility, the hprof converter (hprof-conv), which ships with the Android SDK. MAT has several advantages: for example, it can build paths to GC roots. It helped us a lot to see exactly which classes Launcher loads and when they are loaded. We could not do this with the Android Studio Profiler.

    The next tool is the dumpsys utility, specifically dumpsys meminfo. Here you see part of the output of this command. It gives a fairly high-level view of memory consumption, but it has clear advantages. It is convenient to use in automatic mode: you can easily set up tests that simply invoke this command. It also shows memory for all processes at once, and it shows all allocations. As far as we know, Google uses the memory value from this tool in its testing process.

    Let's use this sample output to briefly describe what application memory consists of. First is Java Heap: all the allocations of your Java and Kotlin code. This section is usually quite large. Next is Native Heap, the allocations made by native code. Even if you do not explicitly use native code in your application, allocations will show up here, since many Android objects, views for example, allocate native memory. The next section is Code. Everything related to code ends up here: bytecode, fonts. Code can also be quite large if you use many unoptimized third-party libraries. Next is Stack, the stack of your Java and native code, usually small. Then comes graphics memory. This includes surfaces and textures, that is, memory shared between the CPU and the GPU and used for rendering. Next is the Private Other section: everything that did not fall into the sections above, everything the system could not attribute to them, usually some kind of native allocations. Finally, the System section is the part of system memory attributed to your application.

    And at the end we have TOTAL, the sum of all the listed sections. This is what we want to reduce.

    What else is important to know about measuring memory? First of all, our application does not fully control all of its allocations. That is, we as developers do not have full control over what code gets loaded.

    Next: application memory can fluctuate a lot. During measurement you can observe large differences in readings, depending both on when you measure and on the usage scenario. So when we optimize and analyze memory, it is very important to measure under identical conditions, ideally on the same device. Better still if you have a way to trigger the garbage collector before measuring.
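    The idea of stabilizing readings can be sketched on a plain JVM. This is a minimal illustration, not Launcher's actual measurement code: on a device you would compare values from dumpsys meminfo instead, and `HeapProbe` and `usedHeapBytes` are made-up names.

```java
// Sketch: read the managed heap in a repeatable way by requesting GC first.
public class HeapProbe {

    // Returns the currently used managed-heap size in bytes, after asking
    // the runtime to collect garbage to reduce measurement noise.
    static long usedHeapBytes() {
        Runtime rt = Runtime.getRuntime();
        rt.gc(); // a hint, not a guarantee, but it helps stabilize readings
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long before = usedHeapBytes();
        byte[] ballast = new byte[4 * 1024 * 1024]; // allocate ~4 MB
        long after = usedHeapBytes();
        System.out.println("delta=" + (after - before)
                + " bytes, ballast=" + ballast.length);
    }
}
```

Comparing two such readings taken around a scenario (say, opening the full app list) gives a rough per-feature cost even before reaching for the profiler.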

    Excellent. We know why we need to optimize memory, how to correctly measure it, what it consists of. Let's get to practice, and I'll tell you how we optimized memory in Launcher.

    This was the starting situation. We had three processes, which together allocated about 120 megabytes. That is almost four times more than our target.

    As for the main process, it had a large Java Heap section, a lot of graphics memory, a large Code section, and a fairly large Native Heap.

    At first, we approached the problem quite naively and decided to follow some quick recommendations from Google and other sources. We turned our attention to the synthetic methods generated during compilation. We had more than two thousand of them. In a couple of hours we removed them all and measured memory again.
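    As a reminder of what these methods are: older Android toolchains generated a synthetic `access$NNN` bridge whenever an inner class touched an outer class's private member (modern javac uses nestmates instead, so on a desktop JVM the count below may well be zero). A hedged sketch of spotting them via reflection; the class names are illustrative:

```java
import java.lang.reflect.Method;

public class SyntheticCount {
    private int hits = 0;           // private field accessed from the inner class

    class Clicker {
        // On legacy toolchains this access forced a synthetic access$ method;
        // making `hits` package-private was the usual fix.
        void click() { hits++; }
    }

    // Counts compiler-generated (synthetic) methods on a class via reflection.
    static long countSynthetic(Class<?> cls) {
        long n = 0;
        for (Method m : cls.getDeclaredMethods()) {
            if (m.isSynthetic()) n++;
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println("synthetic methods: "
                + countSynthetic(SyntheticCount.class));
    }
}
```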

    That gave us a gain of about one or two megabytes in the Code section. Excellent.

    Next, we turned our attention to enums. As you know, an enum is a class, and as Google eventually admitted, enums are not very memory efficient. We converted all our enums to @IntDef and @StringDef. You might object that ProGuard will handle this, but in fact ProGuard will not replace every enum with primitive types. It is better to do it yourself. By the way, we had more than 90 enums, quite a lot.
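    The conversion pattern can be sketched like this. On Android you would annotate with @IntDef from androidx.annotation so Lint restores type safety; the annotation is left out here so the snippet compiles on a plain JVM, and `ScreenState` with its constants is an invented example, not Launcher code:

```java
// Before: enum State { IDLE, LOADING, READY } -- each constant is a full
// object, plus a class and a values() array.
// After: plain int constants, no extra classes or per-constant allocations.
public class ScreenState {
    public static final int STATE_IDLE = 0;
    public static final int STATE_LOADING = 1;
    public static final int STATE_READY = 2;

    // On Android, add @IntDef({STATE_IDLE, STATE_LOADING, STATE_READY})
    // on a @Retention(SOURCE) annotation to keep Lint checking callers.
    static String describe(int state) {
        switch (state) {
            case STATE_LOADING: return "loading";
            case STATE_READY:   return "ready";
            default:            return "idle";
        }
    }
}
```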

    This optimization took days, since most of it had to be done manually, and we won about three to six megabytes in the Java Heap section.

    Next, we looked at collections. We used fairly standard Java collections, such as HashMap. We had more than 150 of them, all created at Launcher startup. We replaced them with SparseArray, SimpleArrayMap and ArrayMap, and began creating collections with a predetermined size so that empty slots would not be allocated. That is, we pass the expected size to the constructor.
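    The presizing idea can be sketched in plain Java. SparseArray and ArrayMap are Android SDK classes, so a standard HashMap stands in here, and `buildIndex` is a made-up helper:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: pass the expected size to the constructor so the map never
// allocates empty slots it will not use and never rehashes while filling.
public class PresizedMaps {
    static Map<String, Integer> buildIndex(String[] keys) {
        // HashMap resizes once size > capacity * loadFactor (0.75 by
        // default), so reserve keys.length / 0.75 buckets up front.
        Map<String, Integer> index =
                new HashMap<>((int) (keys.length / 0.75f) + 1);
        for (int i = 0; i < keys.length; i++) {
            index.put(keys[i], i);
        }
        return index;
    }
}
```

On Android, SparseArray goes further: it keeps primitive int keys in a sorted array, avoiding both boxed Integer keys and per-entry Entry objects.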

    This also gave a certain gain, and this optimization also took us days, most of which we did manually.

    Then we took a more specific step. We saw that we had three processes, and as we know, even an empty process on Android takes about 8-10 megabytes of memory, which is quite a lot.

    My colleague Arthur Vasilov talked about the processes in detail: not long ago he gave a talk at the Mosdroid conference, also about Android Go.

    Where did these optimizations leave us? On the main test device we observed memory consumption around 80-100 megabytes: not bad, but still not enough. We began measuring memory on other devices and found that on faster devices consumption was much higher. It turned out we had many deferred initializations: some time after startup, Launcher inflated additional views, initialized libraries, and so on.

    What did we do? First of all, we went through all the views and layouts. Views that were inflated with visibility set to gone we moved into separate layouts and began to inflate programmatically. Views we did not need we stopped inflating at all until the moment the user actually needs them. We also paid attention to image optimization and stopped loading images the user does not see right now. In Launcher's case, these were the application icons in the full app list: until it is opened, we do not load them. This gave us a very good win in the graphics section.

    We also checked our in-memory image caches. It turned out that not all of them were optimal: the cached images did not always match the screen of the phone Launcher was running on.
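    Android's own android.util.LruCache is the usual tool for this, but it is Android-only, so here is a minimal JVM-only sketch of the same idea: a cache bounded by total byte weight that evicts least-recently-used entries. `ByteBoundedCache` and `Weigher` are illustrative names, not Launcher's real cache:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A byte-bounded LRU cache in the spirit of android.util.LruCache.
// In practice maxBytes is derived from the device, e.g. an eighth of
// Runtime.getRuntime().maxMemory(), so small phones get small caches.
public class ByteBoundedCache<K, V> {
    public interface Weigher<V> { int weightOf(V value); }

    private final int maxBytes;
    private final Weigher<V> weigher;
    private int usedBytes = 0;
    private final LinkedHashMap<K, V> map =
            new LinkedHashMap<K, V>(16, 0.75f, true) { // access-order => LRU
                @Override
                protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                    if (usedBytes > maxBytes) {
                        usedBytes -= weigher.weightOf(eldest.getValue());
                        return true; // evict the least-recently-used entry
                    }
                    return false;
                }
            };

    public ByteBoundedCache(int maxBytes, Weigher<V> weigher) {
        this.maxBytes = maxBytes;
        this.weigher = weigher;
    }

    // Sketch only: assumes a key is not put twice (no weight adjustment
    // for replaced values) and evicts at most one entry per put.
    public void put(K key, V value) {
        usedBytes += weigher.weightOf(value);
        map.put(key, value);
    }

    public V get(K key) { return map.get(key); }
}
```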

    After that, we began analyzing the Code section and noticed a lot of rather heavy classes coming from somewhere. They turned out to be mostly library classes, and we found some strange things in some libraries. One library created a HashMap and, in a static initializer, filled it with a large number of objects.

    Another library loaded audio files in a static block, occupying about 700 kilobytes of memory.
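    Deferring that kind of eager static work can be sketched with the lazy-holder idiom: the JVM runs a class's static initializer only when the class is first touched. `HeavySamples` below is a hypothetical stand-in for the third-party class whose static block loaded the audio:

```java
public class LazyInit {
    static boolean heavyWorkDone = false; // only to make laziness observable

    // Holder class: its static initializer runs on first access, not at
    // application startup, so the cost is paid only if the feature is used.
    static class HeavySamples {
        static final byte[] AUDIO = load();
        private static byte[] load() {
            heavyWorkDone = true;
            return new byte[8]; // pretend this is the ~700 KB of audio
        }
    }

    static byte[] audio() { return HeavySamples.AUDIO; }
}
```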

    We stopped initializing such libraries eagerly and began working with them only when the user actually needs those functions. All these optimizations took several weeks: we tested a lot and checked that we had not introduced new problems. But we also got quite a good win, about 25-40 megabytes across the Native Heap, Code, and Java Heap sections.

    But this was still not enough. Memory consumption had not dropped to 30 megabytes, and it seemed we had exhausted the simple, safe, and easily automated optimizations.

    We decided to consider radical solutions. We saw two options: creating a separate lite application, or reworking Launcher's architecture into a modular one, with the ability to build Launcher without optional modules. The first option is quite long and expensive: most likely, such a lite app would grow into a full-fledged separate application that has to be supported and developed on its own. The modular-architecture option is also expensive and fairly risky, but it is still faster, since you keep working with a familiar code base and you already have a set of automated unit tests, integration tests, and manual test cases.

    It should be noted that no matter which option you choose, you will somehow have to abandon part of the features of your application in the version for Android Go. This is normal. Google does the same in its Go apps.

    As a result, having implemented the modular architecture, we solved our memory problems quite reliably and began to pass the tests even on devices with small screens, that is, we reduced memory consumption to 30 megabytes.

    A bit about memory monitoring, about how we keep memory usage under control. First of all, we set up static analyzers: Lint now reports an error when we use an enum, generate synthetic methods, or use non-optimized collections.

    Next, something more involved. We set up automated integration tests that run Launcher on emulators and, after a while, record its memory consumption. If it differs significantly from the previous build, warnings and alerts fire. We then investigate the problem and do not ship changes that increase Launcher's memory usage.
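    The core of such a check can be sketched as parsing the TOTAL value (in kilobytes) out of captured `adb shell dumpsys meminfo <package>` output and comparing it to a budget. The regex, sample text, and 30 MB threshold below are simplified assumptions; real dumpsys output has more columns and sections, and `MemBudgetCheck` is an invented name:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of an automated memory-budget check over dumpsys meminfo output.
public class MemBudgetCheck {
    private static final Pattern TOTAL = Pattern.compile("TOTAL\\s+(\\d+)");

    // Returns total PSS in kilobytes, or -1 if no TOTAL line was found.
    static long totalPssKb(String dumpsysOutput) {
        Matcher m = TOTAL.matcher(dumpsysOutput);
        return m.find() ? Long.parseLong(m.group(1)) : -1;
    }

    static boolean withinBudget(String dumpsysOutput, long budgetKb) {
        long kb = totalPssKb(dumpsysOutput);
        return kb >= 0 && kb <= budgetKb;
    }
}
```

The CI job would fail the build (or raise an alert) when `withinBudget` returns false, which is roughly the gate described above.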

    To summarize. There are various tools for measuring and monitoring memory. It is better to use them all, since each has its pros and cons.

    The radical solution with a modular architecture turned out to be the most reliable and effective for us. We only regret not taking it right away. But the steps I described at the beginning of the talk were not in vain: we noticed that the main version of the application began to use memory more efficiently and to work faster. Thanks.
