iPhone, don’t lag. Part 1: multithreading for practitioners


    My name is Maxim, and I'm an alcoholic who has been doing iOS development for more than 7 years.

    Speaking of job applicants: I regularly conduct interviews with mobile developers for various companies.

    Among the candidates there are characters who smoke hookah right during a Skype interview, try to google the answers on the fly, want a 180k salary with 3 months of experience, talk to me as if they were mugging me on the street (complete with the appropriate vocabulary), and so on.

    But in most cases even adequate middle-level developers share a common gap: a lack of understanding of how asynchronous task execution and hardware acceleration work in iOS.

    In this article I decided to explain, in simple words, how to apply multithreading in iOS, so that after a single reading you can easily and fully put the knowledge into practice.

    (If you are too lazy to read, there is a video version attached.)

    There will be two articles: this one, dedicated to multithreading, and a second one on hardware acceleration: how to distribute the load evenly between the CPU and GPU to get a perfectly smooth interface.

    For those interested not only in learning how to apply the techniques but also in understanding the Zen behind them, there is an excellent article. True, it is still written for Swift 3, but the essence hasn't changed since then.

    SHOCK! The true causes of lags!
    As one expert told me during an interview: the application is slow because the signal from the server cannot travel faster than the speed of light, and everything lags in that gap.
    So, physics, you heartless scum. Mystery solved, we can all go home.

    Brief Practical Theory


    A practical theory is a theory without which you are not a result-oriented practitioner, but simply an uneducated savage.

    And before you start fantasizing about asynchrony, threads and other tricks, you need to answer one question: why parallelize anything at all? There is the main thread, why not run everything on it? I hope the answer is obvious to most: because everything will slow to a crawl, hello.

    So what is so special about the main thread? Its peculiarity is that it handles the application's interaction with the outside world: touch processing, notifications, system messages, and more.

    And the main thing in our case is that the entire responder chain hangs on the main thread:
    UIApplication -> UIWindow -> UIViewController -> UIView.



    All screen taps, all user interaction, end up exactly there.

    Fine, let the taps be handled on the main thread, but, damn it, Apple, why can't I draw with both hands like Clint Eastwood?

    Because to make several threads talk to each other you would have to cover yourself in a thick layer of synchronization machinery, and that is all extra junk and extra pressure on already scarce resources. Apple even introduced the Main Thread Checker to help avoid all sorts of exotic bugs caused by inhumane treatment of the main thread.

    In general, the first rule is to leave the main thread for the UI and the UI for the main thread.

    Okay, so where does everything else go?

    iOS has plenty of tools for this: Thread (NSThread), POSIX threads, GCD, OperationQueue.
    Each has its own use, but in everyday life, which consists of banal tasks like "go to the server, fetch, save and display", GCD and OperationQueue are enough.

    GCD is Apple's library for running tasks in parallel. It consists of operations (tasks) to be performed and queues that hold those operations: essentially the most banal FIFO collection of tasks. Of course there are plenty of other options, but we don't need them yet.
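
    To make this concrete, here is a minimal sketch of the usual GCD pattern (the function name and the fake product list are made up for illustration): heavy work goes to a background queue, and only the final hand-off to the UI comes back to the main queue.

    import Foundation

    // A sketch: work off the main thread, hop back to main for the UI.
    func loadProducts(completion: @escaping ([String]) -> Void) {
        DispatchQueue.global(qos: .default).async {
            // Stand-in for "go to the server, fetch, parse".
            let products = ["Milk", "Bread", "Cheese"]
            DispatchQueue.main.async {
                completion(products)   // whatever touches the UI runs here, on main
            }
        }
    }

    loadProducts { products in
        print("Show \(products.count) products")   // safe place to update labels, tables, etc.
    }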

    NSOperationQueue is the same kind of queue, only high-level and OOP-oriented. In practice it is just a nice wrapper over GCD; it has no functional advantages, although it used to.
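
    For comparison, a sketch of the same pattern through OperationQueue; the summation here is just a stand-in for real work.

    import Foundation

    // The same idea, wrapped in OOP: a queue with its own QoS, plus OperationQueue.main for UI.
    let workQueue = OperationQueue()
    workQueue.qualityOfService = .utility       // QoS is set on the whole queue

    workQueue.addOperation {
        let sum = (1...1_000_000).reduce(0, +)  // stand-in for heavy work
        OperationQueue.main.addOperation {
            print("Back on the main thread: \(sum)")   // UI updates would go here
        }
    }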

    The choice between the two is, with rare exceptions, a matter of taste. Work with whichever is more convenient for you. Personally, I prefer GCD for its more direct control and lack of extra overhead.

    By the way, a common bit of folklore among developers is that NSOperationQueue is supposedly no longer built on top of GCD but has been specially reworked for iOS and is therefore faster / higher / stronger. This is not the case at all; Apple's own documentation says otherwise:

    (quote from Apple's documentation)

    So NSOperationQueue has no special advantages over GCD.

    GCD & NSOperationQueue Priorities


    Let's go over the main components.

    Each queue has a notion of the priority with which it receives resources. This is called quality of service, usually abbreviated QoS.
    The higher the priority, the more CPU time is allocated to the tasks in that queue, and the sooner they get it. Yes, sooner too, you heard right: the system can optimize when the processor wakes up, thereby saving energy. This is useful to remember when dealing with Low Power Mode, when the user is running out of battery.

    I really wish the Yandex.Taxi authors knew about this. You can save battery in such a simple way instead of running "bitcoin mining" on my iPhone.
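
    If you want to be kinder than that, here is a rough sketch of the idea (the work inside the block is made up; isLowPowerModeEnabled is a real ProcessInfo property): demote non-urgent work when Low Power Mode is on.

    import Foundation

    // Demote non-urgent work when the user is saving battery.
    let qos: DispatchQoS.QoSClass =
        ProcessInfo.processInfo.isLowPowerModeEnabled ? .background : .utility

    DispatchQueue.global(qos: qos).async {
        // bulky, non-urgent work: exports, sync, prefetching, and so on
    }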

    So what are the priorities? There are only a few, and you should memorize them all so that it doesn't hurt later. Even though many claim this is supposedly not covered anywhere and you don't really need to know it.

    And so the priorities are: userInteractive, userInitiated, default, utility and background.

    Main is not on the list, because it is not a priority but a separate queue for the main thread. It does, by the way, have a priority of its own: userInteractive. So if, for example, you run your image shamanism on a separate queue with userInteractive priority, you will get very real lags, because a race for resources begins. The problems are smaller than if you simply ran everything on main, but they are harder to debug, because the lags are unstable.
    (There is also unspecified, but that is outright exotica you are unlikely to ever encounter.)

    If you want to understand exactly how operations are shuffled between queues, see the article linked above.

    So when to use what?

    • userInteractive - you hardly ever need to use it. Behind the scenes it is reserved for the main thread, as I wrote above. Apple defines its scope as: operations critical for user interaction that take no more than a split second. Sounds exactly like UI work, doesn't it? In practice I have had only one task that was supposed to compete with the interface in speed and required surgical precision, and it was not solved through GCD anyway. In short, userInteractive is for the gods from Apple, not for simple working folk like us.
    • userInitiated - local operations that need an immediate result. For example, saving something to the database before moving to the next screen, without blocking the UI. Note the emphasis on local operations: the network does not belong here.

      Suppose you are spinning a loading indicator in the middle of the screen and urgently need to prepare the content to show. Obviously this cannot be done on the main thread, because the GUI will start to stutter, but it also makes no sense to push it too deep into the background, because that single spinner is the whole interface right now. This is where userInitiated comes in.
    • default - the default priority. Apple contradicts itself here: the documentation says you should not use it, while at WWDC they say it is the best priority for downloading images and other network communication. Having played around with different QoS levels, I can say that default works best for downloading images or small files that affect the user's perception of the app. The difference between utility (the next level down) and default is really noticeable when working with images, especially with pre-rendering: default finishes much faster, yet does not compete with the interface for resources. My recommendation is to keep all network business logic and images on default.
    • utility - something not too high-priority, but still needed in the near future. For example, processing bulky files, complex database manipulations, media conversion, and so on. Simply put, when the task matters to the app but a couple of extra seconds of waiting will not make any difference. By the way, such operations are the first candidates for demotion to background when Low Power Mode is on.
    • background - the most vegetable mode of all. As they say, for those who know life and are in no hurry. Use it to save energy or for super-heavy operations: downloading fat files, backups and the like. And if the user has turned on Low Power Mode while your operation was already at background priority, then maybe just drop it altogether, huh?
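
    As a rough sketch of how this list maps onto code (the queue labels and the tasks in the comments are made up):

    import Foundation

    // One illustrative queue or global QoS per use case from the list above.
    let saveQueue = DispatchQueue(label: "com.example.db-save", qos: .userInitiated)
    saveQueue.async { /* save the draft before pushing the next screen */ }

    DispatchQueue.global(qos: .default).async    { /* download and decode product images */ }
    DispatchQueue.global(qos: .utility).async    { /* convert media, crunch a bulky file */ }
    DispatchQueue.global(qos: .background).async { /* upload a backup nobody is waiting for */ }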

    Practice in the real world


    Speaking of real applications: if you use a third-party framework for some task, most such tools either do their work on the queue they were called from or let you specify the queue explicitly. If five minutes of searching did not turn up a way to set the priority explicitly, just wrap the call in dispatch_async yourself and stop worrying.

    The main thing to keep in mind is that callbacks are often invoked on the main thread for historical reasons. It happens that you fire a request with default QoS, then trigger a database save in the completion block, forgetting that by that point you are already back on main. And then you scratch your head wondering why the app barely crawls.

    So if you are not sure, put a breakpoint in the block and look at the call stack. In such moments it is better to double-check right away than to hunt for lags through the profiler later. I love asking about the profiler at interviews, by the way.

    Main thread: (call-stack screenshot)

    Any other thread: (call-stack screenshot)

    In general, always pay attention to which thread an action runs on. It will save you a lot of nerves and time later.
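
    Besides the breakpoint, you can check it right in code. A sketch with a made-up callback-based API, just to show the checks themselves:

    import Foundation

    // fetchProfile is fictional; many real SDKs behave the same way and
    // call the completion on the main thread for historical reasons.
    func fetchProfile(completion: @escaping (String) -> Void) {
        DispatchQueue.main.async { completion("profile.json") }
    }

    fetchProfile { payload in
        print(Thread.isMainThread)                        // true here
        dispatchPrecondition(condition: .onQueue(.main))  // traps if we are not actually on the main queue

        // Any heavy follow-up (parsing, saving to the database) gets pushed off explicitly:
        DispatchQueue.global(qos: .utility).async {
            print("Processing \(payload) off the main thread")
        }
    }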

    Another nuance that comes up once you embrace asynchrony: how much should be carried out as separate operations? Where is the boundary? What are the consequences?

    Philosophically, if something is asynchronous in nature, it can be made asynchronous in code. But let's be more pragmatic: if your application is composed of many sub-second operations, first ask whether these little things could be combined into some larger task. If you spawn a separate operation for every sneeze, you will only get more lags.
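
    A sketch of what that means in code (items and prepare are made up): one block per batch instead of one block per item.

    import Foundation

    let items = Array(1...500)
    func prepare(_ value: Int) -> String { "item \(value)" }

    // Worse: 500 tiny operations fighting for the thread pool.
    // for item in items {
    //     DispatchQueue.global(qos: .utility).async { _ = prepare(item) }
    // }

    // Better: one operation that prepares the whole batch and hands it back once.
    DispatchQueue.global(qos: .utility).async {
        let prepared = items.map(prepare)
        DispatchQueue.main.async {
            print("Ready to display \(prepared.count) items")
        }
    }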

    For example: we have a table of products in a store. Each cell has a price, an avatar and a multi-line description. The price is localized (ruble symbol plus formatting), and so is the description (it carries a certain verbal prefix). As a rule, the localized string is composed right at the moment the values are set into the corresponding labels.

    But could this be done asynchronously? First localize in the background, then assign to the label?
    No, that is a lousy solution. The better option is for each product object to compose its localized values right after the server response, writing the data into the corresponding fields of the entity.

    It is also particularly useful to calculate the sizes of those fields in advance and write them into the model. Yes, that is perfectly fine, even though it looks unusual.

    Our team adopted this practice long ago: compute cell heights explicitly when receiving data from the server and save them to the database. Or into an array, if you do not use a database; the point is that it happens in advance and in the background. Better to let your user watch the spinner for an extra fraction of a second than to admire the stutters later.
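
    A sketch of such a model (the names, the ruble locale, the font and the 60-point offset for the avatar block are all illustrative assumptions): the display strings and the cell height are computed once, off the main thread, and the cell only assigns ready-made values.

    import UIKit

    struct ProductCellModel {
        let priceText: String
        let descriptionText: String
        let cellHeight: CGFloat

        // Built in the background right after parsing the server response.
        init(price: Decimal, description: String, cellWidth: CGFloat) {
            let formatter = NumberFormatter()
            formatter.numberStyle = .currency
            formatter.locale = Locale(identifier: "ru_RU")      // ruble symbol + formatting
            priceText = formatter.string(from: price as NSDecimalNumber) ?? "\(price)"

            descriptionText = "Description: " + description     // the localized prefix

            // Pre-measure the multi-line description for the known cell width.
            let bounds = (descriptionText as NSString).boundingRect(
                with: CGSize(width: cellWidth, height: .greatestFiniteMagnitude),
                options: .usesLineFragmentOrigin,
                attributes: [.font: UIFont.systemFont(ofSize: 15)],
                context: nil)
            cellHeight = ceil(bounds.height) + 60               // room for the avatar and price
        }
    }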

    And there is no need to worry about the amount of storage. In today's reality, memory on an iPhone is a cheap resource, while processor time is expensive. Worth remembering.

    Conclusion: prepare data for the interface in advance. It is cheaper and prettier that way.

    And since you have already forgotten half of this, here are the questions to ask yourself to stop writing nonsense:

    1. Is it possible to do the operation in advance in the background and cache the result?
    2. Which priority is best for the task?
      • userInitiated : local and urgent actions
      • utility or default : network tasks, rendering
      • background : long processes
    3. On which thread are callbacks called? Is there extra load on the main thread? (easily checked via the call stack at a breakpoint)

    In the next installment we will discuss hardware acceleration. It sounds scary, but it will be easy.

    P.S. I would be grateful for any feedback on the video. It was my first attempt, and every minute of it literally took an hour to make.
