Microinteractions in iOS. Yandex lecture

    A few weeks ago, a special CocoaHeads community event took place at the Yandex office, larger than the usual meetups. Developer Anton Sergeev spoke there about the microinteraction model that UX designers commonly use, and about how to put its ideas into practice. Most of his attention went to animation.


    - It means a lot to me to welcome our guests today. I see people I have known for a very long time, people I met only recently, and people I have not met yet. Welcome to CocoaHeads.

    I am going to talk about microinteractions. This is a bit of a bait-and-switch: we are engineers and developers, and we will indeed spend most of the time on the software side, but we will start with a very humanities-flavored topic, microinteractions. In the end we will apply this idea to the technical part in order to design small visual components more effectively and simply: buttons, small loaders, progress bars. They are saturated with animation, and branching animation code often looks very complicated and is extremely hard to maintain.

    But first, let's digress a little. Do you remember the moment you decided to become a developer? I remember it clearly. It all started with a table view. One day I decided to learn Objective-C. A fashionable language, fun, just like that, without any far-reaching plans. I found a book, the Big Nerd Ranch guide I believe, and read it chapter after chapter, doing every exercise, until I reached the table view. That was when I first met the delegate pattern, or more precisely its variation, the data source. The paradigm seems very simple to me now: there is a data source, there is a delegate, nothing special. But back then it blew my mind: how can a table be separated from completely different data? You define a table view once, and you can put an infinite number of rows of completely abstract data into it. It affected me strongly. I realized that programming has huge potential and that applying it would be very interesting. From then on, I decided to become a developer.

    As I developed, I kept running into patterns. Huge ones, called architectures, that describe a whole application. Tiny ones, dozens of which fit inside a small button. It is important to understand that all these patterns did not come out of thin air; they came from the humanities. Take the same delegate pattern: delegation appeared long before programming, and programming adopts these ideas to work more effectively.

    Today I will talk about another approach that borrows yet another idea from the humanities. Specifically, microinteractions.

    It all started with a loader. At my previous job, before Yandex, I had a task to reproduce the Google Material Design loader. There are two of them: an indeterminate one and a determinate one. My task was to combine them into a single component that could be both determinate and indeterminate, and there was a strict requirement that it be perfectly smooth. At any moment we could switch from one state to another, and everything had to be animated smoothly and neatly.

    I am a smart developer, so I did it all. I ended up with more than 1000 lines of incomprehensible spaghetti code. It worked, but I got a memorable remark in code review: "I really hope that no one will ever have to touch this code." For me that is practically a verdict. I had written terrible code. It worked great, it was one of my best animations, but the code was terrible.

    Today I will try to describe the approach I discovered after I left that job.



    Let's start with the most humanities-flavored topic: the microinteraction model. What does it consist of, and where is it hidden in our applications? Then we will move on to using this model in our technical world. We will look at how UIView, which is responsible for display and animation, actually works. In particular, we will talk a lot about the CAAction mechanism, which is closely integrated with UIView and CALayer. And then we will look at small examples.

    First, the definition. Apparently the author really liked the "micro" prefix, but there are no macro- or nanointeractions; size does not matter. For simplicity we will just call them interactions. It is a convenient model that lets you describe any interaction with an application from beginning to end. It consists of four parts: a trigger, the business logic the interaction must perform, feedback that conveys something to the user, and a change of the application's state.

    I will tell one story from three different roles. I'll start with the user, as the most important one. While preparing this talk, I got sick. I needed to find a pharmacy, so I opened Yandex.Maps. I opened the app, I look at it, it looks at me, and nothing happens. That is when I realized that I, the user, am the one in charge, giving the application instructions about what to do. I got my bearings, tapped the search button, typed "pharmacy", tapped "OK", the application did its internal work, found the pharmacies near me, and put them on the screen.

    I looked for the right one and noticed that, besides the pharmacies, there was a special button on the screen: build a route. So the application had moved to a new state. I tapped it and walked to the pharmacy. I had come to the application with a goal, to find a pharmacy, and I reached it. I am a happy user.

    Before this application appeared and I could search in it, it first had to be designed. What did the UX designer think about when inventing this flow? It all started with the need to get out of the dumb scene where the user and the application stare at each other and nothing happens. For that, some kind of trigger was needed. Everything has a beginning, and here, too, it was necessary to start somewhere.

    The trigger was chosen: the search button. When it is tapped, the problem has to be solved from a technical point of view. Request data from the server, parse the response, update the model, analyze it. Request the user's current position, and so on. So we got this data and know exactly where all the pharmacies are.

    It might seem that we could stop here. After all, we solved the problem and found all the pharmacies. There is just one issue: the user still knows nothing about these pharmacies. We need to get this across to him.

    We have to package our solution somehow and deliver it in a nice wrapper so that he understands it. It so happens that users are people, and they interact with the outside world through their senses. The current state of technology is such that only three senses are available to us as mobile application developers: vision, we can show something on the screen; hearing, we can play something through the speakers; and touch, we can nudge the user's hand with haptics.

    A human is far more capable than that, but the current state of technology means that for now we can rely only on these three. In this case we choose the screen: we show the nearest pharmacies on top of the map, plus a list with more detailed information about them. It would seem that is it, the user has found a pharmacy and everything is fine.

    But there is a problem. When the user opened the application, he was in a context where he did not know where the pharmacies were, and his task was to find one. Now the context has changed: he knows where the pharmacies are and no longer needs to search for them. His next task is to build a route to the nearest pharmacy. That is why we need to display additional controls on the screen, in particular the route button. In other words, we need to move the application to another state, in which it is ready to accept new triggers for the next interactions.

    Now imagine: the UX designer has come up with all this, comes to the developer and starts describing in vivid colors how the user presses the button, how the search happens, how satisfied the user is, how we grow DAU, and so on. The developer's stack of open questions overflows somewhere around the first sentence, where the button was first mentioned.

    He patiently listens to everything, and when it is over, he says: okay, this is cool, but let's discuss the button. It is an important element.

    During the discussion it turns out that the button is essentially a trigger. It contains logic that lets it receive messages from the system, in particular about the user's taps on the screen. In response to a tap, it can launch a chain of events, which begins with the button sending messages to various objects about the need to start various processes, in this case a request to the server, and so on.

    When pressed, the button changes its state: it becomes highlighted. When the user releases it, it stops being highlighted. That is, it gives feedback so that the user understands what to expect from this button. The button can be pressed or not pressed, enabled or disabled, be in different states and move from one state to another according to its own logic.

    So we have seen that the same microinteraction model, consisting of a trigger, business logic, feedback and state changes, can describe our application at different scales: both as a whole use case, the big search for the nearest pharmacy, and at the level of a little button.

    It is a very convenient model that simplifies communication within the team and helps structure the code by separating four entities: trigger, business logic, feedback and state change. Let's see what UIKit gives us to work with. And not just gives us: UIKit itself uses exactly this mechanism when implementing the animations of its small UIView subclasses; it does not take a different path.

    We will start with UIView and how it fits into this model. Then we will look at CALayer and what it provides for storing state, and then at the action mechanism, the most interesting part.

    So, UIView. We use it to display rectangles on the screen. But in fact UIView cannot draw anything itself: it uses another object for that, a CALayer. What UIView actually does is receive touch messages from the system, as well as the calls to the API we define in our UIView subclasses. So UIView itself implements the trigger part: it receives these messages and starts some processes.

    UIView can also notify its delegates about events that have occurred, and send messages to subscribers, as the UIControl subclass does with its various events. This is how the business logic of a UIView is implemented. Not every view has business logic; many of them are purely display elements with no business logic of their own.



    We have covered two parts, the trigger and the business logic. Where are feedback and state change hidden in UIView? To understand that, we must remember that UIView does not exist by itself. When it is created, it creates a backing layer for itself, an instance of CALayer or one of its subclasses.



    And it appoints itself as the layer's delegate. To understand how UIView uses CALayer, remember that a view can exist in different states.

    How do we distinguish one state from another? States differ in the data that needs to be stored somewhere. Let's look at what CALayer offers UIView for storing that state.



    The picture of the interaction between UIView and CALayer expands a little: UIView gets an additional task, to update the storage inside CALayer.

    Here is a little-known fact that few people use: CALayer can behave like an associative array, which means we can write arbitrary data into it under any key with setValue(_:forKey:).



    This method is present in all NSObject subclasses, but unlike most of them, CALayer does not crash when it receives a key it does not define. It stores the value correctly, and later we can read it back. This is a very handy thing that lets us write arbitrary data into a layer and read it later without creating CALayer subclasses. But it is a very primitive store, essentially a single dictionary. CALayer is more advanced than that: it supports styles.
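
    A minimal sketch of this key-value storage; the key name "progress" here is just an illustration, not anything CALayer defines.

        import QuartzCore

        // CALayer accepts arbitrary keys through key-value coding, so we can
        // stash custom data on a plain layer without subclassing it.
        let layer = CALayer()
        layer.setValue(0.42, forKey: "progress")        // store a custom value
        let progress = layer.value(forKey: "progress")  // read it back -> Optional(0.42)
        print(progress ?? "nil")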

    This is implemented through the style property that every CALayer has. By default it is nil, but we can set it and use it.



    In essence it is an ordinary dictionary, nothing more, but there is a peculiarity in how CALayer works with it when we call value(forKey:), another method inherited from NSObject. It acts very interestingly: it searches for the requested value in the style dictionary recursively. If we wrap an existing style in a new style dictionary under the "style" key and add some keys of our own, the lookup works as follows.



    First it looks at the top level, then deeper, and so on, as long as it makes sense. When style becomes nil, there is no point in looking further.

    This is how UIView, using the infrastructure provided by CALayer, can organize state changes: it updates CALayer's internal storage either through style, a rather powerful store that can simulate a stack, or through the plain associative array, which is also very effective and useful.
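
    A small sketch of the nested style lookup described above. The key "cornerRadiusHint" and the nesting layout are illustrative assumptions, and the exact fallback behavior of value(forKey:) may differ between OS versions.

        import QuartzCore

        let baseStyle: [AnyHashable: Any] = ["cornerRadiusHint": 8.0]
        let overrideStyle: [AnyHashable: Any] = [
            "style": baseStyle   // the previous style is nested under the "style" key;
                                 // no "cornerRadiusHint" here, so lookup falls through
        ]

        let layer = CALayer()
        layer.style = overrideStyle

        // Per the lookup order described in the talk, this should walk the nested
        // style dictionaries and find 8.0 in baseStyle.
        print(layer.value(forKey: "cornerRadiusHint") ?? "nil")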

    That covers the storage. Now let's move on to CAAction. I will tell you more about it.



    UIView gets a new task: to request actions from CALayer. What are these actions?



    CAAction is just a protocol with a single method, run. Apple generally loves movie-set metaphors, and action here is like in "lights, camera, action!". That name is no accident: the run method launches an action that can start, proceed and finish, which is the key point. The method is very generic: it has only the string event key, and everything else can be of any type. In Objective-C these are id and a plain NSDictionary.
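
    For reference, here is the protocol and a trivial conforming type. The LoggingAction class is a made-up illustration, not an API.

        import QuartzCore

        // The protocol declared in QuartzCore looks like this:
        // protocol CAAction {
        //     func run(forKey event: String, object anObject: Any, arguments dict: [AnyHashable: Any]?)
        // }

        // A minimal action that just logs the event it was asked to run.
        final class LoggingAction: NSObject, CAAction {
            func run(forKey event: String, object anObject: Any, arguments dict: [AnyHashable: Any]?) {
                print("run action for event \(event) on \(anObject)")
            }
        }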



    The system frameworks already contain classes that conform to the CAAction protocol. The first is CAAnimation. We know that an animation can be added to a layer directly, but that is a fairly low-level operation. The higher-level abstraction over it is to run the animation as an action on the layer with the necessary parameters.

    The second important case is NSNull. We know it cannot do anything useful, but it conforms to the CAAction protocol, and this is done to make searching for actions on layers convenient: NSNull stands for "no action needed".
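
    A common, concrete use of NSNull as an action: putting it in a layer's actions dictionary tells the lookup "there is no action for this key", which disables the implicit animation for that property on a standalone layer.

        import QuartzCore

        let layer = CALayer()
        layer.actions = ["opacity": NSNull()]  // "no action" for opacity changes

        layer.opacity = 0.5                    // applied immediately, no implicit fade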



    As we said, UIView is the delegate of its CALayer, and one of the delegate methods is action(for:forKey:). The layer, in turn, has its own method, action(forKey:).



    We can call it on the layer at any time, and it will return either a valid action or nil. The search algorithm is quite unusual. Here is the pseudocode, let's go through it. When the layer receives this message, it first consults its delegate. The delegate can return nil, which means the search should continue elsewhere, or it can return a valid object conforming to the CAAction protocol. But there is one special rule: if it returns NSNull, which also conforms to this protocol, that is later converted to nil. In other words, returning NSNull effectively means "stop searching, there is no action and none is needed".

    If the delegate returned nil, the layer keeps searching. First in its actions dictionary, then recursively in the style dictionary, which can also contain a dictionary under the "actions" key, with nested styles searched recursively too. If nothing is found there either, it calls the defaultAction(forKey:) class method, which is defined on CALayer and used to return something, but in recent iOS versions it always returns nil.
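
    A sketch of this lookup order, written as a free function. This is illustrative pseudocode modeled on the steps described above, not the real implementation.

        import QuartzCore

        func lookUpAction(for layer: CALayer, key: String) -> CAAction? {
            // NSNull conforms to CAAction and means "stop searching, no action".
            func unwrap(_ action: CAAction?) -> CAAction? {
                return action is NSNull ? nil : action
            }

            // 1. Ask the delegate first; any non-nil answer (including NSNull) ends the search.
            if let fromDelegate = layer.delegate?.action?(for: layer, forKey: key) {
                return unwrap(fromDelegate)
            }
            // 2. The layer's own actions dictionary.
            if let fromActions = layer.actions?[key] {
                return unwrap(fromActions)
            }
            // 3. The style dictionary, walking nested "style" entries.
            var style = layer.style
            while let current = style {
                if let actions = current["actions"] as? [String: Any],
                   let fromStyle = actions[key] as? CAAction {
                    return unwrap(fromStyle)
                }
                style = current["style"] as? [AnyHashable: Any]
            }
            // 4. Finally, the class-level default (nil on recent iOS versions).
            return type(of: layer).defaultAction(forKey: key)
        }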

    That covers the theory. Let's see how it all works in practice.

    There are events, they have keys, and actions run in response to these events. In principle, two kinds of events can be distinguished. The first is animation of stored properties. Say we set view.backgroundColor = .red; this can, in principle, be animated.



    What is a talk about patterns without a diagram? I drew a couple. UIView has some interface, the one we define in subclasses or the one it receives from the system along with events. UIView's job is to request the necessary action, update the internal store, and run the action it obtained. The order is very important: first request the action, only then update the store, and only then run the action.



    What happens when we update backgroundColor on a UIView? We know that in UIView, everything related to what is shown on the screen is essentially proxied to CALayer. The view caches what it receives just in case, but it forwards everything to CALayer, and CALayer handles all the further logic. What happens inside CALayer when it is asked to change the background color? Here things get a little more complicated.



    First of all, it requests the action. It is important that the action is requested first. This lets the action ask CALayer for its current values, including backgroundColor, at the moment the action is created; only then is the store updated, and when the action receives the run command, it can consult CALayer again and get the new values. So it has both the old and the new values, which allows it to create an animation if needed.
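
    A sketch of an action that relies on exactly this ordering: it captures the old value when it is created, before the store is updated, and reads the new value inside run, after the store is updated. The class name and the fixed duration are illustrative assumptions.

        import QuartzCore

        final class PropertyChangeAction: NSObject, CAAction {
            private let keyPath: String
            private let oldValue: Any?

            init(keyPath: String, layer: CALayer) {
                self.keyPath = keyPath
                self.oldValue = layer.value(forKeyPath: keyPath)  // captured before the change lands
                super.init()
            }

            func run(forKey event: String, object anObject: Any, arguments dict: [AnyHashable: Any]?) {
                guard let layer = anObject as? CALayer else { return }
                let animation = CABasicAnimation(keyPath: keyPath)
                animation.fromValue = oldValue
                animation.toValue = layer.value(forKeyPath: keyPath)  // the new value is already in the store
                animation.duration = 0.25
                layer.add(animation, forKey: event)
            }
        }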

    But UIView has one peculiarity: if we change backgroundColor inside an animation block, the change is animated, and if we do it outside an animation block, it is not.



    It's very simple, there is no magic here. It is enough to remember that UIView is the delegate of its CALayer and implements that delegate method. Everything is very simple.

    If this method is called inside an animation block, it returns an actual action. Outside an animation block it returns NSNull, which means nothing should be animated. That interrupts the natural flow in which CALayer would otherwise have handled the change in its usual way.
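
    A small demo of this observable behavior; the view and the colors are arbitrary.

        import UIKit

        func highlight(_ view: UIView, animated: Bool) {
            if animated {
                UIView.animate(withDuration: 0.3) {
                    view.backgroundColor = .red  // inside the block the delegate hands the layer a real action, so it animates
                }
            } else {
                view.backgroundColor = .red      // outside the block the delegate returns NSNull, applied instantly
            }
        }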

    But what if we want to add our own? UIView has its set of animatable properties. What if we want to make our own property animatable? Is all of this really private and closed off to us?



    Actually, no, it's very simple. UIView has a read-only class property you can consult, inheritedAnimationDuration. It is very simple: if we are inside an animation block, it can potentially be greater than zero; in all other cases it is zero.

    Why "potentially"? Because we can start an animation block with zero duration, and then everything happens without animation. It is exactly this property that lets us decide, inside the action when run is called, whether it is worth animating or not.
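
    A sketch of how an action can make that decision inside run, using UIView.inheritedAnimationDuration. The class name and the opacity key path are illustrative assumptions.

        import UIKit

        final class FadeAction: NSObject, CAAction {
            func run(forKey event: String, object anObject: Any, arguments dict: [AnyHashable: Any]?) {
                guard let layer = anObject as? CALayer else { return }
                let duration = UIView.inheritedAnimationDuration
                guard duration > 0 else { return }  // no animation block (or zero duration): do nothing

                let animation = CABasicAnimation(keyPath: "opacity")
                animation.fromValue = layer.presentation()?.opacity ?? layer.opacity
                animation.toValue = layer.opacity
                animation.duration = duration       // inherit the surrounding block's duration
                layer.add(animation, forKey: event)
            }
        }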



    What if we want to create our own property, not one of those that UIView already animates, like backgroundColor or opacity? It might seem that we would have to reimplement all this logic: the action request, the store update, the start of the action. But in fact it is all done for us. The setValue(_:forKey:) method already does all of this: it is enough to pass the desired value for the desired key, and it will request the necessary action, update the store, and run the action with that key; the action can then build the animation from the old and new values.

    Our only task is to return the correct action from the delegate method, so that it animates when needed and does nothing when it is not.
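
    A sketch of that setup, under the talk's claim that handing the layer the new value drives the request-update-run sequence. The property name "progress", the key, and the animation details are assumptions; a real loader would also need its layer to redraw when this key changes.

        import UIKit

        final class ProgressView: UIView {
            var progress: CGFloat {
                get { layer.value(forKey: "progress") as? CGFloat ?? 0 }
                set { layer.setValue(newValue, forKey: "progress") }  // the layer stores it and, per the talk, requests the action
            }

            // Our only real job: return the right action for our key.
            override func action(for layer: CALayer, forKey event: String) -> CAAction? {
                guard event == "progress" else { return super.action(for: layer, forKey: event) }
                guard UIView.inheritedAnimationDuration > 0 else { return NSNull() }  // outside an animation block: no animation

                // CAAnimation already conforms to CAAction, so an animation object works as an action.
                let animation = CABasicAnimation(keyPath: "progress")
                animation.fromValue = layer.value(forKey: "progress")  // old value: the store is not updated yet
                animation.duration = UIView.inheritedAnimationDuration
                return animation
            }
        }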

    The second type of event is when we animate non-stored properties. For example, we can issue the command "activate" or "deactivate" to a loader: start spinning or stop.

    Here is another scheme.



    We do essentially the same thing. The only difference is that we delegate the job of updating the store to the actions, and it is now the actions that handle both updating the store and the feedback. Thus we pull all the feedback and store-updating logic out of UIView and out of CALayer, so we no longer need to create their subclasses; it all moves into separate objects, CAActions, which fortunately are very simple to implement.





    Remember, when we dealt with stored properties, we called setValue(_:forKey:) and it did a lot of things for us. Here you have to do it yourself, but you can drop all the store-handling logic entirely: just request the necessary actions and run them. The actions will take care of the rest.
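
    A sketch of driving a non-stored event like "activate" through the same mechanism: the view only requests the action and runs it, while updating the store and producing feedback is the action's job. The names, keys, and the expectation that the actions are supplied via the delegate or the layer's actions/style dictionaries are assumptions.

        import UIKit

        final class LoaderView: UIView {
            var isActive: Bool {
                get { layer.value(forKey: "isActive") as? Bool ?? false }
                set {
                    let event = newValue ? "activate" : "deactivate"
                    // Request the action for this event from the usual lookup chain...
                    let action = layer.action(forKey: event)
                    // ...and run it; the action updates the layer's store ("isActive") and starts the animation.
                    action?.run(forKey: event, object: layer, arguments: nil)
                }
            }
        }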

    It all started with a loader. It looks something like this.

    Back when I did not know about microinteractions and knew nothing about the CAAction machinery, I drew a scheme like this and started implementing it. Everything was great, I wrote a lot of code, it was a huge class, hundreds of lines.



    Then I realized that the user may, for example, press the home button while using the app and then come back. The loader can leave the screen and come back. So we need to handle these events too.



    I kept developing this scheme further, and it turned into something like this.



    At that point I realized I had taken a wrong turn somewhere, something had gone wrong. The problem was that this scheme is correct, there are no errors in it. And it is hard to fix mistakes where there are none.

    And yet the code turned out terrible: very complex, hard to understand, and hard to maintain.



    When I learned what CAActions are and how they relate to microinteractions, I started to reason. The loader does not send any messages; it is not a UIControl subclass, not a table view notifying anyone about anything. It just spins and shows progress, nothing more is needed from it; there is no business logic in it.

    OK, the task gets simpler. We established that this UIView, with no business logic, only needs to receive a few events from the system, including appearing on the screen and disappearing from it; touches in this case are ignored.

    And the right-hand side, the feedback and the state changes, is the logic we need to move into actions.

    How did I approach it? I untangled the scheme. I realized that we have six states and only five events. The loader may be off screen, that is one state. Take, for example, the activating state, when we are switching from inactive to active. At that moment it is in a transitional state, and different messages may arrive.



    We limit the set of messages. There are only five of them. Two are from the system, onOrderIn and onOrderOut: the messages UIKit sends when the layer appears on the screen and when it disappears.

    Plus my own three, related to the business logic: activate, deactivate, and update progress.



    It looked like this. I was able to make the interface of the UIView subclass very thin: it contained only two properties, isActive and progress. These two properties translated into five events. All I had to do was write, for each state, a CAAction that handles each event.

    We take the Cartesian product of events and states: five events, six states, 30 CAActions that I needed to write. But it was not one big method a thousand lines long; it was 30 classes, and the vast majority of them were just NSNull. In fact, 15 of the classes were less than 15 lines long. They are very simple classes. In general, simplicity is the highest value in programming. Complex code is bad; simple code is simple and good.
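
    A sketch of that state-by-event table: each cell is a CAAction, and most cells are simply NSNull ("do nothing"). All names here are illustrative assumptions, not the talk's actual code.

        import QuartzCore

        enum LoaderState: Hashable {
            case offscreen, inactive, activating, active, deactivating, completing
        }

        enum LoaderEvent: Hashable {
            case orderIn      // kCAOnOrderIn: the layer appears on screen
            case orderOut     // kCAOnOrderOut: the layer disappears
            case activate, deactivate, updateProgress
        }

        // A placeholder for the small, single-purpose actions; each real one would
        // build and add its own animations inside run().
        final class SpinUpAction: NSObject, CAAction {
            func run(forKey event: String, object anObject: Any, arguments dict: [AnyHashable: Any]?) {
                // start the spinning animation here
            }
        }

        // 6 states x 5 events = 30 cells, but most are just NSNull.
        let loaderActions: [LoaderState: [LoaderEvent: CAAction]] = [
            .inactive:   [.activate: SpinUpAction(), .orderIn: NSNull(), .orderOut: NSNull()],
            .activating: [.orderOut: NSNull() /* , .deactivate: ..., and so on */],
            // ... the remaining states follow the same pattern
        ]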

    It turned out that I had transformed one big task into a set of simple ones, which proved extremely easy to implement.

    We have covered microinteractions. We saw that absolutely any interaction with an application can be described if we identify four entities: a trigger, business logic, feedback, and a change of state.

    By decomposing all interactions into these four pieces, we keep the logic of one from getting tangled up with the others, and we simplify things. So try analyzing your applications and tasks through the lens of these microinteractions. Remember that UIKit provides a huge infrastructure to do this conveniently and beautifully. Do not neglect it. There are some quite old methods that are rarely used but very important, and they will help you implement your components beautifully, simply, and quickly. Thank you for your attention.