A bit more about multitasking in microcontrollers

    In a previous article, I argued that the usual real-time duties of a microcontroller can be programmed by splitting them into several tasks that are independent (or nearly independent) of each other.


    A microcontroller with a core from the very widespread ARM Cortex-M family was chosen. Of the variants familiar to many, and not only to the author (M0, M3, M4, and M7), the M4 was picked simply because it was at hand.


    The two considerations that pushed us onto the slippery, shaky path of "reinventing the wheel," as some readers wittily put it, were actually simple. The first: we will be living with these "cortexes" for a long time to come. The second: instead of building something universal (and gaining fame and fortune), try something narrowly focused, hoping to win efficiency and simplicity. Anyone who occasionally works with their hands will recall that, as a rule, a screwdriver chosen for the job beats the one pulled from a shiny universal set.


    An assembler example was given to show that a switch costs no more than 80 clock cycles. At a 72 MHz clock that is a little over 1 microsecond, so a tick 50 microseconds in size is not that expensive: only about 2 percent of overhead. Therefore, as one of the author's favorite characters said, "it is advisable to suffer."
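    The arithmetic above can be rerun as a one-liner; the figures are the ones quoted in the text, and the function name is mine:

```c
/* Back-of-the-envelope switching overhead: cycles per context switch
   at a given clock, expressed as a share of one tick. */
static double switch_overhead_pct(double cycles, double clock_mhz,
                                  double tick_us)
{
    double switch_us = cycles / clock_mhz;   /* 80 / 72 ~= 1.1 us */
    return 100.0 * switch_us / tick_us;      /* share of the tick */
}

/* switch_overhead_pct(80, 72, 50) ~= 2.2 (percent) */
```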


    So, we have N tasks, each guaranteed to run for a slice of time (a tick T) and guaranteed to get its next slice no later than (N-1)T later, plus a delay not exceeding D. This annoying delay, fortunately, is bounded: its maximum equals the summed duration of all enabled interrupts. In other words, the unluckiest task is the one facing the most potential interrupts in the window before its next tick; longer than that, a task cannot be delayed. It will inevitably receive its time slot no later than (N-1)T + D microseconds from now. In my opinion, this is what is called hard real time.
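    The bound can be written down as a trivial function; the numbers in the comment are illustrative, not from the article:

```c
/* Worst-case wait before a task's next slot: every other task runs
   one tick each, plus the summed duration D of all interrupts that
   may fire inside that window. */
static unsigned worst_case_wait_us(unsigned n_tasks, unsigned tick_us,
                                   unsigned irq_total_us)
{
    return (n_tasks - 1u) * tick_us + irq_total_us;
}

/* e.g. 4 tasks, a 50 us tick, 30 us of interrupt handlers:
   worst_case_wait_us(4, 50, 30) == 180 us */
```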


    Tasks must do their work and report on it. Report to whom? Apparently to someone in charge, and bosses are, as a rule, far fewer than subordinates (truth be told, the author has also met exceptions: four bosses for three all-thumbs workers, only one of whom knew and respected the word "adequacy").


    And if "there are many of you, but only one of me," that means a queue. Many will push and try to slip ahead; someone will have to wait, then be late, then explain. It all looks terrible, yet it bears a beautiful name: contention for a resource. Queues are a solution known to everyone. I have known many who need no bread, just let them stand in a queue.


    But ours cannot wait! The tasks, that is. They are from hard real time. Suppose two tasks take readings once a second, and a third must measure something every 10 milliseconds, stack it up, and report upstairs. And it is told: "Hold on, we haven't finished with the bosses yet."


    Apparently, we have to turn, to put it mildly, to not-quite-real time (soft real time).


    Let us have a special task that knows how to wait and loves doing it. The resource it will serve is the communication channel. As you know, you can't shove everything into it at once.
    But you can figure out in advance how fast the channel must be so that nothing is lost. For that you need to know the rate at which all our graphomaniacs, sorry, tasks, produce output. Obviously, you must also calculate the size of the buffer or buffers from which all the packets will be sent up (or off to the right).


    If there is more than one channel, the essence does not change: each channel simply gets its own task, whose purpose is to wait (and, of course, to hope and believe).


    A few words about the communication channel to the operator, or more simply, to the human. This channel is bidirectional, but the outgoing direction is the interesting one. Let me say straight away that there is one circumstance that cannot be ruled out even with the strongest desire: channel overload. During testing we must drive the channel into overload, and we must have a mechanism that shows us when it begins. I agree, lying is bad, but a little may be left unsaid. Gerasim over there abused that altogether.


    Therefore we assume up front that a message from a task to the operator may be lost. And so that the human knows about it, we will number the messages. That will show where and how many times our operator was left empty-handed. In the end you can always fix the situation in the code, or in the calculations, or even in the circuitry. For now, the simple way seems easier. Of course, this would not do for military applications; honestly, losing a message looks forgivable only during debugging.
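    A minimal sketch of loss detection by numbering, assuming a monotonically growing sequence number on every message; the struct and names are illustrative, not from the article:

```c
/* The receiving side remembers the last sequence number seen and
   counts the gap to the next one: gap = messages dropped. */
typedef struct {
    unsigned last_seq;   /* last sequence number observed     */
    unsigned lost;       /* total messages detected missing   */
    int      started;    /* 0 until the first message arrives */
} loss_counter;

static void loss_observe(loss_counter *c, unsigned seq)
{
    if (c->started)
        c->lost += seq - c->last_seq - 1u;  /* silent gap, if any */
    c->started  = 1;
    c->last_seq = seq;
}
```

    A consumer that sees messages 1, 2, then 5 will thus know that exactly two messages went missing.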


    For example, take a duplex serial interface without acknowledgment at 115200 baud, say RS-422 in the "economy" configuration: two wires out, two wires back. Its capacity is roughly 10,000 bytes per second. Take the average message to a human as 50 bytes. That gives 200 messages per second, or one message every 5 milliseconds. If three tasks want to say something, let each of them do so every 15 milliseconds. On average, of course. If not on average, then serious statistical calculations or a full-scale experiment are required. We choose the latter: after all, we have learned to detect lost messages and will see everything on the terminal emulator's screen.
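    The channel budget above, restated as arithmetic one can rerun (function names are mine; the 10,000 bytes/s figure is the article's rounding of 115200 baud with start/stop bits):

```c
/* How many average-sized messages the link carries per second. */
static unsigned msgs_per_second(unsigned bytes_per_s, unsigned msg_bytes)
{
    return bytes_per_s / msg_bytes;            /* 10000 / 50 = 200 */
}

/* Average sending period per task so that n_tasks together do not
   exceed the link: each may send once per this many milliseconds. */
static unsigned per_task_period_ms(unsigned bytes_per_s,
                                   unsigned msg_bytes, unsigned n_tasks)
{
    return 1000u * n_tasks * msg_bytes / bytes_per_s;   /* = 15 ms */
}
```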
    So, let the three tasks create their individual messages. Let the messages differ in the importance or urgency of their content, and let the tasks place them in the corresponding buffer. We allocate three ring buffers for the three levels of urgency, as shown in Figure 1.



    The fourth task picks a message from these buffers according to our approved plan and tries to deliver it. If sending is not yet possible, the fourth task estimates how long it may sleep and does so. By the time it wakes, the ring buffer already has the space needed to send.


    The buffers of varying urgency do not, of course, store the messages themselves, but their addresses (references). And the tasks themselves do not have to wait at all. Wonderful? No, not at all: it doesn't work, and here is why. Each of these three ring buffers is a shared resource. Imagine task 1 is about to put an address into the middle buffer. It reads a word and checks that the slot is empty, that is, the value is zero (at this moment it is preempted by task 2, which wants to do exactly the same thing, and succeeds); then task 1, resuming, stores its word there, overwriting everything task 2 achieved. I see a colleague asking for the floor, and I think I know what he will say.
    - It's all very simple: disable interrupts for the duration of the check, and nothing bad will happen; it isn't for long.
    - True, not for long, but how many times? How much time will we steal from a task, and from which one? I forgot to warn you: we never disable interrupts; our hard real-time sect forbids us to.
    - Then, without disabling interrupts, you could ask our task switcher to store the message address for you. It can do that atomically.
    - It can, but then I will want to ask it for one more thing, and then another. And why did we fight our way to 72 degrees only to water everything down? Sorry, I meant 72 cycles per context switch.
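    The lost update described above can be replayed deterministically in plain C (a host-side sketch: the hand-written interleaving stands in for the preemption, and all names are illustrative):

```c
/* Replay of the check-then-store race: task 1 checks a slot of the
   shared ring, is "preempted", task 2 claims the same slot, then
   task 1 resumes and overwrites it.  Returns 1 if task 2's message
   was lost, which, with this interleaving, it always is. */
static int lost_update_demo(void)
{
    const char *ring[4] = {0};          /* shared ring of addresses  */
    const char *msg1 = "from task 1";
    const char *msg2 = "from task 2";

    int saw_free = (ring[0] == 0);      /* task 1: slot looks empty  */
    if (ring[0] == 0)                   /* -- preemption: task 2     */
        ring[0] = msg2;                 /*    claims the same slot   */
    if (saw_free)                       /* task 1 resumes, clobbers  */
        ring[0] = msg1;

    return ring[0] == msg1;             /* msg2 vanished silently    */
}
```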
    Let's try something simpler, as in Figure 2.



    In this scheme each task has its own buffer, or its own set of buffers if you want different degrees of urgency, pomp, and importance. Personally, as a simple operator, I weigh everyone's messages the same.


    Such a scheme removes the fight for the resource. Now we have a perfectly workable option. Except I don't like it. What if the tasks on the left of the picture have nothing to send? Then it would be wiser for the task on the right to ask to be woken when a reason appears, rather than waking up just to set the alarm again. For the tasks on the left this is easy to arrange; besides, a function that wakes a friend was mentioned in the previous post.
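    A minimal sketch of such a per-task queue: a single-producer, single-consumer ring of message addresses. With exactly one writer (a task on the left) and one reader (the task on the right), the head index is advanced only by the producer and the tail only by the consumer, so no locks and no disabled interrupts are needed (on real hardware you would still add memory barriers). All names here are mine, not the article's:

```c
#include <stddef.h>

#define RING_SIZE 8u                 /* must be a power of two */

typedef struct {
    const void *slot[RING_SIZE];     /* message addresses, not bodies */
    volatile unsigned head;          /* advanced by the producer only */
    volatile unsigned tail;          /* advanced by the consumer only */
} msg_ring;

/* Producer side: returns 0 (message dropped) when the ring is full. */
static int ring_put(msg_ring *r, const void *msg)
{
    if (r->head - r->tail == RING_SIZE)
        return 0;
    r->slot[r->head % RING_SIZE] = msg;
    r->head = r->head + 1u;          /* publish after the store */
    return 1;
}

/* Consumer side: returns NULL when the ring is empty. */
static const void *ring_get(msg_ring *r)
{
    if (r->head == r->tail)
        return NULL;
    const void *msg = r->slot[r->tail % RING_SIZE];
    r->tail = r->tail + 1u;
    return msg;
}
```

    A dropped `ring_put` is exactly the numbered-message loss discussed earlier: the producer notices it, and the operator sees the gap.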


    I foresee a rationalization proposal: "Let the serial-port (UART) interrupt itself do what task 4 does now; that would save something." You can do that, but I don't think it's a good idea. Let me try to explain. The tasks on the left can indeed activate the UART interrupt handler themselves, and it will start working and will not stop until it has done everything. The interrupt handler would now have to do everything task 4 did before. The interrupt-processing time would swell, and not a single task could run until the next "spool" was finished. And what would we say to our comrades from the hard real-time circle? We were taught that the response to any external interrupt should be as short as possible. That is simply good manners. Or, in other words: do good; the bad will manage without you.


    Figure 3 explains what is placed in the process and what is located in the interrupt calls.



    Now let us turn to the mirror situation, so to speak: information arriving from outside. Let it be an SPI channel with a few gondoliers, their gondolas, and a small amateur string orchestra. No, it is too early to think about vacation. Keep only the SPI interface and a few chips: say, an atmospheric pressure sensor, an accelerometer, and non-volatile memory.


    I must say right away: it is a silly example. Not because of the gondolieri with their eternal "that'll be extra, signore." No, it is simply silly to mix input data of such different importance on one interface. If you need to know the acceleration, it is surely in order to figure out quickly when to ease off the gas pedal, or move the flaps, or, finally, close your eyes. That information is needed often. Pressure, however, changes slowly: you would have to fall a good three meters before life in the lower ranks gets warmer.


    As for the non-volatile memory, who put it on this SPI in the first place? Is there a second SPI? Not expected either? Nothing to be done, something must be arranged. Reverse the arrows in Figure 2 and start thinking.


    Task 4 now serves the SPI and wakes only on its signals. Its link with task 1, which wants to write something to the non-volatile memory, points outward and runs through a queue. A mechanism to watch for ring-buffer overflow is also needed. Task 4 must produce the acceleration and pressure values without the participation of the two consuming tasks; it just has to keep spinning and keep up. Now we can sketch an explanatory picture and write an explanatory note. Figure 4 shows these actions schematically (as a block diagram).



    Underflow check: these actions let you find out whether the acceleration value has had time to change before the consuming task reads it again. The check is drawn as a separate action in Figure 4 only to draw attention to it. In reality this step happens together with reading the accelerometer value, following the scheme shown in Figure 5.
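    A sketch of that idea, under my reading of it: the sample is accompanied by a freshness mark, so the consumer can tell whether the value changed since its last read. (The article folds the value and the mark into one storage word; the sketch keeps them separate for clarity, and all names are mine.)

```c
typedef struct {
    volatile int value;   /* latest sample                     */
    volatile int fresh;   /* nonzero: written since last read  */
} sample_slot;

/* Task 4 stores a new sample and marks it fresh. */
static void slot_write(sample_slot *s, int v)
{
    s->value = v;
    s->fresh = 1;
}

/* Consumer: returns 1 and the sample if it was fresh, or 0 when the
   same value is being read a second time, i.e. underflow detected. */
static int slot_read(sample_slot *s, int *out)
{
    *out = s->value;
    if (!s->fresh)
        return 0;
    s->fresh = 0;
    return 1;
}
```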



    Note that there is a shared resource here, since the storage location of the result also serves as an indicator of action (a semaphore). Races are possible here, to speak the language of circuit design, but for us that is not a flaw. After all, only in real life is slipping through the closing doors of a vehicle considered luck; here we will confidently count it as a delay.


    Memory access happens in portions, to bound the time of each such step. This way we ensure uniform reading of the rapidly changing acceleration values, and in the gaps we manage to take care of everything else.


    Well, now it remains to find some suitable hardware and experiment properly. That, I think, will be the next story.

