Overview of one Russian RTOS, part 7. Means of exchanging data between tasks
In addition to interacting through mutual locking, tasks must also interact at the data level. A distinguishing feature of the MAX RTOS is the ability to exchange data not only within one controller,
Fig. 1. An example of the interaction of tasks within the same controller
but also between controllers, completely hiding the transport layer.
Fig. 2. An example of the interaction of tasks between controllers.
At the same time, different controllers are equivalent to different processes, since their memory is completely isolated. In the OS version published on our website, the physical channel between controllers can be a wired SPI or UART interface, or a wireless link via RF24 radio modules.
The SPI and UART options are not recommended, since in the current implementation no more than two controllers can be connected through them.
I will tell you more about this below; the other chapters of the "Knowledge Book" can be found here:
Part 1. General information
Part 2. The RTOS MAX kernel
Part 3. Structure of the simplest program
Part 4. Useful theory
Part 5. First application
Part 6. Thread synchronization tools
Part 7. Means of exchanging data between tasks (this article)
Part 8. Working with interrupts
Means of exchanging data within one controller (message queue)
The classic approach to RTOS operation is this: tasks exchange data with each other through message queues. At least, that is what all the academic textbooks prescribe. I count myself among the practical programmers, so I admit that it is sometimes easier to get by with a direct exchange mechanism built for a specific case, such as plain ring buffers not tied to the system. Nevertheless, there are cases where message queues are the best choice, if only because, unlike ad-hoc mechanisms, they can block tasks when the polled buffer is full or empty.
Consider a textbook example. There is a serial port which, to simplify the circuitry, has no flow-control lines. Bytes can arrive on the wire one right after another. At the same time, many (though not all) common controllers have little or no hardware receive queue: if the program does not fetch a byte in time, it is overwritten by the next portion coming from the receive shift register.
On the other hand, suppose the task that processes the data takes some time (for example, to move a working tool). This is quite normal: G-code in CNC machines arrives with some lead. While the tool moves, the next line is already traveling over the wire.
To keep the controller's receive register from overflowing, and to pick the bytes up in time while the main operation runs, it is necessary and sufficient to receive them in the interrupt handler. The simplest option is to hand the "raw" bytes over to the main task:
Fig. 3.
But in this case there are too many enqueue and dequeue operations; the overhead is too high. It is better to queue not the raw bytes but the results of their preprocessing (anything from assembled lines up to the results of interpreting those lines, in our G-code example). Preprocessing in the interrupt handler, however, is unacceptable: while it runs, some (and, depending on the priority settings, sometimes all) other interrupts are blocked, so data for other subsystems is handled with a delay, which can break the product's operation.
This postulate is worth repeating several times. I remember a forum question: "I took a typical unpacker for the microphone PDM format, but it does not work correctly." The attached example performed the PDM filtering in the interrupt context. Naturally, when the author moved the PDM-to-PCM conversion out of the interrupt (as he was immediately advised), all the problems went away by themselves. So, once again: preprocessing in an interrupt is unacceptable! Do not microwave eggs, and do not perform unnecessary actions in the interrupt handler!
The recommended scheme in all textbooks, in the presence of preprocessing, is the following.
Fig. 4.
The preprocessing task, which has high priority, is blocked almost all the time. The interrupt handler receives a byte from the hardware, wakes the preprocessor by passing it that byte, and exits. From that moment, all interrupts are enabled again.
The high-priority preprocessor wakes up, accumulates data in an internal buffer, and falls back asleep, again letting the normal-priority tasks run. When a whole line has accumulated (a line-feed character has arrived), it interprets the line and places the result in the message queue. This is the option all academic publications recommend, so I was obliged to convey the classic idea to the readers here. Though I will add right away that, as a practitioner rather than a theorist, I see a weak point in this method: we win on rare calls to the queue but lose on the context switches into the high-priority task. In short, the recommendations have been given and the shortcomings described; how to work in real life, everyone must decide for themselves, choosing their own balance of performance and simplicity. Some recommendations with real measurements will appear in the next article, on interrupts.
The MessageQueue class implements the message queue. Since a message queue should work efficiently with arbitrary data types, it is designed as a template (the element type is passed as its argument).
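In the published text the declaration was printed with the template parameter lost; judging by the member prototypes below (Push takes a const T &), it is presumably:

```cpp
template <typename T>
class MessageQueue
{
    ...
};
```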
The constructor has the form:
MessageQueue (size_t max_size);
The max_size parameter determines the maximum queue size. If you try to enqueue an element when the queue is filled to capacity, the task is blocked until free space appears (that is, until some task takes one of the elements already in the queue).
Since so much has already been said, we cannot do without an example of initializing a queue. Here is a test fragment in which the queue element is of type short, and the queue size will not exceed 5 elements:
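This fragment comes from the MessageQueueTestApp example in the delivery package; the template argument was lost in the original listing, and the &lt;short&gt; here is restored by me per the surrounding text:

```cpp
void MessageQueueTestApp::Initialize()
{
    mQueue = new MessageQueue<short>(5);
    Task::Add(new MessageSenderTask("send"), Task::PriorityNormal, 0x50);
    Task::Add(new MessageReceiverTask("receive"), Task::PriorityNormal, 0x50);
}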
You can place a message in the queue using the function:
Result Push (const T & message, uint32_t timeout_ms = INFINITE_TIMEOUT);
The timeout_ms parameter matters when the queue is full: in that case the system waits for free space to appear in it, and this parameter simply says how long it is allowed to wait.
If necessary, the message can be put not at the end of the queue, but at its beginning. To do this, use the function:
Result PushFront (const T & message, uint32_t timeout_ms = INFINITE_TIMEOUT);
To remove the next element from the head of the queue, the function is used:
Result Pop (T & message, uint32_t timeout_ms = INFINITE_TIMEOUT);
Here, accordingly, the timeout parameter sets the wait time for the case when the queue is empty: for up to the given time, the system will wait for messages placed in the queue by other tasks to appear.
You can also get the value of an element from the head of the queue without removing it:
Result Peek (T & message, uint32_t timeout_ms = INFINITE_TIMEOUT);
Finally, there are functions to find out the number of messages queued:
size_t Count ()
and the maximum possible size of the queue:
size_t GetMaxSize ()
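To make the blocking semantics of Push and Pop concrete, here is a minimal standard-C++ analog. This is my own sketch, not MAX RTOS code: timeouts and Result codes are omitted for brevity.

```cpp
#include <cassert>
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <queue>

// Minimal analog of MessageQueue<T>: Push blocks while the queue is full,
// Pop blocks while it is empty.
template <typename T>
class BlockingQueue
{
public:
    explicit BlockingQueue(std::size_t max_size) : _max_size(max_size) {}

    void Push(const T& message)
    {
        std::unique_lock<std::mutex> lock(_mutex);
        _not_full.wait(lock, [this] { return _items.size() < _max_size; });
        _items.push(message);
        _not_empty.notify_one();
    }

    void Pop(T& message)
    {
        std::unique_lock<std::mutex> lock(_mutex);
        _not_empty.wait(lock, [this] { return !_items.empty(); });
        message = _items.front();
        _items.pop();
        _not_full.notify_one();
    }

    std::size_t Count()
    {
        std::lock_guard<std::mutex> lock(_mutex);
        return _items.size();
    }

private:
    std::size_t _max_size;
    std::mutex _mutex;
    std::queue<T> _items;
    std::condition_variable _not_full;
    std::condition_variable _not_empty;
};
```

On the RTOS, this is exactly the behavior that lets a producer task safely outrun a slow consumer: the producer simply blocks on Push until the consumer catches up.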
Means of communication between different controllers
Formally speaking, the means of exchanging data between controllers are drivers. But ideologically they belong with the ordinary means of data exchange, and they are one of the main distinguishing features of the MAX RTOS, so we will consider them in the part of the manual devoted to the kernel.
Let me remind you that in the published version of the OS, physical exchange can be carried out through wired UART or SPI interfaces, or through the RF24 radio module (also connected over SPI). Recall also that to enable data exchange between controllers, you should add the line
#define MAKS_USE_SHARED_MEM 1
to the MaksConfig.h file, and then select the type of physical channel by defining one of the constants:
MAKS_SHARED_MEM_SPI, MAKS_SHARED_MEM_UART, or MAKS_SHARED_MEM_RADIO.
In the current implementation the SPI and UART mechanisms connect only two devices, which is why the radio option is recommended. Now, after such a protracted preamble, we begin the study of the SharedMemory class.
A class object can be initialized using the Initialize() function. The word "can" is used deliberately: in general, for the radio option, initialization is not required.
An SmInitInfo data structure is passed to this function; its fields are easiest to see in use. Consider examples of filling this structure and calling the initialization function.
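One of the variants from the examples shipped with the OS (SPI transport; the fields are as in the delivery package):

```cpp
SmInitInfo info;
info.TransferCore = &SpiTransferCore::GetInstance();
info.NotifyMessageReceived = true;
info.AutoSendContextsActivity = true;
info.SendActivityDelayMs = 100;
info.CheckActivityDelayMs = 200;
SpiTransferCore::GetInstance().Initialize();
SharedMemory::GetInstance().Initialize(info);
```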
Another option:
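The same initialization, but without requesting notification of received messages (NotifyMessageReceived is simply not set), again as shipped with the OS:

```cpp
SmInitInfo info;
info.TransferCore = &SpiTransferCore::GetInstance();
info.AutoSendContextsActivity = true;
info.SendActivityDelayMs = 100;
info.CheckActivityDelayMs = 200;
SpiTransferCore::GetInstance().Initialize();
SharedMemory::GetInstance().Initialize(info);
```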
This class provides two mechanisms for task interaction: messages and shared memory (with the possibility of locking). Since there is always exactly one shared memory object, the developers of the operating system created the file MaksSharedMemoryExtensions.cpp, which wraps the long member-function names in global functions.
Here is a snippet of this file:
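The context-function wrappers from that file look like this (in the second overload, the reference qualifiers, lost in the original snippet, are restored by me to match the prototype of the second GetContext variant described later in this section):

```cpp
Result GetContext(uint32_t context_id, void* data)
{
    return SharedMemory::GetInstance().GetContext(context_id, data);
}

Result GetContext(uint32_t context_id, void*& data, size_t& data_length)
{
    return SharedMemory::GetInstance().GetContext(context_id, data, data_length);
}

Result SetContext(uint32_t context_id, const void* data, size_t data_length)
{
    return SharedMemory::GetInstance().SetContext(context_id, data, data_length);
}
```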
Since all the applications included in the delivery package use global function names, I will also use this naming option in the examples for this document.
Messages
Let's start with the messages. To send them, the SendMessage() function is used. The function is quite involved, so let's consider it in detail:
Result SendMessage (uint32_t message_id, const void * data, size_t data_length)
Arguments:
message_id - message identifier;
data - a pointer to the message data;
data_length - the length of the message data.
Usage example:
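From the two-board airplane demo in the delivery package: a one-byte broadcast "expose" request, and a message carrying the airplane state (the elisions are as in the original listing):

```cpp
const uint32_t APP5_EXPOSE_MESSAGE_ID = 503;
...
if (broadcast)
{
    char t = 0;
    SendMessage(APP5_EXPOSE_MESSAGE_ID, &t, sizeof(t));
}

const uint32_t APP5_AIRPLANE_MESSAGE_ID = 504;
...
bool AirplaneTask::SendAirplane()
{
    Message msg(_x, _y, _deg, _visibility);
    return SendMessage(APP5_AIRPLANE_MESSAGE_ID, &msg, sizeof(msg)) == ResultOk;
}
```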
The result of the function reflects only the sending status: whether the message went out or not. Whether any recipient actually received it, the function does not say; for confirmation, the recipient must send a reply message.
Accordingly, on the recipient side, the following function is used to wait for a message:
Result WaitUntilMessageReceived (SmMessageReceiveArgs & args, uint32_t timeout_ms = INFINITE_TIMEOUT)
Arguments:
args - a reference to an object with message parameters;
timeout_ms - timeout in milliseconds. If the timeout value is INFINITE_TIMEOUT, then the task will be blocked without the possibility of unlocking by timeout (endless wait).
The message parameters form a whole class of their own. Let's briefly consider its public members:
uint32_t GetMessageId ()
Returns the identifier of the received message.
size_t GetDataLength ()
Returns the data size of the received message in bytes.
void CopyDataTo (void * target)
Copies the message data to the specified buffer. The memory for the buffer must be allocated in advance, and its size must be no less than the size of the message data (the result of calling GetDataLength).
Thus, the receiving-side counterpart of the sending example above looks like this:
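The receiving task from the same demo, abridged exactly as in the delivery package (the BOARD_LEFT macro selects which airplane this particular board draws):

```cpp
void MessageReceiveTask::Execute()
{
    Message msg;
    while (true)
    {
        SmMessageReceiveArgs args;
        Result res = WaitUntilMessageReceived(args);
        if (res == ResultOk)
        {
            uint32_t mid = args.GetMessageId();
            switch (mid)
            {
            ...
            case APP5_EXPOSE_MESSAGE_ID:
#ifdef BOARD_LEFT
                _gfx->ExposeAirplaneRed();
#else
                _gfx->ExposeAirplaneBlue();
#endif
                break;
            case APP5_AIRPLANE_MESSAGE_ID:
            {
                args.CopyDataTo(&msg);
#ifdef BOARD_LEFT
                _gfx->UpdateAirplaneRed(msg.X, msg.Y, msg.Deg);
                _gfx->SetAirplaneRedVisibility(msg.Visibility);
#else
                _gfx->UpdateAirplaneBlue(msg.X, msg.Y, msg.Deg);
                _gfx->SetAirplaneBlueVisibility(msg.Visibility);
#endif
            }
                break;
            ...
```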
Synchronized Context
A context is an area of memory that is kept synchronized between all controllers. The goal pursued by the synchronization can be anything. The simplest case: one device informs the others about completed stages of work, for hot standby, so that if it fails, the remaining devices know where to pick up. For devices working toward a goal together, exchange through a context may be more convenient than through messages. Messages have to be generated, transmitted, received, and decoded, whereas context memory can be used like ordinary memory; it is only important to remember to synchronize it, so that the memory of one device is duplicated to the others.
The number of synchronized contexts in the system can be arbitrary, so there is no need to squeeze everything into one; for different needs you can create different synchronized contexts. The size of the memory, the data layout in it, and the other parameters of a synchronized context are the application programmer's concern (naturally, the larger the synchronized memory, the slower synchronization runs, which is another reason to prefer several small contexts for different needs).
Moreover, even the moments for synchronization sessions are chosen by the application programmer. The RTOS MAX provides a supporting API, but its functions must be called from the application. The reason is that the exchange process is relatively slow: if everything were left to the operating system, delays could occur exactly when the processor core should be serving other tasks as much as possible. Synchronize contexts too often and resources are wasted; too rarely and the data may go stale before the controllers synchronize. Add to this the question of whose data is more important (say, with four different subscribers), and it becomes quite clear that only the application programmer can initiate synchronization: only he knows when it is best done, and which subscriber should distribute its data to the rest. The OS, for its part, makes the transport transparent for the application.
A context has its own numerical identifier (assigned by the application programmer). An application can have one synchronized context or several; it is only important that the identifiers be consistent across the interacting controllers.
The simplest examples of synchronized data: cleaning robots periodically mark the territory they have already cleaned on a shared map, so as to know which areas remain, and announce where each is about to go next, so as not to get in each other's way. Power wrenches working on one product mark each nut as it is tightened, so that if one tool fails, another can finish its part. A board with a touch screen registers a tap and notes that fact for the other boards. There are plenty of other cases where memory needs to be shared, provided it is enough to do so a few times per second (at most a few tens of times per second).
Thus, the context can be represented as shown below:
Fig. 5. Context
And its purpose can be represented by the following figure:
Fig. 6. The essence of context synchronization
Now we consider the functions that are used to synchronize the context:
Result GetContext (uint32_t context_id, void * data);
Copies the context data to the specified memory area; the memory must be allocated in advance. Suitable when the data length is known beforehand (for example, a structure with simple fields).
Arguments:
context_id - context identifier;
data - a pointer to the memory area for storing the context.
The call returns the context data with the specified identifier as obtained during the last synchronization. The function is therefore fast, since the data is taken from the local copy of the context. There is a second variant of this function:
Result GetContext (uint32_t context_id, void * & data, size_t & data_length);
This variant itself allocates the memory and returns the data together with the context length. Suitable when the data length is not known in advance (for example, an array of arbitrary length).
Arguments:
context_id - context identifier;
data - a reference to a pointer that will receive the memory area with the context;
data_length - receives the size in bytes of that memory area.
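A hedged sketch of calling the allocating variant; MAP_CONTEXT_ID and the local names are my own, and the text does not say who frees the returned memory, so that is only noted in a comment:

```cpp
void* data = nullptr;
size_t data_length = 0;
// The function allocates the buffer itself and reports its length.
if (GetContext(MAP_CONTEXT_ID, data, data_length) == ResultOk)
{
    // data now points to a copy of the context, data_length bytes long.
    // (Ownership of this buffer is not specified in the text.)
}
```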
In principle, you can create a task that waits for a context to be updated and then copies its new data into application memory. The following function suits this:
Result WaitUntilContextUpdated (uint32_t & context_id, uint32_t timeout_ms = INFINITE_TIMEOUT)
Arguments:
context_id - context identifier;
timeout_ms - timeout in milliseconds. If the timeout value is INFINITE_TIMEOUT, then the task will be blocked without the possibility of unlocking by timeout (endless wait).
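A sketch of such an updater task's loop. I read the reference parameter as an output that reports which context was updated; if it is actually an input filter, it should be set before the call instead. The names here are hypothetical:

```cpp
while (true)
{
    uint32_t updated_id = 0;
    if (WaitUntilContextUpdated(updated_id) == ResultOk &&
        updated_id == MAP_CONTEXT_ID)
    {
        // Refresh the application's local copy of the context.
        GetContext(MAP_CONTEXT_ID, &local_map);
    }
}
```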
Finally, consider the case when a task wants to update a context across the entire system (consisting of several controllers). The context must first be captured; the following function is used for this:
Result LockContext (uint32_t context_id)
Argument:
context_id - context identifier.
The function requires an exchange between controllers, so it can take a long time.
If the context was captured successfully (of several simultaneous capture attempts, only one wins; the others receive an error code), the context can be written using the following function:
Result SetContext (uint32_t context_id, const void * data, size_t data_length)
Arguments:
context_id - context identifier;
data - a pointer to the data to be written to the context;
data_length - size in bytes of the memory area to store the context.
Finally, to carry out the context synchronization itself, call the function:
Result UnlockContext (uint32_t context_id)
Argument:
context_id - context identifier.
It is after this call that the contexts are synchronized throughout the system. The function requires an exchange between controllers, so it can take a long time.
Consider a real-world example of working with synchronized contexts that comes with the OS. The code is contained in the file ...\maksRTOS\Source\Applications\CounterApp.cpp
In this example, several devices increment a counter once a second (if you run this application on boards with a screen, the counter value is displayed). If one of the controllers is turned off and then back on, it receives the current contents of the counter and continues working together with the others. Thus, the system keeps counting as long as at least one controller in it is alive.
The application programmer who wrote this example chose the context identifier on the principle of "why not?"
The memory to be synchronized looks simple: essentially, it is just the counter value.
The main actions that interest us occur in the function:
void CounterTask::Execute()
First, the controller tries to find out whether it is the first in the system. To do this, it tries to get the context.
If the controller is not the first, the context will be received, and with it the counter value that already exists in the system. If the controller is the first, the context will not be received; in that case it must be created (zeroing the counter along the way).
Now the context definitely exists: either it was found in the system we just joined, or we created it. We enter an infinite loop. There we wait one second and then try to win the competition for the right to distribute our counter to the whole system. If we succeed, we distribute it.
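The original listings are not reproduced here, but based on the API described above, the whole task can be sketched roughly as follows. This is my reconstruction, not the shipped code: the structure, the identifiers, the context id, and the delay call are all assumptions.

```cpp
// Hypothetical sketch of CounterTask::Execute(); names and the
// context identifier are assumptions, not the delivery-package code.
const uint32_t COUNTER_CONTEXT_ID = 42;   // chosen on the "why not?" principle

struct CounterContext
{
    uint32_t Counter;
};

void CounterTask::Execute()
{
    CounterContext ctx;
    // Are we the first in the system? Try to fetch the existing context.
    if (GetContext(COUNTER_CONTEXT_ID, &ctx) != ResultOk)
    {
        // We are first: create the context with a zeroed counter.
        ctx.Counter = 0;
        SetContext(COUNTER_CONTEXT_ID, &ctx, sizeof(ctx));
    }
    while (true)
    {
        Delay(1000);    // wait one second (the exact delay call is assumed)
        ++ctx.Counter;
        // Compete for the right to distribute our counter to the system.
        if (LockContext(COUNTER_CONTEXT_ID) == ResultOk)
        {
            SetContext(COUNTER_CONTEXT_ID, &ctx, sizeof(ctx));
            UnlockContext(COUNTER_CONTEXT_ID);  // triggers synchronization
        }
    }
}
```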
Note that we only distribute the context and never receive it inside the loop. The point is that in this example only newly connected devices receive the context; once it is received, a device works autonomously. Of course, it would be better to stay in sync constantly, but describing such a system would take much more paper, and you cannot argue with psychology: most readers would simply yawn and move on to the next section. So we leave more complex examples to the most inquisitive readers as independent work. The principle of shared memory is, I hope, now more or less clear.
Fig. 1. An example of the interaction of tasks within the same controller
but also between controllers, completely hiding the transport layer.
Fig. 2. An example of the interaction of tasks between controllers.
At the same time, different controllers are equivalent to different processes, since their memory is completely isolated. In the OS version published on our website , the physical channel between the controllers can be wired SPI or UART interfaces, as well as a wireless interface via RF24 radio modules.
It is not recommended to use the SPI and UART options, since in the current implementation no more than two controllers can be connected through them.
Further, I’ll tell you more about this, and other chapters of the “Knowledge Book” can be found here:
Part 1. General information
Part 2. Kernel RTOS MAX
Part 3. Structure of the simplest program
Part 4. Useful theory
Part 5. First application
Part 6. Synchronization tools streams
Part 7. Means of exchanging data between tasks (this article)
Part 8. Working with interrupts
Means for exchanging data within one controller (message queue)
The classic approach to the operation of RTOS is as follows: tasks exchange data with each other using message queues. At least that is what all academic textbooks require. I belong to practical programmers, therefore, I admit that sometimes it’s easier to get by with any direct means of exchange made for a specific case. For example, banal ring buffers that are not tied to the system. But nevertheless, there are cases where message queues are the most optimal objects (if only because, unlike non-system things, they can block tasks when the buffer being polled is full or empty).
Consider a textbook example. There is a serial port. Of course, by circuitry, to simplify the system, it is made without flow control lines. Data on wires can go one after another. At the same time, the equipment of many (although not all) standard controllers does not imply a large hardware queue. If you don’t have time to collect the data, it will be erased with new portions coming from the receiving shift register.
On the other hand, let's say a task that processes data takes some time (for example, to move a working tool). This is quite normal - the G-code in CNC machines comes precisely with some lead. The tool moves, and the next line at the same time runs through the wires.
So that the buffer register of the controller does not overflow, and the bytes in the program have time to be received during the main operation, it is necessary and sufficient to do their reception in the interrupt handler. The simplest option is possible when “raw” bytes are transferred to the main task:
Fig.3
But in this case, too many operations of placing and taking from the queue are obtained. Overhead is too high. It is advisable to queue not the raw bytes, but the results of their preprocessing (starting from the lines, ending with the results of interpreting the lines for our example with the G-code). But pre-processing in the interrupt handler is unacceptable, because at this time part, and sometimes all other interrupts are blocked (depending on the priority setting), and the data for other subsystems will be processed with a delay, which sometimes disrupts the product’s performance.
This postulate is worth repeating it several times. I remember, on one forum, I saw this question: "I took a typical microphone unpacker from PDM format, but it does not work correctly." And an example was attached to the question, in which PDM filtering was performed in the context of an interrupt. It goes without saying that when the author of the question began to convert from PDM to PCM without interruption (as he was immediately advised), all the problems went away by themselves. Therefore, in the interruption, pre-processing is unacceptable! Do not microwave eggs andperform unnecessary actions in the interrupt handler!
The recommended scheme in all textbooks, in the presence of preprocessing, is the following.
Fig.4.
The preprocessing task, which has a high priority, is blocked almost all the time. The interrupt handler received a byte from the hardware, woke up the pre-processor, passing this byte to it, and then left. From now on, all interrupts are again enabled.
The high-priority pre-processor wakes up, accumulates data in the internal buffer, and then falls asleep, again giving the opportunity to work with tasks with normal priority. When a line is accumulated (a line feed character has arrived), it interprets it and places the result in a message queue. This is the option that all academic publications recommend, so I just had to convey the classic idea to the readers here. Although, I’ll immediately add that I myself, not as a theorist, but as a practitioner - I see a weak point of this method. We win on a rare call to the queue, but lose on context switches to enter the high-priority task. In general, the recommendations were made; it’s described about the shortcomings of this approach, but how to work in real life — everyone should find their own method, choosing the optimal ratio of performance and simplicity. Some recommendations with real estimates will be in the next article on interruptions.
To implement the message queue, the MessageQueue class is used. Since the message queue should work effectively with arbitrary types of data, it is designed as a template (the data type is substituted for it as an argument).
template
class MessageQueue
{
...
The constructor has the form:
MessageQueue (size_t max_size);
The max_size parameter determines the maximum queue size. If you try to queue an element when it is filled to capacity, the task will be blocked until there is free space (some task will not take one of the elements already in the queue).
Since it has already been said too much, one cannot do without an example of initializing a queue. Take a test fragment in which it is clear that the queue element is of type short, and the dimension of the queue will not exceed 5 elements:
voidMessageQueueTestApp::Initialize()
{
mQueue = new MessageQueue(5);
Task::Add(new MessageSenderTask("send"), Task::PriorityNormal, 0x50);
Task::Add(new MessageReceiverTask("receive"), Task::PriorityNormal, 0x50);
}
You can place a message in the queue using the function:
Result Push (const T & message, uint32_t timeout_ms = INFINITE_TIMEOUT);
The timeout_ms parameter is required when the queue is full. In this case, the system will try to wait for the moment when there is free space in it. And this parameter - just tells how much is allowed to wait.
If necessary, the message can be put not at the end of the queue, but at its beginning. To do this, use the function:
Result PushFront (const T & message, uint32_t timeout_ms = INFINITE_TIMEOUT);
To remove the next element from the head of the queue, the function is used:
Result Pop (T & message, uint32_t timeout_ms = INFINITE_TIMEOUT);
Here, accordingly, the timeout parameter sets the wait time in case the queue is empty. During the set time, the system will try to wait for the appearance of messages in it that have been queued by other tasks.
You can also get the value of an element from the head of the queue without removing it:
Result Peek (T & message, uint32_t timeout_ms = INFINITE_TIMEOUT);
Finally, there are functions to find out the number of messages queued:
size_t Count ()
and the maximum possible size of the queue:
size_t GetMaxSize ()
Means of communication between different controllers
In fact, purely formally, the means of exchanging data between controllers are drivers. But ideologically they relate to the usual means of data exchange, which is one of the main features of the MAX MAX RTOS, so we will consider them in the part of the manual related to the kernel.
Let me remind you that in the published version of the OS, physical exchange can be carried out through wired UART or SPI interfaces, or through the RF24 radio module (also connected to the SPI interface). We also recall that to activate the exchange of data between controllers, you should enter the line:
#define MAKS_USE_SHARED_MEM 1 in the MaksConfig.h file
and determine the type of the physical channel by setting one of the constants:
MAKS_SHARED_MEM_SPI, MAKS_SHARED_MEM_UART or MAKS_EMHADIO .
The SPI and UART mechanisms in the current implementation provide the connection of only two devices, so the radio
option is recommended. Now, after such a protracted preamble, we begin the study of the SharedMemory class.
A class object can be initialized using the Initialize () function. The word "can" is not used accidentally. In general, for a radio option, initialization is not required.
The data structure is passed to this function. Let us briefly consider its fields.
Consider examples of filling this structure and calling the initialization function.
SmInitInfo info;
info.TransferCore = &SpiTransferCore::GetInstance();
info.NotifyMessageReceived = true;
info.AutoSendContextsActivity = true;
info.SendActivityDelayMs = 100;
info.CheckActivityDelayMs = 200;
SpiTransferCore::GetInstance().Initialize();
SharedMemory::GetInstance().Initialize(info);
Another option:
SmInitInfo info;
info.TransferCore = &SpiTransferCore::GetInstance();
info.AutoSendContextsActivity = true;
info.SendActivityDelayMs = 100;
info.CheckActivityDelayMs = 200;
SpiTransferCore::GetInstance().Initialize();
SharedMemory::GetInstance().Initialize(info);
This class provides two mechanisms for the interaction of tasks - messages and shared memory (with the possibility of locks).
Since the shared memory object is always one, the developers of the operating system created the MaksSharedMemoryExtensions.cpp file, which converts complex function names to global ones.
Here is a snippet of this file:
Result GetContext(uint32_t context_id, void* data)
{
return SharedMemory::GetInstance().GetContext(context_id, data);
}
Result GetContext(uint32_t context_id, void* data, size_t data_length)
{
return SharedMemory::GetInstance().GetContext(context_id, data, data_length);
}
Result SetContext(uint32_t context_id, const void* data, size_t data_length)
{
return SharedMemory::GetInstance().SetContext(context_id, data, data_length);
}
Since all the applications included in the delivery package use global function names, I will also use this naming option in the examples for this document.
Messages
Let's start with the posts. To send them, the SendMessage () function is used. The function is quite complex, so let's consider it in detail:
Result SendMessage (uint32_t message_id, const void * data, size_t data_length)
Arguments:
message_id - message identifier;
data - a pointer to the message data;
data_length - the length of the message data.
Usage example:
const uint32_t APP5_EXPOSE_MESSAGE_ID = 503;
...
if (broadcast)
{
char t = 0;
SendMessage(APP5_EXPOSE_MESSAGE_ID, &t, sizeof(t));
}
const uint32_t APP5_AIRPLANE_MESSAGE_ID = 504;
...
bool AirplaneTask::SendAirplane()
{
Message msg(_x, _y, _deg, _visibility);
return SendMessage(APP5_AIRPLANE_MESSAGE_ID, &msg, sizeof(msg)) == ResultOk;
}
The result of the function reflects the sending status of the message. Whether it was received by any of the recipients or not, the function is silent. It is only known whether it is gone or not. To confirm, the recipient must send a return message.
Accordingly, the function is used to wait for a message on the recipient side:
Result WaitUntilMessageReceived (SmMessageReceiveArgs & args, uint32_t timeout_ms = INFINITE_TIMEOUT)
Arguments:
args - a reference to an object with message parameters;
timeout_ms - timeout in milliseconds. If the timeout value is INFINITE_TIMEOUT, then the task will be blocked without the possibility of unlocking by timeout (endless wait).
Message options are a whole class. Let's consider briefly its open members:
uint32_t GetMessageId ()
Returns the identifier of the received message:
size_t GetDataLength ()
Returns the data size of the received message in bytes.
void CopyDataTo (void * target)
Copy message data to the specified buffer. The memory for the buffer must be allocated in advance. The size of the buffer should be no less than the size of the message data (the result of calling the GetDataLength method)
Thus, the example serving the receipt of the message sent in the previous example looks like this:
void MessageReceiveTask::Execute()
{
    Message msg;
    while (true)
    {
        SmMessageReceiveArgs args;
        Result res = WaitUntilMessageReceived(args);
        if (res == ResultOk)
        {
            uint32_t mid = args.GetMessageId();
            switch (mid)
            {
            ....
            case APP5_EXPOSE_MESSAGE_ID:
#ifdef BOARD_LEFT
                _gfx->ExposeAirplaneRed();
#else
                _gfx->ExposeAirplaneBlue();
#endif
                break;
            case APP5_AIRPLANE_MESSAGE_ID:
            {
                args.CopyDataTo(&msg);
#ifdef BOARD_LEFT
                _gfx->UpdateAirplaneRed(msg.X, msg.Y, msg.Deg);
                _gfx->SetAirplaneRedVisibility(msg.Visibility);
#else
                _gfx->UpdateAirplaneBlue(msg.X, msg.Y, msg.Deg);
                _gfx->SetAirplaneBlueVisibility(msg.Visibility);
#endif
            }
                break;
            ...
Synchronized Context
A context is an area of memory that is kept synchronized between all controllers. The purpose of synchronization can be anything. The simplest case is hot standby: one device informs the others about completed work stages, so that if it fails, the remaining devices know where to pick up the work. For devices pursuing a goal together, exchanging data through a context may be more convenient than through messages: messages have to be generated, transmitted, received, and decoded, while context memory can be used like ordinary memory. It is only important to remember to synchronize it, so that the memory of one device is duplicated to the others.
The number of synchronized contexts in a system can be arbitrary, so there is no need to fit everything into one: different needs can be served by different contexts. The memory size, the data structure inside it, and the other parameters of a synchronized context are the application programmer's concern (naturally, the larger the amount of synchronized memory, the slower synchronization is, which is why several small contexts are preferable to one large one).
Even the moments for synchronization sessions are chosen by the application programmer. The RTOS MAX provides the supporting API, but the application programmer must call its functions. The reason is that the exchange process is relatively slow: if it were left entirely to the operating system, delays could occur exactly when the processor core should be serving other tasks. Synchronize too often and resources are wasted; too rarely and the data may become stale before the controllers synchronize. Add the question of whose data is more important (say, with four different subscribers), and it becomes clear that only the application programmer can initiate synchronization: only he knows when it is best done, and which subscriber should distribute its data to the rest. The OS, for its part, makes the transport transparent to the application.
A context has a numerical identifier, set by the application programmer. An application can have one synchronized context or several; it is only important that the identifiers be consistent across the interacting controllers.
The simplest examples of synchronized data: cleaning robots periodically mark the territory they have already cleaned on a shared map, so the others know which areas remain, and announce where they are heading next, so as not to get in each other's way. Power wrenches working on one product mark each nut as it is tightened, so that if one fails, another can finish its part. A board with a touch screen registers a tap and reports the fact to the other boards. There are many other cases where memory needs to be shared, but it is acceptable to do so a few times per second (at most a few tens of times per second).
Thus, the context can be represented as shown below:
Fig. 5. Context
And its purpose - can be represented in the following figure:
Fig. 6. The essence of context synchronization
Now let's consider the functions used to synchronize a context:
Result GetContext (uint32_t context_id, void * data);
Copies the context data to the specified memory area; the memory must be allocated in advance. Suitable when the data length is known in advance (for example, a structure with simple fields).
Arguments:
context_id - context identifier;
data - pointer to the memory area for storing the context copy.
The call returns the data of the context with the specified identifier as obtained during the last synchronization. The function is therefore fast, since the data is taken from the local copy of the context. There is a second variant of this function:
Result GetContext (uint32_t context_id, void * & data, size_t & data_length);
It allocates memory itself and returns both the data and the context length. Suitable when the data length is not known in advance (for example, an array of arbitrary length).
Arguments:
context_id - context identifier;
data - reference to a pointer that receives the address of the allocated memory holding the context copy;
data_length - receives the size of the context data in bytes.
In principle, you can create a task that will wait for the context to be updated, and then copy its new data to the application memory. The following function is suitable for this:
Result WaitUntilContextUpdated (uint32_t & context_id, uint32_t timeout_ms = INFINITE_TIMEOUT)
Arguments:
context_id - context identifier;
timeout_ms - timeout in milliseconds. If the timeout value is INFINITE_TIMEOUT, then the task will be blocked without the possibility of unlocking by timeout (endless wait).
Finally, consider the case when a task wants to update its context in the entire system (consisting of several controllers).
The context should be captured first. The following function is used for this:
Result LockContext (uint32_t context_id)
Argument:
context_id - context identifier.
The function requires an exchange between controllers, so it can take a long time.
If the context was captured successfully (when several controllers try to capture it simultaneously, only one wins; the others receive an error code), it can be written with the following function:
Result SetContext (uint32_t context_id, const void * data, size_t data_length)
Arguments:
context_id - context identifier;
data - pointer to the memory area containing the new context data;
data_length - size in bytes of the memory area to store the context.
Finally, to perform the context synchronization itself, call:
Result UnlockContext (uint32_t context_id)
Argument:
context_id - context identifier.
It is after its call that contexts will synchronize throughout the system.
The function requires an exchange between controllers, so it can take a long time.
Work example
Consider a real-world example of working with synchronized contexts, which comes with the OS. The code is contained in the file ...\maksRTOS\Source\Applications\CounterApp.cpp
In this example, several devices increment a counter once a second (if you run this application on boards with a screen, the counter value is displayed). If one of the controllers is turned off and then on again, it will receive the current counter value and continue working together with the others. Thus, the system keeps counting as long as at least one controller in it is alive.
The application programmer who did this example chose a context identifier based on the principle: “Why not?”
static const uint32_t m_context_id = 42;
The memory to be synchronized looks simple:
uint32_t m_counter;
The main actions that interest us occur in the function:
void CounterTask::Execute()
First, the controller tries to find out whether it is the first one in the system. To do this, it tries to get the context:
Result result = GetContext(m_context_id, &m_counter);
If the controller is not the first, the context will be received, and with it the counter value that already exists in the system.
If the controller is the first, the context will not be received. In that case it must be created, which is done as follows (the counter is zeroed along the way):
if (result != ResultOk) {
    m_counter = 0;
    result = LockContext(m_context_id);
    if (result == ResultOk) {
        SetContext(m_context_id, &m_counter, sizeof(m_counter));
        UnlockContext(m_context_id);
    }
}
That's it: now the context definitely exists; it was either found in the system we just connected to, or created by us. We enter an infinite loop:
while (true)
{
There we wait one second:
Delay(MAKS_TICK_RATE_HZ);
And try to win the competition for the right to distribute our counter to the whole system:
result = LockContext(m_context_id);
If we succeeded, we distribute it:
if (result == ResultOk) {
    GetContext(m_context_id, &m_counter);
    ++m_counter;
    SetContext(m_context_id, &m_counter, sizeof(m_counter));
    UnlockContext(m_context_id);
}
As you can see, we only distribute the context and never receive it afterwards: in this example, only newly connected devices fetch the context, and once a device has received it, it works autonomously. Of course, it would be better to stay in sync constantly, but describing such a system would take far more paper, and, psychology being what it is, most readers would simply yawn and move on to the next section. So working through more complex examples is left to the most inquisitive readers as an exercise. The principle of shared memory, I hope, is now more or less clear.