To the question of TI

    “Now I will show you a portrait ... Hmm ... I warn you that this is a portrait ... In any case, please treat it like a portrait ...

    In this post we will talk about developing and debugging programs for the TI CC1350 MCU in the development environment recommended by the manufacturer, CCS. Both the merits (and there are merits) and the shortcomings (and how could there not be) of the above products will be touched upon. There will be no screenshots showing (circled) the location of the compile icon in the IDE or the selection of a file in a directory. While acknowledging that articles in that style have a right to exist, I will try to focus on conceptual issues in the hope that my reader will be able to figure out the details.

    The purpose of this opus, besides sharing the experience gained, is an attempt to arouse healthy envy among domestic MCU manufacturers that compete directly with TI ("in the country where we prosper") - a frankly thankless task, but they say that constant dropping wears away the stone.

    I'll note right away that we will be talking only about Windows, and only version 7 at that. The TI website also offers options for Mac and Linux; I have not tried them. I am quite ready to believe that things are not as nice there, but why think about the bad (or, conversely, everything is great there, but then why envy?).

    So, what the TI site tells us: to get started with the evaluation modules, you must perform three essential steps:

    1. Buy evaluation modules - done.

      A note in the margins (PNP): you will have to do this too, because in the programming environment under discussion I personally (unfortunately) could not find any way to emulate the hardware for debugging, at least not where I looked.
    2. Install the development environment - download, run the installer, everything works. Connect the evaluation module over USB - the drivers come up on their own, and again everything just works - done. On first attempting to program the device we get a message about the need to update the firmware; we agree, and once again everything works out. In general, there would be nothing to write about if it were always like this everywhere ...
    3. Go and study the TI SimpleLink Academy 3.10.01 course for the SimpleLink CC13x0 SDK 3.10 - a strange suggestion at first glance (teaching me is only spoiling me), but so be it; I open the corresponding link and am stunned - how much stuff is packed in here.

    Here we find training materials on the hardware drivers, on SYS/BIOS and the TI-RTOS operating system, on the NDK network stack, including USB, on the wireless protocols, and on many more aspects of working with the various MCU families the company produces. And all this wealth is accompanied by ready-to-use examples, and if you add the user manuals and module descriptions, there is perhaps nothing left to wish for. There are also utilities that ease the work of preparing and configuring program code, flashing and debugging in various ways, and this wealth is quite well documented too.

    PNP: if someone is inclined to consider this material advertising for the company, its products and its programming system, they are most likely right - I really am very impressed by the sheer volume of the software I discovered. Its quality will be discussed further on and, I hope, the suspicion of bias will be dispelled: I am not blinded by the feeling, and I continue to see the flaws of the object under description perfectly well, so this is not youthful infatuation but the serious feeling of an adult specialist. I am afraid even to imagine the material costs required to create and maintain such a volume of software and its documentation, but this was obviously not done in a month, and the company probably knows what it is doing.

    Okay, let us postpone the study of the materials for later, figure everything out as we go, and boldly open CCS. It implements the concept of workspaces, inherited from its parent - Eclipse. Personally, the project concept is closer to me, but nobody prevents us from keeping exactly one project per workspace, so let's move on.

    But then things get a little worse - we open the workspace for our debug board and see many projects (as a rule, in two variants - for the RTOS and for "bare metal"). As I said earlier, that is not a crime, but the fact that many projects contain copies of the same files with identical software modules is not great at all. The code is duplicated many times over, and keeping changes in sync becomes a decidedly non-trivial task. Yes, with such a solution it is much easier to move a project around by simply copying the directory, but for such things there is project export, and it is implemented quite well. Links to files in the project tree are supported properly, so the decision to include copies of the files themselves in the supplied examples cannot be considered satisfactory.

    We continue our research - we will start with a ready-made project, but not the LED blinker (although there are two LEDs on the debug board); instead we take the one working with the serial port, the ready-made uartecho example. We create a new workspace, include the project of interest and ... nothing comes of it; from the message it is clear that a related project must also be included in the workspace. It is not very clear why it is done this way, but fulfilling the environment's demand is not difficult, after which the project builds.

    PNP: on my home machine I used the Import Project command and all the necessary inclusions happened by themselves. Where exactly the related projects are specified I do not know; let us leave that question for the future.

    We compile, flash and start debugging. We discover an interesting phenomenon - single-stepping is not displayed properly inside the serial port library: the cost of optimization. We turn off optimization in the compiler settings (what settings there are in there - are there really people who know them all and, moreover, use them all?), rebuild the project - and nothing changes. It turns out that the new settings apply only to the files included in the project tree, at least as links. We add links to the library sources, and after rebuilding everything debugs correctly (provided the option to generate debug information is enabled).

    PNP: on the other hand, I did find options for enabling MISRA-C compliance checking.

    PNP: another way is to use the "Clean ..." command followed by a build; for some reason the "Build All" command does not affect the associated project.

    Then we find that even so not everything debugs normally: sometimes we land in regions of machine code for which the compiler cannot find the source. Since the programming environment gives us all the files needed for the job - the preprocessor output, the assembler listing and the linker map (you just need to remember to enable the corresponding options) - we turn to the latter. We find two regions of program code - one starting at 0x0000. and one starting at 0x1000. (32-bit architectures are good at many things, but compact address notation is not their strong point). We turn to the chip documentation and learn that inside there is a ROM region mapped precisely at 0x1000., and it contains the built-in part of the libraries. It is claimed that using the routines from it improves performance and reduces power consumption compared to running from the 0x0000. address space. While we are still mastering the MCU these parameters interest us less, and convenience of debugging is what matters. The use of the ROM can be disabled (for our purposes only) by passing the NO_ROM option to the compiler, which we do, and we rebuild the project.

    PNP: the jump to a subroutine in ROM looks quite funny - there is no long jump in the instruction set, so first a jump with return is made to an intermediate point in the low address area (0x0000), where a load into the PC awaits, whose operands the disassembler does not recognize. I find it hard to believe that with such overhead you can win on speed, although for long routines - why not.

    By the way, an interesting question is how one can guarantee at all that the contents of the ROM correspond to the source code kindly provided by the company. I can immediately propose a mechanism for embedding additional (debugging and service, of course) functions in the ROM that would be completely invisible to the user - the MCU programmer. And personally I have no doubt that the chip's developers know many other mechanisms implementing such functionality as well, but let us end this attack of paranoia.

    On the other hand, I can only welcome the appearance of such an analogue of the BIOS, because in the long run it could make the developers' dream of real code portability between different MCU families with the same core a reality. Note also the peculiarity of how interaction with the "embedded" software modules is implemented. In the early attempt at a similar mechanism, implemented in the TivaC models, there was a call supervisor accessed with a group number and an entry point number, which caused significant overhead; here the binding is resolved at the linker level through dual function names, and direct long jumps to the subroutines in ROM are inserted. This is much faster in execution, but requires recompiling the project when the usage model changes.

    Now that we are fully prepared for convenient debugging, we return to our project and begin to calmly debug the program with access to the source code of the modules (or so I thought ...), which will let us form an opinion about the quality of these texts. The project under study implements a mirror of the serial communication channel and is extremely convenient for learning purposes. Of course, we took the variant that uses the RTOS; I see not the slightest reason to avoid it in our configuration (plenty of RAM and program memory).

    Note right away that the source code is in C. This is often not very convenient - many language constructs look cumbersome compared to their C++ counterparts - but the creators cared more about code compatibility than about syntactic sugar. It would have been possible to maintain a C++ version of the libraries as well (conditional compilation has been known for a long time and is used everywhere), but that entails additional material costs. Surely the company's management knows what it is doing, and my comments are armchair analysis of a sort, but it seems to me I am entitled to my opinion too.

    I also know the opposite approach, when a library is designed with the newest C++ features and, to the question of what developers whose compilers do not support the latest standard should do, the lovely answer is: upgrade, or do not use this library (in such cases I strongly recommend the second option). My personal opinion is that if we really want our product to be used (and TI clearly wants that, and does not build its libraries on the principle of "get off my back, here's a new drum for you"), then its approach is certainly the right one.

    The source code of the program looks classical - initialization of the hardware and software environment, creation of the tasks and launch of the scheduler in the main module, with the task body in a separate compilation unit. In the example under consideration there is exactly one task - mainThread; its purpose is not entirely clear from the name, and, what confuses me somewhat, the name of the file containing its source does not match the name of the function (uartecho.c - though here the name does speak for itself). Oh well; search in the IDE is implemented in the standard way (context menu or F3 on an identifier), so this is not a problem.

    The process of setting task parameters before starting is pretty much expected:

    1. create a parameter structure (local, of course),
    2. give it default values,
    3. set parameters other than standard, and
    4. use the structure when creating the task.
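
    In the TI-RTOS examples the four steps look roughly like this - a sketch from memory of the SYS/BIOS Task API, so treat the exact names as assumptions and check the SDK's own main file (TASKSTACKSIZE and the mainThread entry point are borrowed from the uartecho example):

```c
#include <xdc/std.h>
#include <ti/sysbios/knl/Task.h>

#define TASKSTACKSIZE 768

extern Void mainThread(UArg arg0, UArg arg1);  /* the echo task body */

static Task_Struct taskStruct;                 /* statically allocated task object */
static Char taskStack[TASKSTACKSIZE];

void createEchoTask(void)
{
    Task_Params params;                        /* 1: local parameter structure */

    Task_Params_init(&params);                 /* 2: fill it with the defaults */
    params.stackSize = TASKSTACKSIZE;          /* 3: override what differs     */
    params.stack = taskStack;
    params.priority = 1;

    /* 4: use the structure when constructing the task */
    Task_construct(&taskStruct, mainThread, &params, NULL);
}
```

    Static allocation via Task_construct (rather than Task_create) avoids touching the heap, in the same spirit as the receive buffer handling praised below.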

    Despite the seeming naturalness of these operations, it is not obvious to all library authors; I have seen various implementations in which, for example, stage 2 was missing, which led to amusing (for an outside observer, not for the programmer) program behavior. In this case everything is fine; the only question that arises is why the default values are not constants - probably a legacy of the accursed past.

    PNP: the well-known FreeRTOS takes a slightly different approach, with the task parameters passed directly in the call to the task-creation API function. The pros and cons of these approaches are as follows:

    1. + allows you to omit parameters that match the default values; + does not require remembering the order of parameters; - more verbose; - larger memory costs; - you need to know the default parameters; - creates a named intermediate object.
    2. - requires specifying all parameters; - requires remembering their order; + more compact; + requires less memory; + does not require named intermediate objects.

      There is also a third method, advocated by the author of this post (in the TURBO style), with its own set:
    3. + allows you to omit parameters that match the defaults; + does not require remembering the order of parameters; - more verbose; - larger memory costs; - you need to know the default parameters; + works in a lambda style; + makes typical mistakes harder to make; - looks somewhat strange because of the many closing parentheses.

    Well, there is also a fourth option, devoid of any drawbacks but requiring C++14 or later - we lick our lips and pass by.

    We start debugging, run the program and open one of the two serial ports provided by the debug board in the terminal window supplied by the IDE. Which of the two ports to use (one is for debugging, the other presumably for the user; their numbers can be seen in the system) is hard to tell in advance - sometimes the lower number, sometimes the higher - but at least it does not change when the board is reconnected, so you can write it on the board. One more inconvenience: open terminals are not saved with the project and are not restored when a debug session opens, although they do not close when you leave it. We check the operation of the program and immediately discover yet another drawback - the terminal cannot be configured; for example, it insists on working Unix-style with a closing \r. I had forgotten such minimalism still existed.

    PNP: we note one more peculiarity of debugging, though it is true of any development environment - when the scheduler switches tasks, we lose the stepping focus; breakpoints help solve this problem.

    To begin, consider the process of creating a serial port instance - everything seems standard here: a structure is used whose fields are assigned the required parameters of the object. Note that in C++ we would have the opportunity, completely absent in C, to hide all the initialization "under the hood", but I have already given the possible arguments in favor of the second solution. There is a function for initializing the settings structure, and that is good (paradoxical as it sounds, the authors of some libraries do not seem to consider such a function obligatory). At this point in the story the honeymoon ends and ordinary (married) life begins.

    A careful study of the sources shows that not everything is so good. The initialization function copies the default values from an object lying in the constant region into our control structure, and that is wonderful, but for some reason:

    1. the object is global, although it is used by one single function to initialize the parameters (at one time a similar practice cost Toyota a decent amount) - well, adding the static keyword is easy;
    2. the control object is named; in C there is no beautiful solution to this problem - more precisely, there is one with an anonymous compound literal, and I gave it in a post long ago, but the many closing parentheses do not let us call that option truly beautiful; in C++ there is a solution of stunning beauty, but why dream of the impossible;
    3. all fields of the object are clearly redundant in width; even boolean fields (enumerations of two possible values) are stored in 32-bit words;
    4. the enumerated mode constants are defined as #defines, which rules out checking at compile time and makes it necessary at run time;
    5. the section with an infinite loop is repeated at each possible point of failure; it would be much more correct to make a single (in this case empty) handler;
    6. and all the operations of setting up and starting a task could (and should) be hidden in one function or even a macro.

    But the initialization of the receive buffer is done well - pre-reserved memory is used, no heap manipulation; the call chain is somewhat convoluted, but everything is readable.

    PNP: in the debug window the call stack is before our eyes; everything is done as it should be, soundly - much respect. The only somewhat surprising thing is that an attempt to hide this window ends the debug session.

    Well, and one more somewhat unexpected decision - defining the possible number of objects via an enumeration which, for the serial ports of this debug board, has exactly one member, in the style of

    typedef enum CC1310_LAUNCHXL_UARTName {
        CC1310_LAUNCHXL_UART0 = 0,
    } CC1310_LAUNCHXL_UARTName;

    Such solutions are standard for real enumerations, but I did not even know this was possible for describing hardware objects - though it works; noted for future use. We are done with the initialization of the iron; let us move on.

    In a running task, we observe a classic infinite loop in which data from a serial port is read by the function
    UART_read(uart, &input, 1);
    and immediately sent back by the function
    UART_write(uart, &input, 1);
    . Let's step into the first one and see the attempt to read characters from the receive buffer:
    return (handle->fxnTablePtr->readPollingFxn(handle, buffer, size))
    (how I hate such constructs, but in C there is simply no other way); we go deeper, find ourselves in UARTCC26XX_read, and from there land in the ring buffer implementation - the function
    RingBuf_get(&object->ringBuffer, &readIn)
    . Here ordinary married life enters its acute phase.

    I cannot say I merely disliked this particular module (the ringbuf.c file) - it is simply awful, and I personally would have dismissed in disgrace whoever in such a respected company wrote this part (you could hire me in their place, though I am afraid the salary level of our Indian colleagues would not suit me). Watch the hands:

    1) the wrap-around of the read/write pointers is implemented through the remainder of a division:

    object->tail = (object->tail + 1) % object->length;

    and no compiler optimization of this operation, such as applying a bit mask, is possible, since the length of the buffer is not a constant. Yes, this MCU has a hardware divide instruction and it is fairly fast (I have written about this), but it still never takes 2 cycles, as a correct implementation with an honest wrap-around does (and I have written about that too),

    PNP: I recently saw a description of an implementation of the new M7 architecture - I do not remember whose - in which a 32-by-32 division for some reason began to take 2-12 cycles instead of 2-7. Either this is a translation error, or ... I do not even know what to think.

    2) moreover, this code fragment is repeated in more than one place - macros and inlines are for wimps, Ctrl+C and Ctrl+V rule, and the DRY principle can take a hike,

    3) a completely redundant counter of occupied buffer slots is implemented, which entails the following drawback,

    4) critical sections in both reading and writing. Well, I can still believe the authors of this module do not read my posts on Habr (although for professionals in the firmware field such behavior is unacceptable), but they ought to be familiar with the Mustang Book, where this question is examined in detail,

    5) as the cherry on the cake, an indicator of the maximum buffer occupancy has been introduced, with a very unclear name at that and no description whatsoever (the latter applies to the entire module in general). I do not rule out that this option may be useful for debugging, but why drag it into release builds - do we have processor cycles and RAM to spare?

    6) at the same time, handling of buffer overflow is completely absent (there is a -1 return value signaling the situation) - even Arduino has it; let us leave aside the quality of that handling, but its absence is even worse. Or were the authors inspired by the well-known fact that any statement is true of the empty set, including the statement that it is not empty?

    In general, my comments fully match the first line of the well-known code-review demotivator: "10 lines of code - 10 comments."

    By the way, the penultimate of the noted shortcomings makes us think about more global things - how should a base class be implemented at all so that deep modification of it remains possible? Making all fields protected is a dubious idea (although probably the only right one); inserting calls to friend functions into the descendants looks very much like a crutch. If in this particular case there is a simple answer to the question of adding an occupancy indicator - a derived class overriding write and read with an additional counter - then to implement reading without advancing the buffer (as here) or replacing the last stored character (I have seen such a ring buffer implementation) you cannot do without access to the internal data of the parent class.

    At the same time, there are no complaints about the implementation of the actual read from the serial interface - the input is blocking; when there are not enough characters in the receive buffer, a semaphore is armed and control is handed to the scheduler - everything is implemented accurately and correctly. Personally, I do not much like touching the hardware from a general-purpose routine, but it reduces the nesting of procedures and lowers the cyclomatic complexity index, whatever that means.

    Let us now turn to the transmission of the received data back into the serial channel, given that at creation the object was provided with only one ring buffer - the receiving one. Indeed, the internal hardware buffer is used for transmitting characters, and when it fills up, a wait for readiness is entered (at least in the blocking mode of operation). I cannot restrain myself from criticizing the style of the corresponding functions: 1) for some reason the object holds a generic pointer, which inside the function is constantly cast to a pointer to characters
    *(unsigned char *)object->writeBuf);
    2) the logic is completely opaque and somewhat confused. But all this is not so important, because it remains hidden from the user and "does not affect the top speed."

    In the process of research we stumble upon one more feature - in debug mode we do not see the source code of some internal functions; this is due to the renaming of symbols for the different compilation variants (ROM/NO_ROM). I did not manage to substitute the required source file (C:\Jenkins\jobs\FWGroup-DriverLib\workspace\modules\output\cc13xx_cha_2_0_ext\driverlib\bin\ccs/./../../../driverlib/uart.c), though I did not try very hard, even though I found the source (in the uart.c file, naturally - thanks, captain). Fortunately, this fragment is simple, and it is easy to match the assembler code with the C source (especially if you know the quirks of the ITxxx instructions). How to solve this problem for libraries with complex functions I do not know; we will think about it when the need arises.

    And finally, a small remark - I am ready to believe that the serial channel hardware in the CC13x0 models is identical to that of the CC26x0, but duplicating the contents of a file named UARTCC26XX.c cannot be called the right solution. Instead I would welcome an intermediate definition file that includes the source file, overrides the functions, and carries an appropriate comment - this would make the program more understandable, and that should always be welcomed, voila.

    So, the test example works, and we have learned a lot about the internal structure of the standard libraries and noted their strong and not-so-strong sides. To conclude the review, let us try to answer the question that usually worries a programmer facing the "OS or no OS" dilemma - the context switch time. Two ways are possible here: 1) studying the source code - rather a theoretical path, requiring a level of immersion in the subject that I am not ready to demonstrate, and 2) a practical experiment. Of course the second method, unlike the first, does not give absolutely exact results, but "truth is always concrete," and the data obtained can be considered adequate if the measurements are organized correctly.

    To begin with, in order to estimate the switch time, we must learn how to measure the execution time of arbitrary program fragments. This architecture has a debug module which includes a system cycle counter. Information about this module is quite accessible, but the devil, as always, hides in the details. First, let us try to configure the required mode by hand, directly through register accesses. We quickly find the CPU_DWT register block and in it both the CYCCNT counter itself and its control register CTRL with the CYCCNTENA bit. Naturally - or, as they say, of course - nothing happened, and the ARM website has the answer why: the debug module must first be enabled with the TRCENA bit in the DEMCR register. But with this last register it is not so simple - it is not in the DWT block, searching through the other blocks is tedious (they are long enough), and I found no search-by-name in the register window (it would be nice to have one). So we go to the memory window and enter the register's address, known to us from the datasheet (by the way, for some reason hexadecimal is not the default address format; you have to type the 0x prefix by hand), and, suddenly, we see a named memory cell called CPU_CSC_DEMCR. It is odd, to say the least, that the company renamed the registers relative to the names proposed by the licensor of the architecture - probably there was a reason. And indeed, in the CPU_CSC register block we find our register, set the desired bit in it, return to the counter, enable it - and it all works.
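
    The same register pokes can be sketched in code, using the architectural ARMv7-M addresses (in CMSIS terms these are CoreDebug->DEMCR, DWT->CTRL and DWT->CYCCNT); a minimal sketch, to be checked against the chip's own register map:

```c
#include <stdint.h>

/* Standard ARMv7-M debug register addresses (CMSIS: CoreDebug / DWT). */
#define DEMCR       (*(volatile uint32_t *)0xE000EDFCu)
#define DWT_CTRL    (*(volatile uint32_t *)0xE0001000u)
#define DWT_CYCCNT  (*(volatile uint32_t *)0xE0001004u)

#define DEMCR_TRCENA        (1u << 24)
#define DWT_CTRL_CYCCNTENA  (1u << 0)

static void cycleCounterStart(void)
{
    DEMCR |= DEMCR_TRCENA;           /* enable the debug/trace block first */
    DWT_CYCCNT = 0u;                 /* reset the counter                  */
    DWT_CTRL |= DWT_CTRL_CYCCNTENA;  /* and let it run                     */
}
```

    Calling this once at startup would also remove the need to repeat the register setup by hand after every MCU reset, at the cost of a debug-only code insertion.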

    PNP: search by name does exist after all; it is invoked (naturally) by Ctrl-F, it just lives only in the context menu and is absent from the main one. My apologies to the developers.

    Right away I note another drawback of the memory window - the printout of the contents is interrupted by the names of the named cells, which makes the output ragged and no longer neatly segmented into rows of 16 (8, 32, 64 - substitute as needed) words. Moreover, the output format changes when the window is resized. Maybe all this can be configured to the user's taste, but, proceeding from my own experience (and what else should I proceed from), I declare that configuring the output format of the memory view window is not among the intuitively obvious operations. I am entirely in favor of enabling such a convenient feature as the display of named memory areas in the view window - otherwise many users would never learn of it - but care must also be taken of those who consciously want to turn it off.

    By the way, I would not have entirely abandoned the possibility of creating macros (or scripts) for driving the environment, because I had to repeat this register setup (to enable the time measurement) after every MCU reset, since I consider patching the code with register manipulations for debugging purposes not quite correct. But, although I never found macros, work with the registers can be greatly simplified by adding the individual (needed) registers to the expressions window, which considerably eases and speeds up handling them.

    To stress that the engineer's feeling for the MCU family has not cooled (what with me scolding various aspects of the development environment), I note that the counter works excellently - I could not find a single extra cycle in any of the debug modes, which used to happen, at least in the MCU series developed by Luminary Micro.

    So, we outline the plan of the experiment for determining the context switch time: create a second task that increments an internal counter (in an infinite loop), run the MCU for a while, and find the ratio between the system counter and the task counter. Then run the MCU for a similar period (not necessarily exactly the same) and enter 10 characters at a pace of roughly one per second. We may expect this to produce 10 switches to the echo task and 10 switches back to the counter task. Yes, these context switches are triggered by events rather than by the scheduler timer, but this should not affect the total execution time of the code under investigation, so we begin implementing the plan: create the counter task and start it.

    Here we discover one feature of this RTOS, at least in the standard configuration - its preemption is not "for real": if a task is constantly ready to run (and the counter task is exactly that) and never yields to the scheduler (does not wait on signals, does not sleep, is not blocked by flags, etc.), then no task of lower priority will run at all, full stop. This is not Linux, where various methods guarantee everyone their quantum "so that nobody leaves offended." Such behavior is to be expected - many lightweight RTOSes behave this way - but the problem runs deeper: tasks of equal priority also receive no control while one of them is permanently ready. That is why in this example I gave the echo task, which blocks waiting for input, a priority higher than the permanently ready counter task.

    We begin the experiment. The first part (simply letting it run) gave a counter ratio of 406181k / 58015k = 7, quite as expected. The second part (with 10 characters entered over ~10 seconds) gives 351234k − 50167k × 7 = 63k; 63k / 20 = 3160 cycles, and that last figure is the cost of the context-switching procedure in MK cycles. Personally this value seems somewhat larger than I expected, so we continue the investigation: apparently some other operations are spoiling the statistics.

    PNP: a common experimenter's mistake is to skip estimating the expected result in advance and then believe whatever garbage comes out (greetings to the 737 developers).

    It is obvious ("yes, quite obvious") that the result contains, besides the context switch itself, the time needed to read a character from the buffer and output it to the serial port. Less obviously, it also contains the time to service the receive interrupt and place the character into the receive buffer. How do we separate the flies from the cutlets? There is a trick: stop the program, type 10 characters, and resume it. We can expect (the sources should be checked) that the receive interrupt will then fire only once and all the characters will be moved from the receive buffer into the ring buffer at one go, so the overhead should drop. The serial-output time is also easy to determine: echo only every second character and solve the resulting two linear equations in two unknowns. Or even simpler, output nothing at all, which is what I did.

    And here are the results of these tricky manipulations: with input entered as a packet the missing ticks shrink to 2282, and with output disabled the cost drops to 1222 ticks. Better, though I was hoping for something like 300 ticks.

    But nothing of the sort can be devised for the read time; it scales together with the very context-switching time we are after. The only thing I can suggest is to stop the internal timer at the start of reading the received character and restart it before entering the wait for the next one. The two counters would then run synchronously (except during the switch itself), and the difference could easily be determined. But such an approach requires digging deep into the system sources, and the interrupt-handling component would still remain. So I propose to settle for the data already obtained, which let us firmly state that the task-switching time in the TI-RTOS under consideration does not exceed 1222 clock cycles, which at the given clock frequency is 30 microseconds.

    PNP: that is still a lot. I had estimated about 100 cycles: 30 to save the context, 40 to find the next ready task and 30 to restore the context, yet we get an order of magnitude more. True, optimization was off until now; let's turn on –o2 and look at the result: it has not changed much, 2894 instead of 3160.

    There is another idea: if the OS supported round-robin switching between equal-priority tasks, one could run two counter tasks, obtain the number of switches per unit of time for free, and compute the loss against the system counter; but because of the scheduler peculiarity already mentioned, this approach will not work. Another option is possible, though: play ping-pong between two equal-priority (or even different-priority) tasks through a semaphore, where the number of context switches is easy to count. I will have to try it, but that will be tomorrow.

    The traditional poll at the end of the post will this time be devoted not to the quality of the presentation (it is clear to any unbiased reader that it is beyond praise and exceeds all expectations), but to the topic of the next post.


    Which would you prefer:

    • 47.6% (10 votes) — continue getting familiar with the environment from TI and TI-RTOS
    • 33.3% (7 votes) — a description of debug boards from TI for this family
    • 52.3% (11 votes) — a review of the domestic RTOS MAX (if you think I was scolding TI, you will see you were mistaken; I was stroking the squirrel's tail)

