A counter on a microcontroller with an accuracy of 2/3 µs and an overflow period of several days: a response

This post is a response to avn's topic about a programmable timer/counter in a microcontroller.
In that post the author describes software counters with acceptable accuracy and overflow time, but does not mention the restrictions that such software counters impose. Without tying myself to a particular microcontroller architecture, I will try to present another time-counting algorithm in an accessible form.

Formulation of the problem

avn's implementation of time counting is viable in small programs where counting time is the main task. In real life, low-power microcontrollers "sometimes" also have to solve differential equations. That math can take anywhere from two to several tens of software-counter increment periods. On top of that, the bulky background math task is interrupted by requests from the peripherals (ADC, DAC, PWM, asynchronous and synchronous communication channels).
If you also need to keep a software counter with an even longer overflow period, and remember that every entry into the timer interrupt executes dozens of assembler instructions (saving registers to the stack, incrementing the wide timer, restoring the stack), you can see that there is hardly any time left for the main task.

What to do?

In such a situation I proceed as follows. I choose the hardware timer overflow frequency from the software requirements (accuracy of binding events to time, sufficient accuracy in the math, and margin against software-timer overflow). I make the interrupt priority as high as possible (if a timer with auto-reload is not available), and the interrupt handler itself as short as possible. That is, in the interrupt I increment a delta variable whose width equals the native word size of the microcontroller (one byte on an eight-bit controller). This way there is no need to save additional registers on the stack.
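As a sketch (not tied to any particular architecture; the names `timer_isr` and `delta` are my own), the interrupt handler reduces to a single increment of a word-sized counter:

```c
#include <stdint.h>

/* One machine word on an 8-bit controller: the increment needs no
 * extra register saves, so the ISR stays as short as possible. */
volatile uint8_t delta = 0;

/* Hardware-timer overflow handler. On real hardware this function
 * would carry the toolchain's ISR attribute or vector pragma. */
void timer_isr(void)
{
    delta++;   /* wraps modulo 256 by unsigned arithmetic */
}
```

On an AVR, for example, the whole body compiles down to a load, an increment, and a store, without touching any other registers.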
In the main program I declare a global copy of this software counter, delta_. In the main loop I disable all interrupts, copy the value from delta into delta_, zero delta, and re-enable interrupts. As a result, the global variable delta_ holds the execution time of the previous pass of the main loop. This variable can then be used for further operations: adding to the current time, counting long delays, and feeding the time step of the differential-equation math.
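The main-loop side of the same sketch might look like this (the interrupt enable/disable macros are placeholders for whatever intrinsic the target toolchain provides, e.g. cli()/sei() on AVR or __disable_irq()/__enable_irq() on Cortex-M; the counter is redeclared here so the fragment stands alone):

```c
#include <stdint.h>

/* The single byte the hardware-timer ISR increments. */
volatile uint8_t delta = 0;

/* Placeholder interrupt control; replace with the target's intrinsics. */
#define DISABLE_INTERRUPTS()  ((void)0)
#define ENABLE_INTERRUPTS()   ((void)0)

uint8_t  delta_ = 0;   /* ticks elapsed during the previous loop pass */
uint32_t uptime = 0;   /* wide accumulator for long delays */

void main_loop_tick(void)
{
    DISABLE_INTERRUPTS();
    delta_ = delta;    /* snapshot the ISR counter...       */
    delta  = 0;        /* ...and let it count again from 0  */
    ENABLE_INTERRUPTS();

    /* Widening into a large counter happens here, outside the ISR;
     * delta_ can also feed delay timers and the dt of the math. */
    uptime += delta_;
}
```

The critical section is only two assignments long, so interrupts are blocked for just a few instructions per loop pass.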

What are the advantages here?

The undoubted advantage is the speed of the hardware-timer interrupt, and hence the ability to set a somewhat higher interrupt frequency.
No accumulated error in the time count.
Independence from the main-program execution time and from peripheral interrupts.

What about the cons?

The downside shows up when the event-timing accuracy requirements are high and the main program loop takes a long time to execute. In that case the software counter delta may overflow; its width would have to be increased, which in turn lengthens the hardware-timer interrupt.
Somewhat more program code.
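To make the margin concrete (my own illustration, not from the original post): a one-byte delta wraps after 255 ticks, so the timer tick period bounds the longest main-loop pass you can tolerate.

```c
#include <stdint.h>

/* Longest tolerable main-loop period, in microseconds, before an
 * 8-bit delta counter wraps, given the hardware-timer tick period. */
uint32_t max_loop_period_us(uint32_t tick_period_us)
{
    return 255u * tick_period_us;   /* UINT8_MAX ticks fit in one byte */
}
```

With a 1 ms tick this gives 255 ms of headroom, comfortably above the 30-40 ms loop times mentioned below; with a 100 µs tick the margin shrinks to 25.5 ms, and a wider delta would be needed.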

Final words

I had to use the proposed time-counting algorithm in a project where the required timing accuracy was 1 ms, time delays from 25 ms to several minutes had to be counted, integrator stages had integration times from 1 second to 1 hour, differentiator stages had time constants from 1 second to 1 hour, and the main loop execution time ranged from 30 to 40 ms.

Thank you for your attention.
