Testing of embedded systems is a topic that, for some reason, gets surprisingly little attention.

    I was prompted to write this by an article with a similar title, my latest visit to Embedded World, and my own development experience in this area.

    For some reason, when people talk about testing in relation to embedded systems, they almost always mean a platform that lets you "cut off" the embedded part itself, so that the written code can be tested independently of the hardware platform.

    Of course, this approach has its uses, and you can test and find a lot with it, but...

    Here is a simple system as an example: a microcontroller and an infrared temperature sensor connected to it via I2C. How do we test it?

    What can be virtualized here without the test losing all meaning, if the code essentially boils down to initializing the I2C peripheral and implementing the communication protocol with the sensor itself (plus locking access to the resource in the case of a multitasking environment)?
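To make the point concrete, here is roughly everything such a driver amounts to, as a host-side sketch. The register address, I2C address, and 0.02 K scaling are assumptions loosely modeled on a typical IR thermometer chip, not taken from the article; the bus is injected so the code runs without hardware, which is exactly the problem: once the bus is mocked out, almost nothing of substance is left to test.

```python
import threading

class IRTempSensor:
    """Minimal driver: 'initialization' boils down to storing a bus handle,
    the 'protocol' to one register read, plus a lock for multitasking."""

    TEMP_REGISTER = 0x07              # hypothetical register address

    def __init__(self, bus, address=0x5A):
        self.bus = bus                # anything with read_word(addr, reg)
        self.address = address
        self._lock = threading.Lock()

    def read_celsius(self):
        with self._lock:              # block concurrent access to the shared bus
            raw = self.bus.read_word(self.address, self.TEMP_REGISTER)
        return raw * 0.02 - 273.15    # raw units of 0.02 K, per the assumed protocol

# Once the bus is mocked, this is all a hardware-free test can exercise:
class FakeI2CBus:
    def read_word(self, addr, reg):
        return 14745                  # 294.9 K, i.e. 21.75 degrees C

sensor = IRTempSensor(FakeI2CBus())
```

Such a test verifies the arithmetic and the locking, but says nothing about whether the real sensor, wiring, or I2C peripheral configuration actually work.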

    In my opinion, for proper testing we need to be able both to read the temperature value from the sensor and, by some external means, to obtain the real temperature of the environment and/or the object the sensor is pointed at, and then compare the two. That is, proper "end-to-end" testing simply cannot be done without a real board with the controller and sensor on it, plus a communication interface to the outside world. In the most extreme case, we can assume that the temperature in a room where people work will be in the range of 18-30 degrees, and check that the received value falls into this interval. But if you need to verify accuracy, then, alas, you cannot do without a thermal chamber.
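That "last resort" sanity check is one line of code; a sketch, with the 18-30 degree bounds taken from the paragraph above:

```python
def plausible_room_temperature(celsius, low=18.0, high=30.0):
    """Weakest possible end-to-end check: a reading from the real sensor
    should at least land in a range plausible for an occupied room."""
    return low <= celsius <= high
```

It catches gross failures such as wiring mistakes or a broken protocol (readings of -273 or 1000 degrees), but says nothing at all about accuracy.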

    A real-life example: we once had to work with the ADG2128 chip, an 8x12 switching matrix with I2C control. As it turned out, the chip had an undocumented glitch: its I2C front end "woke up" the chip not only when it received its address at the beginning of a packet, but whenever its address byte appeared on the bus at all, even in the middle of a transfer. And I2C is, after all, designed to have several devices hanging on the same bus. So picture this: communication is going on with another device on that bus, a byte matching the ADG's address pops up in the middle of the exchange, the chip wakes up and starts driving its own data onto the bus... In short, it was an interesting bug, and the fix for it was a rather peculiar crutch, albeit one that worked in the end.
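The described glitch can be modeled in a few lines. This is a deliberately simplified illustration, not real I2C: a transfer is just a list of bytes whose first byte is the address phase, and START/STOP conditions and the R/W bit are ignored; the address value 0x71 is hypothetical.

```python
ADG_ADDR = 0x71  # hypothetical address byte for the ADG2128

def correct_slave_hears(transfer, own_addr):
    """A well-behaved slave reacts only to the address phase at the start."""
    return transfer[0] == own_addr

def buggy_slave_hears(transfer, own_addr):
    """The observed glitch: the chip woke up whenever its address byte
    appeared anywhere on the bus, even as payload data mid-transfer."""
    return own_addr in transfer

# A transfer addressed to a *different* device (0x2A) whose payload
# happens to contain the ADG's address byte:
transfer = [0x2A, 0x10, ADG_ADDR, 0x55]
```

A correct slave stays silent on this transfer; the buggy model activates and would start driving the bus, producing exactly the conflict described above. The point is that no amount of host-side mocking would have produced this failure, because the bug lives in the silicon, not in the code.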

    So: how could a glitch like this ever be "caught" with a testing approach that does not involve the embedded system itself, with a "live" chip on it?

    A few more examples from the life of embedded systems: after adding yet another feature, the controller runs out of memory. Adding new functionality leads to race conditions or deadlocks. Or incorrect, but entirely possible in reality, user actions from the "connected the device to the wrong place / in the wrong way / at the wrong time" series lead to similar consequences. Or the wrong configuration is sent to the device. Or the device itself, in a certain configuration, starts drawing more current than USB can provide. Or, when the device is connected to a laptop running on battery power, there is no connection between the device ground and mains earth, and the measurement becomes wildly inaccurate due to a bug in the circuit design...

    In my opinion, normal, "full-fledged" testing becomes possible only by developing yet another device, a real "iron" one, which emulates all the necessary external effects on the device under test; and the test framework must be able to control both the device under test and this stimulus-emulation device.

    When we were developing a DSLAM (a telecommunications device with an Ethernet uplink on one end and 32/64/128 DSL modems on the other), the test bench looked roughly like this: 64 modems connected to 64 ports of an L2/L3 traffic generator, and the uplink connected to yet another of its ports. The test script configured the DSLAM and the traffic generator, launched traffic, and checked the results.
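The shape of such a script can be sketched as follows. All class and method names here are hypothetical stand-ins invented for illustration (the real bench drove a commercial traffic generator); the fakes exist only to keep the sketch runnable.

```python
class FakeDSLAM:
    """Stand-in for the DUT's management interface."""
    def __init__(self):
        self.enabled = set()
    def enable_port(self, port):
        self.enabled.add(port)

class FakeTrafficGen:
    """Stand-in for the L2/L3 traffic generator."""
    def __init__(self):
        self.rx = {}
    def send(self, src_port, dst_port, frames):
        self.rx[dst_port] = self.rx.get(dst_port, 0) + frames
    def received(self, port):
        return self.rx.get(port, 0)

def run_dslam_traffic_test(dslam, gen, modem_ports, uplink_port, frames=1000):
    for port in modem_ports:          # 1. configure the DUT
        dslam.enable_port(port)
    for port in modem_ports:          # 2. launch traffic toward the uplink
        gen.send(port, uplink_port, frames)
    # 3. check the results: every frame sent must arrive at the uplink
    return gen.received(uplink_port) == frames * len(modem_ports)

ok = run_dslam_traffic_test(FakeDSLAM(), FakeTrafficGen(),
                            modem_ports=list(range(64)),
                            uplink_port="uplink")
```

The configure / stimulate / verify structure is the whole pattern; on the real bench each fake is replaced by a driver talking to actual hardware.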

    When we were developing a multi-channel application-specific oscilloscope, the test device looked like this: a box with 4 independent outputs connected to the inputs of the oscilloscope under test. Each output could emulate any of the sensors supported by the oscilloscope (such as current clamps or pressure sensors) and output the values a real sensor would produce. The test scenario: set a combination of sensors and generated values on the outputs, configure the device under test (select the sensor, range, etc.), take measurements with it, and compare them with the generated values.
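The same pattern, sketched for this bench. Again, every name is a hypothetical stand-in, and the fake scope "measures" by reading the box output with a small injected error, purely so the sketch runs without hardware.

```python
class FakeSimulatorBox:
    """Stand-in for the 4-output sensor-emulation box."""
    def __init__(self):
        self.outputs = {}
    def set_output(self, channel, sensor, value):
        self.outputs[channel] = value

class FakeScope:
    """Stand-in for the oscilloscope under test."""
    def __init__(self, box):
        self.box = box
    def configure(self, channel, sensor, rng):
        pass  # a real DUT would select the sensor type and range here
    def measure(self, channel):
        return self.box.outputs[channel] * 1.001  # 0.1% injected error

def run_channel_test(box, scope, channel, sensor, value, rng, tol=0.01):
    box.set_output(channel, sensor, value)        # emulate a real sensor
    scope.configure(channel, sensor, rng)         # configure the DUT
    measured = scope.measure(channel)
    return abs(measured - value) <= tol * abs(value)  # compare, 1% default

box = FakeSimulatorBox()
result = run_channel_test(box, FakeScope(box), channel=1,
                          sensor="current_clamp", value=10.0, rng="20A")
```

Iterating this over every channel, sensor type, and range combination is what turns one bench into a full regression suite.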

    All of this was integrated into the CI system: the current build was compiled and flashed onto the device, after which the testing described above began.

    The systems were used both at the development stage, for regression testing, and at the production stage, for testing each new device before sending it "into the field".

    Undoubtedly, such an approach is expensive, but for a long-lived, complex, multi-functional project there is, it seems to me, no alternative. Without it, you are on a direct road to the "testing death loop": the number of required "manual" tests grows as new functions are added, until even the simplest change in the code can no longer be made quickly: one hour for the change or bugfix and a week for manual regression testing. And a week is not a joke, alas.

    Now we are turning the testing system itself into a more or less universal modular product; we shall see whether anyone else needs it...
