The Engineer's Lament

Original author: Jack Ganssle
  • Translation

Does your spouse understand how you think?


Engineering is an analytical profession: if everything is done correctly, the results can be verified by cold calculation. That's a given. How you feel about something doesn't matter; only the result does.

A recent article by Malcolm Gladwell in The New Yorker raises this issue. It examines the role engineers play in automotive recalls. Remember the Pinto disaster? A rear-end collision could turn the car into a fireball. What was wrong with those engineers?

It turns out the numbers just didn't support the passionate cries for change. The big case, brought after three teenage girls died when their Pinto burned, was won by Ford.

The article gives non-engineers an excellent sense of how we think, how we make decisions, and, more importantly, how we look at the world. It draws a striking contrast between numbers-oriented people and the many others who make decisions based on how they feel.

Of course, our analytical bent is not always appropriate. When we were raising children, my wife asked why I always thought about what could go wrong with them. I replied: "I'm trained in worst-case analysis."

EEs, like many firmware people, have spent the last four decades straddling the gap between designing circuits and writing the code that drives those circuits. The hardware side forgives neither emotion nor wishful thinking: electrons move exactly as theory predicts. Software, by contrast, remains stuck in the "but does it work?" phase.

In hardware we have a body of knowledge: Ohm's law, Maxwell's equations, transistor physics. We can analyze hFE and other parameters to predict a transfer function, and we can compute the Q and resonant frequency when designing a tank circuit to meet a requirement.

The software world is murkier. It resists prediction. Take a simple question: how long will this interrupt service routine, written in C, take to run? Most of us cannot predict that. Fortunately, it can be measured. Unfortunately, few do. How do you convert requirements into the amount of flash needed? How can you predict the size of a stack or heap?

In hardware we can analyze the corner cases. Extreme temperatures, tolerances and fits, mating dimensions: all of it is mathematically tractable.

Software is not so tractable. Its flaws hide, surfacing in the most unexpected places. An interrupt storm is hard to analyze. Task scheduling is not deterministic.

Then there are the features. They sweep through software like a California wildfire. Rarely are they handled to an engineering standard, which means cold, hard analysis. Unfortunately, software processes are very difficult to study. The academic literature is full of papers on different ideas, but the vast majority report experiments on a tiny code base built by a handful of developers, mostly students with little experience, and certainly not in the real world. Engineers are rightly skeptical of conclusions drawn from these toy experiments.

Yet we do have a lot of data that most of the software engineering community doesn't know about.

What is the best way to develop code? The question probably doesn't even make sense, given the enormous range of applications being built. A program that one person will run twice has completely different needs from the one controlling the engines on an A380.

Consider the agile community. There are dozens of agile methods. Which of them work? Which is best? Nobody knows. In fact, there is little reliable data (outside of toy experiments) on the effectiveness of agile methods versus other approaches. That is definitely not a reason to throw agile out, since I believe (absent analytical data) that some of the agile ideas are simply brilliant.

Some people say that, sure, we lack data about agile, but the same is true of other methods. There is a lot of truth in that: EE had centuries to develop its theory, to build on the work of Georg Ohm and others. Software is comparatively new. We are still waiting for the Theory of Software Everything.

But I think the question can be reframed. For example: "What are the two most effective ways to reduce defects and speed up development?"

I wonder what your answer would be.

It's quite simple: formal inspections and static analysis. Why? We have the data. In fact, we have tens of thousands of data points. One source is The Economics of Software Quality by Capers Jones and Olivier Bonsignour, but there are many others.

We know that cyclomatic complexity really is a good way to gauge aspects of testing effectiveness.

We know, in fact, that the average programming team fails to remove 15% of the defects it injects. We know that the typical firmware team misses a third of the defects it makes.

We have data, a piece of Ohm's law for software development. Yet in the embedded world only about 2% of teams use this data to steer their development methods.

I believe there is plenty of data out there, and I encourage developers to use it and apply the results in their daily work.

W. Edwards Deming said it best: "In God we trust; all others must bring data."
