# Richard Hamming: Chapter 18. Modeling - I

- Translation

“The goal of this course is to prepare you for your technical future.”

Hi, Habr. Remember the awesome article "You and Your Work" (+219, 2,442 bookmarks, 394k reads)?

Well, Hamming (yes, that Hamming, of the error-detecting and error-correcting Hamming codes) has a whole book based on his lectures. We are translating it, because the man talks sense.

This book is not just about IT, it is a book about the thinking style of incredibly cool people.

*“This is not just a charge of positive thinking; it describes the conditions that increase the chances of doing a great job.”*

We have already translated 27 (out of 30) chapters. And we are working on a print edition.

### Chapter 18. Modeling - I

*(Thanks to Valentin Pinchuk, who responded to my call in the previous chapter, for the translation.) If you would like to help with the translation, write me a PM or email magisterludi2016@yandex.ru*

An important use of computers in our time, besides entering and editing text, graphics, programming, and so on, is modeling.

Modeling is the answer to the question: “What if ...?”

What if we do this? What if this is what happened?

More than 9 out of 10 experiments today are performed on computers. I have already mentioned my serious concern that we depend more and more on modeling and investigate reality less and less, and seem to be approaching the old scholastic attitude: what is written in the textbooks is reality and needs no constant experimental check. But I will not dwell on that issue here.

We use computers for modeling because it is:

- first, cheaper;
- second, faster;
- third, usually better;
- fourth, able to do what cannot be done in any laboratory.

The first two points state that even counting the money and time spent on programming, with all its mistakes and other shortcomings, modeling is still much cheaper and faster than obtaining the required laboratory equipment. Moreover, if you have ordered expensive, high-quality laboratory equipment in recent years, within 10 years you will find it has to be written off as obsolete. These arguments do not apply when the situation is continuously monitored and the laboratory equipment is in constant use. But let it sit idle for a while, and suddenly it no longer works properly! This is called "shelf life", but sometimes it is the "shelf life" of the skills needed to use it rather than of the equipment itself! I have been convinced of this from my own experience all too often.

On the third point: we can often get more accurate data from modeling than from direct measurement in the real world. Field, or even laboratory, measurements of the required accuracy are often hard to obtain in a dynamic setting. In addition, in modeling we can often sweep a much wider range of the independent variables than any laboratory setup allows.

On the fourth point, probably the most important of all: simulation can do what no experiment can.

I will illustrate these points with concrete situations in which I personally took part, so you can see how modeling might be useful to you. I will also point out some of the details through which those with little modeling experience can gain more insight into how to approach it, since it is unrealistic to have you carry out simulations that would take years to complete.

The first big computations I took part in were at Los Alamos during World War II, when we were designing the first atomic bomb. There was no way to run the experiment at a smaller scale: either you have a critical mass or you do not.

Without going into secret details, I remind you that one of the two designs was spherically symmetric and was initiated by surrounding explosives, Fig. 18.I.

The entire volume of the bomb material was divided into concentric spherical shells. We set up the equations for the forces acting on each shell (on both its sides), together with the equations of state, which described, among other parameters, the density of the material as a function of the pressure on it. The time axis was then divided into intervals of 10⁻⁸ s. For each time interval we computed, with the machines, how each shell would move and what would happen to it under the forces applied to it. There was, of course, a separate treatment of the shock wave passing into this region from the surrounding explosive. But all the laws were, in principle, well known to the experts in their respective fields. The pressures involved, however, were so extreme that much could only be estimated.

*Fig.18.I.*
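The shells-and-time-steps scheme just described can be sketched generically. The sketch below is purely illustrative (a toy 1-D chain of unit-mass "shells" coupled by linear springs standing in for pressure, with a simple explicit update); it is not the actual bomb physics, only the shape of the computation: compute the forces on every element, advance every element one small time interval, repeat.

```python
# A minimal, illustrative sketch of the "divide into shells, step through
# time" scheme: forces on each element, then an explicit update, repeated
# over many small time intervals. Toy spring-chain physics, invented here.

def step(positions, velocities, dt, k=1.0, rest=1.0):
    n = len(positions)
    forces = [0.0] * n
    # "pressure" between neighbouring shells, modelled as linear springs
    # with stiffness k and rest length `rest`
    for i in range(n - 1):
        f = k * ((positions[i + 1] - positions[i]) - rest)
        forces[i] += f
        forces[i + 1] -= f
    # explicit update of every shell over one time interval dt
    for i in range(n):
        velocities[i] += forces[i] * dt
        positions[i] += velocities[i] * dt
    return positions, velocities

# initial state: four shells, the innermost one given an inward push
pos = [0.0, 1.0, 2.0, 3.0]
vel = [0.5, 0.0, 0.0, 0.0]
for _ in range(1000):            # the same calculation, repeated many times
    pos, vel = step(pos, vel, dt=0.01)
```

The point of the sketch is structural: the inner loop is identical for every shell and every time step, which is exactly the repetitiveness discussed below.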

This illustrates the main point I want to dwell on: you must have extensive, deep expert knowledge of the subject area. Indeed, I am inclined to regard the many courses you have already taken, and those still ahead of you, as the chief means of acquiring such expert knowledge. I want to stress this obvious need for domain expertise because too often I have seen modeling experts ignore this basic fact and believe they can safely carry out a simulation on their own. Only a subject-matter expert can know whether what you could not include in the model is vital to the accuracy of the simulation or can safely be neglected.

Another important point is that in most cases modeling is a stage that is repeated again and again, many times, with the same program; otherwise the effort of writing it would hardly pay off. In the case of the bomb, the same calculations were performed for each shell and then for each time interval: a myriad of repetitions. In many cases the computational power of the machine far exceeds our capacity to program it, so it pays to look for the repetitive parts of the upcoming simulation early and deliberately, and, where possible, to organize the modeling around them.

Modeling for weather forecasting is very similar to the bomb problem. Here the atmosphere is broken into large blocks of air, and for each block the values of cloud cover, albedo, temperature, pressure, humidity, wind speed, etc., are initialized; see Fig. 18.II.

Then, using standard atmospheric physics, we track the changes of each block over a small time interval. This is the same element-by-element calculation as in the previous example.

However, there is a significant difference between the two problems, the bomb and the weather forecast. For the bomb, small deviations in the simulated process do not significantly affect the overall performance, but the weather, as you know, is very sensitive to small changes. It is said that even the flap of a butterfly's wings in Japan can influence whether a storm strikes a country and how severe it will be.

*Fig.18.II*

This is a fundamental topic I must dwell on. If the simulation has a margin of stability, in the sense that its overall behavior resists small changes, then the simulation can be quite realistic; but if small changes in some details can lead to very different results, then accurate modeling is difficult. Of course, the weather also has long-term stability: the seasons follow their appointed rounds regardless of small deviations. Thus there is both short-term (day-to-day) instability and long-term (year-to-year) stability in the weather. And the ice ages show that there are even longer-term weather instabilities, and, no doubt, still longer-term stabilities!

I have run into many problems of this kind. It is often very difficult to determine in advance whether stability or instability will dominate the problem, and hence to judge the chances of getting the desired results. When you undertake a simulation, study this aspect of the problem carefully before diving too deep, so as not to discover later, after spending much effort, money, and time, that you cannot obtain acceptable results. Thus there are situations that are easy to model, situations that practically cannot be modeled at all, and the rest, which lie between these two extremes. Be careful about what you claim modeling can do!

When I joined Bell Telephone Laboratories in 1946, I was soon involved in the early design stages of the very first NIKE guided missile system. I was sent to the Massachusetts Institute of Technology to use their differential analyzer RDA #2. There I learned how the parts of the analyzer fit together and picked up a great many tips from specialists far more sophisticated in running simulations.

The initial design called for an inclined launch of the missile. The variational equations gave me the ability to fine-tune various components, such as the wing size. I should mention that computing one trajectory took about half an hour, and it took me about as long again to commit to computing the next launch. So I had plenty of time to observe and to reflect deeply on why things went the way they did. Within a few days I gradually developed a "feel" for the missile's behavior, for why it behaved as it did under the different guidance laws I applied.

In time I came to the conclusion that a vertical launch was always best. A quick exit from the dense lower layers of air into the rarefied ones was the best strategy: I could well afford to pay for air resistance later, when the commands were issued to tip the trajectory over. In doing so, I found that I could significantly reduce the size of the wings. I also understood quite well that the equations and constants given to me for assessing the effects of changes in the missile's structure could hardly stay accurate over such a wide range of parameters (they never showed me the original equations, but I could guess). So I phoned for advice and found that I was right: I should come home and get new equations.

After some delay, owing to other users wanting their time on RDA #2, I was soon back at work, more experienced and more sophisticated. I continued to develop a feel for the missile's behavior; I had to "sense" the forces acting on it under the various trajectory-shaping programs. And the waiting time, as the solution slowly emerged on the plotter, gave me the chance to understand what was happening. I often wonder what would have happened if I had had a modern high-performance computer. Would I ever have acquired that feel for the missile on which so much of the final design depended? Would the extra hundreds of trajectories have taught me as much? I simply do not know. But it is precisely for this reason that, to this day, I am suspicious of piling up masses of computations without careful thought about what you have obtained. A volume of results seems to me a poor substitute for a feeling of insight into the simulated situation.

The results of those first runs led us to choose a vertical launch (which spared us needless ground equipment in the form of a circular launcher rail and other devices), simplified the design of many other components, and reduced the wings to about 1/3 of the size I had originally been given. I found that large wings, while in principle providing greater maneuverability, increase the air resistance in the early part of the trajectory so much that the result is a lower flight speed, and hence less maneuverability in the final approach to the target.

Of course, the early modeling stages used a simple atmospheric model, with density decreasing exponentially with height, and other simplifications, which were replaced at later stages. This gave me another conviction: using simple models in the early stages lets you grasp the system as a whole, something that will inevitably be obscured in any full-scale simulation. I strongly recommend starting with simple models and then developing them into fuller, more accurate ones, so that an understanding of the essentials can come as early as possible. Of course, when choosing the final design you must account for every nuance that may affect it. But (1) start as simply as you can, provided you include all the main effects, (2) get an overall feel for the system, and (3) only then work up to the finer details.
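The kind of "simple first model" mentioned here fits in a few lines. The sketch below is an exponential atmosphere; the scale height H ≈ 8000 m and sea-level density 1.225 kg/m³ are standard textbook values, used purely for illustration, not the values from the NIKE work.

```python
import math

def air_density(h, rho0=1.225, H=8000.0):
    """Approximate air density (kg/m^3) at altitude h metres:
    an exponential fall-off with scale height H."""
    return rho0 * math.exp(-h / H)

# A vertically launched rocket escapes the dense layers quickly: at 16 km
# the density, and with it the drag at a given speed, is down by e^2 (~7.4x).
print(air_density(0.0))      # 1.225
print(air_density(16000.0))  # ~0.166
```

A model this crude is obviously wrong in detail, which is exactly the point: it is transparent enough to reveal why a fast vertical climb beats an inclined one, after which the finer atmospheric models can be swapped in.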

Guided missiles were among the earliest studies of supersonic flight, and that was another large uncertainty. The data from the only two supersonic wind tunnels available to us were frankly contradictory.

Guided missiles led naturally to space flight, where I was involved less in the modeling itself and more as an outside consultant and in the initial planning of the project schedule, the so-called cyclogram.

Another early simulation I recall was the design of a traveling-wave tube. Again, on the primitive relay machines I had plenty of time to think, and I realized that, as the calculations proceeded, I could work out what shape the tube should be given, instead of the traditional constant diameter. To understand how, consider the basic design of a traveling-wave tube. The idea is that the input wave is sent along a helix tightly wound around a hollow tube, so that the effective speed of the electromagnetic wave along the tube is greatly reduced. An electron beam is then sent along the tube's axis.

The beam initially moves faster than the wave traveling along the helix. The interaction of wave and beam slows the electron beam down, which means energy is transferred from the beam to the wave: the wave is amplified! But obviously at some point along the tube their speeds become roughly equal, and from there further interaction only makes things worse. So I had the idea that if the tube's diameter were gradually increased (and with it the path the wave travels through the turns of the helix - translator's note), the beam would again be faster than the wave and more energy would pass from beam to wave. Indeed, on each computation cycle it was possible to calculate the ideal tube profile.

I also made some unpleasant discoveries. As a rule, the equations actually in use were local linearizations of more complex nonlinear equations. Somewhere around the twentieth to fiftieth step of the calculation I could estimate the neglected nonlinear component. On some projects I found, to the researchers' amazement, that the estimated nonlinear component was larger than the computed linear one, thereby killing the approximation and stopping a useless calculation.
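This sanity check can be sketched in a few lines. The dynamics and the coefficients `a` and `b` below are invented purely for illustration: the linearized model keeps the term `a*y` and drops a quadratic term `b*y*y`; as the calculation proceeds we periodically compare the two and stop as soon as the dropped term overtakes the kept one.

```python
def steps_until_breakdown(a=1.0, b=0.05, y=0.1, dt=0.1, max_steps=500):
    """Integrate a toy linearized model y' = a*y, while estimating the
    size of the neglected nonlinear term b*y^2 at each step.
    Returns the step at which the linearization breaks down, or None."""
    for step in range(1, max_steps + 1):
        linear = a * y            # the term the linearized model keeps
        neglected = b * y * y     # estimate of the dropped nonlinear term
        if abs(neglected) > abs(linear):
            return step           # the approximation is no longer valid
        y += linear * dt          # advance using the linear model only
    return None                   # linearization held for the whole run

print(steps_until_breakdown())    # breakdown once y grows past a/b
```

The check costs almost nothing compared with the run it can save, which is the point of the anecdote: an estimate of what the model leaves out is as valuable as the model itself.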

Why tell this story? Because it vividly shows that an inquiring mind can help in modeling even when you work alongside experts in a field where you are an amateur. By getting your hands on every small detail, you have a chance to see what others have missed and make a significant contribution, and to save machine time as well! How often I have found omissions in a simulation that the users of its results would hardly have recognized.

There is an important step you must take, and I want to emphasize it: master the special jargon. Every specialty has its own jargon, which tends to hide what is going on from outsiders, and sometimes from insiders! Watch out for jargon: learn to recognize it as a special language for easing communication about a narrow class of things or events. But it hinders thinking outside the original area for which it was designed. Jargon is both a necessity and a curse. Understand that you must strain your brains to get its benefits and avoid its traps, even in your own area of expertise!

Over the long ages of evolution, cave dwellers apparently lived in groups of roughly 25 to 100 people. Outsiders, as a rule, were not welcome, though we believe this did not extend to abducted wives. Comparing the many millennia of that evolution with the brief span of civilization (less than ten thousand years), we see that evolution has mainly selected us to keep strangers out, and one way of doing so is special private languages. Thieves' cant, in-group slang, the private language of husband and wife built from words, gestures, and even raised eyebrows: all are examples of a private language used to exclude outsiders. Hence this instinctive reach for jargon when a stranger appears must always be consciously resisted; we now work in far larger groups than the cave dwellers did.

Mathematics is not always the magic language you need. To illustrate, let me return to the sea-intercept simulation I mentioned in passing, equivalent to a system of 28 first-order differential equations. But first I must set the scene. Ignoring all but the essential part, consider the problem of solving a single differential equation

**y′ = f(x, y), with |y| ≤ 1**, see Fig. 18.III.

Keep this equation in mind while I describe the real problem. I programmed the real problem, a system of 28 differential equations, to obtain the solution, and then limited some values to 1, as if it were a voltage constraint. Over the objections of the consultant, a friend of mine, I insisted that he participate fully with me in the binary programming of the problem, and I explained to him what was happening at each stage. I refused to pay until he did, so he had no choice! We came to the limits in the program, and he said: "Dick, that is a limiter, not a voltage limit," meaning that the constraint had to be imposed at every step of the calculation, not on the final result. It is the best example I know of how two people can each understand exactly what is meant and yet attach entirely different meanings to the same words.

*Fig.18.III.*

Had we not caught this error, I doubt that any real, live experiments with actual aircraft would have revealed the loss of maneuverability that my interpretation produced. That is why, to this day, I insist that someone with a deep understanding of what is being modeled take part in the detailed programming. If you do not, you may run into similar situations, where both the consultant and the programmer know exactly what is meant, yet their interpretations differ so much that they lead to completely different results!
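The two readings of "|y| ≤ 1" can be shown in miniature. The right-hand side below is invented for illustration (it is not the intercept problem's equations): the same crude Euler integration is run once with the bound enforced at every step, a limiter acting inside the dynamics, and once with the bound applied only to the final answer. The two answers differ substantially.

```python
import math

def f(x, y):
    # a toy right-hand side, chosen only so the trajectory wants to
    # overshoot the bound and then come back
    return 3.0 * math.cos(x)

def integrate(clamp_each_step, x_end=2.0, dx=0.001):
    """Euler-integrate y' = f(x, y) from y(0) = 0, with |y| <= 1 read
    either as a per-step limiter or as a clip on the final result."""
    x, y = 0.0, 0.0
    while x < x_end:
        y += f(x, y) * dx
        if clamp_each_step:
            y = max(-1.0, min(1.0, y))   # limiter: imposed at every step
        x += dx
    return max(-1.0, min(1.0, y))        # bound imposed on the result

print(integrate(True))    # ~0.73: the limit shaped the whole trajectory
print(integrate(False))   # 1.0: the free solution, merely clipped at the end
```

With the limiter, y rides at the bound while it is pushed upward and then falls away from it; clipping only the output throws that history away. Both programmers "knew exactly what was meant", yet the two interpretations give different numbers.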

You should not get stuck on the idea that modeling deals only with functions of time. One of the problems I was assigned to study, on a differential analyzer we assembled from old parts of the M9 anti-aircraft fire-control system, was to compute the probability distribution of blocked calls in a telephone central office. Never mind that this gave me an infinite system of coupled linear differential equations, each describing the probability of a given number of calls at the central office as a function of the total load. I had somehow to manage on a finite machine, which had, as I recall, only 12 integrators.

I handled it by the analogue of terminating a network in its input impedance. Taking the difference between the last two computed probabilities, I assumed it was proportional to the difference between the next two (with a reasonable proportionality constant derived from the differences of the two preceding functions). In this way the contribution of the next, uncomputed equation could be accounted for fairly accurately. The results were in demand in the switching department and, I believe, impressed my boss, who until then had a low opinion of computers.
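One reading of this truncation trick, stripped to its simplest form: compute only the first N terms of an infinite family and estimate everything beyond by assuming the terms continue in the ratio of the last two computed, i.e. a geometric tail. The sketch below applies it to Poisson probabilities, where the exact total is known to be 1; the choices lam = 6.0 and N = 12 are illustrative (N = 12 merely echoes the 12 integrators), and the geometric tail stands in for Hamming's difference-based correction.

```python
import math

lam, N = 6.0, 12
# the first N terms of an infinite family of probabilities
p = [math.exp(-lam) * lam**k / math.factorial(k) for k in range(N)]

partial = sum(p)              # what the truncated system alone gives
r = p[-1] / p[-2]             # ratio of the last two computed terms
tail = p[-1] * r / (1.0 - r)  # geometric estimate of all uncomputed terms

print(partial)                # falls short of the true total, 1
print(partial + tail)         # noticeably closer to 1
```

The tail estimate is not exact (the true ratios keep shrinking), but it recovers most of what the truncation threw away, which is all the 12-integrator machine needed.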

There was also underwater-sound modeling, notably for an acoustic array installed in the Bahamas by a friend of mine, who of course had to go there often, in the depths of harsh winter, to check everything and take new measurements (he is joking - translator's note). Many simulations of transistor design and behavior were carried out as well.

We also modeled microwave relay stations with their receiving horns, and in particular how a pulse entering one end of a chain of relay stations behaves as it passes along the whole chain, and what this means for the stability of the entire system. Even if each station recovers quickly from a pulse, the pulse might conceivably grow as it crosses the continent. Each relay station was stable in time, in the sense that a pulse died out; but the question of spatial stability remained open: might a random pulse grow without bound as it crosses the continent? I called this the problem of "spatial stability." We had to know the conditions under which this could or could not happen, and hence the modeling was necessary.
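The spatial-stability question, in its very simplest form: suppose each station passes the pulse on with some effective gain g. This single number is a made-up summary (the real question involved the full pulse shape and the station dynamics), but it shows why temporal stability at each station says nothing about the chain as a whole: the peak grows geometrically whenever g > 1.

```python
def peak_after(n_stations, g):
    """Peak amplitude of a unit pulse after passing n_stations relay
    stations, each with effective per-station gain g (toy model)."""
    peak = 1.0
    for _ in range(n_stations):
        peak *= g
    return peak

print(peak_after(100, 0.99))   # dies away while crossing the continent
print(peak_after(100, 1.01))   # grows steadily from station to station
```

With a hundred stations, even a one-percent net gain per hop compounds into a large amplification, so the per-station behavior had to be pinned down by modeling, not guessed.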

I hope you see that, in principle, any situation that admits some mathematical description can be simulated. In practice, though, you must be very careful when modeling unstable situations. In chapter 20 I will describe one extreme case I had to solve. It was very important to Bell Telephone Laboratories, which meant, at least for me, that I had to get a solution, whatever excuses I might give myself that it was impossible. Answers to important problems can always be found if you are determined to get them. They may not be perfect, but in a deadlock something is better than nothing, provided it can be trusted!

Mistakes in modeling have very often forced good ideas to be abandoned! Yet little about them can be found in the literature, since they were very, very rarely reported. One well-known erroneous model, widely publicized before its mistakes were uncovered by others, was the world model created by the so-called Club of Rome. It turned out that the equations they had chosen were bound to show a catastrophe regardless of the initial data or the choice of most of the coefficients! And when others obtained these equations and tried to repeat the calculations, serious errors were found in them! I will return to this aspect of modeling in the next chapter, because it is a very serious question: reporting things that make people believe what they want to believe, even when the model does not in fact show them.

*(In the early 1970s, at the Club's suggestion, Jay Forrester, the founder of system dynamics, applied his computer modeling methodology to world problems. The results were published in the book World Dynamics (1971). It argued that the continued development of humanity on the physically limited planet Earth would lead to an ecological catastrophe in the 2020s. Despite numerous debates over particular points, the results were on the whole accepted by the world community. Why the author insists on the fallacy of this model is unclear; perhaps it reflects some personal relationship between the two scientists, whose paths crossed while Hamming was using resources at MIT, where Forrester worked. At the same time, the point about modelers' unwillingness to share their mistakes with the outside world is quite fair; more often the authors later refer to the limited domain of applicability of the erroneous model. - translator's comment)*

*To be continued ...*

*If you would like to help with the translation, layout, and publication of the book, write me a PM or email magisterludi2016@yandex.ru*

By the way, we have also started translating another cool book: "The Dream Machine: The History of the Computer Revolution".

**Book content and translated chapters**

Foreword


- Intro to The Art of Doing Science and Engineering: Learning to Learn (March 28, 1995) Translation: Chapter 1
- «Foundations of the Digital (Discrete) Revolution» (March 30, 1995) Chapter 2. Foundations of the Digital (Discrete) Revolution
- «History of Computers — Hardware» (March 31, 1995) Chapter 3. History of Computers — Hardware
- «History of Computers — Software» (April 4, 1995) Chapter 4. History of Computers — Software
- «History of Computers — Applications» (April 6, 1995) Chapter 5. History of Computers — Applications
- «Artificial Intelligence — Part I» (April 7, 1995) Chapter 6. Artificial Intelligence — I
- «Artificial Intelligence — Part II» (April 11, 1995) Chapter 7. Artificial Intelligence — II
- «Artificial Intelligence III» (April 13, 1995) Chapter 8. Artificial Intelligence — III
- «n-Dimensional Space» (April 14, 1995) Chapter 9. n-Dimensional Space
- «Coding Theory — The Representation of Information, Part I» (April 18, 1995) *(the translator went missing :((( )*
- «Coding Theory — The Representation of Information, Part II» (April 20, 1995) Chapter 11. Coding Theory — II
- «Error-Correcting Codes» (April 21, 1995) Chapter 12. Error-Correcting Codes
- «Information Theory» (April 25, 1995) *(the translator went missing :((( )*
- «Digital Filters, Part I» (April 27, 1995) Chapter 14. Digital Filters — I
- «Digital Filters, Part II» (April 28, 1995) Chapter 15. Digital Filters — II
- «Digital Filters, Part III» (May 2, 1995) Chapter 16. Digital Filters — III
- «Digital Filters, Part IV» (May 4, 1995) Chapter 17. Digital Filters — IV
- «Simulation, Part I» (May 5, 1995) Chapter 18. Modeling — I
- «Simulation, Part II» (May 9, 1995) Chapter 19. Modeling — II
- «Simulation, Part III» (May 11, 1995)
- «Fiber Optics» (May 12, 1995) Chapter 21. Fiber Optics
- «Computer Aided Instruction» (May 16, 1995) *(the translator went missing :((( )*
- «Mathematics» (May 18, 1995) Chapter 23. Mathematics
- «Quantum Mechanics» (May 19, 1995) Chapter 24. Quantum Mechanics
- «Creativity» (May 23, 1995). Translation: Chapter 25. Creativity
- «Experts» (May 25, 1995) Chapter 26. Experts
- «Unreliable Data» (May 26, 1995) Chapter 27. Unreliable Data
- «Systems Engineering» (May 30, 1995) Chapter 28. Systems Engineering
- «You Get What You Measure» (June 1, 1995) Chapter 29. You Get What You Measure
- «How Do We Know What We Know» (June 2, 1995) *(the translator went missing :((( )*
- Hamming, «You and Your Research» (June 6, 1995). Translation: You and Your Work
