
Richard Hamming: Chapter 5. History of Computers - Applications
(Translation)
“The goal of this course is to prepare you for your technical future.”

So Hamming (yes, yes, the Hamming of error-detecting and error-correcting Hamming codes) has a whole book based on his lectures. Let's translate it, because the man talks business.
This book is not just about IT; it is a book about the thinking style of remarkably capable people. "It is not simply a charge of positive thinking; it describes conditions that increase the chances of doing great work."
We have already translated 13 (out of 30) chapters.
Thanks to Sergey Metlov for translating this chapter; he responded to my call in the previous chapter. If you would like to help with the translation, write me a private message or email magisterludi2016@yandex.ru. (By the way, we have also started translating another great book, "The Dream Machine: The History of the Computer Revolution".)
Chapter 5. History of Computers - Applications
As you have probably noticed, I use technical material to tie several stories together, so I will begin with the story of how this chapter and the two before it came about. By the 1950s I had realized that I was afraid of speaking before large audiences, despite many years of hard study in college. Reflecting on this, I concluded that I could not afford to become a great scientist while carrying such a weakness. A scientist's duty is not only to make discoveries, but also to convey them successfully in the form of:
- books and publications;
- public speaking;
- informal conversations.
A weakness in any of these skills could seriously drag down my career. My task was to learn to speak in public without fear of the audience. Practice is undoubtedly the main tool, and it must stay in the foreground, whatever other useful techniques exist.
Shortly after I realized this, I was invited to give an evening lecture to a group of engineers. They were IBM customers studying certain aspects of working with IBM computers. I had attended the course myself earlier, so I knew the lectures ran on weekdays over the course of a week. For evening entertainment, IBM would typically hold a party on the first day, arrange a theater trip on one of the other days, and schedule a lecture on some general computing topic on one of the last evenings. It was for one of these last evenings that I was invited.
I accepted the offer immediately, because it was exactly the chance I needed to practice public speaking. I soon decided that my talk should be so good that I would be invited again, which would give me more room to practice. At first I wanted to lecture on one of my favorite topics, but I quickly realized that if I wanted to be asked back, I had to start with what would interest the audience, which is, as a rule, a completely different topic. I could not know for certain what the attendees would want to hear about the course they were taking or about their own abilities, so I chose a topic that would interest most of them: "The history of computing to the year 2000". The topic intrigued me too; I wondered myself what I could honestly say about it! Preparing such a talk, importantly, would be as useful to me as to the audience.
In asking myself, "What do they want to hear?", I speak not as a politician but as a scientist who must tell the truth as it is. A scientist should not speak merely to entertain, since the purpose of a lecture is to convey scientific information from the lecturer to the audience. That does not mean the talk must be boring. There is a thin but quite definite line between scientific discussion and entertainment, and the scientist should always stay on the right side of it.
My first talk was about hardware, whose design and operation face, as I noted in Chapter 3, three natural limitations: the size of molecules, the speed of light, and the problem of heat dissipation. I added attractive color viewgraph transparencies, showing on separate overlays the limits imposed by quantum mechanics, including the consequences of the uncertainty principle. The talk was a success: the IBM employee who had invited me told me afterwards how much the participants had liked it. I mentioned in passing that I had enjoyed it too, and that I would gladly come to New York any day to give a lecture, provided I was invited well in advance. IBM agreed. So began a series of lectures that continued for many years, two or three times a year. I got plenty of public-speaking practice and stopped being so afraid of it. You should always feel some excitement while speaking; even the best actors and actresses usually have a touch of stage fright. Your mood is passed on to the audience, and if you are too relaxed, people may get bored or even fall asleep!
These talks let me keep abreast of the latest news and trends in computing, contributed to my intellectual development, and improved my speaking. I was not simply lucky: I did serious work, trying to get to the essence of the matter. At any lecture, wherever it was held, I began paying attention not only to what was said but also to the style of presentation, trying to judge whether the talk was effective. I avoided lectures that were purely entertaining, but I did learn to tell jokes. An evening lecture should usually contain three good jokes: one at the beginning, one in the middle, and the last at the very end, so that listeners will remember at least one. All three, however, must be well told. I needed to find my own style of humor, and I practiced until I found it.
After several of these lectures, I realized that not only hardware but also software would limit the evolution of computing as the year 2000 approached, as noted in Chapter 4. Eventually I came to see that economics is what would most likely drive the evolution of computers: much, though not all, of what was about to happen would have to be economically justified. This will be discussed later in this chapter.
Computing began with simple arithmetic, then passed through a great many astronomical applications and arrived at heavy "number crunching" (translator's note: a class of tasks consisting of simple mathematical operations that nevertheless take a long time to carry out). It is worth recalling, however, the Spanish theologian and philosopher Raymond Lull (1235-1315), also known as Lulli, who built a logical machine! It was this very machine that Swift ridiculed in Gulliver's Travels, in the episode on the island of Laputa. I have the impression that Laputa corresponds to Majorca, where Lull lived and worked.
In the early years of modern computing, say the 1940s and 1950s, "number crunching" was its main driving force, since the people who needed serious computation were the only ones with enough money to afford (at the time) to run it on computers. As the cost of computers fell, the set of tasks for which it became profitable to use them expanded beyond "number crunching". We had known all along that such tasks could also be done on computers; it simply had not been economical before.
Another significant moment in my computing experience occurred at Los Alamos, where we solved partial differential equations (the behavior of an atomic bomb) on primitive equipment. Then, at Bell Telephone Laboratories, I solved partial differential equations on relay computers; I even solved a partial differential-integral equation! Later, with much better machines, I moved on to ordinary differential equations for computing rocket trajectories. Later still I published several articles on how to compute a simple integral, then a paper on function evaluation, and finally a paper on combining numbers! Yes, we solved some of the most complex problems on the most primitive equipment; it was necessary to do so in order to prove that machines could do what could not be done without them. Then, and only then, could we turn to the economics of solving problems that until then could only be done by hand! And for that we had to develop the basic theories of numerical analysis and practical computation suited to machines rather than to hand calculation.
This is typical of many situations. A new thing, device, or method must first prove it can handle hard tasks before it can enter the system to carry out routine, and later more profitable, tasks. Any innovation meets such resistance, so do not lose heart when you see your new idea stupidly refused. Understanding the true scale of the task lets you decide whether to keep pressing, or whether you should improve your solution rather than waste your energy fighting inertia and stupidity.
In the early years of computing, I soon turned to the problem of running many small jobs on a large machine. I realized I was actually engaged in the mass production of a continually changing product: I had to organize the work so that I could cope with most of the tasks that would arise the following year, without knowing exactly what they would be. Then I realized that computers, broadly understood, had opened the door to the mass production of a variable product, whatever it might be: numbers, words, word processing, furniture making, weaving, or anything else. They let us handle variety without excessive standardization, and hence evolve faster toward a desired future! Today you can see that this applies to computers themselves!
Computers, with little human intervention, design their own chips and more or less automatically assemble themselves from standard parts. You simply specify what you need in the new computer, and the automated system puts it together. Some computer manufacturers today assemble machines with little or no human labor.
This peculiar sense of being involved in the mass production of a variable product, with all its advantages and disadvantages, brought me to the IBM 650, which I discussed in the previous chapter. Investing roughly a man-year of effort over a total of six months, I found that by the end of the year I had more work done than if I had taken on each task in turn! The software development paid for itself within a year! In a fast-moving area such as computer software, if an investment does not bring the expected benefit in the near future, it is unlikely ever to pay off at all.
I have not spoken about my experience outside of science and engineering. For example, I solved one rather serious business problem for AT&T on a UNIVAC-I in New York, and one day I will tell you what lesson I learned then.
Let me discuss the use of computers in more detail. When I worked in the research department at Bell Telephone Laboratories, the workload was at first primarily scientific, but we soon ran into engineering problems as well. At first (see Fig. 5.1), following only the growth of purely scientific problems, you get a curve that rises exponentially (note the logarithmic vertical scale), but soon the upper part of the S-shaped curve flattens toward more moderate growth. After all, given the problems I worked on at Bell Telephone Laboratories and the total number of scientists there, there had to be a limit to what they could propose and what resources they could use. Indeed, they began proposing much larger problems at a much slower rate.
Soon engineering problems appeared, and their volume grew along much the same curve, but it was larger and sat on top of the earlier scientific curve. Then, at least at Bell Telephone Laboratories, I saw an even larger segment, military computing, and finally, as we moved to symbol manipulation in the form of word processing, the compiling of higher-level languages, and other such work, the same kind of growth appeared. Thus, while each type of workload seemed to approach saturation in its turn, the net total effect was to maintain a relatively constant rate of growth.

Figure 5.1
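The mechanism behind Fig. 5.1 can be sketched numerically: each workload follows an S-shaped (logistic) curve, yet the sum of several such curves, saturating one after another at ever higher ceilings, keeps rising roughly linearly on a logarithmic scale. A minimal sketch in Python; the workload names, midpoints, and ceilings are illustrative assumptions, not data from the chapter.

```python
import math

def logistic(t, midpoint, scale, ceiling):
    """S-shaped growth: slow start, rapid rise, saturation at `ceiling`."""
    return ceiling / (1.0 + math.exp(-(t - midpoint) / scale))

# Hypothetical workload types, each saturating later and at a higher level
# (parameters chosen only to illustrate the shape of Fig. 5.1).
workloads = [
    ("scientific",             10, 3, 1e3),
    ("engineering",            20, 3, 1e5),
    ("military",               30, 3, 1e7),
    ("symbol/word processing", 40, 3, 1e9),
]

for t in range(0, 50, 5):
    total = sum(logistic(t, m, s, c) for _, m, s, c in workloads)
    # On a log scale (the vertical axis of Fig. 5.1) the total
    # grows roughly linearly even as each component flattens out.
    print(f"t={t:2d}  log10(total) = {math.log10(total):5.2f}")
```

Each individual curve levels off, but a new, larger workload takes over just as the previous one saturates, which is exactly the stacking effect the figure describes.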
What will appear next to continue this log-linear growth and prevent the otherwise inevitable flattening of the curves? The next big area, I believe, will be pattern recognition. I doubt our ability to cope with the general problem of pattern recognition, because it implies too much. But in areas such as speech recognition, radar image recognition, image analysis and reconstruction, workload scheduling in factories and offices, statistical data analysis, and the creation of virtual images, we can usefully absorb a very large amount of computing power. The computation of virtual reality will become a major consumer of computing power, and its obvious economic value guarantees that this will happen, in practical fields as well as in entertainment. I also believe that artificial intelligence, once its application justifies the investment in computing power, will become a new source of problems to be solved.
We began interactive computing early; a scientist named Jack Kane introduced me to it. He had what was then a wild idea: to connect a small Scientific Data Systems (SDS) 910 computer to the cyclotron at Brookhaven, where we spent a lot of time. My vice president asked me whether Jack could do it, and after carefully studying the question (and Jack himself), I said I thought he could. I was then asked, "Will the computer manufacturer be able to support it?", because the vice president did not want to be stuck with an unsupported machine. This demanded much more effort from me in non-technical areas, and I finally arranged a meeting with the president of SDS at his Los Angeles office. I received a satisfactory answer and returned feeling confident, but more on that later. So we did it, and I was sure then, as I am now, that this cheap little SDS 910 at least doubled the effective productivity of the huge, expensive cyclotron! It was, of course, one of the first computers that, during a cyclotron run, gathered, reduced, and displayed the data on the screen of a small oscilloscope (which Jack assembled and got working in a few days). This let us abandon many flawed runs early: say, the sample was not exactly in the middle of the beam, or there was noise at the edge of the spectrum and the experiment had to be redesigned, or something odd happened and we needed more detail to understand it. All of these are reasons to stop the experiment and make changes, rather than run it to the end and only then hunt for the problem.
This experience led us, back at Bell Telephone Laboratories, to start installing small computers in the laboratories: at first just to gather, reduce, and display data, but soon to run the experiments themselves. It is often easier to have the computer program generate the shape of the driving voltage for an experiment, through a standard digital-to-analog converter, than to build special circuits for it. This significantly widened the range of possible experiments and made interactive experimentation attractive in practice. Again, we acquired the machine under one pretext, but its presence ultimately changed both the problem itself and what the computer was used for. When you find you can solve a problem with a computer, you soon realize you are doing an equivalent, but different, job on it. Again, you can see how the presence of the computer ultimately changed the very nature of many of the experiments we ran.
The Boeing company (in Seattle) later applied a somewhat similar idea: the current version of the proposed aircraft design was kept on tape. The understanding was that everyone would work from this tape, so that when any particular aircraft was being designed, all parts of the vast company would stay synchronized. It did not work out as management planned. I know this for certain, because for two weeks I was secretly doing extremely important work for Boeing's top leadership under the guise of carrying out a routine audit of the computer center for one of the lower-level groups!
The reason the method failed is quite simple. If the current design sits on tape (these days, on disk) and you use its data to study, say, the area, shape, and profile of a wing, then when you change your parameters and see the design improve, the improvement may in fact come from a change someone else made to the overall design, not from yours, and your change may actually have made things worse! So in practice each group, for the duration of an optimization study, made a copy of the current tape and worked from it, ignoring updates made by other groups. Only when they had finally finished their design did they merge their changes into the master, and of course they then had to re-verify their new design in combination with the other groups' new work.
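The workflow the groups arrived at is essentially snapshot isolation: freeze a copy of the shared design, optimize against the frozen copy, and merge back deliberately at the end. A toy sketch in Python; the design fields, the `drag` cost function, and the hill-climbing optimizer are hypothetical stand-ins, not anything from Boeing.

```python
import copy
import random

# A shared "master design" that several groups read and modify concurrently.
master = {"wing_area": 120.0, "wing_sweep": 25.0, "engine_thrust": 90.0}

def drag(design):
    # Hypothetical cost function standing in for a real aerodynamic model.
    return (design["wing_area"] - 110) ** 2 + (design["wing_sweep"] - 30) ** 2

def optimize_wing(design, steps=50):
    """Tune wing_area by trial and error, keeping only improving moves."""
    for _ in range(steps):
        trial = design["wing_area"] + random.uniform(-1, 1)
        if drag({**design, "wing_area": trial}) < drag(design):
            design["wing_area"] = trial
    return design

# The safe workflow: copy the master, optimize against the frozen copy,
# and merge back only when the study is finished.
snapshot = copy.deepcopy(master)
optimize_wing(snapshot)
master["wing_area"] = snapshot["wing_area"]  # deliberate, reviewed merge
```

Had `optimize_wing` run against `master` directly while another group was editing `wing_sweep` mid-run, an apparent improvement could be due to their change rather than ours, which is precisely the failure mode described above.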
This brings me to databases. Computers were seen as the salvation in this area, as is still often the case elsewhere. Airline reservation systems are, of course, a good example of what computers can do: just think of the mess manual processing creates, with its many human errors, not to mention the sheer scale of the problem. Airlines now maintain many databases, including weather records. Weather conditions and airport delays are used to compute a flight profile for each flight just before takeoff, and the profile is adjusted in flight as new information arrives.
Managers in various companies always seem to think that if only they knew the current state of the company in full detail, they could manage it much better; therefore, by all means, they must have an up-to-date database covering all the company's activities. The difficulties were shown above, but there is another. Suppose you and I are both vice presidents of the company, and by Monday morning we must each prepare the same report. You run yours against the database on Friday afternoon, while I, being wiser and knowing that much information arrives from remote branches over the weekend, wait until Sunday. Obviously our two reports may differ significantly, although we both used the same program to prepare them! In practice this is intolerable. Besides, most important reports and decisions should not depend so heavily on whether the data was taken a moment earlier or a moment later.
What about a scientific database? For example, whose measurements get into it?
There is, of course, a certain prestige in having the adopted values be yours, so there will be hot, costly, bitter conflicts of interest in this area. How will such conflicts be resolved? Only at significant expense! Again, when you do an optimization study you face the problem described above: was it a change in some physical constant, one you knew nothing about, that made the new model better than the old? How do you keep all users aware of the state of every change? It is not enough to demand that users examine all your changes every time they use the machine; if they do not, their computations will contain errors. Blaming the users does not fix the errors!
So far I have mainly discussed general-purpose computers, but I have gradually turned to their use as special-purpose devices for controlling things such as cyclotrons and laboratory equipment. One of the main steps in this direction came when someone in the custom integrated-circuit business proposed, instead of designing a special chip for each customer, producing a four-bit general-purpose computer and then programming it for each specific task (the Intel 4004). This replaced complex custom fabrication with software development; a chip still had to be made, of course, but now it was a large batch of identical 4-bit chips. Again, this is the trend I noted earlier, the shift from hardware to software for the mass production of a variable product, always using the same general-purpose computer. The 4-bit chip soon grew to 8 bits, then 16, and so on, and now some chips have 64-bit computers built in!
You generally do not realize how many computers you interact with daily. Traffic lights, elevators, washing machines, telephones (which now contain many computers, unlike in my youth, when a cheerful operator waited at the other end to hear the number of the person you wanted to reach), answering machines, cars with many computers under the hood: all are examples of how actively their range of application is expanding. You have only to watch and note how pervasive computers have become in our lives. And of course they will spread further over time: the same simple general-purpose computer can perform so many specific tasks that a special chip is rarely required.
You see far more specialized chips around you than are actually needed. One of the main reasons is the gratification of the big ego in having your own special chip rather than one from the common herd. (I am repeating part of Chapter 2.) Before making this mistake and putting a special chip into a piece of equipment, ask yourself a few questions. Let me repeat them. Do you want to be the only one using your chip? How many must you keep in stock? Do you really want to depend on only one or a few suppliers instead of buying on the open market? Won't the total cost be much higher in the long run?
With a general-purpose chip, all users contribute to finding its defects, and the manufacturer tries to fix them. Otherwise you will have to create your own manuals, diagnostic tools, and so on; moreover, the experience specialists have gained with other chips will rarely help them with yours. In addition, the updates you will need to a general-purpose chip are likely to come at no cost to you, since, as a rule, someone else will take care of them. You will inevitably need updated chips, because soon you will need to do more than the original plan called for, and to meet that new need it is much easier to work with a general-purpose chip.
I need not give you a list of how computers are used in your own field; you know far better than I how rapidly their application is spreading through your organization, not only at the workplace itself but far beyond it. You should also be well aware of the ever-increasing speed of change, upgrade, and flexibility of these versatile information-processing devices, which lets the whole organization meet the constantly changing demands of its environment. The list of possible applications of computers has only begun to take shape, and it remains to be extended, perhaps by you. I have nothing against a 10% improvement in the current state of affairs, but I also expect from you innovations that affect your organization so much that history will remember them for at least a few years.
As you move up the corporate ladder, study the areas where computers are used successfully and those where they fail; learn to distinguish between them; try to understand the situations that lead to success and those that almost guarantee failure. Recognize that, as a rule, you ultimately need to solve not the original problem but an equivalent one, and that you should do it in a way that leaves room for later improvements and changes (if the approach works at all). And always think carefully about how your technology will actually be used in the field, because reality will, as a rule, differ from your expectations.
The ways computers can be used in society are far from exhausted; there are many more important areas where they can be applied. And finding them is easier than many think!
In the previous two chapters I drew some conclusions about possible limits of hardware and software; I should therefore also discuss some possible limits of applications. I will do so in the next few chapters under the general title "Artificial Intelligence (AI)".
To be continued...
Book Contents and Translated Chapters
- "Intro to The Art of Doing Science and Engineering: Learning to Learn" (March 28, 1995) (in progress) Translation: Chapter 1
- "Foundations of the Digital (Discrete) Revolution" (March 30, 1995) Chapter 2. Fundamentals of the Digital (Discrete) Revolution
- "History of Computers - Hardware" (March 31, 1995) (in progress)
- "History of Computers - Software" (April 4, 1995) Chapter 4. History of Computers - Software
- "History of Computers - Applications" (April 6, 1995) Chapter 5. History of Computers - Applications
- "Artificial Intelligence - Part I" (April 7, 1995) (in progress)
- "Artificial Intelligence - Part II" (April 11, 1995) (in progress)
- "Artificial Intelligence III" (April 13, 1995) Chapter 8. Artificial Intelligence - III
- "N-Dimensional Space" (April 14, 1995) Chapter 9. N-Dimensional Space
- "Coding Theory - The Representation of Information, Part I" (April 18, 1995) (in progress)
- "Coding Theory - The Representation of Information, Part II" (April 20, 1995)
- "Error-Correcting Codes" (April 21, 1995) (in progress)
- "Information Theory" (April 25, 1995) (in progress, Alexey Gorgurov)
- "Digital Filters, Part I" (April 27, 1995) (done)
- "Digital Filters, Part II" (April 28, 1995) (in progress)
- "Digital Filters, Part III" (May 2, 1995)
- "Digital Filters, Part IV" (May 4, 1995)
- "Simulation, Part I" (May 5, 1995) (in progress)
- "Simulation, Part II" (May 9, 1995) (done)
- "Simulation, Part III" (May 11, 1995)
- "Fiber Optics" (May 12, 1995) (in progress)
- "Computer Aided Instruction" (May 16, 1995) (in progress)
- "Mathematics" (May 18, 1995) Chapter 23. Mathematics
- "Quantum Mechanics" (May 19, 1995) Chapter 24. Quantum Mechanics
- "Creativity" (May 23, 1995) Translation: Chapter 25. Creativity
- "Experts" (May 25, 1995) Chapter 26. Experts
- "Unreliable Data" (May 26, 1995) (in progress)
- "Systems Engineering" (May 30, 1995) Chapter 28. Systems Engineering
- "You Get What You Measure" (June 1, 1995) Chapter 29. You Get What You Measure
- "How Do We Know What We Know" (June 2, 1995) (in progress)
- Hamming, "You and Your Research" (June 6, 1995) Translation: You and Your Work