
Internet History: Discovering Interactivity

Other articles in the series:
- Relay history
- The history of electronic computers
- Transistor history
- Internet history
The very first electronic computers were one-of-a-kind devices built for research purposes. But once they reached the market, organizations quickly absorbed them into an existing data-processing culture, one in which data and procedures alike were represented as stacks of punched cards.
Herman Hollerith developed the first tabulator, capable of reading and counting data from holes punched in paper cards, for the United States census at the end of the 19th century. By the middle of the following century, a motley menagerie of that machine's descendants had penetrated large enterprises and government organizations around the world. Their common language was a card of many columns, where each column (usually) represented a single digit, which could be punched in one of ten positions denoting the numbers 0 through 9.
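To make that encoding concrete, here is a minimal sketch, in Python and with entirely made-up data, of how such a digits-only column scheme could be decoded. Real IBM cards also carried extra "zone" rows for letters and signs, which this toy model ignores.

```python
# Toy model of a digits-only punched card: each column may have a hole
# in exactly one of ten row positions (0-9), and the row with the hole
# is the digit that the column encodes.

def decode_card(columns):
    """columns: one set of punched row positions per column.
    Returns the decoded digits as a string, with '?' for blank or
    multiply-punched columns."""
    digits = []
    for punched in columns:
        if len(punched) == 1:
            row = next(iter(punched))
            digits.append(str(row) if 0 <= row <= 9 else "?")
        else:
            digits.append("?")  # blank or over-punched column
    return "".join(digits)

# A six-column card encoding the value 194706:
card = [{1}, {9}, {4}, {7}, {0}, {6}]
print(decode_card(card))  # -> "194706"
```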
Punching the input data onto cards required no complex equipment, and the work could be distributed across the various offices of the organization that generated the data. When the data needed to be processed, for example to compute the revenue figures for the sales department's quarterly report, the relevant cards could be brought to the data center and queued for processing by the appropriate machines, which produced a set of output data on cards or printed it on paper. Around the central processing machines, the tabulators and calculators, clustered peripheral devices for punching, copying, sorting, and interpreting cards.

The IBM 285 tabulator, a popular punched-card machine in the 1930s and 1940s.
By the second half of the 1950s, almost all computers worked on this "batch processing" scheme. From the point of view of a typical end user in the sales department, little had changed. You brought in a stack of punched cards for processing and received a printout, or another stack of punched cards, as the result. In between, the cards turned from holes in paper into electronic signals and back again, but you did not much care. IBM had dominated the punched-card industry and remained one of the dominant forces in electronic computing, largely thanks to its well-established customer relationships and wide range of peripheral equipment. It simply replaced customers' mechanical tabulators and calculators with faster, more flexible data-processing machines.

An IBM 704 punched-card processing installation. In the foreground, a woman operates the card reader.
This punched-card processing system had worked wonderfully for decades and showed no sign of decline; quite the contrary. And yet, in the late 1950s, a marginal subculture of computer researchers began to argue that this whole workflow needed to change: computers, they claimed, were best used interactively. Instead of leaving the machine a task and coming back later for the results, the user should communicate with it directly and call on its capabilities on demand. In Capital, Marx described how industrial machines, which people merely set in motion, displaced tools that people controlled directly. Computers, however, began their existence as machines; only later did some of their users remake them into tools.
This transformation did not take place in data centers such as those of the United States Census Bureau, the insurance company MetLife, or United States Steel Corporation (all among the first buyers of UNIVAC, one of the earliest commercially available computers). An organization that regarded the weekly payroll run as the most efficient and reliable use of its machine was hardly going to let someone disrupt that processing by playing around with the computer. The value of being able to sit at a console and simply try things out was far clearer to scientists and engineers, who wanted to probe a problem, come at it from different angles until its weak point was found, and switch quickly between thought and action.
Such ideas therefore came from researchers. But the money to pay for so wasteful a use of a computer did not come from their department heads. A new subculture (one might even say a cult) of interactive computing was born of a productive partnership between the US military and elite universities. This mutually beneficial cooperation began during World War II. Nuclear weapons, radar, and other wonder weapons taught the military leadership that the seemingly obscure pursuits of scientists could be of extraordinary importance to the armed forces. The comfortable relationship lasted about a generation before falling apart in the political upheavals of another war, in Vietnam. But in the meantime, American scientists had access to enormous sums of money, were left almost entirely alone, and could do almost anything.
The justification for interactive computing began with a bomb.
Whirlwind and SAGE
On August 29, 1949, a Soviet research team successfully conducted the first test of its nuclear weapon at the Semipalatinsk test site. Three days later, a U.S. reconnaissance aircraft flying over the northern Pacific detected traces of radioactive material in the atmosphere left by that test. The USSR had the bomb, and its American rivals knew it. Tension between the two superpowers had already persisted for more than a year, ever since the USSR had cut off the land routes to the Western-controlled sectors of Berlin in response to plans to restore Germany to its former economic strength.
The blockade ended in the spring of 1949, thwarted by the massive airlift the West mounted to supply the city from the air. Tensions eased somewhat. American generals, however, could not ignore the existence of a potentially hostile power with access to nuclear weapons, especially given the ever-increasing size and range of strategic bombers. The United States had a chain of aircraft-detection radar stations built along the Atlantic and Pacific coasts during World War II. But these used outdated technology, did not cover the northern approaches across Canada, and were not tied together by any central system for coordinating air defense.
To remedy the situation, the Air Force (an independent branch of the US military since 1947) convened the Air Defense Systems Engineering Committee (ADSEC). It is remembered in history as the "Valley Committee," after its chairman, George Valley. He was an MIT physicist and a veteran of the Rad Lab, the wartime military radar research group that after the war became the Research Laboratory of Electronics (RLE). The committee studied the problem for a year, and Valley released its final report in October 1950.
One might have expected such a report to be a dreary bureaucratic mishmash ending in a cautiously worded, conservative recommendation. Instead, the report turned out to be an interesting piece of creative argument, containing a radical and risky plan of action. It owed an obvious debt to another MIT professor, Norbert Wiener, who argued that the study of living beings and machines could be combined into a single discipline, cybernetics. Valley and his co-authors started from the assumption that the air defense system was a living organism, not metaphorically but in actual fact. The radar stations serve as its sense organs; the interceptors and missiles are the effectors through which it acts on the world. Both work under the control of a director, which uses information from the senses to decide on the necessary actions. They further argued that a director made up of humans alone could not hope to stop hundreds of incoming aircraft over millions of square kilometers within a few minutes, so as many of the director's functions as possible had to be automated.
The most unusual of their conclusions was that the director would best be automated by means of digital electronic computers, which could take over part of the human decision-making: analyzing incoming threats, directing weapons against those threats (computing interception courses and transmitting them to the fighters), and perhaps even devising strategies for optimal forms of response. It was not at all obvious at the time that computers were suited to such a purpose. In the entire United States there were then exactly three working electronic computers, and none of them came close to the reliability required of a military system on which millions of lives would depend. They were simply very fast, programmable number crunchers.
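To give a sense of the kind of arithmetic the report proposed to hand over to a machine, here is a rough sketch of a constant-velocity intercept-course calculation. All coordinates, speeds, and function names are hypothetical illustrations, not anything taken from SAGE itself.

```python
import math

def intercept_heading(tgt_pos, tgt_vel, own_pos, own_speed):
    """Return (heading_radians, minutes_to_intercept) for an interceptor
    flying at constant speed toward a target on a straight, constant-velocity
    course, or None if the target cannot be caught.
    Positions in km, velocities and speed in km/min."""
    dx, dy = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]
    vx, vy = tgt_vel
    # Solve |d + v*t| = own_speed * t for the earliest positive t.
    a = vx * vx + vy * vy - own_speed * own_speed
    b = 2 * (dx * vx + dy * vy)
    c = dx * dx + dy * dy
    if abs(a) < 1e-9:                 # equal speeds: quadratic degenerates
        if abs(b) < 1e-9:
            return None
        t = -c / b
    else:
        disc = b * b - 4 * a * c
        if disc < 0:
            return None
        roots = [(-b - math.sqrt(disc)) / (2 * a),
                 (-b + math.sqrt(disc)) / (2 * a)]
        positive = [t for t in roots if t > 0]
        if not positive:
            return None
        t = min(positive)
    if t <= 0:
        return None
    ix, iy = dx + vx * t, dy + vy * t  # intercept point relative to interceptor
    return math.atan2(iy, ix), t

# Example: bomber 300 km north, flying east at 10 km/min;
# interceptor flies at 15 km/min from the origin.
print(intercept_heading((0, 300), (10, 0), (0, 0), 15))
```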
Nevertheless, Valley had reason to believe in the possibility of a real-time digital computer, because he knew about the Whirlwind project. It had begun during the war in MIT's servomechanisms laboratory under the supervision of a young graduate student, Jay Forrester. The initial goal was to build a general-purpose flight simulator that could be reconfigured to support new aircraft models without being rebuilt from scratch each time. A colleague convinced Forrester that his simulator should use digital electronics to process the input parameters from the pilot and produce the output states for the instruments. Gradually, the attempt to build a high-speed digital computer outgrew and eclipsed the original goal. The flight simulator was forgotten, the war that had prompted its development was long over, and the committee of overseers from the Office of Naval Research (ONR) grew steadily more disillusioned with the project because of its ever-growing budget and ever-receding completion date.
For George Valley, however, Whirlwind was a revelation. The actual Whirlwind computer was still far from operational. But once it was finished, there would exist a computer that was not merely a mind without a body: it would have sense organs and effectors. An organism. Forrester was already entertaining plans to expand the project into the country's central military command-and-control system. To the computer experts at ONR, who considered computers fit only for solving mathematical problems, the idea seemed grandiose and absurd. But it was exactly the idea Valley had been looking for, and he arrived just in time to save Whirlwind from oblivion.
Despite its great ambitions (or perhaps because of them), the Valley report persuaded the Air Force command, and they launched an extensive new research and development program to work out, first, how to build an air defense system around digital computers, and then to actually build it. The Air Force began collaborating with MIT on the basic research, a natural choice given the institute's possession of Whirlwind and RLE, as well as its history of successful air defense cooperation going back to the Rad Lab and World War II. They called the new initiative "Project Lincoln" and built the new Lincoln Laboratory at Hanscom Field, 25 km northwest of Cambridge.
The Air Force named the computerized air defense project SAGE, a typically odd military acronym meaning Semi-Automatic Ground Environment. Whirlwind was to serve as the test computer, proving the viability of the concept before full-scale production of the hardware and its deployment, a responsibility assigned to IBM. The production version of the Whirlwind computer that IBM was to build received the far less memorable name AN/FSQ-7 ("Army-Navy Fixed Special Equipment"; next to that acronym, SAGE looks downright precise).
By the time the Air Force drew up the full plans for SAGE in 1954, the system comprised assorted radar installations, air bases, and air defense weapons, all coordinated by twenty-three control centers, massive bunkers designed to survive bombardment. To fill these centers, IBM would have to supply forty-six computers rather than twenty-three, costing the military many billions of dollars. The reason was that the company still used vacuum tubes in the logic circuits, and tubes burned out like incandescent light bulbs. Any one of the tens of thousands of tubes in a running computer could fail at any moment. It would obviously be unacceptable to leave an entire sector of the nation's airspace undefended while technicians made repairs, so a spare machine had to be kept on hand at each site.

The SAGE control center at Grand Forks Air Force Base in North Dakota, which housed two AN/FSQ-7 computers.
Each control center housed dozens of operators seated in front of cathode-ray screens, each operator monitoring a portion of the airspace sector.

The computer tracked any potential airborne threats and drew them as tracks on the screen. The operator could use a light gun to call up additional information on a track and issue commands to the defense system, and the computer turned these into printed messages for an available missile battery or air force base.

Interactivity virus
Given the nature of the SAGE system, with its real-time direct interaction between human operators and a CRT-equipped digital computer via light guns and consoles, it is not surprising that Lincoln Laboratory raised the first cohort of champions of interactive computing. The laboratory's entire computing culture existed in an isolated bubble, cut off from the batch-processing norms taking shape in the commercial world. Researchers used Whirlwind and its descendants by reserving blocks of time during which they had exclusive access to the computer. They were accustomed to using their hands, eyes, and ears to interact with the machine directly, through switches, keyboards, brightly glowing screens, and even a loudspeaker, with no paper intermediaries.
This strange little subculture spread to the outside world like a virus, through direct physical contact. And if we treat it as a virus, then patient zero was a young man named Wesley Clark. Clark had left graduate school in physics at Berkeley in 1949 to become a technician at a nuclear weapons plant. He did not enjoy the work. After reading several articles in computer journals, he began looking for a way into what seemed a new and exciting field full of untapped potential. He learned from an advertisement that Lincoln Laboratory was recruiting computer specialists, and in 1951 he moved to the East Coast to work under Forrester, who by then headed the digital computer laboratory.

Wesley Clark, demonstrating his biomedical LINC computer, 1962
Clark joined the advanced research group, a unit of the lab that embodied the relaxed state of military-university collaboration at the time. Although the group was technically part of the Lincoln Laboratory universe, the team existed in a bubble within a bubble, insulated from the day-to-day needs of the SAGE project and free to pursue any computing work that could be tied, however loosely, to air defense. Their main task in the early 1950s was to build the Memory Test Computer (MTC), designed to demonstrate the viability of a new, highly efficient, and reliable method of storing digital information: magnetic-core memory, which would replace the temperamental CRT-based memory used in Whirlwind.
Since the MTC had no users other than its creators, Clark had full access to the computer for many hours every day. He had become interested in the then-fashionable cybernetic blend of physics, physiology, and information theory thanks to his colleague Belmont Farley, who was in contact with a group of biophysicists at RLE in Cambridge. Clark and Farley spent long hours at the MTC building software models of neural networks to study the properties of self-organizing systems. Out of these experiments Clark began to distill certain axiomatic principles of computing from which he never deviated. In particular, he came to believe that "user convenience is the most important design factor."
In 1955, Clark teamed up with Ken Olsen, one of the developers of the MTC, to plan a new computer that could pave the way for the next generation of military control systems. By using a very large magnetic-core memory for storage and transistors for its logic, it could be made far more compact, reliable, and powerful than Whirlwind. Initially they proposed a design called the TX-1 (Transistorized and eXperimental computer, a much plainer name than AN/FSQ-7). The Lincoln Laboratory management, however, rejected the project as too expensive and risky. Transistors had reached the market only a few years earlier, and very few computers had been built with transistor logic. So Clark and Olsen came back with a scaled-down version of the machine, the TX-0, which was approved.

TX-0
The TX-0's usefulness as a tool for running military installations, though the pretext for its creation, interested Clark far less than the chance to promote his ideas about computer design. In his view, interactive computing had ceased to be a mere fact of life at Lincoln Laboratory and had become the new norm: the right way to build and use computers, especially for scientific work. He gave MIT biophysicists access to the TX-0, even though their work had nothing to do with air defense, and let them use the machine's visual display to analyze electroencephalograms from sleep studies. And no one objected.
The TX-0 was successful enough that in 1956 Lincoln Laboratory approved the TX-2, a full-fledged transistorized computer with an enormous two-million-bit memory. The project would take two years to complete. After that, the virus would escape the laboratory. Once the TX-2 was finished, the lab would no longer need the early prototype, so it agreed to lend the TX-0 to RLE in Cambridge. It was installed on the second floor, above the batch-processing computing center, and it immediately infected students and professors on the MIT campus, who began vying for time slots in which they could take complete control of the computer.
It was already clear that it was nearly impossible to write a computer program correctly the first time. Worse, researchers studying a new problem often had no idea at first what the correct behavior even should be. And getting results back from the data center meant waiting hours, or even until the next day. For the dozens of newly minted programmers on campus, being able to climb the stairs, find a bug and fix it on the spot, try a new approach and immediately see improved results, was a revelation. Some used their time on the TX-0 for serious science or engineering projects, but the joy of interactivity also drew more playful souls. One student wrote a text-editing program that he called "Expensive Typewriter."

Ivan Sutherland showcases his Sketchpad program on TX-2
Meanwhile, Ken Olsen and another TX-0 engineer, Harlan Anderson, frustrated by the slow progress of the TX-2 project, decided to bring a small interactive computer for scientists and engineers to market. They left the laboratory to found Digital Equipment Corporation, setting up an office in a former textile mill on the Assabet River, ten miles west of Lincoln. Their first computer, the PDP-1 (released in 1961), was essentially a clone of the TX-0.
The TX-0 and Digital Equipment Corporation began to spread the good news of a new way of using computers beyond Lincoln Laboratory. And yet, for the time being, the interactivity virus remained geographically confined to eastern Massachusetts. That was soon to change.
What else to read:
- Lars Heide, Punched-Card Systems and the Early Information Explosion, 1880-1945 (2009)
- Joseph November, Biomedical Computing (2012)
- Kent C. Redmond and Thomas M. Smith, From Whirlwind to MITRE (2000)
- M. Mitchell Waldrop, The Dream Machine (2001)