The Untold History of AI

Original author: Oscar Schwartz

The history of AI is often told as the story of machines gradually becoming smarter. What gets lost in that story is the human element: how machines are designed and trained, and how they come to appear intelligent thanks to human effort, both mental and physical.


Let us explore this human history of AI: how innovators, thinkers, workers, and sometimes hucksters created algorithms that could reproduce human thought and behavior (or appear to). The idea of super-intelligent computers that need no human involvement can be exciting, but the true history of smart machines shows that our AI is only as good as we are.

When Charles Babbage played chess with the first Mechanical Turk


The famous 19th-century engineer may have drawn inspiration from the first example of AI hype




In 1770, at the court of the Austrian empress Maria Theresa, the inventor Wolfgang von Kempelen demonstrated a chess-playing machine. The "Turk", as Kempelen called his invention, was a life-size human figure carved from wood [wax, according to other sources / translator's note], dressed as a representative of the Ottoman Empire and seated behind a wooden cabinet with a chessboard on its top.

Kempelen declared that his machine could defeat any courtier, and one of Maria Theresa's advisers accepted the challenge. Kempelen opened the cabinet doors to reveal a clockwork-like mechanism, a dense network of levers and gears, then inserted a key and wound the machine up. The automaton came to life, raising a wooden arm to move the first piece. Within 30 minutes it had defeated its opponent.

The Turk caused a sensation. Over the next decade, Kempelen toured Europe with his chess machine, defeating many of the sharpest minds of the era, reportedly including Benjamin Franklin and Frederick the Great. After Kempelen's death in 1804, the Turk was acquired by Johann Nepomuk Maelzel, a German university student and builder of musical instruments, who continued touring it around the world.

One of those allowed to examine the machine more closely was Charles Babbage, the famous British engineer and mathematician. In 1819, Babbage played the Turk twice and lost both times. According to the historian Tom Standage, who wrote a detailed history of the Turk, Babbage suspected the machine was no intelligent device but a clever hoax, with a human hidden inside directing the Turk's movements.



Babbage was right. Behind the Turk's mechanical facade lay the following arrangement: Kempelen and Maelzel hired chess masters to sit hidden inside the large cabinet. The hidden player could follow the position on the board thanks to magnets that reproduced a mirror image of the pieces placed above.

To control the Turk's arm, the hidden player used a pantograph, a system of linkages that synchronized the movements of his own arm with those of the wooden Turk. The player moved a lever over his magnetic board, rotated it to open and close the Turk's fingers, and guided each piece to the right square. The compartment where the chess master sat contained several sliding panels and a chair on wheels that glided along greased rails, allowing him to slide back and forth whenever Maelzel opened the cabinet for all to see.

Although Babbage suspected such a trick, he did not spend time trying to expose it, as many of his contemporaries did. But his encounter with the Turk apparently shaped his thinking for years to come.


Charles Babbage designed Difference Engine No. 2 between 1847 and 1849, but it was never built in his lifetime.

Soon after, he began work on an automatic mechanical calculator called the Difference Engine, which he intended to use to produce error-free logarithmic tables. The first design, which would have weighed some 4 tons, contained 25,000 metal components. In the 1830s he abandoned it to begin work on an even more complex mechanism, the Analytical Engine. It had a "store" and a "mill" that functioned as memory and processor, and it could interpret program instructions supplied on punched cards.

Initially, Babbage expected the Analytical Engine to work simply as an improved version of the Difference Engine. But his collaborator Ada Lovelace realized that the machine's programmability would let it operate in a far more general way. She declared that such a machine would give rise to a new kind of "poetical science", in which mathematicians would teach the machine to perform tasks by programming it. She even predicted that the machine would be able to compose "elaborate and scientific pieces of music".


Ada Lovelace and Charles Babbage

Babbage eventually came around to Lovelace's view and imagined how a general-purpose machine, capable of more than crunching numbers, could change the world. Naturally, his thoughts returned to his encounter with the Turk. In 1864 he wrote of his desire to use "mechanical notation" to solve entirely new kinds of problems. "After much consideration, I selected for my test a contrivance that could successfully play a game of purely intellectual skill, such as chess."

Although the Turk and Babbage's engines are not technically connected in any way, the possibility of machine intelligence embodied in von Kempelen's hoax seems to have prompted Babbage to think about machines in an entirely new light. As his contemporary David Brewster later wrote: "Those automatic toys which once amused the common people are now employed in extending the power and promoting the civilization of our species."

Babbage's encounter with the Turk at the very dawn of computing history is a reminder that hype and innovation sometimes go hand in hand. But it teaches us something else as well: the intelligence attributed to machines almost always rests on hidden human accomplishments.

The invisible women who programmed ENIAC


The people who ran ENIAC received barely any recognition



Marlyn Wescoff (left) and Ruth Lichterman, two of ENIAC's female programmers

On February 14, 1946, reporters gathered at the Moore School of Electrical Engineering at the University of Pennsylvania to see a public demonstration of one of the world's first general-purpose electronic digital computers: ENIAC, the Electronic Numerical Integrator and Computer.

Arthur Burks, a mathematician and chief engineer on the ENIAC team, ran the demonstration of the machine's capabilities. First he instructed the computer to add 5,000 numbers, which it did in one second. Then he showed how the machine could compute the trajectory of a shell faster than the shell itself would take to fly from gun to target.
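For a sense of what such a trajectory calculation involves, here is a minimal sketch in Python that integrates a drag-free shell trajectory with simple Euler steps. The gun parameters are invented for illustration; the real BRL firing tables modeled air resistance, wind, and much else, which is what made the work so laborious.

import math

# Hypothetical gun parameters, for illustration only; real firing
# tables accounted for drag, wind, and air density, not just gravity.
v0 = 450.0                 # muzzle velocity, m/s
angle = math.radians(30)   # elevation angle
g = 9.81                   # gravitational acceleration, m/s^2
dt = 0.01                  # integration time step, s

x, y = 0.0, 0.0
vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
t = 0.0

# Step the equations of motion forward until the shell lands.
while y >= 0.0:
    x += vx * dt
    y += vy * dt
    vy -= g * dt
    t += dt

print(f"range = {x:.0f} m, flight time = {t:.1f} s")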

The reporters were astounded. To them it seemed that Burks had only to press a button and the machine would spring to life, doing in moments what had taken people days.

What they did not know, or what was concealed during the demonstration, was that behind the machine's apparent intelligence lay the hard, pioneering work of a team of six programmers, all women, who had previously worked as human "computers".


Betty Jennings (left) and Frances Bilas working at ENIAC's main control panel.

The plan to build a machine that could compute shell trajectories was born in the early years of World War II. The Moore School of Engineering worked with the Ballistic Research Laboratory (BRL), where a team of about 100 trained human "computers" calculated artillery firing tables by hand.

The job demanded a high level of mathematical skill, including the ability to solve nonlinear differential equations and to use differential analyzers and slide rules. Yet the calculations were regarded as clerical work, a task too tedious for male engineers. So BRL hired women, mostly with university degrees and an aptitude for mathematics, to do it.

As the war went on, the ability to predict shell trajectories became ever more tightly bound to military strategy, and BRL pressed harder and harder for results.

In 1942, the physicist John Mauchly wrote a memo proposing a general-purpose programmable electronic calculator that could automate the computations. By June 1943, Mauchly and the engineer J. Presper Eckert had secured funding to build ENIAC.


J. Presper Eckert, John Mauchly, Betty Jean Jennings, and Herman Goldstine in front of ENIAC

The electronic computer was meant to replace the hundreds of human computers at BRL and to increase the speed and efficiency of calculation. But Mauchly and Eckert realized that their new machine would need to be programmed to compute trajectories using punched cards, a technology IBM had been using in its machines for decades.

Adele and Herman Goldstine, a married couple who supervised the human computers at BRL, suggested drawing the strongest mathematicians from their team for this work. They chose six — Kathleen McNulty, Frances Bilas, Betty Jean Jennings, Ruth Lichterman, Elizabeth Snyder, and Marlyn Wescoff — and promoted them from human computers to operators.


Elizabeth "Betty" Snyder at work on ENIAC

Their first task was to get to know ENIAC inside and out. They studied the machine's blueprints to understand its electronic circuits, its logic, and its physical structure. There was plenty to learn: the 30-ton monster occupied about 140 square meters and used more than 17,000 vacuum tubes, 70,000 resistors, 10,000 capacitors, 1,500 relays, and 6,000 manual switches. The team of six operators was responsible for configuring the machine for particular calculations, operating the punch-card equipment, and hunting down errors. For that, the operators sometimes had to crawl inside the machine to replace a failed vacuum tube or faulty wiring.

ENIAC was not finished in time to compute shell trajectories during the war. But its power was soon harnessed by John von Neumann for nuclear fusion calculations, which required more than a million punched cards. The physicists at Los Alamos relied entirely on the operators' programming skills, since only they knew how to manage computations of that scale.


ENIAC programmer Kathleen McNulty

Yet the contribution of the female programmers received very little recognition or appreciation, in part because programming was still closely associated with manual calculation and was therefore regarded as less than professional work, suitable only for women. The leading engineers and physicists focused on designing and building the hardware, which they considered more important to the future of computing.

So when ENIAC was finally presented to the press in 1946, the six operators remained hidden from public view. The Cold War was dawning, and the US military was eager to showcase its technological superiority. By presenting ENIAC as an autonomous intelligent machine, the engineers painted a picture of technological wizardry while obscuring the human labor behind it.

The tactic worked, and it shaped media coverage of computers for decades. As news of ENIAC spread around the world, the machine took center stage, earning epithets such as "electronic brain", "wizard", and "man-made robot brain".

The hard, painstaking work of the six operators, who crawled inside the machine replacing wiring and tubes so that it could perform its "intelligent" feats, went almost entirely unseen.

Why Alan Turing wanted artificial intelligence to make mistakes


Infallibility and intelligence are not the same thing.




In 1950, at the dawn of the digital age, Alan Turing published what would become his most famous paper, "Computing Machinery and Intelligence", in which he posed the question: "Can machines think?"

Rather than trying to define "machine" and "thinking", Turing described a different way of approaching the question, inspired by a Victorian parlor game called the imitation game. Under its rules, a man and a woman in different rooms communicate by passing notes through an intermediary. The intermediary, who also acts as judge, must guess which player is the man and which the woman, a task complicated by the fact that the man is trying to imitate the woman.

Inspired by this game, Turing devised a thought experiment in which one of the players was replaced by a computer. If the computer could be programmed to play the imitation game so well that the judge could not tell whether they were conversing with a machine or a human, Turing argued, it would be reasonable to conclude that the machine was intelligent.

This thought experiment became known as the Turing test, and it remains one of the most famous and contentious ideas in AI to this day. Its enduring appeal is that it offers an unambiguous answer to a deeply philosophical question: "Can machines think?" If a computer passes the Turing test, the answer is yes. As the philosopher Daniel Dennett wrote, the Turing test was meant to end the philosophical debate. "Instead of arguing interminably about the nature and essence of thinking," Dennett writes, "why don't we all agree that, whatever that nature is, anything that could pass this test undoubtedly has it."

But a closer reading of Turing's paper reveals a small detail that introduces some ambiguity into the test, suggesting that Turing may have intended it less as a practical test of machine intelligence than as a philosophical provocation.

In one passage, Turing offers a simulation of what the test might look like, using an imagined intelligent computer of the future. A human asks the questions and the computer answers.

Q: Please write me a sonnet on the subject of the Forth Bridge.

A: Count me out on this one. I never could write poetry.

Q: Add 34957 to 70764.

A: (after a pause of about 30 seconds) 105621.

Q: Do you play chess?

A: Yes.

Q: My king stands on e1; I have no other pieces. Your king is on e3 and your rook on a8. It is your move. What do you play?

A: (after a pause of about 15 seconds) Ra1, checkmate.

In this exchange the computer makes an arithmetic error: the true sum is 105721, not 105621. It is unlikely that Turing, a brilliant mathematician, made the mistake by accident. More likely, it is an Easter egg for the attentive reader.
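The slip is easy to check for yourself:

# Verifying the sum from Turing's imagined dialogue.
print(34957 + 70764)   # prints 105721, not the 105621 the machine gave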

Elsewhere in the paper, Turing seems to hint that the error is a programmer's trick designed to fool the judge. Turing understood that if attentive readers spotted an error in the computer's answers, they would conclude they were talking to a human, assuming a machine would never make such a mistake. Turing wrote that a machine could be programmed to "deliberately introduce mistakes into its answers, calculated to confuse the interrogator".

If the idea of using errors to suggest a human mind was hard to grasp in the 1950s, today it is an established design practice for programmers working on natural language processing. In June 2014, for example, the chatbot Eugene Goostman was declared the first computer to pass the Turing test. But critics pointed out that Eugene succeeded only thanks to a built-in trick: it posed as a 13-year-old boy for whom English was not a native language. Its errors of syntax and grammar, and its gaps in knowledge, were thus attributed to naivety and immaturity rather than to an inability to process natural language.

Similarly, when Google's Duplex voice assistant impressed audiences with its conversational pauses and filler sounds, many pointed out that this behavior was not the product of the system's thinking but a deliberately programmed performance designed to simulate human thought.

Both cases embody Turing's insight that computers can be made to err on purpose in order to impress us. Like Turing, the programmers of Eugene Goostman and Duplex understood that a surface imitation of human imperfection can deceive us.

Perhaps, then, the Turing test measures not whether a machine has a mind but our readiness to grant it one. As Turing himself said: "The idea of intelligence is itself emotional rather than mathematical. The extent to which we regard something as behaving intelligently is determined as much by our own state of mind and training as by the properties of the object under consideration."

And perhaps intelligence is not some substance that can be programmed into a machine, as Turing apparently assumed, but a quality that emerges through social interaction.

The DARPA dreamer who aimed for cybernetic intelligence


Joseph Carl Robnett Licklider proposed a "man-computer symbiosis", which led to the invention of the Internet




At 10:30 p.m. on October 29, 1969, a graduate student at the University of California, Los Angeles sent a two-letter message from an SDS Sigma 7 computer to another machine several hundred kilometers away at the Stanford Research Institute in Menlo Park.

The message read: “LO.”

The student had meant to type "LOGIN", but ARPANET, the packet-switching network carrying the message, crashed before he could finish the word.

In the history of the Internet this moment is celebrated as the start of a new era of online communication. Less often remembered is that ARPANET's technical infrastructure rested on a radical idea about the future symbiosis of humans and computers, developed by a man named Joseph Carl Robnett Licklider.

Licklider, trained as a psychologist, became interested in computers in the late 1950s while working at a small consulting firm. He was intrigued by how these machines might amplify humanity's collective intelligence, and he began following research in the fast-developing field of AI. Surveying the literature of the day, he found that programmers aimed to "teach" machines to perform existing human tasks, such as playing chess or translating texts, only with greater efficiency and quality than humans could manage.

This conception of machine intelligence did not sit well with Licklider. In his view, the problem was that the prevailing paradigm treated humans and machines as intellectually equivalent beings. Licklider believed that people and machines are in fact fundamentally different in their cognitive capabilities and strengths: humans excel at certain intelligent tasks, such as those requiring creativity or judgment, while computers excel at others, such as storing data and processing it rapidly.

Rather than making computers imitate human intellectual activity, Licklider proposed a collaboration between people and machines in which each side plays to its strengths. He suggested that such a strategy would shift the focus away from competition (such as humans playing computers at chess) and toward previously unimaginable forms of intellectual activity.

Licklider laid out the idea in his 1960 paper "Man-Computer Symbiosis": "The hope is that, before too long, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way no machine of today can." For Licklider, a promising example of such a symbiosis was the system of computers, networking equipment, and human operators known as the Semi-Automatic Ground Environment, or SAGE, which had been deployed two years earlier to monitor United States airspace.

In 1963, Licklider was hired as a director at the Advanced Research Projects Agency (then called ARPA, now DARPA), where he had the chance to put some of his ideas into practice. He was particularly interested in designing and building what he initially called an "intergalactic computer network".

The idea came to him when he realized that ARPA needed an efficient way to keep far-flung teams of people and machines up to date on programming languages and technical protocols. The solution was a communications network linking those teams over long distances. The problems of connecting them resembled the ones science fiction writers pondered, as he noted in the memo describing the concept: "How do you get communications started among totally uncorrelated sapient beings?"


Licklider, an MIT professor, with his student Jeff Harris

Licklider left ARPA before the program to build the network got under way. But over the next five years, his lofty ideas became integral to the development of ARPANET. And as ARPANET evolved into what we now know as the Internet, some came to see this new medium of communication as a collaboration of human and technological agents, a symbiont that sometimes behaves, as the Belgian cyberneticist Francis Heylighen put it, like a "global brain".

Today, many significant breakthroughs in applied machine learning rest on humans and machines working together. The freight industry, for example, is increasingly looking for ways to let human drivers and computing systems each apply their strengths to improve delivery efficiency. Likewise in transportation, Uber has developed a system in which humans handle the tasks that demand real driving skill, such as merging onto and exiting the highway, while the cars take over the hours of routine highway driving.

There are many other examples of human-machine symbiosis, yet the cultural tendency to imagine machine intelligence as a freestanding supercomputer with a human-level mind remains strong. In truth, the cyborg future Licklider envisioned has already arrived: we live in a world of machine-human symbiosis of the kind he described as "living together in intimate association, or even close union, of two dissimilar organisms". Rather than dwelling on the fear that machines will replace people, Licklider's legacy reminds us of the possibilities of working with them.

Algorithmic bias appeared back in the 1980s


A medical school assumed a computer program would make its admissions process fairer; the opposite turned out to be true




In the 1970s, Dr. Geoffrey Franglen of St George's Hospital Medical School in London began writing an algorithm to screen applicants' admissions applications.

At the time, about three-quarters of the roughly 2,500 people who applied each year were screened out by assessors who evaluated their written applications, and never reached the interview stage. About 70% of those who survived this initial screening went on to win a place at the medical school. So the initial screening was a critically important step.

Franglen was the vice-dean and handled applications himself. Reading them consumed an enormous amount of time, and it struck him that the process could be automated. He studied the screening technique that he and the other assessors used, then wrote a program that, in his words, "mimicked the behavior of the human assessors".

Franglen's main motivation was to make the admissions process more efficient, but he also hoped his algorithm would iron out inconsistencies in the staff's judgments. By handing the process over to a technical system, he hoped, every applicant would be assessed by exactly the same standard, producing a fairer screening process.

In reality, the opposite happened.

Franglen finished the algorithm in 1979. That year, applications were assessed in parallel by the computer and by humans. Franglen found that his system agreed with the assessors' ratings in 90 to 95 percent of cases. For the administration, those numbers justified replacing the officials with the algorithm, and by 1982 every initial application to the school was being evaluated by the program.

A few years later, some staff members grew concerned about the lack of diversity among admitted students. An internal review of Franglen's program turned up rules that weighed applicants on seemingly irrelevant factors such as place of birth and name. Franglen, however, assured the committee that these rules had been derived from data on the assessors' own decisions and had no significant effect on selection.

In December 1986, two of the school's lecturers learned of this internal review and went to the UK Commission for Racial Equality. They told the commission they had reason to believe the computer program was being used to covertly discriminate against women and people of color.

The commission launched an investigation. It found that the algorithm sorted candidates into "Caucasian" and "non-Caucasian" on the basis of their names and places of birth. A non-European name alone counted against an applicant, subtracting 15 points from their total score. The commission also found that women's scores were reduced by 3 points on average. By one estimate, as many as 60 applications a year were rejected on these grounds.
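No source code for Franglen's program is publicly available, but the commission's findings describe rule-based scoring of roughly the following shape. The Python sketch below is a hypothetical reconstruction: the field names, base score, and structure are assumptions, with only the 15-point and 3-point penalties taken from the findings reported above.

# Hypothetical reconstruction of the kind of rules the commission
# described; everything except the two penalties is invented.
def screening_score(applicant: dict) -> int:
    score = applicant["academic_score"]   # assumed base merit score
    # A non-European name cost an applicant 15 points...
    if not applicant["european_name"]:
        score -= 15
    # ...and women were docked about 3 points on average.
    if applicant["sex"] == "female":
        score -= 3
    return score

# Two applicants identical on merit diverge purely on name and sex.
a = {"academic_score": 80, "european_name": True,  "sex": "male"}
b = {"academic_score": 80, "european_name": False, "sex": "female"}
print(screening_score(a), screening_score(b))   # 80 62

The point of the sketch is how unremarkable such code looks: the bias sits in two innocuous-looking lines, which is exactly why staff who trusted the program's output never saw it.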

Racial and gender discrimination were rife at British universities at the time; St George's was caught only because it had entrusted its bias to a computer program. Because it was demonstrable that the algorithm scored women and people with non-European names lower, the commission had hard evidence of discrimination.

The medical school was found guilty of discrimination but got off rather lightly. In an attempt to make amends, it contacted people who might have been unlawfully discriminated against, and three of the rejected applicants were offered places. The commission noted that the school's problem was not merely technical but cultural: many staff members treated the screening algorithm's verdicts as final and never troubled to ask how it sorted applicants.

At a deeper level, it is clear that the algorithm perpetuated the biases that already existed in the admissions system. After all, Franglen had validated the machine against human assessors and found 90 to 95 percent agreement. By encoding the assessors' discrimination into the machine, he guaranteed its endless repetition.

The discrimination case at St George's attracted a great deal of attention. As a result, the commission barred information on race and ethnicity from appearing in admissions applications. But that modest step did not stop the spread of algorithmic bias.

Algorithmic decision-making systems are increasingly deployed in high-stakes domains such as health care and criminal justice, and there is serious concern that they repeat and amplify existing social biases embedded in historical data. In 2016, ProPublica reporters showed that software used in the US to predict future criminality was biased against African Americans. The researcher Joy Buolamwini later showed that Amazon's facial recognition software errs most often on dark-skinned women.

Although machine bias is fast becoming one of the most discussed topics in AI, algorithms are still often treated as mysterious and infallible mathematical objects that produce rational, unbiased output. As the AI critic Kate Crawford argues, it is time to accept that algorithms are "creations of human design" and inherit our biases. The cultural myth of the infallible algorithm often obscures this fact: our AI is only as good as we are.

How Amazon squeezed its mechanical Turks into the machine


Today's invisible digital workers resemble the hidden operator of the 18th-century Mechanical Turk




At the turn of the millennium, Amazon began expanding its services beyond selling books. As the number of product categories on the company's website grew, it needed new ways to organize and catalog them. Part of the task was removing the tens of thousands of duplicate product listings appearing on the site.

Programmers tried to build software that could eliminate the duplicates automatically. Identifying and deleting them seemed like a simple task, well within a machine's reach. But the programmers soon gave up, calling it an "impossible" data-processing task. It required human intelligence: the ability to notice subtle inconsistencies or similarities in images and text.

That left Amazon with a problem. Spotting duplicate products was trivial for humans, but the sheer volume of listings would demand an enormous amount of labor, and managing workers devoted to a single task like this would be anything but trivial.

A company manager, Venky Harinarayan, hit on a solution. His patent describes a "hybrid machine/human computing arrangement" that breaks a task into small subtasks and distributes them across a network of human workers.

For duplicate removal, a central computer could divide the Amazon website into small sections, say, batches of 100 product pages, and send the sections out to people over the Internet. They would then identify the duplicates within their sections and send their pieces of the puzzle back.

The distributed system offered a crucial advantage: workers did not have to be gathered in one place; they could complete their subtasks on their own computers, wherever they were and whenever they wished. In effect, Harinarayan had devised an efficient way to spread low-skilled but hard-to-automate work across a wide network of people working in parallel.
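In outline, the patent's hybrid arrangement works something like the Python sketch below: a coordinating program splits the catalog into batches, farms each batch out to a human worker, and merges the answers. The function names, batch size, and use of threads here are illustrative assumptions, not Amazon's actual implementation.

from concurrent.futures import ThreadPoolExecutor

# Illustrative stand-ins for Amazon's catalog and its human workforce.
catalog = [f"product-{i}" for i in range(1000)]
BATCH = 100   # e.g., 100 product pages per subtask

def looks_like_duplicate(product: str) -> bool:
    # Placeholder: this judgment call is exactly the human part.
    return False

def ask_human(batch: list) -> list:
    # Stand-in for routing a subtask to a human worker, who returns
    # the duplicates spotted in their batch.
    return [p for p in batch if looks_like_duplicate(p)]

# The machine's role: split the task into batches...
batches = [catalog[i:i + BATCH] for i in range(0, len(catalog), BATCH)]

# ...let the "workers" act in parallel, wherever they are...
with ThreadPoolExecutor() as pool:
    results = list(pool.map(ask_human, batches))

# ...and merge the piecework back together.
duplicates = [d for result in results for d in result]
print(f"{len(duplicates)} duplicates flagged")

The machine contributes only the splitting and the merging; all of the actual judgment happens inside the human "function calls", which is precisely what the rest of this section describes.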

The method proved so effective inside the company that Jeff Bezos decided to sell the system as a service to outside organizations. Bezos turned Harinarayan's technology into a labor marketplace, where businesses with tasks that were easy for humans (but hard for robots) could tap a network of freelance workers willing to perform them for a small fee.

Thus was born Amazon Mechanical Turk, or mTurk for short. The service launched in 2005, and its user base grew quickly. Businesses and researchers around the world began posting so-called Human Intelligence Tasks to the platform, such as transcribing audio or labeling images. The tasks were performed by an international, anonymous pool of workers for small sums (one frustrated worker complained that the average payment was 20 cents).

The new service's name referred to the 18th-century chess-playing machine, the Mechanical Turk invented by Wolfgang von Kempelen. And just as that fake automaton concealed a human chess player inside, the mTurk platform was designed to conceal human labor. Workers on the platform are identified by numbers rather than names, and communication between employer and worker is stripped of personal detail. Bezos himself called this hidden workforce "artificial artificial intelligence".

Today mTurk is a thriving marketplace with hundreds of thousands of workers around the world. And while the platform provides income for people who may lack access to other work, the working conditions are highly questionable. Some critics argue that by hiding and atomizing its workers, Amazon makes them easier to exploit. A research paper from December 2017 found that workers' median earnings were about $2 per hour, and that only 4% of workers earned more than $7.25 per hour.

Notably, mTurk has become a critical service for the development of machine learning. A machine learning program is fed a large dataset from which it learns to find patterns and draw inferences. MTurk workers are often the ones who create and label those datasets, and their role in the development of machine learning usually remains in the shadows.

The dynamic between the AI community and mTurk fits a pattern that has run through the entire history of machine intelligence. We readily marvel at the appearance of autonomous "intelligent machines" while ignoring, or deliberately hiding, the human labor that creates them.

Perhaps we can learn something from Edgar Allan Poe. When he examined von Kempelen's Mechanical Turk, he did not succumb to the illusion. Instead, he wondered what it was like for the hidden chess player to sit inside the box, "squeezed" among the gears and levers in a "painful and unnatural posture".

Today, as headlines about AI breakthroughs fill the news, it is worth remembering Poe's sober approach. It can be thrilling, if dangerous, to give in to the hype around AI and get carried away by visions of machines that need no mere mortals. But look closely enough, and you will see the traces of human labor.
