Artificial intelligence has hit the barrier of understanding

Original author: Melanie Mitchell
Machine learning algorithms do not yet understand reality as people do — sometimes with disastrous consequences.

About the author: Melanie Mitchell is a professor of computer science at Portland State University and a visiting professor at the Santa Fe Institute. Her book, Artificial Intelligence: A Guide for Thinking Humans, will be published in 2019 by Farrar, Straus and Giroux.


A visitor at the Artificial Intelligence Expo in South Africa, September 2018. Photo: Nic Bothma / EPA, via Shutterstock

You have probably heard that we are in the midst of an AI revolution. We are told that machine intelligence is progressing at an astounding pace, powered by “deep learning” algorithms that use huge amounts of data to train complex programs known as “neural networks”.
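To make that idea concrete, here is a minimal sketch, in PyTorch, of what “training a neural network” on data means. The toy task, the network size, and every other detail are my own illustrative assumptions, not a description of any system mentioned in this article.

```python
# A toy illustration of "training a neural network": weights are adjusted,
# step by step, to reduce prediction error on a pile of labeled examples.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: 1,000 random 2-D points, labeled by which quadrants they fall in.
X = torch.randn(1000, 2)
y = (X[:, 0] * X[:, 1] > 0).long()

# A small "neural network": layers of weighted sums with nonlinearities between them.
model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)   # how wrong are the current predictions?
    loss.backward()               # how did each weight contribute to the error?
    optimizer.step()              # nudge every weight to reduce the error

accuracy = (model(X).argmax(dim=1) == y).float().mean().item()
print(f"training accuracy: {accuracy:.2%}")
```

Deep learning scales this same loop up to networks with billions of weights and training sets with millions of examples; nothing in the loop itself involves understanding what the data mean.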

Today's programs can recognize faces and transcribe speech. We have programs that detect subtle financial fraud, find relevant web pages in response to ambiguous queries, and plot the optimal driving route to almost any destination; programs that defeat grandmasters at chess and Go, and translate between hundreds of languages. What's more, we are promised that self-driving cars, automatic cancer diagnosis, housecleaning robots, and even automated scientific discovery are on the way, soon and everywhere.

Facebook founder Mark Zuckerberg recently said that over the next five to ten years the company will develop AI that “surpasses the human level in all of the primary senses: vision, hearing, language, general cognition.” Shane Legg, chief scientist at Google DeepMind, has predicted that “human-level AI will be passed in the mid-2020s.”

As someone who has worked in AI for several decades, I have witnessed the failure of many such predictions. And I am confident that these latest forecasts will fall short as well. The challenge of creating humanlike intelligence in machines remains greatly underestimated. Today's AI systems sorely lack the essence of human intelligence: understanding the situations we experience, being able to grasp their meaning. The mathematician and philosopher Gian-Carlo Rota famously asked, “I wonder whether or when AI will ever crash the barrier of meaning.” To me, this is still the most important question.

The lack of humanlike understanding in machines is underscored by cracks that have recently appeared in the foundations of modern AI. Although today's programs are much more impressive than the systems of 20 or 30 years ago, a series of studies has shown that deep learning systems can be unreliable in decidedly unhuman ways.

I will give a few examples.

The speech-recognition program on my phone transcribes the phrase “The bareheaded man needed a hat” as “The bear headed man used a hat.” Google Translate renders “I put the pig in the pen” into French as “Je mets le cochon dans le stylo” (translating “pen” as a writing instrument rather than an enclosure).

Programs that “read” documents and answer questions about them are easily fooled when short, irrelevant snippets of text are added to the document. Similarly, programs that recognize faces and objects (a celebrated triumph of deep learning) fail when the input is slightly modified by certain kinds of lighting, image filtering, or other alterations that do not affect humans' recognition of the objects in the slightest.
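The reading-comprehension failure can be sketched in a few lines with the Hugging Face transformers library. This is a hedged illustration in the spirit of Jia and Liang's 2017 study “Adversarial Examples for Evaluating Reading Comprehension Systems”, from which the passage and distractor are adapted; the library's default question-answering model is an assumption, and whether this exact distractor fools a given model is not guaranteed.

```python
# Appending one irrelevant sentence to a passage can flip a QA model's answer.
from transformers import pipeline

qa = pipeline("question-answering")  # downloads a default SQuAD-trained model

context = (
    "In January 1880, Tesla moved to the city of Prague, where he attended "
    "lectures at Charles-Ferdinand University."
)
question = "What city did Tesla move to in 1880?"
print(qa(question=question, context=context)["answer"])  # expected: Prague

# A short sentence that is irrelevant to the question but superficially matches it.
distractor = " Tadakatsu moved to the city of Chicago in 1881."
print(qa(question=question, context=context + distractor)["answer"])
```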

One recent study found that adding a small amount of “noise” to the image of a face seriously degrades the performance of modern face-recognition programs. Another study, humorously titled “The Elephant in the Room,” showed that inserting a small image of an out-of-place object, such as an elephant, into the corner of a living-room picture strangely causes deep learning vision systems to misclassify other objects in the scene.

Moreover, programs that have learned to play a particular video or board game at a “superhuman” level are completely lost when the game is changed even slightly (the background on the screen is altered, or the position of the virtual “paddle” for hitting the “ball” is moved).

These are just a few examples demonstrating how unreliable the best AI programs can be when faced with situations that differ, even slightly, from those they were trained on. The errors of these systems range from ridiculous and harmless to potentially catastrophic. Imagine, for example, an airport security system that will not let you board your flight because your face is confused with that of a criminal, or a self-driving car that, because of unusual lighting conditions, fails to notice that you are about to step into the crosswalk.

Even more alarming are recent demonstrations of AI's vulnerability to so-called “adversarial” examples. In these, a malicious hacker makes specific changes to an image, a sound, or a text that are imperceptible or insignificant to humans yet can cause potentially catastrophic AI errors.

The possibility of such attacks has been demonstrated in nearly every application of AI, including computer vision, medical image processing, and speech recognition and processing. Numerous studies have shown how easily hackers can fool face- or object-recognition systems with minuscule changes to a picture. Inconspicuous stickers on a “Stop” road sign cause the machine vision system of a self-driving car to mistake it for a “Yield” sign, and a modification of an audio signal that sounds like background music to a human covertly orders Siri or Alexa to carry out a specific command.
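One of the simplest published attacks of this kind, and a minimal sketch of how such perturbations work, is the fast gradient sign method (FGSM) of Goodfellow and colleagues. The pretrained model, the random stand-in “image,” and the perturbation size below are illustrative assumptions.

```python
# FGSM: perturb the input a tiny step in the direction that most increases
# the classifier's error, and watch the prediction change.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Stand-in input; in practice this would be a real, properly normalized photo.
x = torch.rand(1, 3, 224, 224, requires_grad=True)
original_class = model(x).argmax(dim=1)

# Gradient of the loss with respect to the input pixels themselves.
loss = F.cross_entropy(model(x), original_class)
loss.backward()

epsilon = 0.03  # small enough to be nearly invisible on a real photo
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

adversarial_class = model(x_adv).argmax(dim=1)
print("prediction changed:", original_class.item() != adversarial_class.item())
```

Because the perturbation is bounded by epsilon per pixel, a human looking at the two images would see essentially no difference, while the model's output can change completely.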

These potential vulnerabilities illustrate why current progress in AI has run up against the barrier of understanding. Anyone who works with AI systems knows that behind the facade of humanlike vision, speech, and game-playing prowess, these programs do not, in any humanlike way, understand the input they receive or the output they produce. The lack of such understanding leaves these programs open to unexpected errors and imperceptible attacks.

What will it take to overcome this barrier, so that machines can understand the situations they face more deeply rather than relying on shallow features? To find the answer, we need to look to the study of human cognition.

Our own understanding of the situations we encounter rests on broad, intuitive “common sense” notions of how the world works and of the goals, motives, and likely behavior of other living beings, especially other people. In addition, our understanding of the world relies on our basic abilities to generalize what we know, to form abstract concepts, and to draw analogies; in short, to flexibly adapt our concepts to new situations. For decades researchers have experimented with giving machines intuitive common sense and robust human abilities to generalize, but there has been little progress in this very difficult endeavor.

AI programs that lack common sense and other key aspects of human understanding are increasingly being deployed in real-world applications. While some people worry about “superintelligent” AI, the most dangerous aspect of AI systems is that we will trust them too much and grant them too much autonomy without being fully aware of their limitations. As the researcher Pedro Domingos noted in his book The Master Algorithm: “People worry that computers will get too smart and take over the world, but the real problem is that they're too stupid and they've already taken over the world.”

The race to commercialize AI has put tremendous pressure on researchers to build systems that work “well enough” on narrow tasks. But ultimately, the goal of developing trustworthy AI will require a deeper investigation of our own remarkable abilities, and new insight into the cognitive mechanisms we ourselves use to reliably understand the world around us. Overcoming AI's barrier of understanding will probably require a step back, away from ever bigger networks and data sets, and back to the field's roots as an interdisciplinary science studying the most challenging of scientific problems: the nature of intelligence.
