How to teach artificial intelligence common sense

Original author: Clive Thompson


Five years ago, programmers at DeepMind, a London-based company specializing in AI, watched with delight as an AI taught itself to play a classic arcade game. They had used the fashionable technique of deep learning for a seemingly whimsical task: mastering Breakout, the Atari game in which you bounce a ball off a wall of bricks to make the bricks disappear.

Deep learning is self-teaching for machines; you feed an AI huge amounts of data, and it gradually begins to recognize patterns on its own. In this case, the data was what happened on the screen: blocky pixels representing bricks, a ball and a paddle. DeepMind's AI, a neural network made up of layered algorithms, knew nothing about the rules of Breakout, how it works, its goal or how to play it. The programmers simply let the network study the result of each action, each bounce of the ball, and see where it led.
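To make that trial-and-error idea concrete, here is a deliberately tiny Python sketch of this kind of loop: a network sees only raw pixel values and a reward, picks a move, and nudges its weights based on the outcome. This is not DeepMind's actual system (which used a deep Q-network); the environment, sizes and update rule below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the game: the real system fed raw Atari screen pixels to a
# deep network. Here a tiny fake "screen" hides which move will be rewarded,
# just so the loop has something to learn. Everything here is illustrative.
class ToyBreakout:
    def reset(self):
        self.good_move = rng.integers(3)          # 0=left, 1=stay, 2=right
        frame = rng.random(84 * 84)               # flattened fake "screen"
        frame[0] = self.good_move                 # clue buried in the pixels
        return frame

    def step(self, action):
        reward = 1.0 if action == self.good_move else 0.0
        return self.reset(), reward               # new frame, new hidden clue

W = np.zeros((3, 84 * 84))                        # pixels -> score for each move
env, eps, lr = ToyBreakout(), 0.1, 1e-4
frame = env.reset()

for _ in range(5000):
    scores = W @ frame
    # Mostly play the move the network currently rates best, but explore sometimes.
    action = int(rng.integers(3)) if rng.random() < eps else int(scores.argmax())
    next_frame, reward = env.step(action)
    # Feedback: pull the chosen move's score toward the reward it actually earned,
    # the "study the result of each bounce" idea stripped to its simplest form.
    W[action] += lr * (reward - scores[action]) * frame
    frame = next_frame
```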

The result was impressive skill. During the first few games, the AI just flailed around at random. After a few hundred games, it was hitting the ball accurately. By the 600th game, the network had hit on the expert move that human Breakout players use, tunneling through one column of bricks and sending the ball bouncing along the top of the wall on its own.

"That was a big surprise for us," said Demis Hassabis, CEO of DeepMind. "The strategy emerged from the system itself." The AI seemed to demonstrate an unusually subtle, human-like kind of thinking, a grasp of the concepts underlying the game. Because neural networks loosely mirror the structure of the human brain, the theory goes, they should in some sense mimic our style of thinking as well. This moment seemed to confirm the theory.

Then, last year, computer scientists at Vicarious, an AI company in San Francisco, proposed an interesting way to test the AI under more realistic conditions. They took an AI like the one used by DeepMind and trained it on Breakout. It played beautifully. Then they started tweaking the game slightly: raising the paddle in one test, adding an unbreakable area in the center of the field in another.

A human player would adapt to these changes quickly; the neural network could not. The seemingly super-intelligent AI could play only the exact kind of Breakout it had studied over hundreds of games. It could not absorb anything new.

"People don't just recognize patterns," Dileep George, a computer scientist and one of the founders of Vicarious, tells me. "We build models of the things we see. And these are causal models; we link cause and effect." People reason, drawing logical conclusions about the world around them; we carry a store of common-sense knowledge that helps us handle new situations. When we see a Breakout that differs slightly from the one we just played, we understand that it will most likely have similar rules and goals. The neural network, by contrast, understood nothing about Breakout. All it could do was follow the patterns it had learned. When the patterns changed, it was helpless.

Deep learning is currently the reigning technique in AI. In the six years since it burst into the mainstream, it has become the dominant way of teaching machines to perceive and make sense of the world. It powers Alexa's speech recognition, Waymo's self-driving cars and Google's instant translations. Uber, in a sense, is one huge optimization problem, and it uses machine learning to predict where passengers will need cars. Baidu, the Chinese tech giant, has 2,000 engineers working on neural networks. For years it seemed that deep learning would only keep improving and would inexorably produce a machine with the fluid, flexible intelligence of a person.

Some heretics, however, argue that deep learning is hitting a wall. They say that on its own it can never produce artificial general intelligence (AGI), because truly human-like thought is not just pattern recognition. It is time, they argue, to start working out how to endow AI with everyday common sense. If we fail, they warn, we will keep bumping into the limits of deep learning, such as pattern-recognition systems that can be fooled by altering a small part of the input, so that a deep-learning model mistakes a turtle for a rifle. But if we succeed, they say, we will see an explosion of safer and more useful devices: medical robots that can navigate a cluttered home, fraud-detection systems that do not trip over false positives, and more.

But what does true reasoning look like in a machine? And if deep learning cannot get us there, what can?



Gary Marcus, a thoughtful, bespectacled 48-year-old professor of psychology and neural science at New York University, is probably the best-known apostate of deep-learning orthodoxy.

Marcus first became interested in AI in the 1980s and '90s, when neural networks were still experimental, and his arguments have not changed since. "It's not that I'm late to the party and want to spoil it for everyone," Marcus told me when we met at his apartment near NYU (we are also friends). "As soon as the deep-learning explosion happened, I said, 'Guys, this is the wrong direction!'"

The deep-learning strategy then was no different from today's. Suppose you want a machine to teach itself to recognize daisies. First you code up algorithmic "neurons" and connect them in layers, like a sandwich (more layers make the sandwich thicker, or "deeper", hence "deep" learning). You show the first layer an image of a daisy, and its neurons fire or stay quiet depending on whether the image resembles the examples of daisies they have seen before. The signal then passes to the next layer, where the process repeats. Layer by layer, the network sifts the data and renders a verdict.
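As a rough sketch of that layered sifting, assuming a toy setup (a flattened 16x16 image and three flower classes, with sizes and names invented for illustration), the forward pass might look like this in Python:

```python
import numpy as np

def layer(x, W, b):
    # Each "neuron" sums its weighted inputs and passes the result on only if
    # it is positive (a ReLU threshold), roughly the "fires or not" behavior
    # described above.
    return np.maximum(0.0, W @ x + b)

rng = np.random.default_rng(1)
# Three stacked layers: the "sandwich". The input is a flattened 16x16
# grayscale image; the output is one score per class (daisy/sunflower/aster).
W1, b1 = rng.normal(size=(64, 256)) * 0.1, np.zeros(64)
W2, b2 = rng.normal(size=(32, 64)) * 0.1, np.zeros(32)
W3, b3 = rng.normal(size=(3, 32)) * 0.1, np.zeros(3)

image = rng.random(256)            # stand-in for a photo of a daisy
h1 = layer(image, W1, b1)          # first layer reacts to the raw pixels
h2 = layer(h1, W2, b2)             # next layer reacts to the first layer's pattern
scores = W3 @ h2 + b3              # final layer renders the "verdict"
print("predicted class:", ["daisy", "sunflower", "aster"][int(scores.argmax())])
```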

At first, the network guesses blindly; it starts life as a blank slate. The trick is to give it useful feedback. Every time the AI misses a daisy, the connections between neurons that led to the wrong answer are weakened; every time it gets one right, they are strengthened. Given enough time and enough daisies, the network becomes more accurate. It learns to latch onto patterns in the data that let it pick out a daisy every time (and not a sunflower or an aster). Over the years this core idea (start with a naive network and train it through repetition) has been refined and has proved useful in almost every application it has been tried on.
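Here is a minimal sketch of that feedback step, using a single layer of weights and made-up "daisy" data; the training data and dimensions are invented for illustration, but the nudge applied to the weights is the same basic idea as gradient descent in a real deep network.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pixels, lr = 256, 0.1
W = np.zeros(n_pixels)                        # the network starts as a blank slate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

# Invented training data: a "daisy" is an image whose first half is brighter;
# everything else is noise. A real system would use labeled photographs.
def example():
    is_daisy = rng.random() < 0.5
    img = rng.random(n_pixels)
    if is_daisy:
        img[: n_pixels // 2] += 1.0
    return img, 1.0 if is_daisy else 0.0

for _ in range(2000):
    img, label = example()
    guess = sigmoid(W @ img)                  # how strongly the network "sees" a daisy
    error = guess - label
    # Feedback: nudge every weight in the direction that would have made this
    # guess better (the strengthen-or-weaken step described above).
    W -= lr * error * img

img, label = example()
print("true label:", label, " network's guess:", round(float(sigmoid(W @ img)), 2))
```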

But Marcus was never convinced. For him, the blank slate is the problem: it assumes that humans build their intelligence purely by observing the world around them, and that machines can therefore do the same. Marcus believes that is not how people work. He follows the intellectual path laid out by Noam Chomsky, who argues that humans are born wired to learn, with built-in programs for acquiring language and interpreting the physical world.

For all their supposed resemblance to the brain, he notes, neural networks do not appear to work the way a human brain does. For one thing, they are far too data-hungry: in most cases, a network needs thousands or millions of examples to learn. Worse, every time you want a network to recognize a new kind of thing, you have to start over from scratch. A neural network trained to recognize canaries is of no use at all for recognizing birdsong or human speech.

"We don't need huge amounts of data to learn," says Marcus. His children did not need to see a million cars before they could recognize one. Better yet, they can generalize: the first time they see a tractor, they understand that it is something like a car. They can also reason counterfactually. Google Translate can give you the French equivalent of the English sentence "the glass was moved, and it fell off the table." But it does not understand the meaning of the words, so it cannot tell you what would happen if the glass were not moved. People, Marcus notes, grasp not only the patterns of grammar but also the logic behind the words. Give a child a made-up verb and they will most likely guess its past-tense form, even though they have never seen the word before. They were never "trained" on it.

"Deep-learning systems don't know how to integrate abstract knowledge," says Marcus, who founded a company that built AI designed to learn from less data (and sold it to Uber in 2016).

This year, Marcus published a preprint on arXiv arguing that without new approaches, deep learning may never get past its current limitations. What it needs, he says, is a breakthrough: built-in or supplementary rules that help AI reason about the world.
