What smart machines should learn from the neocortex

Original author: Jeff Hawkins

Computers have transformed work and entertainment, transportation and medicine, games and sports. Yet for all their power, these machines are still incapable of performing the simplest tasks a child can handle, such as navigating an unfamiliar room or using a pencil.

A solution to this problem is finally within reach. It will emerge at the intersection of two areas of research: reverse engineering the brain and the flourishing field of artificial intelligence. Over the next 20 years, these two areas will grow together and launch a new era of intelligent machines.

Why do we need to understand how the brain works in order to build smart machines? Although machine learning techniques such as deep neural networks have recently shown impressive results, they are still far from intelligent, far from being able to understand and act on the outside world the way we do. The only example of intelligence, of the ability to learn, plan, and carry out plans, is the brain. We must therefore understand the principles underlying human intelligence and apply them when developing truly intelligent machines.

At our company, Numenta, located in Redwood City, California, we study the neocortex, the largest component of the brain and the part chiefly responsible for intelligence. Our goal is to understand how it works and to identify the principles underlying human intelligence. In recent years we have made significant progress, identifying several properties of biological intelligence that, we believe, should be built into future thinking machines.

To understand these properties, we need to start with some basic biology. The human brain is similar to a reptile's brain. Both have a spinal cord that controls reflexes, a brain stem that controls autonomic functions such as breathing and heart rate, and a midbrain that controls emotions and the simplest behaviors. But humans, and all mammals, have something reptiles do not: a neocortex.

The neocortex is a deeply folded sheet about 2 mm thick. Stretched flat, it would be roughly the size of a large napkin. In humans it takes up about 75 percent of the brain's volume. It is this part that makes us smart.

At birth, the neocortex knows almost nothing; it learns through experience. Everything we learn about the world, from driving a car to operating a coffee machine to the thousands of other things we interact with daily, is stored in the neocortex. It learns what these objects are, where they are, and how they behave. The neocortex also generates motor commands, so when you cook a meal or write a program, it is the neocortex controlling that behavior. Language, too, is created and understood by the neocortex.

The neocortex, like the rest of the brain and nervous system, is made up of cells called neurons. So to understand how the brain works, we need to start with neurons. Your neocortex contains about 30 billion of them. A typical neuron has one tail-like axon and several tree-like extensions called dendrites. If we think of a neuron as a kind of signaling system, the axon is the transmitter and the dendrites are the receivers. Along a neuron's dendritic branches lie 5,000 to 10,000 synapses, each of which connects to a corresponding synapse on another neuron. In total, the brain contains more than 100 trillion synaptic connections.
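
Purely as an illustration of the anatomy just described (the class and field names below are my own, not Numenta's code or terminology), a minimal sketch of a neuron as a data structure might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class Synapse:
    presynaptic_cell: int   # index of the neuron whose axon feeds this synapse
    permanence: float       # how established the connection is

@dataclass
class DendriteBranch:
    synapses: list = field(default_factory=list)   # thousands per neuron, spread over branches

@dataclass
class Neuron:
    branches: list = field(default_factory=list)       # the "receivers"
    axon_targets: list = field(default_factory=list)   # the "transmitter": cells this neuron projects to
```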

Your experience of the outside world, such as recognizing a friend's face, enjoying music, or holding a bar of soap in your hand, arises from input from your eyes, ears, and other senses that travels to the neocortex and causes groups of neurons to fire. When a neuron fires, its electrochemical spike travels along the axon and across synapses to other neurons. If a receiving neuron gets enough input, it may fire in turn and activate other neurons. Of the 30 billion neurons in the neocortex, 1 to 2 percent are firing at any moment, which means millions of neurons are active at any given time. The set of active neurons changes as you move and interact with the world. Your sense of the world, what you might call your conscious experience, is determined by these constantly changing patterns of active neurons.

In the neocortex, these patterns are stored through the formation of new synapses. That storage is what lets you recognize faces and places when you see them again, and recall them from memory. For example, when you think of your friend's face, a pattern of firing neurons arises in the neocortex that is similar to the pattern produced when you actually see that face.

The neocortex is at once remarkably complex and remarkably simple. It is complex because it is divided into dozens of regions, each responsible for different cognitive functions. Within each region are many layers of neurons, as well as dozens of neuron types, and these neurons are wired together in intricate circuits.

The neocortex can also be called simple, because the detailed circuitry of every region is nearly identical. Over the course of evolution, a single algorithm emerged that applies to everything the neocortex does. The existence of such a universal algorithm is an exciting prospect: if we can decipher it, we can get to the heart of what "intelligence" is and build that knowledge into future machines.

But isn't AI already doing this? Aren't all AIs built on "neural networks" similar to those in the brain? Not really. AI technologies do draw on neuroscience, but they use a greatly oversimplified neuron model that omits essential aspects of real neurons, and their units are not connected the way neurons are in the brain's complex architecture. The differences are many, and they matter. That is why today's AIs are good at labeling images or recognizing speech but cannot reason, plan, or act creatively.

Recent advances in understanding how the neocortex works suggest how the thinking machines of the future might be built. I will describe three aspects of biological intelligence that are essential but missing from today's AI: learning by rewiring, sparse distributed representations, and embodiment, which means using movement to learn about the structure of the world.

Learning by rewiring: The brain displays remarkable learning properties. First, we learn fast. Second, learning is incremental: we can learn something new without retraining the brain from scratch and without forgetting what we have already learned. Third, the brain learns continuously: as we move through the world, planning and acting, we never stop learning. Fast, incremental, and continuous learning are essential ingredients that allow intelligent systems to adapt to a changing world. Neurons are responsible for learning, and the complexity of real neurons makes them powerful learning machines.

In recent years, neuroscientists have learned some remarkable things about dendrites. Each dendritic branch acts as a set of pattern recognizers. It turns out that 15 to 20 active synapses on a branch are enough to recognize a pattern of activity in a large population of neurons. A single neuron can therefore recognize hundreds of distinct patterns. Some of these patterns cause it to fire, while others change the cell's internal state and act as predictions of future activity.
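
A rough sketch of that idea, under the simplifying assumption that a branch just counts how many of its synapses connect to currently active cells (the 15-to-20 figure above becomes a threshold; the function names are mine):

```python
def branch_recognizes(branch_synapses, active_cells, threshold=15):
    """A dendritic branch 'recognizes' a pattern when enough of its synapses
    connect to currently active cells. branch_synapses is a set of
    presynaptic cell indices; active_cells is the set of cells firing now."""
    overlap = sum(1 for src in branch_synapses if src in active_cells)
    return overlap >= threshold

def neuron_matches(branches, active_cells, threshold=15):
    """A neuron with several independent branches can recognize hundreds of
    different patterns: it responds if any one branch matches."""
    return any(branch_recognizes(b, active_cells, threshold) for b in branches)
```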

Neuroscientists used to believe that learning occurred solely by modifying the effectiveness of existing synapses, so that when an input arrives, a synapse becomes more or less likely to make its neuron fire. But we now know that most learning happens by growing new synapses between cells: the brain rewires itself. Up to 40 percent of a neuron's synapses are replaced with new ones every day. New synapses produce new connection patterns between neurons, and therefore new memories. And because dendritic branches are largely independent, when a neuron learns to recognize a new pattern on one dendrite, it does not disturb what the neuron has already learned on its other dendrites.

That is why we can learn new things without corrupting old memories, and why we do not need to retrain the brain from scratch every time we learn something new. Today's neural networks lack these capabilities.
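
Continuing the toy model above, learning by rewiring can be sketched as growing synapses on a fresh branch rather than adjusting existing weights, so previously learned patterns are left untouched. This is only an illustration of the principle, not Numenta's actual algorithm:

```python
import random

def learn_new_pattern(neuron_branches, active_cells, synapses_per_branch=20):
    """Learn a new pattern by growing a new dendritic branch whose synapses
    connect to a sample of the currently active cells. Existing branches,
    and therefore previously learned patterns, are not modified."""
    sample = random.sample(sorted(active_cells),
                           min(synapses_per_branch, len(active_cells)))
    neuron_branches.append(set(sample))

# Usage: each call adds knowledge incrementally, without retraining from scratch.
branches = []
learn_new_pattern(branches, active_cells=set(range(0, 300, 10)))   # one pattern of active cells
learn_new_pattern(branches, active_cells=set(range(5, 305, 10)))   # a second, unrelated pattern
```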



Smart machines do not have to simulate the full complexity of biological neurons, but the capabilities enabled by dendrites and learning by rewiring are essential. These capabilities should be built into future AI systems.

Distributed representations: The brain and the computer represent information in different ways. Any combination of zeros and ones is possible in a computer's memory, so changing a single bit can completely change the meaning, just as changing a single letter can turn one word into an unrelated one, "cat" into "cot", for example. Such a representation is brittle.

The brain instead uses what are called sparse distributed representations, or SDRs. They are called sparse because relatively few neurons are active at any given time. Which neurons are active changes constantly as you move and think, but the percentage always stays small. If we treat each neuron as one bit, the brain uses thousands of bits to represent information (far more than the 8-bit or 64-bit words in a computer), but only a small fraction of those bits are 1 at any moment; all the rest are 0.

Suppose you want to represent the concept "cat" with an SDR. You might use 10,000 neurons, of which 100 are active. Each active neuron represents some aspect of a cat, say "pet", "furry", or "clawed". If a few neurons die, or a few extra ones become active, the new SDR is still a good representation of the cat, because most of the active neurons are the same. So instead of a brittle representation, an SDR is robust to errors and noise. When we build silicon versions of the brain, they will inherit this robustness.
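
To make the contrast with dense computer codes concrete, here is a minimal sketch of an SDR as a set of active cell indices out of a large population, using the 10,000/100 numbers above. The helper names are mine, not those of any existing library:

```python
import random

POPULATION = 10_000   # total cells
ACTIVE = 100          # cells active at any moment (1% sparsity)

def random_sdr(rng):
    """An SDR is simply the set of indices of currently active cells."""
    return set(rng.sample(range(POPULATION), ACTIVE))

def corrupt(sdr, n_flips, rng):
    """Silence n_flips active cells and activate n_flips others (noise)."""
    dropped = set(rng.sample(sorted(sdr), n_flips))
    added = set(rng.sample(sorted(set(range(POPULATION)) - sdr), n_flips))
    return (sdr - dropped) | added

rng = random.Random(0)
cat = random_sdr(rng)
noisy_cat = corrupt(cat, n_flips=10, rng=rng)

# Even with 10% of its active cells changed, the noisy SDR still shares
# 90 of 100 cells with the original, so it is still recognizably "cat".
print(len(cat & noisy_cat))   # 90
# A dense code has no such slack: flipping one bit of an integer changes its value outright.
```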

Two features of SDRs are worth highlighting. First, overlap makes it easy to compare two things and to see how they are similar and how they differ. Suppose one SDR represents a cat and another represents a bird. Both SDRs will have the same neurons active for "pet" and "clawed", but not for "furry". The example is simplified, but the overlap property matters because it lets the brain see at a glance how objects are alike and how they differ. It gives the brain an ability to generalize that computers lack.
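
In the set-of-indices view, overlap is literally a set intersection. A small hand-built sketch, with arbitrarily chosen indices standing in for feature cells:

```python
# A few cells stand for shared features (indices chosen arbitrarily for illustration).
PET, CLAWED, FURRY, FEATHERED = 17, 42, 256, 980

cat  = {PET, CLAWED, FURRY, 1203, 4487, 7710}     # plus cells unique to "cat"
bird = {PET, CLAWED, FEATHERED, 310, 5609, 9001}  # plus cells unique to "bird"

shared = cat & bird      # {17, 42}: "pet" and "clawed"
different = cat ^ bird   # everything the two concepts do not have in common
print(shared, different)
```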

The second property, union, lets the brain entertain several ideas at once. Imagine I see an animal dart into the bushes, but I only glimpse it, so I am not sure what I saw. It could have been a cat, a dog, or a monkey. Because SDRs are distributed, a population of neurons can activate all three SDRs simultaneously without confusing them, since the SDRs do not interfere with one another. This ability of neurons to form unions of SDRs makes them a good tool for handling uncertainty.
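
The union property is just as direct in this sketch: activating the union of several SDRs keeps each candidate recoverable, because sparsity makes accidental overlap between unrelated SDRs tiny. Again, the names and thresholds below are illustrative assumptions:

```python
import random

rng = random.Random(1)

def random_sdr(population=10_000, active=100):
    return set(rng.sample(range(population), active))

cat_sdr, dog_sdr, monkey_sdr, bird_sdr = (random_sdr() for _ in range(4))

def union(*sdrs):
    """Activate the cells of several SDRs at once."""
    out = set()
    for s in sdrs:
        out |= s
    return out

def could_be(candidate, active_cells, min_fraction=0.9):
    """A candidate is 'present' if nearly all of its cells are active."""
    return len(candidate & active_cells) >= min_fraction * len(candidate)

# "Something ran into the bushes": hold cat, dog, and monkey at the same time.
glimpse = union(cat_sdr, dog_sdr, monkey_sdr)
print(could_be(cat_sdr, glimpse), could_be(dog_sdr, glimpse))   # True True
print(could_be(bird_sdr, glimpse))   # False: unrelated sparse SDRs barely overlap
```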

These SDR properties are fundamental to how the brain understands, thinks, and plans. We cannot create smart machines without using SDRs.

Embodiment: The neocortex receives input from the senses. Every time we move our eyes, limbs, or torso, that sensory input changes. This constantly changing input is the brain's main mechanism for learning. Imagine I hand you an object you have never seen before, say a stapler. How would you study it? You might walk around it, looking at it from different angles. You might pick it up, run your fingers over it, turn it over in your hands. You might push and pull on it to see how it behaves. Through this interactive process you learn the stapler's shape, how it feels, how it looks, and how it behaves. You make a movement, sense how the input changes, make another movement, sense the change again, and so on. Learning through movement is the brain's primary way of learning.

I am not saying a smart machine needs a physical body, only that it must be able to change what it senses through movement. A virtual AI, for example, could "move" through the web by following links and opening files. It could learn the structure of that virtual world through virtual movements, much as we learn a building by walking through it.

This brings us to an important discovery made at Numenta last year. In the neocortex, sensory data is processed by a hierarchy of regions. As the data passes from one level of the hierarchy to the next, ever more complex features are extracted, until at some point the object can be recognized. Deep learning networks also use hierarchy, but they often need 100 levels of processing to recognize an image, whereas the neocortex achieves the same result with only four. Deep learning networks also require millions of training examples, while the neocortex can learn a new object from just a few movements and sensations. The brain is doing something fundamentally different from a typical artificial neural network. But what?

The nineteenth-century German physicist Hermann von Helmholtz was one of the first to offer an answer. He observed that although our eyes move three or four times a second, our visual perception remains stable. He reasoned that the brain must take eye movements into account; otherwise the whole world would appear to jump back and forth. Likewise, when you touch something, you would be confused if the brain processed only the tactile sensations and did not know that your fingers were moving. This principle of combining movement with changing sensations is called sensorimotor integration. How and where sensorimotor integration happens in the brain has been a mystery.

We discovered that sensorimotor integration occurs in every region of the neocortex. It is not a separate step but an integral part of how sensations are processed. Sensorimotor integration is a key part of the neocortex's "intelligence algorithm". We have a theory and a model of how neurons can do this, and it maps well onto the complex anatomy of a neocortical region.

What does this discovery imply for machine intelligence? Consider two kinds of files you might find on a computer. One is an image captured by a camera; the other is a computer-aided design, for example an Autodesk CAD file. The image is a two-dimensional array of visual features. The CAD file is also a set of features, but each one is assigned a location in three-dimensional space. A CAD file models the three-dimensional object itself, not how the object looks from one particular viewpoint. With a CAD file you can predict how the object will look from any angle and work out how it will interact with other three-dimensional objects. You cannot do that with an image. We discovered that each region of the neocortex learns three-dimensional models of objects in much the same way a CAD program does. Every time your body moves, the neocortex combines the motor command with the resulting sensory input to build these models of the world.
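
A very rough sketch of that idea, purely illustrative and far simpler than Numenta's actual model: an object is stored as features attached to locations in the object's own reference frame, learned one movement at a time, after which the model can predict what a sensor should feel at any location, independent of viewpoint.

```python
# Learn an object as {location-in-object-frame: feature}, one movement and
# sensation at a time, then predict what an upcoming movement will produce.
class ObjectModel:
    def __init__(self):
        self.features_at = {}   # (x, y, z) -> feature label

    def learn(self, location, feature):
        """One sensation at one location, e.g. from touching a stapler."""
        self.features_at[location] = feature

    def predict(self, location):
        """What should the sensor feel if it moves to this location?"""
        return self.features_at.get(location, "unknown")

stapler = ObjectModel()
stapler.learn((0, 0, 0), "flat metal base")
stapler.learn((0, 0, 3), "hinged top")
stapler.learn((8, 0, 2), "rubber pad")

print(stapler.predict((0, 0, 3)))   # "hinged top", regardless of viewing angle
```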

In retrospect, this observation makes sense. Intelligent systems need to learn multidimensional models of the world. Sensorimotor integration does not happen in just a few parts of the brain; it is a core principle of its operation, part of the intelligence algorithm. Smart machines will need to work the same way.

These three fundamental attributes of the neocortex, learning by rewiring, sparse distributed representations, and sensorimotor integration, will be cornerstones of machine intelligence. The thinking machines of the future may ignore many details of biology, but not these three. No doubt further discoveries in neuroscience await us, shedding light on other aspects of intelligence that future machines will need to include, but we can begin with what we know today.

From the earliest days of AI, critics have dismissed the idea of emulating the human brain, usually with the argument that "airplanes don't flap their wings". In fact, Wilbur and Orville Wright studied birds in detail. To create lift, they studied the shape of birds' wings and tested those shapes in a wind tunnel. For propulsion, they turned to something outside of bird flight: a propeller and an engine. For flight control, they observed how birds twist their wings to roll and use their tails to hold altitude, and that is what they did. Aircraft still use this approach today, although only part of the wing is deflected. In short, the Wright brothers studied birds and then decided which elements of bird flight were essential for human flight and which could be ignored. That is what we will do as we build thinking machines.

Thinking about the future, I worry that our goals are not ambitious enough. It is impressive that today's computers can classify images and recognize speech, but we are nowhere near creating genuinely intelligent machines. I believe it is vital that we do. Humanity's future successes, and perhaps even its survival, may depend on it. For example, if we are going to settle other planets, we will need machines that can fly into space on our behalf, construct buildings, extract resources, and solve complex problems on their own in environments where humans cannot survive. On Earth we face the challenges of disease, climate, and energy scarcity. Intelligent machines can help. For example, it should be possible to build intelligent machines that sense and act on a molecular scale. They would reason about protein folding and gene expression the way you and I think about computers and staplers. They could think and act a million times faster than a human. Such machines could cure diseases and help keep our world habitable.

In the 1940s, the pioneers of the computer age sensed that computers would become important and would likely transform human society. But they could not predict exactly how computers would change our lives. Likewise, we can be confident that truly intelligent machines will transform our world for the better, even if today we cannot predict exactly how. Twenty years from now we will look back and see that the breakthroughs in brain theory and machine learning happening today launched the era of true machine intelligence.
