Why is the human brain so efficient?

Original author: Liqun Luo

How massive parallelism gives the brain an efficiency edge over AI




The brain is a complex organ: in humans it contains about 100 billion neurons and about 100 trillion connections between them. It is often compared to another complex system with enormous problem-solving power: the digital computer. Both the brain and the computer contain a large number of elementary units (neurons and transistors, respectively) wired into complex circuits that process information carried by electrical signals. At the global level, the architectures of the brain and the computer are somewhat similar: both consist of largely separate circuits for input, output, central processing, and memory.

Which is better at solving problems, the brain or the computer? Given the rapid development of computer technology in recent decades, one might conclude that the computer is winning. Indeed, computers have been built and programmed to defeat human masters at complex games, such as chess in the 1990s and, more recently, Go, as well as in contests of encyclopedic knowledge, such as the quiz show "Jeopardy!". Yet humans defeat computers at a wide variety of real-world tasks, from picking out a cyclist or a pedestrian on the road to lifting a cup of tea from the table and carefully moving it to the mouth, not to mention conceptualization and creativity.

Why does the computer do well at certain tasks while the brain does better at others? Comparing the computer and the brain has helped both engineers and neuroscientists to understand this question. Such a comparison was made at the dawn of the modern computer era in a small but influential book, The Computer and the Brain, by John von Neumann, a polymath who in the 1940s first devised the computer architecture scheme that still serves as the basis of modern computers. Let's look at the numbers behind these comparisons.

| Property | Computer | Brain |
| --- | --- | --- |
| Number of elementary units | Up to 10 billion transistors | ≈100 billion neurons and ≈100 trillion synapses |
| Speed of basic operations | 10 billion/s | <1,000/s |
| Precision | 1 in 4.2 billion (for a 32-bit processor) | 1 in 100 |
| Power consumption | 100 W | 10 W |
| Information processing mode | Mostly serial | Serial and massively parallel |
| Inputs and outputs per elementary unit | 1–3 | ≈1,000 |
| Signaling mode | Digital | Digital and analog |

Computer figures are for 2008-era hardware. The number of transistors per integrated circuit has continued to double every 18–24 months, but performance gains have slowed over time because of problems with power consumption and heat dissipation.

The computer has enormous advantages over the brain in the speed of basic operations¹. Today, personal computers can perform elementary arithmetic operations, such as addition, at a speed of about 10 billion operations per second. We can estimate the speed of the brain's elementary operations from the elementary processes by which neurons transmit information and communicate with one another. For example, neurons fire action potentials: spikes of electrical activity initiated near the neuron's cell body and transmitted along its long extensions, axons, which connect it to downstream neurons. Information is encoded in the frequency and timing of these spikes. The maximum firing rate of a neuron is about 1,000 spikes per second. As another example, neurons pass information to their partner neurons mostly by releasing chemical neurotransmitters at specialized structures at the ends of axons, called synapses; the partner neurons convert the neurotransmitter signal back into electrical signals in a process called synaptic transmission. The fastest synaptic transmission takes about 1 ms. Thus, in terms of both spikes and synaptic transmission, the brain can perform at most about a thousand basic operations per second, 10 million times slower than a computer. This assumes that an arithmetic operation must convert input into output, so the speed is limited by the basic operations of neuronal communication, such as action potentials and synaptic transmission. There are exceptions to these limitations. For example, non-spiking neurons with electrical synapses (connections between neurons that do not use chemical neurotransmitters) can, in principle, transmit information faster than within a millisecond; so can events occurring locally in dendrites.
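
To make the arithmetic explicit, here is a small Python sketch of the speed comparison, using the article's figures as assumed inputs rather than measurements:

```python
# Back-of-the-envelope comparison of basic operation rates,
# using the figures quoted in the text (assumed values, not measurements).

cpu_ops_per_sec = 10_000_000_000   # ~10 billion elementary arithmetic ops/s
max_spike_rate = 1_000             # ~1,000 action potentials/s per neuron
synaptic_delay_s = 0.001           # fastest synaptic transmission, ~1 ms

# Either limit caps a neuron at about 1,000 basic operations per second.
neuron_ops_per_sec = min(max_spike_rate, 1 / synaptic_delay_s)

print(f"Computer / brain speed ratio: {cpu_ops_per_sec / neuron_ops_per_sec:,.0f}x")
# -> Computer / brain speed ratio: 10,000,000x
```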

The computer also has substantial advantages over the brain in the precision of basic operations. The computer can represent quantities with any desired precision using bits, the zeros and ones assigned to each number. For instance, a 32-bit number has a precision of 1 in 2^32, or about 4.2 billion. Empirical evidence suggests that most quantities in the nervous system (for instance, the firing rate of a neuron, which is often used to represent the intensity of a stimulus) fluctuate by a few percent due to biological noise; that is, the precision is at best 1 in 100, millions of times worse than a computer's. Incidentally, this noise reflects the fact that many neural processes are inherently probabilistic: the same stimulus can elicit different sequences of spikes from the same neuron.
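
To illustrate this probabilistic character, here is a toy Python simulation in which the same stimulus, modeled as a fixed-rate Poisson-like process (a common simplifying assumption, not a claim from the article), produces a different spike train on every trial:

```python
import numpy as np

# Toy model of probabilistic neural responses: the same stimulus, represented
# by a fixed mean firing rate, yields a different spike train on every trial.
# Rate, duration, and bin size are assumed illustrative values.

rng = np.random.default_rng(0)
rate_hz, duration_s, bins = 100.0, 1.0, 1000      # 1 ms time bins
p_spike = rate_hz * duration_s / bins             # spike probability per bin

for trial in range(3):
    train = rng.random(bins) < p_spike            # Boolean spike train
    print(f"trial {trial}: {train.sum()} spikes") # counts fluctuate trial to trial
```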

Nevertheless, the calculations performed by the brain are neither slow nor imprecise. For example, a professional tennis player can follow the trajectory of a ball flying at up to 260 km/h, move to the optimal spot on the court, position the arm, and swing the racket to return the ball to the opponent's side of the court, all within a few hundred milliseconds. Moreover, the brain accomplishes all of this (with the help of the body it controls) while consuming about ten times less power than a personal computer. How does the brain do it? An important difference between the computer and the brain is the mode in which each system processes information. The computer performs tasks largely in sequential steps, as can be seen in how programmers write code: as a stream of sequential instructions. High precision is required at each step of such a sequence, because errors accumulate and amplify from step to step. The brain also uses sequential steps to process information. In the tennis example, information flows from the eyes to the brain and then to the spinal cord to control the contraction of muscles in the legs, torso, arm, and wrist.

But the brain also employs massively parallel processing, taking advantage of its huge number of neurons and connections. For example, the moving tennis ball activates many cells in the retina, photoreceptors, which convert light into electrical signals. These signals are then transmitted to many different types of retinal neurons. By the time the photoreceptor signals have passed through two or three synaptic connections within the retina, information about the ball's position, direction, and speed has already been extracted by parallel neural circuits and transmitted to the brain. Similarly, the motor cortex (the part of the cerebral cortex responsible for voluntary movement) simultaneously sends commands controlling the contraction of muscles in the legs, torso, arms, and wrists, so that the body and arms simultaneously assume the position optimal for returning the ball.

This massively parallel strategy works because each neuron collects input from, and sends output to, a multitude of other neurons: on average, about 1,000 incoming and 1,000 outgoing connections per neuron in mammals. By comparison, a transistor has only three terminals in total for input and output. Information from a single neuron can be delivered along many parallel paths, and, conversely, many neurons processing the same information can pool their outputs onto a single downstream neuron. The latter property is especially useful for increasing the precision of information processing. For example, the information represented by a single neuron may be noisy (say, with a precision of 1 in 100). By taking the average of inputs from 100 neurons carrying the same information, a downstream neuron can represent that information with much higher precision (in this case, about 1 in 1,000). Suppose the standard deviation σ of each input roughly corresponds to its noise. For the average of n independent inputs, the expected standard deviation of the mean is σ_mean = σ/√n. In our example, σ = 0.01 and n = 100, so σ_mean = 0.001.
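
This averaging argument is easy to check numerically; the following Python sketch uses the values from the example (σ = 0.01, n = 100) with simulated Gaussian noise:

```python
import numpy as np

# Numerical check of the averaging argument: each of n inputs carries the
# same signal plus independent noise with standard deviation sigma = 0.01
# (precision ~1 in 100). Averaging n = 100 such inputs should shrink the
# noise to sigma / sqrt(n) = 0.001 (precision ~1 in 1,000).

rng = np.random.default_rng(42)
sigma, n, trials = 0.01, 100, 100_000

inputs = 1.0 + rng.normal(0.0, sigma, size=(trials, n))  # noisy copies of signal 1.0
averaged = inputs.mean(axis=1)                           # downstream neuron's estimate

print(f"single-input noise : {inputs[:, 0].std():.4f}")  # ~0.0100
print(f"averaged noise     : {averaged.std():.4f}")      # ~0.0010
```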

The computer and the brain show both similarities and differences in the signaling modes of their elementary units. The transistor uses a digital representation of information, with discrete values (0 or 1). The spike traveling along an axon is also a digital signal, since at any moment a neuron either fires or does not, and when it fires, nearly all spikes have roughly the same size and shape; this property allows spikes to be transmitted reliably over long distances. However, neurons also exploit analog signals, which represent information with continuous values. Some neurons (like most retinal neurons) do not fire spikes at all; their output is transmitted by graded electrical signals which, unlike spikes, can vary in size and can therefore convey more information than spikes. The receiving end of a neuron (typically its dendrites) likewise uses analog signaling to integrate up to thousands of inputs simultaneously, which allows the dendrites to perform complex computations.
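
Schematically, the two signaling modes can be contrasted as follows; the functions below are illustrative caricatures with made-up parameters, not biophysical models:

```python
# Illustrative caricatures of digital vs. analog signaling (not biophysical models).

def digital_spike(potential_mv, threshold_mv=-50.0):
    """All-or-none: the neuron either fires a spike or it does not;
    the spike's size carries no information."""
    return 1 if potential_mv >= threshold_mv else 0

def graded_output(potential_mv, rest_mv=-70.0, gain=0.05):
    """Analog: the output varies continuously with the membrane potential,
    as in non-spiking retinal neurons."""
    return max(0.0, (potential_mv - rest_mv) * gain)

for v_mv in (-60.0, -52.0, -45.0):
    print(f"{v_mv} mV -> spike: {digital_spike(v_mv)}, graded: {graded_output(v_mv):.2f}")
```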

Another notable property of the brain, clearly at work in the tennis example, is that the strength of connections between neurons can be modified by activity and experience; neuroscientists believe this process underlies learning and memory. Repeated training allows the neural circuits to become better tuned to the task, substantially increasing its speed and precision.
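
A classic abstraction of this idea is the Hebbian learning rule, in which a synapse strengthens in proportion to correlated activity on its two sides; the sketch below uses arbitrary illustrative values:

```python
# A minimal Hebbian-style update: "neurons that fire together wire together."
# Learning rate and activity values are arbitrary illustrative choices.

def hebbian_update(weight, pre_activity, post_activity, learning_rate=0.1):
    """Strengthen the synapse in proportion to correlated pre- and postsynaptic activity."""
    return weight + learning_rate * pre_activity * post_activity

w = 0.5
for _ in range(5):  # repeated "practice" with correlated activity
    w = hebbian_update(w, pre_activity=1.0, post_activity=0.8)
print(f"synaptic weight after repeated pairing: {w:.2f}")  # 0.90, up from 0.50
```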

Over the past decades, engineers have drawn inspiration from the brain to improve computers. The principles of parallel processing and use-dependent modification of connection strengths have both made their way into modern computing. For example, a current trend in computer design is increased parallelism, such as the use of multiple processor cores in a single machine. Another example is deep learning, a branch of machine learning and artificial intelligence that has enjoyed tremendous success in recent years and is responsible for the rapid progress in object and speech recognition on computers and mobile devices; it was inspired by discoveries about the mammalian visual system².
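
Before turning to deep learning, here is a minimal sketch of the first of those trends, task-level parallelism across processor cores, using only Python's standard library; the workload is a placeholder standing in for independent subtasks:

```python
# A minimal sketch of task-level parallelism across CPU cores, using only
# Python's standard library; the workload is a stand-in for independent subtasks.

from concurrent.futures import ProcessPoolExecutor

def work(n: int) -> int:
    return sum(i * i for i in range(n))  # placeholder computation

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:   # one worker per core by default
        results = list(pool.map(work, [200_000] * 8))
    print(f"{len(results)} subtasks completed in parallel")
```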

Deep learning, imitating the mammalian visual system, uses multiple layers, each representing increasingly abstract features of the input (visual or auditory), and the weights of the connections between layers are adjusted through training rather than engineered by hand. These recent advances have expanded the list of tasks computers can take on. Still, the brain retains superior flexibility, generalizability, and learning ability. As neuroscientists uncover more of the brain's secrets (aided, increasingly, by computers), engineers will be able to draw further inspiration from the brain to improve the architecture and performance of computers. Whichever comes out ahead on a particular task, such cross-disciplinary comparisons will benefit both neuroscience and computer engineering.
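
As a rough illustration of these ideas (not the architecture of any actual deep learning system), here is a minimal two-layer network in Python/NumPy whose connection weights are adjusted by training rather than set by hand; the task, layer sizes, and learning rate are arbitrary choices for the sketch:

```python
import numpy as np

# A minimal two-layer network trained by gradient descent on XOR: each layer
# re-represents its input, and the weights between layers are learned from
# data rather than engineered. All sizes and rates are arbitrary choices.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)   # input -> hidden layer
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)   # hidden -> output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)                     # hidden features, learned
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)          # gradient of squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(0)

print(out.ravel().round(2))  # typically approaches [0, 1, 1, 0]
```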

1. Patterson, D. A. & Hennessy, J. L. Computer Organization and Design, 4th ed. (Elsevier, Amsterdam, 2012).

2. LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).

Liqun Luo is a professor in the School of Humanities and Sciences and a professor of neurobiology at Stanford University.
