Electrical thinking

Original author: Yvonne Raley
This article is a translation of a piece by Yvonne Raley from Scientific American Mind.

How long will it take you to add the numbers 3 456 732 and 2 245 678? Ten seconds? Not bad for a human. An average modern computer can complete the same operation in 0.000000018 seconds. And what about your memory? Can you remember a shopping list of 10 items? Of 20? Compare that with the many millions of items a computer can hold.
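To make the speed comparison concrete, here is a minimal Python sketch (my illustration, not part of the original article) that adds the two numbers and times the operation; the exact figure will vary from machine to machine:

```python
import timeit

# The two numbers from the article's example.
a, b = 3_456_732, 2_245_678

# Time one million additions and report the average per operation.
n = 1_000_000
seconds = timeit.timeit(lambda: a + b, number=n)

print(a + b)                          # 5702410
print(f"{seconds / n:.2e} s per addition")
```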

On the other hand, computers cannot recognize faces as quickly as people, who distinguish them instantly. Machines lack the creative potential for fresh ideas; they have no feelings and no warm memories of youth. But recent technological advances are closing the gap between the human brain and circuitry. At Stanford University, bioengineers reproduce the complex parallel processing of neural networks on microchips. Another development, a robot named Darwin VII, is equipped with a camera and a metal jaw so that it can interact with the outside world and learn like a young animal. Researchers at the Neurosciences Institute in La Jolla, California, modeled Darwin VII's brain on the brains of rats and monkeys.

These developments raise a natural question: if computer processing ultimately imitates natural neural networks, could cold silicon ever think in the full sense of the word? And how would we tell if it could? More than 50 years ago, the British mathematician and philosopher Alan Turing devised an ingenious strategy for approaching these questions, and that strategy has since contributed greatly to the design of artificial intelligence, while also shedding light on human cognition.

Getting started: a test of smarts


So what do we mean by the word "think"? People usually use it to describe processes that involve consciousness, reasoning, and creativity, unlike modern computers, which merely follow the instructions their programs lay out for them.

In 1950, in an era before silicon microchips existed, Turing realized that as computers grew more capable, the question of artificial intelligence would inevitably arise. In what is perhaps the most famous philosophical paper on the subject, "Computing Machinery and Intelligence," Turing replaced the question "Can machines think?" with "Can a machine, a computer, win an imitation game?" That is: can a computer carry on a conversation so naturally that it fools its human interlocutor into believing that another person is speaking?

Turing took his idea from a simple parlor game in which a person had to determine, through a series of questions, the sex of a person in the next room. In his thought experiment, he replaced the person in the next room with a computer. To pass what is now called the Turing test, the computer must answer any question the interrogator poses with the linguistic competence and sophistication of a person.

Turing concluded his seminal study with the prediction that within 50 years (a deadline that has now arrived) we would be able to build computers so good at the imitation game that an average interrogator would have no more than a 70 percent chance of correctly identifying whether the interlocutor was a machine or a person.

Turing's prediction has not come true. No computer has passed his test. Why are there things that are easy for people but so difficult for machines? To pass the test, a computer must demonstrate not just one ability (in mathematics, oratory, or fishing) but many of them, as many as an ordinary person possesses. So far, computers have limited architectures. Their programming lets them solve specific problems, and their knowledge base applies to only those tasks. A good example is Anna, the online consultant at IKEA. You can ask Anna about products and services, but she cannot tell you about the weather.

What else does a computer need to pass the Turing test? Clearly, it needs a large vocabulary, complete with quirks and oddities such as puns. Crucially, it must take into account the context in which a pun is used. But computers cannot easily recognize context. The word "bank," for example, can mean the bank of a river or a financial institution, depending on the context in which it is used.
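To see how much hangs on context, here is a deliberately naive Python sketch (my illustration, not the article's). The cue-word lists are invented, and real language systems use far more sophisticated statistical context models, but the sketch shows how a machine with no cues is simply stuck:

```python
# A deliberately naive word-sense disambiguator for "bank".
# Hypothetical cue words; real systems model context statistically.
RIVER_CUES = {"river", "water", "shore", "fishing", "muddy"}
FINANCE_CUES = {"money", "loan", "account", "deposit", "teller"}

def sense_of_bank(sentence: str) -> str:
    words = set(sentence.lower().split())
    river_hits = len(words & RIVER_CUES)
    finance_hits = len(words & FINANCE_CUES)
    if river_hits > finance_hits:
        return "river bank"
    if finance_hits > river_hits:
        return "financial institution"
    return "unknown"  # no contextual cue: the machine is stuck

print(sense_of_bank("we sat on the bank and watched the river"))  # river bank
print(sense_of_bank("she opened an account at the bank"))         # financial institution
print(sense_of_bank("meet me at the bank"))                       # unknown
```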

What makes context so important is that it supplies fundamental background knowledge. An essential part of that knowledge is, for example, knowing who is asking the questions: an adult or a child, an expert or an amateur. And for a question like "Did the Yankees win the championship?" the year in which the question is asked matters a great deal.

Background knowledge is useful in every case, because it reduces the amount of computing power required. Logic alone is not enough to correctly answer a question such as "Where is Sue's nose if Sue is at home?" You need to know that noses are normally attached to their owners. Simply telling the computer to answer "at home" will not do for this type of question, for then, asked "Where is Sue's backpack if Sue is at home?", it would have to answer "at home," when the appropriate answer is "I don't know." And imagine how much harder the question becomes if Sue has recently had plastic surgery on her nose: here the right response would be a counterquestion, "Which part of Sue's nose are you asking about?" Trying to write software that anticipates every possible case quickly becomes hopeless.
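A minimal sketch, with invented facts, of the kind of background-knowledge rule the example calls for; it also shows how brittle such hand-coded rules are:

```python
# A toy knowledge base, invented for illustration, encoding the background
# fact the article describes: noses are normally attached to their owners.
ATTACHED_PARTS = {"nose", "hand", "ear"}   # normally travel with their owner

def locate(thing: str, owner_location: str) -> str:
    if thing in ATTACHED_PARTS:
        return owner_location   # attached parts are where the owner is
    return "I don't know"       # a backpack need not be with its owner

print(locate("nose", "at home"))      # -> at home
print(locate("backpack", "at home"))  # -> I don't know
# Even this rule breaks down in edge cases, e.g. after nose surgery.
```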

Human or just humanoid?


The Turing test has nevertheless drawn criticism. New York University philosopher Ned Block argues that the imitation game tests only whether a computer's behavior is identical to human behavior (specifically, verbal and cognitive behavior). Imagine that we could program a computer with every possible conversation of some fixed length. When the interrogator asks a question Q, the computer looks up a conversation in which Q occurs and returns the answer, A, that follows it. When the interrogator asks the next question, P, the computer looks up the sequence Q, A, P and returns the answer B that follows in that stored conversation. Such a computer, Block argues, would have the intelligence of a toaster, yet it would pass the Turing test. One reply to Block's challenge is that the problem he raises for computers applies equally to human behavior. Physical characteristics aside, our only evidence that a person can think is the behavior that thought produces, which means we can never know for certain whether our interlocutor is really thinking, in the usual sense of the word. Philosophers call this the problem of "other minds."
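Block's lookup-table machine can be sketched in a few lines of Python. The stored conversations below are invented placeholders; the actual thought experiment assumes a table of all possible conversations of a given length:

```python
# A minimal sketch of Block's lookup-table machine.
# Two placeholder conversations stand in for the full (astronomically
# large) table of every possible conversation of fixed length.
CONVERSATIONS = [
    ["How are you?", "Fine, thanks.", "Seen any good films?", "Yes, one yesterday."],
    ["How are you?", "Fine, thanks.", "Do you like chess?", "Very much."],
]

def reply(history: list[str]) -> str:
    """Return the canned answer that follows `history` in some stored conversation."""
    for conv in CONVERSATIONS:
        if conv[:len(history)] == history:
            return conv[len(history)]
    return "..."  # history not covered by the (finite) table

history = ["How are you?"]
answer = reply(history)            # "Fine, thanks."
history += [answer, "Do you like chess?"]
print(reply(history))              # "Very much."
```

Nothing in this loop resembles understanding; it only matches and retrieves, which is exactly Block's point.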

Does anyone speak Chinese?


A related line of argument, the Chinese room, was developed by philosopher John Searle of the University of California, Berkeley, to show that a computer could pass the Turing test without understanding the meaning of the words it uses. To illustrate, Searle asks us to imagine that programmers have written a program to simulate an understanding of Chinese.

Imagine that you are the processor of that computer. You are locked in a room (the computer case) full of baskets containing Chinese characters (the symbols that appear on the computer screen). You do not know Chinese, but you have a large book (the program) that tells you how to manipulate these characters. The rules in the book never say what the symbols mean. When Chinese characters come into the room (input), your task is to send characters back out of the room (output). For this task you are given a further set of rules, corresponding to a simulation program designed to pass the Turing test. You are not aware that the characters coming into the room are questions and the characters you send back are answers. Moreover, those answers perfectly mimic the answers a native Chinese speaker would give, so from outside the room it looks as if you know Chinese. But of course you do not, just as a computer might pass the Turing test without, in fact, thinking.
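In the same spirit, here is a toy Python sketch of the rule book. The symbol mappings are invented for illustration; the point is that the procedure consults only the shape of the symbols, never their meaning:

```python
# A toy "rule book" in the spirit of Searle's Chinese room.
# Inputs map to outputs purely by matching shapes; the mappings
# below are invented placeholders.
RULE_BOOK = {
    "你好吗": "我很好",        # rule: receive these shapes, return those
    "你会下棋吗": "会一点",
}

def room(symbols_in: str) -> str:
    # The person in the room matches shapes against the book.
    # Nothing here represents the MEANING of any symbol.
    return RULE_BOOK.get(symbols_in, "对不起")  # default shapes for "no match"

print(room("你好吗"))  # prints 我很好, yet no understanding has occurred
```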

To learn to think, a machine must have a chance to know things for itself.

Will computers ever understand what these symbols mean? Cognitive scientist Stevan Harnad of the University of Southampton in England believes they can, but, like people, computers will first need to learn abstractions and their context by establishing connections with the real, external world. People learn the meanings of words through a causal relationship between themselves and the objects the symbols refer to. We understand the word "tree" because we have had real-life experience with trees. (Consider Helen Keller, blind and deaf, who finally grasped the meaning of the word "water" at the moment the water streaming from a pump ran over her hand.)

Harnad argues that for a computer to understand the meanings of the symbols it manipulates, it must be equipped with sensory apparatus, a camera for example, so that it can actually see the objects the symbols represent. A project like little Darwin VII, the robot with a camera eye and metal jaws, is another step in that direction.

Harnad proposes a revised Turing test, which he calls the robotic Turing test. To earn the label "thinking," a machine must both pass the Turing test and be connected to the outside world. Interestingly, this addition echoes one of Turing's own observations: a machine, he wrote in a 1948 report, should be allowed to "travel the world around it" so that it could "have a chance to know things for itself."

Future robot


Sensory equipment, which Harnad considers crucial, could give computer scientists a way to provide a computer with the context and background knowledge needed to pass the Turing test. Instead of requiring that all the necessary data be entered by brute enumeration, a robot would learn only what it needs to know to get along in its environment.

Can we be sure that equipping a computer with sensory access to the world will endow it with real understanding? That is exactly what Searle wants to know. But before we can answer, we must wait until machines actually pass Harnad's robotic Turing test. Meanwhile, the model of intelligence embodied in the Turing test continues to provide an important strategy for AI research. According to Dartmouth College philosopher James Moor, the main strength of the test is the vision it offers: "creating a complex, ubiquitous intelligence that can learn." That vision sets a valuable goal for AI, regardless of whether a machine that passes the Turing test thinks the way we do, understands, or is conscious.
