Fears of an artificial-intelligence takeover may rest on unscientific assumptions

Original author: Eleni Vasilaki (translation)

Should we be afraid of artificial intelligence (AI)? For me, this is a simple question with an even simpler answer: no. But not everyone agrees - many people, including the late Stephen Hawking, have expressed concern that the rise of powerful AI systems could spell the end of humanity.

Clearly, your view on whether AI will take over the world depends on whether you believe it can develop intelligent behaviour surpassing that of humans - so-called "superintelligence". Let us consider how likely that is, and why there is so much concern about the future of AI.

People usually fear what they do not understand. Fear is often blamed for racism, homophobia and other sources of discrimination, so it is no surprise that it also applies to new technologies, which are frequently surrounded by a certain mystery. Some technological achievements seem almost unrealistic, exceeding expectations and, in some cases, human abilities.

No ghost in the machine

Let's strip the mysticism from the most popular family of AI techniques, known as "machine learning". These allow a machine to learn a task without being programmed with explicit instructions. This may sound spooky, but in truth it all comes down to rather mundane statistics.

The machine - that is, the program, or more precisely the algorithm - is designed to discover relationships present in its input data. There are many different methods for achieving this. For example, we can show the machine images of handwritten letters and ask it to identify which letter each one is. We have already given it the possible answers: they can only be letters of the alphabet. At first the machine names a letter at random, and we correct it by supplying the right answer. We have also programmed the machine to adjust itself, so that the next time it sees that letter it is more likely to answer correctly. As a result, over time the machine improves its performance and "learns" to recognise the alphabet.

In effect, we have programmed the machine to exploit common regularities in the data to achieve a particular goal. For example, all variants of the letter "a" look structurally similar to each other but different from "b", and the algorithm can use this. Interestingly, after the training phase, the machine can apply this knowledge to new examples of letters - for instance, ones written by someone whose handwriting it has never seen.
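The guess-and-correct loop described above can be sketched as a tiny perceptron. This is a hedged toy: the 3x3 "letters", the noise model and the update rule are all invented for illustration, not taken from any real handwriting system.

```python
import random

# Two crude 3x3 bitmaps, flattened to 9 pixels: an 'a'-like and a
# 'b'-like shape (made up for this sketch).
A = [0, 1, 0,  1, 1, 1,  1, 0, 1]
B = [1, 1, 0,  1, 1, 0,  1, 1, 1]

def noisy(img):
    """Flip one random pixel: a 'new handwriting' variant of a letter."""
    img = img[:]
    img[random.randrange(len(img))] ^= 1
    return img

w = [0.0] * 9   # one weight per pixel
bias = 0.0

def predict(x):
    """+1 means 'a', -1 means 'b'."""
    s = bias + sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s >= 0 else -1

random.seed(1)
for _ in range(200):
    # The machine guesses; when it is wrong, we give the correct
    # answer and it nudges its weights (the "self-tuning" step),
    # making the right answer more likely next time.
    x, label = random.choice([(noisy(A), 1), (noisy(B), -1)])
    if predict(x) != label:
        for i in range(9):
            w[i] += label * x[i]
        bias += label

# After training, the clean prototypes are classified correctly,
# and noisy variants it has never seen generally are too.
print(predict(A), predict(B))
```

The key point matches the text: nothing here is programmed to "know" letters; the weights simply accumulate the pixel-level regularities that distinguish the two classes.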

We give AI the answers

However, humans already know how to read. A more interesting example is Google DeepMind's artificial Go player, which has beaten every human player. It clearly does not learn the way people do: it plays the game against itself more times than any human could in an entire lifetime. It is programmed to win and told that winning depends on its own actions; it is also taught the rules of the game. By playing the same game over and over, it can discover the best move in each situation, inventing moves that humans have never played.
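The self-play idea can be illustrated on a far smaller game. The sketch below is a toy under many assumptions and is not DeepMind's actual algorithm (which combines deep networks with tree search); it uses simple tabular learning to master Nim purely by playing against itself.

```python
import random
from collections import defaultdict

random.seed(0)

# Nim: players alternately remove 1-3 stones from a heap; whoever
# takes the last stone wins. The same policy plays both sides.
Q = defaultdict(float)          # Q[(heap, move)] -> learned value of a move
ALPHA, EPSILON = 0.1, 0.2       # learning rate, exploration rate

def moves(heap):
    """Legal moves: remove 1-3 stones, never more than remain."""
    return [m for m in (1, 2, 3) if m <= heap]

def pick(heap):
    if random.random() < EPSILON:                        # sometimes explore
        return random.choice(moves(heap))
    return max(moves(heap), key=lambda m: Q[(heap, m)])  # otherwise exploit

for _ in range(50000):          # self-play episodes
    heap, history = 10, []
    while heap > 0:
        m = pick(heap)
        history.append((heap, m))
        heap -= m
    # Whoever took the last stone wins. Walking backwards through the
    # game, the reward alternates sign between the two players' moves.
    reward = 1.0
    for heap_before, m in reversed(history):
        Q[(heap_before, m)] += ALPHA * (reward - Q[(heap_before, m)])
        reward = -reward

# Optimal Nim play leaves the opponent a multiple of 4, so from a
# heap of 10 the learned best move should be to take 2 stones.
print(max(moves(10), key=lambda m: Q[(10, m)]))
```

Note how little the machine is given: the rules (`moves`) and the win condition. Everything else - including the "leave a multiple of 4" strategy - emerges from sheer repetition, which is exactly the point the paragraph above makes.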

Toddlers versus robots

Is this Go-playing AI smarter than a human? Definitely not. AI is highly specialised, designed for one particular kind of task, and lacks the versatility of people. Over the years, humans come to understand the world in a way that no AI has managed, and probably none will manage in the near future.

The fact that AI is called "intelligent" comes down to its ability to learn. But even at learning it does not match humans. Toddlers can learn just by watching someone else solve a problem once. AI needs wagonloads of data and a great many attempts to succeed at very narrow tasks, and it struggles to generalise to tasks that differ much from those it was trained on. So while humans develop remarkable intelligence quite quickly in their first years of life, the key concepts of machine learning are not much different from what they were ten or twenty years ago.

A toddler's brain is amazing

The successes of modern AI owe less to breakthroughs in technique than to the sheer quantity of data and computing power available. Importantly, though, even an infinite amount of data will not give AI human-like intelligence: we first need to make significant progress on "general intelligence", a problem we have not even come close to solving.

In short, just because AI can learn, it does not follow that it will suddenly learn every aspect of human intelligence and surpass us. There is not even a simple definition of what human intelligence is, and we have little understanding of how it arises in the brain. But even if we could work that out and then build an AI that was smarter, it would not necessarily be more successful.

Personally, I am more worried about how people use AI. Machine-learning algorithms are often treated as black boxes, with little effort made to pinpoint the details of the solution an algorithm has found. This is an important and often ignored aspect: we are obsessed with performance and less with understanding. Understanding the solutions these systems discover matters, because only then can we assess whether they are the right solutions and whether we want them applied.

If, for example, we train our system badly, we can end up with a machine that has learned relationships that do not actually exist. Suppose we want to build a machine that evaluates the potential of prospective students to succeed in engineering. Probably a bad idea, but let's follow it through for the sake of the example. Traditionally this field has been male-dominated, so the training examples are likely to come from male students. If we do not make sure the training data is balanced, the machine may conclude that engineers can only be men, and wrongly apply that to future decisions.

Machine learning and AI are tools. Like anything else, they can be used well or badly. It is how they are used that should concern us, not the methods themselves. Human greed and human stupidity worry me far more than artificial intelligence.
