Singularity: when can it be expected?

Original author: Dom Galeon

The term “artificial intelligence” was coined only about 60 years ago, and today it is far more than just a term. Many AI experts are now trying to work out where the technology is heading. One of the central questions is the technological singularity: the moment when machines reach a level of intelligence that exceeds that of humans.

And although the singularity is still mostly the stuff of science fiction, the possibility of it actually happening looks more and more real. Corporations and startups, including Google, IBM and others, are actively working on artificial intelligence, and the results are already visible: for example, robots that look like humans, can hold a conversation, read emotions (or at least try to), and perform various kinds of tasks.


Probably the most confident voice among those who consider the singularity inevitable is Ray Kurzweil, a director of engineering at Google. He believes that machines will become smarter than humans around the year 2045.

SoftBank CEO Masayoshi Son, himself a well-known futurist, is convinced that the singularity will arrive within this century, around 2047. The businessman is doing his best to accelerate its onset, launching his own projects and acquiring others: SoftBank recently bought the robotics startup Boston Dynamics from Google. The company also invests billions of dollars in technology ventures.

Not everyone shares the optimism of those who want to hasten the singularity; many consider strong AI dangerous.


Among those who fear intelligent machines are Elon Musk, Stephen Hawking and other scientists and businessmen. They argue that the emergence of strong AI will mark the beginning of the end of human civilization. Here are some expert opinions.

Louis Rosenberg, CEO of Unanimous AI

In my opinion, which I expressed at TED this summer, artificial intelligence will become sentient and overtake humans in its development, which is what people call the singularity. Why am I so sure this will happen? It's simple. Mother Nature has already proved that intelligence can emerge from a massive number of homogeneous computing elements (neurons) organized into adaptive networks (the brain).

In the early 90s I thought about this question, and at the time it seemed to me that AI would exceed human capabilities around 2050. Now I believe it will happen sooner, probably as early as 2030.

I believe that an artificial intelligence created on Earth is, in effect, no different from an alien AI arriving from another planet. Either way, this AI will have its own values, morality, feelings and interests.

To believe that the interests of AI will coincide with ours is absurdly naive, and to grasp what a conflict of interests might lead to, it is worth remembering what humans have done to nature and to other living beings on Earth.

Therefore, we need to prepare for the inevitable emergence of a sentient AI the same way we would prepare for the arrival of an intelligent ship from another solar system: as a potential threat to the existence of our own species.

What can be done? I do not believe we can delay the emergence of conscious AI. We humans are simply not good at restraining dangerous technologies. Not because we lack good intentions; we just rarely understand the potential threat of our own inventions, and by the time we come to our senses, it is usually too late.

Does this mean we are doomed? For a long time I thought so, and I wrote two novels about the inevitable downfall of humanity. But now I believe that humanity will survive if we become smarter, much smarter, and manage to stay ahead of the machines.

Pierre Barreau, CEO of Aiva Technologies

I believe there is a widespread misconception about how quickly a “superintelligence” will appear, rooted in the assumption that the exponential growth of computing performance can be taken for granted.

First, at the hardware level we are approaching the limits set by Moore's law, and there is no certainty that newer technologies, such as quantum computing, can keep increasing the performance of computer systems at the pace we have seen so far.

Second, at the software level we still have a long way to go. Most AI systems need enormous amounts of training to perform a single specific task, whereas we humans are far more efficient learners: a few examples and repetitions are enough for us.

Current AI is also very narrow. Such systems focus on specific problems, such as recognizing cats, dogs or cars in photographs, or composing music, but there is still no system that can do all of this at once.

That said, we should not be overly optimistic about the development of AI either. It seems to me there is too much hype around the topic, and our illusions about what AI can and cannot do may soon be dispelled.

If that happens, a new “AI winter” may set in, bringing with it a lower level of funding for artificial intelligence research. That is probably the worst thing that could happen in this field, and everything should be done to prevent such a scenario.

So when will the singularity come? I think it depends on what is meant by the term. If we mean an AI passing the Turing test and reaching a human level of reasoning, then this will happen somewhere around 2050. That does not mean the AI will necessarily be smarter than us.

If we are talking about an AI that is superior to humans in absolutely everything, then we must first understand how our own mind works, and only then can we think about creating something that surpasses it. The human brain is still a very hard problem that even the best of the best have not solved, and it is certainly more complex than the most elaborate neural networks, or combinations of them.

Raja Chatila, head of the Institute for Intelligent Systems and Robotics (ISIR) at Pierre and Marie Curie University

The concept of technological singularity has no technological or scientific basis.

The main argument is the so-called “law of accelerating returns,” promoted by the prophets of the technological singularity, chiefly Ray Kurzweil. It seems to stem from Moore’s law, which, as we know, is not a scientific law; it is an empirical observation based on the history of the electronics industry.

We all know the limits of Moore’s law (the point at which we reach quantum scales, for example), and the fact that a change of architecture could alter everything. It is important to understand that Moore’s law is not a law in the strict sense.
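To make explicit what the extrapolation behind such forecasts assumes (a back-of-the-envelope illustration, not part of Chatila's original argument): if computing capacity doubles roughly every two years, then after n years it grows by a factor of 2^(n/2), so 30 more years of uninterrupted doubling would mean about 2^15, or roughly 32,000 times today's capacity. The singularity timelines rest on the assumption that this compounding continues without running into physical or architectural limits.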

Yet the proponents of the singularity draw a parallel between the evolution of species and the evolution of technology without any real justification. They believe that the constant growth of computing power will eventually produce an artificial intelligence that surpasses the human mind, and they expect this to happen between 2040 and 2050.

But conventional computing devices are not minds at all. The human brain has about 100 billion neurons, and it is not just their number but, above all, their structure and the way they interact that allow us to think and act.

All we can do is design particular kinds of algorithms to achieve particular goals and solve particular problems, and call that intelligence. In reality, all of these systems are very limited in their capabilities.

My conclusion is that the singularity is a matter of faith, not science.

Gideon Shmuel, CEO of eyeSight Technologies

Figuring out how to make machines genuinely self-learning, in the broad sense, will take us a long time. The danger is that once such systems are created, they may learn extremely fast, exponentially, and within hours or even minutes surpass a human.

I would like to say that technology is neither good nor bad, that it is just a tool, and that a tool becomes good or bad only in the hands that hold it. But with the singularity, people, the users, are no longer in the picture; it concerns only the machines. They can slip out of our hands, and the only thing that can be said with certainty is that we will not be able to predict the consequences.

Plenty of science fiction books and films show us super-intelligent machines destroying humanity, shutting everyone away, or doing other things I would rather not see.

What we really need to think about is the direction in which AI technology develops. Take machine vision, for example: the risk there is relatively small. If a system can understand the meaning of a scene and recognize objects, nothing bad will come of it.

It is in our own interest to have machines that can learn on their own in order to understand what is going on around them. The risk lies in the intermediate layer that takes in this data and translates external factors into actions.

Those actions can be very fast, whether in physical reality (AI-driven cars, for example) or in virtual reality (information processing, resource control, identification, and so on).

Should we be afraid of this, especially the latter? Personally, I am afraid the answer is yes.

Patrick Winston, professor of computer science at MIT and AI specialist

I have been asked this question many times. For the past 50 years, people have believed that human-level AI is 20 years away; that is, every generation expects AI to arrive soon. My answer is no different, and in the end it may even turn out to be true.

In my opinion, creating AI is nothing like, say, sending a person to the Moon. For the Moon we already had all the necessary technology; for creating AI we have almost none. Further breakthroughs are required, and right now it is hard to reason about that in terms of any time frame.

Of course, much depends on how many scientists work on the problem of creating AI. A huge number of specialists are now involved in machine learning and deep learning, and perhaps some of them will figure out how the human mind works.

When will we get the machine equivalent of the Watson-Crick breakthrough? My answer is 20 years, and in the end I believe it will come true.
