What artificial intelligence researchers think about the possible risks associated with it

Original author: Scott Alexander
  • Translation
I first became interested in the risks associated with AI back in 2007. At the time, most people's reaction to the topic was something like: "Very funny; come back when someone other than random internet crackpots believes in it."

Over the years that followed, several extremely smart and influential figures, including Bill Gates, Stephen Hawking and Elon Musk, publicly voiced their concerns about AI risk, followed by hundreds of other intellectuals, from Oxford philosophers to MIT cosmologists to Silicon Valley investors. So we came back.

The reaction then changed to: "Well, maybe a few scientists and businessmen believe it, but they can hardly be real experts in the field who actually understand the situation."

Hence statements like the Popular Science article "Bill Gates Fears AI, But AI Researchers Know Better":
When you talk to AI researchers - real researchers, people who struggle to make such systems work at all, let alone work well - it becomes clear that they are not afraid of superintelligence sneaking up on them, now or in the future. Despite all the spooky stories Musk tells, the researchers are in no hurry to build protective chambers and self-destruct countdowns.

Or, as Fusion.net put it in the article "The case against killer robots, from a person who actually develops AI":
Andrew Ng builds AI systems for a living. He taught an AI course at Stanford, built AI at Google, and then moved to the Chinese search engine Baidu to continue his work at the forefront of applying AI to real-world problems. So when he hears people like Elon Musk or Stephen Hawking - people who are not intimately familiar with today's technology - talking about AI potentially destroying humanity, you can almost hear him facepalming.

Ramez Naam, writing at Marginal Revolution, says much the same thing in the article "What do AI researchers think of the risks of AI?":
Elon Musk, Stephen Hawking and Bill Gates have recently expressed fears that the development of AI could lead to a "killer AI" scenario, and potentially to the extinction of humanity. None of them are AI researchers and, as far as I know, none of them have worked directly on AI. What do actual AI researchers think about the risks of AI?

He goes on to quote a hand-picked selection of AI researchers - as do the authors of the other pieces - and then stops, without mentioning any dissenting opinions.

But such opinions exist. AI researchers, including leaders in the field, have been actively voicing concerns about AI risk and superintelligence from the very beginning. I will start by listing those people, as a counterpoint to Naam's list, and then move on to why I do not consider this a "debate" in the classical sense that dueling lists of luminaries would suggest.

My criteria for the list are as follows: I mention only the most prestigious researchers - either professors at good institutions with many citations to their scientific work, or highly respected industry scientists who work at large companies and have good track records. They work in AI and machine learning. They have made strong statements supporting some version of the view that a singularity is coming or that AI poses a serious risk in the near future. Some have written papers or books about it; others have simply said they consider it an important topic worth studying.

If you disagree with someone's inclusion on this list, or think I have forgotten someone important, let me know.

* * * * * * * * * *

Stuart Russell is Professor of Computer Science at Berkeley, winner of the IJCAI Computers and Thought Award, Fellow of the Association for Computing Machinery, Fellow of the American Association for the Advancement of Science, Director of the Center for Intelligent Systems, holder of the Blaise Pascal Chair, etc., etc. He is co-author of "Artificial Intelligence: A Modern Approach", the classic textbook used in 1,200 universities around the world. On his website he writes:
For over 50 years, the field of AI has operated under the simple assumption that the more intelligent, the better. To this must be joined an overriding concern for the benefit of humanity. The argument is very simple:

1. AI is likely to succeed.
2. Unconstrained success brings huge risks as well as huge benefits.
3. What can we do now to improve the chances of reaping the benefits and avoiding the risks?

Some organizations are already considering these questions, including the Future of Humanity Institute at Oxford, the Centre for the Study of Existential Risk at Cambridge (CSER), the Machine Intelligence Research Institute in Berkeley, and the Future of Life Institute at Harvard/MIT (FLI). I serve on the advisory boards of CSER and FLI.

Just as nuclear fusion researchers treat the containment of fusion reactions as one of the central problems of their field, issues of control and safety will inevitably become central to AI as the field matures. Researchers are already beginning to formulate these questions, ranging from the purely technical (foundational problems of rationality and utility, etc.) to the broadly philosophical.

On edge.org, he describes a similar point of view:
As Steve Omohundro, Nick Bostrom and others have explained, the combination of value misalignment with decision-making systems of ever-increasing capability can lead to problems - perhaps even species-ending problems if the machines turn out to be more capable than humans. Some believe there is no conceivable risk to humanity for centuries to come, perhaps forgetting that the interval between Rutherford's confident assertion that atomic energy would never be extracted and Szilárd's invention of the neutron-induced nuclear chain reaction was less than twenty-four hours.

He has also tried to promote these ideas within academia, pointing out:
I find that senior people in the field, who have never publicly voiced any concern before, privately think that this problem needs to be taken very seriously - and the sooner we take it seriously, the better.

David McAllester is a professor and senior fellow at the Toyota Technological Institute at Chicago, affiliated with the University of Chicago, and previously served on the faculties of MIT and Cornell. He is a Fellow of the American Association for Artificial Intelligence, has published over a hundred papers, has done research in machine learning, programming language theory, automated reasoning, AI planning and computational linguistics, and was a major influence on the algorithms of the famous chess computer Deep Blue. According to an article in the Pittsburgh Tribune Review:
Chicago professor David McAllester believes it is inevitable that fully automated intelligent machines will be able to design and build smarter versions of themselves - the onset of an event known as the [technological] singularity. The singularity would enable machines to become infinitely intelligent, which would pose an "incredibly dangerous scenario", he says.

On his blog, Machine Thoughts, he writes:
Most computer science academics dismiss any talk of real success in artificial intelligence. I think a more honest statement is that no one can predict when human-level AI will be achieved. John McCarthy once told me that when people asked him how soon human-level AI would be created, he answered between five and five hundred years from now. McCarthy was a smart man. Given the uncertainties in the field, it seems prudent to consider the issue of friendly AI...

The early stages of artificial general intelligence (AGI) will be safe. However, these early stages will provide an excellent test bed for AI in the role of a servant, or for other approaches to friendly AI. An experimental approach is also advocated by Ben Goertzel in a nice post on his blog. If there is a coming era of safe, not-too-intelligent AGI, then we will have time to think about the more dangerous eras that follow.

He served on the AAAI Panel on Long-Term AI Futures, an expert group on the long-term prospects of AI, where he chaired the committee on long-term control, and is described as follows:
McAllester talked with me about the coming "singularity", the event when computers become smarter than people. He would not commit to an exact date for its arrival, but said it could happen within the next couple of decades and will certainly happen eventually. Here are his views on the singularity. Two milestones will occur: operational sentience, when we can easily converse with computers, and the AI chain reaction, when a computer can improve itself without help and then do so again. We will notice the first milestone in automated assistant systems that actually help us. Later on, it will become genuinely interesting to talk with computers. And for computers to be able to do everything people can do, we will have to wait for the second milestone.

Hans Moravec is a former professor at the Robotics Institute of Carnegie Mellon University, namesake of Moravec's paradox, and founder of the SeeGrid Corporation, which builds computer vision systems for industrial applications. His paper "Sensor fusion in certainty grids for mobile robots" has been cited over a thousand times, and he was invited to write the Encyclopedia Britannica article on robotics, back when encyclopedia articles were written by world experts in the field rather than by hundreds of anonymous internet commenters.

He is also the author of the book "Robot: Mere Machine to Transcendent Mind", which Amazon describes as follows:
In this compelling book, Hans Moravec predicts that by 2040 machines will approach the intellectual level of humans, and that by 2050 they will have surpassed us. But although Moravec predicts the end of the era of human dominance, his vision of that event is not a gloomy one. Far from shrinking from a future in which machines rule the world, he embraces it, describing a startling view in which intelligent robots become our evolutionary descendants. Moravec believes that at the end of this process "the immensity of cyberspace will be teeming with unhuman superminds, engaged in affairs that are as far removed from humans as human concerns are from those of bacteria."

Shane Legg is co-founder of DeepMind Technologies, an AI startup acquired by Google in 2014 for $500 million. He earned his PhD at the Dalle Molle Institute for Artificial Intelligence (IDSIA) in Switzerland and also worked at the Gatsby Computational Neuroscience Unit in London. At the end of his thesis, "Machine Super Intelligence", he writes:
If there is ever to be something approaching absolute power, a superintelligent machine would come close. By definition, it would be capable of achieving a vast range of goals in a wide range of environments. If we carefully prepare for this possibility in advance, we may not only avert disaster but usher in an age of prosperity unlike anything seen before.

In a later interview, he says:
AI is now roughly where the internet was in 1988. Demand for machine learning is strong in specialist applications (search engines like Google, hedge funds, bioinformatics) and is growing every year. I expect this to become mainstream and noticeable around the middle of the next decade. A boom in AI should occur around 2020, followed by a decade of rapid progress, possibly after a market correction. Human-level AI will be created around the mid-2020s, though many people will not accept that it has happened. After that point, the risks associated with advanced AI will start to become practically relevant. I won't speak of a "singularity", but I do expect things to get really crazy at some point after human-level AGI is created. That is, sometime between 2025 and 2040.

He and his co-founders Demis Hassabis and Mustafa Suleyman signed the Future of Life Institute's open letter on AI risk, and one of their conditions for joining Google was that the company agree to set up an AI ethics board to examine these issues.

Steve Omohundro is a former professor of computer science at the University of Illinois, founder of the vision and learning group at the Center for Complex Systems Research, and inventor of various important advances in machine learning and computer vision. He has worked on lip-reading robots, the StarLisp parallel programming language, and geometric learning algorithms. He now heads Self-Aware Systems, "a team of scientists working to ensure that intelligent technologies are beneficial for humanity." His paper "The Basic AI Drives" helped launch the field of machine ethics by pointing out that superintelligent systems will converge on potentially dangerous goals. He writes:
We have shown that all advanced AI systems are likely to exhibit a number of basic drives. It is essential to understand these drives in order to build technology that enables a positive future for humanity. Yudkowsky has called for the creation of "friendly AI". To do this, we need to develop a science of "utility engineering" that will let us design utility functions giving rise to the consequences we desire. The rapid pace of technological progress suggests that these issues may soon become critically important.

At the link you can find his work on "rational AI for the greater good."

Murray Shanahan received his PhD in computer science from Cambridge and is now Professor of Cognitive Robotics at Imperial College London. He has published work in areas including robotics, logic, dynamical systems, computational neuroscience and the philosophy of mind. He is currently finishing the book "The Technological Singularity", due out in August. Amazon's promotional blurb reads:
Shanahan describes technological advances in AI, both biologically inspired and engineered from scratch. He explains that once human-level AI is created - theoretically possible, but a difficult task - the transition to superintelligent AI will be very rapid. Shanahan considers what the existence of superintelligent machines could mean for such things as personhood, responsibility, rights and identity. Some superintelligent AI might be created to benefit humankind; some might go out of control. (Siri or HAL, in other words?) The singularity presents humanity with both an existential threat and an existential opportunity to transcend its limitations. Shanahan makes it clear that if we want to achieve the better outcome, we need to imagine both possibilities.

Marcus Hutter is a professor of computer science at the Australian National University. Before that he worked at the Dalle Molle Institute for Artificial Intelligence (IDSIA) in Switzerland and at National ICT Australia, doing research on reinforcement learning, Bayesian inference, computational complexity theory, Solomonoff's theory of inductive prediction, computer vision and genetic profiling. He has also written extensively about the singularity. In the paper "Can Intelligence Explode?" he writes:
This century may witness a technological explosion of a degree deserving the name singularity. The default scenario is a society of interacting intelligent agents in a virtual world, simulated on computers with hyperbolically increasing computational resources. This is inevitably accompanied by a speed explosion when measured in physical time, but not necessarily by an intelligence explosion. If the virtual world is inhabited by interacting free agents, evolutionary pressure will breed agents of increasing intelligence that compete for computational resources. The end point of this evolutionary acceleration of intelligence could be a society of maximally intelligent individuals. Some aspects of this singularitarian society can in principle be studied with current scientific tools.

Jürgen Schmidhuber is a professor of AI at the University of Lugano and a former professor of cognitive robotics at the Technical University of Munich. He develops some of the most advanced neural networks in the world, works on evolutionary robotics and computational complexity theory, and is a fellow of the European Academy of Sciences and Arts. In the book "Singularity Hypotheses" he argues that "if existing trends continue, we will face an intelligence explosion within the next few decades." When asked directly about AI risk in a Reddit AMA, he replied:
Stuart Russell's concerns about AI seem reasonable. So can we do anything to shape the impact of AI? In an answer hidden in a nearby thread, I pointed out: at first glance, recursive self-improvement through Gödel machines seems to offer a way of shaping a future superintelligence. The self-modifications of a Gödel machine are in a certain sense optimal: it will only make changes to its own code that are provably beneficial according to its initial utility function. That is, in the beginning you have a chance of setting it on the right path. Other people, however, may equip their Gödel machines with different utility functions. They will compete. In the resulting ecology of agents, some utility functions will be better suited to our physical universe than others, and will find a niche to survive.
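As a rough illustration of the rule Schmidhuber describes - accept a self-modification only if it is provably better under the initial utility function - here is a minimal sketch in Python. It is not from the article or from any real Gödel machine implementation; the names propose_rewrite, proves_improvement and utility are hypothetical placeholders.

```python
# Illustrative sketch only: the accept/reject rule of a Goedel-machine-style
# self-improver, as described in the quote above. All names are hypothetical.

def self_improvement_step(program, utility, propose_rewrite, proves_improvement):
    """Switch to a rewritten program only if a proof shows it does better
    under the *initial* utility function; otherwise keep the current code."""
    candidate = propose_rewrite(program)                 # search for a modified version of itself
    if proves_improvement(program, candidate, utility):  # demand a proof, not just an estimate
        return candidate                                 # provably better: adopt the new code
    return program                                       # no proof found: keep running unchanged
```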

Richard Sutton is a professor and iCORE chair at the University of Alberta. He is a Fellow of the Association for the Advancement of Artificial Intelligence, co-author of the most widely used textbook on reinforcement learning, and discoverer of temporal difference learning, one of the most important methods in the field.

In his talk at an AI conference organized by the Future of Life Institute, Sutton argued that "there is a real chance that within our lifetimes" an AI intellectually comparable to humans will be created, added that this AI "will not obey us" and "will compete and cooperate with us", and that "if we create superintelligent slaves, we will get superintelligent adversaries". He concluded that "we need to think through the mechanisms (social, legal, political, cultural) to ensure a desirable outcome", but that "inevitably, ordinary humans will become less important". He has raised similar issues in a presentation at the Gatsby Unit. And Glenn Beck's book contains the line: "Richard Sutton, one of the biggest names in AI, predicts an intelligence explosion around the middle of the century."

Andrew Davison is Professor of Robot Vision at Imperial College London, leader of the Robot Vision Group and the Dyson Robotics Laboratory, and inventor of MonoSLAM, a computer vision system for real-time localization and mapping. On his website he writes:
At the risk of putting myself in an awkward position within the academic circles I hope I belong to, since 2006 I have taken the idea of a technological singularity completely seriously: the exponential growth of technology may lead to superhuman AI and other developments that will change the world utterly and surprisingly soon (perhaps within the next 20-30 years). I have been influenced both by reading Kurzweil's book "The Singularity Is Near" (which I found sensationalist but broadly compelling) and by my own observation of the astonishing recent progress of science and technology, particularly in computer vision and robotics, the fields I am personally involved in. Modern systems for decision-making and learning, methods based on Bayesian probability theory...

It is hard to imagine all the possible consequences, positive or negative, so I will try to stick to the facts without pushing opinions (though I am not in the super-optimist camp myself). I seriously believe this is worth discussing with scientists and with the public. I will keep a list of "signs of the singularity" and update it - small items of technology news that confirm my feeling that technology is evolving faster and faster, and that very few people are thinking about the consequences.

Alan Turing and Irving John Good need no introduction. Turing invented the mathematical foundations of computing and gave his name to the Turing machine, Turing completeness and the Turing test. Good worked with Turing at Bletchley Park, helped build one of the first computers, and invented a number of well-known algorithms, including an early fast Fourier transform (FFT) algorithm. In "Can Digital Computers Think?" Turing writes:
Let us assume that such machines are a genuine possibility, and look at the consequences of constructing them. To do so would no doubt be met with great opposition, unless we have advanced greatly in religious tolerance since the days of Galileo. The opposition would come from intellectuals afraid of being put out of a job. It is probable, though, that the intellectuals would be mistaken. There would be plenty to do in trying to keep one's intelligence up to the standards set by the machines, for once the machine method of thinking had started, it would not take long to outstrip our feeble powers. At some stage, therefore, we should have to expect the machines to take control.

While working at the Atlas Computer Laboratory in the 1960s, Good developed this idea in "Speculations Concerning the First Ultraintelligent Machine":
Let an ultraintelligent machine be defined as a machine that can far surpass any person in every intellectual activity. Since the design of machines is one such intellectual activity, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion", and the intelligence of man would be left far behind. Thus the invention of the ultraintelligent machine is the last invention that man need ever make.

* * * * * * * * * *

I worry that this list may give the impression that there is some kind of battle in the field between "believers" and "skeptics", with both sides tearing into each other. That has not been my impression.

When I read articles by the skeptics, I keep meeting two arguments. First, we are still very far from human-level AI, let alone superintelligence, and there is no obvious way to get there. Second, if you demand bans on AI research, you are an idiot.

I agree with both points entirely. So do the leaders of the AI risk movement.

A survey of AI researchers (Müller & Bostrom, 2014) found that, on average, they assign a 50% probability to human-level AI arriving by 2040 and a 90% probability to it arriving by 2075. On average, 75% of them believe that superintelligence ("machine intelligence that greatly surpasses the performance of every human in most professions") will follow within 30 years of human-level AI. The survey's methodology raises some doubts, but if we take its results at face value, most AI researchers agree that something worth worrying about will arrive within one or two generations.

Meanwhile, the director of the Machine Intelligence Research Institute, Luke Muehlhauser, and the director of the Future of Humanity Institute, Nick Bostrom, have both said that their own AI timelines are considerably later than those of the scientists surveyed. If you study Stuart Armstrong's collection of AI timeline predictions, you will see that, overall, the estimates made by AI risk proponents do not differ from those made by AI skeptics - indeed, the latest prediction in that table is Armstrong's own. Yet Armstrong now works at the Future of Humanity Institute, raising awareness of AI risk and of the need to study the goal systems of superintelligences.

The difference between proponents and skeptics is not in their estimates of when human-level AI will arrive, but in when we need to start preparing for it.

Which brings us to the second point. The skeptics' position seems to be that although we should probably have a few smart people working on a preliminary assessment of the problem, there is no need to panic or to ban AI research.

The AI risk proponents, meanwhile, insist that although we absolutely should not panic or ban AI research, it is probably worth having a few smart people work on a preliminary assessment of the problem.

Yann LeCun is perhaps the most vocal skeptic of AI risk. He was quoted extensively in the Popular Science article and the Marginal Revolution post, and he has also spoken to KDnuggets and IEEE about "the inevitable singularity questions", which he himself describes as "so far away that one can write science fiction about them." But when asked to clarify his position, he said:
Elon Musk is very worried about existential threats to humanity (which is why he is building rockets to send humans to colonize other planets). And even though the risk of an AI uprising is very small and very far in the future, we need to think about it, design precautionary measures and establish rules. Just as bioethics committees emerged in the 1970s and 1980s, before genetics was widely used, we need AI ethics committees. But, as Yoshua Bengio wrote, we have plenty of time.

Eric Horvitz is another expert who is often held up as the voice of skepticism and restraint. His views have been described in articles such as "Microsoft Research chief thinks out-of-control AI won't kill us" and "Eric Horvitz of Microsoft says AI is nothing to fear". But here is what he said in a longer interview with NPR:
KASTE: Horvitz doubts that virtual secretaries will ever morph into something that takes over the world. He says that would be like expecting a kite to evolve into a Boeing 747. Does that mean he finds the singularity ridiculous?

HORVITZ: No. I think there has been a conflation of concepts, and I have to say I have mixed feelings myself.

KASTE: In part because of ideas like the singularity, Horvitz and other AI specialists are increasingly trying to address the ethical problems that may arise over the coming years with narrowly focused AI. They are also asking more futuristic questions - for example, how do you build an emergency off switch for a computer that can modify itself?

HORVITZ: I genuinely believe the stakes are high enough to justify spending time and energy actively looking for solutions, even if the probability of such events is low.

Which is pretty much the same position as many of the most ardent AI risk advocates. With enemies like these, who needs friends?

The Slate article "Don't Fear AI" also gets a surprising amount right:
As Musk himself notes, the solution to the problem of AI risk lies in sober, sensible collaboration between scientists and lawmakers. It is hard to see, however, how talk of "demons" helps achieve that noble goal. If anything, it may hinder it.

First, the Skynet scenario is riddled with gaping holes. And although computer scientists consider Musk's reasoning "not entirely crazy", it remains a long way from the world they actually inhabit, where hype about AI conceals just how much less impressive real AI is than advertised.

Yann LeCun, head of Facebook's AI lab, summed up this idea in a Google+ post in 2013: "Hype is dangerous to AI. Hype killed AI four times in the last five decades. It must be stopped." LeCun and others are right to fear hype. Failure to live up to the lofty expectations set by science fiction leads to deep cuts in AI research budgets.

AI researchers are smart people. They have no interest in falling into the classic political traps of splitting into camps and accusing each other of panic-mongering or of burying their heads in the sand. They seem to be trying to strike a balance between starting preliminary work on a danger that still looms far in the future, and the risk of generating so much hype that it ends up backfiring on them.

I do not mean to say there is no disagreement about how soon the issue needs to be addressed. Mostly it comes down to whether one can say "we'll solve the problem when we get to it", or whether to expect a sudden takeoff that puts everything beyond our control and therefore must be prepared for in advance. I see less evidence than I would like that most AI researchers with opinions on the matter understand the second possibility. What can you say when even the Marginal Revolution article quotes an expert claiming that superintelligence is not a great threat because "smart computers won't be able to set goals for themselves", even though anyone who has read Bostrom knows that this is exactly the problem.

There is still a lot of work to be done. But cherry-picking articles claiming that "real AI experts are not worried about superintelligence" is not it.
