# LessWrong.com AI Risks, Part 1: Interview with Shane Legg from DeepMind

Hello, readers of Habrahabr! I recently learned that DeepMind, an artificial intelligence (AI) company, was acquired by Google for \$500 million. I began searching the internet for material about DeepMind's researchers and their interviews, and found a set of Q&A sessions with Western experts, including Shane Legg of DeepMind, collected on LessWrong.com. Below is my translation of the interview with Shane Legg, which I found interesting. The second part of the article will feature interviews with ten other AI researchers.

Shane Legg is a computer scientist and AI researcher who works on theoretical models of superintelligent machines (AIXI). He completed his PhD thesis, "Machine Super Intelligence," in 2008. He was awarded the \$10,000 Canadian Singularity Institute for Artificial Intelligence Prize. A list of Shane's publications can be found here.

## Interview with Shane Legg, July 17, 2011

Original article

Abbreviations used:

AGI : human-level AI (Artificial General Intelligence; hereinafter "human-level AI" or AGI)
SAI : superhuman AI

Q1 : Assuming that no global catastrophe halts AGI research, by what year do you think AGI will be developed with 10% / 50% / 90% probability?

Explanation:

`P(AGI by year | no wars ∧ no disasters ∧ political and economic support) = 10% / 50% / 90%`

Shane Legg : 2018, 2028, 2050.
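As a translator's illustration (not part of the interview), the three dates above can be read as the 10%, 50% and 90% quantiles of a cumulative probability curve. A minimal sketch, assuming simple piecewise-linear interpolation between those quantiles (the function name and interpolation scheme are my own, purely illustrative choices):

```python
def p_agi_by(year, quantiles=((2018, 0.10), (2028, 0.50), (2050, 0.90))):
    """Piecewise-linear interpolation of P(AGI by `year`) from stated quantiles.

    Outside the stated range we simply clamp to the nearest quantile,
    since the interview gives no information there.
    """
    if year <= quantiles[0][0]:
        return quantiles[0][1]
    if year >= quantiles[-1][0]:
        return quantiles[-1][1]
    # Find the surrounding pair of quantiles and interpolate linearly.
    for (y0, p0), (y1, p1) in zip(quantiles, quantiles[1:]):
        if y0 <= year <= y1:
            return p0 + (p1 - p0) * (year - y0) / (y1 - y0)

print(round(p_agi_by(2023), 2))  # → 0.3, halfway between the 10% and 50% quantiles
```

This is only one way to read three quantiles as a distribution; a smooth fit (e.g. log-normal) would give different intermediate values.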

Q2 : How likely is it that badly done AI development leads to terrible or absolutely terrible consequences?

Explanation:
```
P(terrible consequences | badly done AI) = ?            # human extinction
P(absolutely terrible consequences | badly done AI) = ? # human suffering
```

Shane Legg : It really depends on how you define the terms. That said, it seems to me that human extinction will eventually happen, and technology will likely play a role in it. (Presumably this refers to the extinction of the species Homo sapiens, not of intelligent life in general. Translator's note.) But there is a big difference between this happening a year after the invention of AGI and a million years after it. As for the probability, I don't know. Maybe 5%, maybe 50%. I don't think anyone can give a good estimate.

If by absolutely terrible consequences you mean prolonged suffering, I think that is unlikely. If a superintelligent machine wanted to get rid of us, it would do so quite efficiently. I do not think we would voluntarily set out to build a machine that maximizes human suffering.

Q3 : What is the probability that an AGI will self-modify its way up to massively superhuman intelligence (SAI) within hours / days / <5 years?

Explanation:

```
P(SAI within hours | AGI running at human speed, 100 Gb network connection) = ?
P(SAI within days | AGI running at human speed, 100 Gb network connection) = ?
P(SAI within <5 years | AGI running at human speed, 100 Gb network connection) = ?
```

Shane Legg : "AGI running at human speed" is a rather vague term. Without a doubt, the machine will be better than a human at some things and worse at others. What exactly it is better at could lead to a big difference in outcomes.

In any case, I suspect that once AGI is created, the developers themselves will scale it up to SAI; the machine will not do this on its own. After that, the machine will most likely take over its own improvement.

How fast could that happen? Perhaps very fast, but possibly never: there may be nonlinear complexity limits, meaning that even theoretically optimal algorithms yield diminishing returns in intelligence as computing power is added.

Q4 : Is it important to figure out and prove how to make AI friendly to us and our values (safe) before trying to solve the AI problem itself?

Shane Legg : This strikes me as a chicken-and-egg question. At the moment we cannot agree on what intelligence is and how to measure it, let alone agree on how an AGI should work. How can we make something safe if we do not really know how it will work? Some theoretical questions may be worth considering. But without a concrete, grounded understanding of AI, I think abstract analysis of these questions is likely to remain very shaky.

Q5 : How much money is currently needed to mitigate the possible risks from AI (contributing to your personal long-term goals, for example surviving this century): less / enough / slightly more / much more / incomparably more?

Shane Legg : Much more. However, as with charity, simply pouring money at the problem is unlikely to help, and may even make the situation worse. I really think the main issue is not financial but cultural (translator's emphasis). I think changes in society will begin once there is progress in AI and people start to take seriously the possibility of AGI arriving within their lifetimes. Until then, I think serious study of AI risks will remain a fringe topic.

Q6 : Do AI risks take priority over other existential risks, such as those associated with advanced nanotechnology? Explanation: Which existential risks (such as human extinction) are most likely to have the greatest negative impact on your personal long-term goals if nothing is done to reduce them?

Shane Legg : For me, this is the number one risk of this century, with the creation of an engineered biological pathogen a close second.

Q7 : What is the current level of awareness of AI risks, relative to the ideal level?

Shane Legg : Too low... but this is a double-edged sword: by the time the mainstream research community starts to worry about the problem, we might run into some kind of arms race if large companies and/or governments start to panic in secret. In that case, things would most likely turn out badly for everyone.

Q8 : Can you name a milestone after which we would likely achieve AGI within five years?

Shane Legg : That's a tricky question! When a machine can play a fairly broad set of games, using a stream of perception for input and output, and can reuse experience across different games, I think we will be getting close.


Translator's note: I have allowed myself to highlight Shane Legg's opinion about culture: he considers the problem of resources for inventing AI less important than the cultural question. When I try to map this opinion onto Russian reality, I have mixed thoughts: negative ones, because there is almost no cultural exchange within the vastness of Russia, and rather positive ones, because developers who seriously expect AI to be created within their lifetimes will either leave the country or contribute to the development of the social sphere.