DeepMind Recruits Specialists to Protect Against Strong AI



    DeepMind, a London-based research company owned by Google, specializes in cutting-edge artificial intelligence, which may eventually develop into a form of strong AI. In theory, a strong AI would be able to think, be aware of itself, and empathize (feel). It would also have the following capabilities:

    • Making decisions, using strategies, solving puzzles, and acting under uncertainty.
    • Representing knowledge, including a general model of reality.
    • Planning.
    • Learning.
    • Communicating in natural language.
    • Combining these abilities to achieve common goals.

    Obviously, a program with such capabilities could behave in ways its developers do not predict. Indeed, it would be designed specifically to operate autonomously. It is therefore very important to put the necessary safety measures in place ahead of time.

    According to various sources, including profiles on the professional social network LinkedIn, DeepMind has begun recruiting staff for an AI safety department. This unit is meant to reduce the likelihood that a strong AI could develop into a form dangerous to humanity and/or to itself.

    DeepMind is one of many companies around the world working on self-learning neural networks, a form of weak artificial intelligence. So far, these programs are limited to highly specialized tasks: they play complex board games such as Go and help reduce Google’s electricity costs. But the ambitions of the British researchers do not stop there. In the future, they aim to develop a universal AI system. As stated on its website, the company wants to “solve intelligence” in order to “make the world a better place.” This is consistent with Google’s core principle, “Don’t be evil.”

    To reduce the chances of a strong AI developing into a dangerous form, the company has created an AI Safety Group (the date of its formation and the number of staff are not known). Known hires include Victoria Krakovna, Jan Leike, and Pedro Ortega. Victoria Krakovna (pictured), for example, was hired as a researcher; she holds a PhD in statistics from Harvard University. Of Canadian-Ukrainian origin, she won medals at international school and continental student olympiads in mathematics, interned at Google in 2013 and 2015, and later co-founded the Future of Life Institute in Boston, one of the world’s leading research organizations working on artificial intelligence safety.


    Jan Leike also studies AI safety. He is listed as a research associate at the Future of Humanity Institute, and this summer he won the Best Student Paper Award at the Uncertainty in Artificial Intelligence conference. The paper examines the application of Thompson sampling to reinforcement learning (text of the paper).
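    For readers unfamiliar with the technique, below is a minimal sketch of Thompson sampling in its simplest textbook setting, a Bernoulli multi-armed bandit, rather than the general reinforcement-learning formulation studied in Leike's paper; the arm probabilities, function name, and parameters are purely illustrative.

    import random

    def thompson_sampling(true_probs, num_rounds=1000, seed=0):
        """Minimal Thompson sampling for a Bernoulli bandit.

        Each arm keeps a Beta(successes + 1, failures + 1) posterior over its
        unknown reward probability; each round we sample from every posterior
        and pull the arm whose sample is highest.
        """
        rng = random.Random(seed)
        n_arms = len(true_probs)
        successes = [0] * n_arms
        failures = [0] * n_arms
        total_reward = 0

        for _ in range(num_rounds):
            # Draw one sample from each arm's Beta posterior.
            samples = [rng.betavariate(successes[i] + 1, failures[i] + 1)
                       for i in range(n_arms)]
            arm = max(range(n_arms), key=lambda i: samples[i])

            # Simulate pulling the chosen arm and update its posterior counts.
            reward = 1 if rng.random() < true_probs[arm] else 0
            successes[arm] += reward
            failures[arm] += 1 - reward
            total_reward += reward

        return total_reward, successes, failures

    if __name__ == "__main__":
        # Hypothetical arm probabilities; the sampler should concentrate on the last arm.
        reward, s, f = thompson_sampling([0.2, 0.5, 0.8])
        print(reward, s, f)

    Over many rounds the posterior of the best arm sharpens and it is pulled almost exclusively, which is the exploration-exploitation balance that makes the method attractive.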

    Pedro Ortega holds a PhD in machine learning from the University of Cambridge.

    Many scientists have warned of the potential dangers of superintelligent artificial intelligence. For example, the British physicist and mathematician Stephen Hawking said that underestimating the threat posed by artificial intelligence could be the biggest mistake in human history if we do not learn to avoid the risks.

    In that article, Hawking and his co-authors warn of the danger that machines with non-human intelligence will keep improving themselves and that nothing will be able to stop the process. This, in turn, would set off the so-called technological singularity.

    Such a technology would surpass humans and begin to manage financial markets, scientific research, people, and the development of weapons beyond our comprehension. While the short-term effect of artificial intelligence depends on who controls it, the long-term effect depends on whether it can be controlled at all.

    DeepMind appears to have heeded the professor’s words and is taking the necessary safety measures. Hawking and his co-authors noted that little serious research is being done outside nonprofit organizations such as the Centre for the Study of Existential Risk in Cambridge, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute. In their view, these issues deserve far more attention.

    Elon Musk has also warned of the potential dangers of AI. A year ago, he and a group of like-minded people announced the founding of the non-profit organization OpenAI, which views open research into strong AI as a way of hedging humanity’s risks against a single centralized artificial intelligence.

    The official announcement of the organization’s founding said: “Given AI’s unpredictable history, it is difficult to predict when human-level AI might appear. When it does, it will be important to have at humanity’s disposal a leading research institution that can prioritize a good outcome for all over its own interests.”

    Today, research on strong AI is carried out both by research organizations and by large commercial corporations such as Google, Facebook, and Microsoft.
