The threat of a "machine rebellion" will be studied
One of the most talked-about stories on the English-language Internet has been an interview with philosophy professor Huw Price, accompanying the announcement of the imminent launch of the Centre for the Study of Existential Risk (CSER) at the University of Cambridge; in coverage referencing CSER, the theme of a "machine revolt" has dominated.
Even on news sites with a very conservative audience, such as BBC News, the relevant stories have drawn several hundred comments apiece.
Let me recall the essence: the Centre for the Study of Existential Risk (CSER) intends to investigate the global risks potentially posed by biotechnology, nanotechnology, nuclear research, anthropogenic climate change and developments in artificial intelligence. The Centre's founders are philosophy professor Huw Price and astrophysics professor Martin Rees, both of the University of Cambridge, together with Skype co-founder Jaan Tallinn, who holds a degree in theoretical physics from the University of Tartu.
The progress of mankind today is characterized not so much by evolutionary processes as by technological development, which allows people to live longer, perform tasks faster and wreak destruction more or less at will.
However, the increasing complexity of computing processes will ultimately lead to the creation of a single artificial intelligence (AI), Price and Tallinn are sure. The critical moment will come when this "universal mind" becomes able to write computer programs independently and to develop the technology to recreate its own kind.
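The notion of software that "recreates its own kind" has at least a trivial, well-known kernel in computer science: a quine, a program whose output is its own source code. The sketch below is purely a toy illustration in Python, unrelated to any system Price and Tallinn have in mind; it shows only how little machinery verbatim self-reproduction requires.

    # quine.py: a program that prints its own source.
    # Only the two statements below form the quine; these comment
    # lines are not part of the reproduced output.
    s = 's = %r\nprint(s %% s)'
    print(s % s)

Of course, self-reproduction is not self-improvement: the quine copies itself verbatim, whereas the scenario Price and Tallinn describe involves software that designs successors more capable than itself.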
“Take gorillas, for example,” Professor Price suggests. “The reason they disappear is not at all that people are actively killing them, but that we manage the environment in ways that suit us but are destructive to their existence.”
The analogy is more than transparent. "At some point, this century or next, we must face one of the greatest shifts in human history - perhaps even in the history of the cosmos - when intelligence escapes the constraints of biology," Professor Price predicts. "Nature did not foresee us, and we, in turn, should not take AI for granted."
Specialists in robotics and high technology have, for the most part, reacted rather skeptically to the professor's statements. Software crashes and errors in algorithms - things that are understandable, predictable and reasonably tangible - are far easier for the human mind to grasp than the abstract threat of something that does not yet exist.
However, the problems associated with AI worry more than just Cambridge professors. Since 2001 there has been a nonprofit organization in the United States known as the Singularity Institute (SIAI). Its interests include the study of the potential dangers of an "intelligence explosion" and the emergence of an "unfriendly" AI. One of the Institute's co-founders, Eliezer Shlomo Yudkowsky, is widely known for his research on the technological singularity (the point after which technological progress becomes inaccessible to human understanding).
In his paper "Artificial Intelligence as a Positive and Negative Factor in Global Risk" (available in Russian translation), Yudkowsky writes: "One of the paths to global catastrophe is when someone presses a button with a mistaken idea of what that button does - when AI arises through just such a fusion of working algorithms in the hands of a researcher who lacks a deep understanding of how the whole system works... Not knowing how to build a friendly AI is not deadly in itself, as long as you know that you do not know. It is the mistaken belief that an AI will be friendly that marks the obvious path to global disaster."
And a touch of science fiction made real: in August of this year, Business Insider reported on the creation of "cyberflesh" by bioengineers at Harvard University. The results of the study were published in Nature Materials.
The construct is a complex scaffold of nanowires and transistors on which human tissue is grown. Cyberflesh can monitor and transmit data such as heart rate. "This allows us to effectively blur the boundary between electronic, inorganic systems and organic, biological ones," said Charles Lieber, head of the research team.
Additionally: an article by Tallinn and Price, "Artificial intelligence - can we keep it in the box?".