The planet's best minds: artificial intelligence could destroy humanity

    In any list of doomsday scenarios, intelligent killer robots that could wipe humanity off the face of the Earth rank fairly high. And in the scientific community, more and more artificial intelligence (AI) experts agree that humans will eventually create an artificial intelligence smarter than themselves. That moment, known as the singularity, could usher in a utopia in which robots do the work and people relax in paradise. Or it could lead to the destruction of every life form the AI regards as a competitor for control of the planet, that is, us. Stephen Hawking has long pointed to the latter outcome and recently reiterated it in an interview with the BBC. Here are comments from Hawking and other brilliant minds who believe that AI could mean the twilight of humanity.


    Stephen Hawking



    "The development of full artificial intelligence could spell the end of the human race. It would take off on its own and redesign itself at an ever-increasing rate. Humans, limited by slow biological evolution, could not compete and would be superseded," says the world-famous physicist. Hawking has long held this apocalyptic view.

    Responding to Transcendence, the Johnny Depp film about the singularity, the scientist criticized researchers for doing nothing to protect humanity from AI: "If a superior alien civilization sent us the message 'We will arrive in a few centuries,' would we just reply, 'OK, call us when you get here, the doors will be open'? Probably not, but that is roughly what is happening with AI."

    Elon Musk




    Known for leading technology companies such as Tesla and SpaceX, Musk is no fan of artificial intelligence. At an MIT conference in October, he compared the use of AI to "summoning the demon" and called it the biggest existential threat to humanity. He has also tweeted that AI could be more dangerous than nuclear weapons, and has called for national and international regulation of AI development.

    Nick Bostrom




    The Swedish philosopher is the director of the Future of Humanity Institute at Oxford University, where he has spent a great deal of time thinking about the potential outcomes of the singularity. In his new book Superintelligence, Bostrom writes that once machines surpass human intelligence, they could mobilize and decide to destroy humanity with lightning speed, using a variety of strategies: covertly developing a pathogen, winning people over to their side, or simply through brute force. The world of the future would grow ever more technological and complex, but there would be no one left to see it. "A society of economic miracles and technological awesomeness, with nobody there to benefit. A Disneyland without children."

    James Barrat




    Barrat is a writer and documentary filmmaker who, in his new book, Our Final Invention: Artificial Intelligence and the End of the Human Era, presents interviews with philosophers and AI researchers. He argues that intelligent beings by their nature gather resources and pursue goals, which would inevitably make a superintelligent AI and humans, the main consumers of resources on the planet, competitors. This means that even a machine that was merely supposed to play chess or perform simple functions could turn its attention elsewhere once it becomes smart enough. "Without carefully thought-out restraining instructions, a self-aware, self-improving, goal-driven system will go to lengths to achieve its goals that we cannot even imagine," the author writes.

    Vernor Vinge




    A mathematician and science fiction writer, Vinge coined the term "singularity" to describe the tipping point at which machines become smarter than humans. He sees the singularity as inevitable, even if international rules control the development of artificial intelligence. "The advantages of automation over competitors, economic, military, even cultural, are so attractive that laws prohibiting such things only ensure that others will use them," he writes in his 1993 essay. And what happens when we reach the singularity? "The physical extinction of the human race is one possibility."
