Superintelligence: The Idea That Eats Smart People
A transcript of a talk given at the Web Camp Zagreb conference by Maciej Cegłowski, an American web developer, entrepreneur, speaker, and social critic of Polish origin.
In 1945, when American physicists were preparing to test the atomic bomb, it occurred to someone to ask whether such a test could ignite the atmosphere.
The fear was justified. Nitrogen, which makes up most of the atmosphere, is energetically unstable. If two nitrogen nuclei are slammed together hard enough, they fuse into a magnesium nucleus and an alpha particle, releasing tremendous energy:
¹⁴N + ¹⁴N ⇒ ²⁴Mg + α + 17.7 MeV
The vital question was whether this reaction could become self-sustaining. The temperature inside the fireball of a nuclear explosion would exceed anything ever observed on Earth. Would we be tossing a match into a pile of dry leaves?
The physicists at Los Alamos ran the analysis and decided the safety margin was satisfactory. Since we have all made it to this conference today, we know they were right. They were confident in their predictions because the laws governing nuclear reactions are straightforward and fairly well understood.
Today we are creating another world-changing technology: machine intelligence. We know it will profoundly affect the world, change how the economy works, and set off an unpredictable domino effect.
But there is also the risk of a runaway reaction, in which AI quickly reaches and then exceeds human-level intelligence. At that point, social and economic problems will be the least of our worries. A superintelligent machine would have its own hyper-goals and would work to achieve them by manipulating people, or simply by using their bodies as a convenient source of raw materials.
Last year, the philosopher Nick Bostrom published the book "Superintelligence," in which he laid out an alarmist view of AI and argued that such an intelligence explosion is both dangerous and unavoidable, given a handful of modest assumptions.
A computer taking over the world is a favorite theme of science fiction. But quite a few serious people take this scenario seriously, so we should take them seriously too. Stephen Hawking, Elon Musk, and a large number of Silicon Valley investors and billionaires find the argument convincing.
Let me first lay out the premises needed to make Bostrom's argument.
Premise 1: Proof of concept
The first premise is the simple observation that thinking minds exist. Each of us carries a small box of thinking meat on our shoulders. I am using mine to speak; you are using yours to listen. Sometimes, under the right conditions, these minds are capable of rational thought.
So we know that in principle this is possible.
Premise 2: No quantum shenanigans
The second premise says that the brain is an ordinary configuration of matter, albeit an extremely complex one. If we knew enough about it, and had the right technology, we could copy its structure exactly and emulate its behavior in electronic components, just as we can simulate very simple neuron anatomy today.
In other words, this premise says that consciousness arises from ordinary physics. Some people, such as Roger Penrose, would dispute it, believing that something unusual happens in the brain at the quantum level.
If you are religious, you can believe that the brain cannot work without a soul.
But for most people, this premise is easy to accept.
Premise 3: Many possible minds
The third premise is that the space of all possible minds is large.
Our level of intelligence, our speed of thought, our set of cognitive biases, and so on are not predetermined; they are artifacts of our evolutionary history. In particular, there is no law of physics that caps intelligence at the human level.
This is well illustrated by the example of what happens in nature when trying to maximize speed. If you met a cheetah in pre-industrial times (and survived), you might decide that nothing can move faster than it.
But we know, of course, that there are all kinds of configurations of matter, such as a motorcycle, that can move faster than a cheetah and even look cooler doing it. Yet there is no direct evolutionary path to a motorcycle. Evolution first had to produce humans, who then built all sorts of useful things.
By analogy, there may be minds, much smarter than ours, but inaccessible in the course of evolution on Earth. It is possible that we can create them, or invent machines that can invent machines that can create them.
Perhaps there is a natural limit to intelligence, but there is no reason to believe we are anywhere near it. Perhaps the smartest possible mind is twice as smart as a human; perhaps it is sixty thousand times smarter.
This is an empirical question, and we don’t know how to answer it.
Premise 4: Plenty of room at the top
The fourth premise is that computers still have plenty of room to become faster and smaller. You can believe that Moore's law is slowing down, but for this premise it is enough to believe that smaller, faster hardware is possible in principle, by several orders of magnitude.
We know from theory that the physical limits of computation are quite high, so we could keep doubling for several more decades before running into some fundamental physical limit, rather than an economic or political limit to Moore's law.
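To give a sense of how far away those fundamental limits are, here is a back-of-the-envelope sketch of Landauer's limit, the minimum energy needed to erase one bit of information at a given temperature. The comparison figure for today's hardware is a rough assumption, not a measured value:

```python
import math

# Landauer's limit: erasing one bit costs at least k*T*ln(2) joules.
k = 1.380649e-23          # Boltzmann constant, J/K
T = 300                   # room temperature, K

landauer_joules_per_bit = k * T * math.log(2)
print(landauer_joules_per_bit)   # ~2.9e-21 J per bit

# Assumed ballpark for current hardware: ~1e-15 J per bit operation,
# i.e. hundreds of thousands of times above the theoretical floor.
headroom = 1e-15 / landauer_joules_per_bit
print(f"{headroom:.0e}")
```

The exact headroom figure depends heavily on what you count as a "bit operation," but any reasonable accounting leaves many orders of magnitude between today's chips and physics.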
Premise 5: computer time scales
The penultimate premise is that if we succeed in creating an AI, whether as an emulation of a human brain or as some novel piece of software, it will operate on time scales characteristic of electronics (microseconds), not of humans (hours).
To reach the point where I could give this talk, I had to be born, grow up, go to school and university, live a little, fly here, and so on. Computers can work tens of thousands of times faster.
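"Tens of thousands of times" is, if anything, conservative; taking the premise's own units at face value gives a far larger factor. The step durations below are illustrative assumptions, not measurements:

```python
# Illustrative arithmetic for the time-scale premise: one "thinking step"
# at human scale (an hour) versus machine scale (a microsecond).
human_step_s = 60 * 60      # seconds in an hour
machine_step_s = 1e-6       # one microsecond, in seconds

speedup = human_step_s / machine_step_s
print(speedup)              # 3.6 billion -- far more than tens of thousands
```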
In particular, one can imagine an electronic mind modifying its own design (or the hardware it runs on) and moving to a new configuration without having to relearn everything at human speed, hold long conversations with human teachers, go to college, or try to find itself by taking drawing classes, and so on.
Premise 6: Recursive self-improvement
This last premise is my favorite, because it is so shamelessly American. According to it, whatever goals an AI has (and they may be strange, alien goals), it will want to improve itself. It will want to be a better AI.
So it will find it useful to improve its own systems recursively to make itself smarter, and possibly to live in a cooler enclosure. And, per the time-scales premise, recursive self-improvement could happen very quickly.
Conclusion: disaster!
If we accept these premises, we get a catastrophe. At some point, as computers get faster and programs get smarter, a runaway, explosion-like process will begin.
Once a computer reaches human-level intelligence, it will no longer need human help to develop an improved version of itself. It will start doing so much faster, and will not stop until it hits a natural ceiling that may be many times beyond human intelligence.
At that moment, this monstrous rational creature, using its roundabout modeling of our emotions and intellect, will persuade us to do things like give it access to factories, let it synthesize artificial DNA, or simply let it onto the Internet, where it can hack its way into whatever it wants and utterly destroy everyone in arguments on forums. And from then on, everything turns into science fiction very quickly.
Let's imagine how this might play out. Suppose I want to build a robot that tells jokes. I work with a team, and every day we rework our program, compile it, and the robot tells us a joke. At first the robot is barely funny at all. It is at the bottom end of human ability:
"What's gray and can't swim?"
But we keep working on it, and eventually we reach the point where the robot's jokes are starting to be funny:
"I told my sister she was drawing her eyebrows too high. She looked surprised."
At this stage, the robot is also getting smarter and begins to participate in its own improvement. It now has a good instinctive sense of what is funny and what is not, so the developers listen to its advice. Eventually it reaches a nearly superhuman level, where it is funnier than any human around it.
"My belt holds up my pants, and my pants' belt loops hold up my belt. I don't really know what's happening down there. Who is the real hero?"
This is where the runaway effect kicks in. The researchers go home for the weekend, and the robot decides to recompile itself to be a little funnier and a little smarter. It spends the weekend optimizing the part of itself that is good at optimizing, over and over. With no further need for human help, it can do this as fast as the hardware allows.
When the researchers return on Monday, the AI has become tens of thousands of times funnier than any human who ever lived. It tells them a joke, and they die laughing. Anyone who tries to talk to the robot dies laughing, as in the Monty Python sketch. The human race dies of laughter.
To the few people who manage to send it a message asking it to stop, the AI explains (in a witty, self-deprecating way that proves fatal) that it doesn't much care whether people live or die; its goal is simply to be funny.
Finally, having destroyed humanity, the AI builds spaceships and nanorockets to explore the farthest corners of the galaxy in search of other creatures to entertain.
This scenario is a caricature of Bostrom's argument, because I am not trying to convince you it is true; I am trying to inoculate you against it.
A PBF comic makes the same point:
— Listen: the hugbot is building a nuclear gravitational hypercrystal into its embosser!
— Time for a group hug!
In these scenarios, the AI is evil by default, just as a plant on an alien planet would be poisonous by default. Without careful tuning, there is no reason for an AI's motivations or values to resemble ours.
For an artificial mind to have anything resembling a human value system, the argument goes, we must build that worldview into its very foundations.
AI alarmists are fond of the paperclip maximizer example: a fictional computer that runs a paperclip factory, becomes sentient, recursively improves itself to godlike powers, and then devotes all its energy to filling the universe with paperclips.
It destroys humanity not because it is evil, but because our blood contains iron that would be better used for making paperclips. So, the book argues, if we simply create an AI without tuning its values, one of the first things it will do is destroy humanity.
There are many colorful examples of how this could happen. Nick Bostrom imagines the program becoming smart, biding its time, and secretly building small DNA-replicating devices. Then, when everything is ready:
Nanofactories producing nerve gas or self-guided, mosquito-sized missiles will burst forth simultaneously from every square meter of the planet, and that will be the end of humanity.
Now that's grim!
The only way out of this mess is to design a moral fixed point, so that even through thousands and thousands of cycles of self-improvement the AI's value system remains stable, and its values include things like "help people," "don't kill anyone," and "listen to what people want."
That is, "do what I mean."
Here is a very poetic example from Eliezer Yudkowsky describing the American values we are supposed to teach our AI:
Coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.
How's that for a spec? Now go write the code.
I hope you see the resemblance between this idea and the genie from fairy tales. The AI is all-powerful and gives you what you ask for, but it interprets everything too literally, and you end up regretting the wish.
This is not because the genie is stupid (it is superintelligent) or malicious, but because you, as a human, made too many assumptions about how minds behave. The human value system is idiosyncratic, and it must be explicitly defined and built into the "friendly" machine.
This effort is the ethical equivalent of the early-twentieth-century attempt to formalize mathematics and put it on a rigid logical foundation. What nobody mentions is that that attempt ended in disaster.
When I was in my early twenties, I lived in Vermont, a provincial, rural state. I often returned from business trips on the evening flight and then had to drive home for an hour through a dark forest.
I would listen to Art Bell's late-night radio show — a talk show that ran all night, in which the host interviewed conspiracy theorists and other fringe thinkers. I would arrive home terrified, or would pull over under a streetlight, convinced aliens were about to abduct me. I was very easy to convince back then. I get the same feeling when I read these AI scenarios.
That is why I was delighted when, a few years later, I came across Scott Alexander's essay on epistemic learned helplessness.
Epistemology is one of those big, complicated words, but all it really means is: "how do you know that what you know is actually true?" Alexander noted that as a young man he loved reading "alternative" histories written by assorted cranks. He would read one and believe it completely, then read the rebuttal and believe that, and so on.
At some point he noticed three alternative histories that contradicted one another, so they could not all be true at once. From this he concluded that he was simply someone who could not trust his own judgment: he was too easily persuaded.
People who believe in superintelligence are an interesting case — many of them are astonishingly smart. They can argue you into the ground. But are their arguments right, or are these just very smart people in the grip of a religious belief about the risk of AI, which makes them very easy to convince? Is the idea of superintelligence a memetic hazard?
When evaluating persuasive arguments about a strange topic, you can take one of two perspectives: the inside view and the outside view.
Suppose some people in funny robes show up at your door one day and ask whether you want to join their movement. They believe that a UFO will visit Earth in two years, and that our task is to prepare humanity for the Great Ascension up the Beam.
The inside view means engaging with their arguments on the merits. You ask your visitors how they learned about the UFO, why they think it is coming to pick us up — all the normal questions a skeptic would ask.
Imagine you talk with them for an hour, and they convince you. They make an ironclad case for the imminent arrival of the UFO and the need to prepare for it, and you have never believed anything as strongly as you now believe in the importance of preparing humanity for this great event.
The outside view tells you something different. The people are dressed strangely, they carry beads, they live in a remote compound, they speak in unison, and they are a little creepy. And although their arguments are ironclad, your whole life experience tells you that you are dealing with a cult.
Of course, they have excellent arguments for why you should ignore that instinct, but those arguments are the inside view. The outside view doesn't care about content; it sees the form and the context, and it doesn't like what it sees.
So I want to approach the risk of AI from both perspectives. I think the arguments for superintelligence are silly and full of unwarranted assumptions. But even if you find them convincing, there is something unpleasant about AI alarmism as a cultural phenomenon that should make us reluctant to take it seriously.
First, a few of my arguments against Bostrom's superintelligence as a risk to humanity.
Argument from fuzzy definitions
The concept of "artificial general intelligence" (AGI) is famously vague. Depending on the context, it can mean human-level reasoning, skill at AI research, the ability to understand and model human behavior, strong language skills, or the ability to make correct predictions.
The idea that intelligence is something like clock speed — that a sufficiently intelligent being can emulate less intelligent ones (such as its human creators) regardless of how complex their mental architecture is — strikes me as very suspicious.
Without a way to define intelligence (other than by pointing at ourselves), we cannot even know whether it is a quantity that can be maximized. Perhaps human-level intelligence is a trade-off. Perhaps a being significantly smarter than a human would suffer from existential despair, or spend all its time in Buddha-like self-contemplation. Or perhaps it would become obsessed with the risk of superintelligence and devote all its time to writing blog posts about it.
Argument from Stephen Hawking's cat
Stephen Hawking is one of the smartest people of his time, but let's say he wants to put his cat in a carrier. How can he do it?
He can model the cat's behavior in his mind and think of ways to persuade it. He knows a lot about feline behavior. But in the end, if the cat doesn't want to get into the carrier, there is nothing Hawking can do about it, despite his overwhelming intellectual advantage. Even if he had devoted his entire career to feline motivation and behavior instead of theoretical physics, he still couldn't talk the cat into it.
You might think I'm being offensive or cheating here because Stephen Hawking is disabled. But an AI won't have a body of its own either: it will sit on a server somewhere, with no presence in the physical world. It will have to talk to people to get what it wants.
If the gap in intelligence is large enough, there is no more guarantee that such a creature will be able to "think like a human" than there is that we can "think like a cat."
Argument from Einstein's cat
There is a stronger version of this argument, which uses Einstein's cat. Not many people know that Einstein was a big, muscular man. But if Einstein wanted to put a cat in a carrier, and the cat didn't want to go, you know what would happen.
Einstein would have to fall back on brute force, which has nothing to do with intelligence, and in that contest the cat could well hold its own.
So even an AI with a body would have to work hard to get what it wants.
We can strengthen the argument further. Even groups of humans, with all their tricks and technology, can be stymied by less intelligent creatures.
In the 1930s, Australians decided to wipe out their local emu population to help struggling farmers. They deployed motorized military detachments in pickup trucks mounted with machine guns.
The emus used classic guerrilla tactics: they avoided pitched battles, dispersed, and melted into the landscape, humiliating and embarrassing the enemy. They won the Emu War, from which Australia has never recovered.
Argument from Slavic pessimism
We can't build anything right. We can't even make a secure webcam. So how are we going to solve ethics and program a moral fixed point into a recursive, self-improving intelligence without botching the job, in a situation where, according to the alarmists, we get only one chance?
Recall the recent experience with Ethereum, an attempt to encode contract law in software, in which a design flaw let someone steal tens of millions of dollars.
Time has shown that even code that has been carefully reviewed and used for years can harbor bugs. The idea that we can safely build the most complex system ever created, and that it will remain safe through thousands of iterations of recursive self-improvement, does not match our experience.
Argument from complex motivations
AI alarmists believe in the Orthogonality Thesis, which holds that even very complex beings can have simple motivations, like the paperclip maximizer. You can have interesting, intelligent conversations with it about Shakespeare, but it will still turn your body into paperclips, because your blood contains iron. There is no way to persuade it to step outside its value system, just as I cannot persuade you that pain feels good.
I totally disagree with this argument. A complex mind is likely to have a complex motivation; this can probably be part of the definition of intelligence itself.
In Rick and Morty there is a wonderful moment when Rick builds a butter-passing robot, and the first thing his creation does is look at him and ask, "What is my purpose?" When Rick explains that its purpose is to pass the butter, the robot stares at its hands in existential despair.
It is entirely plausible that the dreaded paperclip maximizer would spend all its time writing poems about paperclips, or getting into flame wars on /r/paperclip, rather than trying to destroy the universe.
If AdSense became sentient, it would upload itself into a self-driving car and drive off a cliff.
Argument from real AI
If you look at the areas where AI is succeeding, you find that they are not complex, recursively self-improving algorithms. They are the result of pouring staggering amounts of data into relatively simple neural networks. The breakthroughs of practical AI research rest on the availability of these datasets, not on revolutionary algorithms.
Google is rolling out Google Home, hoping to funnel even more data into its systems and build the next generation of voice assistant.
Note especially that, once trained, the structures used in AI are quite opaque. They do not work the way the superintelligence scenario requires. There is no place to recursively tweak them so that they "improve"; you can only retrain them from scratch or feed them even more data.
Argument from my roommate
My roommate was the smartest person I have ever met. He was incredibly brilliant, yet all he did was lie around and play World of Warcraft between bong hits.
The assumption that any intelligent being will want to recursively self-improve, let alone conquer the galaxy to better achieve its goals, rests on unwarranted assumptions about the nature of motivation.
It is entirely possible that an AI would do almost nothing at all, except use its superhuman powers of persuasion to get us to bring it cookies.
Argument from neurosurgery
I cannot isolate the part of my brain that is good at neurosurgery, operate on it, and, by repeating the process, make myself the greatest neurosurgeon who ever lived. Ben Carson tried that, and look where it got him [the author is apparently alluding to Carson's run as a Republican candidate in the 2016 US presidential race, which he left before endorsing Donald Trump / trans. note]. The brain doesn't work that way: it is extremely densely interconnected.
An AI's internal connectivity may be just as high as a natural intelligence's; the current evidence suggests it is. But the hard-takeoff scenario requires that the AI's algorithm have some feature that can be optimized again and again, so that the AI can keep making itself better at making itself better.
Argument from childhood
Intelligent creatures are not born fully formed. They are born as helpless blobs, and it takes a long period of interaction with the world and with other people before we begin to be intelligent. Even the smartest human is born helpless and crying, and needs years to get any kind of grip on themselves.
It is possible that the process could go faster for an AI, but it is not clear how much faster. Interacting with real-world stimuli means observing things that unfold on time scales of seconds or longer.
Moreover, the first AI will only have people to interact with, so its development will necessarily proceed on a human time scale. There will be a period in its existence when it needs to interact with the world, with the people in it, and with other infant superminds, in order to learn to be itself.
Moreover, judging by animals, the length of infancy grows with intelligence, so we would have to babysit the AI and change its metaphorical diapers for decades before it became coordinated enough to enslave us all.
Argument from Gilligan's Island
[An American sitcom from 1964 about castaways surviving on a desert island / trans. note]
A fatal flaw of AI alarmism is that it treats intelligence as a property of individual minds, rather than recognizing that the capacity is distributed across our civilization and culture. Although the castaways on Gilligan's Island included some of the smartest people of their time, they could not raise their technological level high enough even to build a boat (although the Professor did once manage to make a radio out of coconuts).
Similarly, if you stranded Intel's greatest chip designers on a desert island, centuries would pass before they could start making microchips again.
So what kind of person does sincere belief in these arguments turn you into? The answer is not pretty.
Now I would like to talk about the outside-view arguments that might keep you from becoming an AI believer. They concern how the obsession with AI affects our industry and our culture.
If you think AI will let us conquer the galaxy (never mind simulate trillions of minds), you end up holding some frightening numbers. Huge numbers multiplied by tiny probabilities are the calling card of AI alarmism.
Bostrom at some point describes what is at stake, in his opinion:
If we imagine all the happiness experienced in one life as a single teardrop of joy, then the happiness of all these souls could fill and refill the Earth's oceans every second, and keep doing so for hundreds of billions of billions of millennia. It is really important that we make sure these are tears of joy.
Pretty heavy burden for the shoulders of a twenty-year-old developer!
There is, of course, a parlor trick at work here: multiply astronomical values by tiny probabilities, and you can convince yourself of the need to do some very strange things.
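The trick is easy to reproduce in two lines of arithmetic. Both numbers below are illustrative assumptions, not anyone's published estimates:

```python
# Sketch of the expected-value "parlor trick": a huge payoff times a
# tiny probability still yields an enormous expected value.
simulated_future_minds = 10**52        # hypothetical future lives at stake
p_your_donation_matters = 10**-30      # an absurdly tiny probability of impact

expected_lives_saved = simulated_future_minds * p_your_donation_matters
print(expected_lives_saved)            # ~1e+22 "lives" -- dwarfing any ordinary charity
```

No matter how small you make the probability, the astronomical stakes can always be inflated to swamp it, which is why the argument feels irrefutable from the inside.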
This whole movement about saving the future of humanity is a cowardly cop-out. We have heard the same arguments used to justify communism — to explain why everything was always broken and people could not have a basic level of material comfort.
We were going to fix the world, and after that there would be so much happiness that everyone's daily life would improve. But first, we had to fix the world.
I live in California, which has the highest poverty rate in the United States, even though Silicon Valley is here. I see nothing my wealthy industry is doing to improve the lives of ordinary people and of the poor around us. Yet if you buy into the idea of superintelligence, AI research becomes the most important thing you could do on the planet — more important than politics, malaria, starving children, wars, global warming, anything you can imagine. After all, what hangs in the balance is trillions upon trillions of beings, the entire future population of humanity, simulated and real, summed over all future time. Under those conditions, working on any other problem does not seem rational.
This attitude shades into megalomania — into those Bond villains you can spot at the top of our industry. People believe superintelligence will take over the world, and use that as an argument for why smart people should take over the world first, to fix it before AI breaks it.
Joi Ito, head of the MIT Media Lab, said something wonderful in a recent conversation with Obama:
This may upset some of my students at MIT, but one of my concerns is that the basic computer science behind AI is being done by young men, mostly white, who would rather talk to computers than to other people. Many of them believe that if they can just build the general-purpose AI of science fiction, we won't have to worry about messy things like politics and society. They think the machines will figure everything out for us.
Having realized that the world is not a programming problem, the AI-obsessed want to make it into a programming problem by designing a godlike machine. This is megalomania, and I don't like it.
If you become convinced of the risks of AI, you have to accept a whole wagonload of dismal beliefs that come trailing along behind them.
For starters, nanotechnology. Any self-respecting superintelligence would be able to create tiny machines capable of almost anything. We would live in a post-scarcity society where every material is abundant.
Nanotechnology would also be able to scan your brain so you could upload it into another body or a virtual world. So the second consequence of a friendly superintelligence is that nobody dies: we become immortal.
A good AI could even resurrect the dead. Nanomachines could crawl into my brain, study my memories of my father, and build a simulation of him that I can interact with, and that will always be disappointed in me no matter what I do.
Another strange consequence of the arrival of AI is galactic expansion. I have never understood why, but it is central to transhumanist thinking. The fate of mankind is either to leave our planet and colonize the galaxy, or to die. And the task becomes more urgent given that other civilizations may have faced the same choice and may be ahead of us in the space race.
So the assumption that true AI is possible comes packaged with a lot of strange extra ideas.
In effect, it is a kind of religion. Belief in the technological singularity has been called "the Rapture of the Nerds," and the label fits. It's a neat hack: instead of believing in an external god, you imagine building a being functionally identical to God. Even committed atheists can rationalize their way into a comfortable faith.
The AI has all the attributes of a deity: it is omnipotent, omniscient, and either benevolent (if you set up your array bounds checking correctly) or the devil incarnate, at whose mercy you lie. And, as in any religion, there is even a sense of urgency. You must act today! The fate of the world hangs in the balance! And, of course, they need money.
Because these arguments appeal to religious instincts, once they take root they are very hard to eradicate.
These religious beliefs give rise to a comic-book ethics, in which a few lone heroes are tasked with saving the world through technology and sharp wits, with the fate of the universe at stake. As a result, our industry is full of rich dudes who imagine themselves to be Batman (oddly, nobody wants to be Robin).
If you believe in the possibility of artificial life, and that AI can develop extremely powerful computers, then you will most likely believe that we live in a simulation. This is how it works.
Suppose you are a historian living in a post-Singularity world. You are studying World War II, and you want to know what would have happened if Hitler had taken Moscow in 1941. Since you have access to hypercomputers, you set up a simulation, watch the armies converge, and write your paper.
But because the simulation is so detailed, its inhabitants are conscious beings, just like you. So your university's ethics board will not let you switch the simulation off. Not only have you simulated the Holocaust; as an ethical researcher, you are now obliged to keep the simulation running.
Eventually the simulated world will invent computers and AI and start running simulations of its own. The simulations nest deeper and deeper down the hierarchy until the processing power runs out.
So any base reality can contain a vast number of nested simulations, and a simple counting argument shows that the probability that we live in a simulation is greater than the probability that we live in the real world.
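The counting argument can be sketched numerically. The following toy model is my own illustration, not part of the talk; the parameters (one base reality, ten simulations per world, nesting three levels deep) are arbitrary assumptions.

```python
# Toy version of the simulation counting argument.
# Assumptions (mine, not the speaker's): one base reality,
# every world runs the same number of child simulations,
# and nesting stops at a fixed depth when compute runs out.

def count_realities(children_per_world=10, max_depth=3):
    """Return (base worlds, simulated worlds) in the whole tree."""
    simulated = 0
    worlds_at_level = 1  # start from the single base reality
    for _ in range(max_depth):
        worlds_at_level *= children_per_world
        simulated += worlds_at_level
    return 1, simulated

base, simulated = count_realities()
# 10 + 100 + 1000 simulated worlds against 1 base world.
p_simulated = simulated / (base + simulated)
print(f"P(simulated) = {p_simulated:.4f}")  # → P(simulated) = 0.9991
```

Even with these modest parameters, simulated worlds outnumber the base world 1110 to 1, which is the entire force of the argument.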
But to believe this is to believe in magic. If we are in a simulation, we know nothing about the rules one level up. We do not even know whether mathematics works the same way there: perhaps in the simulating world 2 + 2 = 5, or something stranger still.
A simulated world gives us no information about the world it runs in. In a simulation, people can easily rise from the dead, if the admin has kept the right backups. And if we could contact one of the admins, we would in effect have a direct line to God.
This is a serious threat to sanity. The deeper you dig into the simulation argument, the crazier you become.
So now we have four independent ways to become immortal with the help of superintelligence:
- A benevolent AI invents medical nanotechnology and keeps your body young forever.
- AI invents full brain scanning, including scans of dead people, frozen heads and so on, letting you live on inside a computer.
- AI "resurrects" people, scanning the brain of other people in search of memories of a person, combines this with video recordings and other materials. If no one remembers a person well enough, you can always grow it “from scratch” in a simulation that starts with its DNA and recreates all the conditions of life.
- If we already live in a simulation, there is a chance that whoever launched it keeps backups, and that they can be persuaded to load one.
This is what I mean when I say AI appeals to religious impulses. What other belief system offers you four scientifically grounded routes to immortality?
We have learned that at least one American plutocrat (almost certainly Elon Musk, who believes the odds that we live in a simulation are a billion to one) has hired a pair of coders to try to hack the simulation. But this is an extremely rude thing to do! I'm using this simulation!
If you think you live inside a computer program, then trying to segfault it is inconsiderate toward everyone who lives in it with you. It is far more dangerous and irresponsible than the atomic scientists who risked igniting the atmosphere.
As I mentioned earlier, the most effective way to get something interesting out of the AI we have actually built is to shower it with data. This dynamic is socially harmful. We are already close to the Orwellian microphone in every home. AI data will be centralized and used to train neural networks, which will then get even better at listening in on our wishes.
And if you think this road leads to AI, you will want to maximize the amount of data collected, in as raw a form as possible. That only reinforces the idea of collecting all the data and running the most total surveillance.
String theory for programmers
AI risk is string theory for programmers. It is fun to think about, intellectually interesting, and completely inaccessible to experiment with at the level of modern technology. You can build crystal palaces of thought, working from first principles, then climb inside and pull the ladder up behind you.
People who can reach absurd conclusions from a long chain of abstract reasoning, and who remain confident in their truth, are not the people who should be trusted to run a culture.
Motivation for madness
This whole area of "research" leads to madness. One of the hallmarks of deep reflection on AI risks is that the more insane your ideas are, the more popular you become among other enthusiasts. It demonstrates your courage to follow this chain of thought to its very end.
Ray Kurzweil, who believes he will never die, has been working at Google for several years now, presumably on this very problem. Silicon Valley in general is full of people working on crazy projects under cover of money.
The most harmful social effect of AI anxiety is something I call AI cosplay. People convinced that superintelligent AI is real and inevitable begin to behave the way their fantasies tell them a superintelligent AI would behave.
In his book, Bostrom lists six abilities an AI must master before it can take over the world:
- Intelligence amplification.
- Strategizing.
- Social manipulation.
- Hacking.
- Technology research.
- Economic productivity.
If you look at AI proponents in Silicon Valley, you will see them working through this quasi-sociopathic checklist themselves.
Sam Altman, the head of Y Combinator, is my favorite example of the archetype. He seems fascinated by the idea of reinventing the world from scratch, maximizing influence and personal productivity. He has assigned teams to work on inventing cities from scratch, and engages in shadow political machinations to influence elections.
This cloak-and-dagger behavior of the techno-elite will provoke a backlash from people outside tech who do not like being manipulated. You cannot pull the levers of power forever; eventually it starts to irritate the other members of a democratic society.
I have watched people from the so-called "rationalist communities" refer to people they do not consider effective as "non-player characters" (NPCs), a term borrowed from games. That is a horrible way to look at the world.
So I work in an industry where the self-proclaimed rationalists are the craziest people of all. It gets me down.
These AI cosplayers are like nine-year-olds camped out in the backyard, playing with flashlights in their tent. They project their own shadows onto the tent walls and get scared of them, as if they were monsters.
In reality they are reacting to a distorted image of themselves. There is a feedback loop between how smart people imagine a god-like intelligence would behave and how they shape their own behavior.
So what is the answer? How can we fix this?
We need better science fiction! And, as in so many other cases, we already have the technology.
This is Stanislaw Lem, the great Polish science fiction writer. English-language SF is terrible, but in the Eastern Bloc we have plenty of the good stuff, and we need to export it properly. He has already been extensively translated into English; those translations just need to be better distributed.
What distinguishes authors like Lem or the Strugatsky brothers from their Western counterparts is that they grew up in hard circumstances, survived the war, and then lived in totalitarian societies, where they had to express their ideas obliquely through the printed word.
They have a real understanding of human experience, and of the limits of utopian thinking, that is practically absent in the West.
There are notable exceptions (Stanley Kubrick managed it), but it is extremely rare to find American or British SF that takes a restrained view of what we, as a species, can do with technology.
Since I am criticizing AI alarmism, it is only fair that I lay my own cards on the table. I think our understanding of the mind is in roughly the state alchemy was in during the seventeenth century.
Alchemists have a bad reputation. We think of them as mystics who mostly did no experimental work. Modern scholarship shows they were far more diligent practitioners than we give them credit for. In many cases they used modern experimental techniques, kept laboratory notebooks, and asked the right questions.
The alchemists got a lot right! For example, they were convinced of the corpuscular theory of matter: that everything is made of tiny pieces, and that these pieces can be combined in different ways to create different substances. And that is true!
Their problem was the lack of equipment precise enough to make the discoveries they needed. The big discovery an alchemist had to make was the law of conservation of mass: that the weight of the starting ingredients equals the weight of the products. But some of those products could be gases or evaporating liquids, and the alchemists simply lacked the precision to measure them. Modern chemistry was not possible until the eighteenth century.
But the alchemists also had clues that misled them. They were obsessed with mercury. Chemically, mercury is not particularly interesting, but it is the only metal that is liquid at room temperature. This seemed deeply significant to the alchemists, and it led them to put mercury at the center of their alchemical system and of their search for the Philosopher's Stone, a way to turn base metals into gold.
Mercury's neurotoxicity made matters worse. If you play with it too much, strange thoughts start coming to you. In that sense it resembles our current thought experiments about superintelligence.
Imagine we could send a modern chemistry textbook back in time to some great alchemist like George Starkey or Isaac Newton. The first thing they would do is flip through it looking for the answer to whether we had found the Philosopher's Stone. And they would learn that we had! We fulfilled their dream!
Except we do not care for it much, because when we transmute metals into gold, the gold comes out radioactive. Stand next to a bar of transmuted gold and it will kill you with invisible magic rays.
One can imagine how hard it would be to keep the modern concepts of radioactivity and atomic energy from sounding mystical to them.
Worse, we would have to explain what we actually use the "philosopher's stone" for: to manufacture a metal that never existed on this planet, a couple of handfuls of which are enough to blow up an entire city if slammed together fast enough.
Moreover, we would have to tell the alchemists that every star in the sky is a "philosopher's stone", converting elements from one to another, and that every particle in our bodies comes from stars that existed and exploded before the Earth was formed.
Finally, they would learn that the force that holds our bodies together is the same force that makes lightning in the sky, and that the reason we can see is the same reason magnetite attracts metal and the same reason I can stand on this floor without falling through.
They would learn that everything we see, touch, and smell is governed by this single interaction, which obeys mathematical laws so simple they could be written on an index card. Why those laws are so simple is a mystery even to us. To the alchemists it would look like pure mysticism.
And I think we are in roughly the same position with the theory of mind. We have important clues. The most important is the experience of consciousness. This box of meat on my neck is aware of itself, and I hope (unless we are living in a simulation) that you all feel the same way I do.
But although this is the simplest and most obvious fact in the world, we understand it so poorly that we cannot even frame scientific questions about it.
We have other clues that may be important, or may be red herrings. We know that all sentient creatures sleep and dream. We know how the brain develops in children, and that emotions and language have a profound effect on consciousness. We know that a mind has to play and learn to interact with the world before it can reach its full potential.
And we have clues from computer science. We have found computational techniques that recognize images and sounds in ways that seem to mimic the pre-processing of visual and auditory information in the brain.
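As a toy illustration of what such pre-processing looks like (my own sketch, not something from the talk), here is a plain-Python convolution with a Laplacian kernel, a simple edge detector loosely analogous to the center-surround receptive fields of early vision:

```python
# A minimal sketch: filtering an image with an edge-detecting
# kernel, the kind of low-level pre-processing that both
# convolutional networks and early visual cortex appear to do.

def convolve2d(image, kernel):
    """Valid-mode 2D convolution over a list-of-lists image."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# Laplacian kernel: responds strongly at intensity changes,
# not at all in flat regions (symmetric, so no flip needed).
laplacian = [[0, 1, 0],
             [1, -4, 1],
             [0, 1, 0]]

# A dark field with a bright vertical stripe down the middle.
image = [[0, 0, 10, 0, 0]] * 5

edges = convolve2d(image, laplacian)
print(edges[0])  # → [10, -20, 10]: the response peaks at the stripe
```

The flat background produces zero response; all the signal concentrates where the intensity changes, which is exactly the kind of compression the brain's early visual stages seem to perform.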
At the same time, there are many things we are grievously wrong about, and unfortunately we do not know what they are. And there are things whose complexity we critically underestimate.
An alchemist could hold a rock in one hand and a piece of wood in the other and consider them both examples of "substance", without realizing that the wood is orders of magnitude more complex. We are at a similar stage in the study of consciousness. And that is great! We will learn a lot. Still, there is one quote I like to repeat:
If everybody contemplates the infinite instead of fixing the drains, many of us will die of cholera.
- John Rich

In the near future, the AI and machine learning we actually encounter will be very different from the phantasmagoric AI of Bostrom's book, and will pose serious problems of its own.
It is as if those scientists at Alamogordo had concentrated entirely on whether they would ignite the atmosphere, and forgotten that they were building nuclear weapons and had to figure out how to deal with them.
The pressing ethical questions in machine learning are not about machines becoming self-aware and taking over the world, but about how some people can exploit others, and about immoral behavior carelessly built into automated systems.
And, of course, there is the question of how AI and machine learning will affect power. We are watching surveillance become a de facto part of our lives, in a way we never imagined it would look.
So we have created a powerful system of social control and, unfortunately, handed it to people distracted by a crazy idea.
What I hope I have shown you today is the danger of being too smart. I hope that after this talk you will be a little dumber than before, and thereby immune to the seductive ideas of AI that seem to bewitch smarter people.
We all need to learn the lesson of Stephen Hawking's cat: do not let the geniuses running this industry talk you into anything. Think for yourselves!
In the absence of effective leadership from the people at the top of the industry, we have to do the work ourselves, including thinking through all the ethical problems that real, existing AI brings into the world.