Do we live inside a computer model?



    Back in 2003, the Oxford philosopher Nick Bostrom published a lengthy article in which he considered the assumption that we all, in fact, live inside a computer model. Ever since, this work has haunted many scientists, who periodically publish articles supporting or refuting Bostrom's argument.

    His article states that at least one of the following statements is true:

    1) The human race is likely to disappear before it reaches the posthuman level of development.
    2) It is extremely unlikely that any posthuman civilization will create a significant number of models of the history of its evolution (or possible options).
    3) We almost certainly live inside a computer model.

    It follows that, unless we are already living inside a computer model, the belief that our civilization will one day reach the posthuman level and create such models is mistaken.

    We found the ideas, assumptions and arguments presented by the scientist interesting, and decided to share his conclusions with you (in abridged form).

    Introduction


    Science fiction, as well as serious works by scientists and futurologists, predicts that in the future we will have access to incredible computing power. Suppose this is so. Thanks to these supercomputers, future generations will be able to run many models of the lives of their predecessors. These modeled "people" will likely possess consciousness if the virtual world is sufficiently detailed. Then it is likely that the vast majority of minds like ours will be products of such models. In that case we can assume that this is already a fait accompli, and that we in fact live not in the real world but in a virtual one. And if we do not believe that this is so, then we have no right to believe that our descendants will ever master such technologies. That is the main idea.

    Assumption of substrate independence




    In philosophy, the concept of "substrate independence" means that mind can be implemented on a substrate made of any material. At the logical level, our consciousness can be reduced to a system of computational structures and processes, and this is not unique to the biological neural networks housed in our skulls: silicon processors are, in theory, also capable of supporting it. We will not delve into the debate on this point; we simply accept the thesis of substrate independence as it is. That is, we do not claim that it is certainly true; we merely suppose that a hypothetical computer running a suitable program could gain consciousness.

    Moreover, there is no need to assume that a computer mind would behave in every situation exactly as a person would, including passing the Turing test. It is enough to assume that the computational processes occurring in the human brain will someday be modeled fully and thoroughly enough, down to the level of individual synapses.

    Of course, various chemical components also affect our learning and cognitive functions. The thesis of substrate independence does not deny their role, but holds that they affect subjective experience only through their direct or indirect effect on the computational activity of the brain.

    Technological limitations


    At the moment, we do not have computers and software fast enough to create artificial intelligence. But if technological progress continues at a steady pace, the necessary technologies will eventually be created. This may well happen only after several decades; however, the time factor does not matter for the subject of this article. The assumption that we exist within a model will "work" even if we believe it will take hundreds of thousands of years for civilization to reach the "posthuman" stage, at which point only fundamental physical laws and the availability of raw materials and energy will limit our technological capabilities.

    At this level of development, it will be possible to turn celestial bodies, up to entire planets, into incredibly powerful computers. It is hard for us now even to imagine what computing power may be available to a posthuman civilization. Since we have not yet developed a "theory of everything", we cannot rule out the future discovery of physical phenomena that would let us go beyond the limits of information processing as we understand them today. With much greater confidence, we can set lower bounds on posthuman computing power, taking into account only the mechanisms known to us today. For example, in 1992 Eric Drexler, in his book Nanosystems: Molecular Machinery, Manufacturing, and Computation, described a computer the size of a sugar cube (excluding power and cooling) that would be able to perform 10^21 operations per second. Robert Bradbury estimates that a planet-sized computer could achieve 10^42 operations per second. Seth Lloyd believes that by building quantum or plasma computers we could get even closer to the theoretical limits of computing power. For a computer weighing 1 kg, Lloyd calculated an upper limit of 5×10^50 operations per second operating on about 10^31 bits.



    It is also very difficult to estimate what computing power would be enough to emulate the human mind. In his 1989 book Mind Children, Hans Moravec concluded that a complete brain emulation requires a performance of about 10^14 operations per second. Based on the number of synapses in the brain and the frequency of their firing, Nick Bostrom himself put the necessary performance at 10^16–10^17 operations per second. Simulating the internal operation of synapses and dendritic structures would most likely require even more computing power. However, at the micro level our central nervous system apparently has a high degree of redundancy to compensate for the unreliability and high "noisiness" of its neural components.
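
    For a sense of where an estimate of this order comes from, here is a minimal back-of-the-envelope sketch. The specific inputs (about 10^11 neurons, roughly 5,000 synapses per neuron, signal frequencies on the order of 100 Hz) are illustrative assumptions, not figures taken from the article itself:

    # Rough estimate of the computing power needed to emulate a human brain
    # at the level of individual synapses. All inputs are illustrative assumptions.
    neurons = 1e11             # approximate number of neurons in the human brain
    synapses_per_neuron = 5e3  # rough average number of synapses per neuron
    firing_rate_hz = 1e2       # assumed average signal frequency per synapse, Hz

    ops_per_second = neurons * synapses_per_neuron * firing_rate_hz
    print(f"~{ops_per_second:.0e} operations per second")  # ~5e+16, within the 10^16-10^17 range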

    A model of the environment will also require computing power, depending on its scale and detail. It is impossible to model the entire universe down to the quantum level, unless new physical phenomena are discovered. But a realistic model of our surroundings requires much less power. The main thing is that an artificial intelligence interacting with the virtual environment, just as a living person would interact with the real world, notices no deviations. So the interior of our planet, for instance, need not be modeled at the microscopic level.

    Distant astronomical objects also do not require much effort: it is enough to "deliver" the information that could be obtained from observations made from the planet or from spacecraft.

    The planet's surface will need to be fully modeled at the macroscopic level, and at the microscopic level only as the situation requires; the main thing is that the picture in the eyepiece of a microscope looks convincing. It becomes more difficult when we use systems and machines that interact at the micro level to obtain the expected results. In addition, capacity will be required to continuously track the beliefs of all modeled minds, so that all the necessary details can be filled in on the fly the moment a "computer person" looks through a microscope.

    If one of us suddenly notices some kind of discrepancy in this world, it is easy to "patch": the creators can correct that person's mind, or rewind the model a few seconds and rerun it so that the error is never detected.



    Since it is impossible to calculate exactly the computing power required to simulate all of human history, it can be roughly estimated at 10^33–10^36 operations (100 billion humans × 50 years per person × 30 million seconds per year × 10^14–10^17 operations per second per brain). But even if we are off by a few orders of magnitude, this is not too important for the hypothesis of a universal model that we are considering.

    As mentioned above, a planetary computer could have a performance of 10^42 operations per second. Such a "device" could simulate the entire mental history of mankind using only 0.0001% of its capacity for a single second. A posthuman civilization could build a colossal number of such computers, creating a corresponding number of full-scale models. This conclusion holds even allowing for large errors in all our assumptions.
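
    As a sanity check of this arithmetic, here is a minimal sketch that multiplies out the factors quoted above and compares the rounded upper estimate with one second of a planet-sized computer's capacity. The inputs are the figures given in the text; the exact product lands within a couple of orders of magnitude of the rounded 10^33–10^36 range, which, as the text notes, does not matter for the argument:

    # Total operations needed to simulate the entire mental history of humankind,
    # using the figures quoted in the text.
    people = 100e9                  # ~100 billion humans who have ever lived
    years_per_person = 50           # average simulated lifespan, in years
    seconds_per_year = 30e6         # ~30 million seconds in a year
    brain_ops_low, brain_ops_high = 1e14, 1e17  # operations per second per brain

    person_seconds = people * years_per_person * seconds_per_year
    print(f"{person_seconds * brain_ops_low:.1e} ... {person_seconds * brain_ops_high:.1e} operations")
    # -> 1.5e+34 ... 1.5e+37 operations, the same ballpark as the rounded estimate above

    # Share of a planetary computer (10^42 operations/sec) needed to run one such
    # simulation within a single second, using the rounded upper estimate of 10^36:
    print(f"{1e36 / 1e42:.0e}")     # -> 1e-06, i.e. 0.0001% of one second of its capacity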

    The essence of the assumption that we exist within a model


    So, if there is a noticeable chance that our civilization will someday reach a posthuman level of development and create complete models of humanity, then why should we not think that we already live in such a model?

    Let us present the mathematical argument behind this claim.

    Notation:

    F_p - the fraction of all technological civilizations that survive to reach the posthuman stage.
    N - the average number of models run by a posthuman civilization.
    H - the average number of people who have lived in a civilization before it becomes posthuman.


    Then the fraction of all human minds that are simulated is:

    F_sim = (F_p · N · H) / (F_p · N · H + H)

    Let:

    F_1 - the proportion of posthuman civilizations interested in creating models of humanity (or containing at least some individuals who are interested in this and have the necessary resources).
    N_1 - the average number of models run by such an interested civilization.


    Then:

    N = F_1 · N_1

    Therefore, substituting N = F_1 · N_1 and dividing the numerator and denominator by H:

    F_sim = (F_p · F_1 · N_1) / (F_p · F_1 · N_1 + 1)

    From this equation it follows that at least one of the following three assumptions must be true (since N_1 would be astronomically large, the product F_p · F_1 · N_1 is either vanishingly small or enormous):

    (1) F_p ≈ 0
    (2) F_1 ≈ 0
    (3) F_sim ≈ 1
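
    A minimal numeric sketch of this trichotomy follows; the parameter values are arbitrary and chosen purely for illustration:

    # Illustration of the trichotomy implied by
    #   F_sim = (F_p * F_1 * N_1) / (F_p * F_1 * N_1 + 1).
    # All parameter values are arbitrary, chosen only for illustration.

    def f_sim(f_p: float, f_1: float, n_1: float) -> float:
        """Fraction of all human-type minds that are simulated."""
        x = f_p * f_1 * n_1
        return x / (x + 1)

    n_1 = 1e9  # models per interested civilization; modest next to the planetary-computer estimates

    for f_p, f_1 in [(0.0, 0.01),    # assumption (1): no civilization reaches the posthuman stage
                     (0.01, 0.0),    # assumption (2): none of them choose to run such models
                     (0.01, 0.01)]:  # both fractions merely small but non-zero
        print(f"F_p={f_p}, F_1={f_1} -> F_sim={f_sim(f_p, f_1, n_1):.6f}")
    # -> 0.000000, 0.000000 and 0.999990: F_sim lands near 0 or near 1
    #    unless F_p * F_1 happens to be tuned to the order of 1/N_1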

    The bland indifference principle


    So, suppose assumption (3) is true. Let x be the proportion of all observers who live inside a model. If we have no evidence that our own consciousness is any more or less likely than others' to be computer-generated rather than to belong to a living person, then our credence (Cr) that we live inside a model should equal:

    Cr(SIM | F_sim = x) = x

    This statement follows from the bland indifference principle. Consider two cases. The first and simplest is when all the minds inside the model are completely identical: they have the same knowledge, memories and experience. The second is when the minds are similar but qualitatively differ from one another in their experience. The assumption of a universal model works in both cases, because we have no reliable information about which of the minds in our world are virtual and which belong to living people.
    (A deeper analysis of this position can be found in Bostrom's book "Anthropic Bias: Observation Selection Effects in Science and Philosophy".)

    For a better understanding, the following analogy can be offered. Suppose a certain fraction of the human population carries a certain gene sequence S, commonly referred to as "junk DNA". Suppose this sequence is not detected in routine genome analysis and has no obvious external manifestations. Then it would be rational to set your credence that you carry S equal to that fraction of the population. And this does not depend on whether the minds of people with S differ from the minds of people without S, simply because each person has their own unique life experience that has no relation to the presence or absence of S in their genes.

    The same reasoning applies if the presence of S is due not to biology but to being inside a computer model, given that we have no way to distinguish living people from "computer" ones.

    Interpretations


    If assumption (1), F_p ≈ 0, is true, then humanity will almost certainly not reach the posthuman stage. In this case we can speak of a high probability of DOOM, that is, of the hypothesis of the extinction of mankind:

    Cr(DOOM | F_p ≈ 0) ≈ 1

    One can imagine hypothetical situations in which we have evidence that outweighs our knowledge of F_p. For example, if we learned that a giant meteorite was about to strike Earth, we could assign a probability to DOOM much higher than the expected proportion of civilizations that fail to reach the posthuman stage.

    Assumption (1) in itself does not mean that we will die out soon; it only means that we are unlikely to reach the posthuman stage. It is compatible with humanity remaining at roughly its current level of technological development for a long time before going extinct. Another scenario in which assumption (1) is true is the collapse of technological civilization, with primitive human communities continuing to exist on Earth for an indefinite time.

    There are many ways for us not to reach the posthuman stage. One of the most plausible is the development of some powerful but dangerous technology. Today, one of the candidates for this role is molecular nanotechnology, which in the future could allow the creation of self-replicating nanobots capable of obtaining resources from soil and organic matter, a kind of mechanical bacteria. Such nanobots, created with malicious intent, could destroy all life on the planet.

    If we accept assumption (2), F_1 ≈ 0, this means that the proportion of posthuman civilizations interested in modeling humanity is negligible. For this, a strong convergence in the development paths of civilizations must occur. Since the number of models that even a single interested civilization could run is extremely large, the fraction of civilizations interested in running them must be correspondingly small. That is, almost all of them decide not to spend resources on this, or almost none of their members are interested and have the necessary capabilities, or these civilizations directly prohibit the creation of such models.
    What could cause civilizations to develop along such similar paths? Some will say that all advanced civilizations arrive at an ethical prohibition on modeling, so as not to inflict suffering on simulated minds. However, from today's point of view, the creation of an "electronic" civilization is not considered immoral. And an ethical explanation alone is clearly not enough: it would also require a high degree of similarity in the social structures of different civilizations.

    Another possible point of convergence is the assumption that almost all the inhabitants of almost every posthuman civilization reach a level of development at which they simply lose the desire to create universal models of humanity. This would require very serious changes in the motivation of our descendants. Who knows, perhaps in the future it will be considered simply a pointless undertaking. Perhaps the scientific value of such models would be negligible to them, which does not look too implausible given the presumed immeasurable intellectual superiority of future civilizations. Or maybe future "posthumans" will find such models too inefficient a way of getting pleasure: after all, it is much easier to stimulate the necessary areas of the brain than to build colossal "game servers". Thus, if assumption (2) is true, it follows that the development paths of advanced civilizations must converge strongly, so that virtually none of them runs models of its ancestors.

    The most intriguing scenarios open up if assumption (3), F_sim ≈ 1, is true. If you and I now live inside a computer model, then the cosmos we are able to observe is only a small part of the physical universe. Moreover, the physical laws of the world containing the supercomputer on which our virtual world runs may not coincide with "our" physical laws.

    There is also the possibility that a modeled civilization will itself reach the posthuman stage within the model and then launch its own supercomputers to create models of its own. Such models can be compared with today's "virtual machines", which can likewise be nested: you can create a virtual machine that emulates another virtual machine, which emulates a third, and so on. Therefore, if we ever create such a model ourselves, that in itself will be serious evidence against assumptions (1) and (2), and we will be forced to conclude that we ourselves exist inside a model. Moreover, we will have to assume that the posthuman civilization that created our world is itself a computer model, and its creators, in turn, are too.



    In other words, reality may consist of many levels, and we can assume that their number grows over time. There is, however, an argument against this hypothesis: the amount of resources consumed for modeling at the topmost, "real" level becomes too large. Perhaps even modeling a single posthuman civilization would be incredibly costly. In that case, we can expect that our model will simply be switched off when it reaches the posthuman stage. Amen.

    To some extent, the posthumans who launched the model can be compared to gods in relation to us "living" inside the computer: after all, they created the world we know, their intellect is beyond our comprehension, they are omnipotent within our world, and they know everything that happens to each of us. A further logical chain can lead us to justify the appropriateness of "good behavior" in order to earn the favor of the demiurges.

    In addition to the theory of a universal model, one can consider the idea of selectively modeling a small group of people or even a single person. In these cases, the rest of humanity consists of "shadow people" modeled only at a level sufficient to be convincing. It is difficult to say how much fewer resources are required to model shadow people compared to "full" people. It is not even clear whether they could behave indistinguishably from "real" people while lacking full consciousness.

    It can also be assumed that, to save computing resources, the creators replace the "life experience" of simulated people with fake memories. Then the conclusion suggests itself that there is no real pain and suffering in the world, and that all our bad memories are an illusion. Of course, such reasoning makes sense only if you are not suffering from anything right now.

    Conclusion


    Well, suppose we live inside a computer model: what should we do next? Actually, nothing special; live as before, make plans and fantasize about the future. The saddest option for us is the one where assumption (1) is true. Compared with that, it is preferable that we live in a computer model after all, although the limits of our "creators'" computing power could lead to our world being "switched off" once it reaches the posthuman stage. So the most favorable option for us is that assumption (2) is true.
