On Consciousness and Artificial Intelligence
The idea of artificial intelligence gaining consciousness has long been a commonplace of science fiction and cinema (Asimov, “Terminator”, “Ghost in the Shell”, and so on ad infinitum). Yet few sci-fi thinkers have asked what consciousness actually is, how it arose in humans, and how an AI could come to acquire it.
In this essay, we would like to draw attention to one very interesting (and, in our opinion, very plausible) definition of this phenomenon, given not by a science fiction writer and not by a philosopher, but by the evolutionary biologist Richard Dawkins in his book “The Selfish Gene”.
Dawkins sees the human brain as a universal builder of mathematical models of reality. In chapter 4 he writes:

“The evolution of the capacity to simulate seems to have culminated in subjective consciousness. Why this should have happened is, to me, the most profound mystery facing modern biology. There is no reason to suppose that electronic computers are conscious when they simulate, although we have to admit that in the future they may become so. Perhaps consciousness arises when the brain's simulation of the world becomes so complete that it must include a model of itself.”
This idea - consciousness as the inclusion of the architect in the model he has constructed - is, of course, far from indisputable (and, quite possibly, not even falsifiable in Popper's sense). Still, some indirect arguments can be offered in its favor. For example, reflection (the subject's turning of thought upon itself) is regarded in philosophy as one of the most important acts of consciousness.
UPD: Another argument in favor of the hypothesis under discussion is the existence of animatism (belief in the impersonal animation of objects and phenomena), characteristic of many (if not all) primitive societies. Within Dawkins's theory, animatism arises as an attempt to apply the habitual method of modeling the behavior of fellow tribesmen to other objects as well.
Suppose the Dawkins hypothesis is true. Let us look at AI through this definition of consciousness; having done so, we will inevitably conclude that no Skynet threatens us in the near future.
Indeed, AI is characterized by a strict separation between the subject domain and the algorithm of the AI itself. This separation descends, if you like, from the von Neumann architecture with its division into an instruction area and a data area.
But even if we load into the AI's memory a description of how it is itself constructed, will the AI thereby gain consciousness? Unfortunately, no. This information - the algorithm of its own operation - is useless to the AI in the sense that no conclusion can be drawn from it that directly or indirectly affects the accuracy of its modeling, and therefore the testability of the predictions the model yields.
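To make the point concrete, here is a minimal sketch in Python (the toy forecaster and all names are purely illustrative, not anything from Dawkins's book): a program whose mutable subject-domain data and fixed algorithm are strictly separated, into whose data memory we then place a complete description of its own algorithm.

```python
import inspect

class ToyForecaster:
    """Toy 'AI': predicts the next observation as the mean of past ones."""
    def __init__(self):
        self.memory = []                    # subject-domain data (mutable)

    def observe(self, item) -> None:
        self.memory.append(item)

    def predict(self) -> float:            # the algorithm itself (fixed code)
        xs = [v for v in self.memory if isinstance(v, float)]
        return sum(xs) / len(xs) if xs else 0.0

ai = ToyForecaster()
for x in (3.0, 4.0, 5.0):
    ai.observe(x)
print(ai.predict())                         # 4.0

# Put a complete description of the AI's own algorithm into its memory:
ai.observe(inspect.getsource(ToyForecaster))

# Nothing changes: the self-description sits inert among the data. No
# prediction about the subject domain becomes more (or less) accurate.
print(ai.predict())                         # still 4.0
```

The self-description is just one more item in the data area; the fixed algorithm has no use for it, which is exactly the uselessness claimed above.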
Dawkins is silent in his book about why the brain needed to include itself in its subject domain, that is, what benefit the architect derives from such an extension of the model. We will try to answer this question.
A living organism needs to analyze its own behavior in order to predict the behavior of other individuals, above all members of its own species. Such a skill obviously raises an individual's fitness for life in a group. It is logical to assume that this ability is most in demand in animals that form more or less large hierarchical groups with intense social ties - humans, for example. (UPD: in Dawkins's endnotes one can find a reference to the research of Nicholas Humphrey, who came to the same conclusions.)
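If this is right, a self-model pays off because it can be reused as a predictor of others. A minimal sketch of that reuse (with purely illustrative names; this is our gloss, not a reconstruction of Humphrey's work):

```python
# An agent predicts a fellow group member by running its *own* decision
# rule on the other's observed situation -- which presupposes that the
# agent has a model of itself to run.

def my_policy(hunger: float, danger: float) -> str:
    """My own decision rule: forage unless perceived danger outweighs hunger."""
    return "forage" if hunger > danger else "hide"

def predict_conspecific(their_hunger: float, their_danger: float) -> str:
    """Simulate a conspecific by applying my own policy to their situation."""
    return my_policy(their_hunger, their_danger)

print(predict_conspecific(0.9, 0.3))   # "forage": hungry rival, low risk
print(predict_conspecific(0.2, 0.7))   # "hide":   sated rival, high risk
```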
And this is the second strong argument that no conscious AI can emerge within existing technologies. An AI does not communicate with its own kind and has no social relations at all, so the ability to analyze its own program is entirely useless to it.
So the general conclusion is roughly this: if we accept the hypothesis under discussion, then within the existing paradigms for building AI (and computers in general), an AI will never acquire consciousness, no matter how many neurons it might simulate.
I hereby release the foregoing text into the public domain.