I have no mouth, and I must scream. Reflections on AI and ethics

    Disclaimer
    I am skeptical of my own ability to express a truly original thought. Most likely, I am far from the first to ask these questions, and quite possibly some digestible answers to them have already been worked out. So, in typing this text, I am not expecting surprise or admiration from you. I expect that people familiar with the modern philosophy of mind will show up in the comments and give me links to the works of serious thinkers with amusing German surnames.


    Not long ago there was a post on Habr whose comments made me think about several interrelated questions. I want to share the results of these thoughts (or the lack thereof, depending on how you look at it) with the community.

    What is pain?


    I once had a toothache. I lay on the couch and tried not to pay attention to it. I thought that pain is just a signal going to my brain. The same kind of signal as the presence or absence of voltage in the wire going to the PS/2 connector of a system unit. By itself it carries no semantics; it is my mind that chooses how to interpret it. If I stop treating it as pain and instead ignore it or simply "take note" of it, things will get easier for me.

    But it did not get any easier. In this simple way I discovered that pain is a quale and is not reducible to the mere transfer of information.

    How do we know that others are in pain?


    I am not an expert in neurophysiology, but they say there are these mirror neurons in the brain. When we see another person perform certain actions, the mirror neurons reverse-engineer them. We try to figure out what must be happening in that person's head for him to behave exactly this way, and to some extent we even begin to feel what, by our assumptions, he must be feeling. My jaw can clench at the sight of someone eating a lemon. If someone, say, shouts, cries, writhes, rolls on the floor... it is likely that this someone is in pain. Most likely, it will be unpleasant for me to watch such a sight. I will begin to sympathize with this person and, if it is in my power, will even take some action to stop the pain. Damn mirror neurons.

    In fact, we have no guarantee that the other person is really in pain. He may, for example, be faking it, an actor, as in the Milgram experiment. True, this would presumably be easy to detect by putting the faker into a tomograph and looking at which parts of the brain are active at the moment. But that, too, is just behavior, albeit at a lower level. In the end, it all comes down to a very simple (I would even say too simple) criterion: we believe that a person is in pain if he behaves the way we would behave if we were in pain.

    How do we know that another person is a person?


    There is a famous thought experiment called the "philosophical zombie." Its essence is simple: imagine something that, from the point of view of an external observer, behaves absolutely indistinguishably from a human being, yet is completely devoid of subjective experience. If you prick it with a needle, it will say "ouch" (or something less printable), pull its arm away, the corresponding facial muscles will contract, and even a tomograph will not catch it out. But at the same time it feels nothing inside. It simply has no "inside." Such a thing is called a "philosophical zombie," and the point of the experiment is that the existence of this hypothetical creature does not lead to obvious contradictions. That is, it seems to be possible.

    Returning to the previous question: we really have no reliable way to find out whether another person experiences pain as a quale. We can listen to our mirror neurons or, if that is not enough for our sophisticated minds, use Occam's razor and declare the "philosophical zombie" a superfluous entity. It is much more logical to assume that all people are more or less alike than to assume the opposite without any intelligible grounds. However, Occam's principle is still a heuristic, not an immutable law. Entities that seemed superfluous to us yesterday walk into our house today, kicking the door open. If you do not agree, try to imagine how you would explain quantum mechanics to Democritus.

    Do androids have electric reins?


    In the first comment on the post I mentioned above, user NeoCode expressed the following thought:
    First of all, a "strong AI" is not a living being and will not be able to experience pain or loneliness, simply because its nature is different from the start: it does not have millions of years of evolution and natural selection behind it, and therefore no low-level biological mechanisms and programs. It will not even have a self-preservation instinct, unless one is specifically programmed in, of course. But in its pure form it will not; it is possible to create an AI that is conscious and capable of solving complex tasks and of learning, while having no self-preservation instinct at all.
    This is an important point, which for some reason many do not understand, "humanizing" artificial intelligence by default.

    There is, of course, a grain of truth in this. One should not mindlessly transfer human qualities onto a hypothetical AI. In this respect, 95% of science fiction and 99.9% of laypeople are hopelessly naive. But I want to say the following: one should not mindlessly strip AI of human qualities either. Some of them may turn out to be more fundamental than one might have supposed.

    Consider a hypothetical situation: in order for the AI to do what we need rather than what it wants (and it may well find solving a sudoku more "interesting" than dealing with our project, whose deadline is looming), we add a special input signal to it, such that the main goal of the AI, the main component of its objective function, is to minimize this signal. Accordingly, when the deadline approaches, we press the pedal, the wire goes live, and the AI starts thinking hard about how to remove this voltage, and the harder we press, the harder it thinks. And since the pressure of the foot on the pedal is tied to the unfinished project, the AI has no choice but to finish that project. Well, or to hack a military drone flying past so that it blows the pedal operator's brains out. Who knows, it is a strong AI after all.
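
    For concreteness, here is a minimal toy sketch of this scheme in Python. It is purely illustrative; the function names, the scalar "pedal" signal, and the weighting are my own assumptions, not a description of any real AI architecture. The only point it makes is that once the penalty term dominates the objective, removing the signal becomes the agent's main goal.

    # Hypothetical sketch: an agent whose objective includes a penalty term
    # driven by an external "pedal" signal. Minimizing the objective pushes
    # the agent to act so that the operator releases the pedal.

    def objective(task_progress: float, pedal_signal: float, weight: float = 10.0) -> float:
        """Lower is better for the agent.

        task_progress -- how much of "our project" is done, in [0, 1]
        pedal_signal  -- how hard the operator presses the pedal, in [0, 1]
        weight        -- how strongly the pain-like term dominates everything else
        """
        # The agent might prefer to optimize something else (solving sudoku),
        # but the weighted pedal term swamps that preference as soon as the
        # operator presses harder.
        return weight * pedal_signal + (1.0 - task_progress)


    def pedal(task_progress: float, days_to_deadline: float) -> float:
        """The operator presses harder the closer the deadline and the less is done."""
        return min(1.0, (1.0 - task_progress) / max(days_to_deadline, 0.1))


    if __name__ == "__main__":
        for progress in (0.2, 0.6, 0.95):
            signal = pedal(progress, days_to_deadline=1.0)
            print(f"progress={progress:.2f}  pedal={signal:.2f}  objective={objective(progress, signal):.2f}")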

    However, I digress. Tell me, does this hypothetical signal remind you of anything, by any chance? Can we say in this case that the AI is in pain?

    Man is a wolf to man, and a zombie is a zombie to a zombie


    How, in general, can we tell whether an AI experiences qualia? In the case of the philosophical zombie we had Occam's razor and empathy on our side. An AI, however, is "philosophical" but not a "zombie": it makes sense to raise the question about it, yet it is not human-like. Therefore we cannot conclude that it feels something simply by analogy with ourselves.

    Someone (for example, the author of the comment quoted above) will say that we can safely assert the opposite: that there is no reason to believe that the AI is actually in pain, and if there is no reason, then we will not believe it. To this I would answer as follows: imagine that some thinking, feeling, but utterly inhuman being created you. What reasons would it have for believing that you experience qualia? Unless, of course, this being has the transcendental ability to really get inside someone else's head, allegorically speaking, to become a bat. But that already goes beyond our analogy and into talk of the divine.

    Anthropic chauvinism


    In all the previous sections we talked about pain. Pain is one of, shall we say, the most characteristic kinds of human sensation. But who said that everything is limited to humans?

    If a hypothetical alien mind (it does not even matter whether it is artificial, extraterrestrial, or something else) is in principle capable of experiencing qualia, those qualia may turn out to be fundamentally different from the ones a human experiences. A grotesque example: a little AI comes to its scientist father and says that it is experiencing sybkdschrtrb. Is that good or bad? Should we give it candy as a reward, pat it on the head in consolation, or even give it a taste of the electric belt, because there will be none of that here?

    In all the previous sections I asked questions that have no answers and asserted essentially nothing. Now I will venture an assertion: human ethics is not ready to make a non-human mind a full subject. We can talk about the "ethical problems of AI," treating artificial intelligence as a means by which some people do good or harm to other people. But if we try to think about ethical questions from the AI's point of view, we will simply fail. It is not that we could not get an answer; we do not even possess the conceptual apparatus needed to pose the question correctly. Perhaps we do not possess it yet. Or perhaps the gap is fundamentally unbridgeable. And artificial intelligence will have to develop its own ethics, if, of course, it needs one at all.

    And then decide for itself whether it should consider humans subjects, heh.
