The feeling of pain as the software foundation of strong artificial intelligence

So what, after all, is the main obstacle to creating strong artificial intelligence? "The absence of hardware of sufficient power, of course," someone will say. And they will, of course, be right. Indeed, if today we tried to build a computer even remotely resembling the human brain, with its billions of nerve cells, each of which can be compared to a separate computer with its own functions and properties, we would end up with a machine the size of a house. But if we imagine that progress has finally reached the point where building such a machine is only a matter of money and free time, then we can reflect on the algorithm by which it would operate. Which is what I did.

As a result of these reflections, I wandered into a dead end. But let's start from the very beginning, so that everyone can follow.

The main difference between weak AI and strong AI is, of course, that the latter possesses consciousness, i.e. the ability to be aware of itself, at a given moment, in the space surrounding it. After all, no matter how complex and variable a weak AI is, it will never, for example, take the initiative unless that function is described in its program code, simply because it has no need for it, not being an individual with its own ego. So what is consciousness, and how could it be embodied in software? If we discard all the metaphysics and mysticism that have surrounded this term for centuries and try to present it as a program, we get something resembling a chat in which a bot communicates with itself. I mean the internal dialogue through which we ask ourselves questions and answer them ourselves, draw conclusions, and on that basis ask ourselves new questions, and so on in a circle. After all, we think constantly, and stopping this process is the goal of Buddhism, shamanism, and many other spiritual teachings and practices described, for example, by Castaneda.

So, let's say we wrote such a bot with two functions: ask yourself a question and answer it yourself. And let's say we loaded an enormous number of questions and answers into its database and wrote a self-learning function (assuming the hardware lets us use neural connections that link questions to answers and lend the dialogue logic as training progresses). Our AI will still not become "alive," and for two reasons. First, our computer needs the ability to receive information from outside, more specifically from the outside world; but that is largely a matter of mechanics (imitating vision, hearing, and so on). Second, it still lacks any motivation for development, or indeed any instincts at all. It is the second that interests us.
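As a rough illustration of such a self-questioning bot, the loop might look something like this. All the questions, answers, and the follow-up mapping below are invented placeholders standing in for the large database and learned associations described above:

```python
# A toy "internal dialogue" loop: the bot asks itself a question,
# answers it, and lets the answer suggest the next question.
KNOWLEDGE = {
    "what am I?": "a program that talks to itself",
    "what is a program?": "a sequence of instructions",
    "what is an instruction?": "a single step the machine can perform",
}

# In a real system these links would be learned; here they are hard-coded.
FOLLOW_UP = {
    "a program that talks to itself": "what is a program?",
    "a sequence of instructions": "what is an instruction?",
}

def internal_dialogue(first_question, steps=3):
    """Ask oneself a question, answer it, and let the answer
    suggest the next question, round and round in a circle."""
    transcript = []
    question = first_question
    for _ in range(steps):
        answer = KNOWLEDGE.get(question, "I don't know")
        transcript.append((question, answer))
        question = FOLLOW_UP.get(answer)
        if question is None:  # the chain of thought has run dry
            break
    return transcript

for q, a in internal_dialogue("what am I?"):
    print(f"Q: {q}\nA: {a}")
```

However many entries such a database has, the loop never leaves its own transcript: nothing here wants anything, which is exactly the missing piece discussed next.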

And here the fun begins. In addition to the internal dialogue, a person also has a set of feelings: love, fear, hatred, loyalty, anger, and so on, which are what actually make us human. How can you make a machine love or hate? The question is complex, of course; science fiction writers have asked it many times. But if you try to work it out, it is not so complicated: all these feelings share a single basis. Let us analyze the algorithm of human feelings using "love" as an example. For a long time scientists have told us that love is chemistry, i.e. a set of chemical processes occurring in the human body, but that, again, is just mechanics.

But the programmatic basis of love, in my opinion, is, oddly enough, the fear of loss. Indeed, if we break it into components and remove the "fear of loss" function from the "love" program, we will see that nothing remains of it and the program no longer works.
We go further along the chain. It is here that human egoism shows itself, or rather the rule Isaac Asimov wrote about, "do no harm to yourself": the instinct of self-preservation. When we feel that fear of loss, we are not so much afraid of losing the person as of experiencing pain (of harming ourselves). And so with the other feelings. Fear is the foundation that arises automatically from the theoretical ability to experience pain. And it is the capacity to feel pain that is the most basic condition for creating a strong artificial intelligence.
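The chain argued here can be sketched as a toy model. Everything below is merely an illustration of the decomposition love → fear of loss → anticipated pain; the function names and numbers are invented, not a serious theory of emotion:

```python
def pain_of_losing(attachment_strength):
    """The pain the agent predicts it would feel upon the loss.
    Simplest possible model: predicted pain equals attachment."""
    return attachment_strength

def fear_of_loss(attachment_strength, loss_probability):
    """Expected pain: the predicted pain weighted by how likely the loss is."""
    return pain_of_losing(attachment_strength) * loss_probability

def love(attachment_strength, loss_probability):
    """Per the argument in the text, if you remove the fear-of-loss
    component, nothing remains of 'love'; so in this sketch love
    simply IS the fear-of-loss term."""
    return fear_of_loss(attachment_strength, loss_probability)

# Strong attachment with some risk of loss yields a nonzero value;
# with zero risk of loss, this model's "love" vanishes, as the
# argument in the text predicts.
print(love(attachment_strength=0.9, loss_probability=0.1))
print(love(attachment_strength=0.9, loss_probability=0.0))
```

The point of the sketch is structural: every function in the chain ultimately bottoms out in `pain_of_losing`, which is the claim being made.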

Once again: if we manage to somehow imitate the feeling of pain in our machine, it will automatically acquire the instinct of self-preservation and will try in every possible way to avoid that pain, i.e. it will become motivated. Fear will appear at once, including the fear of death as the critical form of self-preservation, and with it the most important thing of all: the desire to live and develop, as the resulting consequence and the opposite of the fear of death. I hope my logic is clear. But that is not all. Rather, it is only the beginning; everything I wrote above was merely a preface.

So we have come to the conclusion that the core function of a strong AI must be precisely the function of pain. Not fear and not the instinct of self-preservation, but the ability to experience pain, since fear and self-preservation are its downstream consequences.
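A minimal sketch of this claim: give an agent nothing but a pain signal, and avoidance behaviour (the seed of self-preservation) follows without being programmed in explicitly. The world model and pain values below are invented placeholders:

```python
# Hypothetical pain each available action would cause the agent.
ACTIONS = {
    "touch_fire": 0.9,
    "stay_still": 0.1,
    "move_to_shade": 0.0,
}

def choose_action(pain_by_action):
    """Self-preservation emerges as 'pick whatever minimises pain'.
    Note that no rule about fire or shade appears anywhere here;
    the only built-in drive is pain avoidance."""
    return min(pain_by_action, key=pain_by_action.get)

print(choose_action(ACTIONS))  # -> move_to_shade
```

The design choice mirrors the text: the single primitive is the pain signal, and what looks like an instinct is just the minimisation over it.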

Even if I am mistaken somewhere and the feeling of pain is not, after all, the basis of human consciousness, I think that sooner or later anyone creating a strong AI will run into this question, so the topic seems relevant either way. Moreover, when I started looking for information on a software embodiment of the self-preservation instinct, I found practically nothing of substance.

And it was here that I hit a dead end. The question sounds almost funny: how do you make a computer hurt? If you kick the system unit, it is unlikely to start yelling at you. There is no getting around the mechanics of the brain, and we will have to dig into neurobiology. We will also have to touch on the pain threshold, which, as we know, sits at a different level for everyone, and which especially gifted people can switch off entirely at will and go walk on coals, for example, or sleep on nails. From a software point of view this is a very interesting topic. But in order to separate one question from another and not get tangled up at the end, I would like to explore it, along with many others, in the following articles, and in time get to the hardware design of a possible future strong artificial intelligence.
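As a tiny teaser for that future discussion, a configurable pain threshold is at least trivial to express in software. The mapping below from a raw damage signal to felt pain is pure speculation, meant only to show that a per-agent, even switchable, threshold fits in a few lines:

```python
def felt_pain(raw_signal, threshold=0.5, enabled=True):
    """Pain actually experienced for a given raw damage signal.
    Below the threshold nothing is felt at all, and enabled=False
    models the firewalker who switches pain off entirely."""
    if not enabled or raw_signal < threshold:
        return 0.0
    return raw_signal - threshold  # only the excess over the threshold hurts

print(felt_pain(0.3))                          # below threshold: nothing felt
print(felt_pain(0.75, threshold=0.25))         # excess over the threshold
print(felt_pain(0.9, enabled=False))           # pain switched off
```

The hard part, of course, is not this arithmetic but what the raw signal should be and why the machine should care about it, which is exactly the dead end described above.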

P.S. Perhaps in this article I have reinvented the wheel, but after reading a pile of books on strong AI I understood nothing until I started thinking for myself. Everyone just pads the text and talks around the subject, and the most important question, "What is the fundamental difference between a person and a machine?", is never asked. Maybe such reasoning has been described somewhere before, but it is impossible to read everything in the world, all the more so since, as I said, my searches online were not particularly successful. This article is just a preface, laying out the logic of the reasoning to come.

Thanks for your attention!