Will robots ever truly know themselves? Scientists are moving in that direction.
At the core of what it means to be human is the ability to be aware of oneself. Without it, we could not navigate, interact, empathize, or survive in an ever-changing, complex world alongside other people. We need self-awareness when we act, and when we anticipate the consequences of potential actions, both our own and other people's.
Given our desire to bring robots into our social world, it is no surprise that building self-awareness into artificial intelligence is one of the main goals of researchers in this field. If these machines are to care for us and keep us company, they will inevitably need the ability to put themselves in our place. And although scientists are still far from creating robots with anything like human self-awareness, they are gradually getting closer.
A new study published in Science Robotics describes the creation of a robotic arm that understands its own physical form, giving it the simplest version of self-awareness. Modest as that sounds, it is an important milestone in the development of robotics.
There is no settled scientific account of what human self-awareness consists of. Neuroscience studies show that cortical networks in the brain's motor areas and in its parietal region are activated in many situations entirely unrelated to movement. For example, when a person hears words such as "grasp" or "hit", motor areas of the brain activate, just as they do when watching another person perform those actions.
From this comes the hypothesis that we perceive other people's actions as if we were performing them ourselves, a phenomenon scientists call "embodied simulation". In other words, we use our own capacity to act with our bodies to make sense of the actions or goals of others. This simulation process is driven by a mental model of the body, or the self. It is this model that researchers are trying to reproduce in machines.
The research team used a deep neural network to build the robotic arm's model of itself from data generated by its own random movements. The AI was given no information about the arm's geometry or physical properties; it learned gradually, by moving and bumping into objects, much as an infant learns about itself by watching its own hands.
The robot was then able to use this self-model, which encoded its shape, size and range of motion, to make predictions about actions such as picking something up with a tool. When the scientists physically altered the arm, the mismatch between the robot's predictions and reality restarted the learning loop, allowing the robot to adapt its self-model to the new body shape.
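The loop described above can be sketched in miniature. The toy code below is purely illustrative and not the paper's actual architecture: a simulated two-link arm "babbles" random joint angles, a least-squares model (standing in for the study's deep network) learns to predict the fingertip position from them, and a surge in prediction error after the arm is "damaged" (one link shortened) triggers re-learning on the new body. All function names and the arm model are hypothetical.

```python
import numpy as np

def fingertip(angles, lengths):
    """Ground truth: forward kinematics of a 2-link planar arm."""
    a1, a2 = angles[:, 0], angles[:, 1]
    l1, l2 = lengths
    x = l1 * np.cos(a1) + l2 * np.cos(a1 + a2)
    y = l1 * np.sin(a1) + l2 * np.sin(a1 + a2)
    return np.stack([x, y], axis=1)

def features(angles):
    """Nonlinear features so a linear fit can capture the kinematics."""
    a1, a2 = angles[:, 0], angles[:, 1]
    return np.stack([np.cos(a1), np.sin(a1),
                     np.cos(a1 + a2), np.sin(a1 + a2)], axis=1)

def learn_self_model(lengths, n=500, seed=0):
    """'Motor babbling': random moves -> observed positions -> fitted model."""
    rng = np.random.default_rng(seed)
    angles = rng.uniform(-np.pi, np.pi, size=(n, 2))
    observed = fingertip(angles, lengths)          # what the "camera" sees
    W, *_ = np.linalg.lstsq(features(angles), observed, rcond=None)
    return W

def prediction_error(W, lengths, n=200, seed=1):
    """Mean gap between the self-model's predictions and reality."""
    rng = np.random.default_rng(seed)
    angles = rng.uniform(-np.pi, np.pi, size=(n, 2))
    pred = features(angles) @ W
    return np.mean(np.linalg.norm(pred - fingertip(angles, lengths), axis=1))

# Learn a self-model on the intact arm; predictions match reality.
W = learn_self_model(lengths=(1.0, 0.8))
err_before = prediction_error(W, lengths=(1.0, 0.8))

# "Damage" the arm (second link shortened): the old self-model mispredicts...
err_damaged = prediction_error(W, lengths=(1.0, 0.4))

# ...which triggers re-learning, restoring accuracy on the new body.
W_new = learn_self_model(lengths=(1.0, 0.4))
err_after = prediction_error(W_new, lengths=(1.0, 0.4))

print(err_before < 1e-6, err_damaged > 0.1, err_after < 1e-6)
```

The key design point mirrors the study: the model never sees the link lengths directly. It infers them from the statistics of its own movements, and a persistent prediction error is the signal that the body has changed and the self-model must be rebuilt.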
And although the study used a single arm, similar self-learning models, inspired by research in developmental psychology, are also being developed for humanoid robots.
The complete self
Still, this robotic sense of self cannot be put on a par with the human one. Our "I" is like an onion, with many mysterious layers. These include the ability to identify with the body, a sense of its physical boundaries, and perceiving the world from its visuo-spatial perspective. But they also include processes that go beyond this: integrating information from the senses, maintaining continuity over time through memory, producing and being aware of one's own actions, and the privacy of one's thoughts.
And although the path to a robotic self-awareness that encompasses all these levels has only just begun, building blocks such as the body schema constructed in the new study are already being created. Machines can also be made to imitate others, predict others' intentions, or change their behavior under the influence of circumstances. Such developments, along with the growth of episodic memory, are important steps toward creating socially oriented robots.
Interestingly, this study may also help us learn more about human identity. We know that the robot can adapt its physical self-model when we change its body configuration. This can be seen another way, as a situation analogous to tool use in animals, where external objects become incorporated into the body.
Brain imaging shows that neurons in monkeys that fire during grasping also fire when the animals pick up objects with pliers, as if the pliers had become their fingers. The tool becomes part of the body, and the sense of self changes. It is similar to the way we identify with an on-screen avatar while playing video games.
An intriguing idea proposed by the Japanese neuroscientist Atsushi Iriki is that the ability to extend one's body with external objects and the ability to perceive other bodies as tools are two sides of the same coin. Intriguingly, this blurring of boundaries requires the emergence of a virtual concept of "self" that binds together subject/person and objects/tools. So the way we adjust our selves by adding or removing tools may help us understand how the "I" works.
Robots that learn to use tools as extensions of their bodies offer a fruitful testbed for confirming emerging data and theories from neuroscience and psychology. At the same time, this research will lead to smarter, more capable machines that work for us and with us across many domains.
This is perhaps the most important aspect of the new study: it combines psychology, neuroscience and robotics to address one of the most fundamental questions in science: who am I?