Future robots will learn through curiosity and self-determined goals.

Imagine that a friend asks you to help tidy up his room, full of assorted objects and furniture. Imagine also that he will not help: he will simply show you photos of how he would like the room to look in the end. The task may seem tedious, but any of us could cope with it. As children, we discovered new objects, learned to recognize them, and developed skills for handling them. Driven by curiosity, we gradually accumulated visual, attentional, and sensorimotor knowledge that allows us, as adults, to manipulate our physical environment as we choose.
Today's robots are not suited to such tasks. Imagine a humanoid robot helping to tidy a room. Suppose you showed the robot the room in its normal, clean state, and then, once it was a mess, ordered the robot to restore it to the original state. Under these conditions it would be extremely tedious to program the robot: how to direct its attention, how to manipulate each object and put it in the right position in the right place, and how to build a sequence of actions.
And although new, sophisticated robots and advanced algorithms appear every year, getting them to perform complex duties and find novel solutions to varied tasks still requires tedious programming of the low-level motor components. At best, robots can learn a small set of inflexible actions. Comparing today's AI achievements with biological intelligence, we see that AI is still limited in autonomy and flexibility.
The robots of the future will have to learn autonomously to make sense of their surroundings: to set their own goals and efficiently acquire the skills to achieve them, by acquiring, modifying, generalizing, and recombining previously learned knowledge and skills. This would allow them, with a little extra training, to move the environment from its current state to a wide range of end states specified by the user as the target. The question is: how can we build robots of the future that can cope with such a task?
Project GOAL-Robots
In search of an answer to this question, a European project important for applied AI was launched, coordinated by the Laboratory of Computational Embodied Neuroscience (LOCEN), an Italian research group based at the Institute of Cognitive Sciences and Technologies of the Italian National Research Council (ISTC-CNR).
The project "GOAL-Robots: Goal-based Open-ended Autonomous Learning Robots" ranked first among the 11 projects funded out of roughly 800 submissions to the EU FET-OPEN call (Future and Emerging Technologies), part of the Horizon 2020 EU research programme. LOCEN and its research coordinator Gianluca Baldassarre will coordinate a consortium comprising three other leading European research groups:
1. The Laboratory of Psychology of Perception (LPP) in France, led by Kevin O'Regan and based at Paris Descartes University, will conduct experiments on how children acquire skills and goals.
2. The Frankfurt Institute for Advanced Studies (FIAS) in Germany, led by Jochen Triesch, will concentrate on the development of biologically inspired visual systems and motor skills.
3. A group of robotics specialists led by Jan Peters at the Technical University of Darmstadt (TUDa) in Germany will build the robot demonstrators for the project.
GOAL-Robots follows the earlier European project IM-CLeVeR (Intrinsically Motivated Cumulative Learning Versatile Robots), in which LOCEN and its partners studied the role of intrinsic motivation (IM) in driving autonomous learning in both living organisms and robots. The scientific study of IM began with observations of how children, out of curiosity, explore and interact with the outside world, gaining knowledge about how things work and acquiring a large repertoire of sensorimotor skills for interacting with them.

If curiosity and IM are the basis of human versatility and adaptability, then an AI architecture and algorithms that emulate IM could provide a "motivational engine" leading robots through an autonomous, open-ended learning process that does not require constant programming and training by humans.
GOAL-Robots also adds an important ingredient to open-ended learning robots: goals. A goal is an agent's internal representation of a world state, a bodily state, or an event (or set of events) with two important properties. First, the agent can evoke this representation even in the absence of a perception of the corresponding state or event. Second, evoking it has a motivational effect: it can influence choice, focus the agent's attention and behavior, and steer its learning toward achieving the goal. The ability to evoke goals at will, however abstract, and to use them to select actions and to learn, is a key element of the behavioral flexibility and learning capacity of biological agents. The project members believe that reproducing these mechanisms in machines is the route to equally flexible robots.
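The two properties of a goal described above can be illustrated with a toy sketch (all names here are hypothetical, not from the project's actual software): a goal is a representation the agent can evoke independently of the current percept, and satisfying it produces a motivational signal.

```python
from dataclasses import dataclass
from typing import Callable, Dict

State = Dict[str, float]  # toy world state, e.g. object positions

@dataclass
class Goal:
    """An internal representation the agent can evoke at will."""
    name: str
    achieved: Callable[[State], bool]  # predicate: does the state satisfy the goal?

def motivational_signal(goal: Goal, state: State) -> float:
    """A goal has motivational force: a reward of 1.0 when reached, else 0.0.
    Attention, action selection, and learning can be driven by this signal."""
    return 1.0 if goal.achieved(state) else 0.0

# Property (1): the goal "cup is on the shelf" can be evoked and pursued
# even while the cup is currently on the floor.
cup_on_shelf = Goal("cup_on_shelf", lambda s: s["cup_height"] > 0.9)
print(motivational_signal(cup_on_shelf, {"cup_height": 0.2}))  # 0.0
print(motivational_signal(cup_on_shelf, {"cup_height": 1.1}))  # 1.0
```

The predicate-plus-reward framing is one common way to formalize goals in goal-conditioned learning; the project's actual representations are richer.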

Tasks and ideas
The idea of the project lies in combining IM mechanisms with the motivating power of goals. In particular, IM will drive robots to discover on their own new, interesting events caused by their own actions. Driven by curiosity, robots will explore their surroundings, set themselves increasingly complex goals, and use those goals to acquire varied skills in an open-ended fashion.
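One standard way to give a robot a curiosity signal of this kind, sketched below under simplifying assumptions (a linear forward model; the class name is invented for illustration), is to reward the prediction error of a model that forecasts the sensory outcome of an action: surprising events yield high intrinsic reward, and the reward fades as the event becomes predictable, pushing the robot to move on.

```python
import numpy as np

class CuriosityModule:
    """Toy intrinsic-motivation signal: reward = error of a simple
    forward model predicting the sensory outcome of an action."""

    def __init__(self, dim: int, lr: float = 0.5):
        self.W = np.zeros((dim, dim))  # linear forward model
        self.lr = lr

    def intrinsic_reward(self, state: np.ndarray, next_state: np.ndarray) -> float:
        pred = self.W @ state
        error = float(np.linalg.norm(next_state - pred))
        # Online update: the model learns to predict this transition,
        # so repeating the same event becomes "boring".
        self.W += self.lr * np.outer(next_state - pred, state)
        return error

cm = CuriosityModule(dim=3)
s, s_next = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
rewards = [cm.intrinsic_reward(s, s_next) for _ in range(5)]
print(rewards[0] > rewards[-1])  # True: the novelty wears off
```

This prediction-error scheme is only one member of the IM family; other signals (competence progress, novelty counts) are used in the literature as well.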

An open-ended process of skill acquisition requires sophisticated mechanisms and the integration of various architectural components. In particular, robots will need to acquire new skills without disrupting previously acquired ones, while at the same time reusing previous skills to speed up the acquisition of new ones (knowledge transfer). In addition, they will need to learn to combine previously acquired skills into more complex ones. These are among the hardest open problems in AI today. To tackle them, the project will use advanced algorithms both for processing sensory information (for example, deep learning networks) and for organizing and using motor knowledge (for example, dynamic movement primitives and echo-state neural networks).
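Echo-state networks, one of the motor-learning tools mentioned above, are appealingly simple: a fixed random recurrent "reservoir" provides a rich memory of the input history, and only a linear readout is trained. A minimal sketch (toy sizes and the sine task are illustrative choices, not the project's setup):

```python
import numpy as np

rng = np.random.default_rng(42)

# Fixed random reservoir; only W_out is trained (by least squares).
n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W_res = rng.uniform(-0.5, 0.5, (n_res, n_res))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))  # spectral radius < 1

def run_reservoir(inputs):
    """Drive the reservoir with a 1-D input sequence, collect states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W_res @ x)
        states.append(x.copy())
    return np.array(states)

# Task: predict the next value of a sine wave from the current one.
t = np.linspace(0, 8 * np.pi, 400)
u, y = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)
W_out, *_ = np.linalg.lstsq(X[50:], y[50:], rcond=None)  # skip warm-up
pred = X[50:] @ W_out
mse = float(np.mean((pred - y[50:]) ** 2))
print(f"readout MSE: {mse:.2e}")  # small: the linear readout fits well
```

Keeping the spectral radius below 1 gives the reservoir fading memory (the "echo-state property"), which is what makes training only the readout feasible.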
All the mechanisms involved in the different parts of the learning process will need to be integrated into a single control architecture: high-level goal-formation processes will be combined with motivational layers in which, based on IM, the robot forms and selects goals. Goals will be progressively linked to lower-level controllers so that the robot can recall acquired skills to achieve a required goal and build more complex skills by combining previous ones. Knowledge transfer between skills will be integrated while avoiding mutual interference. These mechanisms are useful not only during the autonomous learning phase but also later, when the user exploits the knowledge the robot has acquired.
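The link from goals to lower-level controllers, and the composition of old skills into new ones, can be sketched schematically (function and skill names here are invented for illustration; real controllers would output motor commands, not strings):

```python
from typing import Callable, Dict, List

# Toy architecture: each goal is linked to a lower-level controller
# (a "skill"); complex skills are built by composing acquired ones.
Skill = Callable[[], List[str]]  # returns the primitive actions executed

skills: Dict[str, Skill] = {}

def learn_skill(goal: str, actions: List[str]) -> None:
    """Link a goal to a freshly learned low-level controller."""
    skills[goal] = lambda: list(actions)

def compose(goal: str, subgoals: List[str]) -> None:
    """Reuse earlier skills to build a more complex one (transfer)."""
    skills[goal] = lambda: [a for g in subgoals for a in skills[g]()]

learn_skill("grasp_cup", ["reach", "close_gripper"])
learn_skill("place_on_shelf", ["lift", "move", "open_gripper"])
compose("tidy_cup", ["grasp_cup", "place_on_shelf"])
print(skills["tidy_cup"]())
# ['reach', 'close_gripper', 'lift', 'move', 'open_gripper']
```

Because composed skills look up their sub-skills at call time, improving "grasp_cup" later automatically improves "tidy_cup" too, which is one simple way transfer without interference can work.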

Each year the project will present a "robot demonstrator": complex robotic platforms (such as iCub or Kuka) controlled by the architectures developed in the project, solving tasks of increasing complexity. These demonstrators will not only show the project's progress but will also serve as benchmarks for measuring progress in autonomous robots.
The final demonstrator will face the challenge formulated at the beginning of the article: can a robot demonstrate versatility and adaptability, similar to a human's, while interacting with the real world? In particular, the robots will be given the task of (a) learning the proper arrangement of several objects in containers and on shelves, and (b) restoring that arrangement after the user has moved and swapped the objects.
If the GOAL-Robots project fulfills its promises, you will no longer need to worry about lazy friends: when they ask you for help, you can simply ask your artificial friends to help them!