Do robots need their own Wikipedia?

    Welcome to the iCover blog! Despite the skepticism that a certain segment of intellectuals holds toward resources like Wikipedia (skepticism that in some cases is well founded), the value of an information source accessible to everyone should, on the whole, be recognized as positive. Of course, it would be reckless to equate a visit to Wikipedia or YouTube with scientific revelation; it is rather about access to information on entirely everyday things that helps us answer the practical questions of the moment. A YouTube video on how to cook an omelet by an original recipe, for example, lets us quickly and effectively improve our culinary literacy without the tedium of learning recipes from a cookbook. But what are we getting at?

    The interesting question is this: for us everything is simple, clear, and familiar, but how can the knowledge accumulated by humanity be adapted most effectively to the process of training robots? Clearly, the information a search engine would return to a robot in response to a "search query" like "what is the algorithm for carrying a cup of tea from the kitchen to the living room?" would be of practically zero value. To assimilate information, a machine needs a detailed answer: step-by-step instructions with specific actions and an understanding of the language. In our example with the cup of tea, the robot would need the coordinates of the cup, the method of grasping it, the place it must be carried to, and so on. The example is deliberately simplified, of course; in real life, with a randomly changing environment, the information is surrounded by many additional, multi-vector parameters and initial conditions.
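To make the contrast with a free-text answer concrete, here is a minimal sketch of the kind of step-by-step plan a machine actually needs. All field and action names here are invented for illustration; no real robot API is implied.

```python
# A hypothetical, machine-readable version of "carry a cup of tea from the
# kitchen to the living room". Every field name is an assumption made up
# for this sketch, not part of any real robot control interface.

plan = [
    {"action": "navigate", "target": "kitchen"},
    {"action": "locate",   "object": "cup_of_tea"},           # obtain coordinates
    {"action": "grasp",    "object": "cup_of_tea",
     "grip": "two_finger", "max_tilt_deg": 10},               # keep the tea level
    {"action": "navigate", "target": "living_room"},
    {"action": "place",    "object": "cup_of_tea",
     "surface": "coffee_table"},
]

def execute(plan):
    """Walk the plan step by step, as a simple controller loop would."""
    for step in plan:
        print(f"executing: {step['action']} -> "
              f"{step.get('object') or step.get('target')}")

execute(plan)
```

The point is not the specific fields but the shape of the answer: discrete, ordered actions with explicit parameters, rather than prose a human would interpret on the fly.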

    The search for common algorithms and information sources for the productive training of robots has become a powerful incentive to develop new ways of conveying specialized knowledge and accumulating the required experience. Today we touch on two promising directions in which researchers have already achieved notable positive results.

    YouTube Robot Training

    So, as we have established, things that seem simple and natural to us (basic gestures, preparing vegetables, running a vacuum cleaner, cooking familiar recipes, and so on) remain an intractable problem for a robot that has not undergone special training. The fact is that, at the current stage, robots, unlike humans, still cannot learn empirically, independently explore the world, or associate surrounding objects with particular qualities. Thus, at this early stage in the development of robotics, a robot still needs to be taught each elementary movement individually: how to open the refrigerator, how to take out a container, how to open it, how to extract its contents.

    The absence of such valuable human qualities as intuition and associative thinking, and the protracted robot-training process that results, has forced specialists to seek and develop alternative methods. Experts from the Institute for Advanced Computer Studies at the University of Maryland (USA) proposed their own answer to the question: using YouTube videos to accelerate and improve the quality of training.

    The effectiveness of the learning process increases here thanks to the simultaneous use of two channels of information: recognition by artificial intelligence of the actions a person performs in the training video, and recognition of the accompanying speech by means of language parsing. At any given moment, the learning process can match specific words and phrases to the corresponding meanings and the actions shown on screen.
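The core of the "two-channel" idea can be sketched as aligning two time-stamped streams: actions recognized in the video and words recognized in the audio. The data, action labels, and matching window below are all invented for illustration; the article does not describe the actual algorithm.

```python
# Illustrative alignment of two recognition channels by timestamp.
# All timestamps, labels, and the 0.5 s window are made-up assumptions.

video_actions = [  # (time in seconds, action recognized in the video frames)
    (2.0, "grasp_knife"),
    (5.5, "cut_cucumber"),
    (9.0, "place_in_bowl"),
]
transcript = [     # (time in seconds, word from speech recognition)
    (1.8, "take"), (2.1, "knife"),
    (5.4, "slice"), (5.7, "cucumber"),
    (8.9, "bowl"),
]

def align(actions, words, window=0.5):
    """Associate each recognized action with words spoken near it in time."""
    pairs = {}
    for t_a, action in actions:
        pairs[action] = [w for t_w, w in words if abs(t_w - t_a) <= window]
    return pairs

aligned = align(video_actions, transcript)
# e.g. aligned["cut_cucumber"] == ["slice", "cucumber"]
```

Pairing the channels this way is what lets the system learn that "slice" and "cucumber" name the on-screen cutting action, rather than learning either channel in isolation.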

    According to the participants in the experiment, the "two-channel" training methodology has already demonstrated task-execution accuracy of 77% with a memorization rate of 76%. At the same time, the module recognizes objects with 93% accuracy and should in the future be able to identify more complex verbal commands with a high degree of accuracy.

    Cloud learning

    Roboticists are familiar with the problems their mechanical wards face when practicing algorithms for grasping objects of various shapes, weights, and sizes. Robots also struggle when they must pick up unfamiliar objects or use them for their intended purpose. Here, cloud technologies prove indispensable. A team of specialists from Brown University (USA), led by Stefanie Tellex, is running an experiment to teach the collaborative robot Baxter to grasp objects and to transfer its experience to fellow robots of the same model.

    A robot encountering an object for the first time scans it with infrared sensors, which allows it to identify the object's shape. The next step is to choose the approach that will be optimal for lifting an object of that shape. This algorithm works in most cases and proves 75% more successful than grasp attempts made under the standard protocol. But that is only the first step. Next, the positive "experience" obtained is uploaded to the cloud, which is essentially a database of already-studied objects shared by all connected robots, a kind of analogue of the aforementioned Wikipedia.
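The scan-grasp-upload loop can be sketched as a shared key-value store of grasps that worked. A plain dictionary stands in for the cloud database, and every name and field here is an illustrative assumption, not the Brown team's actual system.

```python
# A minimal sketch of the cloud "grasp library" idea: the first robot to
# solve an object uploads its grasp; every later robot reuses it.
# The dict stands in for a real cloud database; all names are assumptions.

cloud_db = {}  # shape signature -> best known grasp parameters

def plan_grasp_from_scan(scan):
    """Stand-in for a robot's own slow, trial-and-error grasp planning."""
    return {"approach": "top_down", "width_mm": scan["width_mm"]}

def try_grasp(shape_signature, local_scan):
    """Reuse a grasp another robot uploaded, or learn one and share it."""
    if shape_signature in cloud_db:
        return cloud_db[shape_signature]          # reuse collective experience
    grasp = plan_grasp_from_scan(local_scan)      # expensive local exploration
    cloud_db[shape_signature] = grasp             # upload for the other robots
    return grasp

g1 = try_grasp("mug_v1", {"width_mm": 82})   # first robot: learns and uploads
g2 = try_grasp("mug_v1", {"width_mm": 82})   # second robot: instant reuse
```

The design point is that the expensive step (local exploration) runs once per object shape across the whole fleet, while every subsequent robot gets the answer at lookup cost.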

    Today, around 300 Baxter robots operate in laboratories around the world. Experts estimate that if all of them contributed to the shared cloud database and the robotic community worked at full load, the library could be supplemented with information about a million studied objects every 11 days. Because the base platform can be upgraded, this approach will in the future be a powerful stimulus for the development of the entire community. For example, Baxter relatively recently received a "soft grip" that lets it lift many objects without compromising their integrity.
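A quick back-of-the-envelope check shows what the estimate implies per robot. The 300-robot, million-object, 11-day figures come from the article; the per-robot rate below is simply derived from them.

```python
# Sanity check of the article's estimate: 300 robots filling the cloud
# library with 1,000,000 objects in 11 days implies each robot studies
# roughly 300 objects per day.

robots = 300
objects = 1_000_000
days = 11

per_robot_per_day = objects / (robots * days)
print(round(per_robot_per_day))  # ≈ 303 objects per robot per day
```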

    The ability to pick up a wide variety of objects without the risk of dropping or damaging them opens up new areas of application for such robots, not only on assembly lines but also in the infrastructure of warehouse complexes of various kinds. And this is only the beginning: in the future, the opportunities for collective self-training revealed by the "Robopedia" cloud environment (the author's term) could very likely be used in almost any field of robotics, from medicine to maintenance work and firefighting.

    Positive examples that help unlock the potential of the cloud-learning concept already give grounds for optimism about this approach. Among them are simple methods of teaching recognition from photo libraries, which help in identifying objects, and entire sets of algorithms for transferring individual higher-order skills. Specialists from Brown, Stanford, and Cornell universities are actively working to create an intelligent cloud-based learning environment. At the current stage of research, the robotic system can save and transmit information about symbols, syntax elements, shapes, tactile properties, and motor skills to the shared information cloud.

    The cloud "Robopedia" approach to learning is relatively recent. Until recently, the vast majority of researchers treated the learning process as an isolated one. Revising that concept will let specialists concentrate on improving robot algorithms while having free access to a complete and current library of the knowledge accumulated in the field.

    Dear readers, we are always happy to see you on the pages of our blog. We will keep sharing the latest news, reviews, and other publications with you, and will do our best to make the time you spend with us worthwhile. And, of course, do not forget to subscribe to our columns.