Ken Goldberg's performance at RCC 2013

    Today we continue publishing summaries of the talks given at Russian Code Cup 2013. This time we present a talk by Ken Goldberg.

    Ken Goldberg is the inventor of the world's first web-based robot and a professor at the University of California, Berkeley.

    I would like to devote my talk to modern robotics and its immediate future. You have probably heard about the Google car that drives without a driver. I am often asked why Google is doing robotics at all. I believe it stems from the search giant's deep interest in cloud technology. Personally, I cannot imagine the future of robotics without the use of clouds. You may be wondering: what kind of robots will these be that cannot work without a cloud? Let me tell you about cloud robotics and about my latest research in this area.



    I'll start with a brief history. The Internet as we know it was created about 20 years ago. In 1993 I first learned about the World Wide Web, and my students and I wanted to build something interesting with this new technology. Our project was a garden robot connected to the network: the Telegarden. We built a flowerbed 2 m in diameter and 0.5 m high, planted it, installed a robotic arm in the center, and wrote a simple browser interface. Of course, in 1993 there were few places from which you could access the Telegarden.





    The robot let you observe the plants through a video camera and water them. Anyone on the net could water our flowerbed, and those who performed N waterings were given seeds that could be planted with the robot's help. Of course, over time the number of remote users grew so large that they began to interfere with each other, and the flowerbed fell into disrepair. So our experiment, among other things, vividly illustrated the proverb that too many cooks spoil the broth.

    Since then, both the Internet and robotics have come a long way. People are getting used to robotic vacuum cleaners, military robots are becoming commonplace, and doctors already operate about 2,000 medical robots, mainly in surgery. In short, robots have entered our lives.



    The orange boxes in this image are robots that sort orders in the huge warehouses of companies like Amazon. Robots also deliver pallets of goods to the human operators who actually assemble the orders.

    And this example of a robot is familiar to far more people: the Kinect module for the Xbox game console.



    The term "cloud robotics" was coined by James Kuffner of Google. He was one of the first to argue that cloud computing would greatly improve how robots perform their tasks. For a robot to wash dishes, make a bed, or tidy an apartment, it must solve many complex analytical problems, because recognizing the environment and the huge number of objects in it is hard. Computing power that can handle such tasks in reasonable time is too expensive to build into modest robots, not to mention the enormous database of object descriptions the robot would have to store. It is much cheaper and easier to use remote resources for recognition and computation.

    Incidentally, ROS, an operating system for robots somewhat similar to Linux, has already appeared. It is open source and is developing quite quickly.

    Last year I participated in an African project to build a robotics network on the continent. It was decided to create a number of very cheap robots intended for use in schools and universities, and a competition was announced to build a robot costing $10. We understood that the goal was most likely unattainable, but it served as a guideline for the participants. The winner was an amateur enthusiast from Thailand: he took a game-console controller, added several sensors, and used two Chupa Chups lollipops as a counterweight. The result was a robot costing $8.90, candy included. He called it Lollybot.



    Even the most sophisticated robot will face tasks it cannot solve on its own. In such cases it can go online or call on a human for help. In general, cloud robotics offers the following advantages:
    • Access to massive amounts of data.
    • Access to powerful computing systems.
    • Work within an open-source ecosystem.
    • The ability to learn by exchanging data with other robots.
    • The ability to call on a remote human specialist for help.

    Remember the dialogue in the movie "The Matrix":
    — Can you fly that thing?
    — Not yet. Operator! I need a pilot program for a B-212 helicopter.

    A lot of research is being done in this direction; you can find out about the most interesting of it here.

    My students and I are now working on two topics. Imagine a robot that needs to clear a table. Looking at the table through its camera, the robot may have difficulty recognizing objects, determining their position in space, and positioning its manipulators. To address these problems there is a set of techniques called belief space: the space of hypotheses about the world. We know what kinds of objects we are likely to encounter in a given environment; based on a number of recognized images and a large amount of processed data, we can predict the presence of other objects. It is impractical to run all of this on the robot itself, so we are using the cloud to solve the problem of grasping an object of complex shape with a manipulator.



    The object model must allow certain geometric tolerances, because the robot will constantly encounter objects that differ slightly from the standards in its database. There are many works devoted to the problem of grasping an object whose exact shape is unknown:


    Reuse-PRM based on known obstacles. Lien and Lu, 2005


    Adaptation of Motion Primitives. Hauser et al., 2006


    Policy transfer for footstep plans. Stolle et al., 2007


    Learning where to sample. Zucker et al., 2008


    Skill trees from demonstration. Konidaris et al., 2010


    Path retrieval from static library. Jetchev and Toussaint, 2010

    Suppose we have a parallel-jaw gripper and an object whose shape, according to the robot's sensors, looks like this:



    Obviously, the actual geometry of the object may differ from what the robot currently "sees", and the robot must find a safe way to grasp it. We can perform a probabilistic analysis over the possible shapes using a Gaussian distribution, approximately compute the location of the center of mass, and on the basis of those calculations develop a grasping algorithm. Assume the gripper jaws are parallel to each other, and the sensor has detected that one jaw has touched the object. We then geometrically test different grasp candidates and estimate the probability of a secure, reliable grasp. All of these computations can be carried out in the cloud by parallelizing the geometric analysis for each shape variant.
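    The parallelized probabilistic analysis described above can be sketched roughly as follows. This is not Goldberg's actual method, just a minimal Monte-Carlo illustration with invented parameters: each grasp candidate is reduced to the contact-normal angles at the two jaws, shape uncertainty becomes Gaussian noise on those angles, and candidates are scored independently, which is exactly why the work parallelizes so well in the cloud.

```python
import math
import random
from concurrent.futures import ProcessPoolExecutor

# A parallel-jaw grasp is modeled by the surface-normal angles (radians)
# at the two contact points, measured from the jaw closing axis.
# Antipodal rule of thumb: the grasp is secure if both normals lie
# inside the friction cone around the closing axis.
FRICTION_CONE = math.radians(15)

def grasp_is_secure(normal_a, normal_b):
    return abs(normal_a) <= FRICTION_CONE and abs(normal_b) <= FRICTION_CONE

def success_probability(nominal_a, nominal_b, sigma, trials, seed):
    """Monte-Carlo estimate of grasp success under Gaussian shape noise."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a = rng.gauss(nominal_a, sigma)
        b = rng.gauss(nominal_b, sigma)
        if grasp_is_secure(a, b):
            hits += 1
    return hits / trials

def rank_grasps(candidates, sigma=math.radians(8), trials=20000):
    # Each candidate's trials are independent, so in a cloud setting they
    # could go to separate workers; here we use local processes instead.
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(success_probability, a, b, sigma, trials, i)
                   for i, (a, b) in enumerate(candidates)]
        scores = [f.result() for f in futures]
    return sorted(zip(scores, candidates), reverse=True)

if __name__ == "__main__":
    # Three hypothetical grasp candidates (contact-normal angles).
    candidates = [(math.radians(2), math.radians(3)),
                  (math.radians(10), math.radians(12)),
                  (math.radians(14), math.radians(14))]
    for score, grasp in rank_grasps(candidates):
        print(f"grasp {grasp} -> estimated success {score:.2f}")
```

    The key property is that a grasp that looks fine nominally (14° is still inside the 15° cone) can score poorly once sensor noise is taken into account, which mirrors the counterintuitive ranking described below.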

    Take a look at this selection of different objects:



    Looking at these projections, you can more or less guess how best to grasp each one. But intuition often fails us: our own choice of grasp point is not always optimal. We tested our technique in real conditions on a simpler object:



    Although intuition and experience suggest that all three grasp options (in the upper right corner of the image) should work well, the mathematical analysis says otherwise: the first option is the best. Moreover, its probability of a successful grasp is almost four times higher than that of the third option, even though the two differ from each other only slightly. Our experiments confirmed the calculations.

    The second task we are working on is the problem of object recognition.



    To solve this problem, the robot can photograph an unrecognized object and run the picture through Google Goggles, Google's free image search system. Ideally, of course, this should be a specialized database that can also provide information about an item's strength, weight, and size. It would be even better if the system also suggested the best grasp option, or several good ones, but that requires a great deal of tagging and labeling work. To train other robots, it also makes sense to feed back to the database how well the suggested algorithms worked. So far the system we developed is far from perfect: it recognizes objects with about 80% probability, and only 87% of the recognized objects can be reliably grasped by the manipulator.
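    The feedback loop just described can be sketched as a toy in-memory service. Everything here, the class name, the fields, and the grasp labels, is invented for illustration; a real cloud database would of course sit behind a network API:

```python
# Hypothetical sketch of the kind of cloud object database the talk
# describes: each entry stores physical properties plus grasp options,
# and robots report back whether a suggested grasp actually worked,
# so the ranking improves as more robots use the service.

class CloudObjectDB:
    def __init__(self):
        self._objects = {}

    def register(self, name, weight_kg, size_cm, grasps):
        self._objects[name] = {
            "weight_kg": weight_kg,
            "size_cm": size_cm,
            # Per-grasp success statistics: [successes, attempts],
            # seeded with a weak uniform prior of 1 success in 2 tries.
            "grasps": {g: [1, 2] for g in grasps},
        }

    def lookup(self, name):
        """Return properties and grasps ranked by observed success rate."""
        entry = self._objects[name]
        ranked = sorted(entry["grasps"],
                        key=lambda g: entry["grasps"][g][0] / entry["grasps"][g][1],
                        reverse=True)
        return {"weight_kg": entry["weight_kg"],
                "size_cm": entry["size_cm"],
                "best_grasps": ranked}

    def report(self, name, grasp, success):
        """Feedback from a robot: did the suggested grasp work?"""
        stats = self._objects[name]["grasps"][grasp]
        stats[0] += int(success)
        stats[1] += 1

db = CloudObjectDB()
db.register("mug", 0.3, (9, 9, 10), ["rim", "handle", "body"])
db.report("mug", "handle", True)
db.report("mug", "handle", True)
db.report("mug", "rim", False)
print(db.lookup("mug")["best_grasps"][0])  # prints "handle"
```

    The point of the feedback method is exactly the "training other robots" idea: every robot's successes and failures reshape the ranking that the next robot receives.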

    At the beginning of my talk I mentioned surgical robots, and I want to say more about their use. There is a treatment technique called brachytherapy, that is, contact radiation therapy. According to statistics, every sixth man develops prostate cancer. One treatment method is to insert needles into the tumor and deliver targeted irradiation through them.



    To reach the prostate, these needles must pass through a number of other important and delicate organs and tissues. As you can imagine, this is in itself a very risky procedure:



    It was proposed to abandon the fixed template grid and insert the needles at different angles to reduce the risk of damaging surrounding organs.



    A special algorithm for calculating the insertion vectors was developed for this. Such calculations require considerable computing power, and with access to the cloud that is not a problem. Practical results already confirm that abandoning parallel needle insertion is equally effective and safer for patients. Robots are being developed to help surgeons insert the needles more accurately.

    The second area where surgical robots are being introduced is the placement of implants at tumor sites. These implants can act as a lens that focuses radiation on a specific area of the tumor. To do this, the implant's shape must be determined correctly, taking into account the geometry of the patient's channels, cavities, and organs.



    In August we presented our methodology for calculating the nonlinear delivery of implants to the irradiation site. Thanks to 3D printing, which is getting cheaper every year, it is possible to produce not only the implants themselves but also channels of any necessary configuration for introducing the irradiating instruments.



    The first results of a clinical study of our method already give serious grounds for optimism. Another medical example of the use of cloud technology is what we called "superhuman" surgery. This concerns laparoscopy: the robot acts as an assistant, while the surgeon supervises its actions.



    There are many routine operations that could have been automated long ago. For example, suturing an incision is a task that requires not great intellect but extreme concentration. The surgeon can indicate where the suture should start and end, and then delegate the task to the robot. Programming this procedure directly has so far proved impossible because of its high complexity, so the best option is "training" the robot on examples of human work. Analysis of surgeons' movements showed that human motions are not perfectly smooth; there is always a certain tremor:





    In other words, a lot of noise. To smooth out the surgeon's actions, we used dynamic time warping, a technique borrowed from speech recognition.
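    Here is a minimal sketch of dynamic time warping, the alignment technique the speaker borrows from speech recognition. The demo data and the averaging step are my own invented illustration of how several noisy demonstrations could be merged into one smooth reference trajectory; the real system is certainly more elaborate:

```python
# Classic dynamic time warping (DTW): find the optimal alignment between
# two sequences that may run at different speeds, then use the alignment
# to average a noisy demonstration onto a reference timeline.

def dtw_path(a, b):
    """O(len(a)*len(b)) DTW; returns the optimal alignment as (i, j) pairs."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # a[i-1] repeats
                                 cost[i][j - 1],      # b[j-1] repeats
                                 cost[i - 1][j - 1])  # one-to-one match
    # Backtrack from (n, m); the INF border forces the path to (0, 0).
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, i, j = min((cost[i - 1][j - 1], i - 1, j - 1),
                      (cost[i - 1][j], i - 1, j),
                      (cost[i][j - 1], i, j - 1))
    return list(reversed(path))

def average_onto(reference, demo):
    """Warp one demonstration onto the reference timeline: for each
    reference sample, average the demo samples aligned to it."""
    buckets = [[] for _ in reference]
    for i, j in dtw_path(reference, demo):
        buckets[i].append(demo[j])
    return [sum(b) / len(b) for b in buckets]

if __name__ == "__main__":
    # A reference 1-D trajectory and a slower, stuttering demonstration.
    reference = [0, 1, 2, 3, 4]
    demo = [0, 0, 1, 2, 3, 4, 4]
    print(average_onto(reference, demo))  # -> [0.0, 1.0, 2.0, 3.0, 4.0]
```

    Averaging several demonstrations aligned this way cancels out the tremor of any single recording, which is the smoothing effect described in the talk.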



    The resulting smooth path can then be handed to the robot for execution:



    Using iterative learning, we can refine the robot's motion algorithm, perform the operation as accurately and efficiently as possible, and then sharply increase its speed.

    Another surgical task where robots can be used is extracting foreign objects from the human body: bullets, shrapnel, and so on. Work in this direction is also underway. So far a human performs this operation three times faster than an autonomous system, but we believe the gap can be closed through cloud computing and further algorithm optimization.

    In conclusion, I want to say a few words about what, in my opinion, awaits us in the near future. We will be surrounded by devices with their own "brains", or at least with RFID tags. The number of devices connected to the Internet to obtain the information they need will keep growing. This is the so-called "industrial" Internet, the "Internet of Things".



    Of course, this will require addressing the reliability of all these "smart" devices and their manufacturers, increasing the bandwidth of the network infrastructure, and protecting against hackers. At the same time, the problem of repairing and maintaining such a quantity of electronics will arise. Incidentally, this little guy is one of the first harbingers of household robots without their own computing power:



    It is just a platform; all control is handled by a program installed on your iPhone.

    Great progress can be expected in anthropomorphic and zoomorphic robots; the success of the developers at Boston Dynamics is so great that it is even a little scary. Flying robots, quadcopters, are also developing rapidly.

    To summarize: why do we need cloud robotics? To leverage open-source development, to let robots learn from each other and from people, and to gain access to huge amounts of data and computing power.

    Thank you very much for your attention.
