Is the Robot Uprising Canceled?

    (An informal review of David Mindell's book "The Rise of the Machines Is Canceled! Myths about Robotics", Alpina Non-Fiction, 2017)

    [Image: photo by user Chmouel on en.wikipedia]

    David Mindell's book "The Rise of the Machines Is Canceled! Myths about Robotics" left a strong but ambiguous impression. First of all, it is worth looking at the notes, which show what a gigantic amount of work the author has done, summarizing a huge body of material from very significant sources. In a few words: robots under water, on land, in the air, in space and on other planets, the Moon and Mars. On that last point there is one omission: by now robots have flown to the edge of the solar system, but, unfortunately, the book says nothing about them. Still, what is covered allows the author to draw general conclusions about the prospects of robotics.

    I completely agree with the main conclusion: absolute autonomy is a harmful myth, at least for the coming decades. Drawing on his own and others' experience, the author shows in detail that today the most successful systems are those in which the interaction between human and machine is implemented as fully as possible, rather than those that shut the human out of the decision-making process. I became convinced of the correctness of this idea on the humble example of my own game bots for KR2HD. The bot for planetary battles was, under the original idea my co-author and I had, supposed to be completely autonomous, and that project has now stalled. For the new bot for the battle of Rogeria (thanks to this multi-stage battle you can earn the lion's share of the game's points), I chose a semi-automatic mode: the bot performs the relatively routine operations (some of them are far from trivial, since pattern recognition is required), but under certain conditions it does not try to “sparkle with intelligence” and instead asks for the player's intervention. It does not do this often: I managed to write the above on one computer while the bot was racking up points for me on another. (Annoying examples of various programs trying to “show off their intelligence”, by contrast, are never far away.) Since this approach has paid off, I will describe it in more detail in a separate article; a minimal sketch of the loop follows.
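
    Here is that minimal sketch of the semi-automatic loop, in Python. It only illustrates the idea and is not the actual KR2HD bot code: the callbacks recognize, act and ask_player, as well as the confidence threshold, are hypothetical names invented for this example.

        import time

        CONFIDENCE_THRESHOLD = 0.9  # below this the bot defers to the player (assumed value)

        def run_semi_automatic_bot(recognize, act, ask_player, delay=0.5):
            """Handle routine, well-recognized situations automatically;
            hand ambiguous ones over to the human player."""
            while True:
                observation, confidence = recognize()   # pattern recognition on a screenshot
                if confidence >= CONFIDENCE_THRESHOLD:
                    act(observation)                    # routine action, no human needed
                else:
                    # Do not try to "sparkle with intelligence": ask the player instead.
                    decision = ask_player(observation)
                    if decision is None:                # the player chose to stop the bot
                        break
                    act(decision)
                time.sleep(delay)                       # pace the loop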

    Back to the book. While agreeing with its main assertion that the future lies not with full autonomy, I must say that it was not an easy read because of the author's repetitions: the text is clearly and heavily redundant. Nevertheless, I read every word. At the very end, criticizing "Google cars" (Google's driverless cars), the author once again explicitly lists the three myths of robotics:

    Funnily enough, it is precisely such a high-tech company as Google that, in its rhetoric, steps back into the 20th century, archaically casting the driver as a passive observer. Their “new” approach falls prey to all three myths about robots and automation generated by the 20th century: 1) automotive technology should logically develop toward complete, utopian autonomy (the myth of linear progress); 2) autonomous control systems relieve the driver of the obligation to drive (the myth of replacement); 3) autonomous machines can operate completely independently (the myth of complete autonomy).

    After reading the many stories in the book, in particular that in every Moon landing, starting with Neil Armstrong's, the astronauts switched off the automatic landing and touched down manually, relying on information from the on-board computer, and that the Shuttle landings on Earth went much the same way, I agree with the author. A little further on, however, the author describes a new project he is involved in: ALIAS, an automatic aircraft piloting system. Everything looks good, but an ambitious goal has been set: to retrofit any aircraft with a minimum of effort, without fully re-certifying the aircraft and without interfering with its design. In particular, to use computer vision to read information from the displays installed in the cockpit. Reading this, I grabbed my head; I stopped understanding anything and can only guess. As far as I could tell, the author wants to place a web camera in the co-pilot's seat, aim it at the display and recognize the information shown on it! This wildly complicates the system and greatly reduces reliability. Wouldn't it be easier to connect to the on-board computer with a USB cable and take the digital stream directly, without any recognition? It may well be that any connection, even a read-only one, requires certification, but resorting to recognition solely in order to avoid certification is absurd. My own recognition bots are absurd in exactly the same sense: if the game had a COM interface, all of my bots' tasks would be solved trivially.
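
    Purely to illustrate the difference I am objecting to, here are two hypothetical ways of obtaining the same cockpit value, say the altitude, in Python. Neither reflects the actual ALIAS design: the frame layout, the byte offset and the function names are invented for this sketch, and the OCR path assumes the opencv-python and pytesseract packages (plus the tesseract binary) are available.

        import struct

        def altitude_from_digital_stream(frame: bytes) -> float:
            """Direct path: decode a (hypothetical) fixed-layout telemetry frame."""
            # Assume the altitude is stored as a little-endian float at byte offset 4.
            (altitude,) = struct.unpack_from("<f", frame, 4)
            return altitude             # exact value, no recognition step involved

        def altitude_from_camera(image_path: str) -> float:
            """Recognition path: OCR a camera shot of the cockpit display."""
            import cv2                  # opencv-python
            import pytesseract          # also requires the tesseract binary
            image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
            text = pytesseract.image_to_string(image, config="--psm 7")
            return float(text.strip())  # may fail or misread under glare, blur or parallax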

    It is interesting that throughout the book the author rarely utters the word “AI”, while stating that he will not discuss the question of whether a machine can think. Perhaps, contrary to the common view, the author does not regard pattern recognition as an AI task? The point is not the name, but the fact that these are fundamentally different kinds of task. Simply put, in a healthy computing environment two times two will always be four, but the same environment will not always correctly recognize the digit “2” on paper or on a monitor (a toy illustration of this contrast is given after the CAPTCHA example below). A person, meanwhile, recognizes images far better than a computer, yet also makes mistakes. Not everyone, and not always, can immediately make out every word a band sings in a seemingly familiar language. And in the visual domain people have their illusions and mirages:

    “Yesterday I had a hallucination, and I was so frightened that I could not sleep all night,” a patient told me. “I walk into the room in the evening and see, in the moonlight, what looks like a person standing there. I was surprised: who could it be? I come closer, and it is my robe hanging on the wall, with a hat on top of it. Then I got even more frightened: if I am having hallucinations, it means I am seriously ill.”

    But there was nothing to be afraid of. It was not a hallucination but an illusion, that is, an incorrect, distorted perception of a real object. The robe and hat merely looked like a person.
    (Konstantin Platonov, Entertaining Psychology, RIMIS, 2011)

    Another widely known example of difficult recognition is the CAPTCHA you run into on the Internet at every turn. Some of the scribbles are such that you have to press the refresh button several times before you manage to “prove that the camel is not a robot”. Maybe someday machines will be able to recognize all kinds of audio and visual images better than humans, but it has not yet been proven that such problems always have a solution. For now, practice shows that recognition is generally possible, but errors cannot be avoided.
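
    As promised above, here is a toy sketch of the contrast between exact computation and fallible recognition. The “recognizer” is a deliberately naive template matcher over a 3x5 character grid, invented purely for illustration; unlike the arithmetic, it returns a guess with a confidence rather than a guaranteed answer.

        def exact_arithmetic() -> int:
            return 2 * 2  # in a healthy computing environment this is always 4

        TEMPLATE_TWO = ["XXX",
                        "..X",
                        "XXX",
                        "X..",
                        "XXX"]  # an idealized 3x5 rendering of the digit "2"

        def recognize_digit(pixels):
            """Naive row-by-row template match: returns (guess, confidence)."""
            matches = sum(row == template for row, template in zip(pixels, TEMPLATE_TWO))
            confidence = matches / len(TEMPLATE_TWO)
            return ("2" if confidence > 0.6 else "?"), confidence

        print(exact_arithmetic())                    # 4, deterministically
        noisy = ["XXX", "..X", "XX.", "X..", "XXX"]  # a slightly corrupted "2"
        print(recognize_digit(noisy))                # ('2', 0.8) -- a guess, not a certainty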

    It so happened that before reading Mindell's book I had wanted to re-read Stanislaw Lem's stories about the pilot Pirx ("Navigator Pirx"). One might call them a chronicle of catastrophes to which the hero has been connected in one way or another throughout his career, and AI is involved in almost every one of them. As a result, questions arise that are similar to those in Mindell's book. One can only marvel at how Lem anticipated problems that are relevant to the modern development of robotics. Unfortunately, Mindell does not mention Lem, although there could have been interesting parallels: if the situations Lem invented are treated as models, many of them confirm Mindell's claims.

    Of course, Lem did not foresee everything. He did not foresee hacking, viruses or “Trojan horses” (he does model cases of robots behaving inadequately, but not as a result of deliberately compromised software). What is strange, however, is that in our time of constant disasters connected with hacks, Mindell says nothing about them either. In this respect he is somewhat reminiscent of Asimov, in whose world the three laws of robotics ensure the harmonious coexistence of people and machines. At the same time, non-autonomy, that is, being controlled by a human operator, may not save the day: Mindell repeatedly notes that the line between autonomous and non-autonomous devices is gradually being erased, and the same device can work in both modes, like the Apollo on-board computer during the lunar descent mentioned above. Meanwhile, it seems obvious that a robot with an embedded Trojan turns into a spy, and a robot infected with a virus can perform extremely inadequate and dangerous actions. Why does the book say nothing about this? Perhaps because such an all-too-real threat refutes the all-too-optimistic title of the book about the cancellation of the machine uprising?
