World-renowned AI developers have agreed not to create smart weapons.

    Military drones are already quite "smart": an operator intervenes only from time to time, and the rest of the time they carry out their tasks on their own.

    Artificial intelligence technologies are developing at a rapid pace. Specialists have taught AI a great deal, and its weak form (a strong form, fortunately or unfortunately, has not yet been created) now works in industry, shipping, entertainment, and many other sectors. But what about military affairs? AI is used here too: predictive analysis of the trajectory of a missile or vehicle, anticipating the enemy's actions, developing one's own strategy. Artificial intelligence can handle all of this.

    But what if AI of some level were built into a weapon itself: would it not become more effective? Most likely it would, and a very "productive" one at that. But here questions of ethics arise. Can a machine decide the fate of people? Many technology experts believe it cannot. And those "many" recently signed a declaration, a kind of pledge never to take part in the development of smart weapons.

    Among the specialists who took part in drafting and signing the declaration are Elon Musk, representatives of DeepMind, and employees of many other companies whose work touches on artificial intelligence in one way or another. According to the group of scientists and entrepreneurs who put their signatures under the document, the decision to kill a person must be made by another person, not a machine, and the burden of that decision falls on the shoulders of the one who made it.

    A machine has no moral hesitations: the system is sent on a mission, say, to comb the back roads during a war, and it shoots enemies as its computing unit (rather than any conscience) dictates. According to experts, the development of smart weapons with AI elements could become a destabilizing factor for any country and its citizens.

    The text of the declaration on the use of AI in weapons development was published after the close of the International Joint Conference on Artificial Intelligence (IJCAI), held in Stockholm. The event was organized by the Future of Life Institute, which studies existential risks to humanity. The Institute had previously called for abandoning the idea of creating smart lethal weapons. Now that idea has found support and is spreading ever more widely.

    Among the signatories are Elon Musk, head of SpaceX and Tesla; the three co-founders of Google's subsidiary DeepMind; Skype co-founder Jaan Tallinn; and world-renowned artificial intelligence researchers Stuart Russell, Yoshua Bengio, and Jürgen Schmidhuber.

    Some of the signatories noted that their joint initiative should help move from words to deeds when it comes to abandoning smart lethal weapons. No one is planning a revolution; the main task is to show the danger of the gradual intellectualization of weapons of any type. "Weapons that decide for themselves who to kill are a disgusting and destabilizing idea, like biological weapons. The idea of smart weapons should be treated the same way as biological weapons."

    Here, however, there is one difficulty: it is rather hard to distinguish a truly autonomous, "smart" weapon from one that is not. The line is blurry. At what point does an ordinary hi-tech weapon stop being dumb, grow wiser, and start deciding who lives and who dies? An automatic turret that tracks people by their thermal signature: is it acceptable to use such a weapon? And is there much difference between a smart weapon and a conventional one if that same turret aims automatically and a human merely pulls the trigger?

    Moreover, the declaration itself comes late: about 30 countries already have lethal systems in service that, with more or less of a stretch, can be called smart.

    Incidentally, an interesting fact: Google developers quite literally rebelled when they learned that the company was going to develop autonomous systems for the Pentagon, namely a non-lethal drone with AI.

    Speaking of smart weapons and their future, one cannot help recalling a science fiction story (whose author and title, unfortunately, I do not remember) in which a sentient warhead with the intellect of a five-year-old child was sent on a mission. Approaching the enemy base, it discovered that people lived there, the same kind of people as at its own base. Realizing this, it decided to report everything to its creator and set off on the return flight. The story ends with the missile flying into the window of the house where its creator lives; for obvious reasons, it does not have time to tell him anything.
