Human rights activists call for the adoption of the First Law of Robotics



    Science fiction fans know the Three Laws of Robotics formulated by Isaac Asimov well:

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

    These laws hold in fiction, but not in real life. Fortunately, there are as yet no robots capable of engaging an enemy autonomously; they can fire only under an operator's control. However, some experts say the technology has come close to a dangerous point.



    For example, the Samsung Techwin SGR-1 sentry robot (pictured) is used to guard the demilitarized zone on the border with North Korea. Its armament consists of a 5.56 mm machine gun and a 40 mm automatic grenade launcher. The robot operates in semi-automatic mode and fires only on the operator's command.

    SGR-1 in action

    Some governments, including the United States, see an opportunity to save soldiers' lives by replacing them with robots on the battlefield, says Steve Goose of Human Rights Watch's Arms Division. At first glance this looks like the humane choice, but experts warn that the imperfect computer vision algorithms involved make errors inevitable. In some situations even a human cannot reliably tell an armed enemy from a civilian, so computer vision systems are bound to produce false positives.
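    To make the false-positive problem concrete, here is a minimal sketch of the threshold trade-off in a hypothetical "armed person" detector. All of the confidence scores, ground-truth labels, and threshold values below are invented for illustration; they do not come from the report or from any real detection system.

    # Toy illustration of the threshold trade-off in a hypothetical
    # "armed person" detector. All numbers are invented for this sketch.
    scores = [0.91, 0.78, 0.80, 0.31, 0.12]      # detector confidence per person
    is_armed = [True, True, False, False, False]  # hypothetical ground truth
    # Note: the civilian scoring 0.80 (say, someone running and shouting)
    # outscores a real threat at 0.78, so no threshold separates them cleanly.

    def evaluate(threshold):
        """Return (civilians wrongly flagged, armed threats missed) at a threshold."""
        fp = sum(s >= threshold and not armed for s, armed in zip(scores, is_armed))
        fn = sum(s < threshold and armed for s, armed in zip(scores, is_armed))
        return fp, fn

    for t in (0.50, 0.79, 0.85):
        fp, fn = evaluate(t)
        print(f"threshold={t:.2f}: civilians flagged={fp}, threats missed={fn}")

    In this toy setup, every choice of threshold either flags a civilian or misses an armed adversary. That is precisely the kind of error the distinction requirement of Articles 48 and 51(4), quoted below, does not tolerate.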

    Human Rights Watch has published a 50-page report, "Losing Humanity: The Case against Killer Robots," which surveys military robots as they evolve from remotely controlled toward fully autonomous machines. The human rights advocates urge all countries to comply with international law, including Article 36 of Additional Protocol I to the Geneva Conventions of 12 August 1949, relating to the protection of victims of international armed conflicts:

    Article 36
    New weapons

    In the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party.

    The legality of a new type of military robot should therefore be assessed as early as the concept and design stage, or later, but in any case before mass production begins; otherwise the Geneva Conventions are violated.

    According to the experts in Human Rights Watch's Arms Division, the United States military has violated the convention with at least one type of weapon: the Predator unmanned aerial vehicle equipped with Hellfire missiles.



    The two weapon systems were evaluated independently of each other, whereas the ICRC's commentary on Article 36 of the Protocol says that a weapon must be re-examined for compliance with international law after any "significant upgrade". Arming a drone with Hellfire missiles obviously qualifies as a significant upgrade.

    The organization's experts note that international law contains no explicit ban on the use of autonomous combat robots. However, no modern computer vision system is capable of complying with Articles 48 and 51(4) of the Protocol.

    Article 48
    Basic rule

    In order to ensure respect for and protection of the civilian population and civilian objects, the Parties to the conflict shall at all times distinguish between the civilian population and combatants and between civilian objects and military objectives and accordingly shall direct their operations only against military objectives.

    Article 51
    Protection of the civilian population

    4. Indiscriminate attacks are prohibited. Indiscriminate attacks are:
    a) those which are not directed at a specific military objective;
    b) those which employ a method or means of combat which cannot be directed at a specific military objective; or
    c) those which employ a method or means of combat the effects of which cannot be limited as required by this Protocol;
    and consequently, in each such case, are of a nature to strike military objectives and civilians or civilian objects without distinction.

    The question is whether robots, given sufficiently advanced technology, could comply with Articles 48 and 51(4) even in theory. Telling a civilian apart from an armed adversary remains one of the fundamental problems.

    Opinions differ on this point. Some experts believe that a strong AI would eventually be able to make such decisions. Others argue that artificial intelligence is by definition incapable of this, because it would require, among other things, assessing a person's intentions and emotional state. Consider a mother who runs toward her children, screaming at them not to play with toy guns near the soldiers. To a computer vision system, the scene contains two armed adversaries and a third approaching with a scream.

    Human Rights Watch is concerned that armed forces around the world may begin deploying autonomous combat robots before artificial intelligence experts reach a consensus. The organization therefore calls for a new international agreement that explicitly prohibits the development and use of weapons capable of operating in fully autonomous mode.

    If adopted, it would mean that Isaac Asimov's First Law of Robotics, 70 years after its formulation, had effectively become reality.

