Anatoly Kostin. This Unpredictable Automation

    It is believed that automation is a good thing. Factory directors boast: "Our production is robotic; a person just sits at the control panel." Every technical novelty, from machine tools to aircraft, is smarter than its predecessor and demands less effort from a human. The world has set course for automation and does not see the danger in it.

    The main problem with automation is unpredictability. When creating a technical device or a computer program, developers are guided by their own ideas about who will use it, how, and under what conditions. But in real life not everything fits the norm, and so automation sometimes springs unpleasant surprises. Take a modern digital camera. The user does not need to set anything manually, yet sometimes it is impossible to take a picture: you press the shutter button and the camera does nothing. The automation has decided that conditions are not optimal for shooting. But sometimes all you need is not to miss the moment! This is a typical example of the unpredictability of smart technology.
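
    As a rough sketch (every name and threshold below is invented for illustration; no real camera firmware is quoted), the logic amounts to an automatic gate with no manual override:

```python
# Hypothetical sketch of fully automatic shutter logic.
# All names and thresholds are made up for illustration.

def shutter_pressed(light_level: float, focus_locked: bool) -> bool:
    """Fully automatic mode: the camera, not the user, has the last word."""
    MIN_LIGHT = 0.3  # the designers' notion of "optimal conditions"
    if light_level < MIN_LIGHT or not focus_locked:
        return False  # the shot is silently refused
    return True

# The user presses the button at the decisive moment...
print(shutter_pressed(light_level=0.2, focus_locked=True))  # False: moment lost
```

    The flaw is not the threshold itself but the absence of any way for the user to overrule it when the moment matters more than picture quality.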

    And what about aircraft control? Few people know that during the flight of the Buran spacecraft (which took place entirely in automatic mode) the unexpected happened. Experts expected Buran to make a right turn on landing approach, but the ship suddenly banked left and flew across the runway. Both the ground services and test pilot Magomet Tolboev, who was escorting Buran in a MiG-25, were baffled. Fortunately, the craft landed safely. The cause of the strange maneuver turned out to be a strong crosswind; the developers later said the probability of such a case was no more than 3%. But it happened! Now suppose Tolboev had been piloting Buran. He would have had two options: take over control, switching off automation that was in fact working correctly, or not intervene and become a hostage of the automation. But who could guarantee that the craft would not fly off into the steppe and crash? Whatever the pilot did, he would be "at fault" either way.

    The problem has another aspect. In situations not envisaged by the developers, automation can shut down working equipment, or correct or block the operator's actions, judging them erroneous. Even now this regularly leads to serious accidents.
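
    A hypothetical sketch of this failure mode, with made-up names and limits: the automation silently clamps the operator's command to the envelope the designers considered sane, even when the emergency demands going outside it.

```python
# Hypothetical sketch: automation that "corrects" operator commands it
# considers erroneous. Outside the designers' envelope, the correction
# itself becomes the hazard. All values are invented.

SAFE_THROTTLE = (0.2, 0.8)  # the designers' notion of a sane command

def apply_command(operator_throttle: float) -> float:
    lo, hi = SAFE_THROTTLE
    # The operator's input is silently clamped to the envelope,
    # even if the emergency genuinely requires full throttle.
    return min(max(operator_throttle, lo), hi)

print(apply_command(1.0))  # 0.8 -- the operator's action is overridden
```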

    It is believed that in the event of a failure or breakdown the operator should take over control. But switching from automatic to manual mode is psychologically difficult: one must grasp the cause of the emergency and move quickly into action. In automatic mode the operator is a passive observer, and staying vigilant is hard. This is how fatal errors arise, and the more complex the technology, the worse the consequences. Not only is the operator a hostage of automation, he is also held responsible for things he did not do!

    Disasters are not the only harm from automation. Here are the smaller troubles:

    * automation requires special knowledge, so the demands on staff qualifications grow;
    * operators gradually lose manual control skills and, in the event of an accident, may be unable to perform the necessary actions;
    * removing specialists from active control can breed insecurity in them and lower their social status.

    Automation has already caused many air crashes, and the risks are growing: unmanned aircraft, autopilots for cars, and combat robots are being built. Back in the 1970s our psychologists warned of the dangers of mindless automation. They proposed a solution: the optimal control mode for equipment is semi-automatic. The operator plays the leading role, maintains his skills, and in the event of a failure or emergency finds it easier to take over control. Little is required of the developers of household appliances: the user should get easy access to the semi-automatic mode (today it is anything but easy to reach). With industrial installations things are more complicated: semi-automatic control there is only the first step toward a solution. In an atypical situation even top-class professionals make mistakes. The likelihood of disasters can be reduced by drawing on the work of engineering psychologists, for example, active assistance aids, as in the sketch below. But first the creators and users of smart machines must see the problem.
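
    What a semi-automatic arrangement might look like in code, as a minimal hypothetical sketch (the names, thresholds, and actions are invented, not drawn from any real flight system): the automation proposes, the operator disposes, and the override is one step away.

```python
# Hypothetical sketch of semi-automatic control: the automation suggests,
# the operator keeps the leading role. All names and values are invented.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Suggestion:
    action: str
    rationale: str

def autopilot_suggest(crosswind: float) -> Suggestion:
    if crosswind > 15.0:  # hypothetical threshold, m/s
        return Suggestion("turn_left", "strong crosswind, extend the approach")
    return Suggestion("turn_right", "nominal approach")

def control_step(crosswind: float, operator_choice: Optional[str]) -> str:
    suggestion = autopilot_suggest(crosswind)
    # The operator either accepts the suggestion (by doing nothing) or
    # overrides it with a single action; a real system would also
    # display the rationale alongside the suggestion.
    return operator_choice if operator_choice is not None else suggestion.action

print(control_step(18.0, operator_choice=None))         # "turn_left" (accepted)
print(control_step(18.0, operator_choice="go_around"))  # "go_around" (override)
```

    The point of the design is that the human stays in the loop by default, rather than being summoned only after the automation gives up.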

    Harvard Business Review Russia, November 2007, p. 34
