Why you should not expect moral behaviour from self-driving cars

Original author: John McDermid
  • Translation


Ever since companies began developing self-driving cars, people have asked how designers will resolve moral questions, such as whom a car should kill if an accident is unavoidable. A recent study suggests this question may be even harder to answer than previously thought, because people's moral preferences vary from country to country.

Researchers at Harvard University and MIT developed an online game that simulates situations in which an accident with casualties is unavoidable. They gathered some 40 million decisions from participants in over 200 countries, asking them to choose how such incidents should end, for example, whether the car's passengers or the pedestrians should die.

As a result, three cultural clusters with significantly different ethical preferences were identified. For example, in the southern cluster (covering most of Latin America and the former French colonies), there was a strong preference for sparing women at the expense of men. In the eastern cluster (including many Islamic countries as well as China, Japan and Korea), people were less inclined to vote for sparing the young at the expense of the elderly.

The researchers concluded that this information should inform the decisions made by developers of self-driving cars. But should it? While the study highlights an interesting discovery about global differences in moral preferences, it also reveals a persistent misunderstanding of AI and what it can actually do. Given the current level of AI technology used in self-driving cars, it is clear that these machines cannot make moral decisions.

The fantasy of the "moral machine"


Self-driving cars are taught when to steer and when to brake using a particular kind of AI known as "weak" (narrow) AI, which focuses on a single, highly specialized task. They are built with an array of sensors, cameras and range-finding lasers (lidars) that feed information to a central computer, which uses AI to analyse the inputs and make driving decisions.
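
To make that division of labour concrete, here is a minimal, purely illustrative sketch of such a narrow-AI decision step in Python. All names, thresholds and the braking constant are hypothetical assumptions rather than anything from a real vehicle stack; the point is only that the "reasoning" amounts to comparing fused sensor detections against fixed rules.

```python
from dataclasses import dataclass
from enum import Enum, auto


class ObjectKind(Enum):
    PEDESTRIAN = auto()
    VEHICLE = auto()
    TRAFFIC_SIGN = auto()


@dataclass
class DetectedObject:
    kind: ObjectKind
    distance_m: float        # distance ahead, fused from camera and lidar
    lateral_offset_m: float  # offset from the car's path; 0 means directly ahead


def decide_action(objects: list[DetectedObject], speed_mps: float) -> str:
    """Pick a single driving action from fused sensor detections.

    This is the entire extent of the "reasoning" a narrow AI performs here:
    compare distances and speed against fixed thresholds, then brake or slow.
    """
    # Ignore objects well outside the planned path.
    in_path = [o for o in objects if abs(o.lateral_offset_m) < 1.5]
    if not in_path:
        return "continue"

    closest = min(in_path, key=lambda o: o.distance_m)
    # Rough stopping distance assuming ~0.8 g of braking (hypothetical constant).
    stopping_distance_m = speed_mps ** 2 / (2 * 7.8)

    if closest.distance_m <= stopping_distance_m:
        return "emergency_brake"
    if closest.kind is ObjectKind.PEDESTRIAN:
        return "slow_down"
    return "continue"


if __name__ == "__main__":
    scene = [
        DetectedObject(ObjectKind.PEDESTRIAN, distance_m=25.0, lateral_offset_m=0.4),
        DetectedObject(ObjectKind.TRAFFIC_SIGN, distance_m=12.0, lateral_offset_m=3.0),
    ]
    print(decide_action(scene, speed_mps=15.0))  # -> "slow_down"
```

Nothing in a loop like this weighs whose life matters more; it only maps detections to a small set of manoeuvres.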

And although this technology is still relatively simple, cars can already outperform humans at the basic tasks of driving. But it would be unrealistic to expect self-driving cars to make ethical decisions that even the most moral person would not have time to make in the moment before an accident. If we want that from a car, it would have to be programmed with artificial general intelligence (AGI).

AGI is the equivalent of what makes us human: the ability to hold a conversation, enjoy music, laugh at a joke or make moral judgements. Producing AGI is currently impossible because of the complexity of human thought and emotion. If we demand autonomous cars with morality, we will have to wait several decades, if it is achievable at all.

Another problem with the new study is the unrealistic nature of many of the situations the participants assessed. One scenario played out the well-known "trolley problem", asking participants whom the car should run over when its brakes fail: three passengers (a man, a woman and a child) or three elderly pedestrians (two old men and one old woman).

People can reflect carefully on such questions when filling out a questionnaire. But in most real incidents, the driver would not have time to make such a decision in the fraction of a second in which it happens, so the comparison does not hold. And given the current level of AI technology in self-driving cars, these vehicles cannot make such decisions either.


Narrow AI allows self-driving cars to make simple judgements about the objects around them

Modern self-driving cars have sophisticated perception capabilities and can distinguish pedestrians from other objects, such as lamp posts or traffic signs. However, the study's authors suggest that self-driving cars can, and perhaps should, draw far finer distinctions: for example, assessing how useful particular people are to society, such as doctors or athletes, and choosing to save them in an accident.

The reality is that such complex reasoning would require AGI, which is impossible today. Besides, it is far from clear that it should be done at all. Even if it were possible to program a machine to decide whose life to save, I believe we should not be allowed to do so. We must not let the preferences identified by the study, however large its sample, determine the value of a human life.

Fundamentally, self-driving cars are designed to avoid accidents wherever possible and, failing that, to minimize the speed of impact. Like humans, they will not be able to make a moral decision in the moment before an unavoidable collision. But self-driving cars will be safer than human-driven ones: more attentive, quicker to react, and able to use the full potential of the braking system.
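
As a rough, back-of-the-envelope illustration of that last point, the sketch below compares the impact speed for a slow-reacting driver and a fast-reacting automated system using the standard braking relation v² = v₀² − 2ad. The speeds, reaction times and decelerations are hypothetical numbers chosen only to show the effect, not measured figures.

```python
import math


def impact_speed_mps(initial_speed_mps: float, obstacle_distance_m: float,
                     reaction_time_s: float, deceleration_mps2: float) -> float:
    """Speed at which the car reaches the obstacle (0.0 means it stops in time)."""
    # Distance covered before braking even begins.
    reaction_distance = initial_speed_mps * reaction_time_s
    braking_distance = obstacle_distance_m - reaction_distance
    if braking_distance <= 0:
        return initial_speed_mps  # reaches the obstacle before braking starts
    # v^2 = v0^2 - 2 * a * d
    remaining = initial_speed_mps ** 2 - 2 * deceleration_mps2 * braking_distance
    return math.sqrt(remaining) if remaining > 0 else 0.0


if __name__ == "__main__":
    v0, gap = 14.0, 30.0  # ~50 km/h, obstacle 30 m ahead (hypothetical scenario)
    human = impact_speed_mps(v0, gap, reaction_time_s=1.5, deceleration_mps2=6.0)
    robot = impact_speed_mps(v0, gap, reaction_time_s=0.2, deceleration_mps2=7.5)
    print(f"human-driven impact speed: {human:.1f} m/s")  # still moving at impact
    print(f"automated impact speed:    {robot:.1f} m/s")  # stops before the obstacle
```

Under these assumed numbers, shaving the reaction time and braking harder turns a collision at roughly 9 m/s into no collision at all, which is where the real safety gains lie.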

For now, the biggest ethical question around self-driving cars is whether enough evidence of their safe behaviour has been gathered in simulation to justify releasing them onto the streets. But that does not mean they will be "moral", or will become so any time soon. To say otherwise is to confuse the narrow AI that drives a car with AGI, which will probably not appear within our lifetimes.

Ultimately, self-driving cars will be safer than humans, by design and by their ability to avoid incidents wherever possible or to reduce the damage when a crash is unavoidable. But machines cannot make moral decisions where even we cannot. That idea remains a fiction, and we should not hope for it. Instead, let us focus on safety: that is a faith that will be justified.
