Does my algorithm have mental health problems?

https://aeon.co/ideas/made-in-our-own-image-why-algorithms-have-mental-health-problems
  • Translation


Is my car hallucinating? Is the algorithm that runs the police surveillance system in my city paranoid? Marvin, the android from The Hitchhiker's Guide to the Galaxy, had a pain in all the diodes down his left side. Is that how my toaster feels?

It sounds funny, but only until we realize that our algorithms are increasingly being made in our own image. The more we learn about our own brains, the more we enlist that knowledge to create algorithmic versions of ourselves. These algorithms control the speed of robots, identify targets for autonomous military drones, compute our susceptibility to commercial and political advertising, find us matches on dating services, and evaluate our insurance and credit risks. Algorithms are becoming the near-sentient backdrop of our lives.

The most popular algorithms being put to work today are deep-learning algorithms. They mimic the architecture of the human brain, building complex representations of information. They learn to understand their environment by perceiving it, deciding what matters, and working out what follows from it. Because they are like our brains, the risk that they develop psychological problems of their own keeps growing.

Deep Blue, the algorithm that beat world chess champion Garry Kasparov in 1997, did it through brute force, examining millions of positions per second and looking up to 20 moves ahead. Everyone understood how it worked, even if they couldn't do it themselves. AlphaGo, the deep-learning algorithm that defeated Lee Sedol at Go in 2016, is radically different. It used neural networks to create its own understanding of the game, which is considered the most complex of all board games. AlphaGo learned by watching others play and by playing against itself. Go programmers and players alike are baffled by AlphaGo's unconventional play. Its strategy seems strange at first; only later do we grasp what it had in mind, and even then not completely.

To get a sense of what I mean, consider the following. Programs such as Deep Blue can have a bug in their code. They can crash from memory overload. They can become paralyzed by an infinite loop or simply return the wrong answer. But all of these problems can be solved by a programmer with access to the source code in which the algorithm was written.

Algorithms like AlphaGo are entirely different. Their problems are hard to see just by looking at their source code; they live in the way the algorithm represents information internally. That representation is an ever-changing, high-dimensional space, much like a dreamscape. Solving problems there requires nothing less than a psychotherapist for algorithms.

Take driverless cars. A driverless car that sees its first stop sign in the real world has already seen millions of stop signs during training, while building up its mental representation of what a stop sign is. Under different lighting conditions, in good weather and bad, with and without bullet holes, stop signs contain a bewildering variety of information. Under most normal conditions, the driverless car will recognize a stop sign. But not all conditions are normal. Recent experiments have shown that a few black stickers placed on a stop sign can fool the algorithm into deciding that it is actually a 60-mph speed-limit sign. Confronted with something frighteningly similar to the high-contrast shadow of a tree, the algorithm hallucinates.

And how many different ways can an algorithm hallucinate? To find out, we would have to feed it every possible combination of inputs, which means there is an effectively infinite number of ways it can go wrong. Expert programmers have long known this, and they exploit it to create what are called adversarial examples. The AI research group LabSix at MIT showed that by submitting images to Google's image-classification algorithm and using the data it returned, they could identify the algorithm's weak spots. They could then exploit those weaknesses to fool the algorithm, for example into believing that an X-ray image was actually a picture of two puppies playing in the grass.
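To make the idea concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM). This is not the black-box technique LabSix used (their attack only queried the model's outputs, without access to its internals), but it illustrates the same principle: a small, carefully chosen perturbation can flip a classifier's decision. The pretrained model, the image preparation and the usage lines below are placeholder assumptions, not details from the article.

```python
# A minimal FGSM sketch (white-box), showing how a tiny perturbation
# can change a classifier's prediction. Assumes PyTorch and torchvision
# are installed; `image` is a normalized tensor of shape (1, 3, H, W)
# prepared elsewhere, and `true_label` is its correct class index.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image, true_label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    # Nudge every pixel in the direction that increases the loss.
    return (image + epsilon * image.grad.sign()).detach()

# Usage sketch (image loading and normalization omitted):
# adv = fgsm_attack(image, true_label)
# print(model(image).argmax(dim=1), model(adv).argmax(dim=1))  # often differ
```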

Algorithms can also make mistakes because they pick up on features of the environment that are correlated with outcomes, even when there is no causal relationship between them. In the world of algorithms, this is called overfitting. When it happens in a brain, we call it superstition.
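As a toy illustration, unrelated to any of the systems discussed here, the sketch below gives a high-capacity model features that are pure noise. It happily "learns" patterns that look convincing on the training data and evaporate on new data.

```python
# A minimal sketch of overfitting: a flexible model latches onto
# random, non-causal features, scores near-perfectly on its training
# data and performs at chance on data it has never seen.
# Assumes numpy and scikit-learn are installed.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))      # 20 features of pure noise
y = rng.integers(0, 2, size=200)    # labels unrelated to the features

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(random_state=0)   # unconstrained depth
model.fit(X_train, y_train)

print("train accuracy:", model.score(X_train, y_train))  # ~1.0 (memorized)
print("test accuracy: ", model.score(X_test, y_test))    # ~0.5 (chance)
```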

One of the biggest algorithmic failures to date remains Google Flu Trends, a system for predicting flu outbreaks. Google Flu Trends used what people type into Google search to predict the location and intensity of flu outbreaks. Its predictions worked well at first, but they deteriorated over time, until eventually it was predicting twice as many cases of flu as were reported to the US Centers for Disease Control. Like an algorithmic shaman, Google Flu Trends was simply paying attention to the wrong things.

Algorithmic pathologies might be fixable. In practice, though, algorithms are often proprietary black boxes whose updating is commercially protected. In her 2016 book Weapons of Math Destruction, Cathy O'Neil describes a veritable parade of commercial algorithms whose insidious pathologies ruin people's lives. The algorithmic fault line that separates the wealthy from the poor is particularly striking. Poorer people are more likely to have bad credit, to live in high-crime areas, and to be surrounded by other poor people with similar problems. Because of this, algorithms single them out for deceptive ads that prey on their desperation, offer them subprime loans, and send more police to their neighborhoods, increasing the likelihood that they will be stopped for crimes committed at similar rates in wealthier neighborhoods. Algorithms used by the justice system then hand these people longer sentences, reduce their chances of parole, block them from jobs, raise their mortgage rates, demand higher insurance premiums, and so on.

This algorithmic vicious circle is hidden inside a matryoshka of black boxes: black-box algorithms that conceal their processing in high-dimensional thoughts we cannot access are themselves hidden inside the black boxes of proprietary ownership. In some places, such as New York City, this has prompted proposals for laws that would enforce monitoring of fairness in the algorithms used by municipal services. But if we cannot even detect cognitive biases in ourselves, how can we expect to detect them in our algorithms?

By training algorithms on human data, we teach them our biases. A recent study led by Aylin Caliskan at Princeton University found that algorithms trained on the news picked up racial and gender biases essentially overnight. As Caliskan noted: "Many people think machines are not biased. But machines are trained on human data. And humans are biased."
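The effect is easy to see for yourself. Below is a minimal sketch of the kind of word-association measurement used in such studies, using pretrained GloVe vectors loaded through gensim; the word lists and the comparison against "he" and "she" are illustrative choices of mine, not the study's actual test materials.

```python
# A minimal sketch of measuring gender associations in word embeddings
# trained on large text corpora (Wikipedia and newswire for this model).
# Assumes gensim is installed; the model download is roughly 65 MB.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")

def gender_lean(word):
    """Positive = closer to 'he', negative = closer to 'she'."""
    return vectors.similarity(word, "he") - vectors.similarity(word, "she")

for word in ["engineer", "programmer", "nurse", "receptionist"]:
    print(f"{word:>14}: {gender_lean(word):+.3f}")

# Occupations stereotyped as male tend to score positive here, those
# stereotyped as female tend to score negative: the model has absorbed
# the associations present in its training text.
```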

Social media is a writhing nest of human bias and hatred. Algorithms that spend too much time on social media quickly become bigots. They pick up prejudices against nurses and female engineers. They come to see issues such as immigration and minority rights in distorted ways. Given half a chance, algorithms will start treating people as unfairly as people treat one another. But algorithms are, by construction, overconfident, with no sense of their own fallibility. Unless they are trained otherwise, they have no reason to question their own competence (much like people).

The algorithms I have described so far have psychological problems that stem from the quality of the data they are trained on. But algorithms can also develop similar problems because of the way they are built. They can forget old information as they learn new information. Imagine remembering a new colleague's name and suddenly forgetting where you live. In the extreme, algorithms can suffer from what is known as catastrophic forgetting, in which the algorithm as a whole can no longer learn or remember anything new. A theory of age-related cognitive decline rests on a similar idea: when memory becomes overloaded, both brains and computers need more time to find what they know.
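Here is a minimal sketch of the forgetting effect under simple assumptions: a small network is trained on one synthetic classification task, then only on a second, unrelated one, and its accuracy on the first task typically collapses. The tasks, network size and training schedule are illustrative, not drawn from any particular system mentioned in the article.

```python
# A minimal sketch of catastrophic forgetting: sequential training on
# task B erases much of what the network learned about task A.
# Assumes scikit-learn is installed.
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification

# Two unrelated binary classification tasks with the same input size.
Xa, ya = make_classification(n_samples=500, n_features=20, random_state=1)
Xb, yb = make_classification(n_samples=500, n_features=20, random_state=2)

net = MLPClassifier(hidden_layer_sizes=(32,), random_state=0)

net.partial_fit(Xa, ya, classes=[0, 1])   # first call must declare classes
for _ in range(200):                      # phase 1: learn task A
    net.partial_fit(Xa, ya)
acc_a_before = net.score(Xa, ya)

for _ in range(200):                      # phase 2: learn only task B
    net.partial_fit(Xb, yb)
acc_a_after = net.score(Xa, ya)

print(f"task A accuracy before / after learning task B: "
      f"{acc_a_before:.2f} / {acc_a_after:.2f}")   # typically drops sharply
```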

When exactly things take a pathological turn is often a matter of perspective. As a result, mental anomalies in humans routinely go undetected. Synesthetes such as my daughter, who perceives written letters as colors, often don't realize that they have a special perceptual gift until well into their teens. Analyses of Ronald Reagan's speech patterns now suggest that he probably had dementia while in office as president. And The Guardian reports that the mass shootings that have occurred in the US on roughly nine out of every ten days over the past five years are often carried out by so-called "normal" people who break down under feelings of persecution and depression.

In many cases, it takes repeated malfunctions before a problem is detected. A diagnosis of schizophrenia requires at least a month of observed symptoms. Antisocial personality disorder, the modern term for psychopathy and sociopathy, cannot be diagnosed in people under 18, and then only if there is a history of conduct disorder before the age of 15.

Most mental disorders have no biomarkers, just as there are no bugs in AlphaGo's code. The problem is not visible in our hardware; it lives in our software. The many ways in which minds go wrong make each mental health problem unique in itself. We sort them into broad categories such as schizophrenia or Asperger's syndrome, but most are spectrum disorders, covering symptoms that most of us share to varying degrees. In 2006, the psychologists Matthew Keller and Geoffrey Miller argued that this is an inevitable property of how brains are built.

There is a lot that can go wrong in minds like ours. Carl Jung once suggested that in every sane person hides a lunatic. The more our algorithms come to resemble us, the easier it is for that lunatic to hide inside them.

Thomas Hills is a professor of psychology at the University of Warwick in Coventry, Britain.
