An AI lie detector can tell when a person is lying

Hi, Habr! I present to your attention a translation of an article by Rob Verger.

Today, artificial intelligence is everywhere: it identifies what is in food photos (on sites like Yelp), helps researchers try to speed up the MRI process, and can even look for signs of depression in a person's voice. But few people have thought about using artificial intelligence as a lie detector.
This idea, an AI lie detector, is in the news right now because a new European border-control project, iBorderCtrl, includes technology focused on "deception detection". The initiative involves a two-stage process, and the "deception detection" stage takes place right from home. According to the European Commission, the protocol begins with pre-screening, during which travelers "use a webcam to answer questions posed by an animated border guard, which is chosen according to the gender, ethnicity and language of the traveler. A unique approach to 'deception detection' analyzes the smallest changes in travelers' facial expressions to determine whether the interviewee is lying."

It all sounds like science fiction, and it certainly recalls the polygraph's problematic history. But this kind of artificial intelligence is quite real. The only question is how accurate it can be.

Rada Mihalcea, a professor of computer science and engineering at the University of Michigan, has been working on deception detection for about ten years. Here is how one AI lie detector was built, and how it works.

The first thing researchers in artificial intelligence and machine learning need is data. In this case, Mihalcea's team started with videos of real court cases. For example, a defendant in a trial who was found guilty can provide an example of deception; witness testimony was also used as examples of true or false statements. (Of course, machine-learning algorithms depend directly on the data they are fed, so it is important to remember that a person convicted of a crime may actually be innocent.)
In the end, 121 video clips and the corresponding transcripts of what was said were used (with roughly a one-to-one ratio of deceptive to truthful statements). This data allowed the team to build machine-learning classifiers that ultimately reached an accuracy of 60-75%.
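The article does not include the team's actual code, but the general pipeline it describes — labeled transcripts turned into word counts that feed a classifier — can be sketched with a toy Naive Bayes model in plain Python. The training sentences and labels below are invented for illustration and are not data from the study:

```python
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (transcript, label) pairs.
    Returns per-label word counts and document counts."""
    word_counts = {"truthful": Counter(), "deceptive": Counter()}
    doc_counts = Counter()
    for text, label in examples:
        word_counts[label].update(tokenize(text))
        doc_counts[label] += 1
    return word_counts, doc_counts

def classify(text, word_counts, doc_counts):
    """Score each label with log P(label) + sum of log P(word | label),
    using add-one smoothing over the combined vocabulary."""
    vocab = set(word_counts["truthful"]) | set(word_counts["deceptive"])
    total_docs = sum(doc_counts.values())
    best_label, best_score = None, float("-inf")
    for label, counts in word_counts.items():
        score = math.log(doc_counts[label] / total_docs)
        denom = sum(counts.values()) + len(vocab)
        for word in tokenize(text):
            score += math.log((counts[word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented toy transcripts, not data from the study.
examples = [
    ("i think we possibly saw him leave", "truthful"),
    ("i believe we probably got home early", "truthful"),
    ("he was absolutely there and they are very sure", "deceptive"),
    ("you know they were absolutely certain", "deceptive"),
]
word_counts, doc_counts = train(examples)
print(classify("they absolutely saw it", word_counts, doc_counts))  # → deceptive
```

The real classifiers were trained on far richer features than bare word counts, and 121 clips is a small dataset by machine-learning standards, which is part of why the reported accuracy tops out around 75%.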

What patterns did the system notice? "The use of pronouns. People who are lying tend to use the pronouns 'I' or 'we' less often, and generally refer to themselves less. Instead, they more often use the pronouns 'you', 'your', 'he', 'she' and 'they'," Mihalcea explains.

That is not the only linguistic feature. Deceptive people tend to use "stronger words" that "express certainty," Mihalcea says. Examples are words like "absolutely" and "very", whereas people telling the truth, on the contrary, tend to hedge, using words like "possibly" or "probably".
"I believe that deceptive people compensate for their lie by trying to appear more confident."
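The pronoun and certainty cues described above can be expressed as simple per-word rates over a transcript. Here is a minimal sketch; the word lists are illustrative placeholders, not the actual lexicons used in the research:

```python
# Illustrative word lists; the study's actual lexicons are not published here.
CERTAINTY_WORDS = {"absolutely", "very", "definitely"}
HEDGE_WORDS = {"possibly", "probably", "perhaps", "maybe"}
SELF_PRONOUNS = {"i", "we", "my", "our"}
OTHER_PRONOUNS = {"you", "your", "he", "she", "they", "their"}

def linguistic_features(transcript):
    """Return the rate of each cue per word in the transcript."""
    words = transcript.lower().split()
    n = len(words) or 1  # avoid division by zero on empty input
    rate = lambda lexicon: sum(w in lexicon for w in words) / n
    return {
        "certainty": rate(CERTAINTY_WORDS),
        "hedging": rate(HEDGE_WORDS),
        "self_pronouns": rate(SELF_PRONOUNS),
        "other_pronouns": rate(OTHER_PRONOUNS),
    }

print(linguistic_features("he was absolutely very sure you saw them"))
# → {'certainty': 0.25, 'hedging': 0.0, 'self_pronouns': 0.0, 'other_pronouns': 0.25}
```

Rates like these, rather than raw counts, keep transcripts of different lengths comparable, which matters when courtroom statements range from a sentence to several minutes of speech.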

As for facial expressions, Mihalcea notes that people who are lying are more likely to look directly into the eyes of the person interviewing them. They also gesture with both hands more often, instead of one, because, Mihalcea suggests, they are trying to seem more convincing. (Of course, these are only patterns: if someone looks you in the eye and gestures with both hands while speaking, it does not mean that person is lying.)

These are remarkable signals that an AI can pick up once researchers give it examples to work with and learn from. But even Mihalcea herself admits her work is "not perfect." "As researchers, we were very excited that we were able to reach 75% accuracy." On the other hand, that means the error rate is one in four. "I do not think this AI can be used in practice, because of that error rate of up to 25%."

Ultimately, Mihalcea sees this technology as an aid to people. For example, it could flag that it noticed something "unusual" in someone's statement, and that person could then be questioned further. This is a common use of AI: technology that extends what people can do.
