Legal and ethical implications of using AI to recruit staff

Original authors: Ben Dattner, Tomas Chamorro-Premuzic, Richard Buchband, Lucinda Schettler


Digital innovations and advances in artificial intelligence (AI) have produced many tools for identifying and assessing potential job candidates. Many of these technologies promise to help organizations find the right person for a given role and screen out the wrong people faster than ever before.

These tools give organizations unprecedented opportunities to make human capital decisions based on data. They also have the potential to democratize feedback: millions of candidates can receive an assessment of their strengths and development areas, along with guidance in choosing a career and a suitable organization. In particular, we are seeing rapid growth in (and corresponding investment in) game-based assessments, bots that mine social media posts, linguistic analysis of candidates' writing samples, and video interviews that use algorithms to analyze speech content, voice tone, emotional state, non-verbal behavior, and temperament.

By disrupting the fundamentals of hiring and assessment, these tools leave open questions about their accuracy, as well as about their privacy, ethical, and legal implications. This is especially evident in comparison with time-tested psychometric instruments such as the NEO-PI-R, the Wonderlic test, Raven's Standard Progressive Matrices, or the Hogan Personality Inventory. All of these were developed scientifically and thoroughly validated in relevant workplaces, yielding a reliable correspondence between candidates' scores and their on-the-job effectiveness (with the supporting evidence published in credible, independent scientific journals). Recently, the US Senate has even raised concerns about whether new technologies (especially facial analysis) might undermine equal opportunity among candidates.

In this article, we focus on the potential consequences of new technologies for candidates' privacy, as well as on the protections candidates enjoy under the Americans with Disabilities Act and other federal and state laws. Employers understand that they cannot ask candidates about their marital status or political views, pregnancy, sexual orientation, physical or mental illness, or problems with alcohol, drugs, or lack of sleep. However, new technologies may be able to discern these factors indirectly, without the candidate's consent.

Before delving into the ambiguities of the brave new world of candidate assessment, it is worth looking back. Psychometric assessments have existed for more than 100 years and came into wide use after the so-called Army Alpha test for the US military, which sorted recruits into categories and estimated their likelihood of success in various roles. Traditionally, psychometrics falls into three broad categories: cognitive ability, or intelligence; personality, or temperament; and mental health, or clinical diagnosis.

After the Americans with Disabilities Act (ADA) was passed in 1990, employers were generally forbidden from inquiring about people's physical disabilities, mental health, or clinical diagnoses as part of pre-employment assessment, and companies that violated the law faced lawsuits and censure. In effect, a disability, physical or mental, is considered "private" information that an employer may not probe during an interview, just as it may not ask questions about a candidate's private life or take personal demographic information into account when making decisions.

Tests of cognitive ability and intelligence have been recognized as reliable predictors of job success across a wide range of professions. However, such assessments can be discriminatory if they adversely affect certain protected groups defined, for example, by gender, race, age, or national origin. If an employer uses an assessment found to have an adverse impact, based on relative outcomes across protected groups, it must prove that the assessment is job-related and predicts success in the specific role.
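In practice, adverse impact is often screened with the EEOC's four-fifths (80%) rule: if a protected group's selection rate is less than four-fifths of the rate of the highest-scoring group, the assessment is flagged for possible adverse impact. Below is a minimal sketch of that check; the group names and counts are hypothetical:

```python
def selection_rates(outcomes):
    """Per-group selection rate from {group: (selected, applied)} counts."""
    return {g: selected / applied for g, (selected, applied) in outcomes.items()}

def four_fifths_check(outcomes):
    """Flag groups whose selection rate is below 80% of the highest group's rate.

    This is the EEOC's four-fifths rule -- a screening heuristic, not a
    legal conclusion in itself.
    """
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: {"ratio": r / top, "flagged": r / top < 0.8} for g, r in rates.items()}

# Hypothetical applicant pools: (number selected, number who applied)
outcomes = {"group_a": (48, 120), "group_b": (24, 100)}
print(four_fifths_check(outcomes))
# group_b's rate (0.24) is only 60% of group_a's (0.40), so it is flagged
```

Failing this screen does not by itself make an assessment unlawful, but it shifts the burden onto the employer to show that the assessment is job-related, as described above.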

Personality assessments are less likely to expose employers to discrimination claims, since there is virtually no correlation between personality traits and demographic characteristics. It is also worth noting that the relationship between personality and job performance depends on context (that is, on the type of work).

Unfortunately, far less evidence has accumulated for the new generation of candidate-screening tools, which are increasingly used in pre-employment assessment. Many of these tools emerged as technological innovations rather than from scientifically derived methods or research programs. As a result, it is not always clear what exactly they assess, whether the hypotheses underlying them are sound, or whether they can be expected to predict a candidate's performance on the job. For example, the physical properties of speech and the human voice, long associated with personality traits, have been linked to individual differences in work outcomes. If a tool favors speech features such as modulation, tone, or a "friendly" voice that are not concentrated in any particular group of people, this poses no legal problem. But such tools may not have been scientifically validated, and therefore are not monitored for potential discrimination, which means an employer could be held liable for blindly following their recommendations. Moreover, there is as yet no convincing hypothesis or body of evidence on whether it is even ethical to screen people out on the basis of their voice, a property determined by physiology and largely beyond a person's ability to change.

Similarly, social media activity, on Facebook or Twitter for example, reflects a person's intelligence and personality traits, including their dark side. But is it ethical to mine this data for hiring purposes when users engage with these platforms for entirely different ends and never consented to having conclusions drawn from their public posts?

In the context of hiring, new technologies raise many new ethical and legal questions about privacy that, in our view, need to be discussed publicly, namely:

1) What temptations will companies face regarding the privacy of candidates' personal characteristics?

As technology advances, big data and AI will be able to infer personal characteristics with ever-greater accuracy. For example, Facebook likes can already be used to determine sexual orientation and race with significant accuracy, and political preferences and religious beliefs are just as easy to infer. Might companies be tempted to use such tools to screen out candidates, reasoning that because decisions are not made directly on the basis of these characteristics, doing so is legal? An employer may not violate any law simply by evaluating a candidate against such inferred information, but it exposes itself to legal risk if it bases hiring decisions on membership in a protected group, defined by place of birth, race, or native language, or on private information it has no right to consider, such as physical or mental illness. How courts will handle situations in which an employer relied on tools that use these indirect characteristics is not yet clear; what is clear is that acting on certain protected or private characteristics is illegal, regardless of how they were identified.
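The mechanism behind this temptation is easy to demonstrate: even when a protected attribute is never given to a screening model, a correlated "neutral" feature can reconstruct it. A toy sketch with synthetic data, purely for illustration (all names and numbers here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute -- never shown to the screen directly
protected = rng.integers(0, 2, n)

# A "neutral" proxy that happens to correlate with it, e.g. an interest
# score inferred from social media likes (synthetic here)
proxy = protected + rng.normal(0.0, 0.5, n)

# A naive screen thresholds the proxy -- and thereby recovers the
# protected attribute it was never told about
screened = (proxy > 0.5).astype(int)
accuracy = (screened == protected).mean()
print(f"protected attribute recovered with {accuracy:.0%} accuracy")  # ~84%
```

The point is not that any particular vendor works this way, but that excluding a protected characteristic from a model's inputs does not guarantee the model is blind to it.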

This may also apply to facial recognition software, as recent studies predict that facial recognition AI will soon be able to accurately infer candidates' sexual and political orientation, as well as their "internal state," including mood and emotions. How might the application of the ADA have to change as a result? Furthermore, the Employee Polygraph Protection Act generally prohibits employers from using lie detector tests in hiring, and the Genetic Information Nondiscrimination Act prohibits them from using genetic information in hiring decisions. But what happens if essentially the same information about truthfulness, deception, and genetic traits can be gathered with the tools described above?

2) What temptations will companies face regarding the privacy of candidates' lifestyles and activities?

Employers now have access to information such as a candidate's check-in at church every Saturday morning, the review they wrote of the dementia care facility where they placed an elderly parent, or their third divorce filing. All of these things, and many more, are easy to discover in the digital age. Big data follows us everywhere we go online, collecting information that can be processed by tools we cannot yet imagine, tools that could, in principle, tell an employer whether we are suited to certain roles. And big data will only get bigger: according to experts, 90% of all the data in the world was created in the last two years. With the expansion of data comes the potential expansion of its unfair use, leading to discrimination, whether intentional or accidental.

Unlike the European Union, which harmonized its approach to privacy under the General Data Protection Regulation (GDPR), the United States relies on a patchwork approach driven mainly by state law. States began enacting social media laws in 2012, prohibiting employers from demanding candidates' personal passwords as a condition of hiring; more than twenty states have passed laws of this kind. There has been far less legislative activity, however, on general privacy protection against new technologies in the workplace. Notably, California has passed legislation that potentially limits an employer's ability to use candidate or employee data. Overall, state and federal courts have yet to adopt a unified framework for analyzing how employee privacy should be protected from new technologies. The bottom line is that, for now, the fate of employee privacy in the era of big data remains uncertain. This puts employers in a conflicted position that calls for caution: emerging technologies can be extremely useful, but they give employers access to information previously considered private. Is it legal to use it in hiring? Is it ethical to examine it without the candidate's consent?

3) What temptations will companies face regarding the privacy of candidates related to disability?

The ADA covers both mental and physical conditions, and defines a person as disabled if a condition substantially limits a major life activity, if there is a record of such a limitation, or if other people perceive the person as having one. About ten years ago, the U.S. Equal Employment Opportunity Commission (EEOC) issued guidance stating that mental limitations should include the ever-expanding list of mental illnesses described in the psychiatric literature, making it easier for people to qualify for protection under the ADA. As a result, people who have significant difficulty communicating with others, concentrating, or interacting socially can fall within the category of people protected by this law.

Beyond raising new questions about disability, technology also presents new dilemmas concerning differences between people, demographic or otherwise. Cases have already been documented in which such systems exhibited learned biases, especially around race and gender. For example, Amazon built an automated recruiting program to screen resumes, and abandoned it upon realizing that it was not gender-neutral. To reduce such biases, developers balance the data used to train AI models so that all groups are adequately represented. The more information the technology has for training, the better it can control for the emergence of potential bias.
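As one simple illustration of such balancing, a common technique is to reweight training examples inversely to their group's frequency, so that each group contributes equally to the training signal. A minimal sketch, assuming a hypothetical dataset with a group label per sample:

```python
import numpy as np

def group_balance_weights(groups):
    """Per-sample weights that give every group equal total weight, so
    under-represented groups are not drowned out during training."""
    groups = np.asarray(groups)
    uniq, counts = np.unique(groups, return_counts=True)
    per_group = {g: len(groups) / (len(uniq) * c) for g, c in zip(uniq, counts)}
    return np.array([per_group[g] for g in groups])

# Hypothetical training set: 80 samples from group "a", 20 from group "b"
groups = ["a"] * 80 + ["b"] * 20
weights = group_balance_weights(groups)
print(weights[0], weights[-1])  # 0.625 for group "a", 2.5 for group "b"
# Many training APIs accept such weights, e.g. model.fit(X, y, sample_weight=weights)
```

Reweighting addresses representation in the training signal only; a deployed system still needs the kind of adverse-impact monitoring discussed above.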

In conclusion, we note that technology can already cross the boundary between a person's public and private traits, characteristics, and states, and there is every reason to believe this will only intensify in the future. Employers using AI, big data, social media, and machine learning will gain ever-greater access to candidates' private lives, personal characteristics, difficulties, and psychological states. There are no easy answers to the many new privacy questions we have raised above, but we believe they are all worthy of public discussion.
