Artificial Intelligence In Hiring: Do the Risks Outweigh the Advantages?

By Charles Smith

In an increasingly competitive workforce, employers are searching for ways to hire quality candidates efficiently.  One method employers use to lower costs while increasing efficiency is Artificial Intelligence (AI), which can assist with the tedious job of sorting through countless resumes.  While AI undoubtedly offers significant advantages to employers, it also carries many risks.  The “pre-existing real-world data” that an AI system relies on for training can be one source of these risks.  “[Al]though an AI system itself does not have any biases, the information humans choose to use in the system may be biased.”  The full effects of AI on the hiring process have yet to be seen; accordingly, employers need to take steps to ensure they do not expose themselves to liability while implementing this new technology.

What Are The Risks?

Biased Outcomes

As with human intelligence, AI can become tainted with unintentional biases.  AI will not automatically eliminate all biases in an employer’s hiring process; the results from AI are often only as good as the information employers choose to use in the system.  Employers may be led to believe that simply having an AI system will automatically produce unbiased results, but this is not necessarily the case.  The “[r]isks of unlawful practices may arise, for example, when an algorithm looks for applicants with the same characteristics as those possessed by existing managers, . . . but minorities or other groups are not currently represented in the workforce.”  See Richard R. Meneghello, Sarah J. Moore & John T. Lai, Counseling Employers on the Legal Implications of Artificial Intelligence and Robots in the Workplace, Lexis Practice Advisor Journal, April 18, 2018.
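
To make that mechanism concrete, the simplified sketch below is a hypothetical illustration (not any vendor’s actual product): a screening tool that scores applicants by how closely they resemble the employer’s current managers will reward incidental traits of that group, such as a shared alma mater or zip code, rather than job-related qualifications.

```python
# Hypothetical illustration: a toy screening model that scores applicants by
# similarity to the employer's existing managers.  If the current managers all
# share incidental traits, the model favors those traits even though they are
# not job-related, reproducing whatever imbalance already exists.

from collections import Counter

# Assumed sample data: traits of the existing management team (the "training" data).
current_managers = [
    {"school": "State U", "prior_title": "analyst", "zip": "10001"},
    {"school": "State U", "prior_title": "analyst", "zip": "10001"},
    {"school": "State U", "prior_title": "associate", "zip": "10002"},
]

def similarity_score(applicant, managers):
    """Count how often each of the applicant's traits appears among the managers."""
    trait_counts = Counter(
        (field, value) for m in managers for field, value in m.items()
    )
    return sum(trait_counts[(field, value)] for field, value in applicant.items())

applicant_a = {"school": "State U", "prior_title": "analyst", "zip": "10001"}
applicant_b = {"school": "City College", "prior_title": "analyst", "zip": "11201"}

# Applicant A scores higher purely by resembling the incumbent group,
# even if Applicant B is equally qualified for the job itself.
print(similarity_score(applicant_a, current_managers))  # 7
print(similarity_score(applicant_b, current_managers))  # 2
```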

Difficulty in Explaining How AI Reached Its Conclusion

Employers must be able to defend their hiring practices in litigation.  The ability of AI to learn as it processes information may make it difficult for employers to explain how the AI reached a particular outcome.  See Robert Kantner & Carl Kukkonen, An Introduction to the Risks of AI for General Counsel, Legal Tech News, October 11, 2018.  If an employer cannot demonstrate how the AI came to its conclusion, the employer may struggle to establish that the challenged practice is job-related for the position in question and consistent with business necessity, potentially exposing it to liability.
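
One practical safeguard, sketched below as a hypothetical example (the function name, record fields, and file format are illustrative assumptions, not a prescribed standard), is to keep an audit log of exactly which inputs the screening tool received and what it returned for each applicant, so the employer can later reconstruct and explain individual outcomes.

```python
# Hypothetical sketch: log every automated screening decision so the employer
# can later show what the system considered and how the outcome was reached.

import json
from datetime import datetime, timezone

def log_screening_decision(applicant_id, inputs, score, threshold,
                           log_file="screening_log.jsonl"):
    """Append an auditable record of a single automated screening decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "applicant_id": applicant_id,
        "inputs": inputs,            # the exact fields the tool saw
        "score": score,              # the tool's output
        "threshold": threshold,      # the cutoff applied
        "advanced": score >= threshold,
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example usage with made-up values:
log_screening_decision(
    applicant_id="A-1042",
    inputs={"years_experience": 4, "certifications": ["PMP"]},
    score=0.72,
    threshold=0.65,
)
```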

Steps To Advise An Employer To Take

Due to the limited case law on this particular issue, many questions have yet to be answered, and the potential risks that accompany this new technology are not yet clear.  A recent article by attorneys at Fisher & Phillips, LLP provides best practices for how employers can implement AI in the workplace.  Some of the top takeaways include:

  1. Only Use Data that is Job-Related and Consistent with Business Necessity.
    An employer should apply this principle throughout the hiring process, not just when using AI.  However, employers should not assume that an AI system will eliminate all discrimination.
  2. Only Use AI to the Extent Necessary to Diminish Certain Human Biases.
    Currently, AI is not suited to control all aspects of the hiring process.  Keep the process simple so that Human Resources can adequately understand, implement, and defend it in litigation.
  3. Have An Alternative Process to Accommodate Disabilities.
    Depending on how the employer uses the AI system, the employer may need to ensure the system is not inadvertently screening out individuals with disabilities.

In sum, AI is a long way from eradicating all bias in employment decisions.  AI can increase efficiency in the hiring process; however, if used improperly, it may also increase an employer’s liability.  AI is not a cure-all for discrimination in hiring, but it may prove to be a step in the right direction.