4 ways to avoid bias when your HR agency uses AI recruitment tools

Opinion is split on the use of artificial intelligence in human resources, and organizations should know the pros and cons of using these systems, particularly if they outsource to service providers that use AI recruitment tools. This article summarizes the controversy, touches on proposed Canadian legislation, and offers four best practices to avoid introducing bias into hiring processes — including steps to stay compliant with privacy laws and employment standards.

Artificial intelligence in recruitment: Friend or foe?

AI-driven technology is used in talent recruitment for everything from ad targeting and resume scoring to analyzing publicly available online information about candidates and evaluating their communication and technical skills during interviews.

Champions of the technology look at machine learning tools as a boon to the under-resourced HR department looking for time- and cost-efficient ways to fill vacant positions and manage a landslide of applicants. Some even say AI can reduce bias and meet internal and legislated equity goals in the hiring process — after all, how could machines be prejudiced?

Detractors argue the opposite, pointing to past examples of AI recruiting software that was discontinued after its algorithms were found to discriminate against women.

Problems arise, critics say, because machines are trained to identify the best candidates using existing data, and that data encodes systemic biases. A resume scoring system, for example, may look for (or screen out) a particular gender, pattern of employment, education or even postal code, based on the profile of the people who have previously held the role. This reproduces past hiring biases and eliminates potentially excellent candidates simply because they break the mold. It can also create human rights or pay equity liability for companies — even if the bias was wholly unintended.
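To make the mechanism concrete, here is a minimal, hypothetical sketch using synthetic data and scikit-learn. A scorer trained on past hiring decisions that favoured one postal-code area learns a large weight on that proxy feature, so equally skilled candidates from the other area score lower. The features, data and model are invented for illustration only.

```python
# Illustrative only: a toy resume scorer trained on synthetic "historical hire"
# data, showing how a model can learn a proxy feature (here, postal-code area)
# that reflects past hiring patterns rather than actual skill.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

skill = rng.normal(size=n)                # genuine qualification signal
area = rng.integers(0, 2, size=n)         # 0/1 stand-in for postal-code area

# Synthetic historical labels: past recruiters favoured area 1 regardless of skill.
hired = ((skill + 1.5 * area + rng.normal(scale=0.5, size=n)) > 1.0).astype(int)

X = np.column_stack([skill, area])
model = LogisticRegression().fit(X, hired)

print("learned weights [skill, area]:", model.coef_[0])
# The weight on `area` is large: the model reproduces the historical bias,
# so equally skilled candidates from area 0 score systematically lower.
```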

According to a recent study from Cambridge University, technology that uses image and video analysis to assess a candidate’s organizational fit, emotional intelligence and problem-solving ability during an interview is even more concerning. One of the lead researchers has said this technology has no scientific basis and is little more than “modern phrenology.”

Given that human rights legislation would not allow a company to merely blame discriminatory hiring practices on “the algorithm,” companies ought to be cautious in entrusting their hiring to software without significant human oversight.

What laws apply to AI recruitment tools?

There is currently no provincial or federal regulation of private sector development and use of artificial intelligence in Canada — although there are laws regarding how employers in Canada can collect, use and disclose identifiable employee data (such as gender, race and level of education). That’s set to change if the Artificial Intelligence and Data Act proposed in federal Bill C-27 passes into law. The draft legislation, which was introduced in June 2022 and will have significant business impacts, is meant to protect individuals from harm — including economic loss — and prevent biased output when machine learning software is used.

The Canadian AI proposal takes its cue from the Treasury Board of Canada Secretariat directive on automated decision-making and proposed EU legislation. Canada’s proposal focuses on high-impact systems. While the definition of high impact will be established by regulation, AI systems that can have significant consequences on a person’s wellbeing, employment and economic stability are expected to be considered high impact. Recruitment for a job certainly falls into this category, given the difference it could make in someone’s life if they are eliminated from a job competition or discriminated against by biased algorithms.

In Québec, privacy laws are changing following the passage of Bill 64. As of September 2023, organizations will have to disclose if they use a system where decision-making is based on entirely automated processing of personal information. They must also be able to provide information about the personal information that was used to render an automated decision and the factors, parameters and reasons that led to that decision. Organizations may also have to perform a privacy impact assessment before using automated-decision systems as a result of these regulatory changes.

4 best practices when your recruiter uses AI tools

If your organization relies on third-party recruiters to help identify and screen candidates, you may have little visibility into the AI tools being used — but you need to do your due diligence both to avoid bias and to comply with legislation. These four best practices will help you do so.

1. Ask questions about the AI recruitment software

To be a meaningful conversation, the right parties need to be talking. Ask the recruitment agency if you can meet with the data scientist who looks after the software product the recruiter uses, the developer supervising work on the project and the business contact at the software company.

Bring along someone from your IT or software development group who is comfortable talking about — and translating — the technological aspects of AI. You’ll also need someone familiar with systemic bias (think chief talent officer or the head of equity, diversity and inclusion). Having your own data scientist at the table can be very helpful. If your organization doesn’t have one, François Joli-Coeur can help you identify consultants familiar with AI software and bias.

Focus your conversation on the three topics below.

  • Software development. Is the software development team diverse? Does the software focus on skills over demographics? Does it remove gender, names, photographs and addresses — information that can introduce bias — from the database? Is the dataset the system uses to learn large and diverse? Does the service provider’s contract with the AI software vendor include a representation about the software’s fairness? The answers to all of these questions should be “yes.”
  • Human oversight. While some software claims to use algorithms to increase the diversity of job candidates, human involvement may still be necessary to identify bias. Does the recruiting agency audit its tools for bias (a minimal audit sketch follows this list)? And are the humans providing that oversight trained to spot bias, discrimination and inequitable practices that may violate applicable human rights legislation or create liability? If biases are detected, is the software retrained to counterbalance them?
  • External review. Has the system been reviewed by a third party to determine whether it was built in a responsible way? While there isn’t an international standard or certification for AI currently, organizations can be hired to perform an ethics and human rights assessment. Organizations must be sure to consider employment, immigration, pay equity and human rights liabilities in this respect, particularly as they pertain to extending offers of employment to candidates (or excluding candidates from consideration). 
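As a starting point for the audits described above, one widely used heuristic is the "four-fifths" rule: flag any group whose selection rate falls below 80% of the highest group's rate. The sketch below is illustrative only — the group labels, sample data and 0.8 threshold are assumptions, and a real audit should involve counsel and statistical expertise.

```python
# A minimal sketch of a disparate-impact check using the "four-fifths" rule:
# compare each group's selection rate to the highest group's rate and flag
# ratios below 0.8. Group labels and threshold are illustrative.
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group, was_selected) pairs."""
    applied, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in outcomes:
        applied[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / applied[g] for g in applied}

def adverse_impact_flags(outcomes, threshold=0.8):
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: (rate / top < threshold) for g, rate in rates.items()}

# Example: group B's rate (20%) is below 0.8x group A's rate (50%), so B is flagged.
sample = [("A", True)] * 50 + [("A", False)] * 50 + \
         [("B", True)] * 10 + [("B", False)] * 40
print(selection_rates(sample))       # {'A': 0.5, 'B': 0.2}
print(adverse_impact_flags(sample))  # {'A': False, 'B': True}
```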

2. Look carefully at your contract

Typically, the HR service provider will send only the candidates that the AI software identifies as suitable. It is possible to negotiate your contract to allow your HR team, including your equity, diversity and inclusion committee, to review the candidates that were rejected so you can do your own due diligence to surface any biases.

Machines learn through data, and a large dataset is one way developers can combat bias. As a result, service providers often include the right to anonymize the information that is collected from applicants so it can be used by the software company to refine the AI recruitment tool’s algorithms. This has privacy implications (see #3 below), and it may be something you wish to remove from your contract. 
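For illustration, field-level anonymization might look like the hypothetical sketch below (the field names are invented). Note that simply dropping direct identifiers is often not enough: combinations of the remaining fields can still re-identify an applicant, which is one reason this contract right deserves scrutiny.

```python
# Illustrative sketch of stripping direct identifiers from an applicant record
# before it is shared for model refinement. Field names are hypothetical, and
# real anonymization must also consider indirect identifiers (e.g. postal code
# plus job title can re-identify someone), which simple field removal misses.
DIRECT_IDENTIFIERS = {"name", "email", "phone", "photo_url", "address", "postal_code"}

def anonymize(record: dict) -> dict:
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

applicant = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "postal_code": "H2X 1Y4",
    "years_experience": 7,
    "skills": ["python", "sql"],
}
print(anonymize(applicant))  # {'years_experience': 7, 'skills': ['python', 'sql']}
```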

3. Comply with privacy legislation

There may not be any laws governing the development and use of AI in Canada yet, but privacy commissioners can apply aspects of privacy legislation to AI. Under privacy laws, private sector organizations must generally:

  • Only use personal information for the purpose it was collected unless consent was received.
  • Have security safeguards in place.
  • Be open and transparent about how personal information is being used in algorithms.
  • Be able to explain how they avoid bias and discriminatory outcomes in their application of AI to personal information. 

We recommend you tell potential candidates that artificial intelligence is being used in your hiring processes and that their data may be anonymized and used by the AI product manufacturer for machine learning. You should also be able to answer any questions a candidate has about the use of AI recruitment tools by the service provider.

4. Develop an algorithmic impact assessment scorecard

Large organizations that hire for higher-level, career-oriented positions may want to consider developing an algorithmic impact assessment tool if their HR service providers use AI. In the simplest terms, an algorithmic impact assessment helps organizations better understand and manage the risks associated with artificial intelligence. The federal government has developed a scorecard and made it available for use by all.
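To illustrate the general shape of such a scorecard — this is not the Treasury Board's official tool, and the questions, weights and thresholds below are invented — a simple version might accumulate a weighted risk score and map it to an impact level that drives the oversight required:

```python
# A simplified, illustrative impact scorecard. Questions, weights and
# thresholds are assumptions made for this sketch, not the federal
# Algorithmic Impact Assessment's actual content.
QUESTIONS = {
    "fully_automated_decision": 3,   # no human review of individual outcomes
    "affects_employment": 3,         # decision affects hiring or dismissal
    "uses_sensitive_data": 2,        # e.g. data revealing protected grounds
    "vendor_system_unaudited": 2,    # no third-party bias/ethics review
    "candidates_not_informed": 1,    # no disclosure that AI is used
}

def impact_level(answers: dict) -> str:
    score = sum(w for q, w in QUESTIONS.items() if answers.get(q, False))
    if score >= 7:
        return "high: require human review and external audit"
    if score >= 4:
        return "moderate: require documented human oversight"
    return "low: standard monitoring"

answers = {"fully_automated_decision": True, "affects_employment": True,
           "candidates_not_informed": True}
print(impact_level(answers))  # high: require human review and external audit
```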

If you decide to proceed with your own algorithmic impact assessment, we can explain what you need to do, assist with assessment criteria, help determine potential impact, identify mitigations and connect you with technical experts so that the legal and technical analysis is integrated.

Next steps

In the end, AI hiring tools are nothing more than machines crunching numbers. It's up to the humans who develop the software, and those who use it, to be aware of the potential for bias and take the steps necessary to avoid it.

If you’d like to have a conversation about the use of AI in your hiring processes, particularly in preparation for upcoming regulations, reach out to any of the key contacts below. 

We can help you prepare for a conversation with your recruitment agency, suggest wording for the contract, ensure you’re complying with privacy legislation, and advise you on the pros and cons of developing your own algorithmic impact assessment based on the Treasury Board Secretariat’s model. Then we can guide you through the next steps once your hiring decisions have been made, including offers of employment, collection of employee data and managing the employment relationship. 

In the end, you’ll feel confident that the AI recruitment tools used by your service provider aren’t introducing bias into your hiring processes and will be well prepared for existing and new legislation.

Key Contacts