
Regulating artificial intelligence: Preparing your business for the future

Advances in artificial intelligence and machine learning (AI and ML) are quickly pushing the conversation about data management into a new phase, one that includes how to regulate these technologies. Canada has an outsized cluster of both startups and established technology companies in this sector, and Canadian businesses have been deploying such technology in an ever-increasing array of financial, medical and consumer products. AI and ML are rapidly moving past the disruptor and differentiator phase to become a requirement for BLG’s clients across various industries.

In a previous report, we examined the challenges that AI and ML will present to existing product liability regimes as courts begin to grapple with the novel issues that accompany an entirely new technology. AI and ML are not just the next phase in the development of the internet and connected products, but an entirely new way of gathering, creating and using incredibly vast amounts of data and personal information. The conversation now is about how AI and ML software use data, and how the software itself learns and develops. As a result, how the law addresses AI- and ML-infused products will be fundamentally altered.

In this follow-up, we build on the observation that the rules surrounding AI and ML have run into the pacing problem that accompanies transformative technologies: regulators around the world have only begun to craft rules to guide the development, implementation and use of AI and ML. The European Commission, leading the way much as it did with privacy and the introduction of the General Data Protection Regulation (GDPR) in 2018, issued its Proposal for a Regulation on a European Approach for Artificial Intelligence in April 2021. This was (and remains) the first attempt to craft a broad legal framework governing the risks surrounding AI and ML.

In Canada, the regulatory push is in its infancy. To close out 2021, the federal government renewed its commitment to establishing a “digital policy task force” to position Canada as a leader in the digital economy and in shaping global governance of emerging technologies. This included an effort to “support artificial intelligence innovations and research in Canada,” as well as to advance standards and coordination on AI internationally.

Ambitious, certainly, but the coming impact on BLG’s clients will be profound. Interestingly, McKinsey & Company has noted that companies seeing the highest returns on investments in AI are far more likely to report engaging in active risk mitigation. This should not be surprising. In the same research, McKinsey noted that in 2020 only 48 per cent of organizations reported recognizing regulatory-compliance risks with AI, and even fewer (38 per cent) reported actively working to address those risks. An even smaller percentage of companies surveyed recognized the other risks that accompany AI, such as risks to reputation, privacy and fairness.

What you need to know about the EU risk-based proposal for AI regulation

The EU proposal lays out a framework with the concept of risk as its core principle, sorting AI systems into three categories:

  1. unacceptable risk;
  2. high risk; and
  3. limited and minimal risk.

Those AI systems falling within the category of unacceptable risk – those that pose a clear risk to an individual’s security and fundamental rights through the use of subliminal, manipulative or exploitative techniques, real-time remote biometric identification in public spaces for law enforcement, or social scoring – would be banned. At the other end, systems we are already familiar with, such as AI chatbots, video and computer games, spam filters, and customer and market segmentation systems, would be subject to little oversight beyond transparency requirements and ensuring users are aware that they are interacting with an AI system. The belief is that these low-risk systems do not carry the same risks to health and safety or to EU values.

Those AI systems that fall within the high-risk category would be subject to the strictest requirements, including:

  • the implementation of a risk-management system;
  • technical documentation and record keeping;
  • transparency;
  • human oversight;
  • cybersecurity;
  • data quality;
  • post-market monitoring; and
  • conformity assessments and reporting obligations.

Users of these high-risk AI systems must also be told about the design of the system, and post-sale systems must be implemented to ensure ongoing compliance. The EU proposal identifies several domains in which the use of AI could be considered high-risk:

  1. critical infrastructure;
  2. education and vocational training;
  3. employment;
  4. access to and enjoyment of essential private services and public services and benefits;
  5. immigration, asylum and border control management; and
  6. the administration of justice and democratic processes.
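
Purely as an illustration of this tiered structure, and not of the proposal’s actual legal tests (which are more nuanced), the mapping from risk category to obligations can be sketched in a few lines of Python; the tier names and obligation lists below paraphrase the proposal, and the identifiers are ours, not the regulation’s.

    from enum import Enum

    # Illustrative only: a rough encoding of the EU proposal's risk tiers
    # and the obligations attached to each, paraphrased from the proposal.
    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable"           # banned outright
        HIGH = "high"                           # strict ex ante and ongoing requirements
        LIMITED_OR_MINIMAL = "limited/minimal"  # transparency obligations only

    OBLIGATIONS: dict[RiskTier, list[str]] = {
        RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
        RiskTier.HIGH: [
            "risk-management system",
            "technical documentation and record keeping",
            "transparency",
            "human oversight",
            "cybersecurity",
            "data quality",
            "post-market monitoring",
            "conformity assessments and reporting",
        ],
        RiskTier.LIMITED_OR_MINIMAL: [
            "inform users they are interacting with an AI system",
        ],
    }

    # For example, an AI chatbot falls into the lowest tier:
    print(OBLIGATIONS[RiskTier.LIMITED_OR_MINIMAL])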

The proposed regulation, much like the GDPR, is extraterritorial: any AI system producing “output” within the EU would be subject to it, no matter where the provider or user is located. Also subject to the regulation would be providers located within the EU, systems placed on the market in the EU, and AI systems used within the EU.

The proposed regulation also contemplates fines that exceed even the GDPR’s maximum: up to €30 million or 6 per cent of annual global revenue, whichever is higher, for the use of prohibited AI systems (those presenting unacceptable risks); up to €20 million or 4 per cent of annual global revenue for other violations; and up to €10 million or 2 per cent of annual global revenue for providing incorrect or misleading information to authorities. For a company with €1 billion in annual global revenue, for example, the ceiling for the most serious violations would be €60 million, double the nominal €30 million floor.

A tentative U.S. approach?

Shortly before the official release of the EU proposal, the U.S. Federal Trade Commission published a statement acknowledging its authority under existing law to pursue enforcement actions against organizations that fail to mitigate AI bias or engage in other unfair or harmful practices through the use of AI, noting that U.S. law “prohibits the sale or use of … racially biased algorithms.” This was preceded a month earlier by a request from the largest federal financial regulators for information and comment on financial institutions’ use of AI and ML.

In May 2021, the U.S. Consumer Product Safety Commission (CPSC), which is charged with protecting consumers from unreasonable risks of injury from products, published its report, Artificial Intelligence and Machine Learning in Consumer Products, recommending a program to identify and analyze the potential hazards associated with AI and ML in consumer products. It was a tentative document, recommending continued exploration of opportunities to develop voluntary consensus standards for AI and ML, and of collaborative efforts through stakeholder engagement. The CPSC, with its focus on the safety of consumer products, also proposed developing the means to screen for and identify AI- and ML-capable products. In addition, it developed checklists and tools for investigators and data scientists to work with stakeholders to identify and assess hazards associated with AI, acknowledging a pre-regulation stage in which AI capabilities are evaluated to determine whether they contribute to hazards in consumer products. The CPSC is looking to establish voluntary consensus standards and to develop a program to evaluate the potential safety impact of AI throughout the design, development and deployment lifecycles of consumer products that use it.

The developing regulatory landscape in Canada and its impact on business

Canada has no regulatory framework specifically governing AI. To the extent that it exists at all, the governance of AI is highly fragmented, percolating into Canadian legislation primarily through privacy laws. The landscape is also dotted with guidelines, directives, declarations, statements and proposals on AI and automated decision-making that have been issued by various agencies and other stakeholders at both the federal and provincial levels.

A few of these developments are already creating, or will shortly create, concrete effects.

Federally, the Treasury Board Secretariat issued the Directive on Automated Decision-Making (Directive), which came into effect on April 1, 2020. The Directive governs the use of automated decision systems, defined as “any technology that either assists or replaces the judgement of human decision-makers. These systems draw from fields like statistics, linguistics, and computer science, and use techniques such as rules-based systems, regression, predictive analytics, machine learning, deep learning, and neural nets.” The Directive reflects the desire to govern the use of AI to make, or assist in making, administrative decisions in order to improve service delivery. Its scope is very limited: it applies only to a defined class of agencies within the federal government, and does not apply to AI systems used by provincial governments, municipalities, or provincial agencies such as police services. Importantly, it does not apply to the private sector. That said, various aspects of the Directive, such as the requirement to undertake algorithmic impact assessments in relation to the deployment of AI technologies, may foreshadow the direction that private sector regulation will ultimately take.

At the provincial level, Québec’s Bill 64, which received assent on September 22, 2021, introduced significant changes to Québec private sector and public sector privacy law. Bill 64 does not refer to AI or ML technologies directly, instead borrowing the GDPR’s broader “automated processing” terminology. As a result of Bill 64’s reforms, as of September 22, 2023, organizations must inform an individual when their personal information is used to render a decision based exclusively on automated processing of that information. Organizations must also, at an individual’s request, inform them about:

  1. the personal information used to render the decision;
  2. the reasons and the principal factors and parameters that led to the decision; and
  3. the right of the individual to have the personal information used to render the decision corrected.

Organizations must also provide the individual with an opportunity to submit observations to a member of the organization who is in a position to review the decision.
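
Purely as an illustration (Bill 64 does not prescribe any technical format), here is a minimal sketch, in Python, of how an organization might record an automated decision so that it can produce these disclosures on request; every class and field name is hypothetical.

    from dataclasses import dataclass
    from datetime import date

    # Hypothetical record structured around the three disclosures Bill 64
    # requires on request: the personal information used, the reasons and
    # principal factors and parameters, and the right to correction.
    @dataclass
    class AutomatedDecisionRecord:
        decision_id: str
        decision_date: date
        personal_information_used: list[str]  # e.g. ["credit score", "postal code"]
        principal_factors: dict[str, float]   # factor name -> weight or parameter
        reasons: str                          # plain-language explanation
        correction_contact: str               # where to direct correction requests
        reviewer_contact: str                 # a person able to review the decision

        def disclosure(self) -> str:
            """Plain-language summary to give an individual on request."""
            factors = ", ".join(f"{k} (weight {v})" for k, v in self.principal_factors.items())
            return (
                f"Decision {self.decision_id} of {self.decision_date} relied on: "
                f"{', '.join(self.personal_information_used)}. "
                f"Principal factors and parameters: {factors}. Reasons: {self.reasons}. "
                f"You may have this information corrected via {self.correction_contact} "
                f"and submit observations to {self.reviewer_contact}."
            )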

These developments are merely the opening act for what we can expect over the next few years.

At the federal level, we can expect legislative reforms to address AI and ML, at the very least through changes to privacy laws. In 2020, the federal government introduced Bill C-11, the Digital Charter Implementation Act, 2020, which aimed to strengthen privacy protections for Canadians in the digital age. The proposed legislation sought to create new transparency requirements over the use of personal information, requiring organizations that use AI to provide, in plain language, a general account of the use of such systems to make predictions, recommendations or decisions about individuals that could have significant impacts on them. Bill C-11 died on the order paper when Parliament was dissolved ahead of the federal election, but we can expect it to be resurrected in some form in the near term.

Ontario concluded a consultative process throughout 2021 aimed at developing a “trustworthy” AI framework, but has not yet passed any specific regulations; nor has any other province apart from Québec.

This does not mean that AI and ML systems and processes currently employed by the private sector escape scrutiny altogether. As in the U.S., existing consumer protection law already applies: the Canada Consumer Product Safety Act (CCPSA), which covers all “consumer products,” is meant to address and prevent “dangers to human health or safety that are posed by consumer products in Canada,” and therefore reaches manufacturers of products incorporating AI and ML systems and processes. But this focus on health and safety does not reflect the full scope of AI’s potential effects, leaving much uncovered.

The future of AI and ML

In Canada, we can expect a further regulatory push in the near future, as it is clear that regulators around the world are looking to shape the development and use of AI and ML systems and processes. The GDPR has led to a wave of privacy law reform throughout the world, including in Canada, and it would be reasonable to expect the EU’s proposals for AI regulation to exert a similar influence on the international development of rules to govern AI.

Given the interconnected nature of modern information technology systems, we can expect future domestic AI regulatory frameworks, like privacy laws, to have extraterritorial effect, and in consequence there will be strong motivation towards global harmonization. Early signals of international cooperation on these matters are already surfacing in some quarters. For example, Health Canada, jointly with the U.S. Food and Drug Administration (FDA) and the U.K.’s Medicines and Healthcare products Regulatory Agency (MHRA), identified 10 guiding principles to inform the development of AI- and ML-infused medical devices, focused on providing a foundation for best practices. Moreover, in September 2021, representatives from both sides of the U.S.-EU Trade and Technology Council met for the first time to discuss coordination of “key global technology.” This included a discussion of AI systems. In its public statement on AI systems, the Council expressed its “willingness and intention to develop and implement AI systems that are innovative and trustworthy and that respect universal human rights and shared democratic values.” As for cooperative regulatory action, the Council indicated that “policy and regulatory measures should be based on, and proportionate to the risks posed by the different uses of AI” and committed itself to “a risk-based regulatory framework for AI.”

Key takeaways for your business

Canadian businesses using AI and ML in their products and services should pay close attention to global legislative and regulatory developments in this area, as their impact on the direction taken by Canadian legislators and regulators is likely to be significant. To future-proof your operations as you develop AI and ML systems and processes, consider implementing a risk-management program that commits to the following steps (a minimal sketch of what the first two steps might look like in practice follows the list):

  1. conduct a review and inventory of all AI systems and processes used by your organization;
  2. conduct algorithmic impact assessments to determine the risk (and level of risk) associated with each AI system and process, and document how each risk was addressed, mitigated or resolved;
  3. consider what new tools and processes your company will require to give effect to those mitigations or resolutions – from ensuring material human intervention in certain decisions to diagnosing bias in ML training data sets and output;
  4. to the extent possible, create clear explanations of what your AI systems and processes do, with what information, in order to be able to explain the decisions rendered by your AI systems to affected individuals;
  5. (re)consider existing agreements and contracts to address complex issues of data ownership, usage and modelling, and learning algorithms, in order to assess and allocate costs and liability in unwanted scenarios; and
  6. enact AI, ML and data governance programs across your business, with rules ensuring explainability, reliability, fairness, transparency, interpretability and trustworthiness.
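
To make the first two steps concrete, here is a minimal sketch, in Python, of an AI-system inventory entry paired with a documented impact assessment; it assumes no particular assessment framework, and every name in it is hypothetical.

    from dataclasses import dataclass, field
    from enum import Enum

    class Risk(Enum):
        LOW = 1
        MEDIUM = 2
        HIGH = 3

    # Hypothetical inventory entry: each AI system the organization uses,
    # its assessed risk level, and a record of how each identified risk
    # was addressed, mitigated or resolved.
    @dataclass
    class AISystemAssessment:
        name: str                      # e.g. "credit-scoring model v3"
        purpose: str                   # the decisions it makes or supports
        personal_data_used: list[str]  # categories of personal information
        risk_level: Risk
        risks_identified: list[str] = field(default_factory=list)
        mitigations: dict[str, str] = field(default_factory=dict)  # risk -> response

        def unmitigated_risks(self) -> list[str]:
            """Risks identified in the assessment with no documented response."""
            return [r for r in self.risks_identified if r not in self.mitigations]

    # Usage: flag high-risk systems with undocumented risks for review.
    inventory: list[AISystemAssessment] = []
    for system in inventory:
        if system.risk_level is Risk.HIGH and system.unmitigated_risks():
            print(f"{system.name}: review required for {system.unmitigated_risks()}")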

The bottom line: be proactive and prepared for future AI and ML regulation in Canada, and anticipate regulatory alignment across the world as governments look to harmonize their approaches to the economic impact and benefits of AI.
