


The professionalism series: Transforming the legal landscape with artificial intelligence


In BLG’s latest two-part Professionalism Series: Transforming the Legal Landscape with Artificial Intelligence, we welcomed artificial intelligence (AI) experts and legal practitioners from across sectors to discuss current and future uses of AI across industries, including legal, as well as the ethical considerations surrounding these emerging technologies.

View the webinars to get the full picture of these insightful conversations.

AI continues to transform the corporate landscape, including in law. The rapid advancement of AI creates a need to understand how our existing legal frameworks may need to adapt to this fundamentally disruptive technology. In the first part of this series: How AI and the Law Intersect, Sinead Bovell, United Nations speaker and multi-award-winning AI ethicist, joined BLG’s Edona Vila to discuss the fundamentals of AI and how best to prepare for its adoption in the legal industry.

In the second part of this series: Ethical AI considerations and the future of law, Edona leads an insightful panel discussion about where AI currently stands from a legal perspective and where it may be headed in the future. While using AI has many benefits, if left unchecked it can raise significant ethical and safety concerns. This panel of speakers explored the ethical considerations surrounding AI in the legal industry, including ways in which the use of AI may be shifting core legal values and how we can ensure that these crucial values are preserved in the technological transition. Additionally, the panelists discussed a specific use case regarding the deployment of an AI solution in the health care setting. As part of this panel discussion, BLG Partner François Joli-Coeur reviewed the state of AI law in Canada.

Part two panelists

  • Edona Vila, Partner, Technology Sector, BLG
  • Kelly Friedman, Senior Legal Counsel & National Leader, BLG Beyond
  • François Joli-Coeur, Partner, Privacy and Cyber Security, BLG
  • Lisa Chamandy, Chief Knowledge & Innovation Officer, BLG
  • Andrew Terrett, Director of Legal Technology and Service Delivery at BLG
  • Melanie de Wit, Chief Legal Officer, Unity Health

Key takeaways

  • AI technology is continuously developing, with significant growth potential across sectors.
  • The outcomes of AI systems are largely based on the data they were trained on and the choices of the programmers who built them (i.e., the decision to include or omit a variable, which demographics a facial recognition algorithm was tested on, etc.).
  • Depending on the data sets used to train AI systems and the programming decisions made, new biases can be introduced and existing biases can be amplified — appropriate controls are important to mitigate ethical problems that may arise.
  • AI regulation tends to be highly industry-specific, which can be challenging for lawmakers.
  • Laws and regulations are important, but they are lagging behind AI advancements.
  • Human rights, privacy, fairness, transparency and accountability are some of the issues at the forefront of considerations for legislators.
  • There are workplace considerations around the use of AI — such as algorithmic hiring or algorithm-driven decisions about customers — and organizations can benefit from appropriate governance and policies to mitigate risk.
  • Keeping a human perspective is still crucial: AI tools are helpful, but they are not perfect.
  • Law firms generally employ some AI in the form of machine learning and natural language processing, but much of the current AI is not well understood within them.

Anywhere data exists, AI will play a role

Right now, we are seeing the emergence of practical generative AI systems that produce content that is indistinguishable from human-generated content, along with other types of AI that analyze data to help drive productivity and predictability. In the future, however, AI is expected to be far more integrated and more outwardly visible in our everyday lives and work. Currently, like many industries, law firms employ AI in the form of machine learning or natural language processing, which is particularly helpful in eDiscovery. There is still more work to be done on integrating more advanced forms of generative AI into law firms. Moving into the future, everyone will likely have their own ‘AI agent’ that learns their habits and helps increase their efficiency at work and in general. This is just one aspect of the direction AI is heading, which is exciting, but people and legislation will have to keep up with a rapidly changing environment.

Current legal framework

There is no AI-specific legal framework currently in place in Canada, but privacy law applies to AI systems, and new privacy laws emerging in Canada take AI concerns into consideration. The Government of Canada is in the process of introducing legislation that specifically addresses the design, development and use of AI via the proposed Artificial Intelligence and Data Act (AIDA).

In its current form, AIDA’s regulatory scheme is aimed at preventing “high impact AI systems” from causing harm and generating biased outputs based on prohibited grounds under the Canadian Human Rights Act. Currently, there are no clear accountabilities in Canada for what businesses should do to ensure that high-impact AI systems are safe and non-discriminatory. Persons responsible for an AI system would need to assess whether their system is “high impact” and keep records describing the reasons supporting that conclusion.

While the Canadian Government has yet to define what a “high impact” AI system is, it has provided a few factors it considers important:

  • Evidence of risks of harm to health and safety, or a risk of adverse impact on human rights, based on both the intended purpose and potential unintended consequences.
  • The severity of potential harms.
  • The scale of use.
  • The nature of harms or adverse impacts that have already taken place.
  • The extent to which, for practical or legal reasons, it is not reasonably possible to opt out of that system.
  • Imbalances of economic or social circumstances, or age of impacted persons.
  • The degree to which the risks are adequately regulated under another law.

Ethical issues for organizations

Examining bias in the underlying data sets is critical to mitigating ethical issues, and it is one of the biggest risks when it comes to AI tools. Bias and equity challenges are a massive issue with the data, and we need to do everything we can to understand and explain the datasets. It is also important to consider the use case: is AI necessary for the project at hand, could the task be completed without AI, and does using AI in this capacity create more risks? Additionally, ensuring the accuracy of the data is critical and will take considerable time.


AI has many applications, but legal judgement, proficiency and the human perspective continue to be imperative when implementing this technology. As AI advances, everyone should be prepared to continuously learn and adapt to the changes. Legislation will need to keep pace with these advancements, and new laws and regulations should be adopted accordingly, without hindering potential innovation. Taking an iterative approach will be important, as will staying up to date with the evolving legal implications of this technology.

If you are looking to deploy AI technology in your business or want to learn more about any of the topics discussed in this panel, please reach out to any of the key contacts below.
