On July 3, 2025, the Autorité des marchés financiers (AMF) released a French-only draft guideline on the use of artificial intelligence in the financial sector (the Guideline).
Applicable to authorized insurers, financial services cooperatives, authorized trust companies, and other authorized deposit-taking institutions, the Guideline sets out the AMF’s expectations regarding the measures financial institutions should take to holistically manage the risks associated with the use of artificial intelligence systems (AI systems) and to ensure fair treatment of clients.
The AMF is inviting interested parties to submit comments by November 7, 2025.
Definition of AI system
The Guideline defines an AI system as a machine-based system that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. It also notes that different AI systems vary in their levels of autonomy and adaptiveness after deployment.
Key expectations
Risk rating
The AMF expects financial institutions to assign a risk rating to each of their AI systems and to review both the ratings and their underlying factors periodically, at least annually. These ratings are intended to guide the scope and depth of the policies, processes, and procedures a financial institution implements with respect to its AI systems’ lifecycle, governance, risk management, and the fair treatment of consumers.
The Guideline provides a non-exhaustive list of factors to consider in this risk assessment, including the AI system’s data characteristics, the controls in place, and the financial institution’s overall risk exposure. A provisional risk rating should be assigned during the initial assessment and may be refined as more information becomes available.
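By way of illustration only, such an assessment could be captured as a structured record with a simple aggregation rule. Everything in the sketch below, including the factor names, rating scales, and scoring logic, is a hypothetical assumption; the Guideline prescribes no particular format or methodology.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Illustrative only: the Guideline prescribes neither factor names,
# rating scales, nor a scoring method. All identifiers below are
# hypothetical assumptions.

@dataclass
class AIRiskAssessment:
    system_name: str
    data_sensitivity: int     # 1 (low) to 5 (high): risk arising from data characteristics
    control_strength: int     # 1 (weak) to 5 (strong): maturity of mitigating controls
    overall_exposure: int     # 1 (low) to 5 (high): institution-wide exposure if the system fails
    provisional: bool = True  # provisional until sufficient information is available
    assessed_on: date = field(default_factory=date.today)

    @property
    def rating(self) -> str:
        # Naive aggregation for illustration; an institution would define
        # its own methodology within its risk management framework.
        score = self.data_sensitivity + self.overall_exposure - self.control_strength
        return "high" if score >= 6 else "medium" if score >= 3 else "low"

    @property
    def next_review_due(self) -> date:
        # The Guideline expects periodic review, at least annually.
        return self.assessed_on + timedelta(days=365)
```

Under these assumed scales, a hypothetical entry such as AIRiskAssessment("claims-triage-model", 4, 2, 5) would yield a "high" rating (4 + 5 − 2 = 7) and flag a review within twelve months.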
Lifecycle of AI systems
Financial institutions are expected to develop, document, approve, and implement processes that address the expectations for each stage of an AI system’s lifecycle, in proportion to the system’s assigned risk rating. Accordingly, AI systems with higher risk ratings should be subject to more frequent monitoring, corrective action, and improvement efforts.
The lifecycle of an AI system, as described in the Guideline, consists of three key phases: design or acquisition phase, use and monitoring phase, and modification or decommissioning phase.
During these phases, financial institutions are expected to evaluate the quality of training data, implement processes governing the design and acquisition of AI systems, carry out validations and internal audits, limit the use of AI systems that carry a high risk rating or for which available information is insufficient for proper evaluation, and continuously supervise AI systems’ performance and use.
Importantly, a financial institution’s decision to implement an AI system to meet a specific need, rather than opting for an alternative solution, should be documented. The financial institution should be able to demonstrate that the chosen system is the most appropriate option considering its risk rating.
Governance
Financial institutions are expected to adopt policies, processes, and procedures that clearly define the roles and responsibilities of all stakeholders involved in each phase of an AI system’s lifecycle. These governance frameworks must also establish the levels of competence required of the individuals assigned to those roles.
Notably, each AI system must remain under the responsibility of a designated AI manager throughout its lifecycle. This manager should report to a member of senior management who is accountable for all AI systems within the financial institution.
Individuals responsible for establishing policies, processes, and procedures governing AI use must possess a sufficient level of knowledge of AI systems, their risks, the financial institution's risk appetite, and its ethical positions. Similarly, those tasked with applying these procedures should have a practical understanding of artificial intelligence and its related risks.
The Guideline also identifies four key stakeholder groups — the board of directors, senior management, risk management, and internal auditors — and elaborates on their roles beyond those outlined in other AMF publications. For instance, the board of directors should collectively possess a sufficient understanding of AI to make informed decisions regarding the evaluation, deployment, risks, and limitations of AI systems.
Risk management
Financial institutions are expected to adopt policies, processes, and procedures governing the use of AI systems that are proportionate to the institution’s nature, size, complexity of operations, and risk profile.
The Guideline recommends that financial institutions identify the major risks associated with their AI systems’ use, establish their own taxonomy, and keep it up to date. Financial institutions are also expected to assess and reassess their risk appetite and tolerance levels for key risks in light of their AI use.
To support sound risk management, the Guideline further instructs financial institutions to maintain a centralized directory of all their AI systems and to implement appropriate controls that facilitate transparent and consistent supervision of AI-related risks across the institution.
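As a minimal sketch of what one entry in such a directory might record, the following assumes hypothetical field names while drawing the lifecycle phases and accountability roles from the Guideline’s own descriptions above; the Guideline itself does not prescribe a schema.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

# Hypothetical schema for one entry in a centralized AI system
# directory. Field names are assumptions; the lifecycle phases and
# accountability roles mirror those described in the Guideline.

class LifecyclePhase(Enum):
    DESIGN_OR_ACQUISITION = "design or acquisition"
    USE_AND_MONITORING = "use and monitoring"
    MODIFICATION_OR_DECOMMISSIONING = "modification or decommissioning"

@dataclass
class DirectoryEntry:
    system_name: str
    business_purpose: str
    risk_rating: str        # e.g. "low", "medium", or "high"
    ai_manager: str         # designated manager responsible throughout the lifecycle
    senior_executive: str   # senior management member accountable for all AI systems
    phase: LifecyclePhase
    last_validation: date   # date of the most recent validation or internal audit

registry: dict[str, DirectoryEntry] = {}  # keyed by system_name for consistent supervision
```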
Fair treatment of customers
The Guideline builds on existing AMF guidance on the fair treatment of customers, introducing additional expectations tailored to the use of AI systems.
Financial institutions are expected to ensure that their code of ethics enables them to maintain high standards of ethics and integrity in the specific context of AI system use. The Guideline emphasizes the importance of identifying, documenting, and reporting instances where the use of an AI system may result in discriminatory outcomes or biased decision-making. Such occurrences must be communicated to senior management and addressed appropriately.
With respect to customer consent and communication when an AI system is used, financial institutions must provide customers with clear, accurate, and sufficient information about how their personal data will be used. For example, when obtaining consent for the use of personal data by an AI system, financial institutions should ensure that customers understand that their personal data will be linked with secondary data, which could affect the quality of the information associated with them. In addition, financial institutions should provide customers with a clear, simple explanation whenever they are subject to a decision made or assisted by an AI system, whether the decision is rendered autonomously by the system or by an individual following the system’s analysis.
Significance of the Guideline
The draft Guideline marks a significant development in the regulation of AI in Québec’s financial sector, underscoring the importance of aligning with evolving standards to mitigate operational, reputational, and legal risks.
For further guidance on the Guideline or assistance with its implementation, including the development of internal policies, governance frameworks, or risk management tools, we invite you to contact the authors or any of the key contacts listed below.