Canada's federal Office of the Superintendent of Financial Institutions ("OSFI") has issued a discussion paper entitled Developing Financial Sector Resilience in a Digital World.
OSFI supervises Canadian federally regulated financial institutions to assess their financial condition and their compliance with prudential requirements. Through this discussion paper, OSFI is seeking feedback on several themes relating to technology development: risk and resilience, holistic assessment and architecture, understanding technology risk, the role of prudential regulators, and the core principles that should guide regulators.
Part 5 of the discussion paper, titled Advanced Analytics, addresses artificial intelligence and machine learning, which are defined as follows:
"Artificial Intelligence is the application of computational tools to address tasks traditionally requiring human sophistication (e.g., recognizing images and processing natural languages by learning from experience).
Machine Learning is a subset of AI that refers to technology that is self-learning/improving and can build predictive models from examples, data, and experience, rather than following pre-programmed rules."
These definitions are taken from those set out by the Financial Stability Board and the Canadian CIO Strategy Council, respectively.
The paper sets out three key principles that, in OSFI's view, should guide the management of AI/ML risk: soundness (an AI/ML model should, by design, be accurate, reliable, auditable, and fair), explainability (an AI/ML model must be describable so that it can be meaningfully explained to pertinent parties), and accountability (risk responsibilities and risk management practices for AI/ML exist and are assigned within an institution). OSFI is considering adopting these principles into its regulatory and supervisory frameworks to address emerging AI/ML risk, and notes that no cross-industry guidance on model risk currently exists. OSFI asks for feedback on several points in this respect, including whether these principles are sufficient to capture the risks that come with AI/ML, what the appropriate levels of explainability are, what challenges institutions may face in self-assessing against these principles, and what may be required to address reputational risks arising from AI/ML use.
Comments on the discussion paper may be submitted on or before December 15, 2020.