Press enquiries:
+41 61 280 8138
[email protected]
Ref no: 39/2017

The Financial Stability Board (FSB) today published a report that considers the financial stability implications of the growing use of artificial intelligence (AI) and machine learning in financial services.

Financial institutions are increasingly using AI and machine learning in a range of applications across the financial system, including to assess credit quality, to price and market insurance contracts and to automate client interactions. Institutions are optimising scarce capital with AI and machine learning techniques, as well as back-testing models and analysing the market impact of trading large positions. Meanwhile, hedge funds, broker-dealers and other firms are using these technologies to find signals for higher uncorrelated returns and to optimise trade execution. Both public and private sector institutions may use them for regulatory compliance, surveillance, data quality assessment and fraud detection.

The FSB’s analysis reveals a number of potential benefits and risks for financial stability that should be monitored as the technology is adopted in the coming years and as more data becomes available. They are:

  • The more efficient processing of information, for example in credit decisions, financial markets, insurance contracts and customer interactions, may contribute to a more efficient financial system. Applications of AI and machine learning by regulators and supervisors can help improve regulatory compliance and increase supervisory effectiveness.

  • Applications of AI and machine learning could result in new and unexpected forms of interconnectedness between financial markets and institutions, for instance based on the use by various institutions of previously unrelated data sources.

  • Network effects and scalability of new technologies may in the future give rise to third-party dependencies. This could in turn lead to the emergence of new systemically important players that could fall outside the regulatory perimeter.

  • The lack of interpretability or auditability of AI and machine learning methods could become a macro-level risk. Similarly, widespread use of opaque models may result in unintended consequences.

  • As with any new product or service, it will be important to assess uses of AI and machine learning in view of their risks, including adherence to relevant protocols on data privacy, conduct risks, and cybersecurity. Adequate testing and ‘training’ of tools with unbiased data and feedback mechanisms are important to ensure applications do what they are intended to do.

Notes to editors

In June the FSB published a report on the Financial Stability Implications from FinTech, which identified 10 regulatory and supervisory FinTech issues that merit authorities’ attention, three of which were seen as priorities for international collaboration. They are:

  • the need to manage operational risk from third-party service providers;

  • mitigating cyber risks; and

  • monitoring macrofinancial risks that could emerge as FinTech activities increase.

The FSB has been established to coordinate at the international level the work of national financial authorities and international standard-setting bodies and to develop and promote the implementation of effective regulatory, supervisory and other financial sector policies in the interest of financial stability. It brings together national authorities responsible for financial stability in 24 countries and jurisdictions, international financial institutions, sector-specific international groupings of regulators and supervisors, and committees of central bank experts. The FSB also conducts outreach with 65 other jurisdictions through its six Regional Consultative Groups.

The FSB is chaired by Mark Carney, Governor of the Bank of England. Its Secretariat is located in Basel, Switzerland, and hosted by the Bank for International Settlements.