AI in Finance: Ethical Considerations for a Digital Economy

1. Introduction

This paper explores the use of artificial intelligence (AI) and machine learning (ML) in the financial sector and the resulting policy implications. It provides a nontechnical background on the evolution and capabilities of AI/ML systems, their deployment and use cases in the financial sector, and the new challenges they present to financial sector policymakers.

AI/ML systems have made major advances over the past decade. Although a machine capable of understanding or learning any intellectual task that a human being can perform remains out of immediate reach, today’s AI systems perform well on tasks that are well defined and normally require human intelligence. The learning process, a critical component of most AI systems, takes the form of ML, which relies on mathematics, statistics, and decision theory. Advances in ML, and especially in deep learning algorithms, are responsible for most of the recent achievements, such as self-driving cars, digital assistants, and facial recognition.

The financial sector, led by financial technology (fintech) companies, has been rapidly expanding its use of AI/ML systems (Box 1). The financial sector’s recent adoption of technological advances such as big data and cloud computing, coupled with the expansion of the digital economy, has made the effective deployment of AI/ML systems possible. A recent survey of financial institutions (WEF 2020) shows that 77 percent of respondents anticipate that AI will be of high or very high overall importance to their businesses within two years. McKinsey (2020a) estimates that the potential value of AI in the banking sector could reach $1 trillion.

AI/ML capabilities are transforming the financial sector. AI/ML systems are reshaping client experiences, including communication with financial service providers (for example, chatbots), investing (for example, robo-advisors), borrowing (for example, automated mortgage underwriting), and identity verification (for example, image recognition). They are also transforming the operations of financial institutions, delivering significant cost savings through process automation, better product offerings through predictive analytics, and more effective risk management, fraud detection, and regulatory compliance. Finally, AI/ML systems provide central banks and prudential oversight authorities with new tools to improve systemic risk surveillance and strengthen prudential oversight.

The COVID-19 pandemic has further increased the appetite for AI/ML adoption in the financial sector. BoE (2020) and McKinsey (2020b) find that a considerable number of financial institutions expect AI/ML to play a bigger role after the pandemic. Key growth areas include customer relationship management and risk management. Banks are exploring ways to leverage their experience of using AI/ML to handle the high volume of loan applications during the pandemic in order to improve their underwriting processes and fraud detection. Similarly, supervisors that relied on intensive off-site supervision during the pandemic could further explore AI/ML-supported tools and processes in the post-pandemic era.

The rapid progress in AI/ML development could deepen the digital divide between advanced and developing economies. AI/ML deployment, and the resulting benefits, have been concentrated largely in advanced economies and a few emerging markets. These technologies could also bring significant benefits to developing economies, including enhanced access to credit through lower-cost credit risk assessments, particularly in countries that do not have an established credit registry (Sy and others 2019). However, developing economies are falling behind, lacking the necessary investment, access to research, and human capital. Bridging this gap will require a digital-friendly policy framework anchored in four broad policy pillars: investing in infrastructure; investing in policies for a supportive business environment; investing in skills; and investing in risk management frameworks (IMF 2020).

Cooperation among countries and between the private and public sectors could help mitigate the risk of a widening digital divide. So far, global initiatives have been limited; they include the development of principles to mitigate ethical risks associated with AI (UNESCO 2021; OECD 2019), calls for cooperation on investing in digital infrastructure (see, for example, Google and International Finance Corporation 2020), and the provision of access to research in low-income countries (see, for example, AI4Good.org). Multilateral organizations could play an important role in transferring knowledge, raising investment, building capacity, and facilitating a peer-learning approach to guide digital policy efforts in developing economies. Similarly, membership in intergovernmental working groups on AI (such as the Global Partnership on Artificial Intelligence and the OECD Network of Experts on AI) could be expanded to include less-developed economies.

AI/ML adoption in the financial sector is bringing new and unique risks and challenges that need to be addressed to ensure financial stability. AI/ML-based decisions made by financial institutions may not be easily explainable and could potentially be biased. AI/ML adoption also introduces new cyber risks and privacy concerns. Financial stability issues could further arise from the limited robustness of AI/ML algorithms in the face of structural shifts and from increased interconnectedness through widespread reliance on a few AI/ML service providers. Chapter 2 explores the adoption of AI/ML in the financial sector and the possible associated risks, Chapter 3 discusses related policy concerns, and Chapter 4 provides some conclusions.