Appears in Collections: Others
Title | Artificial Intelligence in Finance: Putting the Human in the Loop |
---|---|
Authors | Zetzsche, DA; Arner, DW; Buckley, RP; Tang, B |
Keywords | Fintech; Regtech; Artificial intelligence; Human in the loop; Financial regulation |
Issue Date | 2020 |
Citation | Zetzsche, Dirk Andreas and Arner, Douglas W. and Buckley, Ross P. and Tang, Brian, Artificial Intelligence in Finance: Putting the Human in the Loop (February 1, 2020). Available at SSRN: https://ssrn.com/abstract=3531711 |
Abstract | Finance has become one of the most globalized and digitized sectors of the economy. It is also one of the most heavily regulated sectors, especially since the 2008 Global Financial Crisis. Globalization, digitization and money are propelling AI in finance forward at an ever-increasing pace.
This paper develops a regulatory roadmap for understanding and addressing the increasing role of AI in finance, focusing on human responsibility: the idea of “putting the human in the loop”, in particular to address “black box” issues.
Part I maps the various use-cases of AI in finance, highlighting why AI has developed so rapidly in finance and is set to continue to do so. Part II then highlights the range of the potential issues which may arise as a result of the growth of AI in finance. Part III considers the regulatory challenges of AI in the context of financial services and the tools available to address them, and Part IV highlights the necessity of human involvement.
We find that the use of AI in finance comes with three regulatory challenges: (1) AI increases information asymmetries regarding the capabilities and effects of algorithms between users, developers, regulators and consumers; (2) AI enhances data dependencies, as different days’ data sources may alter operations, effects and impact; and (3) AI enhances interdependency, in that systems can interact with unexpected consequences, enhancing or diminishing effectiveness, impact and explainability. These issues are often summarized as the “black box” problem: no one understands how some AI operates or why it has done what it has done, rendering accountability impossible.
Even if regulatory authorities possessed unlimited resources and expertise – which they clearly do not – regulating the impact of AI by traditional means is challenging.
To address this challenge, we argue for strengthening the internal governance of regulated financial market participants through external regulation. Part IV thus suggests that the most effective path forward involves regulatory approaches which bring the human into the loop, enhancing internal governance through external regulation.
In the context of finance, the post-Crisis focus on personal and managerial responsibility systems provides a unique and important external framework to enhance internal responsibility in the context of AI, by putting a human in the loop through regulatory responsibility, augmented in some cases with AI review panels. This approach – AI-tailored manager responsibility frameworks, augmented in some cases by independent AI review committees, as enhancements to the traditional three lines of defence – is in our view likely to be the most effective means for addressing AI-related issues not only in finance – particularly “black box” problems – but potentially in any regulated industry. |
Description | CFTE Academic Paper Series: Centre for Finance, Technology and Entrepreneurship, no. 1. |
Persistent Identifier | http://hdl.handle.net/10722/281749 |
SSRN | https://ssrn.com/abstract=3531711 |
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zetzsche, DA | - |
dc.contributor.author | Arner, DW | - |
dc.contributor.author | Buckley, RP | - |
dc.contributor.author | Tang, B | - |
dc.date.accessioned | 2020-03-24T09:07:02Z | - |
dc.date.available | 2020-03-24T09:07:02Z | - |
dc.date.issued | 2020 | - |
dc.identifier.citation | Zetzsche, Dirk Andreas and Arner, Douglas W. and Buckley, Ross P. and Tang, Brian, Artificial Intelligence in Finance: Putting the Human in the Loop (February 1, 2020). Available at SSRN: https://ssrn.com/abstract=3531711 | - |
dc.identifier.uri | http://hdl.handle.net/10722/281749 | - |
dc.description | CFTE Academic Paper Series: Centre for Finance, Technology and Entrepreneurship, no. 1. | - |
dc.description.abstract | Finance has become one of the most globalized and digitized sectors of the economy. It is also one of the most heavily regulated sectors, especially since the 2008 Global Financial Crisis. Globalization, digitization and money are propelling AI in finance forward at an ever-increasing pace. This paper develops a regulatory roadmap for understanding and addressing the increasing role of AI in finance, focusing on human responsibility: the idea of “putting the human in the loop”, in particular to address “black box” issues. Part I maps the various use-cases of AI in finance, highlighting why AI has developed so rapidly in finance and is set to continue to do so. Part II then highlights the range of the potential issues which may arise as a result of the growth of AI in finance. Part III considers the regulatory challenges of AI in the context of financial services and the tools available to address them, and Part IV highlights the necessity of human involvement. We find that the use of AI in finance comes with three regulatory challenges: (1) AI increases information asymmetries regarding the capabilities and effects of algorithms between users, developers, regulators and consumers; (2) AI enhances data dependencies, as different days’ data sources may alter operations, effects and impact; and (3) AI enhances interdependency, in that systems can interact with unexpected consequences, enhancing or diminishing effectiveness, impact and explainability. These issues are often summarized as the “black box” problem: no one understands how some AI operates or why it has done what it has done, rendering accountability impossible. Even if regulatory authorities possessed unlimited resources and expertise – which they clearly do not – regulating the impact of AI by traditional means is challenging. To address this challenge, we argue for strengthening the internal governance of regulated financial market participants through external regulation.
Part IV thus suggests that the most effective path forward involves regulatory approaches which bring the human into the loop, enhancing internal governance through external regulation. In the context of finance, the post-Crisis focus on personal and managerial responsibility systems provides a unique and important external framework to enhance internal responsibility in the context of AI, by putting a human in the loop through regulatory responsibility, augmented in some cases with AI review panels. This approach – AI-tailored manager responsibility frameworks, augmented in some cases by independent AI review committees, as enhancements to the traditional three lines of defence – is in our view likely to be the most effective means for addressing AI-related issues not only in finance – particularly “black box” problems – but potentially in any regulated industry. | - |
dc.language | eng | - |
dc.subject | Fintech | - |
dc.subject | Regtech | - |
dc.subject | Artificial intelligence | - |
dc.subject | Human in the loop | - |
dc.subject | Financial regulation | - |
dc.title | Artificial Intelligence in Finance: Putting the Human in the Loop | - |
dc.type | Others | - |
dc.identifier.email | Arner, DW: douglas.arner@hku.hk | - |
dc.identifier.email | Tang, B: bwtang@hku.hk | - |
dc.identifier.authority | Arner, DW=rp01237 | - |
dc.description.nature | published_or_final_version | - |
dc.identifier.ssrn | 3531711 | - |
dc.identifier.hkulrp | 2020/006 | - |