Walter leads UK Finance’s work on data protection, privacy and data ethics, coordinating collaboration among member firms’ subject matter experts on policy and regulatory issues affecting the industry.
Before working at UK Finance he worked on financial sector and data protection law reforms at the British Bankers’ Association, New Zealand Bankers’ Association and for the New Zealand government.
November 18, 2020 04:00 PM PT
This presentation seeks to advance the thinking on how financial services firms can implement a framework that supports explainable artificial intelligence (AI), thus building trust among consumers, shareholders and other stakeholders, and helping ensure compliance with emerging regulatory and ethical norms.
The use of AI in financial services continues to grow. Particularly in light of the global COVID-19 pandemic, industry take-up is increasing and use cases are expanding from the back office into customer-facing applications. We also see a shift towards more complex models that deliver more accurate and deeper insights. This expansion brings many opportunities for the industry to improve efficiency, better manage risk and provide exciting new products and services to customers.
However, to take full advantage of this opportunity, there needs to be trust. As with all innovations, ethical considerations must keep pace with technological development. Building trust requires transparency and communication, and this is a topic of growing regulatory and government interest in many countries. Transparency and communication with customers have long been key considerations for financial services, but AI will require new approaches and techniques if explanations are to be meaningful. Effective explanations will also require a degree of subtlety: given the huge potential range of use cases, close attention to the context of each will be key.
Alongside this, consumer education on how and why AI is being used is increasingly important. Achieving effective explanations will require firms to have a clear AI strategy and robust governance, and to engage effectively with colleagues from a range of functions, including data science, compliance, audit, business and senior management, and even ethicists. It will also require ongoing work, with open limitations in the state of the art of AI explainability and with 'best practice' sure to evolve. More research and thinking will be needed, especially given recent changes in ways of working, not just from firms but also from regulators, government and think tanks.
This presentation will focus on why and to what extent explainability of AI outputs is needed, the challenges in achieving it, and potential ways to apply the latest guidance. It will also present technical financial services use cases to explore potential approaches to different types of explanations, according to the context and type of model. We consider not only common existing uses of AI but also emerging or possible uses.
Speakers: Ansgar Koene and Walter McCahon