Dr. Ansgar Koene is Global AI Ethics and Regulatory Leader at EY (Ernst & Young) where he supports the AI Lab’s Policy activities on Trusted AI. He is also a Senior Research Fellow at the Horizon Digital Economy Research institute (University of Nottingham) where he contributes to the policy impact and public engagement activities of the institute and the ReEnTrust and UnBias projects. As part of this work Ansgar has provided evidence to parliamentary inquiries, co-authored a report on Bias in Algorithmic Decision-Making for the Centre for Data Ethics and Innovation, and was lead author of a Science Technology Options Assessment report for the European Parliament on “a Governance Framework for Algorithmic Accountability and Transparency.”
Ansgar chairs the IEEE P7003 Standard for Algorithmic Bias Considerations working group, was the Bias Focus Group leader for the IEEE Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS), and participates in the IEEE standards P2089 “Age Appropriate Digital Services Framework” and IEEE P2863 “Governance of AI”. Other standards development work includes participation in the ISO/IEC JTC1 SC42 Artificial Intelligence activities and the CEN-Cenelec Focus Group for AI.
He is a trustee of the 5Rights Foundation for the Rights of Young People Online and a member of the 5Rights Digital Futures Committee.
Ansgar has a multi-disciplinary research background, having worked and published on topics ranging from policy and governance of algorithmic systems (AI), data privacy, AI ethics and AI standards, to bio-inspired robotics, AI and computational neuroscience, and experimental human behaviour/perception studies. He holds an MSc in Electrical Engineering and a PhD in Computational Neuroscience.
November 18, 2020 04:00 PM PT
This presentation seeks to advance the thinking on how financial services firms can implement a framework that supports explainable artificial intelligence (AI), thus building trust among consumers, shareholders and other stakeholders, and helping ensure compliance with emerging regulatory and ethical norms.
The use of AI in financial services continues to grow. Amid the global COVID-19 pandemic in particular, industry take-up is increasing and use cases are expanding from the back office into customer-facing applications. We also see a shift towards more complex models that deliver more accurate and deeper insights. This expansion brings many opportunities for the industry to improve efficiency, better manage risk and provide exciting new products and services to customers.
However, to take full advantage of this opportunity, there needs to be trust. As with all innovations, ethical considerations must keep pace with technological development. Building trust requires transparency and communication. Indeed, this is a topic of growing regulatory and government interest in many countries. Transparency and communication with customers have long been key considerations for financial services but AI will require new approaches and techniques if explanations are to be meaningful. Effective explanations will also require a degree of subtlety; given the huge potential range of use cases, close attention to the context of each will be key.
Alongside this, consumer education as to how and why AI is being used is increasingly important. Achieving effective explanations will require firms to have a clear AI strategy and robust governance, and to engage effectively with colleagues from a range of functions, including data science, compliance, audit, business and senior management, and even ethicists. It will also require ongoing work: the state of the art in explaining AI still has unresolved limits, and 'best practice' is sure to evolve. More research and thinking will be needed, especially now with the change in ways of working, not just from firms but also from regulators, government and think tanks.
This presentation will focus on why and to what extent explainability of AI outputs is needed, the challenges to achieving this and potential ways to apply the latest guidance. It will also provide technical financial services use cases to explore potential approaches to different types of explanations, according to the context and type of model. We consider not just common existing uses of AI, but also emerging or possible uses.
Speakers: Ansgar Koene and Walter McCahon