Note: This is a replay of a highly rated session from the June Spark + AI Summit. Enjoy!
Machine learning model fairness and interpretability are critical for data scientists, researchers, and developers to explain their models and understand the value and accuracy of their findings. Interpretability is also important for debugging machine learning models and making informed decisions about how to improve them. In this session, Francesca will go over methods and tools that enable you to “unpack” machine learning models, gain insight into how and why they produce specific results, assess your AI system’s fairness, and mitigate any observed fairness issues.
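The abstract does not name the specific packages used in the session, so the snippet below is only a minimal sketch of what fairness assessment and mitigation can look like with the open-source Fairlearn package, with scikit-learn's permutation importance standing in for a dedicated interpretability tool. The package choices, data, and variable names are assumptions for illustration, not the session's actual material.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

from fairlearn.metrics import MetricFrame, selection_rate
from fairlearn.reductions import DemographicParity, ExponentiatedGradient

# Synthetic data with a hypothetical binary sensitive feature (a group label).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
sensitive = np.random.RandomState(0).randint(0, 2, size=len(y))
X_train, X_test, y_train, y_test, s_train, s_test = train_test_split(
    X, y, sensitive, test_size=0.3, random_state=0
)

# Train an ordinary model.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = model.predict(X_test)

# Interpretability: a simple global explanation via permutation feature
# importance (how much does shuffling each feature degrade held-out accuracy?).
importance = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
print(importance.importances_mean)

# Fairness assessment: compare accuracy and selection rate across sensitive groups.
metrics = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_test,
    y_pred=y_pred,
    sensitive_features=s_test,
)
print(metrics.by_group)      # per-group metric values
print(metrics.difference())  # largest between-group gap for each metric

# Mitigation: retrain under a demographic-parity constraint using a
# reductions-based approach (Fairlearn's ExponentiatedGradient).
mitigator = ExponentiatedGradient(
    LogisticRegression(max_iter=1000), constraints=DemographicParity()
)
mitigator.fit(X_train, y_train, sensitive_features=s_train)
y_pred_mitigated = mitigator.predict(X_test)
```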
Using open source fairness and interpretability packages, attendees will learn how to:
Speaker: Francesca Lazzeri
Francesca Lazzeri, PhD, is an experienced scientist and machine learning practitioner with over 12 years of academic and industry experience. She is the author of a number of machine learning publications, including papers in technology journals and conference proceedings, as well as books. She currently leads an international team of cloud advocates, developers, and data scientists at Microsoft. Before joining Microsoft, she was a research fellow at Harvard University in the Technology and Operations Management Unit, where she worked on multiple patent, publication, and social network data-driven projects to investigate and measure the impact of external knowledge networks on companies’ competitiveness and innovation. Find her on Twitter: @frlazzeri and Medium: @francescalazzeri