Allies and Adversaries: Explaining Model Reasoning via Contrasting Proximal Prototypes

Have you ever wondered how an ML model works? Why does it come up with certain predictions and not others? Have you seen a model behave in ways that are weird or counter-intuitive? Do you lack trust in your model because it is a black box? The recent rise in popularity of deep-learning neural-network models, impenetrable even to their creators, has underscored the importance of mathematical frameworks for model interpretability. However, even simple models such as linear models can be hard to interpret for those without sufficient technical expertise.

This talk will survey approaches to model interpretability in both academia and industry, showcasing global and local (instance-level) insights using a particular model as an example. Global interpretability provides a summary-level understanding of model behavior, but the complexity of the model makes that summary inaccurate at the instance level, so we augment it with instance-level interpretations. We will describe how we built and deployed in production an algorithm that can interpret black-box models at both the global and the local level.

Our system has a backend component built with Spark on Azure Databricks that scales to analyze millions of data points when generating explanations. We will discuss Locality Sensitive Hashing (LSH) and other Spark optimizations that improve compute efficiency and performance, making the proposed method far more efficient than compute-intensive prior art. Our front end and middle tier, built with Docker and Azure Web App, provide visuals and REST APIs for retrieving the model interpretations. The talk will cover building, deploying, and using this system end to end.
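As a hedged illustration of the nearest-neighbor step (not the production pipeline from the talk), the sketch below shows how Spark's BucketedRandomProjectionLSH could retrieve, for a single scored instance, the closest points the model scored the same way ("allies") and the closest points it scored differently ("adversaries"). The column names (features, prediction), the toy data, and the LSH parameters are illustrative assumptions.

    # Minimal PySpark sketch: approximate nearest neighbors via LSH as a basis
    # for contrasting proximal prototypes. Assumes a dataset already scored by
    # the black-box model; all names and parameters here are illustrative.
    from pyspark.sql import SparkSession
    from pyspark.ml.feature import BucketedRandomProjectionLSH
    from pyspark.ml.linalg import Vectors

    spark = SparkSession.builder.appName("proximal-prototypes-sketch").getOrCreate()

    # Toy scored dataset: feature vectors plus the model's prediction.
    scored = spark.createDataFrame(
        [
            (0, Vectors.dense([0.10, 0.20]), 1.0),
            (1, Vectors.dense([0.15, 0.25]), 1.0),
            (2, Vectors.dense([0.90, 0.80]), 0.0),
            (3, Vectors.dense([0.85, 0.75]), 0.0),
        ],
        ["id", "features", "prediction"],
    )

    # Fit an LSH model so nearest-neighbor lookups scale to millions of points
    # without an exhaustive pairwise distance computation.
    lsh = BucketedRandomProjectionLSH(
        inputCol="features", outputCol="hashes", bucketLength=0.5, numHashTables=3
    )
    lsh_model = lsh.fit(scored)

    # Instance to explain: its feature vector and the model's prediction for it.
    query_vec = Vectors.dense([0.12, 0.22])
    query_pred = 1.0

    # "Allies": closest points that received the same prediction as the query.
    allies = lsh_model.approxNearestNeighbors(
        scored.filter(scored.prediction == query_pred), query_vec, 2
    )

    # "Adversaries": closest points that received a different prediction.
    adversaries = lsh_model.approxNearestNeighbors(
        scored.filter(scored.prediction != query_pred), query_vec, 2
    )

    allies.show()
    adversaries.show()

Contrasting the allies and adversaries nearest to an instance is one way to ground an instance-level explanation in concrete, similar examples, while the LSH index keeps the lookup cheap at scale.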



About Deepak Pai

I am a Software Engineer and Data Scientist with 14 years of experience. I have published papers in top peer-reviewed conferences and have been granted patents. In my current role, I manage a team of data scientists and engineers developing core ML services at Adobe. Our services are used by various Adobe Sensei Services that are part of Experience Cloud. I hold Master's and Bachelor's degrees in Computer Science from leading universities in India.

About Vijay Srivastava

I am working as a Staff Data Scientist on the ML Core Services team at Adobe. I have 13+ years of industry experience, 11 of which are at Adobe. As one of the initial members of the team, I am currently responsible for building and maintaining the core machine-learning service infrastructure that is part of Experience Cloud Intelligent Services.