
Interpretable AI: Not Just For Regulators


Machine learning systems are used today to make life-altering decisions about employment, bail, parole, and lending, and the scope of decisions delegated to them seems likely only to expand. Unfortunately, serious discrimination, privacy, and even accuracy concerns can be raised about these systems. Many researchers and practitioners are tackling disparate impact, inaccuracy, privacy issues, and security problems with a number of brilliant, but often siloed, approaches. This presentation illustrates how to combine innovations from several sub-disciplines of machine learning research to train explainable, fair, trustworthy, and accurate predictive modeling systems. Together these techniques create a new and truly human-centered type of machine learning suitable for use in business- and life-critical decision support.
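The talk's slides are not reproduced here, but one widely used post-hoc explanation technique in this family, permutation importance, can be sketched in a few lines. Everything below (the toy model, feature names, and data) is illustrative and not taken from the presentation: the idea is simply that a feature is important if shuffling its values degrades the model's predictions.

```python
import random

# Hypothetical "black-box" model: in practice this would be any trained
# predictor; here it is a fixed linear function so the sketch is
# self-contained. Feature 0 has a much larger weight than feature 1.
def model(row):
    x1, x2 = row
    return 3.0 * x1 + 0.5 * x2

def mse(rows, targets):
    """Mean squared error of the model on a dataset."""
    return sum((model(r) - t) ** 2 for r, t in zip(rows, targets)) / len(rows)

def permutation_importance(rows, targets, feature, seed=0):
    """Increase in error when one feature's column is randomly shuffled."""
    rng = random.Random(seed)
    col = [r[feature] for r in rows]
    rng.shuffle(col)
    shuffled = [list(r) for r in rows]
    for r, v in zip(shuffled, col):
        r[feature] = v
    return mse(shuffled, targets) - mse(rows, targets)

rng = random.Random(42)
rows = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(200)]
targets = [model(r) for r in rows]  # noise-free targets, for clarity

importances = [permutation_importance(rows, targets, f) for f in (0, 1)]
# Shuffling feature 0 (weight 3.0) should hurt far more than feature 1
# (weight 0.5), so importances[0] > importances[1].
```

Because the technique only needs predictions, not model internals, it applies equally to the complex, high-accuracy models the abstract has in mind, which is why it is a common first step toward the human-centered systems the talk describes.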

About Patrick Hall

Patrick Hall is a senior director for data science products, focusing mainly on model interpretability and model management. He is also currently an adjunct professor in the Department of Decision Sciences at George Washington University, where he teaches graduate classes in data mining and machine learning. Prior to his current role, Patrick held global customer-facing and research-and-development roles at SAS Institute.