Peter Kraft

Graduate Student, Stanford University

Peter is a third-year graduate student at Stanford University, advised by Peter Bailis and Matei Zaharia. His research focuses on systems, particularly on improving the runtime performance of machine learning applications. His most recent paper, on Willump, a system for improving ML inference performance, will be presented at MLSys 2020.

UPCOMING SESSIONS

Making Machine Learning Inference 10x Faster with Willump (Summit 2020)

Systems for performing ML inference are used widely today. However, they are far slower than they could be because they use techniques designed for conventional data serving workloads, neglecting the statistical nature of ML inference. As an alternative, this talk presents Willump, an optimizer for ML inference. Willump speeds up ML inference applications through two new statistically motivated optimizations that target ML applications whose performance bottleneck is feature computation. First, Willump automatically computes features selectively. It classifies most data inputs using only high-value, low-cost features, improving performance by up to 5x without statistically significant accuracy loss. Second, Willump accurately approximates ML top-K queries. It discards low-scoring inputs with an automatically constructed approximate model, then ranks the remainder with a more powerful model, improving performance by up to 10x with minimal accuracy loss. Both optimizations automatically tune their own parameters to maximize performance while meeting a target accuracy level. Willump combines these novel optimizations with powerful compiler optimizations to automatically generate fast inference code for ML applications. Overall, Willump improves the end-to-end performance of real-world ML inference pipelines by up to 16x.
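To make the selective feature computation idea concrete, the sketch below shows a cascade in the style the abstract describes: an inexpensive approximate model scores each input using only cheap, high-value features, and only the inputs it cannot classify confidently pay for the expensive features and the full model. This is a minimal illustration, not Willump's actual API; the function names, the scikit-learn-style `predict_proba`/`predict` interface, and the confidence threshold are all assumptions for the example.

```python
import numpy as np

def cascade_predict(inputs, cheap_features, expensive_features,
                    approx_model, full_model, confidence_threshold=0.9):
    """Classify most inputs with cheap features; fall back to the full model otherwise.

    inputs: numpy array of raw data inputs.
    cheap_features / expensive_features: functions mapping inputs to feature matrices.
    approx_model / full_model: fitted classifiers with a scikit-learn-style interface.
    """
    cheap = cheap_features(inputs)                       # low-cost features only
    probs = approx_model.predict_proba(cheap)            # approximate model's confidence
    confident = probs.max(axis=1) >= confidence_threshold

    preds = np.empty(len(inputs), dtype=int)
    preds[confident] = probs[confident].argmax(axis=1)   # resolved with cheap features

    hard = ~confident
    if hard.any():
        # Only the uncertain inputs incur the expensive feature computation.
        full = np.hstack([cheap[hard], expensive_features(inputs[hard])])
        preds[hard] = full_model.predict(full)
    return preds
```

In practice the threshold (here `confidence_threshold`) is the kind of parameter the abstract says Willump tunes automatically to maximize speed while meeting a target accuracy level.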

PAST SESSIONS