Keshav is a second-year PhD student at Stanford University advised by Professor Matei Zaharia. He’s a member of the FutureData Systems research group and the Stanford DAWN group. His research focuses on building systems and infrastructure to accelerate machine learning workloads. He earned a BS in Computer Science from the University of Illinois at Urbana-Champaign (UIUC) in 2017 and an MS in Computer Science from Stanford in 2019 (dual concentration in Systems and Artificial Intelligence). At UIUC he worked with Professor Indranil Gupta in the Distributed Protocols Research Group.
November 18, 2020 04:00 PM PT
Specialized accelerators such as GPUs, TPUs, FPGAs, and custom ASICs are increasingly deployed to train deep learning models, and they exhibit heterogeneous performance behavior across model architectures. Existing schedulers for clusters of accelerators, which arbitrate these expensive training resources across many users, have shown how to optimize for various multi-job, multi-user objectives, such as fairness and makespan. Unfortunately, these schedulers largely do not consider performance heterogeneity.
In this talk, we present Gavel, a heterogeneity-aware scheduler that systematically generalizes a wide range of existing scheduling policies. Gavel expresses these policies as optimization problems, making it easy to optimize for objectives in a heterogeneity-aware way, while also being cognizant of performance optimizations like space sharing. Gavel then uses a round-based scheduling mechanism to ensure jobs receive their ideal allocation given the target scheduling policy.
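To make the policies-as-optimization idea concrete, here is a minimal NumPy sketch (not Gavel’s actual implementation; the throughput numbers and allocation matrices are invented for illustration). It shows how an allocation matrix X, whose entry X[j, a] is the fraction of time job j spends on accelerator type a, determines each job’s effective throughput, and why a heterogeneity-aware allocation can dominate a heterogeneity-agnostic even split:

```python
import numpy as np

# Hypothetical measured throughputs (steps/sec) for 3 jobs on 2
# accelerator types, e.g. columns [V100, K80]; numbers are invented.
throughputs = np.array([
    [10.0, 5.0],   # job 0: 2x faster on the first accelerator type
    [ 8.0, 8.0],   # job 1: runs equally fast on either type
    [12.0, 2.0],   # job 2: 6x faster on the first accelerator type
])

def effective_throughput(X, throughputs):
    """Each job's throughput under allocation X, where X[j, a] is the
    fraction of wall-clock time job j spends on accelerator type a."""
    return (X * throughputs).sum(axis=1)

# Heterogeneity-agnostic baseline: split every accelerator type evenly.
num_jobs, num_types = throughputs.shape
X_agnostic = np.full((num_jobs, num_types), 1.0 / num_jobs)

# Heterogeneity-aware allocation (chosen by hand here; a policy would
# obtain it by solving an optimization problem): job 1, which is
# indifferent, absorbs most of the slower type's time, freeing the
# faster type for jobs 0 and 2. Each column still sums to 1, so no
# extra hardware is used.
X_aware = np.array([
    [0.45, 0.10],
    [0.10, 0.80],
    [0.45, 0.10],
])

agnostic = effective_throughput(X_agnostic, throughputs)
aware = effective_throughput(X_aware, throughputs)
# Every job does at least as well, and jobs 1 and 2 do strictly better.
```

A heterogeneity-aware policy then maximizes some function of these effective throughputs (e.g., the minimum across jobs for max-min fairness) subject to the capacity constraints on each accelerator type.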
Gavel’s heterogeneity-aware policies allow a heterogeneous cluster to sustain higher input load and improve end objectives such as average job completion time and makespan by up to 3.5× compared to heterogeneity-agnostic policies. This work will appear in OSDI 2020.
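The round-based mechanism mentioned in the abstract can be sketched as a simple deficit-driven greedy loop (again an illustrative assumption, not Gavel’s actual scheduler; a single device per accelerator type and a fixed job set are assumed): each round, every accelerator goes to the job that is furthest behind its target time fraction on that accelerator type.

```python
import numpy as np

def run_rounds(X_target, num_rounds):
    """Deficit-based greedy sketch of round-based scheduling: each round,
    every accelerator type (one device per type here) is given to the
    not-yet-scheduled job furthest behind its target time fraction."""
    num_jobs, num_types = X_target.shape
    received = np.zeros((num_jobs, num_types))  # rounds run per (job, type)
    for r in range(num_rounds):
        achieved = received / max(r, 1)         # time fractions so far
        deficit = X_target - achieved           # how far behind target
        scheduled = set()                       # one device per job per round
        for a in range(num_types):
            for j in np.argsort(-deficit[:, a]):
                if j not in scheduled:
                    scheduled.add(int(j))
                    received[j, a] += 1
                    break
    return received / num_rounds

# Hypothetical target allocation for 3 jobs on 2 accelerator types
# (each column sums to 1: every device is fully allocated).
X_target = np.array([
    [0.45, 0.10],
    [0.10, 0.80],
    [0.45, 0.10],
])
achieved = run_rounds(X_target, 200)
```

Over many rounds the achieved time fractions track the target allocation; the actual mechanism in the paper additionally handles jobs arriving and completing and space-shared configurations, which this sketch omits.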
Speaker: Keshav Santhanam