SESSION

A PyTorch Approach to ML Infrastructure

OVERVIEW

EXPERIENCE: In Person
TYPE: Breakout
TRACK: Data Science and Machine Learning
INDUSTRY: Enterprise Technology
TECHNOLOGIES: AI/Machine Learning, GenAI/LLMs, Governance
SKILL LEVEL: Intermediate
DURATION: 40 min
For the past decade, the prevailing approaches to scaling ML across organizations and infrastructure have centered on model and pipeline portability: build a unified compiler so a model can be dropped onto any hardware, and a unified pipeline format so a workflow can be dropped onto any compute. Ten years later, these approaches haven't panned out, and enterprise ML is alarmingly fragmented as a result. We propose an entirely new approach: leave the ML methods (algorithms, models, pipelines, utilities, etc.) in place on the hardware or storage they were defined to live on, and make them instantly shareable and accessible from anywhere. Think of it as Google Docs instead of emailing files. We propose a set of open-source, PyTorch-like APIs that make creating and sharing such apps and services intuitive, production-grade, and scalable, while remaining deeply agnostic to the underlying infrastructure and any ML tooling already in place.
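
To make the "leave it in place, call it from anywhere" idea concrete, below is a minimal, standard-library-only sketch of the underlying pattern: a function that stays on the machine it was defined on (next to its GPU or data) and is exposed as a callable service for remote consumers. The `embed` function, host, and port are illustrative assumptions; this toy is an analogy for the pattern, not the speakers' actual API, which wraps it in PyTorch-like abstractions with sharing, auth, and production-grade serving.

```python
# Toy sketch: expose a locally defined ML method as a remotely callable
# service, rather than shipping the model or pipeline to every caller.
# Illustrative only -- not the session's actual API.

from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy
import threading


def embed(text: str) -> list[float]:
    """Stand-in for an ML method that stays where it was defined
    (e.g., next to the GPU or data it needs)."""
    return [float(len(text)), float(sum(map(ord, text)) % 97)]


def serve(host: str = "127.0.0.1", port: int = 8000) -> SimpleXMLRPCServer:
    """Publish the local function as a service instead of copying it around."""
    server = SimpleXMLRPCServer((host, port), allow_none=True, logRequests=False)
    server.register_function(embed, "embed")
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server


if __name__ == "__main__":
    serve()
    # A consumer anywhere on the network calls the method where it lives,
    # instead of re-deploying the model or re-implementing the pipeline locally.
    remote = ServerProxy("http://127.0.0.1:8000")
    print(remote.embed("hello world"))
```

The point of the sketch is the division of responsibility: the method's owner controls where it runs and what it runs on, while consumers get a lightweight handle to call it, which is the "shared document" model the abstract contrasts with "emailing files."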

SESSION SPEAKERS

Rohin Bhasin

Software Engineer
Runhouse

Caroline Chen

Software Engineer
Runhouse