Matteo Interlandi

Scientist, Microsoft

Matteo Interlandi is a Senior Scientist in the Gray Systems Lab (GSL) at Microsoft, working on scalable Machine Learning systems. Before Microsoft, he was a Postdoctoral Scholar in the CS Department at the University of California, Los Angeles, working on Big Data systems. Prior to joining UCLA, he was a researcher at the Qatar Computing Research Institute, and at the Institute for Human and Machine Cognition. He obtained his PhD in Computer Science from the University of Modena and Reggio Emilia.

Past sessions

The ever-increasing interest in deep learning and neural networks has led to a proliferation of processing frameworks such as TensorFlow and PyTorch. These libraries are built around the idea of a computational graph that models the dataflow of individual units. Because tensors are their basic computational unit, these frameworks can run efficiently on hardware accelerators (e.g. GPUs). Traditional machine learning (ML) models, such as linear regressions and decision trees in scikit-learn, cannot currently run on GPUs, missing out on the potential accelerations that deep learning and neural networks enjoy.

 

In this talk, we'll show how you can use Hummingbird to achieve up to 1000x speedups in inference on GPUs by converting your traditional ML models to tensor-based models (PyTorch and TVM). https://github.com/microsoft/hummingbird

 

This talk is for intermediate audiences who use traditional machine learning and want to speed up inference with these models. After watching the talk, the audience should be able to convert their traditional models to tensor-based models in roughly five lines of code and try them out on GPUs.

 

Outline: 

  • Introduction of what ML inference is (and why it's different from training) 
  • Motivation: Tensor-based DNN frameworks allow inference on GPU, but “traditional” ML frameworks do not 
  • Why “traditional” ML methods are important 
  • Introduction of what Hummingbird does and its main benefits 
  • Deep dive on how traditional ML models are built 
  • Brief intro on how the Hummingbird converter works 
  • Example of how Hummingbird can convert a tree model into a tensor-based model 
  • Other models 
  • Demo 
  • Status 
  • Q&A 
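To make the tree-to-tensor idea in the outline concrete, here is a hand-worked sketch in the spirit of Hummingbird's GEMM compilation strategy. The matrices are built by hand for a toy two-node tree (the matrix names and the tree itself are invented for illustration):

```python
import numpy as np

# Toy decision tree, hand-compiled to tensors.
# Internal nodes: n0: x0 < 0.5 ; n1: x1 < 0.3.
# Leaves: L0 (n0 true, n1 true), L1 (n0 true, n1 false), L2 (n0 false).
A = np.array([[1., 0.],          # A[f, i] = 1 if node i tests feature f
              [0., 1.]])
B = np.array([0.5, 0.3])         # threshold of each internal node
C = np.array([[ 1.,  1., -1.],   # C[i, l]: +1/-1 if node i must evaluate
              [ 1., -1.,  0.]])  # true/false on the path to leaf l
D = np.array([2., 1., 0.])       # number of "true" tests on each leaf's path
E = np.array([[1., 0.],          # leaf -> class, one-hot: L0->0, L1->1, L2->0
              [0., 1.],
              [1., 0.]])

def predict(X):
    T = (X @ A < B).astype(np.float64)   # evaluate all node tests at once
    L = (T @ C == D).astype(np.float64)  # one-hot leaf selection
    return (L @ E).argmax(axis=1)        # class of the selected leaf

X = np.array([[0.2, 0.1],   # both tests true  -> L0 -> class 0
              [0.2, 0.9],   # n0 true, n1 false -> L1 -> class 1
              [0.9, 0.1]])  # n0 false          -> L2 -> class 0
print(predict(X))  # [0 1 0]
```

Every step is a dense matrix operation, which is exactly what GPUs execute efficiently; real trees just make these matrices larger.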
In this session watch:
Karla Saur, Senior Research Software Development Engineer, Microsoft
Matteo Interlandi, Scientist, Microsoft


Summit 2017 Debugging Big Data Analytics in Apache Spark with BigDebug

June 6, 2017 05:00 PM PT

Debugging big data analytics in Data-Intensive Scalable Computing (DISC) systems is a time-consuming effort. Today’s DISC systems offer very little tooling for debugging and, as a result, programmers spend countless hours analyzing log files and performing trial and error debugging. To aid this effort, UCLA developed BigDebug, an interactive debugging tool and automated fault localization service to help Apache Spark developers in debugging big data analytics.
To emulate interactive step-wise debugging without reducing throughput, BigDebug provides simulated breakpoints that enable a user to inspect a program without actually pausing the entire distributed computation. It also supports on-demand watchpoints that enable a user to retrieve intermediate data using a guard predicate and transfer the selected data on demand. To understand the flow of individual records within a pipeline of RDD transformations, BigDebug provides data provenance capability, which can help understand how errors propagate through data processing steps. To support efficient trial-and-error debugging, BigDebug enables users to change program logic in response to an error at runtime through a real-time code fix feature, and selectively replay the execution from that step. Finally, BigDebug proposes an automated fault localization service that leverages all the above features together to isolate failure-inducing inputs, diagnose the root cause of an error, and resume the workflow for only affected data and code.
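The guarded-watchpoint idea can be illustrated with a toy, single-machine sketch: scan intermediate records against a guard predicate and collect the matches without ever pausing the pipeline. This is plain Python with invented names, not BigDebug's actual API:

```python
# A watchpoint wraps a stream of records: matching records are captured
# for the developer while the computation continues untouched.
def watchpoint(records, guard, captured):
    for r in records:
        if guard(r):
            captured.append(r)  # copy the suspicious record aside
        yield r                 # pass every record downstream as-is

captured = []
data = [1, -3, 5, -7, 9]
result = [x * 2 for x in watchpoint(data, lambda x: x < 0, captured)]
print(result)    # [2, -6, 10, -14, 18]
print(captured)  # [-3, -7]
```

In a real DISC setting the capture side would ship matching records back to the driver on demand rather than accumulate them on a worker.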

The BigDebug system should contribute to improving Spark developer productivity and the correctness of their Big Data applications. This big data debugging effort is led by UCLA Professors Miryung Kim and Tyson Condie, and has produced several research papers in top Software Engineering and Database conferences. The current version of BigDebug is publicly available at https://sites.google.com/site/sparkbigdebug/.

Session hashtag: #SFr8

Summit 2017 Lazy Join Optimizations Without Upfront Statistics

June 5, 2017 05:00 PM PT

Modern Data-Intensive Scalable Computing (DISC) systems such as Apache Spark do not support sophisticated cost-based query optimizers, either because they are specifically designed to process data that resides in external storage systems (e.g. HDFS) or because they lack the necessary data statistics. Consequently, many crucial optimizations, such as join ordering and plan selection, are presently out of scope for these DISC system optimizers. Yet join order is one of the most important decisions a cost-based optimizer can make, because a wrong order can make query response time more than an order of magnitude slower than the better order.

Session hashtag: #SFr4
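A toy example (plain Python, invented data) makes the join-order point concrete: the same three-way join produces a 500-tuple intermediate result in one order and only a 2-tuple intermediate in the other, while the final answers are identical.

```python
# Naive nested-loop join on lists of tuples, joining on one column each.
def join(left, right, lkey, rkey):
    return [l + r for l in left for r in right if l[lkey] == r[rkey]]

R = [(i, i % 2) for i in range(50)]              # R(a, b): 50 rows
S = [(b, c) for b in (0, 1) for c in range(10)]  # S(b, c): 20 rows
T = [(0, "x")]                                   # T(c, d): 1 row

rs = join(R, S, 1, 0)        # plan 1: (R ⋈ S) first -> 500 intermediate tuples
plan1 = join(rs, T, 3, 0)
st = join(S, T, 1, 0)        # plan 2: (S ⋈ T) first -> 2 intermediate tuples
plan2 = join(R, st, 1, 0)

print(len(rs), len(st))                 # 500 2
print(sorted(plan1) == sorted(plan2))   # True: both plans give the same result
```

A cost-based optimizer with good statistics would pick plan 2; without statistics, a DISC system may pick either one.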