Michael Zargham

Director, Data Science, Cadent

Michael holds a PhD in Optimization and Decision Science from the University of Pennsylvania, with a focus on constrained resource allocation problems. He leads the Data Science and Engineering initiatives at Cadent, a leading provider of media, advertising technology, and data solutions for the pay-TV industry, and has taught Convex Optimization at UPenn. He has been a practicing data-driven business architect since 2005, beginning with various subcontracts during his undergraduate and graduate work.

SESSIONS

Apache Spark for Machine Learning with High Dimensional Labels

This talk covers the tools we used, the hurdles we faced, and the workarounds we developed, with help from Databricks support, while building a custom machine learning model to predict TV ratings for different networks and demographics. The Apache Spark machine learning and DataFrame APIs make it easy to produce a machine learning pipeline for an archetypal supervised learning problem. In our applications at Cadent, however, we face high-dimensional labels and relatively low-dimensional features; at first pass such a problem is all but intractable, but thanks to a large number of historical records and the tools available in Apache Spark, we were able to construct a multi-stage model that forecasts accurately enough to drive the business application. Over the course of our work we came across many tools that made our lives easier, and others that forced workarounds. In this talk we will review our custom multi-stage methodology, discuss the challenges we faced, and walk through the key steps that made the project successful.
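The abstract does not spell out how the multi-stage model is built, but the core idea it names (learning a high-dimensional label from low-dimensional features) can be illustrated by decomposing the vector-valued label into independent per-dimension scalar regressions. The sketch below does this in plain Python with a closed-form least-squares fit; it is a conceptual stand-in, not the talk's actual Spark pipeline, and all function names here are illustrative.

```python
# Illustrative sketch: reduce a vector-label regression problem to one
# scalar least-squares fit per label dimension. The talk's real
# multi-stage Spark pipeline is not public; this only shows the
# decomposition idea behind "high-dimensional labels, low-dimensional
# features".

def fit_line(xs, ys):
    """Closed-form least squares for y = a*x + b with a single feature."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

def fit_vector_label(xs, label_vectors):
    """Fit one scalar model per dimension of the label vector."""
    dims = len(label_vectors[0])
    return [fit_line(xs, [v[d] for v in label_vectors]) for d in range(dims)]

def predict(models, x):
    """Assemble the per-dimension predictions back into a vector."""
    return [a * x + b for a, b in models]

# Toy data: one feature, a 3-dimensional label per record.
xs = [1.0, 2.0, 3.0, 4.0]
labels = [[2.0, 1.0, 0.0],
          [4.0, 2.0, -1.0],
          [6.0, 3.0, -2.0],
          [8.0, 4.0, -3.0]]
models = fit_vector_label(xs, labels)
print(predict(models, 5.0))  # -> [10.0, 5.0, -4.0]
```

In a Spark setting each per-dimension fit would be a pipeline stage over a DataFrame column rather than a Python loop, but the shape of the decomposition is the same.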

Connect Code to Resource Consumption to Scale Your Production Spark Applications

Apache Spark is a dynamic execution engine that can take relatively simple Scala code and create complex, optimized execution plans. In this talk, we will describe how user code translates into Spark drivers, executors, stages, tasks, transformations, and shuffles. We will also discuss sources of information on how Spark applications use hardware resources, and show how application developers can use this information to write more efficient code. We will demonstrate how Pepperdata's products identify such resource usage and tie it to specific lines of code, and how Spark application owners can quickly find the root causes of common problems such as job slowdowns, inadequate memory configuration, and Java garbage collection issues.
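The distinction the abstract draws between transformations and shuffles can be sketched without Spark itself: a narrow transformation (like a map) stays within each partition, while a wide transformation must move records between partitions by key, which is the costly shuffle step. The plain-Python simulation below is conceptual only; the function names are illustrative and are not Spark APIs.

```python
# Conceptual sketch (plain Python, not Spark): why a wide
# transformation forces a shuffle while a narrow one does not.
# Partition boundaries and record movement are simulated explicitly.

def map_narrow(partitions, fn):
    """Narrow: each output partition depends on exactly one input
    partition, so no records move between partitions."""
    return [[fn(rec) for rec in part] for part in partitions]

def shuffle_by_key(partitions, num_out):
    """Wide: records are rehashed by key and moved to new partitions.
    In a cluster this movement crosses the network/disk, which is why
    shuffles dominate the cost of many Spark jobs."""
    out = [[] for _ in range(num_out)]
    for part in partitions:
        for key, value in part:
            out[hash(key) % num_out].append((key, value))
    return out

parts = [[("a", 1), ("b", 2)], [("a", 3), ("c", 4)]]
doubled = map_narrow(parts, lambda kv: (kv[0], kv[1] * 2))  # no shuffle
shuffled = shuffle_by_key(doubled, 2)                       # shuffle
# After the shuffle, all records for a given key live in one partition,
# so a per-key reduction (the next stage) can run locally.
```

The stage boundary in a real Spark plan falls exactly at the `shuffle_by_key` step: everything before it runs pipelined within partitions, everything after it starts from the redistributed data.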

Data Science Deep Dive: Spark ML with High Dimensional Labels

This talk is a detailed extension of our Spark Summit East talk on the same topic. We will review the hurdles we faced and the workarounds we developed, with help from Databricks support, while building a custom machine learning model to predict TV ratings for different networks and demographics. Attendees should leave this session with enough knowledge to recognize situations where our method applies and to implement it themselves. Specifically, we dig into the data characteristics that make our problem inherently challenging and show how we compose existing tools in the ML and DataFrame APIs to create a machine learning pipeline capable of learning real-valued vector labels from relatively low-dimensional feature spaces. Our deep dive covers the feature engineering techniques employed, the reference architecture for our n-dimensional regression technique, and the extra data formatting steps required for applying the built-in evaluator tools to n-dimensional models.
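The "extra data formatting steps" mentioned above exist because built-in regression evaluators score a scalar label column against a scalar prediction column. One common approach, sketched here in plain Python as a stand-in for the DataFrame operation, is to explode each vector label/prediction pair into one row per dimension before evaluating; the helper names are illustrative, not the talk's actual code.

```python
# Sketch of the evaluator-formatting step: built-in evaluators expect
# scalar (label, prediction) pairs, so an n-dimensional model's vector
# outputs are exploded into one row per dimension first. Pure-Python
# stand-in for what would be a DataFrame explode in the real pipeline.
import math

def explode(rows):
    """(label_vec, pred_vec) pairs -> one (dim, label, prediction) row
    per vector component."""
    return [(d, lv[d], pv[d])
            for lv, pv in rows
            for d in range(len(lv))]

def rmse(scalar_rows):
    """Scalar RMSE over the exploded rows, mirroring what a built-in
    regression evaluator would compute on the long-format data."""
    errs = [(label - pred) ** 2 for _, label, pred in scalar_rows]
    return math.sqrt(sum(errs) / len(errs))

# Two records, each with a 2-dimensional label and prediction.
rows = [([1.0, 2.0], [1.0, 1.0]),
        ([3.0, 4.0], [2.0, 4.0])]
long_rows = explode(rows)
print(rmse(long_rows))  # sqrt(0.5), about 0.7071
```

Keeping the dimension index in each exploded row also allows grouping by dimension, so the same evaluator logic can report per-network or per-demographic error rather than only a single aggregate.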