Layla Yang

Solutions Architect, Databricks

My name is Layla Yang. I am a Solutions Architect at Databricks. Before Databricks, I started my career in the AdTech industry, focusing on building machine learning models and data products. I spent a few years at AdTech startups designing, building, and deploying automated predictive algorithms into production for real-time bidding (RTB), plugged into major ad exchanges and SSPs. My work also included media mix modeling (MMM), DMP user segmentation, and customer recommendation engines. Currently I work with startups in the NYC and Boston area to scale their existing data engineering and data science efforts leveraging Apache Spark technology. I studied physics at university, and I love skiing.

Past sessions

How do you connect the effectiveness of your ad spend to the sales it drives? Introducing the Sales Forecasting and Advertising Attribution Solution Accelerator. Whether you’re an ad agency or an in-house marketing analytics team, this solution accelerator lets you easily incorporate campaign data from a variety of historical and current sources -- streaming digital as well as batch TV, out-of-home (OOH), print, and direct mail -- to see how these channels drive sales at a local level and to forecast future performance. Attribution is normally a fairly expensive process, particularly when run against constantly updating datasets. This session will demonstrate how Databricks facilitates the multi-stage Delta Lake transformation, machine learning, and visualization of campaign data to provide actionable insights on a daily basis.
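To make the multi-stage Delta Lake flow concrete, here is a minimal sketch of a bronze/silver/gold pipeline in PySpark. It assumes a Databricks environment where `spark` is already defined and Delta Lake is available; the paths and column names (impression_id, event_ts, region, channel, spend) are illustrative assumptions, not the accelerator's actual schema.

```python
# Minimal sketch of a multi-stage (bronze/silver/gold) Delta Lake pipeline.
# Assumes a Databricks notebook where `spark` is predefined; paths and columns
# are illustrative placeholders.
from pyspark.sql import functions as F

# Bronze: land raw campaign impressions as-is
raw_df = spark.read.json("/mnt/raw/campaign_impressions/")
raw_df.write.format("delta").mode("append").save("/mnt/delta/bronze/impressions")

# Silver: deduplicate and conform types for analysis
bronze = spark.read.format("delta").load("/mnt/delta/bronze/impressions")
silver = (bronze
          .withColumn("event_date", F.to_date("event_ts"))
          .dropDuplicates(["impression_id"]))
silver.write.format("delta").mode("overwrite").save("/mnt/delta/silver/impressions")

# Gold: daily aggregates by region and channel, ready for attribution and forecasting
gold = (silver.groupBy("event_date", "region", "channel")
              .agg(F.count("*").alias("impressions"),
                   F.sum("spend").alias("spend")))
gold.write.format("delta").mode("overwrite").save("/mnt/delta/gold/daily_campaign_metrics")
```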

Afterwards, media and entertainment specialist Solutions Architect Layla Yang will be available to answer questions about this solution, as well as any other media, ad tech, or marketing analytics questions you may have.

Speaker: Layla Yang

Summit Europe 2019 Unified Approach to Interpret Machine Learning Model: SHAP + LIME

October 16, 2019 05:00 PM PT

For companies that solve real-world problems and generate revenue from data science products, being able to understand why a model makes a certain prediction can be as crucial as achieving high prediction accuracy. However, as data scientists pursue higher accuracy by implementing complex algorithms such as ensemble or deep learning models, the algorithm itself becomes a black box, creating a trade-off between the accuracy and the interpretability of a model’s output.

To address this problem, the unified framework SHAP (SHapley Additive exPlanations) was developed to help users interpret the predictions of complex models. In this session, we will talk about how to apply SHAP to various modeling approaches (GLM, XGBoost, CNN) to explain how each feature contributes to a particular prediction and to extract intuitive insights from it. This talk introduces the concept of a general-purpose model explainer and helps practitioners understand SHAP and its applications.
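As a concrete illustration of the workflow described above, here is a minimal sketch using the shap package's TreeExplainer with an XGBoost regressor. The synthetic dataset and feature names are assumptions for illustration only; the same pattern extends to other model types via shap's KernelExplainer and DeepExplainer.

```python
# Minimal sketch: explaining an XGBoost model's predictions with SHAP.
# The dataset and column names are synthetic placeholders.
import numpy as np
import pandas as pd
import shap
import xgboost as xgb

# Synthetic feature matrix and target standing in for real data
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(500, 3)),
                 columns=["tv_spend", "digital_spend", "print_spend"])
y = 2.0 * X["tv_spend"] + 0.5 * X["digital_spend"] + rng.normal(scale=0.1, size=500)

model = xgb.XGBRegressor(n_estimators=100, max_depth=3).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Per-feature contributions for a single prediction; together with the
# explainer's expected value they sum to the model output for that row
print(dict(zip(X.columns, shap_values[0])))

# Global view: mean absolute SHAP value per feature
shap.summary_plot(shap_values, X, plot_type="bar")
```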

Speaker: Layla Yang