Stefan Panayotov

Data Engineer, Cadent

In his current role as a Data Engineer at Cadent, Stefan focuses on Big Data computational platform solutions such as Spark, enabling Cadent to leverage Data Science and Machine Learning for faster and better business results. Prior to Cadent, Stefan was an Application Developer at QVC, where he built logistics and warehouse software solutions for the retail industry. He has also spent time as a SQL Developer at CCP, a Senior Software Analyst at EXE Technologies, and an IT Consultant at UNISYS. Stefan received his PhD in Computer Science from the Bulgarian Academy of Sciences, where he also served as an Assistant Professor.

PAST SESSIONS

Apache Spark for Machine Learning with High Dimensional Labels (Summit East 2017)

This talk covers the tools we used, the hurdles we faced, and the workarounds we developed, with help from Databricks support, in our effort to build a custom machine learning model and use it to predict TV ratings for different networks and demographics. The Apache Spark machine learning and DataFrame APIs make it remarkably easy to produce a machine learning pipeline for an archetypal supervised learning problem. In our applications at Cadent, we face a challenge with high dimensional labels and relatively low dimensional features; at first pass such a problem is all but intractable, but thanks to a large number of historical records and the tools available in Apache Spark, we were able to construct a multi-stage model capable of forecasting with sufficient accuracy to drive the business application. Over the course of our work we came across many tools that made our lives easier, and others that forced workarounds. In this talk we will review our custom multi-stage methodology, the challenges we faced, and the key steps that made our project successful.
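
Below is a minimal Scala sketch of the kind of Spark ML pipeline the abstract refers to; the column names, data sets (trainingDf, testDf), and the choice of LinearRegression are illustrative assumptions, not Cadent's actual model.

    import org.apache.spark.ml.Pipeline
    import org.apache.spark.ml.feature.VectorAssembler
    import org.apache.spark.ml.regression.LinearRegression

    // Hypothetical low-dimensional feature columns (e.g., network, daypart, demographic).
    val assembler = new VectorAssembler()
      .setInputCols(Array("networkIdx", "daypartIdx", "demoIdx"))
      .setOutputCol("features")

    // A single regression stage; the multi-stage model described in the talk
    // would chain several such stages, feeding earlier predictions forward.
    val lr = new LinearRegression()
      .setFeaturesCol("features")
      .setLabelCol("rating")

    val pipeline = new Pipeline().setStages(Array(assembler, lr))
    val model = pipeline.fit(trainingDf)    // trainingDf: hypothetical DataFrame of historical records
    val forecasts = model.transform(testDf) // testDf: hypothetical DataFrame to score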

Data Science Deep Dive: Spark ML with High Dimensional Labels (Summit 2017)

This talk is a detailed extension of our Spark Summit East talk on the same topic. We will review the hurdles we faced and the workarounds we developed, with help from Databricks support, in our effort to build a custom machine learning model and use it to predict TV ratings for different networks and demographics. Attendees should leave this session with enough knowledge to recognize situations where our method is applicable and to implement it themselves. Specifically, we dig into the details of the data characteristics that make our problem inherently challenging, and how we compose existing tools in the ML and DataFrames APIs to create a machine learning pipeline capable of learning real-valued vector labels despite relatively low dimensional feature spaces. Our deep dive will include the feature engineering techniques we employed, the reference architecture for our n-dimensional regression technique, and the extra data formatting steps required to apply the built-in evaluator tools to n-dimensional models. #SFdd7
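
One way to make the built-in evaluators work with vector labels, as the abstract hints, is to fit one scalar regressor per label dimension and score each dimension separately. The sketch below assumes a DataFrame df with a "features" vector column and the label stored as an array column "labelVec"; the names and the 24-dimension size are hypothetical, not the talk's actual setup.

    import org.apache.spark.ml.evaluation.RegressionEvaluator
    import org.apache.spark.ml.regression.LinearRegression
    import org.apache.spark.sql.functions.col

    val numDims = 24  // hypothetical label dimensionality (e.g., one per demographic)

    // Fit one scalar model per label dimension.
    val models = (0 until numDims).map { i =>
      val slice = df.withColumn("label", col("labelVec")(i))
      new LinearRegression().setFeaturesCol("features").fit(slice)
    }

    // RegressionEvaluator expects a scalar label, so evaluate per dimension
    // and aggregate, instead of evaluating the vector label directly.
    val evaluator = new RegressionEvaluator().setMetricName("rmse")
    val avgRmse = (0 until numDims).map { i =>
      val scored = models(i).transform(df.withColumn("label", col("labelVec")(i)))
      evaluator.evaluate(scored)
    }.sum / numDims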

Connect Code to Resource Consumption to Scale Your Production Spark Applications (Summit 2017)

Apache Spark is a dynamic execution engine that can take relatively simple Scala code and create complex and optimized execution plans. In this talk, we will describe how user code translates into Spark drivers, executors, stages, tasks, transformations, and shuffles. We will also discuss various sources of information on how Spark applications use hardware resources, and show how application developers can use this information to write more efficient code. We will show how Pepperdata's products can clearly identify such usage and tie it to specific lines of code, and how Spark application owners can quickly identify the root causes of common problems such as job slowdowns, inadequate memory configuration, and Java garbage collection issues.
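
As a generic illustration (not Pepperdata's tooling), the snippet below shows how a narrow transformation stays within one stage while a wide transformation introduces a shuffle, and how explain() exposes the physical plan that ties lines of code to stages, tasks, and shuffles.

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder.appName("plan-demo").master("local[*]").getOrCreate()
    import spark.implicits._

    val df = (1 to 1000000).toDF("id")

    // Narrow transformation: no shuffle, executes within a single stage.
    val doubled = df.selectExpr("id * 2 as id2")

    // Wide transformation: groupBy forces a shuffle, creating a stage boundary
    // (visible as an Exchange operator in the plan and as separate stages in the Spark UI).
    val counted = doubled.groupBy($"id2" % 10).count()

    counted.explain() // prints the physical plan, including the Exchange (shuffle)
    counted.show()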
