
Avoiding Performance Potholes: Scaling Python for Data Science Using Apache Spark


Python is the de facto language of data science and engineering, which affords it an outsized community of users. However, when data scientists and engineers come to Spark with a Python background, unexpected performance potholes can stand in the way of progress. These "performance potholes" include PySpark's seemingly easy integration with existing packages (e.g., pandas, SciPy, and scikit-learn), Python UDFs, and use of the RDD APIs instead of Spark SQL DataFrames without understanding the implications. Additionally, Spark 2.3 changes the game even further with vectorized UDFs. In this talk, we will discuss:

– How PySpark works broadly (& why it matters)
– Integrating popular Python packages with Spark
– Python UDFs (how to [not] use them)
– RDDs vs Spark SQL DataFrames
– Spark 2.3 Vectorized UDFs (see the sketch after this list)
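
As a taste of the UDF and RDD-vs-DataFrame topics above, here is a minimal sketch (not taken from the talk itself) contrasting a row-at-a-time Python UDF, a Spark 2.3 vectorized Pandas UDF, and a built-in Spark SQL expression, plus the same aggregation via the RDD and DataFrame APIs. It assumes Spark 2.3+ with PyArrow installed; the app name and column names are illustrative.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.functions import udf, pandas_udf, PandasUDFType
    from pyspark.sql.types import DoubleType

    spark = SparkSession.builder.appName("udf-potholes").getOrCreate()
    df = spark.range(0, 1000).withColumn("x", F.col("id").cast("double"))

    # 1) Row-at-a-time Python UDF: every value is serialized between the
    #    JVM and a Python worker individually -- the classic pothole.
    @udf(DoubleType())
    def plus_one(x):
        return x + 1.0

    # 2) Spark 2.3 vectorized (Pandas) UDF: whole columns move via Apache
    #    Arrow and are processed as pandas Series, amortizing the
    #    serialization cost.
    @pandas_udf(DoubleType(), PandasUDFType.SCALAR)
    def plus_one_vec(x):
        return x + 1.0

    # 3) Built-in Spark SQL expression: stays entirely in the JVM and is
    #    optimized by Catalyst -- prefer this whenever one exists.
    df.select(
        plus_one("x").alias("py_udf"),
        plus_one_vec("x").alias("pandas_udf"),
        (F.col("x") + 1.0).alias("builtin"),
    ).show(3)

    # RDD API vs. DataFrame API for the same aggregation: the RDD version
    # pipes every record through Python lambdas with no query optimization,
    # while the DataFrame version is planned by Catalyst.
    rdd_sum = df.rdd.map(lambda row: row.x).sum()
    df_sum = df.agg(F.sum("x")).first()[0]

The ordering above mirrors the general guidance for PySpark performance: use built-in Spark SQL expressions where possible, fall back to vectorized UDFs when custom Python logic is needed, and reserve row-at-a-time UDFs and raw RDD manipulation for cases where nothing else fits.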

Session hashtag: #Py9SAIS

About Garren Staubli

Garren is a Senior Data Engineer at Blueprint Consulting Services in Bellevue, WA, and has focused his work on Apache Spark for the past three years. He has built production Structured Streaming and Spark ML applications for real-time decision making, developed a robust real-time big data science and reporting solution for iSpot.tv, and architected the core IP data assets for a B2B marketing company. His interests include enabling data scientists and engineers to solve vexing problems with big data at scale. Garren holds a BBA in Management Information Systems from Washington State University.