Introducing the New Python Data Source API for Apache Spark™
OVERVIEW
| EXPERIENCE | In Person |
| --- | --- |
| TYPE | Breakout |
| TRACK | Data Engineering and Streaming |
| INDUSTRY | Energy and Utilities, Enterprise Technology |
| TECHNOLOGIES | AI/Machine Learning, Apache Spark, ETL |
| SKILL LEVEL | Intermediate |
| DURATION | 40 min |
The introduction of the Python Data Source API for Apache Spark™ marks a significant step toward making big data processing more accessible to Python developers. Traditionally, integrating custom data sources into Spark required understanding Scala, posing a challenge for the vast Python community. The new API simplifies this process, allowing developers to implement custom data sources directly in Python without the complexities of the existing Scala and Java APIs. This session will outline the API's key features, including simplified operations for reading and writing data, and its benefits to Python developers. We aim to open up Spark to more Python developers, making the big data ecosystem more inclusive and user-friendly. A Databricks customer will also join us to co-present this talk.
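As a rough illustration of what the session covers, the sketch below wires up a toy batch read path with the `pyspark.sql.datasource` classes that ship with the Python Data Source API (Spark 4.0 and recent Databricks runtimes). The source name `demo_counter`, its option, and its schema are invented for the example; a real source would pull rows from an external system instead.

```python
# A minimal sketch of a custom batch data source written entirely in Python.
# Assumes a running SparkSession named `spark` with the Python Data Source API
# available (pyspark.sql.datasource).
from pyspark.sql.datasource import DataSource, DataSourceReader


class DemoCounterReader(DataSourceReader):
    """Yields a fixed number of rows; a real reader would fetch from a
    REST API, message queue, custom file format, etc."""

    def __init__(self, options):
        # Options passed via .option(...) arrive as strings.
        self.num_rows = int(options.get("numRows", 5))

    def read(self, partition):
        # Each yielded tuple must match the schema declared by the source.
        for i in range(self.num_rows):
            yield (i, f"value_{i}")


class DemoCounterDataSource(DataSource):
    @classmethod
    def name(cls):
        # Short name used with spark.read.format(...).
        return "demo_counter"

    def schema(self):
        # Schema can be a DDL string or a StructType.
        return "id INT, value STRING"

    def reader(self, schema):
        return DemoCounterReader(self.options)


# Register the source on the active session, then read it like any built-in format.
spark.dataSource.register(DemoCounterDataSource)
df = spark.read.format("demo_counter").option("numRows", "3").load()
df.show()
```

The write path follows the same pattern: the data source returns a writer object whose `write` method consumes an iterator of rows, so `df.write.format("demo_counter").save(...)` can be supported from pure Python as well.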
SESSION SPEAKERS
Allison Wang
Sr. Software Engineer
Databricks
Ryan Nienhuis
Sr. Staff Product Manager
Databricks