Holden Karau is a transgender Canadian, an Apache Spark committer, and co-author of Learning Spark & High Performance Spark. When not in San Francisco working as a software development engineer at IBM's Spark Technology Center, Holden talks internationally about Spark and holds office hours at coffee shops at home and abroad. She makes frequent contributions to Spark, specializing in PySpark and Machine Learning. Prior to IBM, she worked on a variety of distributed and classification problems at Alpine, Databricks, Google, Foursquare, and Amazon. She graduated from the University of Waterloo with a Bachelor of Mathematics in Computer Science.
Apache Arrow is new in Spark 2.3 and offers faster data interchange between Spark and Python. Apache Arrow also has connections to TensorFlow (and even without those, it can be fed from pandas). This talk will look at how to use Arrow to accelerate copying data from Spark to TensorFlow, and how to expose basic functionality in Scala for working with TensorFlow. From there we will dive into how to construct new deep learning ML pipeline stages in Python and make them available to our friends in Scala land.
PySpark is getting awesomer in Spark 2.3 with vectorized UDFs, and there are even more wonderful things on the horizon (and currently available as WIP packages). This talk will start by illustrating how to use PySpark's new vectorized UDFs to make ML pipeline stages. Since most of us use Python in part because of its wonderful libraries, like pandas, numpy, and antigravity*, it's important to be able to make sure that our dependencies are available on our cluster. Historically there have been a few ways to do this, which we will cover. If there is time near the end we will talk about how to expose your Python code to Scala so everyone can use your fancy deep learning code (if you want them to).

*Ok, maybe not a real thing, but insert the super-specialized domain-specific library you use instead :)
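A rough sketch of what a Spark 2.3 scalar vectorized UDF looks like: the core logic is a plain pandas function that receives a `pandas.Series` per batch and returns one of the same length, so it can be tested locally without a cluster. The Spark registration is shown in comments (the column name "value" is a made-up example):

```python
import pandas as pd

# Batch logic with the scalar pandas_udf contract:
# one pandas Series in, one Series of the same length out.
def plus_one(s: pd.Series) -> pd.Series:
    return s + 1

# On a Spark 2.3 cluster you would wrap and apply it roughly like this:
#   from pyspark.sql.functions import pandas_udf
#   plus_one_udf = pandas_udf(plus_one, "double")
#   df.withColumn("value_plus_one", plus_one_udf(df["value"]))

# The same function is testable with plain pandas:
print(plus_one(pd.Series([1.0, 2.0, 3.0])).tolist())  # [2.0, 3.0, 4.0]
```

Keeping the batch logic as an ordinary pandas function is also what makes it reusable inside custom ML pipeline stages.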
Everyone who has maintained a search cluster knows the pain of keeping our online update code and offline reindexing pipelines in sync. Subtle bugs can pop up when our data is indexed differently depending on the context. By using Spark & Spark Streaming we can reuse the same indexing code between contexts, and even reduce overhead by talking directly to the correct indexing node. Sometimes we also need to use search data as part of our distributed map-reduce jobs, so we will illustrate how to use Elasticsearch as a side data source with Spark. We will demonstrate both of these tasks with two real examples using the Twitter firehose: in the first we will index tweets in a geospatial context, and in the second we will use the same index to determine the top hashtags per region.
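The "reuse the same indexing code between contexts" idea can be sketched as follows; this is a hypothetical illustration (the function name `prepare_doc` and the field names are made up), with the Spark/elasticsearch-hadoop write shown only in comments since it needs a live cluster:

```python
# One document-preparation function, shared by both the online update
# path and the offline Spark reindexing job, so tweets are always
# indexed the same way regardless of context.
def prepare_doc(tweet):
    """Turn a raw tweet dict into the document we index."""
    text = tweet["text"].lower()
    return {
        "id": tweet["id"],
        "text": text,
        "hashtags": [w for w in text.split() if w.startswith("#")],
    }

# Offline: map the same function over the batch of tweets, then write
# with the elasticsearch-hadoop connector, e.g. (sketch):
#   docs = tweets_rdd.map(prepare_doc)
#   ... save via org.elasticsearch.hadoop.mr.EsOutputFormat ...
# Online: call prepare_doc(tweet) before each single-document index request.

doc = prepare_doc({"id": 1, "text": "Spark #streaming in #SF"})
print(doc["hashtags"])  # ['#streaming', '#sf']
```

Because the transformation lives in one place, a change to (say) hashtag normalization can never drift between the streaming and batch pipelines.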