Spark + AI Summit 2019 features a number of 1-day training workshops that include a mix of instruction and hands-on exercises to help you improve your Apache Spark skills.
Training is offered as an add-on to the Conference Pass.
Students will need to bring their own laptop with the Chrome or Firefox browser and unfettered access to *.databricks.com.
The Data Science with Apache Spark workshop will show how to use Apache Spark to perform exploratory data analysis (EDA), develop machine learning pipelines, and apply the algorithms available in the Spark MLlib DataFrames API. It is designed for software developers, data analysts, data engineers, and data scientists.
It will also cover parallelizing machine learning algorithms at a conceptual level. The workshop will take a pragmatic approach, with a focus on using Apache Spark for data analysis and building models using MLlib, while limiting the time spent on machine learning theory and the internal workings of Spark.
We will work through examples showing how to apply Apache Spark to iterate faster and develop models on massive datasets. The workshop will give you the tools to be productive with Spark on practical data analysis tasks and machine learning problems. After completing it, you should be comfortable with DataFrames, the Spark MLlib DataFrames API, and the related documentation. These building blocks will enable you to use Apache Spark to solve a variety of data analysis and machine learning tasks.
Programming experience in Python or Scala.
Background in data science (strongly recommended).
Basic knowledge of Spark DataFrames (recommended).
Brief conceptual reviews of data science techniques will be given before the techniques are used. Labs and demos will be available in both Python and Scala.
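To give a flavor of the labs, here is a minimal sketch of an MLlib DataFrames pipeline in Python; the column names and toy data are illustrative assumptions, not material from the course.

```python
# Minimal MLlib DataFrames pipeline sketch (columns and data are
# hypothetical, for illustration only).
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("eda-demo").getOrCreate()

# A toy DataFrame standing in for a real dataset.
df = spark.createDataFrame(
    [(1.0, 2.0, 5.0), (2.0, 3.0, 8.0), (3.0, 4.0, 11.0)],
    ["x1", "x2", "label"],
)

# Assemble feature columns into a single vector, then fit a model.
assembler = VectorAssembler(inputCols=["x1", "x2"], outputCol="features")
lr = LinearRegression(featuresCol="features", labelCol="label")
model = Pipeline(stages=[assembler, lr]).fit(df)
model.transform(df).select("features", "label", "prediction").show()
```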
This course is aimed at the practicing data scientist who is eager to get started with deep learning, as well as at software engineers and technical managers interested in a thorough, hands-on overview of deep learning and its integration with Apache Spark.
The course covers the fundamentals of neural networks and how to build distributed TensorFlow models on top of Spark DataFrames. Throughout the class, you will use Keras, TensorFlow, Deep Learning Pipelines, and Horovod to build and tune models. This course is taught entirely in Python.
Each topic includes lecture content along with hands-on labs in the Databricks notebook environment.
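As a small, hedged taste of the tooling (layer sizes and data here are illustrative assumptions, not the course's actual exercises), a Keras model of the kind built in the labs might look like this:

```python
# Minimal Keras sketch (layer sizes and data are hypothetical).
import numpy as np
from tensorflow import keras

# Toy data standing in for a real training set.
x = np.random.rand(256, 10).astype("float32")
y = np.random.rand(256, 1).astype("float32")

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(10,)),
    keras.layers.Dense(1),
])

# When scaling out with Horovod, hvd.DistributedOptimizer wraps the
# optimizer to average gradients across workers.
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=2, batch_size=32)
```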
This 1-day course is for data engineers, analysts, architects, DevOps engineers, and team leads interested in troubleshooting and optimizing Apache Spark applications. It covers troubleshooting, tuning, best practices, and anti-patterns to avoid when building and running Spark applications and queries.
Each topic includes lecture content along with hands-on use of Spark through an elegant web-based notebook environment. Inspired by tools like IPython/Jupyter, notebooks allow attendees to code jobs, data analysis queries, and visualizations using their own Spark cluster, accessed through a web browser. Students may keep the notebooks and continue to use them with the free Databricks Community Edition offering; all examples are guaranteed to run in that environment. Alternatively, each notebook can be exported as source code and run within any Spark environment.
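For a sense of the hands-on material, a typical inspection-and-tuning snippet might look like the following; the configuration values are illustrative examples, not course recommendations.

```python
# Illustrative tuning/troubleshooting snippet (values are examples,
# not recommendations from the course).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

# Reduce shuffle partitions for a small dataset; the default (200)
# is often too high for toy workloads.
spark.conf.set("spark.sql.shuffle.partitions", "8")

df = spark.range(1_000_000)
agg = df.groupBy((df.id % 10).alias("bucket")).count()

# Inspect the physical plan to spot expensive shuffles before running.
agg.explain()

# Cache a DataFrame that will be reused, then materialize the cache.
agg.cache()
agg.count()
```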
This 1-day course is for data engineers, analysts, architects, data scientists, software engineers, IT operations staff, and technical managers interested in a brief hands-on overview of Apache Spark™ and building data pipelines with Databricks Delta, the next-gen unified analytics engine.
The course provides an introduction to the Spark architecture, the core APIs for using Spark, SQL and other high-level data access tools, Spark’s streaming capabilities, machine learning APIs, and the use of Databricks Delta in building pipelines. The class is a mixture of lecture and hands-on labs.
Each topic includes lecture content along with hands-on labs in the Databricks notebook environment. Students may keep the notebooks and continue to use them with the free Databricks Community Edition offering after the class ends; all examples are guaranteed to run in that environment.
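As a taste of the Delta portion, writing and reading a Delta table uses Spark's standard DataFrame reader and writer with the delta format; the path below is a placeholder.

```python
# Minimal Databricks Delta sketch (the path is a placeholder). On a
# Databricks cluster the "delta" format is built in; elsewhere it
# requires the Delta Lake package.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("delta-demo").getOrCreate()

events = spark.range(100).withColumn("bucket", F.col("id") % 5)

# Delta adds ACID transactions and versioning on top of Parquet files.
events.write.format("delta").mode("overwrite").save("/tmp/demo/events")

# Read it back like any other DataFrame source.
spark.read.format("delta").load("/tmp/demo/events") \
    .groupBy("bucket").count().show()
```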
This half-day lecture is for anyone seeking to become a Databricks Certified Apache Spark Developer. It includes test-taking strategies, sample questions, preparation guidelines, and exam requirements. The primary goal of the course is to help potential applicants understand the breadth and depth to which they will be tested and to provide guidelines on how to prepare for the exam.
Each topic includes lecture content and reference material presented in the Databricks notebook environment. Students may keep the notebooks and continue to use them with the free Databricks Community Edition offering after the class ends.
Attendees who select the prep course will take the exam after the course is completed.
This is not a programming course.
Please note: attending the certification prep course should NOT, by itself, be considered sufficient preparation for passing the Databricks Apache Spark certification exam.
Databricks Certified Developer for Apache Spark 2.x validates your overall knowledge of Apache Spark and assures employers that you are up to date with this fast-moving Apache project, whose significant features and enhancements are rolled out rapidly. The test takes about 90 minutes and consists of a series of randomly generated questions.
A testing room will be available from 11:45 am to 5:00 pm on Wednesday and Thursday during the Summit. When registering, you will select the day on which you would like to take your exam. Entrance to the room will be on a rolling basis: as a seat becomes available, we will let the next person in.
No outside phones or computers will be allowed in the testing room. We will provide a computer for the exam.
In this course, data scientists and data engineers learn best practices for putting machine learning models into production. The course starts with managing experiments, projects, and models using MLflow. It then explores various deployment options, including batch predictions, Spark Streaming, and REST APIs. Finally, it covers monitoring machine learning models once they have been deployed to production.
By the end of this course, you will have built the infrastructure to log, deploy, and monitor machine learning models.
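To illustrate the first of those steps, logging a run with MLflow's tracking API looks roughly like this; the parameter and metric names are illustrative, not taken from the course.

```python
# Minimal MLflow tracking sketch (parameter and metric names are
# illustrative).
import mlflow

with mlflow.start_run(run_name="demo"):
    # Log hyperparameters and evaluation results so runs can be
    # compared later in the MLflow UI.
    mlflow.log_param("alpha", 0.5)
    mlflow.log_metric("rmse", 0.87)
```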