Apache Spark Training | Spark + AI Summit Europe 2019 - Databricks

Training

Spark + AI Summit Europe 2019 features a number of 1-day training workshops that combine instruction with hands-on exercises to help you improve your Apache Spark skills.

Training is offered as an add-on to the Conference Pass.

Students will need to bring their own laptops with the Chrome or Firefox browser and unfettered access to *.databricks.com.

Data Science with Apache Spark™

The Data Science with Apache Spark workshop will show how to use Apache Spark to perform exploratory data analysis (EDA), develop machine learning pipelines, and apply the algorithms available in MLlib's DataFrame-based API. It is designed for software developers, data analysts, data engineers, and data scientists.

It will also cover parallelizing machine learning algorithms at a conceptual level. The workshop will take a pragmatic approach, with a focus on using Apache Spark for data analysis and building models using MLlib, while limiting the time spent on machine learning theory and the internal workings of Spark.

We will work through examples on public datasets that show how Apache Spark helps you iterate faster and develop models on massive datasets. This workshop will give you the tools to be productive with Spark on practical data analysis tasks and machine learning problems. After completing it, you should be comfortable using DataFrames, the MLlib DataFrame-based API, and the related documentation. These building blocks will enable you to apply Apache Spark to a variety of data analysis and machine learning tasks.

PREREQUISITES

Some experience coding in Python or Scala and a basic understanding of data science topics and terminology are recommended. Experience using Spark and familiarity with the DataFrame concept are helpful.

Brief conceptual reviews of data science techniques will be given before each technique is used. Labs and demos will be available in both Python and Scala.

TOPICS COVERED INCLUDE

  • Extract, Transform, Load (ETL) and Exploratory Data Analysis (EDA)
  • DataFrames
  • Feature Extraction and Transformation using MLlib
  • MLlib Pipelines: Transformers and Estimators (see the sketch after this list)
  • Cross validation and performance optimizations
  • Model parallelism vs. data parallelism
  • MLflow
  • Linear Regression, Decision Trees, Random Forests, and XGBoost
  • Evaluation Metrics
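
To give a flavor of the pipeline material, here is a minimal PySpark sketch of chaining a Transformer and an Estimator; the data and column names are hypothetical, and the SparkSession setup line is unnecessary inside a Databricks notebook, where spark is predefined.

    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.regression import LinearRegression

    spark = SparkSession.builder.getOrCreate()

    # Hypothetical data: two feature columns and a label.
    df = spark.createDataFrame(
        [(1.0, 2.0, 3.5), (2.0, 0.5, 2.1), (3.0, 1.5, 4.2)],
        ["x1", "x2", "label"])

    # Transformer: assemble the raw columns into a single feature vector.
    assembler = VectorAssembler(inputCols=["x1", "x2"], outputCol="features")

    # Estimator: linear regression on the assembled features.
    lr = LinearRegression(featuresCol="features", labelCol="label")

    # Fitting the Pipeline runs each stage in order and returns a PipelineModel.
    model = Pipeline(stages=[assembler, lr]).fit(df)
    model.transform(df).select("label", "prediction").show()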

Hands-on Deep Learning with Keras, TensorFlow, and Apache Spark™

This course is aimed at the practicing data scientist who is eager to get started with deep learning, as well as software engineers and technical managers interested in a thorough, hands-on overview of deep learning and its integration with Apache Spark.

The course covers the fundamentals of neural networks and how to build distributed deep learning models on top of Spark. Throughout the class, you will use Keras, TensorFlow, MLflow, and Horovod to build, tune and apply models. This course is taught entirely in Python.

Each topic includes lecture content along with hands-on labs in the Databricks notebook environment.

PREREQUISITES

TOPICS COVERED INCLUDE

  • Intro to Neural Networks with Keras (see the sketch after this list)
    • Neural network architectures
    • Activation functions
    • Evaluation metrics
    • Batch sizes, epochs, etc.
  • MLflow
    • Reproducible ML/DL
    • Distributed model inference
  • Convolutional Neural Networks
    • Convolutions
    • Batch Normalization
    • Max Pooling
    • ImageNet Architectures
  • Horovod
    • Distributed model training
    • Ring-allreduce
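
As a taste of the Keras material, here is a minimal sketch of defining and compiling a small network; the task, layer sizes, and hyperparameters are hypothetical.

    from tensorflow import keras

    # Hypothetical task: classify flattened 28x28 images into 10 classes.
    model = keras.Sequential([
        keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        keras.layers.Dense(10, activation="softmax")])

    # Compile with an optimizer, a loss, and evaluation metrics.
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Batch size and epochs are set at training time, e.g.:
    # model.fit(x_train, y_train, batch_size=32, epochs=5)

Horovod's distributed training then wraps the same model, replacing the optimizer with a distributed one that averages gradients across workers via ring-allreduce.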

Apache Spark™ Tuning and Best Practices

This 1-day course is for data engineers, analysts, architects, DevOps engineers, and team leads interested in troubleshooting and optimizing Apache Spark applications. It covers troubleshooting, tuning, best practices, and anti-patterns to avoid when running and maintaining Spark applications and queries.

Each topic includes lecture content along with hands-on use of Spark through an elegant web-based notebook environment. Inspired by tools like IPython/Jupyter, notebooks allow attendees to code jobs, data analysis queries, and visualizations using their own Spark cluster, accessed through a web browser. Students may keep the notebooks and continue to use them with the free Databricks Community Edition offering; all examples are guaranteed to run in that environment. Alternatively, each notebook can be exported as source code and run within any Spark environment.

PREREQUISITES

  • Project experience with Apache Spark
  • Apache Spark™ Programming – DB 105 or equivalent
  • Basic programming experience in an object-oriented or functional language is required. The class will be taught in a mixture of Python and Scala.

TOPICS COVERED INCLUDE

  • The role of memory in Spark applications
  • How to use broadcast variables and, in particular, broadcast joins to increase the performance of DataFrame operations (see the sketch after this list)
  • The Catalyst query optimizer
  • How to tune Spark’s partitioning and shuffling behavior
  • How best to size a Spark cluster for different kinds of workflows
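
As a sketch of the broadcast-join topic above, the example below hints Spark to broadcast a small dimension table; the table names and join key are hypothetical, and the SparkSession setup line is unnecessary in a Databricks notebook.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import broadcast

    spark = SparkSession.builder.getOrCreate()

    # Hypothetical tables: a large fact table and a small dimension table.
    orders = spark.table("orders")        # large
    countries = spark.table("countries")  # small enough for executor memory

    # broadcast() ships the small table whole to every executor, so the
    # join runs locally with no shuffle of the large table.
    joined = orders.join(broadcast(countries), on="country_code")
    joined.explain()  # the physical plan should show a BroadcastHashJoin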

Apache Spark™ Programming

This 1-day course is for data engineers, analysts, architects, data scientists, software engineers, IT operations staff, and technical managers interested in a brief hands-on overview of Apache Spark.

The course provides an introduction to the Spark architecture, some of the core APIs for using Spark, SQL and other high-level data access tools, as well as Spark’s streaming capabilities and machine learning APIs. The class is a mixture of lecture and hands-on labs.

Each topic includes lecture content along with hands-on labs in the Databricks notebook environment. Students may keep the notebooks and continue to use them with the free Databricks Community Edition offering after the class ends; all examples are guaranteed to run in that environment.

TOPICS COVERED INCLUDE

  • Using a subset of the core Spark APIs to operate on data
  • Articulating and implementing simple use cases for Spark
  • Building data pipelines and querying large data sets using Spark SQL and DataFrames
  • Creating Structured Streaming jobs (see the sketch after this list)
  • Understanding how a Machine Learning pipeline works
  • Understanding the basics of Spark’s internals
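
To illustrate the Structured Streaming topic above, here is a minimal sketch that counts events per action as JSON files arrive; the directory and schema are hypothetical.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Hypothetical source directory of JSON event files.
    events = (spark.readStream
              .format("json")
              .schema("user STRING, action STRING, ts TIMESTAMP")
              .load("/tmp/events/"))

    # A running count of events per action, updated as new files arrive.
    counts = events.groupBy("action").count()

    # Continuously write the updated result to the console sink.
    query = (counts.writeStream
             .outputMode("complete")
             .format("console")
             .start())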

Building Data Pipelines for Apache Spark™ with Delta Lake

Delta is Databricks’ next-gen engine built on top of Apache Spark. This course is for data engineers, architects, data scientists and software engineers who want to use Databricks Delta for building pipelines for data lakes with high data reliability and performance. The course will cover typical data reliability and performance challenges that data lakes face and teach how to address them using Delta. The course ends with a capstone project building a complete data pipeline using Databricks Delta.

PREREQUISITES

  • Completion of the Getting Started with Apache Spark™ SQL, Getting Started with Apache Spark™ DataFrames, or ETL Part 1 course, or equivalent knowledge

TOPICS COVERED INCLUDE

  • Creating Delta tables for a data lake
  • Appending records to a Databricks Delta table
  • Performing UPSERTs of data into existing Databricks Delta tables (see the sketch after this list)
  • Reading and writing streaming data into a data lake
  • Optimizing a data pipeline and optimization best practices
  • Architecture
    • Comparison with Lambda architecture
    • Getting streaming Wikipedia data into a data lake via Kafka broker
    • Writing streaming data into a raw table
    • Cleaning up bronze data and generating normalized query tables
    • Creating summary tables of key business metrics
    • Creating plots/dashboards of business metrics
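
As a sketch of the create, append, and UPSERT topics above (the path, columns, and values are hypothetical; this assumes a Delta-enabled Spark environment such as Databricks):

    from pyspark.sql import Row, SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Create a Delta table by writing a DataFrame in the "delta" format.
    spark.createDataFrame([Row(key=1, value="a"), Row(key=2, value="b")]) \
        .write.format("delta").save("/tmp/delta/events")

    # Append new records to the existing table.
    spark.createDataFrame([Row(key=3, value="c")]) \
        .write.format("delta").mode("append").save("/tmp/delta/events")

    # UPSERT: update matching keys, insert new ones, via MERGE.
    spark.createDataFrame([Row(key=2, value="B"), Row(key=4, value="d")]) \
        .createOrReplaceTempView("updates")
    spark.sql("""
        MERGE INTO delta.`/tmp/delta/events` AS target
        USING updates AS source
        ON target.key = source.key
        WHEN MATCHED THEN UPDATE SET *
        WHEN NOT MATCHED THEN INSERT *
    """)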

Machine Learning in Production: MLflow and Model Deployment

In this course, data scientists and engineers learn best practices for putting machine-learning models into production. It starts with managing experiments, projects, and models using MLflow, then explores various deployment options, including batch predictions, Spark Streaming, and REST APIs. Finally, it covers monitoring machine-learning models once they have been deployed into production.

Prerequisites

  • Python (numpy, pandas, sklearn)
  • Background in machine learning and data science
  • Basic knowledge of Spark DataFrames (recommended)

Topics Covered Include

  • Course Overview and Setup
  • Experiment Tracking (see the sketch after this list)
  • Packaging ML Projects
  • Model Management
  • Batch Deployment
  • Streaming Deployment
  • REST Deployment
  • Monitoring
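
As a flavor of the experiment-tracking topic above, here is a minimal MLflow sketch; the parameter, metric, and artifact values are hypothetical.

    import mlflow

    # A hypothetical artifact file to attach to the run.
    with open("model_summary.txt", "w") as f:
        f.write("example artifact")

    # Log one training run: a hyperparameter, a result metric, and the artifact.
    with mlflow.start_run():
        mlflow.log_param("alpha", 0.5)
        mlflow.log_metric("rmse", 0.87)
        mlflow.log_artifact("model_summary.txt")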

Databricks Certification Exams

Databricks Certified Developer for Apache Spark 2.x will not be available after Oct 31, 2019, and subsequently will NOT be offered to candidates at Spark Summit. This exam is superseded by the Databricks Certified Associate for Apache Spark 2.4.

Databricks Certified Associate for Apache Spark 2.4 validates your knowledge of the core components of the DataFrames API and confirms that you have a rudimentary knowledge of the Spark architecture. For more information, see the Databricks Certified Associate for Apache Spark 2.4 exam page.

NOTE: This certification is not affiliated with the Apache Software Foundation.

Benefits

  • Apache Spark is the gold standard of big-data tools and technologies; a certified professional can expect great pay packages.
  • Databricks certifications validate to employers your expertise in developing Spark applications in a production environment.
  • Databricks certifications will help you keep up-to-date with the latest enhancements in the Spark ecosystem.
  • As a Databricks certified professional, you can become an integral part of the growing Spark developer community.
  • Databricks certifications will enable you to meet global standards required to ensure compatibility between Spark applications and distributions.

A testing room will be available on all three days of Spark Summit. Availability will be adjusted to accommodate other events at the Summit; more information is to come.

Half-Day Prep Course + Databricks Certification Exam

This half-day lecture is for anyone seeking to learn more about the different certifications offered by Databricks including the Databricks Certified Associate for Apache Spark 2.4 and our upcoming exams.

It includes test-taking strategies, sample questions, preparation guidelines, and exam requirements for all certifications. The primary goal of this course is to help potential applicants understand the breadth and depth of knowledge on which individuals will be tested and to provide guidelines on how to prepare for the exam.

Attendees who select the prep course will have the option to take either exam after the course is completed.

Topics Covered Include

  • Introduction to the Databricks environment
  • Test-taking strategies specific to the certification exams
  • Sample exam questions
  • Suggested study materials including books, MOOCs, self-paced courses, instructor-led training, etc.
  • Exam-specific prerequisites
  • Two attempts at the certification exam

This is not a programming course.

Please Note: Attending the certification prep course should NOT, by itself, be considered sufficient preparation for any certification exam offered by Databricks.