SELF-PACED

ETL Part 1 - Data Extraction

In this course, data engineers access data where it lives and then apply data extraction best practices, including schemas, corrupt record handling, and parallelized code. By the end of this course, you will be able to extract data from multiple sources, use schema inference, apply user-defined schemas, and navigate Databricks and Apache Spark™ documentation to source solutions.

Length

2-4 hours, 75% hands-on

Format

The course is a series of seven self-paced lessons, available in both Scala and Python. Each lesson includes hands-on exercises, and a final capstone project involves writing an end-to-end ETL job that loads semi-structured JSON data into a relational model.

Supported platforms include Databricks Community Edition, Azure Databricks, and Databricks on AWS.

Learning Objectives

During this course, you will:

  • Write a basic ETL pipeline using the Spark design pattern
  • Ingest data using DBFS mounts on Azure Blob Storage and S3
  • Ingest data using serial and parallel JDBC reads
  • Define and apply a user-defined schema to semi-structured JSON data
  • Handle corrupt records (both illustrated in the sketch after this list)
  • Productionize an ETL pipeline
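
For a flavor of the hands-on work, here is a minimal PySpark sketch (not taken from the course materials) of applying a user-defined schema to JSON data and setting aside corrupt records. The file path, column names, and schema are hypothetical placeholders.

    from pyspark.sql import SparkSession
    from pyspark.sql.types import StructType, StructField, StringType, IntegerType

    spark = SparkSession.builder.appName("etl-extraction-sketch").getOrCreate()

    # A user-defined schema avoids the extra pass over the files that schema
    # inference requires and makes the expected structure explicit.
    schema = StructType([
        StructField("id", IntegerType(), True),
        StructField("event", StringType(), True),
        StructField("timestamp", StringType(), True),
        StructField("_corrupt_record", StringType(), True),  # Spark fills this for unparseable rows
    ])

    df = (spark.read
          .schema(schema)
          .option("mode", "PERMISSIVE")                       # keep bad rows instead of failing the read
          .option("columnNameOfCorruptRecord", "_corrupt_record")
          .json("/mnt/training/events.json"))                 # hypothetical DBFS mount path

    # Cache before filtering: Spark disallows queries against the raw files that
    # reference only the corrupt record column.
    df.cache()

    clean_df = df.filter(df["_corrupt_record"].isNull()).drop("_corrupt_record")
    corrupt_df = df.filter(df["_corrupt_record"].isNotNull())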

Lessons

  1. Course Overview and Setup
  2. ETL Process Overview
  3. Connecting to Azure Blob Storage and S3
  4. Connecting to JDBC
  5. Applying Schemas to JSON Data
  6. Corrupt Record Handling
  7. Loading Data and Productionizing
  8. Capstone Project: Parsing Nested Data

Target Audience

  • Primary Audience: Data Engineers
  • Secondary Audience: Data Scientists and Data Pipeline Engineers

Prerequisites

  • Programming experience in Scala or Python is required.

Lab Requirements

  • Chrome or Firefox browser. Internet Explorer, Edge, and Safari are not supported.
  • Internet access