
Data Ingestion with Delta Lake

This course prepares data professionals to leverage the Databricks Intelligence Platform to productionize ETL pipelines. Students will use Delta Live Tables with Spark SQL and Python to define and schedule pipelines that incrementally process new data from a variety of data sources into the Lakehouse. Students will also orchestrate tasks with Databricks Workflows and promote code with Databricks Repos.

Note: This is the first course in the 'Data Engineering with Databricks' series.

Skill Level
Associate
Duration
2h
Prerequisites


  • Beginner familiarity with basic cloud concepts (virtual machines, object storage, identity management)

  • Ability to perform basic code development tasks (create compute, run code in notebooks, perform basic notebook operations, import repos from Git, etc.)

  • Intermediate familiarity with basic SQL concepts (CREATE, SELECT, INSERT, UPDATE, DELETE, WHERE, GROUP BY, JOIN, etc.)
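As a quick self-check on the SQL concepts listed above, here is a minimal sketch using Python's built-in sqlite3 module. The table and column names are illustrative only, and Spark SQL syntax in the course differs in places from SQLite:

```python
import sqlite3

# In-memory database purely for illustration; schema is hypothetical.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# CREATE: define two small tables
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
cur.execute("CREATE TABLE regions (region TEXT PRIMARY KEY, manager TEXT)")

# INSERT: load a few rows
cur.executemany(
    "INSERT INTO orders (region, amount) VALUES (?, ?)",
    [("west", 10.0), ("west", 20.0), ("east", 5.0)],
)
cur.execute("INSERT INTO regions VALUES ('west', 'Ana'), ('east', 'Bo')")

# UPDATE and DELETE, each scoped with a WHERE clause
cur.execute("UPDATE orders SET amount = amount * 2 WHERE region = 'east'")
cur.execute("DELETE FROM orders WHERE amount < 15.0")

# SELECT with JOIN and GROUP BY: total remaining amount per region
rows = cur.execute(
    """
    SELECT o.region, r.manager, SUM(o.amount) AS total
    FROM orders o JOIN regions r ON o.region = r.region
    GROUP BY o.region, r.manager
    ORDER BY o.region
    """
).fetchall()
print(rows)  # → [('west', 'Ana', 20.0)]
conn.close()
```

If each statement here reads naturally, the SQL prerequisite is likely covered.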


See all our registration options

Registration options

Databricks has a delivery method for wherever you are on your learning journey


Self-Paced

Custom-fit learning paths for data, analytics, and AI roles, delivered through on-demand videos

Register now


Instructor-Led

Public and private courses taught by expert instructors, ranging from half-day to two-day formats

Register now


Blended Learning

Self-paced content plus weekly instructor-led sessions for every style of learner, designed to optimize course completion and knowledge retention. Go to the Subscriptions Catalog tab to purchase.

Purchase now


Skills@Scale

Comprehensive training offering for large-scale customers that includes learning elements for every learning style. Inquire with your account executive for details.

Upcoming Public Classes

Questions?

If you have any questions, please refer to our Frequently Asked Questions page.