
A few months ago, we held a live webinar, Just-in-Time Data Warehousing on Databricks: Change Data Capture and Schema On Read, covering how to build a Just-in-Time Data Warehouse on Databricks, with a focus on performing Change Data Capture from a relational database and joining the captured changes with a variety of other data sources.
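For readers who want a concrete starting point, here is a minimal PySpark sketch of that pattern: pulling a change table from a relational database over JDBC, reading semi-structured events with schema on read, and joining the two. The JDBC URL, table and column names, credentials, and file paths below are placeholders for illustration only and are not taken from the webinar notebooks.

from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cdc-schema-on-read").getOrCreate()

# Pull the change table captured on the relational side over JDBC.
# URL, table name, and credentials are placeholders.
changes = (spark.read.format("jdbc")
    .option("url", "jdbc:mysql://example-host:3306/sales")
    .option("dbtable", "customer_changes")
    .option("user", "reader")
    .option("password", "secret")
    .load())

# Keep only the latest change per key so the output reflects current state.
latest = (changes
    .withColumn("rn", F.row_number().over(
        Window.partitionBy("customer_id").orderBy(F.col("change_ts").desc())))
    .filter("rn = 1")
    .drop("rn"))

# Schema on read: Spark infers the structure of the JSON events at load time.
events = spark.read.json("/mnt/landing/clickstream/")

# Join the captured changes with the semi-structured events for downstream queries.
enriched = latest.join(events, on="customer_id", how="inner")
enriched.write.mode("overwrite").parquet("/mnt/warehouse/customer_activity/")

In a production pipeline you would typically filter the change table by a high-water mark (for example, the last processed change_ts) and apply inserts, updates, and deletes incrementally rather than overwriting the output wholesale.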

The webinar is available on-demand, and its slides and sample notebooks can be downloaded as attachments. To try out the notebooks yourself, join the Databricks Community Edition beta for free access to Apache Spark.

We have answered the common questions raised by webinar viewers below. If you have additional questions, please check out the Databricks Forum.

Common webinar questions and answers


