Automatically Evolve Your Nested Column Schema, Stream From a Delta Table Version, and Check Your Constraints

Delta Lake makes MERGE great with version 0.8.0
Pranav Anand
Tathagata Das
Denny Lee

We recently announced the release of Delta Lake 0.8.0, which introduces schema evolution and performance improvements in merge, as well as operational metrics in table history. The key features in this release are:

  • Unlimited MATCHED and NOT MATCHED clauses for merge operations in Scala, Java, and Python. Merge operations now support any number of whenMatched and whenNotMatched clauses. In addition, merge queries that unconditionally delete matched rows no longer throw errors on multiple matches. This will be supported using SQL with Spark 3.1. See the documentation for details.
  • MERGE operation now supports schema evolution of nested columns. Schema evolution of nested columns now has the same semantics as that of top-level columns. For example, new nested columns can be automatically added to a StructType column. See Automatic schema evolution in Merge for details.
  • MERGE INTO and UPDATE operations now resolve nested struct columns by name. The UPDATE and MERGE INTO commands now resolve nested struct columns by name, meaning that when comparing or assigning columns of type StructType, the order of the nested fields does not matter (exactly in the same way as the order of top-level columns). To revert to resolving by position, set the following Spark configuration to false: spark.databricks.delta.resolveMergeUpdateStructsByName.enabled.
  • Check constraints on Delta tables. Delta now supports CHECK constraints. When supplied, Delta automatically verifies that data added to a table satisfies the specified constraint expression. To add CHECK constraints, use the ALTER TABLE ... ADD CONSTRAINT command. See the documentation for details.
  • Start streaming a table from a specific version (#474). When using Delta as a streaming source, you can use the options startingTimestamp or startingVersion to start processing the table from a specific version or timestamp onwards. You can also set startingVersion to latest to skip existing data in the table and stream only the new incoming data. See the documentation for details.
  • Ability to perform parallel deletes with VACUUM (#395). When using `VACUUM`, you can set the session configuration spark.databricks.delta.vacuum.parallelDelete.enabled to true to have Spark delete files in parallel (based on the number of shuffle partitions); see the sketch after this list. See the documentation for details.
  • Use Scala implicits to simplify read and write APIs. You can import io.delta.implicits._ to use the `delta` method with Spark read and write APIs such as spark.read.delta("/my/table/path"). See the documentation for details.
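For the parallel VACUUM deletes mentioned above, here is a minimal sketch (assuming a SparkSession named spark with the Delta SQL extensions configured; the table name my_table and the retention interval are illustrative):

```python
# Enable parallel file deletion for VACUUM (session-level setting).
spark.conf.set("spark.databricks.delta.vacuum.parallelDelete.enabled", "true")

# Vacuum a hypothetical Delta table; files are now deleted in parallel,
# based on the number of shuffle partitions.
spark.sql("VACUUM my_table RETAIN 168 HOURS")
```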

In addition, we also highlight that you can now read a Delta table without using Spark via the Delta Standalone Reader and Delta Rust API. See Use Delta Standalone Reader and the Delta Rust API to query your Delta Lake without Apache Spark™ to learn more.


Automatically evolve your nested column schema

As noted in previous releases, Delta Lake already includes the ability to evolve your schema during merge operations. With Delta Lake 0.8.0, you can now automatically evolve nested columns within your Delta table with UPDATE and MERGE operations.

Let’s showcase this by using a simple coffee espresso example. We will create our first Delta table using the following code snippet.
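Below is a minimal sketch of what that snippet might look like (assuming a SparkSession named spark with Delta Lake configured; apart from espresso_id, milk-based_espresso, and the nested coffee_profile struct referenced later in this post, the column names and row values are illustrative assumptions):

```python
from pyspark.sql.types import (StructType, StructField, IntegerType,
                               StringType, BooleanType, DoubleType)

# Assumed schema: an id, a name, a boolean flag, and a nested struct.
espresso_schema = StructType([
    StructField("espresso_id", IntegerType()),
    StructField("espresso_name", StringType()),
    StructField("milk-based_espresso", BooleanType()),
    StructField("coffee_profile", StructType([
        StructField("temp", DoubleType()),
        StructField("acidity", StringType()),
    ])),
])

espresso_rows = [
    (100, "ristretto", False, (92.0, "high")),
    (101, "caffe latte", True, (90.0, "low")),
    (102, "cappuccino", True, (88.0, "medium")),
]

# Create the DataFrame and save it as the espresso Delta table.
(spark.createDataFrame(espresso_rows, espresso_schema)
      .write.format("delta")
      .mode("overwrite")
      .saveAsTable("espresso"))
```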

The following is a view of the espresso table:
DataFrame table in Delta Lake 0.8.0

The following code snippet creates the espresso_updates DataFrame:
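A sketch of that snippet, matching the espresso schema above but with an extra nested flavor_notes field (row values are assumptions):

```python
from pyspark.sql.types import (StructType, StructField, IntegerType,
                               StringType, BooleanType, DoubleType)

# Same shape as espresso, plus a new flavor_notes field nested inside
# the coffee_profile struct.
espresso_updates_schema = StructType([
    StructField("espresso_id", IntegerType()),
    StructField("espresso_name", StringType()),
    StructField("milk-based_espresso", BooleanType()),
    StructField("coffee_profile", StructType([
        StructField("temp", DoubleType()),
        StructField("acidity", StringType()),
        StructField("flavor_notes", StringType()),  # new nested column
    ])),
])

espresso_updates = spark.createDataFrame([
    (100, "ristretto", False, (92.0, "high", "cocoa")),
    (103, "americano", False, (88.0, "medium", "caramel")),
], espresso_updates_schema)

# Expose the DataFrame to SQL for the MERGE below.
espresso_updates.createOrReplaceTempView("espresso_updates")
```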

with this table view:
DataFrame table in Delta Lake 0.8.0

Observe that the espresso_updates DataFrame has a different coffee_profile column, which includes a new flavor_notes column.

To run a MERGE operation between these two tables, run the following Spark SQL code snippet:
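A sketch of that MERGE, issued as Spark SQL from Python against the table and view created above:

```python
spark.sql("""
  MERGE INTO espresso AS t
  USING espresso_updates AS u
  ON t.espresso_id = u.espresso_id
  WHEN MATCHED THEN UPDATE SET *
  WHEN NOT MATCHED THEN INSERT *
""")
```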

By default, this snippet fails with an error because the coffee_profile schemas of espresso and espresso_updates are different.

AutoMerge to the rescue

To work around this issue, enable autoMerge with the code snippet below; the MERGE will then automatically reconcile the two schemas, evolving the espresso Delta table’s nested columns as needed.
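A sketch of enabling the setting and re-running the same MERGE:

```python
# Enable automatic schema evolution for merge operations (session-level).
spark.conf.set("spark.databricks.delta.schema.autoMerge.enabled", "true")

# Re-run the MERGE; the new nested flavor_notes column is added to
# espresso's coffee_profile struct automatically.
spark.sql("""
  MERGE INTO espresso AS t
  USING espresso_updates AS u
  ON t.espresso_id = u.espresso_id
  WHEN MATCHED THEN UPDATE SET *
  WHEN NOT MATCHED THEN INSERT *
""")
```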

In a single atomic operation, MERGE performs the following:

  • UPDATE: espresso_id = 100 has been updated with the new flavor_notes value from the espresso_updates DataFrame.
  • No change: espresso_id = (101, 102) are left as-is, since no changes apply to them.
  • INSERT: espresso_id = 103 is a new row inserted from the espresso_updates DataFrame.
Tabular View displaying nested columns of the coffee_profile column.

Simplify read and write APIs with Scala Implicits

You can import io.delta.implicits._ to use the delta method with Spark read and write APIs such as spark.read.delta("/my/table/path"). See the documentation for details.

Check Constraints

You can now add CHECK constraints to your tables, which not only validate the existing data but also enforce future data modifications. For example, to ensure that espresso_id >= 100, run this SQL statement:
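A sketch of that statement, issued through Spark SQL from Python (the constraint name id_check is an assumption):

```python
spark.sql("""
  ALTER TABLE espresso
  ADD CONSTRAINT id_check CHECK (espresso_id >= 100)
""")
```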

The following constraint will fail as the `milk-based_espresso` column has both True and False values.
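For example, a constraint of the following shape (constraint name assumed) is rejected because the existing data already violates it:

```python
# Fails: the table contains rows where milk-based_espresso is True and
# rows where it is False, so the existing data violates the constraint.
spark.sql("""
  ALTER TABLE espresso
  ADD CONSTRAINT milk_based CHECK (`milk-based_espresso` = true)
""")
```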

The addition or dropping of CHECK constraints also appears in the transaction log history of your Delta table (via DESCRIBE HISTORY espresso), with operationParameters describing the constraint.
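One way to inspect this from Python (a sketch):

```python
# ADD CONSTRAINT / DROP CONSTRAINT show up as operations in the table
# history, with operationParameters describing the constraint.
(spark.sql("DESCRIBE HISTORY espresso")
      .select("version", "operation", "operationParameters")
      .show(truncate=False))
```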

Tabular View displaying the constraint operations within the transaction log history

Start streaming a table from a specific version

When using Delta as a streaming source, you can use the options startingTimestamp or startingVersion to start processing the table from a specific version or timestamp onwards. You can also set startingVersion to latest to skip existing data in the table and stream only the new incoming data. See the documentation for details.

Within the notebook, we will generate an artificial stream:
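A sketch of such a stream using Spark's built-in rate source (the rows-per-second value is an assumption):

```python
# The rate source emits rows with `timestamp` and `value` columns at a
# fixed rate, which makes a convenient artificial stream.
stream_df = (spark.readStream
                  .format("rate")
                  .option("rowsPerSecond", 10)
                  .load())
```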

And then generate a new Delta table using this code snippet:
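A sketch of writing that stream out as the iterator Delta table; the path, checkpoint location, trigger interval, and 20-second runtime are assumptions consistent with the description below:

```python
import time

# Write the stream to a Delta table stored at a path.
query = (stream_df.writeStream
                  .format("delta")
                  .option("checkpointLocation", "/tmp/delta/iterator/_checkpoints")
                  .trigger(processingTime="2 seconds")
                  .start("/tmp/delta/iterator"))

# Let the stream run for roughly 20 seconds, then stop it.
time.sleep(20)
query.stop()
```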

The code in the notebook runs the stream for approximately 20 seconds to create the iterator table, whose transaction log history is shown below. In this case, the table has 10 transactions.

Tabular View displaying the iterator table transaction log history

Review iterator output

The iterator table has 10 transactions over a duration of approximately 20 seconds. To view this data over time, we will run a SQL statement that calculates the timestamp of each insert into the iterator table, rounded to the second (ts). The value ts = 0 corresponds to the minimum timestamp; we then bucket the rows by ts with a GROUP BY, running the following:
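A sketch of that query, reading the table by its (assumed) path and using the rate source's timestamp column:

```python
# Bucket rows into one-second intervals relative to the earliest
# timestamp (ts = 0), then count the rows per bucket.
spark.sql("""
  SELECT ts, COUNT(1) AS cnt
  FROM (
    SELECT CAST(timestamp AS LONG)
           - MIN(CAST(timestamp AS LONG)) OVER () AS ts
    FROM delta.`/tmp/delta/iterator`
  ) AS bucketed
  GROUP BY ts
  ORDER BY ts
""").show()
```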

The preceding statement produces this bar graph with time buckets (ts) by row count (cnt).

Notice that for the 20-second stream write performed with ten distinct transactions, there are 19 distinct time-buckets.

Start the Delta stream from a specific version

Using .option("startingVersion", "6"), we can specify the version of the table from which we want to start our readStream (inclusive).
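A sketch of such a read, which creates the reiterator table from version 6 onwards (source and destination paths are assumptions consistent with the earlier snippets):

```python
# Stream from the iterator table starting at version 6 (inclusive) and
# write the result out as the reiterator Delta table.
reiterator_query = (spark.readStream
                         .format("delta")
                         .option("startingVersion", "6")
                         .load("/tmp/delta/iterator")
                         .writeStream
                         .format("delta")
                         .option("checkpointLocation", "/tmp/delta/reiterator/_checkpoints")
                         .trigger(once=True)
                         .start("/tmp/delta/reiterator"))

reiterator_query.awaitTermination()
```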

The following graph is generated by re-running the previous SQL query against the new reiterator table.

Notice that for the reiterator table, there are 10 distinct time-buckets, as we’re starting from a later transaction version of the table.

Get Started with Delta Lake 0.8.0

Try out Delta Lake with the preceding code snippets on your Apache Spark 3.1 (or greater) instance (on Databricks, try this with DBR 8.0+). Delta Lake makes your data lakes more reliable, whether you create a new one or migrate an existing data lake. To learn more, refer to https://delta.io/ and join the Delta Lake community via Slack and the Google Group. You can track all the upcoming releases and planned features in GitHub milestones, and try out Managed Delta Lake on Databricks with a free account.

Credits

We want to thank the following contributors for updates, doc changes, and contributions in Delta Lake 0.8.0: Adam Binford, Alan Jin, Alex Liu, Ali Afroozeh, Andrew Fogarty, Burak Yavuz, David Lewis, Gengliang Wang, HyukjinKwon, Jacek Laskowski, Jose Torres, Kian Ghodoussi, Linhong Liu, Liwen Sun, Mahmoud Mahdi, Maryann Xue, Michael Armbrust, Mike Dias, Pranav Anand, Rahul Mahadev, Scott Sandre, Shixiong Zhu, Stephanie Bodoff, Tathagata Das, Wenchen Fan, Wesley Hoffman, Xiao Li, Yijia Cui, Yuanjian Li, Zach Schuermann, contrun, ekoifman, and Yi Wu.
