Delta Lake: Reliability for Data Lakes

Delta Lake

Brings data reliability and performance to your data lakes


Delta Lake is an open-source storage layer that brings ACID
transactions to Apache Spark™ and big data workloads.

Key Features

ACID Transactions:
Data lakes typically have multiple data pipelines reading and writing data concurrently, and without transactions, data engineers must go through a tedious process to ensure data integrity. Delta Lake brings ACID transactions to your data lakes. It provides serializability, the strongest isolation level.

Scalable Metadata Handling:
In big data, even the metadata itself can be “big data”. Delta Lake treats metadata just like data, leveraging Spark’s distributed processing power to handle all its metadata. As a result, Delta Lake can handle petabyte-scale tables with billions of partitions and files with ease.

Time Travel (data versioning):
Delta Lake provides snapshots of data, enabling developers to access and revert to earlier versions of data for audits, rollbacks, or reproducing experiments.

Open Format:
All data in Delta Lake is stored in Apache Parquet format enabling Delta Lake to leverage the efficient compression and encoding schemes that are native to Parquet.

Unified Batch and Streaming Source and Sink:
A table in Delta Lake is both a batch table and a streaming source and sink. Streaming data ingest, batch historic backfill, and interactive queries all just work out of the box.

Schema Enforcement:

Delta Lake provides the ability to specify your schema and enforce it. This helps ensure that the data types are correct and required columns are present, preventing bad data from causing data corruption.

Schema Evolution:
Big data is continuously changing. Delta Lake enables you to make changes to a table schema that can be applied automatically, without the need for cumbersome DDL.

100% Compatible with Apache Spark API:
Developers can use Delta Lake with their existing data pipelines with minimal changes, as it is fully compatible with Spark, the commonly used big data processing engine.

Audit History:
The Delta Lake transaction log records details about every change made to the data, providing a full audit trail of the changes.

Full DML Support:
Delta Lake supports standard DML, including UPDATE, DELETE, and MERGE INTO, giving developers more control over managing their big datasets.

Instead of parquet

dataframe
   .write
   .format("parquet")
   .save("/data")

…simply say delta

dataframe
   .write
   .format("delta")
   .save("/data")

Users of Delta Lake

Hundreds of companies are processing exabytes of data per month with Delta Lake.

To add your organization here, email our user list at delta-users@googlegroups.com.

Join the Delta Community

Communicate with fellow Delta users and Delta engineers, ask questions, and share tips. Join our Slack channel.

Resources

Get Started

Already use Databricks? See the quickstart for existing users.

Databricks:
Try Delta on our free community edition.

Azure:
Get started simply and quickly on Microsoft Azure.

AWS:
Set up an AWS account and security profile.