What is Delta Lake?

Delta Lake is an open-format storage layer that delivers reliability, security and performance on your data lake, for both streaming and batch operations. By replacing data silos with a single home for structured, semi-structured and unstructured data, Delta Lake is the foundation of a cost-effective, highly scalable lakehouse.

High-quality, reliable data

Deliver a reliable single source of truth for all of your data, including real-time streams, so your data teams are always working with the most current data. With support for ACID transactions and schema enforcement, Delta Lake provides the reliability that traditional data lakes lack. This enables you to scale reliable data insights throughout the organization and run analytics and other data projects directly on your data lake — for up to 50x faster time-to-insight.

ACID transactions
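The "fully succeeds or fully aborts" guarantee comes from Delta Lake's transaction log: a commit only becomes visible to readers once its log entry lands atomically. Below is a minimal stdlib sketch of that idea, assuming a toy `TinyLog` class of our own invention — not Delta's actual implementation, which records JSON commit files in a `_delta_log` directory:

```python
import json
import os
import tempfile

class TinyLog:
    """Toy transaction log: a commit is visible only after an atomic rename."""

    def __init__(self, log_dir):
        self.log_dir = log_dir

    def commit(self, version, actions):
        # Write the commit to a temp file first, then atomically rename it
        # into place. Readers can never observe a half-written commit file,
        # so every commit either fully succeeds or is invisible.
        fd, tmp = tempfile.mkstemp(dir=self.log_dir)
        with os.fdopen(fd, "w") as f:
            json.dump(actions, f)
        final = os.path.join(self.log_dir, f"{version:020d}.json")
        os.rename(tmp, final)  # atomic on POSIX filesystems

    def snapshot(self):
        # The current table state is the replay of all committed versions,
        # in version order.
        state = []
        for name in sorted(os.listdir(self.log_dir)):
            if not name.endswith(".json"):
                continue  # skip any leftover temp files from failed commits
            with open(os.path.join(self.log_dir, name)) as f:
                state.extend(json.load(f))
        return state
```

A crashed writer leaves at most a stray temp file, which readers ignore, so the table never exposes partial state.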


Lightning-fast performance

With Apache Spark™ under the hood, Delta Lake delivers massive scale and speed. And because it’s optimized with performance features like indexing, Delta Lake customers have seen ETL workloads execute up to 48% faster.

Open and agile

All data in Delta Lake is stored in open Apache Parquet format, allowing data to be read by any compatible reader. APIs are open and compatible with Apache Spark. With Delta Lake on Databricks, you have access to a vast open source ecosystem and avoid data lock-in from proprietary formats.



Security and compliance at scale

Delta Lake reduces risk by enabling fine-grained access controls for data governance, functionality typically not possible with data lakes. You can quickly and accurately update data in your data lake to comply with regulations like GDPR and maintain better data governance through audit logging.

Use cases

Improved ETL pipelines

Delta Lake helps you dramatically simplify data engineering by enabling you to perform ETL processes directly on your data lake. With Delta Lake, you can build streamlined pipelines, improve data reliability and simplify cloud-scale production operations. Reduce compute times and costs with a scalable cloud runtime powered by highly optimized Spark clusters.


BI on your data

Run business intelligence workloads directly on your data lake to make new, real-time data instantly available to data analysts for immediate insights on your business. Delta Lake allows you to operate a multicloud lakehouse architecture that provides data warehousing performance at data lake economics, with up to 4x better price/performance for SQL workloads than traditional cloud data warehouses.


Unify batch and streaming

Run both batch and streaming operations on one simplified architecture that avoids complex, redundant systems and operational challenges. In Delta Lake, a table is both a batch table and a streaming source and sink. Streaming data ingest, batch historic backfill and interactive queries all work out of the box and directly integrate with Spark Structured Streaming.
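The "one table, two access patterns" idea can be sketched with an append-only commit list: a batch read returns the full snapshot, while a streaming read keeps a cursor and consumes only the commits that arrived since it last polled. This is a toy model of the behavior under assumed names (`TinyDeltaTable`, `stream_reader`), not Spark's API:

```python
class TinyDeltaTable:
    """Toy model: one append-only commit list serves batch and streaming reads."""

    def __init__(self):
        self._commits = []  # each commit is a list of rows (one table version)

    def write(self, rows):
        self._commits.append(list(rows))  # appending a commit creates a version

    def batch_read(self):
        # Batch query: the full snapshot across every committed version.
        return [row for commit in self._commits for row in commit]

    def stream_reader(self):
        # Streaming query: remembers how far it has read and returns only
        # the rows committed since the previous poll.
        cursor = 0

        def poll():
            nonlocal cursor
            new = [row for commit in self._commits[cursor:] for row in commit]
            cursor = len(self._commits)
            return new

        return poll
```

Because both readers consult the same commit list, streaming ingest and batch backfill never diverge, which is the property the unified architecture relies on.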

Meet regulatory needs

Delta Lake removes the challenges of ingesting malformed data, deleting data for compliance, and modifying data for change data capture. With support for ACID transactions on your data lake, Delta Lake ensures that every operation either fully succeeds or fully aborts for later retries — without requiring new data pipelines to be created. Additionally, Delta Lake records all past transactions on your data lake, so it’s easy to access and use previous versions of your data to meet compliance standards like GDPR and CCPA reliably.
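Accessing previous versions works because the transaction log records every operation: any earlier state can be reconstructed by replaying the log up to a chosen version. A minimal sketch of that replay, using an assumed `VersionedTable` class rather than Delta's real time-travel API:

```python
class VersionedTable:
    """Toy time travel: replay the transaction log up to a chosen version."""

    def __init__(self):
        self._log = []  # one entry per committed operation

    def insert(self, key, value):
        self._log.append(("put", key, value))

    def delete(self, key):
        self._log.append(("del", key, None))

    def as_of(self, version=None):
        # Rebuild the table state by replaying log entries 0..version.
        # With no version given, replay everything (the current state).
        end = len(self._log) if version is None else version + 1
        state = {}
        for op, key, value in self._log[:end]:
            if op == "put":
                state[key] = value
            else:
                state.pop(key, None)
        return state
```

A compliance deletion is just another logged operation, while earlier versions remain reconstructible for audit until the history is vacuumed.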

Data Ingestion Network

Native connectors quickly and reliably ingest data into Delta Lake from all your applications, databases and file storage.




“Databricks delivered the time to market as well as the analytics and operational uplift that we needed in order to be able to meet the new demands of the healthcare sector.”
— Peter James, Chief Architect, Healthdirect Australia



“By leveraging Databricks and Delta Lake, we have already been able to democratize data at scale, while lowering the cost of running production workloads by 60%, saving us millions of dollars.”
— Steve Pulec, Chief Technology Officer, YipitData



“Delta Lake provides ACID capabilities that simplify data pipeline operations to increase pipeline reliability and data consistency. At the same time, features like caching and auto-indexing enable efficient and performant access to the data.”
— Lara Minor, Senior Enterprise Data Manager, Columbia Sportswear



“Delta Lake has created a streamlined approach to the management of data pipelines. This has led to a decrease in operational costs while speeding up time-to-insight for downstream analytics and data science.”
— Parijat Dey, Assistant Vice President of Digital Transformation and Technology, Viacom18


