DELTA LIVE TABLES

Reliable data pipelines made easy

Simplify batch and streaming ETL with automated reliability and built-in data quality.
BENEFITS

Data pipeline best practices, codified

Simply declare the data transformations you need — let Delta Live Tables (DLT) pipelines handle the rest.

Efficient ingestion

Building production-ready ETL pipelines starts with ingestion. DLT enables efficient ingestion for data engineers, Python developers, data scientists and SQL analysts. Load data from any Apache Spark™-supported source on Databricks, whether batch, streaming or CDC.
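
As a rough illustration, here is a minimal Python sketch of an ingestion step using Auto Loader (cloudFiles). The storage path and table name are hypothetical, and spark refers to the SparkSession that a DLT pipeline provides to its source files:

    import dlt

    @dlt.table(comment="Raw orders ingested incrementally from cloud storage")
    def raw_orders():
        # Auto Loader (cloudFiles) discovers and loads new files incrementally;
        # the path and format below are placeholders for your own source.
        return (
            spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/Volumes/main/default/raw_orders/")
        )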

Intelligent transformation

With just a few lines of code, DLT determines the most efficient way to build and execute your batch or streaming data pipelines, automatically optimizing for cost or performance while minimizing complexity.

Automated operations

DLT pipelines simplify ETL development by codifying best practices out of the box and automating away the inherent operational complexity. With DLT pipelines, engineers can focus on delivering high-quality data rather than operating and maintaining pipeline infrastructure.

FEATURES

Built to simplify data pipelining

Building and operating data pipelines can be hard — but it doesn’t have to be. DLT is built for powerful simplicity, so you can perform robust ETL with just a few lines of code.

Batch and streaming processing

Leveraging Spark’s unified API for batch and stream processing, DLT pipelines allow you to easily toggle between processing modes.

Learn more
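
As a sketch of that toggle (hypothetical table names; the classic dlt.read / dlt.read_stream Python API is assumed), the same transformation logic can run as a batch computation or incrementally over a stream:

    import dlt

    @dlt.table()  # recomputed from the source on each pipeline update
    def orders_batch():
        return dlt.read("raw_orders").where("status = 'COMPLETE'")

    @dlt.table()  # switching to read_stream makes the same logic incremental
    def orders_streaming():
        return dlt.read_stream("raw_orders").where("status = 'COMPLETE'")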

End-to-end incremental processing

DLT makes it easy to optimize pipeline performance by declaring an entire incremental data pipeline with streaming tables and materialized views.

Learn more
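
A minimal sketch of that pattern (hypothetical paths and table names): a streaming table appends new records incrementally, while a downstream aggregate behaves like a materialized view that DLT keeps up to date:

    import dlt
    from pyspark.sql import functions as F

    @dlt.table(comment="Streaming table: appends new events incrementally")
    def events():
        return (
            spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/Volumes/main/default/events/")  # placeholder path
        )

    @dlt.table(comment="Materialized view-style aggregate maintained by DLT")
    def daily_event_counts():
        return (
            dlt.read("events")
            .groupBy(F.to_date("event_ts").alias("event_date"))
            .agg(F.count("*").alias("events"))
        )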

Load data from anywhere

DLT pipelines support a broad ecosystem of sources and sinks. Load data from any source supported by Apache Spark™ on Databricks, including cloud storage, message buses, change data feeds and external systems.

Learn more
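
For instance, a message-bus source can be declared with standard Spark options; the broker address and topic name below are placeholders:

    import dlt

    @dlt.table(comment="Raw clickstream events read from a Kafka topic")
    def clickstream_raw():
        return (
            spark.readStream.format("kafka")
            .option("kafka.bootstrap.servers", "broker-1:9092")  # placeholder broker
            .option("subscribe", "clickstream")                  # placeholder topic
            .load()
            .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp")
        )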

Data quality

Expectations allow you to guarantee that data arriving in tables meets data quality requirements, and they provide insights into data quality for each pipeline update.

Learn more
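
A short sketch of expectations in Python (the constraint names, rules and table names are illustrative):

    import dlt

    @dlt.table(comment="Orders that passed basic quality checks")
    @dlt.expect("non_negative_amount", "amount >= 0")              # log violations, keep rows
    @dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")  # drop violating rows
    def clean_orders():
        return dlt.read("raw_orders")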

Integrated notebook development

Develop pipelines in notebooks without any context switching. See the DAG, event log and code in one UI, and develop easily with code autocomplete, in-line errors and diagnostics.

Learn more

More features

Unified governance and storage

Built on the foundational lakehouse standards of Unity Catalog and open table formats.

Learn more

Serverless compute

Up to 5x better price/performance for data ingestion and 98% cost savings for complex transformations.

Learn more

Task orchestration

Instead of manually defining a series of separate Spark tasks, you define the transformations, and DLT ensures they are executed in the correct sequence.

Learn more
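
For example, simply reading one table from another is enough for DLT to infer the dependency graph and run the steps in order (table names are hypothetical; the classic dlt.read API is assumed):

    import dlt
    from pyspark.sql import functions as F

    @dlt.table()
    def customer_orders():
        # Reading clean_orders and customers declares dependencies,
        # so DLT materializes those tables before this one.
        return dlt.read("clean_orders").join(dlt.read("customers"), "customer_id")

    @dlt.table()
    def revenue_by_customer():
        # Depends on customer_orders, so DLT schedules it last.
        return (
            dlt.read("customer_orders")
            .groupBy("customer_id")
            .agg(F.sum("amount").alias("total_revenue"))
        )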

Error handling and failure recovery

Seamless recovery from errors that occur during the execution of data pipelines.

Learn more

CI/CD and version control

Easily specify configurations to isolate pipelines in development, testing and production environments.

Learn more

Pipeline monitoring and observability

Built-in monitoring and observability features, including data lineage, update history and data quality reporting.

Learn more

Flexible refresh scheduling

Easily optimize for latency or cost depending on your pipeline’s requirements.

Learn more
USE CASES

Streamline your data pipelines

PRICING

Usage-based pricing keeps spending in check

Pay only for the products you use, at per-second granularity.
RELATED PRODUCTS

Discover more

Explore other integrated, intelligent offerings on the Data Intelligence Platform.

LakeFlow Connect

Efficient data ingestion connectors from any source and native integration with the Data Intelligence Platform unlock easy access to analytics and AI, with unified governance.

Workflows

Easily define, manage and monitor multitask workflows for ETL, analytics and machine learning pipelines. With a wide range of supported task types, deep observability capabilities and high reliability, your data teams are empowered to better automate and orchestrate any pipeline and become more productive.

Lakehouse Storage

Unify the data in your lakehouse, across all formats and types, for all your analytics and AI workloads.

Unity Catalog

Seamlessly govern all your data assets with the industry’s only unified and open governance solution for data and AI, built into the Databricks Data Intelligence Platform.

The Data Intelligence Platform

Find out how the Databricks Data Intelligence Platform enables your data and AI workloads.

Take the next step

Delta Live Tables FAQ

Ready to become a data + AI company?

Take the first steps in your transformation