
Data ingestion

Easily load your data into Delta Lake


With Databricks, you can ingest data from hundreds of data sources incrementally and efficiently into your Delta Lake to ensure your lakehouse always contains the most complete and up-to-date data available for data science, machine learning and business analytics.

Data ingestion, simplified


Auto Loader

Use Auto Loader to ingest any file that can land in a data lake into Delta Lake. Point Auto Loader at a directory on a cloud storage service such as Amazon S3, Azure Data Lake Storage or Google Cloud Storage, and it will incrementally process new files with exactly-once semantics.
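As an illustration, a minimal Auto Loader stream in PySpark might look like the sketch below. It assumes a Databricks notebook where a SparkSession named `spark` is available; the bucket paths and table name are placeholders.

```python
# Minimal Auto Loader sketch, assuming a Databricks notebook where a
# SparkSession named `spark` exists; paths and table name are placeholders.
(spark.readStream
    .format("cloudFiles")                    # the Auto Loader source
    .option("cloudFiles.format", "json")     # format of the incoming files
    .load("s3://my-bucket/landing/events/")  # directory Auto Loader watches
    .writeStream
    .option("checkpointLocation", "s3://my-bucket/checkpoints/events/")
    .toTable("events_bronze"))               # incrementally append to a Delta table
```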


Tracking and monitoring

Let Auto Loader track which files have been processed, discover late-arriving data, infer your data schema, monitor schema changes over time and rescue records with data quality problems. Auto Loader can ingest data continuously within seconds, or it can be scheduled to run at your expected data arrival rate, whether that is once an hour, once a day or once a month.
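A sketch of how these options fit together, under the same placeholder assumptions as above: `cloudFiles.schemaLocation` tracks the inferred schema, schema evolution picks up new columns over time, a rescued data column captures records with quality problems, and the `availableNow` trigger lets you run the stream on whatever schedule you choose.

```python
# Sketch with schema tracking, evolution and rescue enabled; same
# placeholder assumptions as the previous example.
(spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "csv")
    .option("cloudFiles.schemaLocation",
            "s3://my-bucket/schemas/orders/")      # where the inferred schema is tracked
    .option("cloudFiles.schemaEvolutionMode",
            "addNewColumns")                       # pick up new columns over time
    .option("rescuedDataColumn", "_rescued_data")  # keep malformed values instead of dropping them
    .load("s3://my-bucket/landing/orders/")
    .writeStream
    .option("checkpointLocation", "s3://my-bucket/checkpoints/orders/")
    .trigger(availableNow=True)                    # run on your own schedule, then stop
    .toTable("orders_bronze"))
```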


COPY INTO

The SQL command COPY INTO lets you perform batch file ingestion into Delta Lake with exactly-once semantics. It is best suited to input directories containing thousands of files or fewer and to users who prefer SQL, and it can be run over JDBC to push data into Delta Lake at your convenience.
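Since COPY INTO is a SQL command, it can be issued from SQL directly or, as sketched below, through `spark.sql()` in Python; the table name, path and options are illustrative.

```python
# Sketch of COPY INTO issued through spark.sql(); the target table, path
# and options are illustrative placeholders.
spark.sql("""
    COPY INTO sales_bronze
    FROM 's3://my-bucket/landing/sales/'
    FILEFORMAT = CSV
    FORMAT_OPTIONS ('header' = 'true', 'inferSchema' = 'true')
    COPY_OPTIONS ('mergeSchema' = 'true')
""")
```

Rerunning the command is safe: files that were already loaded are skipped, which is what gives COPY INTO its exactly-once behavior.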


Efficient data processing

With Databricks, you can pull data from popular message queues, such as Apache Kafka, Azure Event Hubs or AWS Kinesis, at low latency. By ingesting from these sources into your Delta Lake, you no longer risk losing data to their retention policies. You can reprocess data more cheaply and efficiently as business requirements evolve, and you can keep a longer historical view of your data to power machine learning as well as business analytics applications.
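For example, a Structured Streaming job that drains a Kafka topic into a bronze Delta table might look like this sketch; the broker address, topic name and paths are assumptions.

```python
# Sketch of draining a Kafka topic into a bronze Delta table; the broker
# address, topic name and paths are assumptions.
(spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "clickstream")
    .option("startingOffsets", "earliest")   # replay everything Kafka has retained
    .load()
    .selectExpr("CAST(key AS STRING) AS key",
                "CAST(value AS STRING) AS value",
                "timestamp")
    .writeStream
    .option("checkpointLocation", "s3://my-bucket/checkpoints/clickstream/")
    .toTable("clickstream_bronze"))
```

Once the data is in Delta Lake, it outlives the queue's retention window, so historical reprocessing no longer depends on Kafka keeping old messages around.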

Unify your data from other enterprise applications

Leverage a vast network of data ingestion partners, including Azure Data Factory, Fivetran, Qlik, Infoworks, StreamSets and Syncsort, to ingest data from applications, data stores, mainframes, files and more into Delta Lake through an easy-to-use gallery of connectors. This partner ecosystem lets you realize the full potential of combining big data with data from cloud-based applications, databases, mainframes and file systems.


Ingesting change data capture (CDC) from application databases into Delta Lake

Your business relies on your application databases, and querying them directly for analytics can interrupt your business applications by putting too much load on the database. By replicating these datasets to your lakehouse, you ensure your business applications keep running without hiccups while you leverage the valuable information in your analytics use cases. You can ingest data from these data stores with services like Azure Data Factory, AWS DMS and Auto Loader, or with partners like Fivetran.
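Once replicated change records land in your lakehouse, they can be applied to a Delta table with a MERGE. The sketch below uses the Delta Lake Python API; the table names and the change-record schema (an `op` column plus an `id` key) are hypothetical.

```python
# Sketch of applying replicated change records with a Delta MERGE; the
# table names and the change-record schema (`op`, `id`) are hypothetical.
from delta.tables import DeltaTable

target = DeltaTable.forName(spark, "customers")
changes = spark.table("customers_cdc_staging")  # latest batch of change records

(target.alias("t")
    .merge(changes.alias("c"), "t.id = c.id")
    .whenMatchedDelete(condition="c.op = 'DELETE'")    # drop rows deleted upstream
    .whenMatchedUpdateAll(condition="c.op = 'UPDATE'") # apply upstream updates
    .whenNotMatchedInsertAll(condition="c.op = 'INSERT'")
    .execute())
```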


Related content

Webinar: Hassle-Free Data Ingestion Webinar Series

eBook: Data Management 101 on Databricks

eBook: All Roads Lead to the Lakehouse