On-demand webinar available: Databricks’ Data Pipeline

Dave Wang

Two weeks ago we held a live webinar – Databricks' Data Pipeline: Journey and Lessons Learned – showing how Databricks used Apache Spark to simplify our own log ETL pipeline. The webinar describes an architecture in which you develop your pipeline code in notebooks, create Jobs to productionize those notebooks, and use REST APIs to tie everything together into a continuous integration workflow.
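As a rough illustration of the Jobs step, the sketch below builds a Jobs API 2.0 `jobs/create` payload that runs a notebook on an existing cluster, then submits it. The workspace URL, token, notebook path, and cluster ID are placeholders, and the helper names are our own; treat this as a minimal example of the pattern, not the exact pipeline code from the webinar.

```python
import json

# Hypothetical workspace settings -- replace with your own.
DATABRICKS_HOST = "https://example.cloud.databricks.com"


def build_notebook_job(name, notebook_path, cluster_id):
    """Build a Jobs API 2.0 `jobs/create` payload that runs a notebook."""
    return {
        "name": name,
        "existing_cluster_id": cluster_id,
        "notebook_task": {"notebook_path": notebook_path},
    }


def create_job(token, payload):
    """POST the payload to /api/2.0/jobs/create; returns the new job ID."""
    import requests  # third-party; pip install requests

    resp = requests.post(
        f"{DATABRICKS_HOST}/api/2.0/jobs/create",
        headers={"Authorization": f"Bearer {token}"},
        json=payload,
    )
    resp.raise_for_status()
    return resp.json()["job_id"]


if __name__ == "__main__":
    payload = build_notebook_job(
        "nightly-log-etl", "/Production/log_etl", "1234-567890-abcde123"
    )
    print(json.dumps(payload, indent=2))
    # job_id = create_job("<your-token>", payload)  # requires a live workspace
```

In a CI setup, a build server would call helpers like these after tests pass, so the production Job always points at the latest reviewed notebook.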

We have answered the common questions raised by webinar viewers below. If you have additional questions, please check out the Databricks Forum.

Common webinar questions and answers
