
Building High-Quality and Trusted Data Products with Databricks

Organizations aiming to become AI- and data-driven often need to provide their internal teams with high-quality and trusted data products. Building...

Introducing the Well-Architected Data Lakehouse from Databricks

June 13, 2023 by Bernhard Walter
To provide customers with a framework for planning and implementing their data lakehouse, we are pleased to announce that we have recently published...

A data architecture pattern to maximize the value of the Lakehouse

One of the Lakehouse's outstanding achievements is its ability to combine workloads for modern use cases, such as traditional BI, machine learning & AI...

Building a Data Mesh Based on the Databricks Lakehouse, Part 2

In the last blog, "Databricks Lakehouse and Data Mesh," we introduced the Data Mesh based on the Databricks Lakehouse. This blog...

Databricks Lakehouse and Data Mesh, Part 1

This is the first blog in a two-part series. In this post we will introduce the data mesh concept and the Databricks capabilities...

6 Guiding Principles to Build an Effective Data Lakehouse

In this blog post, we will discuss some guiding principles to help you build a highly effective and efficient data lakehouse that delivers on...

New Databricks Integration for Jupyter Bridges Local and Remote Workflows

December 2, 2019 by Bernhard Walter
For many years now, data scientists have developed specific workflows on premises using local filesystem hierarchies, source code revision systems, and CI/CD...