
Databricks Labs

Databricks Labs are projects created by the field team to help customers get their use cases into production faster!


DBX

This tool simplifies the job launch and deployment process across multiple environments. It also helps you package your project and deliver it to your Databricks environment in a versioned fashion. Designed in a CLI-first manner, it is built to be used actively both inside CI/CD pipelines and as part of local tooling for fast prototyping.

GitHub Sources →

Documentation →

Blog →


Tempo

The purpose of this project is to provide an API for manipulating time series on top of Apache Spark™. Functionality includes featurization using lagged time values, rolling statistics (mean, sum, count, etc.), AS OF joins, and downsampling and interpolation. It has been tested on terabyte-scale historical data.
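As a rough illustration, here is a minimal PySpark sketch of this kind of workflow using the TSDF wrapper described in the project README; the input DataFrames and column names are placeholders, and exact signatures may differ by version:

    from tempo import TSDF

    # Wrap existing Spark DataFrames of trades and quotes as time series,
    # keyed by symbol and ordered by an event timestamp column.
    trades_tsdf = TSDF(trades_df, ts_col="event_ts", partition_cols=["symbol"])
    quotes_tsdf = TSDF(quotes_df, ts_col="event_ts", partition_cols=["symbol"])

    # AS OF join: attach the latest quote at or before each trade.
    joined = trades_tsdf.asofJoin(quotes_tsdf, right_prefix="quote")

    # Rolling statistics over a 10-minute lookback window.
    stats = trades_tsdf.withRangeStats(colsToSummarize=["trade_pr"], rangeBackWindowSecs=600)

    # Downsample to one row per minute.
    bars = trades_tsdf.resample(freq="min", func="mean")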

GitHub Sources →

Documentation →

Webinar →


Mosaic

Mosaic is a tool that simplifies the implementation of scalable geospatial data pipelines by binding together common open source geospatial libraries and Apache Spark™. Mosaic also provides a set of examples and best practices for common geospatial use cases. It provides APIs for ST_ expressions and GRID_ expressions, supporting grid index systems such as H3 and British National Grid.
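As a rough sketch of what this looks like from PySpark, assuming a cluster with Mosaic installed and a DataFrame with longitude/latitude and WKT geometry columns (the column names here are placeholders):

    import mosaic as mos
    from pyspark.sql import functions as F

    # Register Mosaic's ST_/GRID_ expressions with the Spark session.
    mos.enable_mosaic(spark, dbutils)

    indexed = (
        df
        # Assign each point to an H3 grid cell at resolution 9.
        .withColumn("cell_id", mos.grid_longlatascellid(F.col("lon"), F.col("lat"), F.lit(9)))
        # Compute the area of a WKT geometry column.
        .withColumn("area", mos.st_area(F.col("wkt_geom")))
    )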

GitHub Sources →

Documentation →

Blog →

Other Projects

Overwatch

Analyze all of your jobs and clusters across all of your workspaces to quickly identify where you can make the biggest adjustments for performance gains and cost savings.

Learn more

Splunk Integration

The Databricks Add-on for Splunk is an app that allows Splunk Enterprise and Splunk Cloud users to run queries and execute actions in Databricks, such as running notebooks and jobs.

GitHub Sources →
Learn more →

Smolder

Smolder provides an Apache Spark™ SQL data source for loading EHR data from HL7v2 message formats. Additionally, Smolder provides helper functions that can be used on a Spark SQL DataFrame to parse HL7 message text, and to extract segments, fields, and subfields from a message.
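As an illustration, a minimal PySpark sketch of reading HL7v2 messages through the data source; it assumes the Smolder library is attached to the cluster, and the path and output columns are placeholders:

    # Load raw HL7v2 message files via Smolder's "hl7" data source.
    df = spark.read.format("hl7").load("/mnt/raw/hl7v2/")

    # Smolder exposes the parsed message structure (message header plus an
    # array of segments and their fields) as DataFrame columns that can be
    # queried with ordinary Spark SQL; the exact schema may vary by version.
    df.printSchema()
    df.show(truncate=False)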

GitHub Sources →
Learn more →

Geoscan

An Apache Spark ML Estimator for density-based spatial clustering built on Hexagonal Hierarchical Spatial Indices (H3).
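A minimal sketch of how such an estimator is typically used from PySpark, assuming a DataFrame of points with latitude/longitude columns; parameter names follow the project README but may differ by version:

    from geoscan import Geoscan

    # Density-based clustering: epsilon is the neighborhood radius in meters,
    # minPts the minimum number of neighbors for a core point.
    geoscan = (
        Geoscan()
        .setLatitudeCol("latitude")
        .setLongitudeCol("longitude")
        .setPredictionCol("cluster")
        .setEpsilon(200)
        .setMinPts(20)
    )

    model = geoscan.fit(points_df)          # standard Spark ML Estimator API
    clustered = model.transform(points_df)  # adds a "cluster" prediction column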

GitHub Sources →
Learn more →

Migrate

A tool to help customers migrate artifacts between Databricks workspaces. It allows customers to export configurations and code artifacts as a backup or as part of a migration to a different workspace.

GitHub Sources →
Learn more: AWS | Azure

Data Generator

Generate relevant data quickly for your projects. The Databricks data generator can be used to generate large simulated/synthetic data sets for testing, POCs, and other uses.
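For example, a small sketch using the dbldatagen API; the column names and options are illustrative:

    import dbldatagen as dg

    # Declare a million-row synthetic dataset, then build it as a Spark DataFrame.
    data_spec = (
        dg.DataGenerator(spark, name="synthetic_orders", rows=1_000_000, partitions=8)
        .withColumn("customer_id", "long", uniqueValues=10_000)
        .withColumn("amount", "decimal(10,2)", minValue=1.0, maxValue=1000.0, random=True)
        .withColumn("order_ts", "timestamp",
                    begin="2024-01-01 00:00:00", end="2024-12-31 23:59:59", random=True)
    )
    df = data_spec.build()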

GitHub Sources →
Learn more →

DeltaOMS

Centralized Delta transaction log collection for metadata and operational metrics analysis on your Lakehouse.

GitHub Sources →
Learn more →

DLT-META

This framework makes it easy to ingest data using Delta Live Tables and metadata. With DLT-META, a single data engineer can easily manage thousands of tables. Several Databricks customers run DLT-META in production, processing 1000+ tables.
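The Delta Live Tables pipeline notebook that drives the ingestion is typically just a thin shim; a sketch along the lines of the project docs, where the module path and the "layer" configuration key are assumptions that may differ by version:

    # Read which layer (e.g. bronze or silver) this pipeline serves from the
    # Spark conf, then hand control to DLT-META, which generates the live
    # tables from the onboarded metadata.
    layer = spark.conf.get("layer", None)

    from src.dataflow_pipeline import DataflowPipeline
    DataflowPipeline.invoke_dlt_pipeline(spark, layer)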

GitHub Sources →
Learn more →

Please note that all projects in the https://github.com/databrickslabs account are provided for your exploration only, and are not formally supported by Databricks with service level agreements (SLAs). They are provided AS IS and we do not make any guarantees of any kind. Any issues discovered through the use of these projects can be filed as GitHub Issues on the Repo. They will be reviewed as time permits, but there are no formal SLAs for GitHub support. If you are a customer with a current Databricks Support Services contract, you may submit a support ticket relating to issues arising from the use of these projects, request how-to assistance, and request help triaging the root cause of such issues. Project issues found to originate with Databricks Platform Services will be handled per the Databricks Support Policy. For issues determined to originate with the project, Databricks will in its sole discretion provide such support as it deems reasonable and appropriate.