
Databricks at NeurIPS 2024

Databricks is proud to be a platinum sponsor of NeurIPS 2024. The conference runs from December 10 to 15 in Vancouver, British Columbia...

DBRX at Data + AI Summit: Best Practices, Use Cases, and Behind-the-scenes

June 3, 2024 by Kobie Crawford
Businesses are making remarkable progress on their data and AI journeys. They’re advancing from a few pilot projects confined to use cases likely...

Integrating NVIDIA TensorRT-LLM with the Databricks Inference Stack

Over the past six months, we've been working with NVIDIA to get the most out of their new TensorRT-LLM library. TensorRT-LLM provides an easy-to-use Python interface for integrating with a web server, enabling fast, efficient LLM inference. In this post, we highlight some key areas where our collaboration with NVIDIA has been particularly important.

Blazingly Fast Computer Vision Training with the Mosaic ResNet and Composer

Match benchmark ResNet-50 accuracy on ImageNet (He et al., 2015) in 27 minutes on 8x NVIDIA A100s, a 7x speedup. Reach higher levels of accuracy...