Making AI More Accessible: Up to 80% Cost Savings with Meta Llama 3.3 on Databricks

As enterprises build agent systems to deliver high-quality AI apps, we continue to ship optimizations that provide the best overall cost-efficiency for our...

Introducing Structured Outputs for Batch and Agent Workflows

Many AI use cases now depend on transforming unstructured inputs into structured data. Developers are increasingly relying on LLMs to extract structured data...

Introducing Simple, Fast, and Scalable Batch LLM Inference on Mosaic AI Model Serving

Over the years, organizations have amassed a vast amount of unstructured text data—documents, reports, and emails—but extracting meaningful insights has remained a challenge...

Build Compound AI Systems Faster with Databricks Mosaic AI

Many of our customers are shifting from monolithic prompts with general-purpose models to specialized compound AI systems to achieve the quality needed for...

Announcing Advanced Security and Governance in Mosaic AI Gateway

We are excited to introduce several powerful new capabilities to Mosaic AI Gateway, designed to help our customers accelerate their AI initiatives with...

A New Standard in Open Source AI: Meta Llama 3.1 on Databricks

We are excited to partner with Meta to release the Llama 3.1 series of models on Databricks, further advancing the standard of powerful...

Accelerate GenAI App Development with New Updates to Databricks Model Serving

Last year, we launched foundation model support in Databricks Model Serving to enable enterprises to build secure and custom GenAI apps on a...

Building Enterprise GenAI Apps with Meta Llama 3 on Databricks

We are excited to partner with Meta to release the latest state-of-the-art large language model, Meta Llama 3, on Databricks. With Llama...

Introducing Mixtral 8x7B with Databricks Model Serving

Today, Databricks is excited to announce support for Mixtral 8x7B in Model Serving. Mixtral 8x7B is a sparse Mixture of Experts (MoE)...

Build GenAI Apps Faster with New Foundation Model Capabilities

Following the announcements we made last week about Retrieval Augmented Generation (RAG), we're excited to announce major updates to Model Serving. Databricks...