
The Long Context RAG Capabilities of OpenAI o1 and Google Gemini

Retrieval Augmented Generation (RAG) is the top use case for Databricks customers who want to customize AI workflows on their own data. The...

Databricks announces significant improvements to the built-in LLM judges in Agent Evaluation

An improved answer-correctness judge in Agent Evaluation. Agent Evaluation enables Databricks customers to define, measure, and understand how to improve the quality of...

Long Context RAG Performance of LLMs

Retrieval Augmented Generation (RAG) is the most widely adopted generative AI use case among our customers. RAG enhances the accuracy of LLMs by...

Announcing Mosaic AI Agent Framework and Agent Evaluation

Databricks announced the public preview of Mosaic AI Agent Framework and Agent Evaluation alongside our Generative AI Cookbook at the Data + AI...

DSPy on Databricks

Large language models (LLMs) have generated interest in more effective human-AI interaction through optimized prompting techniques. "Prompt engineering" is a growing methodology for tailoring...