Lakehouse Monitoring: A Unified Solution for Quality of Data and AI

Databricks Lakehouse Monitoring allows you to monitor all your data pipelines – from data to features to ML models – without additional...

Improve your RAG application response quality with real-time structured data

Retrieval Augmented Generation (RAG) is an efficient mechanism to provide relevant data as context in Gen AI applications. Most RAG applications typically use...

Creating High Quality RAG Applications with Databricks

December 6, 2023 by Patrick Wendell and Hanlin Tang
Retrieval-Augmented-Generation (RAG) has quickly emerged as a powerful way to incorporate proprietary, real-time data into Large Language Model (LLM) applications. Today we are...

Creating a bespoke LLM for AI-generated documentation

We recently announced our AI-generated documentation feature, which uses large language models (LLMs) to automatically generate documentation for tables and columns in...

Announcing MLflow 2.8 LLM-as-a-judge metrics and Best Practices for LLM Evaluation of RAG Applications, Part 2

Today we're excited to announce that MLflow 2.8 supports our LLM-as-a-judge metrics, which can help save time and costs while providing an approximation of...

Big Book of MLOps Updated for Generative AI

Last year, we published the Big Book of MLOps, outlining guiding principles, design considerations, and reference architectures for Machine Learning Operations (MLOps). Since...

Llama 2 Foundation Models Available in Databricks Lakehouse AI

We’re excited to announce that Meta AI’s Llama 2 foundation chat models are available in the Databricks Marketplace for you to fine-tune and...

A Pattern for the Lightweight Deployment of Distributed XGBoost and LightGBM Models

A common challenge data scientists encounter when developing machine learning solutions is training a model on a dataset that is too large to...

Deploy Private LLMs using Databricks Model Serving

We are excited to announce the public preview of GPU and LLM optimization support for Databricks Model Serving! With this launch, you can deploy...

Using MLflow AI Gateway and Llama 2 to Build Generative AI Apps

To build customer support bots, internal knowledge graphs, or Q&A systems, customers often use Retrieval Augmented Generation (RAG) applications which leverage pre-trained models...