What is Agent Bricks?

Agent Bricks is Databricks' enterprise agent platform for building, deploying, and governing agents that operate on your business data. It unifies model access, execution, governance, and context across a single system: from the model you call, to the data your agent reads, to the identity it acts under. In your workspace you configure Knowledge Assistants, Multi-Agent Supervisors, and custom Python agents. Databricks handles evaluation, tuning, and quality improvement, then hosts each agent at an HTTP endpoint your app can call.

Your AppKit app connects to Agent Bricks capabilities through two plugins: the Model Serving plugin for agents, foundation models, and governed endpoints, and the Genie plugin for natural-language queries over Unity Catalog tables.

How it fits together

Your AppKit app calls Agent Bricks through a Model Serving endpoint (a foundation model, Knowledge Assistant, Agent Bricks Multi-Agent Supervisor, or custom Python agent) or a Genie space (natural-language queries over Unity Catalog tables). The Model Serving plugin and Genie plugin cover both.
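Whichever path you take, a chat-style call to a serving endpoint carries a messages array. A minimal sketch of the payload shape, assuming the common chat-completions convention (field names are not guaranteed by AppKit; check your endpoint's schema):

```typescript
// Hedged sketch: field names follow the common chat-completions convention.
// Your endpoint's schema may differ; check it before relying on this shape.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

function buildChatPayload(messages: ChatMessage[], stream = true) {
  return { messages, stream };
}

const payload = buildChatPayload([
  { role: "user", content: "Summarize last quarter's revenue by region." },
]);
```

The same payload shape works whether the endpoint behind it is a foundation model, a Knowledge Assistant, or a custom Python agent; only the endpoint name changes.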

AppKit plugins for Agent Bricks

| You want to | Use this plugin | Frontend helper |
| --- | --- | --- |
| Call a foundation model (LLM) with chat messages | serving | useServingStream, useServingInvoke |
| Call an agent endpoint (Knowledge Assistant, Supervisor Agent, custom Python) | serving | useServingStream, useServingInvoke |
| Give users natural-language queries over Unity Catalog tables | genie | GenieChat, useGenieChat |

Pick the plugin that matches the resource. No other primitive is required for the AI surface.

Auth

Serving and Genie HTTP routes run on behalf of the authenticated user by default. If the user doesn't have CAN QUERY on the serving endpoint or CAN RUN on the Genie space, the call fails with a 403. You don't write the permission check.
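The 403 is still worth handling in your UI. A hedged sketch of translating it into a message that names the missing permission (the helper and the message text are ours, not part of AppKit):

```typescript
// Hypothetical helper (not an AppKit API): map the status codes described
// above to user-facing messages that name the missing permission.
function describeAccessError(
  status: number,
  resource: "serving" | "genie"
): string {
  if (status === 403) {
    return resource === "serving"
      ? "You need CAN QUERY on this serving endpoint. Ask the endpoint owner for access."
      : "You need CAN RUN on this Genie space. Ask the space owner for access.";
  }
  if (status === 401) return "Your session expired. Sign in again.";
  return `Request failed with status ${status}.`;
}
```

Because the call runs as the user, the fix is always a grant on the workspace resource, never a change in your app code.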

For server logic outside a route handler, call AppKit.serving("alias").asUser(req).invoke(...) to keep the same behavior.

Why AppKit instead of raw fetch

You could call a serving endpoint directly with fetch and a token. The plugin isn't doing something you can't do yourself. It's doing these things so you don't have to:

  • Routes run as the authenticated user, so per-user permissions apply automatically. Your users only see endpoints and data they're already allowed to see. No OAuth code on your side. See Execution context for the details.
  • All streaming is handled for you: SSE parsing, abort on unmount, token accumulation, and error handling. useServingStream and useGenieChat do this.
  • No secrets in the frontend. The plugin proxies through your server and tokens stay on the backend. No PAT in the React bundle.
  • When your serving endpoint publishes an OpenAPI schema, AppKit generates typed endpoint aliases, with TypeScript request and response types per alias. Autocomplete for chunk shapes instead of unknown.
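For contrast, here is a minimal sketch of the hand-rolled SSE handling the streaming bullet above replaces. The data: ... / [DONE] framing follows the common chat-streaming convention; your endpoint's wire format may differ:

```typescript
// Hedged sketch of what useServingStream spares you: parsing a
// text/event-stream body by hand. The "data: ..." / "[DONE]" framing is the
// common chat-streaming convention; your endpoint's wire format may differ.
function extractSseData(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => line.slice("data: ".length).trim())
    .filter((data) => data !== "[DONE]");
}

// Accumulate streamed deltas into the full response text.
function accumulateTokens(events: string[]): string {
  let text = "";
  for (const data of events) {
    try {
      const parsed = JSON.parse(data);
      // Delta path assumed from the chat-completions convention.
      text += parsed.choices?.[0]?.delta?.content ?? "";
    } catch {
      // Partial JSON split across chunk boundaries: real code must
      // buffer and retry, not silently drop. The hook handles this.
    }
  }
  return text;
}
```

And this sketch still omits abort on unmount, reconnection, and error surfacing, which is the point of the comparison.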

Creating a custom agent

Creating a custom agent is a Python workflow: the ResponsesAgent interface, an agent framework (OpenAI Agents SDK, LangGraph, LlamaIndex), and MLflow for tracing. See Create an AI agent on docs.databricks.com for that track.

Pick a template to start from

Start from a template that matches your use case. Each one includes the Model Serving or Genie plugin wiring, an app.yaml resource binding, and a working UI you can adapt.

| You want to... | Template |
| --- | --- |
| Add a streaming chatbot to your app | AI Chat App |
| Let users query tables in natural language | Genie Analytics App |
| Add multi-space Genie switching to an existing app | Genie Multi-Space Selector |
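The app.yaml resource binding each template ships looks roughly like the fragment below. This is a hedged sketch: the resource name and env var are placeholders, so copy the file from the template you picked rather than this example.

```yaml
# Hedged sketch of a Databricks Apps resource binding; names are placeholders.
# Copy the binding from your chosen template rather than this fragment.
command: ["npm", "run", "start"]
env:
  - name: SERVING_ENDPOINT
    valueFrom: serving-endpoint   # app resource declared at deploy time
```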

Where to next

  • AI Gateway for governed access to models, agent endpoints, and external tools.
  • Genie spaces for chat-with-your-data over Unity Catalog tables.
  • Custom agent endpoints for wiring Knowledge Assistant, Supervisor Agent, or your own Python agent.