Two ways to use this template

Option 1 (coding agent):
1. Click "Copy prompt" below.
2. Paste into Cursor, Claude Code, Codex, or any coding agent.
3. Your agent builds the app, asking questions along the way so the result is exactly what you want.

Option 2 (manual): follow the steps below to set things up yourself, at your own pace.
AI Chat App
Model Serving integration, AI SDK streaming chat, and Lakebase-persisted chat history.
What you are building
A streaming AI chat app on Databricks: a user sends a message, the server authenticates with the Databricks CLI profile (or a service-principal token in production), calls an AI Gateway chat endpoint via the OpenAI-compatible provider, and streams the answer back token-by-token. Chat sessions and messages are persisted in Lakebase Postgres so conversations survive page refreshes and redeploys.
How the steps fit together
Work through the steps in the order below. Each one adds one concrete piece; by the end you have a deployable app.
- Spin Up a Databricks App — scaffold a fresh AppKit Databricks App with `databricks apps init` (the meta-prompt above already verifies the CLI profile via Set Up Your Local Dev Environment).
- Query AI Gateway Endpoints — pick a chat model (e.g. `databricks-gpt-5-4-mini`) and wire up `createOpenAI()` with the AI Gateway base URL.
- Streaming AI Chat with Model Serving — add the `/api/chat` route with `streamText()` and a `useChat` UI backed by `TextStreamChatTransport`.
- Create a Lakebase Instance — provision a managed Postgres project, branch, and endpoint; capture the connection values.
- Lakebase Data Persistence — add the `lakebase()` plugin, schema setup, and CRUD plumbing against your new project.
- Lakebase Agent Memory — create the `chat.chats` and `chat.messages` tables and persist each turn of every conversation.
Before you start
Every step below lists its own workspace-feature checks. Combined, the app needs a Databricks CLI profile that can reach Model Serving (AI Gateway foundation-model endpoints), Lakebase Postgres, and Databricks Apps. Run each step's prerequisite checks upfront so you do not hit gated features mid-build.
Prerequisites
Verify these Databricks workspace features are enabled before starting. If any check fails, ask your workspace admin to enable the feature.
- Databricks CLI authenticated. Run `databricks auth profiles` and confirm at least one profile shows `Valid: YES`. If none do, authenticate with `databricks auth login --host <workspace-url> --profile <PROFILE>`.
- AI Gateway (currently in Beta). AI Gateway is built into all Foundation Model API endpoints, but it is still a Beta feature — behavior and APIs can change. Confirm availability by listing endpoints and checking the config: `databricks serving-endpoints list --profile <PROFILE>` should return at least one `databricks-*` foundation-model endpoint, and `databricks serving-endpoints get <endpoint-name> --profile <PROFILE> -o json | grep -q '"ai_gateway"' && echo ok` should print `ok`. Endpoint availability varies by workspace and region.
Complete these prerequisite templates first:
- Set Up Your Local Dev Environment — install the Databricks CLI and authenticate a profile.
- Query AI Gateway Endpoints — confirm your workspace exposes a chat endpoint via the AI Gateway.
Then verify these Databricks workspace features are enabled. If any check fails, ask your workspace admin to enable the feature.
- Databricks CLI authenticated. Run `databricks auth profiles` and confirm at least one profile shows `Valid: YES`. If none do, authenticate with `databricks auth login --host <workspace-url> --profile <PROFILE>`.
- An OpenAI-compatible chat endpoint in Model Serving. Run `databricks serving-endpoints list --profile <PROFILE>` and confirm at least one OpenAI-compatible chat endpoint is listed (e.g. `databricks-gpt-5-4-mini`, `databricks-meta-llama-3-3-70b-instruct`, or `databricks-claude-sonnet-4`). Endpoint availability varies by workspace and region; note the one you plan to set as `DATABRICKS_ENDPOINT`.
- Databricks Apps enabled. Run `databricks apps list --profile <PROFILE>` and confirm the command succeeds (an empty list is fine). A permission or `not enabled` error means Apps is not available to this identity in this workspace.
Verify these Databricks workspace features are enabled before starting. If any check fails, ask your workspace admin to enable the feature.
- Databricks CLI authenticated. Run `databricks auth profiles` and confirm at least one profile shows `Valid: YES`. If none do, authenticate with `databricks auth login --host <workspace-url> --profile <PROFILE>`.
- Lakebase Postgres available in the workspace. Run `databricks postgres list-projects --profile <PROFILE>` and confirm the command succeeds (an empty list is fine — you are about to create the first project). A `not enabled` or permission error means Lakebase is not available to this identity.
Verify these Databricks workspace features are enabled before starting. If any check fails, ask your workspace admin to enable the feature.
- Databricks CLI authenticated. Run `databricks auth profiles` and confirm at least one profile shows `Valid: YES`. If none do, authenticate with `databricks auth login --host <workspace-url> --profile <PROFILE>`.
- Lakebase Postgres available. Run `databricks postgres list-projects --profile <PROFILE>` and confirm the command succeeds. A `not enabled` error means Lakebase is not available to this identity.
- Databricks Apps enabled. Run `databricks apps list --profile <PROFILE>` and confirm the command succeeds (an empty list is fine). The template deploys an AppKit app to Databricks Apps.
- A provisioned Lakebase project. Complete the Create a Lakebase Instance template first and collect the project's endpoint host, endpoint resource path, database resource path, and PostgreSQL database name.
Verify these Databricks workspace features are enabled before starting. If any check fails, ask your workspace admin to enable the feature.
- Databricks CLI authenticated. Run `databricks auth profiles` and confirm at least one profile shows `Valid: YES`. If none do, authenticate with `databricks auth login --host <workspace-url> --profile <PROFILE>`.
- Lakebase Postgres available. Run `databricks postgres list-projects --profile <PROFILE>` and confirm the command succeeds (an empty list is fine). A `not enabled` error means Lakebase is not available to this identity in this workspace.
- Databricks Apps enabled. Run `databricks apps list --profile <PROFILE>` and confirm the command succeeds (an empty list is fine). The chat persistence layer runs inside an AppKit app deployed to Databricks Apps.
- A scaffolded AppKit app with Lakebase wired up. Complete the Create a Lakebase Instance and Lakebase Data Persistence templates first. This template adds chat tables on top of that setup.
Query AI Gateway Endpoints
Access Databricks foundation models through AI Gateway endpoints with built-in governance, monitoring, and production-readiness features.
1. Understand AI Gateway endpoints
AI Gateway is a governance layer on top of model serving endpoints that provides permissions, rate limiting, payload logging, and AI guardrails. Currently in beta, AI Gateway is becoming the default way to access foundation models in Databricks.
Note: AI Gateway is built into all Foundation Model API endpoints. If you need to access non-AI Gateway endpoints, use the Databricks SDK's servingEndpoints.query() method directly.
2. Check if AI Gateway is available
All Foundation Model API endpoints have AI Gateway built-in. To verify, check if a known FM endpoint has the ai_gateway configuration:
databricks serving-endpoints get <your-endpoint> --profile <PROFILE> --output json | grep -q '"ai_gateway"' && echo "✓ AI Gateway available" || echo "✗ No AI Gateway"
3. Choose your model
List available AI Gateway endpoints in your workspace:
databricks serving-endpoints list --profile <PROFILE>
Common AI Gateway endpoint names:
- `databricks-meta-llama-3-3-70b-instruct`
- `databricks-gemini-3-1-flash-lite`
- `databricks-dbrx-instruct`
Note: When using this template with a coding agent, specify which endpoint to use based on what's available in your workspace. Endpoint names may vary.
Important: Endpoint availability varies by workspace. Always run `databricks serving-endpoints list` to check what's available before configuring your app.
4. Configure environment variables
For local development (.env):
DATABRICKS_ENDPOINT=<your-endpoint>
For deployment (app.yaml):
env:
- name: DATABRICKS_ENDPOINT
value: "<your-endpoint>"
5. Query AI Gateway endpoints
import { getWorkspaceClient } from "@databricks/appkit";
// {} tells the SDK to use default auth chain (env vars / profile).
// Do NOT omit. getWorkspaceClient() with no argument will throw.
const workspaceClient = getWorkspaceClient({});
const endpoint = process.env.DATABRICKS_ENDPOINT || "<your-endpoint>";
async function queryModel(messages: any[]) {
const result = await workspaceClient.servingEndpoints.query({
name: endpoint,
messages: messages,
max_tokens: 1000,
});
return result;
}
For streaming responses: For OpenAI-compatible models, use the Vercel AI SDK's createOpenAI provider with AI Gateway:
import { createOpenAI } from "@ai-sdk/openai";
import { streamText } from "ai";
const databricks = createOpenAI({
baseURL: `https://${process.env.DATABRICKS_WORKSPACE_ID}.ai-gateway.cloud.databricks.com/mlflow/v1`,
apiKey: token,
});
const result = streamText({
model: databricks.chat(endpoint), // e.g., "databricks-gpt-5-4-mini"
messages,
maxOutputTokens: 1000,
});
// AI SDK v6: pipe the text stream to the Express response
result.pipeTextStreamToResponse(res);
Auth for streaming: The streaming example above requires a bearer token for `createOpenAI()`. See the Streaming AI Chat template for the full auth helper pattern using `@databricks/sdk-experimental`.
Note: This pattern works with OpenAI-compatible models (`databricks-gpt-5-4-mini`, `databricks-gpt-oss-120b`). Native Databricks models use the MLflow unified API.
Workspace ID: AppKit auto-discovers this at runtime. For explicit setup, run `databricks api get /api/2.1/unity-catalog/current-metastore-assignment --profile <PROFILE>` and use the `workspace_id` field.
See the Streaming AI Chat template for a complete implementation.
6. Test the endpoint
Query an AI Gateway endpoint:
databricks serving-endpoints query <your-endpoint> \
--json '{"messages": [{"role": "user", "content": "Hello"}], "max_tokens": 100}' \
--profile <PROFILE>
References
- AI Gateway Overview
- AI Gateway and Serving Endpoints
- Vercel AI SDK - For streaming implementations
Streaming AI Chat with Model Serving
Build a streaming AI chat experience in a Databricks App using Vercel AI SDK with Databricks Model Serving and OpenAI-compatible endpoints.
1. Install AI SDK packages
npm install ai@6 @ai-sdk/react@3 @ai-sdk/openai @databricks/sdk-experimental
Version note: This template uses AI SDK v6 APIs (`TextStreamChatTransport`, `sendMessage({ text })`, transport-based `useChat`). Tested with [email protected], @ai-sdk/[email protected], and @ai-sdk/[email protected].
Note: `@databricks/sdk-experimental` is included in the scaffolded `package.json`. It is listed here for reference if adding AI chat to an existing project.
Optional: For pre-built chat UI components, initialize shadcn and add AI Elements:
npx shadcn@latest init

This basic template works without AI Elements; they are optional prebuilt components.
2. Configure environment variables for AI Gateway
Configure your Databricks workspace ID and model endpoint:
For local development (.env):
echo 'DATABRICKS_WORKSPACE_ID=<your-workspace-id>' >> .env
echo 'DATABRICKS_ENDPOINT=<your-endpoint>' >> .env
echo 'DATABRICKS_CONFIG_PROFILE=DEFAULT' >> .env
For deployment in Databricks Apps (app.yaml):
env:
- name: DATABRICKS_WORKSPACE_ID
value: "<your-workspace-id>"
- name: DATABRICKS_ENDPOINT
value: "<your-endpoint>"
Workspace ID: AppKit auto-discovers this at runtime. For explicit setup, run `databricks api get /api/2.1/unity-catalog/current-metastore-assignment --profile <PROFILE>` and use the `workspace_id` field.
Model compatibility: This template uses OpenAI-compatible models served via Databricks AI Gateway, which support the AI SDK's streaming API. The AI Gateway URL uses the `/mlflow/v1` path (not `/openai/v1`).
Find your endpoint: Run `databricks serving-endpoints list --profile <PROFILE>` to see available models. Common endpoints include `databricks-meta-llama-3-3-70b-instruct` and `databricks-claude-sonnet-4`, but availability varies by workspace.
3. Configure authentication helper
Create a helper function that works for both local development and deployed apps:
import { Config } from "@databricks/sdk-experimental";
async function getDatabricksToken() {
// For deployed apps, use service principal token
if (process.env.DATABRICKS_TOKEN) {
return process.env.DATABRICKS_TOKEN;
}
// For local dev, use CLI profile auth via Databricks SDK
const config = new Config({
profile: process.env.DATABRICKS_CONFIG_PROFILE || "DEFAULT",
});
await config.ensureResolved();
const headers = new Headers();
await config.authenticate(headers);
const authHeader = headers.get("Authorization");
if (!authHeader) {
throw new Error(
"Failed to get Databricks token. Check your CLI profile or set DATABRICKS_TOKEN.",
);
}
return authHeader.replace("Bearer ", "");
}
This function uses the Databricks SDK auth chain, which reads ~/.databrickscfg profiles and handles OAuth token refresh. For deployed apps, set DATABRICKS_TOKEN directly.
User identity in deployed apps: Databricks Apps injects user identity via request headers. Extract it with `req.header("x-forwarded-email")` or `req.header("x-forwarded-user")`. Use this for chat persistence and access control.
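A minimal sketch of that pattern; the helper name `resolveUserEmail` and the local fallback value are illustrative, not part of AppKit:

```typescript
// Resolve the acting user's identity for chat persistence and access control.
// In deployed Databricks Apps the x-forwarded-* headers are injected by the
// platform; in local development they are absent, so fall back to a
// placeholder dev identity.
function resolveUserEmail(
  header: (name: string) => string | undefined,
  devFallback = "dev-user@localhost",
): string {
  return (
    header("x-forwarded-email") ?? header("x-forwarded-user") ?? devFallback
  );
}

// Usage in an Express handler:
//   const userId = resolveUserEmail((name) => req.header(name));
```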
4. Add /api/chat route with streaming
Create a server route using the AI SDK's streaming support:
import { createOpenAI } from "@ai-sdk/openai";
import { streamText, type UIMessage } from "ai";
app.post("/api/chat", async (req, res) => {
const { messages } = req.body;
// AI SDK v6 client sends UIMessage objects with a parts array.
// Convert to CoreMessage format for streamText().
const coreMessages = (messages as UIMessage[]).map((m) => ({
role: m.role as "user" | "assistant" | "system",
content:
m.parts
?.filter((p) => p.type === "text" && p.text)
.map((p) => p.text)
.join("") ??
m.content ??
"",
}));
try {
const token = await getDatabricksToken();
const endpoint = process.env.DATABRICKS_ENDPOINT || "<your-endpoint>";
// Configure Databricks AI Gateway as OpenAI-compatible provider
const databricks = createOpenAI({
baseURL: `https://${process.env.DATABRICKS_WORKSPACE_ID}.ai-gateway.cloud.databricks.com/mlflow/v1`,
apiKey: token,
});
// Stream the response using AI SDK v6
const result = streamText({
model: databricks.chat(endpoint),
messages: coreMessages,
maxOutputTokens: 1000,
});
// v6 API: pipe the text stream to the Express response
result.pipeTextStreamToResponse(res);
} catch (err) {
const message = (err as Error).message;
console.error(`[chat] Streaming request failed:`, message);
res.status(502).json({
error: "Chat request failed",
detail: message,
});
}
});
5. Render the streaming chat UI
Use useChat from the AI SDK with TextStreamChatTransport for streaming support:
import { useChat } from "@ai-sdk/react";
import { TextStreamChatTransport } from "ai";
import { useState } from "react";
export function ChatPage() {
const [input, setInput] = useState("");
const { messages, sendMessage, status } = useChat({
transport: new TextStreamChatTransport({ api: "/api/chat" }),
});
return (
<div className="flex flex-col h-full">
<div className="flex-1 overflow-y-auto space-y-4 p-4">
{messages.map((m) => (
<div key={m.id} className={m.role === "user" ? "text-right" : ""}>
<span className="text-sm font-medium">
{m.role === "user" ? "You" : "Assistant"}
</span>
{m.parts.map((part, i) =>
part.type === "text" ? (
<p key={`${m.id}-${i}`} className="whitespace-pre-wrap">
{part.text}
</p>
) : null,
)}
</div>
))}
{status === "submitted" && <div className="p-4">Loading...</div>}
</div>
<form
onSubmit={(e) => {
e.preventDefault();
if (input.trim()) {
void sendMessage({ text: input });
setInput("");
}
}}
className="border-t p-4 flex gap-2"
>
<input
value={input}
onChange={(e) => setInput(e.target.value)}
placeholder="Ask a question..."
className="flex-1 border rounded px-3 py-2"
disabled={status !== "ready"}
/>
<button type="submit" disabled={status !== "ready"}>
{status === "submitted" || status === "streaming"
? "Sending..."
: "Send"}
</button>
</form>
</div>
);
}
6. Deploy and verify
databricks apps deploy --profile <PROFILE>
databricks apps list --profile <PROFILE>
databricks apps logs <app-name> --profile <PROFILE>
Open the app URL while signed in to Databricks, send a message, and verify streaming responses appear token-by-token from the AI Gateway endpoint.
Create a Lakebase Instance
Provision a managed Lakebase Postgres project on Databricks and collect the connection values needed by downstream templates.
1. Create a Lakebase project
Create a new Lakebase Postgres project. This provisions a managed Postgres cluster with a default branch and endpoint:
databricks postgres create-project <project-name> --profile <PROFILE>
2. Verify the project resources
Confirm the branch, endpoint, and database were created:
databricks postgres list-branches \
projects/<project-name> \
--profile <PROFILE> -o json
databricks postgres list-endpoints \
projects/<project-name>/branches/production \
--profile <PROFILE> -o json
databricks postgres list-databases \
projects/<project-name>/branches/production \
--profile <PROFILE> -o json
3. Note the connection values
Record these values from the command output above. They are required by the Lakebase Data Persistence template and other Lakebase-dependent templates:
| Value | JSON path | Used for |
|---|---|---|
| Endpoint host | ...status.hosts.host | PGHOST, lakebase.postgres.host |
| Endpoint resource path | ...name | LAKEBASE_ENDPOINT, lakebase.postgres.endpointPath |
| Database resource path | ...name | lakebase.postgres.database |
| PostgreSQL database name | ...status.postgres_database | PGDATABASE, lakebase.postgres.databaseName |
Lakebase Data Persistence
Add a managed Postgres database to your Databricks app using the Lakebase plugin. Covers schema setup, table creation, and full CRUD REST API routes.
This template assumes you have already completed the Create a Lakebase Instance template and have the connection values (endpoint host, endpoint path, database resource path, and PostgreSQL database name) ready.
The code examples below use a generic items resource as a placeholder. Replace items with your domain entity (products, orders, users, etc.) and adapt the schema columns to match your data model.
1. New app: scaffold with the Lakebase feature
databricks apps init \
--name <app-name> \
--version latest \
--features=lakebase \
--set 'lakebase.postgres.branch=projects/<project-name>/branches/production' \
--set 'lakebase.postgres.database=projects/<project-name>/branches/production/databases/<db-name>' \
--set 'lakebase.postgres.databaseName=<postgres-database-name>' \
--set 'lakebase.postgres.endpointPath=projects/<project-name>/branches/production/endpoints/primary' \
--set 'lakebase.postgres.host=<endpoint-host>' \
--set 'lakebase.postgres.port=5432' \
--set 'lakebase.postgres.sslmode=require' \
--run none --profile <PROFILE>
Use the values returned by list-databases and list-endpoints. The generated template currently requires all postgres fields together during non-interactive scaffolding.
This scaffolds a complete app with Lakebase already wired up, including a sample CRUD app. Skip to step 3 to configure environment variables, then step 5 to deploy.
Naming and routing conventions
The scaffolded Lakebase sample uses lakebase in route names and file paths to make plugin wiring obvious. For production apps, use domain names in user-facing code and keep lakebase only for infrastructure configuration:
- page components and files use domain names: `ItemsPage.tsx`, `item-routes.ts`
- routes use domain names: `/items`, `/api/items`, `/api/items/:id`
- keep `lakebase` naming for plugin/config only: `lakebase()` plugin, `LAKEBASE_ENDPOINT`, `postgres` app resource
2. Existing app: add Lakebase manually
The following changes match what apps init --features=lakebase generates. Apply them to an existing scaffolded AppKit app.
Tip: The code below may be outdated. To get the latest, clone `https://github.com/databricks/appkit` and look in the `template/` directory. Search for `{{if .plugins.lakebase}}` to find all lakebase-conditional files and blocks. Files entirely wrapped in that conditional are lakebase-only; shared files like `App.tsx` and `server.ts` contain conditional blocks you can extract.
Update server/server.ts
Register the lakebase plugin and run route setup inside onPluginsReady. AppKit waits for that hook to resolve before the server starts accepting requests, so your schema setup completes before the first call lands:
import { createApp, server, lakebase } from "@databricks/appkit";
import { setupRoutes } from "./routes/item-routes";
await createApp({
plugins: [server(), lakebase()],
async onPluginsReady(appkit) {
await setupRoutes(appkit);
},
});
Create server/routes/item-routes.ts
CRUD API that creates an items table and exposes REST endpoints. Adapt the table schema and routes to your domain:
import { z } from "zod";
import { Application } from "express";
interface AppKitWithLakebase {
lakebase: {
query(
text: string,
params?: unknown[],
): Promise<{ rows: Record<string, unknown>[] }>;
};
server: {
extend(fn: (app: Application) => void): void;
};
}
const TABLE_EXISTS_SQL = `
SELECT 1 FROM information_schema.tables
WHERE table_schema = 'app' AND table_name = 'items'
`;
const SETUP_SCHEMA_SQL = `CREATE SCHEMA IF NOT EXISTS app`;
const CREATE_TABLE_SQL = `
CREATE TABLE IF NOT EXISTS app.items (
id SERIAL PRIMARY KEY,
name TEXT NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
)
`;
const CreateItemBody = z.object({ name: z.string().min(1) });
const UpdateItemBody = z.object({ name: z.string().min(1) });
export async function setupRoutes(appkit: AppKitWithLakebase) {
try {
const { rows } = await appkit.lakebase.query(TABLE_EXISTS_SQL);
if (rows.length > 0) {
console.log("[lakebase] Table app.items already exists, skipping setup");
} else {
await appkit.lakebase.query(SETUP_SCHEMA_SQL);
await appkit.lakebase.query(CREATE_TABLE_SQL);
console.log("[lakebase] Created schema and table app.items");
}
} catch (err) {
console.warn("[lakebase] Database setup failed:", (err as Error).message);
console.warn("[lakebase] Routes will be registered but may return errors");
}
appkit.server.extend((app) => {
app.get("/api/items", async (_req, res) => {
try {
const result = await appkit.lakebase.query(
"SELECT id, name, created_at FROM app.items ORDER BY created_at DESC",
);
res.json(result.rows);
} catch (err) {
console.error("Failed to list items:", err);
res.status(500).json({ error: "Failed to list items" });
}
});
app.post("/api/items", async (req, res) => {
try {
const parsed = CreateItemBody.safeParse(req.body);
if (!parsed.success) {
res.status(400).json({ error: "name is required" });
return;
}
const result = await appkit.lakebase.query(
"INSERT INTO app.items (name) VALUES ($1) RETURNING id, name, created_at",
[parsed.data.name.trim()],
);
res.status(201).json(result.rows[0]);
} catch (err) {
console.error("Failed to create item:", err);
res.status(500).json({ error: "Failed to create item" });
}
});
app.patch("/api/items/:id", async (req, res) => {
try {
const id = parseInt(req.params.id, 10);
if (isNaN(id)) {
res.status(400).json({ error: "Invalid id" });
return;
}
const parsed = UpdateItemBody.safeParse(req.body);
if (!parsed.success) {
res.status(400).json({ error: "name is required" });
return;
}
const result = await appkit.lakebase.query(
"UPDATE app.items SET name = $1 WHERE id = $2 RETURNING id, name, created_at",
[parsed.data.name.trim(), id],
);
if (result.rows.length === 0) {
res.status(404).json({ error: "Item not found" });
return;
}
res.json(result.rows[0]);
} catch (err) {
console.error("Failed to update item:", err);
res.status(500).json({ error: "Failed to update item" });
}
});
app.delete("/api/items/:id", async (req, res) => {
try {
const id = parseInt(req.params.id, 10);
if (isNaN(id)) {
res.status(400).json({ error: "Invalid id" });
return;
}
const result = await appkit.lakebase.query(
"DELETE FROM app.items WHERE id = $1 RETURNING id",
[id],
);
if (result.rows.length === 0) {
res.status(404).json({ error: "Item not found" });
return;
}
res.status(204).send();
} catch (err) {
console.error("Failed to delete item:", err);
res.status(500).json({ error: "Failed to delete item" });
}
});
});
}
Lakebase tables are owned by the identity that creates them. If you create the app schema locally, your user owns it and the deployed service principal gets permission denied for schema app.
Recommended workflow: Deploy the app first so the service principal creates and owns the schema. Then grant yourself access for local development:
databricks psql --project <project-name> --branch production --endpoint primary --profile <PROFILE> -- -c "
CREATE EXTENSION IF NOT EXISTS databricks_auth;
SELECT databricks_create_role('<your-email>', 'USER');
GRANT databricks_superuser TO \"<your-email>\";
"
If you are the Lakebase project owner, databricks_create_role may fail with role already exists and GRANT databricks_superuser may fail with permission denied to grant role. Both errors are safe to ignore; the project owner already has the necessary access.
This gives you DML access (read/write) but not DDL (create/alter). The service principal remains the schema owner.
If you already created tables locally, drop and recreate the schema so the service principal owns it, or add tables in a separate schema (the Lakebase Agent Memory template uses a chat schema for this reason).
Create client/src/pages/ItemsPage.tsx
List and create UI with CRUD operations against the API routes. Adapt the fields and layout to your domain:
import {
Card,
CardContent,
CardHeader,
CardTitle,
Button,
Input,
Skeleton,
} from "@databricks/appkit-ui/react";
import { useState, useEffect } from "react";
import { X } from "lucide-react";
interface Item {
id: number;
name: string;
created_at: string;
}
export function ItemsPage() {
const [items, setItems] = useState<Item[]>([]);
const [newName, setNewName] = useState("");
const [loading, setLoading] = useState(true);
const [error, setError] = useState<string | null>(null);
const [submitting, setSubmitting] = useState(false);
useEffect(() => {
fetch("/api/items")
.then((res) => {
if (!res.ok)
throw new Error(`Failed to fetch items: ${res.statusText}`);
return res.json() as Promise<Item[]>;
})
.then(setItems)
.catch((err) =>
setError(err instanceof Error ? err.message : "Failed to load items"),
)
.finally(() => setLoading(false));
}, []);
const addItem = async (e: React.FormEvent) => {
e.preventDefault();
const name = newName.trim();
if (!name) return;
setSubmitting(true);
try {
const res = await fetch("/api/items", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ name }),
});
if (!res.ok) throw new Error(`Failed to create item: ${res.statusText}`);
const created = (await res.json()) as Item;
setItems((prev) => [created, ...prev]);
setNewName("");
} catch (err) {
setError(err instanceof Error ? err.message : "Failed to add item");
} finally {
setSubmitting(false);
}
};
const deleteItem = async (id: number) => {
try {
const res = await fetch(`/api/items/${id}`, { method: "DELETE" });
if (!res.ok) throw new Error(`Failed to delete item: ${res.statusText}`);
setItems((prev) => prev.filter((item) => item.id !== id));
} catch (err) {
setError(err instanceof Error ? err.message : "Failed to delete item");
}
};
return (
<div className="space-y-6 w-full max-w-2xl mx-auto">
<Card className="shadow-lg">
<CardHeader>
<CardTitle>Items</CardTitle>
</CardHeader>
<CardContent>
<form onSubmit={addItem} className="flex gap-2 mb-6">
<Input
placeholder="New item name"
value={newName}
onChange={(e) => setNewName(e.target.value)}
disabled={submitting}
className="flex-1"
/>
<Button type="submit" disabled={submitting || !newName.trim()}>
{submitting ? "Adding..." : "Add"}
</Button>
</form>
{error && (
<div className="text-destructive bg-destructive/10 p-3 rounded-md mb-4">
{error}
</div>
)}
{loading && (
<div className="space-y-3">
{Array.from({ length: 3 }, (_, i) => (
<div key={`skeleton-${i}`} className="flex items-center gap-3">
<Skeleton className="h-4 flex-1" />
</div>
))}
</div>
)}
{!loading && items.length === 0 && (
<p className="text-muted-foreground text-center py-8">
No items yet. Add one above to get started.
</p>
)}
{!loading && items.length > 0 && (
<div className="space-y-2">
{items.map((item) => (
<div
key={item.id}
className="flex items-center gap-3 p-3 rounded-lg border hover:bg-muted/50 transition-colors"
>
<span className="flex-1">{item.name}</span>
<Button
variant="ghost"
size="sm"
onClick={() => deleteItem(item.id)}
className="text-muted-foreground hover:text-destructive shrink-0"
aria-label="Delete item"
>
<X className="h-4 w-4" />
</Button>
</div>
))}
</div>
)}
</CardContent>
</Card>
</div>
);
}
Update client/src/App.tsx
Add the import, nav link, and route:
// Add import at top
import { ItemsPage } from './pages/ItemsPage';
// Add nav link inside the <nav> element
<NavLink to="/items" className={navLinkClass}>
Items
</NavLink>
// Add route in the router children array
{ path: '/items', element: <ItemsPage /> },
3. Configure environment variables
For local development, add the Postgres connection details to .env:
PGHOST=<endpoint-host>
PGPORT=5432
PGDATABASE=<postgres-database-name>
PGSSLMODE=require
LAKEBASE_ENDPOINT=projects/<project-name>/branches/production/endpoints/primary
For deployment, the platform injects Postgres connection values automatically through the app resource. Keep only the Lakebase endpoint in app.yaml:
command: ["npm", "run", "start"]
env:
- name: LAKEBASE_ENDPOINT
valueFrom: postgres
4. Update databricks.yml
Add the postgres variables, resource, and target values:
variables:
postgres_branch:
description: Lakebase Postgres branch resource name
postgres_database:
description: Lakebase Postgres database resource name
postgres_databaseName:
description: Postgres database name for local development
postgres_endpointPath:
description: Lakebase endpoint resource name for local development
postgres_host:
description: Postgres host for local development
postgres_port:
description: Postgres port for local development
postgres_sslmode:
description: Postgres SSL mode for local development
resources:
apps:
app:
# Add under existing app config
resources:
- name: postgres
postgres:
branch: ${var.postgres_branch}
database: ${var.postgres_database}
permission: CAN_CONNECT_AND_CREATE
targets:
default:
variables:
postgres_branch: projects/<project-name>/branches/production
postgres_database: projects/<project-name>/branches/production/databases/<db-name>
postgres_databaseName: <postgres-database-name>
postgres_endpointPath: projects/<project-name>/branches/production/endpoints/primary
postgres_host: <endpoint-host>
postgres_port: 5432
postgres_sslmode: require
5. Deploy and test
databricks apps deploy --profile <PROFILE>
Verify the app once it is running by opening the app URL in your browser while signed in to Databricks, navigating to the Items page, and creating, updating, and deleting an item.
If the app does not start, check logs:
databricks apps logs <app-name> --profile <PROFILE>
Lakebase Agent Memory
Save your AI agent's chat conversations to Lakebase so users can come back to a session, scroll their full message history, and let your agent reason over previous turns across requests, deploys, and machines.
The schema is a simplified, production-shaped relational layout (chat plus message) wired to Databricks AppKit + Lakebase. Once it's in place every chat turn — user input, assistant reply, tool call — is durably persisted in managed Postgres next to the rest of your operational data.
This template assumes you have already completed the Create a Lakebase Instance and Lakebase Data Persistence templates (Lakebase project creation, scaffolding, environment variables, databricks.yml config, and initial deploy).
1. Create chat tables
Create two tables in a chat schema:
- `chat.chats`: one row per chat session
- `chat.messages`: one row per message
CREATE SCHEMA IF NOT EXISTS chat;
CREATE TABLE IF NOT EXISTS chat.chats (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id TEXT NOT NULL,
title TEXT NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE TABLE IF NOT EXISTS chat.messages (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
chat_id UUID NOT NULL REFERENCES chat.chats(id) ON DELETE CASCADE,
role TEXT NOT NULL CHECK (role IN ('system', 'user', 'assistant', 'tool')),
content TEXT NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE INDEX IF NOT EXISTS idx_messages_chat_id_created_at
ON chat.messages(chat_id, created_at);
2. Run setup from your server bootstrap
In server/server.ts, run schema setup inside onPluginsReady so it completes before AppKit starts the HTTP server:
import { createApp, server, lakebase } from "@databricks/appkit";
import { setupChatTables } from "./lib/chat-store";
await createApp({
plugins: [server(), lakebase()],
async onPluginsReady(appkit) {
await setupChatTables(appkit);
},
});
3. Add persistence helpers
Create server/lib/chat-store.ts and use parameterized queries:
Getting userId: In deployed Databricks Apps, use `req.header("x-forwarded-email")` from the request headers. For local development, use a hardcoded test user ID.
export async function createChat(
appkit: AppKitWithLakebase,
input: { userId: string; title: string },
) {
const result = await appkit.lakebase.query(
`INSERT INTO chat.chats (user_id, title)
VALUES ($1, $2)
RETURNING id, user_id, title, created_at, updated_at`,
[input.userId, input.title],
);
return result.rows[0];
}
export async function appendMessage(
appkit: AppKitWithLakebase,
input: { chatId: string; role: string; content: string },
) {
const result = await appkit.lakebase.query(
`INSERT INTO chat.messages (chat_id, role, content)
VALUES ($1, $2, $3)
RETURNING id, chat_id, role, content, created_at`,
[input.chatId, input.role, input.content],
);
return result.rows[0];
}
4. Persist in the /api/chat flow
In your chat route:
- create (or load) a chat row
- save incoming user message
- stream assistant response
- save the final assistant response after stream completion
Use an explicit chatId on the client and pass it in each request body.
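The ordering above can be sketched as a small helper with the database and model calls injected. Everything here (`PersistDeps`, `runChatTurn`) is illustrative naming rather than an AppKit API, and a real route would stream the reply to the client instead of awaiting the full text; the injected functions would wrap the `createChat`/`appendMessage` helpers from step 3:

```typescript
type Msg = { role: "user" | "assistant"; content: string };

// Injected dependencies: database writes plus the model call.
interface PersistDeps {
  createChat(title: string): Promise<string>; // returns new chat id
  appendMessage(chatId: string, m: Msg): Promise<void>;
  streamReply(prompt: string): Promise<string>; // resolves with full reply text
}

async function runChatTurn(
  deps: PersistDeps,
  chatId: string | null,
  userText: string,
): Promise<{ chatId: string; reply: string }> {
  // 1. create (or reuse) the chat row, titling new chats from the first turn
  const id = chatId ?? (await deps.createChat(userText.slice(0, 80)));
  // 2. persist the incoming user message before streaming starts
  await deps.appendMessage(id, { role: "user", content: userText });
  // 3. stream the assistant response (collected here for simplicity)
  const reply = await deps.streamReply(userText);
  // 4. persist the final assistant response after stream completion
  await deps.appendMessage(id, { role: "assistant", content: reply });
  return { chatId: id, reply };
}
```

In the real route, step 4 maps naturally onto `streamText()`'s completion callback so persistence happens after `pipeTextStreamToResponse()` finishes sending tokens.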
5. Add history endpoints
Add REST endpoints for your chat UI:
- `GET /api/chats` -> list chats for current user
- `GET /api/chats/:chatId/messages` -> load ordered history
- `DELETE /api/chats/:chatId` -> delete chat and cascade messages
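One possible shape for the underlying handlers, written against the `query` interface from the Lakebase Data Persistence template so they can be wired into `appkit.server.extend()` routes. The function names are illustrative, and the SQL assumes the `chat.*` tables from step 1:

```typescript
// Minimal Lakebase query interface (matches the AppKitWithLakebase shape
// used in the Lakebase Data Persistence template).
interface Lakebase {
  query(
    text: string,
    params?: unknown[],
  ): Promise<{ rows: Record<string, unknown>[] }>;
}

// GET /api/chats -> list the current user's chats, most recently active first
async function listChats(db: Lakebase, userId: string) {
  const result = await db.query(
    `SELECT id, title, created_at, updated_at
       FROM chat.chats
      WHERE user_id = $1
      ORDER BY updated_at DESC`,
    [userId],
  );
  return result.rows;
}

// GET /api/chats/:chatId/messages -> ordered history for one chat
async function listMessages(db: Lakebase, chatId: string) {
  const result = await db.query(
    `SELECT id, role, content, created_at
       FROM chat.messages
      WHERE chat_id = $1
      ORDER BY created_at ASC`,
    [chatId],
  );
  return result.rows;
}

// DELETE /api/chats/:chatId -> ON DELETE CASCADE removes the chat's messages
async function deleteChat(db: Lakebase, chatId: string) {
  await db.query(`DELETE FROM chat.chats WHERE id = $1`, [chatId]);
}
```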
6. Update the client to load and resume chats
- Keep selected `chatId` in state or URL
- Fetch history with `GET /api/chats/:chatId/messages` and call `setMessages()` from the `useChat` return value to load it into the chat (AI SDK v6 uses `messages` in `ChatInit`, not `initialMessages`)
- Send `chatId` in every `/api/chat` request by passing it via a custom `fetch` wrapper on the `TextStreamChatTransport` constructor (there is no `onResponse` option on the transport; use the custom `fetch` to read response headers like `X-Chat-Id`)
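For the `setMessages()` step, persisted rows have to be mapped into the UIMessage shape the hook expects in AI SDK v6 (an `id`, a `role`, and a `parts` array rather than a plain content string). A minimal sketch, where the row type mirrors the `chat.messages` table and the `UIMsg` type name is illustrative:

```typescript
// Row shape as returned by GET /api/chats/:chatId/messages (chat.messages table).
type Row = {
  id: string;
  role: "system" | "user" | "assistant";
  content: string;
};

// Simplified stand-in for the AI SDK v6 UIMessage shape: content lives in parts.
type UIMsg = {
  id: string;
  role: Row["role"];
  parts: { type: "text"; text: string }[];
};

function rowsToUIMessages(rows: Row[]): UIMsg[] {
  return rows.map((r) => ({
    id: r.id,
    role: r.role,
    parts: [{ type: "text", text: r.content }],
  }));
}
```

After fetching the history, hydrate the conversation with `setMessages(rowsToUIMessages(rows))` before the user sends their next turn.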
7. Verify persistence end-to-end
databricks apps deploy --profile <PROFILE>
databricks apps logs <app-name> --profile <PROFILE>
Verification checklist:
- send 2-3 messages
- refresh the page
- confirm prior messages reload from Lakebase
- start a second chat and confirm separate history
- delete a chat and confirm it no longer appears