Embedding Generative UI in Your App
Move beyond text chatbots. Here's how to actually ship generative interfaces in 2025.
If you're building AI features today, you're probably outputting text. A chatbot responds with paragraphs. Maybe some markdown. Users read it, then figure out what to do next.
Generative UI (GenUI) flips this: instead of the LLM returning text, it returns actual components—buttons, charts, forms, interactive widgets. The user doesn't read about the weather; they see a weather card. They don't get instructions for booking a flight; they get a booking form.
This matters in two ways:
For your users: Richer responses than text chatbots. Instead of reading and then acting, they interact directly with generated UI.
For your app: Adaptive interfaces instead of static screens. The same prompt can render different components based on context, user type, or intent.
What Is Generative UI, Really?
The core loop has four steps:
1. User sends a prompt
2. LLM decides to call a tool (e.g., displayWeather)
3. Tool executes and returns structured data
4. Your app renders a component with that data
Instead of the LLM saying "The weather in Tokyo is 72°F and sunny," it triggers a beautiful weather card component with icons, animations, and interactivity.
The magic happens when users can interact with those generated components—clicking buttons, adjusting sliders, selecting options—and feeding that context back into the conversation.
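Stripped of any framework, the loop above comes down to a dispatch table from tool names to components. A minimal sketch in TypeScript: the tool and component names are hypothetical, and strings stand in for real rendered components.

```typescript
// A hypothetical registry mapping tool names to render functions.
// In a real app these would return React/Flutter/etc. components.
type ToolCall = { name: string; args: Record<string, unknown> };

const registry: Record<string, (args: Record<string, unknown>) => string> = {
  displayWeather: (args) => `<WeatherCard city=${args.city} temp=${args.temperature} />`,
  displayBooking: (args) => `<BookingForm destination=${args.destination} />`,
};

// Step 4 of the loop: map a tool call from the LLM to something renderable,
// with a fallback so an unknown tool name degrades gracefully.
function renderToolCall(call: ToolCall): string {
  const render = registry[call.name];
  if (!render) return `Unknown tool: ${call.name}`;
  return render(call.args);
}

console.log(renderToolCall({ name: 'displayWeather', args: { city: 'Tokyo', temperature: 72 } }));
// → <WeatherCard city=Tokyo temp=72 />
```

Every SDK below is, at heart, an opinionated version of this dispatch plus streaming, validation, and state management.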
1. Vercel AI SDK
The Vercel AI SDK is the most mature option for generative UI in the React ecosystem. It introduced the concept of streaming React Server Components directly from LLMs back in 2024, and remains the go-to for teams building with Next.js.
How It Works
The SDK uses the streamUI function to map tool calls to React components:
import { streamUI } from 'ai/rsc';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
const result = await streamUI({
model: openai('gpt-4o'),
messages,
tools: {
displayWeather: {
description: 'Display weather for a location',
parameters: z.object({
city: z.string(),
temperature: z.number(),
conditions: z.string()
}),
generate: async function* ({ city, temperature, conditions }) {
yield <LoadingSpinner />; // placeholder component (name illustrative) shown while streaming
return <WeatherCard city={city} temperature={temperature} conditions={conditions} />; // your final component
}
}
}
});

Key Features
• Streaming components: Show loading states while data fetches, then swap in the final UI
• React Server Components: Stream components from the server with minimal client-side JavaScript
• Multi-provider support: Works with OpenAI, Anthropic, Google, and 30+ model providers
• Type safety: Full TypeScript support with Zod schema validation
The Reality Check
Development of the RSC-based generative UI API (@ai-sdk/rsc) is currently paused at Vercel. For AI SDK 5.0, the recommended approach is the newer useChat hook with tool parts, so reach for the tool-based pattern rather than the older render method.
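In the tool-part pattern, each assistant message carries an array of parts, and tool invocations appear as parts whose type is the tool name prefixed with tool-. Here is the dispatch logic sketched framework-free, with deliberately simplified part shapes; verify the exact field and state names against the AI SDK docs.

```typescript
// Simplified stand-in for an AI SDK 5-style message part. Real parts carry
// more fields; the state/output names used here are assumptions to check
// against the AI SDK documentation.
type Part = { type: string; text?: string; state?: string; output?: unknown };

function describePart(part: Part): string {
  if (part.type === 'text') return part.text ?? '';
  if (!part.type.startsWith('tool-')) return '';
  const toolName = part.type.slice('tool-'.length);
  // Show a loading state until the tool's output has streamed in.
  if (part.state !== 'output-available') return `loading ${toolName}...`;
  // In a real component this is where you'd return <WeatherCard {...output} />.
  return `${toolName}: ${JSON.stringify(part.output)}`;
}
```

In a React component the same switch lives inside message.parts.map(...), returning your components instead of strings.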
Pros:
• Battle-tested in production
• Excellent documentation and examples
• Tight Next.js integration
Cons:
• RSC approach ties you to Next.js (for now)
• Learning curve with AI/UI state management
• Some vendor coupling with Vercel's ecosystem
2. Flutter GenUI SDK
Google quietly released the GenUI SDK for Flutter in late 2025, and it's a serious contender for mobile-first generative interfaces. If you're building with Flutter, this is now the official path.
How It Works
The SDK acts as an orchestration layer between your Flutter widgets, user prompts, and AI agents (primarily Gemini, but other LLMs work too):
// Define your widget catalog
final catalog = WidgetCatalog([
WeatherCard,
BookingForm,
ProductCarousel,
]);
// GenUI generates the appropriate widgets
final response = await genUI.generate(
prompt: userMessage,
catalog: catalog,
);
// Render the generated UI
return GenUIRenderer(response: response);

Key Features
• Widget catalog system: Define which widgets the LLM can compose
• Brand compliance: Generated UIs follow your design system
• Interactive components: Supports buttons, sliders, date pickers that trigger follow-up agent calls
• Cross-platform: Same code runs on iOS, Android, web, and desktop
Real-World Example
The experimental "visual layout" feature in Google's Gemini app uses this exact SDK. Ask about trip planning, and it generates a magazine-style interactive itinerary—not just text.
Pros:
• Official Google support
• Native performance on mobile
• Strong typing with Dart
Cons:
• Alpha stage—expect rough edges
• Smaller ecosystem than React
• Gemini-optimized (other LLMs may need more tuning)
3. TanStack AI
TanStack AI launched in late 2025 with a bold pitch: "The Switzerland of AI tooling." Built by the team behind React Query and TanStack Router, it's designed for developers who want full control without platform assumptions.
How It Works
TanStack AI provides headless primitives for AI chat interfaces—think Radix, but for LLM interactions:
import { useChat, fetchServerSentEvents } from '@tanstack/ai-react';
import { toolDefinition, clientTools } from '@tanstack/ai-client';
// Define a tool
const weatherTool = toolDefinition({
name: 'getWeather',
description: 'Fetch weather for a city',
inputSchema: z.object({ city: z.string() }),
});
// Implement client-side
const weatherClient = weatherTool.client(async ({ city }) => {
const data = await fetchWeather(city);
return { data, component: <WeatherCard {...data} /> }; // WeatherCard is illustrative
});
// Use in your chat
const { messages, sendMessage } = useChat({
connection: fetchServerSentEvents('/api/chat'),
tools: clientTools(weatherClient),
});

Key Features
• Multi-language support: TypeScript, Python, PHP server implementations
• Provider agnostic: Swap between OpenAI, Anthropic, Gemini, Ollama without code changes
• Isomorphic devtools: Debug AI workflows on both client and server
• No vendor lock-in: No hosted service, no fees, no middleman
The TanStack Start Bonus
If you're using TanStack Start, you get createServerFnTool—define a tool once, use it from both AI and your components:
const getProducts = createServerFnTool({
name: 'getProducts',
inputSchema: z.object({ query: z.string() }),
execute: async ({ query }) => db.products.search(query),
});
// AI can call it as a tool
chat({ tools: [getProducts.server] });
// Components can call it directly
const products = await getProducts.serverFn({ query: 'laptop' });

Pros:
• True framework agnostic (React, Solid, vanilla JS)
• Exceptional TypeScript ergonomics
• Open source with transparent development
Cons:
• Alpha stage—missing some features (speech, transcription)
• Smaller community than Vercel's SDK
• Documentation still catching up
4. Crayon SDK (Thesys)
Crayon, built by Thesys, takes a different approach: instead of you deciding when to render components, the LLM decides what UI to generate. It's the most "generative" of the generative UI options.
How It Works
You provide templates, and the LLM picks which to render based on context:
import { CrayonChat, type ResponseTemplate } from '@crayonai/react-core';
const templates: ResponseTemplate[] = [
{
name: 'expense_breakdown',
component: ExpenseBreakdown,
},
{
name: 'booking_confirmation',
component: BookingConfirmation,
},
{
name: 'product_grid',
component: ProductGrid,
}
];
export default function App() {
return <CrayonChat templates={templates} />;
}

The C1 API
Thesys also offers C1, their generative UI API that works as a drop-in replacement for standard LLM calls—but instead of returning text, it returns live UI blocks. Think of it as an LLM that speaks in components.
Key Features
• Agentic UI: Build interfaces that respond, adapt, and build themselves from context
• Template system: Constrain the LLM to your design system
• Tool integration: Works with Zod schemas, LangChain, LlamaIndex
Pros:
• Fastest path from zero to interactive UI
• Strong vision for agentic interfaces
• Active development
Cons:
• Smaller ecosystem
• Commercial API (C1) has costs
• Less mature than Vercel's approach
5. LangGraph + LangSmith
If you're already in the LangChain ecosystem, LangGraph now supports generative UI through LangSmith. You can colocate React components with your graph code.
How It Works
Define UI components alongside your agent nodes:
// ui.tsx
const WeatherComponent = (props: { city: string }) => {
return <div>Weather for {props.city}</div>; // wrapper element assumed
};
export default { weather: WeatherComponent };
// In your graph
import { push_ui_message } from '@langchain/langgraph-sdk/react-ui';
async function weatherNode(state, config) {
const weather = await fetchWeather(state.city);
push_ui_message('weather', { city: state.city, ...weather });
return state;
}Key Features
• Shadow DOM isolation: Components render in isolated style contexts
• Streaming updates: Update UI components as the LLM generates
• LangSmith integration: Load components from the cloud when needed
Pros:
• Tight integration with LangChain/LangGraph
• Enterprise-grade observability
• Good for complex multi-step workflows
Cons:
• Requires LangSmith infrastructure
• More complex setup
• Overkill for simple chatbots
6. Custom Implementation
Sometimes the best approach is rolling your own. The pattern is straightforward:
1. Define tool schemas with Zod
2. Parse tool calls from the LLM response
3. Map tool names to components
4. Render with props from tool arguments
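Step 2 is where custom implementations usually start: pulling tool calls out of the provider response. For an OpenAI-style chat completion, where function arguments arrive as a JSON string, a defensive sketch might look like this (field names follow the OpenAI chat completions shape):

```typescript
// OpenAI-style tool call: `arguments` is a JSON string, not an object.
type RawToolCall = { function: { name: string; arguments: string } };

function extractToolCalls(toolCalls: RawToolCall[]): { name: string; args: unknown }[] {
  const calls: { name: string; args: unknown }[] = [];
  for (const tc of toolCalls) {
    try {
      calls.push({ name: tc.function.name, args: JSON.parse(tc.function.arguments) });
    } catch {
      // Malformed JSON from the model: skip the call rather than crash the renderer.
    }
  }
  return calls;
}
```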
Basic Implementation
// tools.ts
const tools = {
displayChart: {
schema: z.object({
data: z.array(z.object({ label: z.string(), value: z.number() })),
title: z.string(),
}),
component: ChartComponent,
},
displayForm: {
schema: z.object({
fields: z.array(z.object({ name: z.string(), type: z.string() })),
}),
component: DynamicForm,
},
};
// Renderer
function ToolRenderer({ toolCall }) {
const tool = tools[toolCall.name];
if (!tool) return <div>Unknown tool: {toolCall.name}</div>;
const Component = tool.component;
return <Component {...toolCall.args} />;
}

When to Go Custom
• You need fine-grained control over streaming behavior
• Your framework isn't well-supported (Vue, Svelte, etc.)
• You're building something truly novel
• You want to understand the internals before abstracting
Pros:
• Total control
• No dependencies
• Deep understanding
Cons:
• More code to maintain
• Edge cases you'll discover the hard way
• No community support
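One of those edge cases: models occasionally emit malformed or incomplete tool arguments, so validate before rendering. The hand-rolled guard below stands in for the Zod safeParse call the schemas above would give you; the types match the displayChart tool.

```typescript
// Validate LLM-supplied chart arguments before they reach a component.
// This hand-rolled check stands in for Zod's safeParse.
type ChartPoint = { label: string; value: number };
type ChartArgs = { title: string; data: ChartPoint[] };

function parseChartArgs(raw: unknown): ChartArgs | null {
  if (typeof raw !== 'object' || raw === null) return null;
  const obj = raw as Record<string, unknown>;
  if (typeof obj.title !== 'string' || !Array.isArray(obj.data)) return null;
  const data: ChartPoint[] = [];
  for (const item of obj.data) {
    const p = item as Record<string, unknown> | null;
    if (!p || typeof p.label !== 'string' || typeof p.value !== 'number') return null;
    data.push({ label: p.label, value: p.value });
  }
  return { title: obj.title, data };
}
```

On null, render a plain-text fallback instead of the chart: better a degraded answer than a crashed one.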
7. Other Notable Options
Tambo
A React-focused generative UI SDK where the AI decides which components to render—not you. Register your components with Zod schemas, and the LLM picks the right one based on natural language. Supports both "generative" components (render once) and "interactable" components (persist and update as users refine requests). Built-in MCP integration, self-hostable or cloud option. Active development with 800+ GitHub stars.
const components: TamboComponent[] = [{
name: "Graph",
description: "Displays data as charts",
component: Graph,
propsSchema: z.object({
data: z.array(z.object({ name: z.string(), value: z.number() })),
type: z.enum(["line", "bar", "pie"])
})
}];

llm-ui
A React library specifically for rendering LLM output with custom components. Great for syntax highlighting, markdown rendering, and smooth streaming animations. Not full generative UI, but excellent for polishing LLM responses.
Hashbrown
An emerging framework with "Skillet" schemas for streaming components. Strong focus on smart home and IoT use cases, but the pattern generalizes.
react-native-gen-ui
If you need generative UI in React Native without Flutter, this library brings Vercel-style patterns to mobile. Still early, but actively developed.
CopilotKit
Open-source framework for building "co-pilot" experiences. Integrates with CrewAI and LangGraph for agentic generative UI with human-in-the-loop flows.
Choosing Your Approach
• Production-ready, Next.js → Vercel AI SDK
• Cross-platform mobile → Flutter GenUI SDK
• Maximum flexibility → TanStack AI
• AI-driven component selection → Tambo
• Rapid prototyping → Crayon SDK
• Complex agent workflows → LangGraph
• Full control → Custom implementation
Decision Framework
What's your framework? Next.js → Vercel. Flutter → GenUI SDK. Everything else → TanStack or custom.
How complex are your agents? Simple tool calls → Any option. Multi-step reasoning → LangGraph or custom.
What's your timeline? Ship this week → Vercel or Crayon. Learning/exploring → TanStack or custom.
Vendor tolerance? Zero lock-in → TanStack. Some coupling OK → Vercel.
The Future Is Adaptive
Compared to text chatbots, it's a better experience for users. They interact with real components instead of parsing paragraphs. A travel query returns an interactive itinerary, not a bullet list.
Compared to static interfaces, it's more flexible for developers. You define the components; the LLM decides when to use them. First-time users and power users can see different things from the same app.
Google's Gemini 3 is already generating full interactive surfaces from prompts. The SDKs covered here—Vercel, Flutter, TanStack, Tambo, Crayon, LangGraph—each take a different approach to the same idea: let the model render UI, not just text.
Start with the SDK that matches your stack. Build something small—a weather widget, a booking form, a data visualization. Feel the difference when users interact with generated components instead of reading generated text.
Then scale from there.
---
As you add GenUI to your app, you're making it more adaptive—components render based on intent, not just static routes. But adaptive systems need a system of record. Differ maintains a living history of your AI coding conversations, decisions, and intent, so you can understand and evolve your dynamic app without losing track of why it works the way it does.