# What Is Generative UI? Why Text Output Is No Longer Enough

Ask an AI assistant to plan a trip and you often get a wall of text:

```txt
Here are three hotels you might like...
1. Hotel A: ...
2. Hotel B: ...
3. Hotel C: ...
```

That works until the user wants to compare prices, filter by neighborhood, save one option, change dates, or book. At that point, the answer wants to become software. The model has useful intent, but the product gives it only one primitive: text.

Generative UI is the missing layer between "the model knows what to show" and "the user can actually use it."

Instead of asking a model to return a paragraph, a generative UI system asks it to return an interface: cards, tables, charts, forms, filters, buttons, and stateful flows that match the user's intent and the product's component system.

That distinction matters because most useful AI products are not writing products. They are decision products. They help a user choose, compare, configure, approve, edit, buy, schedule, debug, or investigate. Text can explain those actions. UI lets the user perform them.

## The failure mode of text-only AI

Text-only AI output is seductive because it is easy to ship. A chat box can sit on top of almost any app. The backend sends the user's message to a model, the model streams tokens back, and the frontend renders markdown.

The problem is not that chat is bad. The problem is that chat becomes the only shape the product can take.

Imagine a customer support assistant. A user asks:

```txt
Why was my invoice higher this month?
```

A text answer can say:

```txt
Your invoice increased because you added three seats and exceeded your storage quota.
```

A better interface would show:

- the current invoice total
- the previous invoice total
- the three added seats
- the quota overage
- a link to seat management
- a downgrade or storage upgrade option
- an audit trail for who made each change

That response is not just "more visual." It is more operational. The user can inspect the evidence and act without translating a paragraph back into navigation steps.

This is where text-only output breaks down:

- It hides structure inside prose.
- It makes the user do the work of comparison.
- It cannot safely expose product actions without another layer.
- It does not match the application's design system.
- It creates long answers when the right answer is a small tool.

Markdown tables and buttons help a little, but they are still generic. Production apps need controlled components, validation, permissions, analytics, loading states, disabled states, and event handlers. A model should not invent those from scratch.

## A practical definition

Generative UI is a pattern where an AI system generates structured interface output from user intent, available data, and an allowed component library.

The important word is "allowed."

In a healthy generative UI system, the model does not emit arbitrary frontend code. It chooses from components the application has registered. The product team defines the building blocks; the model decides how to compose them for the current task.

That gives you a cleaner contract:

```txt
User intent + product state + component library -> generated interface
```
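
Sketched in TypeScript, that contract might look something like this. The type names are illustrative, not taken from any particular library:

```ts
// A sketch of the contract in type form. Names are illustrative.

type ComponentSpec = {
  name: string;                   // e.g. "HotelCard"
  propsSchema: unknown;           // a prop schema, such as a zod object
};

type GenerateUIInput = {
  intent: string;                 // the user's request
  productState: unknown;          // relevant application data
  library: ComponentSpec[];       // the components the model may use
};

type GeneratedUINode = {
  component: string;              // must match a registered ComponentSpec name
  props: Record<string, unknown>; // plain data only, no code
};

// Intent + state + library in, a composition of allowed components out.
type GenerateUI = (input: GenerateUIInput) => Promise<GeneratedUINode[]>;
```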

For example, a travel app might allow:

- `HotelCard`
- `HotelComparisonTable`
- `DateRangePicker`
- `MapPreview`
- `BookingCTA`

A finance app might allow:

- `TransactionTable`
- `SpendChart`
- `CategoryFilter`
- `RiskNotice`
- `TransferForm`

The model is not asked to "make a React app." It is asked to return a structured representation that says, in effect:

```txt
Render a comparison table with these columns.
Render three hotel cards.
Render a date picker initialized to this range.
Render a call to action for the selected option.
```

The renderer then turns that structure into real UI using components the app already trusts.
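
To make that concrete, the structured output for the travel example might look roughly like the following. The exact shape and prop names are hypothetical; the point is that the model names components and passes plain data, not markup or code:

```ts
// A hypothetical structured response for the travel example above.
// Component names come from the allowed library; props are plain data.
const generatedUI = [
  {
    component: "HotelComparisonTable",
    props: { columns: ["Name", "Price per night", "Neighborhood", "Rating"] },
  },
  {
    component: "HotelCard",
    props: { name: "Hotel A", pricePerNight: 180, neighborhood: "Old Town" },
  },
  // ...two more HotelCard entries would follow here
  {
    component: "DateRangePicker",
    props: { start: "2025-06-10", end: "2025-06-14" },
  },
  {
    component: "BookingCTA",
    props: { label: "Book selected hotel" },
  },
];
```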

## How this differs from templates

Traditional product UI is mostly predetermined. Designers and engineers decide which screens exist, which components appear on them, and how users move through them. That is still the right default for core workflows.

Generative UI is useful when the shape of the answer depends on the user's question.

If a user asks a data assistant:

```txt
What changed in enterprise revenue this quarter?
```

The right interface might be a line chart, a cohort table, and three annotated drivers.

If the next user asks:

```txt
Which accounts are at risk and why?
```

The right interface might be a sortable account table, risk tags, last-touch notes, and suggested next actions.

Both live in the same product. Both use the same data. But their UI shape is different because the user's goal is different.

Templates are best when the path is known. Generative UI is best when the product knows the ingredients, but not the final arrangement.

## How this differs from chatbots

A chatbot treats the conversation as the product surface. Generative UI treats the conversation as one input into a product surface.

That sounds subtle, but it changes the architecture.

In a chatbot, the model usually owns the response. It decides what to say and the UI displays it.

In a generative UI system, the application owns the component contract. The model can only generate within the boundaries the app exposes. That means the product can keep control over:

- which actions exist
- which props are valid
- which data can be shown
- which components are interactive
- how errors and empty states render
- how the result matches the design system

This is why the interesting unit is not "AI-generated HTML." The interesting unit is a typed component contract.
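
A minimal sketch of what enforcing that contract can look like, assuming zod prop schemas and a hypothetical `validateNode` helper, is to reject any node whose component or props fall outside the registry:

```ts
import { z } from "zod";

// Illustrative prop schemas for two components from the earlier finance example.
const propSchemas: Record<string, z.ZodTypeAny> = {
  RiskNotice: z.object({
    severity: z.enum(["low", "medium", "high"]),
    message: z.string(),
  }),
  SpendChart: z.object({
    series: z.array(z.object({ month: z.string(), amount: z.number() })),
  }),
};

// Reject anything the app has not registered, and anything with invalid props.
function validateNode(node: { component: string; props: unknown }) {
  const schema = propSchemas[node.component];
  if (!schema) return { ok: false as const, error: "unknown component" };
  const parsed = schema.safeParse(node.props);
  return parsed.success
    ? { ok: true as const, props: parsed.data }
    : { ok: false as const, error: parsed.error.message };
}
```

A node that fails this check never reaches the renderer; the product can fall back to plain text or a standard error state instead.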

## The basic architecture

Most generative UI systems have five pieces.

First, the application defines a component library. This is the set of UI elements the model is allowed to use. Each component needs a name, a prop schema, and a real implementation.

Second, the system turns that library into model instructions. The model needs to know what components exist and how to call them.

Third, the model receives the user's intent, relevant state, and available tools or data.

Fourth, the model streams structured UI output rather than plain prose.

Fifth, a renderer parses that output and mounts real components in the app.
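
A minimal sketch of that last piece, assuming a React app and the node shape used earlier, is a lookup from component name to implementation:

```ts
import { createElement, type ComponentType } from "react";

// HotelCard and friends stand in for the app's real, already-trusted components.
declare const HotelCard: ComponentType<any>;
declare const HotelComparisonTable: ComponentType<any>;
declare const DateRangePicker: ComponentType<any>;
declare const BookingCTA: ComponentType<any>;

// Only registered names can be rendered.
const registry: Record<string, ComponentType<any>> = {
  HotelCard,
  HotelComparisonTable,
  DateRangePicker,
  BookingCTA,
};

// Unknown names are dropped, never executed.
function renderNode(node: { component: string; props: Record<string, unknown> }) {
  const Component = registry[node.component];
  return Component ? createElement(Component, node.props) : null;
}
```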

OpenUI follows this shape. Its README describes OpenUI Lang as a compact, streaming-first language for model-generated UI. It also describes a flow where components define what the model can generate, the library becomes a system prompt, the model streams OpenUI Lang, and the renderer progressively renders live UI.

That progressive rendering is important. AI interfaces feel much better when useful structure appears while the model is still working. A dashboard can stream its shell first, then fill in cards, charts, and follow-up actions as data arrives.
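
One rough way to picture it, without assuming OpenUI's actual API, is a renderer that consumes UI nodes as an async stream and mounts each one as soon as it arrives:

```ts
// A rough sketch of progressive rendering: each UI node becomes visible as
// soon as it arrives instead of waiting for the full response.
type UINode = { component: string; props: Record<string, unknown> };

async function renderProgressively(
  stream: AsyncIterable<UINode>,  // nodes parsed out of the model's output as it streams
  mount: (node: UINode) => void,  // hands each node to the app's renderer
) {
  for await (const node of stream) {
    mount(node);
  }
}
```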

## Why the component contract matters

The unsafe version of generative UI is:

```txt
Let the model write arbitrary frontend code.
```

That is flexible in the same way `eval()` is flexible. It creates obvious problems: security, runtime errors, broken layouts, inconsistent design, and unpredictable behavior.

The safer version is:

```txt
Let the model choose from components you own.
```

That gives teams a real boundary. The model can be creative about composition without being creative about execution.

For example, a product team can expose:

```ts
import { z } from "zod";

// PlanComparison is the app's own component implementation; defineComponent
// registers it, with a prop schema, as something the model is allowed to use.
defineComponent({
  name: "PlanComparison",
  props: z.object({
    plans: z.array(
      z.object({
        name: z.string(),
        price: z.string(),
        features: z.array(z.string()),
      }),
    ),
  }),
  component: PlanComparison,
})
```

The model can decide when a plan comparison helps. It cannot invent a new component with unknown behavior. It cannot pass a function prop. It cannot execute code in the browser. The renderer receives structured data and renders a component the app already understands.

That is the difference between "AI builds my UI" and "AI composes my product's UI."

## Where generative UI helps

Generative UI is especially useful when users are asking open-ended questions over structured data or workflows.

Good fits include:

- customer support account explanations
- analytics dashboards
- sales pipeline inspection
- internal admin tools
- onboarding flows
- product configuration
- document review
- incident response summaries
- travel, commerce, and booking flows

The common pattern is that the user does not only need an answer. They need a surface for the next decision.

Take incident response. A text-only assistant might summarize logs and suggest a fix. A generative UI response can show the timeline, affected services, error-rate chart, suspected deploy, rollback button, and links to traces. The prose still matters, but it becomes annotation around an actionable interface.

## Where it does not help

Not every AI response needs generated UI.

If the user asks for a definition, a short explanation is enough. If the workflow is highly regulated, a fixed, reviewed interface may be safer. If the product already has a well-designed screen for the task, generating a new one can add confusion.

Generative UI is not a replacement for product design. It is a way to make product design more adaptive.

The design work moves from drawing every possible final screen to designing a component system that can be safely composed at runtime.

That is a different skill. Teams need to think about component granularity, prop schemas, empty states, permission boundaries, observability, and how much freedom the model should have.

## The real shift

The biggest change is not visual. It is architectural.

With text-only AI, the model is a writer attached to your product.

With generative UI, the model becomes a planner that can assemble parts of your product.

That means AI output starts to look less like a transcript and more like a temporary, task-specific workspace. The interface can change shape around what the user is trying to do, while still staying inside the product's rules.

This is why "text output is no longer enough" is not a slogan. It is a product constraint.

Users do not come to software to read. They come to decide and act. When the answer needs comparison, selection, validation, state, or action, prose is only the first draft of the interface.

Generative UI is what happens when the final answer is not a paragraph, but a usable screen.

## References

- OpenUI website: https://www.openui.com/
- OpenUI GitHub README: https://github.com/thesysdev/openui
- OpenUI Creator Program brief: https://github.com/thesysdev/openui-creator-program/issues/7