# What Is Generative UI? And Why Text Output Is No Longer Enough

For the last decade, most software automation has treated the interface as a
fixed shell. A user clicks a known button, fills a known form, opens a known
table, and the system responds inside layouts that were designed long before
the request arrived. Even when language models entered the product stack, the
default answer stayed the same: put a chat box next to the application and let
the model return text.

That was a useful starting point. Text is universal, cheap to render, and easy
to stream. But it is also a narrow output format for many real tasks. When a
user asks for a budget comparison, a migration plan, a sales pipeline review,
a booking workflow, or a compliance checklist, they do not only need prose.
They need structure, controls, state, and visual priority. A paragraph can
describe those things. A generated interface can let the user act on them.

Generative UI is the shift from models that only generate words to systems
that generate usable interface states. Instead of responding with a wall of
text that says “here are three options,” the application can render three
option cards. Instead of describing filters, it can produce filter controls.
Instead of summarizing a workflow, it can create a stepper with actions,
warnings, and progress. The model is no longer just a narrator. It becomes
part of the interface composition layer.

## The problem with text-only output

Text-only AI has three persistent problems in product workflows.

First, text hides hierarchy. A model can write “high priority” or “low risk,”
but users still have to scan paragraphs to understand what matters. Tables
help, but plain markdown tables quickly collapse under complex data: nested
statuses, actions, evidence links, confidence, responsibility, and exceptions.

Second, text separates recommendation from action. A chatbot can advise a user
to update a dependency, compare a plan, or inspect an invoice. The next step
still lives somewhere else in the product. The user must translate the
recommendation into clicks. That translation is a common source of abandonment
and error.

Third, text does not remember UI state naturally. Users expect software to
preserve selections, expanded rows, input values, validation messages, and
partial completion. A chat transcript is a poor container for that kind of
state. The more interactive the job, the more awkward a pure chat UI becomes.

These constraints are not cosmetic. They directly affect task completion. If a
user has to copy details out of a model response, paste them into another
screen, then manually verify whether the action succeeded, the model is doing
only part of the work.

## What generative UI changes

Generative UI treats the response as an interface specification rather than
only a message. The generated output can include components such as cards,
tables, forms, timelines, charts, callouts, comparison grids, accordions, and
action panels. Those components can be rendered by the application in a
controlled design system, so the result still feels native rather than like
arbitrary HTML pasted into a chat window.

The key idea is not “let the model draw anything.” That is risky and
inconsistent. The better approach is constrained generation: the product exposes
approved components and schemas, and the model chooses how to compose them for
the user’s task. This gives teams a middle path between rigid hand-coded
screens and unstructured text.

For example, a support tool could expose components like `IssueSummary`,
`RefundDecisionCard`, `EvidenceTimeline`, and `EscalationForm`. When a user
asks “what should I do with this refund request?”, the model can render a
structured decision view: the likely policy outcome, the evidence that
supports it, the risk flags, and a pre-filled form for the next action. The UI
is generated for that case, but the parts are still governed by the product.
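
To make that concrete, here is a minimal sketch of such a registry in TypeScript, assuming a schema-validation library like zod; the component names are the hypothetical ones from the support example, and their fields are invented for illustration:

```ts
import { z } from "zod";

// Approved components and their prop schemas. Nothing outside this
// registry is rendered, regardless of what the model produces.
const componentRegistry = {
  IssueSummary: z.object({
    title: z.string(),
    severity: z.enum(["low", "medium", "high"]),
    summary: z.string(),
  }),
  RefundDecisionCard: z.object({
    recommendation: z.enum(["approve", "deny", "escalate"]),
    policyReference: z.string(),
    confidence: z.number().min(0).max(1),
  }),
  EvidenceTimeline: z.object({
    events: z.array(z.object({ at: z.string(), label: z.string() })),
  }),
  EscalationForm: z.object({
    reason: z.string(),
    assignee: z.string().optional(),
  }),
};

// The model's output is treated as untrusted: a list of blocks that
// each claim a component name and a set of props.
interface GeneratedBlock {
  component: string;
  props: unknown;
}

function validateBlock(block: GeneratedBlock) {
  const schema =
    componentRegistry[block.component as keyof typeof componentRegistry];
  if (!schema) {
    return { ok: false as const, error: `unknown component: ${block.component}` };
  }
  const parsed = schema.safeParse(block.props);
  return parsed.success
    ? { ok: true as const, component: block.component, props: parsed.data }
    : { ok: false as const, error: parsed.error.message };
}
```

Blocks that fail validation can fall back to plain text instead of breaking the page, which keeps the failure mode predictable.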

## Why now?

Several trends make generative UI practical now.

The first is stronger structured output. Modern models are better at following
schemas, producing JSON-like structures, and respecting component contracts.
That makes it possible to generate interface descriptions that a renderer can
validate before showing them to users.

The second is streaming. Users do not want to wait for a full page to be
invented before anything appears. A generative UI renderer can progressively
hydrate pieces of the interface as the model produces them: a summary first,
then cards, then detailed rows, then optional actions.
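
A sketch of how that progressive rendering might work, assuming the model streams one JSON block per line and reusing the hypothetical `validateBlock` helper from the registry sketch above:

```ts
// Reuses validateBlock from the registry sketch above.
declare function validateBlock(block: { component: string; props: unknown }):
  | { ok: true; component: string; props: unknown }
  | { ok: false; error: string };

// Render blocks as they arrive instead of waiting for the full response.
// Assumes the model emits one JSON object per line (NDJSON).
async function hydrateStream(
  stream: AsyncIterable<string>,
  render: (component: string, props: unknown) => void,
) {
  let buffer = "";
  for await (const chunk of stream) {
    buffer += chunk;
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep any partial trailing line for the next chunk
    for (const line of lines) {
      if (!line.trim()) continue;
      let block: { component: string; props: unknown };
      try {
        block = JSON.parse(line);
      } catch {
        continue; // malformed line; a production renderer would log this
      }
      const result = validateBlock(block);
      if (result.ok) {
        render(result.component, result.props); // summary first, then cards, then details
      }
    }
  }
}
```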

The third is design-system maturity. Many teams already have component
libraries with buttons, forms, tables, badges, modals, charts, and layout
primitives. Generative UI can reuse those assets rather than bypass them. That
means product teams can keep brand consistency, accessibility rules, and
interaction patterns while making interfaces more adaptive.

The fourth is workflow pressure. Users increasingly expect software to
collapse research, decision, and action into one flow. A static dashboard may
show what happened. A generated interface can respond to what the user is
trying to do next.

## Where generative UI fits best

Generative UI is most valuable when the user’s request is specific, the data
is structured or semi-structured, and the next action benefits from visual
organization.

Good examples include:

- Data analysis summaries that need charts, filters, and anomaly cards.
- Internal operations tools where each case has different evidence and
required steps.
- Developer tools that can render diffs, test failures, dependency risks, and
suggested commands.
- Sales or customer success workflows that need account timelines, renewal
risk, and next-step forms.
- Finance and compliance tools that need decision traces, confidence, and
audit-friendly outputs.
- Education products that can create exercises, hints, progress checks, and
visual explanations.

Generative UI is less useful when a response is truly simple. If a user asks
for a one-line definition, a generated component tree adds unnecessary
overhead. The point is not to replace every sentence with a widget. The point
is to stop forcing complex work through a paragraph when an interface would be
clearer.

## The product design shift

The biggest change is how teams think about screens. Traditional product
design asks: “What pages do we need?” Generative UI asks: “What components and
actions should the system be allowed to assemble for a user’s intent?”

That changes the unit of design. Instead of designing every possible page
state manually, teams design a governed component vocabulary. They define what
a `RiskCard` can show, what fields an `ActionForm` accepts, which charts can
be rendered, what validation is required, and when human confirmation is
mandatory. The model can then compose those components dynamically.
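
Extending the registry sketch from earlier, one way to express that vocabulary is to attach policy metadata to each entry alongside its prop schema; the `RiskCard` and `ActionForm` fields below are invented for illustration:

```ts
import { z } from "zod";

// Each vocabulary entry carries policy metadata as well as a prop schema,
// so the renderer and the backend enforce the same rules.
const vocabulary = {
  RiskCard: {
    props: z.object({
      title: z.string(),
      riskLevel: z.enum(["low", "medium", "high"]),
      rationale: z.string(),
    }),
    requiresHumanConfirmation: false, // display only
  },
  ActionForm: {
    props: z.object({
      action: z.enum(["update-plan", "open-ticket", "request-review"]),
      target: z.string(),
      note: z.string().optional(),
    }),
    requiresHumanConfirmation: true, // the model drafts it, a person submits it
  },
} as const;
```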

This does not remove designers or engineers. It gives them a different control
surface. Designers define the visual grammar. Engineers define schemas, state
handling, permissions, and validation. Product leads define which intents map
to which workflows. The model fills the gap between rigid navigation and
user-specific context.

## Risks and guardrails

Generative UI also introduces real risks.

A model may choose the wrong component, omit an important warning, overstate
confidence, or generate a flow that looks actionable but lacks the right
business rules. That is why generative UI needs guardrails: schema validation,
component allowlists, server-side permission checks, deterministic fallbacks,
and clear separation between display and irreversible action.

The model should be able to suggest an action panel. It should not be able to
bypass authorization. It can draft a form. It should not submit a payment,
delete data, or change account settings without explicit product-level
controls. The generated interface must still respect the same security model
as hand-coded UI.
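
In practice that separation usually means the generated UI can only propose an action, while a conventional server endpoint decides whether it runs. A rough sketch, with `checkPermission` and `executeRefund` standing in for real product services:

```ts
// Stand-ins for real product services; assumed here for illustration.
declare function checkPermission(userId: string, scope: string): Promise<boolean>;
declare function executeRefund(caseId: string, amountCents: number): Promise<string>;

// The generated form posts a proposed action; authorization, validation,
// and confirmation are enforced here, never inside the generated UI.
interface ProposedAction {
  kind: "refund";
  caseId: string;
  amountCents: number;
  userConfirmed: boolean; // set by an explicit click, not by the model
}

async function handleProposedAction(userId: string, action: ProposedAction) {
  if (!(await checkPermission(userId, "refund:issue"))) {
    return { status: "denied" as const, reason: "missing permission" };
  }
  if (!action.userConfirmed) {
    return { status: "needs-confirmation" as const };
  }
  if (action.amountCents <= 0 || action.amountCents > 50_000) {
    return { status: "rejected" as const, reason: "amount outside policy" };
  }
  const receipt = await executeRefund(action.caseId, action.amountCents);
  return { status: "done" as const, receipt };
}
```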

Observability also matters. Teams need to log which component schemas were
generated, which validations failed, which actions users took, and where users
abandoned the generated flow. Without that, generative UI becomes impossible
to improve safely.
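
A minimal event log for that purpose might look like the sketch below; the event names and fields are assumptions rather than an established telemetry schema:

```ts
// Events worth recording for a generated flow. Keeping them structured
// makes it possible to ask later which schemas fail and where users stop.
type GenUiEvent =
  | { type: "blocks_generated"; requestId: string; components: string[] }
  | { type: "validation_failed"; requestId: string; component: string; error: string }
  | { type: "action_taken"; requestId: string; component: string; action: string }
  | { type: "flow_abandoned"; requestId: string; lastComponent: string };

function logGenUiEvent(event: GenUiEvent) {
  // Swap for a real telemetry sink; console is only a placeholder here.
  console.log(JSON.stringify({ at: new Date().toISOString(), ...event }));
}
```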

## Why text is still part of the answer

Generative UI does not make text obsolete. Text remains the best medium for
explanation, nuance, and narrative. The better pattern is hybrid: short
textual reasoning paired with structured components. A generated pricing
comparison might include a two-sentence recommendation, a comparison table, a
sensitivity slider, and an action checklist. Each part does what it is good
at.
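
As a small illustration, such a hybrid response could be a couple of sentences of prose plus a handful of governed blocks; the component names and numbers below are invented:

```ts
// A hybrid response: short prose, then structured blocks drawn from the
// approved vocabulary. All values are made up for illustration.
const pricingResponse = {
  text:
    "Plan B is cheaper over twelve months unless usage roughly doubles. " +
    "Switching at the next renewal avoids a mid-term migration fee.",
  blocks: [
    {
      component: "ComparisonTable",
      props: {
        columns: ["Metric", "Plan A", "Plan B"],
        rows: [
          ["Annual cost", "$14,400", "$11,900"],
          ["Support tier", "Standard", "Priority"],
        ],
      },
    },
    {
      component: "SensitivitySlider",
      props: { variable: "monthlyUsage", min: 0, max: 200_000 },
    },
    {
      component: "ActionChecklist",
      props: { items: ["Confirm renewal date", "Notify billing owner", "Schedule plan switch"] },
    },
  ],
};
```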

This is the practical future of AI interfaces: not chat versus UI, but chat as
one input mode into an adaptive interface layer.

## What teams should build first

A good first generative UI project is narrow and high-friction. Pick a
workflow where users already copy model output into another tool or manually
translate summaries into action. Define five to ten components that cover the
common cases. Add strict schemas. Render progressively. Keep irreversible
actions behind normal product confirmation. Measure whether users complete the
task faster and with fewer errors.

Do not start by asking a model to invent an entire application. Start by
letting it assemble a better answer for one painful workflow.

## Conclusion

Generative UI is the next step after chat because many user intents are not
naturally text-shaped. People ask for decisions, comparisons, plans, audits,
and actions. Text can explain those things, but it often cannot carry them all
the way to completion.

The opportunity is to let models produce structured, validated, interactive
interface states using the components teams already trust. Done well,
generative UI makes software feel less like a set of static pages and more
like a workspace that adapts to the job in front of the user.

Text output was the demo. Generative UI is where the work starts to happen.