The UI for AI agents is broken. No one is talking about it.
Everyone is obsessed with making AI agents autonomous, but almost no one is thinking about how humans will actually interact with them. My first love (and first job) in tech was UX engineering, and I’ve been obsessed with UI ever since. So please hear me out on what has become a personal pet peeve.
Right now, most AI agents are just glorified chatbots. But chat is a terrible UI for:
Complex workflows – No way to visualize dependencies, data, or progress
Edits & reversibility – Undoing AI mistakes via chat is clunky
Multi-agent collaboration – How do multiple AIs + humans work together?
Cognitive load – No one wants to scroll through endless chat logs
If agents are going to be doing real work, the UI needs to reflect that. So what comes next?
The missing UI for AI agents
🔹 Agent dashboards (observable autonomy)
Think: Command centers for AI decision-making.
Agents show their reasoning & confidence levels.
Users step in at key decision points without micromanaging.
Example: An AI assistant that schedules meetings but lets you adjust priorities visually (rough sketch below).
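Here is a rough TypeScript sketch of the kind of event an agent could stream to a dashboard like this. Every name in it (AgentStep, requiresApproval, the 0.8 threshold) is an illustrative assumption, not a real framework's API:

```typescript
// Hypothetical shape of the events an agent streams to a dashboard.
// All names and fields here are illustrative, not from any real framework.
interface AgentStep {
  id: string;
  action: string;            // what the agent intends to do
  reasoning: string;         // why it chose this action
  confidence: number;        // 0..1, the agent's self-reported confidence
  requiresApproval: boolean; // true at key decision points
}

// The dashboard only interrupts the user when confidence is low
// or the step is explicitly flagged as a decision point.
function needsHumanReview(step: AgentStep, threshold = 0.8): boolean {
  return step.requiresApproval || step.confidence < threshold;
}

// Example: a meeting-scheduling step the user can adjust visually.
const step: AgentStep = {
  id: "step-42",
  action: "Move Friday 1:1 to Thursday",
  reasoning: "Conflicts with the quarterly review; Thursday works for both attendees.",
  confidence: 0.72,
  requiresApproval: false,
};

console.log(needsHumanReview(step)); // true, so this step surfaces for review
```

The point is the contract, not the code: once steps carry reasoning and confidence, the UI can decide what to surface instead of dumping everything into chat.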
🔹 Workflow graphs (dynamic task maps)
Think: Mind maps where tasks, decisions, and AI actions are editable.
Multi-agent collaboration is spatially represented, not buried in chat.
Users modify workflows visually instead of typing commands.
Example: AI planning a marketing campaign as an editable graph instead of a text list. A sketch of the underlying data model follows.
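As a rough sketch (all types and helpers here are hypothetical), an editable workflow graph is just nodes with owners and statuses plus dependency edges, and a visual edit is a mutation of that structure rather than a prompt the user has to phrase:

```typescript
// Hypothetical data model for an editable workflow graph.
type NodeStatus = "pending" | "running" | "done" | "blocked";

interface TaskNode {
  id: string;
  label: string;
  owner: "human" | "agent"; // multi-agent collaboration is explicit, not buried in chat
  status: NodeStatus;
}

interface WorkflowGraph {
  nodes: Map<string, TaskNode>;
  edges: Array<[from: string, to: string]>; // "from" must finish before "to" starts
}

// Dragging an edge in the UI becomes a graph mutation, not a typed command.
function addDependency(g: WorkflowGraph, from: string, to: string): void {
  if (!g.nodes.has(from) || !g.nodes.has(to)) throw new Error("unknown node");
  g.edges.push([from, to]);
}

// Example: a marketing campaign as a graph instead of a text list.
const campaign: WorkflowGraph = {
  nodes: new Map<string, TaskNode>([
    ["brief", { id: "brief", label: "Write brief", owner: "human", status: "done" }],
    ["copy", { id: "copy", label: "Draft ad copy", owner: "agent", status: "running" }],
    ["launch", { id: "launch", label: "Launch campaign", owner: "human", status: "pending" }],
  ]),
  edges: [["brief", "copy"]],
};
addDependency(campaign, "copy", "launch");
```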
🔹 Editable AI notebooks (actionable memory)
Think: AI that updates a structured document, not just chat logs.
AI continuously refines a knowledge base that users can edit directly.
Workflows become persistent & referenceable (not ephemeral chat).
Example: AI investment research agents that build due diligence reports as living documents (sketch below).
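One way such a living document could reconcile agent and user edits, sketched in TypeScript. The Section shape and the rule that users overwrite agents (but not the reverse) are assumptions for illustration, not a claim about how any existing product works:

```typescript
// Hypothetical model of a document the agent and the user both edit.
interface Section {
  heading: string;
  body: string;
  lastEditedBy: "agent" | "user";
}

type Report = Section[];

// Assumed merge rule: an agent may refine its own text, but never
// silently overwrite something the user wrote by hand.
function upsertSection(
  report: Report,
  heading: string,
  body: string,
  author: "agent" | "user"
): Report {
  const existing = report.find((s) => s.heading === heading);
  if (!existing) {
    report.push({ heading, body, lastEditedBy: author });
  } else if (author === "user" || existing.lastEditedBy === "agent") {
    existing.body = body;
    existing.lastEditedBy = author;
  }
  return report;
}

// Example: a due diligence report that persists across sessions.
let report: Report = [];
report = upsertSection(report, "Market size", "TAM estimate: ...", "agent");
report = upsertSection(report, "Market size", "Corrected per the 2024 filing: ...", "user");
```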
🔹 Agent multimodal UIs (beyond text)
Think: Drag-and-drop agents, voice commands, spatial interfaces.
Instead of typing prompts, users interact with AI visually & physically.
Example: A robotics control UI where human operators guide AI via gestures. Initialized portfolio company Medivis has some UI controls like this. A sketch of the input model follows.
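As a sketch, multimodal interaction can be modeled as one normalized input stream; the event shapes below are made up for illustration and say nothing about how Medivis actually implements this:

```typescript
// Hypothetical union of multimodal inputs, normalized into one stream.
type AgentInput =
  | { kind: "voice"; transcript: string }
  | { kind: "gesture"; name: "point" | "grab" | "swipe"; target: string }
  | { kind: "drag"; item: string; to: string };

// Every modality reduces to a command the agent can act on.
function toCommand(input: AgentInput): string {
  switch (input.kind) {
    case "voice":
      return input.transcript;
    case "gesture":
      return `${input.name} ${input.target}`;
    case "drag":
      return `move ${input.item} to ${input.to}`;
  }
}

console.log(toCommand({ kind: "gesture", name: "point", target: "scan-layer-3" }));
console.log(toCommand({ kind: "drag", item: "report.pdf", to: "review-queue" }));
```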
The big idea: agents need UI, not just prompts
Right now, the AI world is stuck in the “single-threaded chatbot” phase. But the real opportunity? Building the UI that lets humans and AI work together at scale.
This means:
✅ Observability – Seeing inputs, variables, and AI reasoning
✅ Data visibility – Understanding what’s happening inside agent workflows
✅ Multi-threading – Allowing multiple AI processes to run in parallel (sketched after this list)
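To make those three concrete together, here is a toy sketch of parallel agent processes reporting observable state to a single UI. runAgent and the task names are placeholders, not a real API:

```typescript
// Toy model: several agent processes run in parallel, and each one
// reports its state to the same observer (the UI).
type AgentState = { task: string; status: "running" | "done"; log: string[] };

async function runAgent(task: string, observe: (s: AgentState) => void): Promise<void> {
  const state: AgentState = { task, status: "running", log: [] };
  observe(state); // observability: the UI sees the state as it changes
  // Stand-in for real work (model calls, tool use); here, a short delay.
  await new Promise((resolve) => setTimeout(resolve, 10));
  state.log.push(`finished ${task}`);
  state.status = "done";
  observe(state);
}

async function main() {
  const render = (s: AgentState) => console.log(`[${s.task}] ${s.status}`);
  // Multi-threading: three agent processes in flight at once, one UI.
  await Promise.all(["research", "draft", "schedule"].map((t) => runAgent(t, render)));
}

main();
```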
Who’s working on this? Almost no one.
But whoever cracks the UI for agentic software will define the next era of computing.
What do you think? Which of these UI approaches is most interesting to you?