Essay · 8 March 2026

In which the interface no longer matters, perhaps.

The Hollowing of the Interface

The Interface Was the Business Model

In most companies, the first ten minutes of the day look almost identical. Someone opens the CRM, refreshes the forecast, scans Slack for temperature. The quarter is where it was left the night before. Same with the pipeline. The inbox contains nothing that couldn't have waited. This takes somewhere between eight and twenty minutes and accomplishes very little. It happens anyway, in a sequence no one has written down, because writing it down would make it look like what it is: a ritual. Then, apparently reassured, the day begins.

Lately, my work starts with a prompt. The response holds together well enough, and I move on. "Enough" is a flexible standard. I used to verify. Now I supervise.

Those first-ten-minute tabs are still open, but they're no longer the first stop.

Seat-based pricing worked because labor was visible. If someone had a login, they were doing work inside the product. Work left a trace: clicks, edits, approvals, exports. More seats meant more activity routed through the system. More activity meant deeper dependency.

The screen was more than packaging. It was the meter.

In the 2010s, design became strategic. Screenshots became marketing. UX polish became how tools differentiated. Companies hired designers not for ornamentation but because the interface was where customers felt competence, clarity, and trust.

Underneath, of course, there was infrastructure: databases, APIs, permission models, audit logs, automated workflows. Those layers mattered, but they were largely invisible to the users of the software. The interface organized almost everything between the human and the system.

Even the word "application" assumes a human subject. You apply yourself to something, and the software responds.

Work happened in the interface, and the interface was both where and how work was counted.

Which holds only if a human is still clicking.

Demotion of the Screen

For decades, enterprise software assumed a visible actor. A user ID. A click. A timestamp tied to a person.

Entire internal processes were built around that visibility. Audit trails assume named users. Approvals assume an observable user action. Compliance assumes a screen someone could screenshot and attach to a memo.

A headless world doesn't just demote UI. It unsettles that entire model of accountability.

Large language models don't need a dashboard or a navigation menu. They interpret natural language, call APIs, traverse structured data, and return what's needed or close enough. Increasingly, they do this across multiple systems at once.

What once required a person clicking through three applications can now be expressed as a single instruction. The system determines which endpoints to call, reconciles formats, assembles the output, and may execute the next step without requiring a visible confirmation at each layer.
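
A rough sketch of what that single instruction expands into. Every service, endpoint, and field name below is invented; what matters is that the reconciliation happens in code rather than in someone's head:

```python
# Hypothetical sketch: three screens' worth of checking collapsed into one call.
# None of these services, endpoints, or field names are real; the shape is the point.
import requests

def morning_check(account_id: str) -> dict:
    """Roughly what the first ten minutes used to cover: forecast, pipeline, risk."""
    crm = requests.get(
        f"https://crm.example.com/api/accounts/{account_id}"
    ).json()
    billing = requests.get(
        "https://billing.example.com/api/invoices",
        params={"account": account_id},
    ).json()
    support = requests.get(
        "https://support.example.com/api/tickets",
        params={"account": account_id, "status": "open"},
    ).json()

    # Reconciling formats: the agent absorbs the schema differences
    # a human once absorbed by switching tabs.
    overdue = [i for i in billing["invoices"] if i["days_overdue"] > 0]
    return {
        "account": crm["name"],
        "forecast": crm["quarterly_forecast"],
        "overdue_invoices": len(overdue),
        "open_tickets": len(support["tickets"]),
        "needs_attention": bool(overdue) or len(support["tickets"]) > 3,
    }
```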

The human doesn't disappear, but they stop clicking. I notice this mostly in what I no longer do. The tab I used to open first. The number I scrolled to find. Those habits haven't disappeared so much as stopped being necessary. That's not the same thing.

The screen becomes supervisory: a place to review, debug, and override. To intervene when something breaks. It starts to look less like the primary surface of value and more like a control panel you glance at rather than inhabit.

If a machine can read documentation, interpret a schema, call endpoints, recover from failure, and assemble the result, the interface is no longer the gatekeeper of capability. It becomes the surface where automated capability is inspected after the fact.

Once the clicking stops, the economics change.

If an agent can orchestrate tasks across systems without a human navigating each screen, friction tied to layout and menu depth matters less. Switching costs rooted in retraining people may weaken, while costs rooted in data quality and API reliability grow sharper.

What holds starts to look different, too. Less polish, more architectural clarity: the coherence of the data model, the reliability of APIs, the integrity of permission boundaries, the ability to observe and control automated execution.

The interface doesn't vanish overnight. It simply stops being the primary site of value creation.

And that changes what gets paid for.

Rerouting the System

For years, "headless" mostly meant something technical and straightforward: no visual UI. The Internet of Things was the easiest example. You speak to a device. You never look at a screen.

That definition now feels incomplete.

Headless increasingly describes software whose primary consumer is another system. Humans remain in the loop, but they're no longer the main operator. Work is executed by agents calling APIs, reading schemas, assembling artifacts, updating records, triggering workflows.

Payroll processed without a payroll manager clicking through tabs. CRM fields updated as agents orchestrate multi-faceted outreach. Compliance memos drafted and formatted without anyone touching a template.

In that world, migration fatigue changes shape. There's no retraining hundreds of people on a new dashboard or onboarding teams to a new layout. There's the rerouting of systems.

The cost doesn't disappear. It moves. What was social and political becomes technical and architectural: reconciling data models, untangling schemas, rewriting integrations.

The moat of inertia weakens if agents can abstract away UI differences. But poorly structured data, inconsistent schemas, brittle APIs, unclear permissioning: these become strategic liabilities rather than inconveniences.

In a world centered on screens, bad architecture can be hidden behind good design. In a headless world, there's nowhere to hide it. Humans encounter a field no one remembers, feel irritation, and work around it. Agents don't feel irritation. They hit an error, retry, escalate, or halt. Sometimes back to the irritated human.

What matters is whether the underlying system can withstand coordination at scale.

A few companies have already made that bet, though the categories where it's worked cleanest have had advantages the rest won't inherit.

Identity Crisis

Companies are proud of their interfaces. They market screenshots the way car companies once marketed tailfins. They hire designers and demo flows. The application isn't just the product. It's the identity of the company.

To hollow out the interface can feel like surrender, like admitting the screen was never the point. Becoming infrastructure feels like becoming invisible. And invisibility is difficult to sell.

But infrastructure is what persists.

No one markets their permission model, their reconciliation logic, their uptime guarantees. Infrastructure doesn't have onboarding flows. It has failure modes.

It doesn't need to delight anyone. It needs to function, quietly and reliably, even when no one is watching.

The uncomfortable question is whether today's interface-first companies have built the kind of architecture this shift rewards.

If the primary consumer of software becomes another system, durability won't be measured by interface polish. It will be measured by architectural coherence: whether the system holds under automation, whether the APIs it depends on fracture or absorb pressure.

Some companies will resist the shift because it unsettles how they understand themselves. Others will begin optimizing for coordination rather than interaction.

Interfaces carry work, and they shape how we think about it. Moving cards on a board, scanning a pipeline view, comparing dashboards side by side, those acts help humans think. A headless system that executes flawlessly still has to answer a quieter question: where does human judgment rehearse itself?

If the screen stops being the primary site of execution, it may become the primary site of reflection. I still open the dashboards when something feels off, not to act, but to inspect. The prompt window gives me an answer but not the sensation of having checked. That's different from the feeling of having seen the data myself, and I'm not sure what to make of the difference, or whether it matters, or whether that uncertainty is just the cost of moving faster.

What's changed is where the work happens. The prompt gives me the quarter, the temperature, the risk. It gives me the answer without the sequence. What I haven't figured out is whether I trusted the sequence because it was accurate, because it was where the work happened, or because it was mine. Those all used to be the same thing.

I'm not sure they ever were.

I'm also not sure this essay isn't just the forecast, refreshed one more time.

Footnotes

Supervision implies I'd know what to catch. That's a reasonable assumption when the work is familiar and the failure modes are legible. It gets shakier when the system assembling the answer is doing things I can't fully trace. I'm not reviewing work I understand and checking it for errors. I'm reviewing an output I didn't produce, for errors I might not recognize. Whether that's supervision or something closer to trust is a question I haven't fully resolved.

The word "interface" is worth sitting with for a moment. It assumed a middle layer, a place where two things met, and one of them was always a person with a name attached to the audit trail. The interface was the translation zone between human intention and machine execution. Someone decided what to do. The software helped. And someone could be called into a room if it went wrong.

That implied person is now optional. The word hasn't changed, but the assumption underneath it has. When an agent navigates a system, there's still an interface in the technical sense, but no one is really interfacing. The surface exists. The person who was supposed to stand at it has stepped back.

It's the kind of language drift that happens before anyone officially updates the vocabulary.

The word "application" carries the same weight. It assumes a human subject.

The first software I used that felt genuinely native to how I thought was Gmail. I remember when it stopped feeling like a tool and started feeling like a place. I had opinions about threading. I had a system for labels that made sense to no one but me. That feeling, software as a place you'd made your own, is probably the thing most at risk in this shift. If the primary software operator becomes some other system, the word "application" starts to feel misaligned and deeply antiquated. But so does the attachment.

This is where the headless argument runs into something sturdier than habit. In regulated industries (medicine, insurance, financial services), the screen isn't just a surface for human convenience. It's the legally required moment where a human witness is inserted before a consequential decision is finalized. The interface, in those contexts, is a liability structure as much as a product.

If an AI agent denies a health insurance claim via an API call and that denial is contested, "the system returned a successful response" is not a defense. Regulators frequently require demonstrable human review before a decision takes effect, which means someone has to see something before the action is logged as approved. The screen is where that seeing happens, officially and on the record.
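
The constraint is blunt when you write it down. A minimal sketch, with invented names, of what demonstrable human review means at the system level: the consequential action cannot take effect until a named reviewer is on the record.

```python
# Hypothetical sketch of a human-review gate. The names are invented;
# the constraint is the point: no named reviewer, no finalized decision.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ClaimDecision:
    claim_id: str
    outcome: str                    # e.g. "deny"
    reviewed_by: str | None = None  # the human witness, by name

def finalize(decision: ClaimDecision, audit_log: list) -> None:
    # "The system returned a successful response" is not a defense.
    if decision.reviewed_by is None:
        raise PermissionError(
            f"claim {decision.claim_id}: no human review on record"
        )
    audit_log.append({
        "claim": decision.claim_id,
        "outcome": decision.outcome,
        "reviewed_by": decision.reviewed_by,
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
    })
```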

That requirement doesn't bend easily to architectural preference. It bends when regulation changes, and regulation tends to follow catastrophic failures rather than efficiency arguments. So the pace of headless adoption is likely uneven across industries in ways that don't map cleanly to technical readiness. The companies most confident about the shift may be the ones operating furthest from where that wall stands.

Early LLM experiences oscillated between awe and skepticism, cycling from magic to clever parrot and back again depending on the week and what you were asking.

Today, we're realizing that what matters is not whether the output feels intelligent or seems plausible, but whether what's underneath that output can reliably operate across systems. The shift from "impressive answer" to "reliable operator" is the shift that demotes the screen.

There's a quieter implication here that makes me uneasy. If the interface stops being the primary meter of work, a lot of adjacent systems start measuring the wrong thing. Compensation plans tied to seat expansion. Product roadmaps optimized for feature discoverability. Customer success playbooks built around increasing login frequency.

Those structures don't disappear just because agents abstract the interface. They continue operating, rewarding the behaviors they were designed to reward. If the locus of value shifts underneath them, they may begin optimizing for a surface that no longer carries the weight it once did.

That doesn't mean seat-based software collapses. It means the feedback loops get distorted before anyone admits they've changed. Organizations rarely notice when their meters stop measuring what matters.

I've made some version of this argument to colleagues and occasionally felt, mid-sentence, less certain than I sounded. The logic holds. The direction seems right. I'm just not sure how fast, or how evenly, or whether the companies I'm most confident about have actually done the architectural work the argument assumes. It's possible to be correct about a shift and still wrong about the timeline and the casualties. I try to remember that.

"Headless" started as a term for decoupling the presentation layer from the back-end. A headless CMS stored content in one place and served it to any surface via API: a website, a mobile app, a digital display. The front-end could be anything. That was already a meaningful departure, but it still assumed a human consumer somewhere at the end of the chain. Someone would eventually look at the content.

What's different now is that the assumed human at the end is also becoming optional. An agent querying a system for account risk data, or a workflow that pulls compliance records and generates a memo, doesn't have a person looking at an interface anywhere in the loop until after the work is done, if then. The back-end is serving another back-end. Headless no longer just means the presentation layer is flexible. It means the presentation layer may not exist in any meaningful sense.
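
A sketch of that loop with no one in it, all names invented: records pulled from one system, a memo assembled in transit, the result filed into another. Nothing is rendered for a person at any step.

```python
# Hypothetical sketch of back-end serving back-end. The "document" exists
# only as a payload between systems; a human sees it later, if ever.
import requests

def file_compliance_memo(case_id: str) -> str:
    records = requests.get(
        f"https://records.example.com/api/compliance/{case_id}"
    ).json()

    memo = "\n".join(
        f"- {r['control']}: {r['status']}" for r in records["items"]
    )

    filed = requests.post(
        "https://archive.example.com/api/memos",
        json={"case_id": case_id, "body": memo},
    ).json()
    return filed["memo_id"]  # the only trace a supervisor might inspect
```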

Headless migration doesn't remove cost. It moves it.

Cost that was once social and political becomes technical and architectural, which can be easier to ignore and harder to fix. A brittle API doesn't file a complaint. An inconsistent schema doesn't slow-walk its deliverables. The failure mode is usually invisible for months and then suddenly quite expensive.

I suspect this transition favors the companies that have developed the institutional vocabulary to see architectural risk before it compounds, which is a rarer capability than it sounds.

Eoghan McCabe published a detailed account of how Intercom rebuilt around Fin, their AI support agent, growing it into a significant revenue line while deliberately killing roughly $60M in existing product revenue to make room. Most companies in their position would have hedged, optimized for near-term revenue protection, and called it a strategy. Intercom didn't.

But customer support already had a clean, universally understood unit of value: ticket resolved. The category had been under structural automation pressure for years. Offshore outsourcing, scripted chatbots, self-service knowledge bases: buyers were already primed for fewer humans in the loop. LLMs didn't disrupt a stable category so much as accelerate one already in motion.

Not every SaaS category has that clarity. What is the resolved-ticket equivalent for a CRM, a project management tool, a data platform? The value is diffuse. The outcome is co-produced by humans and systems. Attribution is contested. Intercom earned their result. The question is whether the conditions that made their path legible exist elsewhere.

Companies that successfully made infrastructure transitions rarely did so by simply deciding to become invisible. They usually maintained the identity-facing layer long enough to fund the transition underneath it. This is what most pivot-or-die analysis misses.

Amazon is the clearest example. AWS grew out of infrastructure Amazon built to run its own retail operations; the external product came later. The identity of Amazon as a retailer subsidized the construction of what became its most profitable and durable business.

The uncomfortable truth for today's interface companies is that hollowing out the interface may be the wrong frame entirely. The question isn't simply when or whether to abandon the screen. It's whether the infrastructure being built underneath it is good enough to stand on its own when the screen inevitably recedes. Most companies haven't asked that question yet because the screen is still generating enough revenue that it doesn't feel urgent.

That's how water boils in companies: slowly enough that it feels warm, right up until it isn't.

All of the above assumes humans remain meaningfully in the loop. The supervision model, the reflection model, the "screen as site of judgment" framing: each rests on the assumption that there's a stable role for human input somewhere in the chain.

It's worth asking what happens if that assumption is directionally wrong. Not catastrophic. Just gradual. Systems get reliable enough that intervention becomes rare. Rare enough that the supervisory layer feels ornamental. It's a little like autopilot getting good enough that you stop gripping the wheel, then realize you haven't touched it in twenty minutes.

The human doesn't disappear in one moment. They keep relocating until "in the loop" stops meaning anything.

If labor input approaches zero, pricing logic changes entirely. You're no longer selling access to a tool. You're selling autonomous capability consumed the way organizations consume electricity: continuously, invisibly, without anyone looking at a screen.

That's a different kind of company than anything this essay describes. I'm not sure what it looks like. I'm not sure anyone does.

I ask this as someone who is also, quietly, working out the answer. The dashboards I still open when something feels off might be judgment rehearsing itself. Or they might just be habit that hasn't yet received the message that it's no longer necessary. I genuinely don't know which. The fact that I can't tell might itself be worth something.

Design tools are an interesting case to hold alongside this argument. Figma and Canva have navigated the AI transition with notably less turbulence than most enterprise software categories, and I've been trying to work out why.

My best guess is that the interface was never the meter in the first place. A designer using Figma isn't operating software to produce an output the software could produce without them. The human judgment might just be the product. Canva extended something similar further down the expertise curve, pulling creation toward people who would previously have outsourced it entirely. When AI gets added to either, it seems to extend the human's reach rather than replace the loop they occupied.

If that's right, it points to a category this essay doesn't fully account for: tools where the human in the loop isn't a cost to be optimized away but the reason the thing exists at all. For those, the question of where human judgment rehearses itself has a clearer answer. Though I'd be cautious about mistaking category durability for immunity. The interface persisting doesn't mean the business model is safe. It might just mean the disruption arrives from a different direction than the one everyone's watching.


