In which everyone keeps asking who dies, which is the wrong question
The Drawbridge Theory of Software Survival
The dropdown has seventeen options and one that says “Other (see Bob).” Bob left before the pandemic. Nobody removed it because nobody was sure what would break.
This is what inherited software looks like from the inside: load-bearing in mysterious ways.
A field called “Legacy_Flag_2” that sixteen reports depend on and nobody remembers creating. A permission group called “TEMP_FINANCE_ACCESS” that has been temporary since 2017. Two integrations: “Salesforce_Bridge_OLD” and “Salesforce_Bridge_NEW,” both active somehow?
Everyone keeps asking which enterprise software companies AI will kill. The answers usually arrive as tidy matrices and neatly labeled quadrants, paired with confident projections about switching costs. What’s hard to plot or graph is the accumulated human cost. The exhaustion at the mere idea of replacing working software is hard to place on a two-by-two.
Not software that “works well,” either. Just “works.” Processes payroll without incident. Generates audit logs that satisfy compliance needs. Pushes notifications to the right places in the right order because someone spent three weeks in 2021 configuring it that way and documented almost none of it. The software running quietly in the background of a large organization isn’t running quietly because it’s excellent. It’s running quietly because every person who might have replaced it did some loose math, felt the weight of what that project would actually cost in human time and political capital and calendar space and emotional bandwidth, and decided to do something else instead.
It’s a rational calculation by people who know that the demo is never the migration.
Most of the public debate about AI and enterprise software doesn’t start with that reality. Some writers describe an agent-heavy future where design and coordination become the bottlenecks rather than code itself. That’s a useful lens, but it still doesn't explain what happens inside organizations once those capabilities exist and someone has to decide what to do with them.
Dan Hockenmaier's recent essay "The Software Shakeout" is in my opinion the best version of the "who's going to die?" genre. His framework is clean: switching costs on one axis, compounding value on the other, companies sorted into durable, eroding slowly, and eroding quickly. His examples are concrete. His point about CrowdStrike processing 100 billion security events daily is the kind of detail that makes you nod your head. I nodded mine. He is correct that some moats are more defensible than others, and also correct that the competitive pressure coming from every direction is structural rather than cyclical.
But in most of this analysis, switching costs are treated as primarily technical or contractual. They’re rarely understood as metabolic, as something that consumes energy across an organization long after the contract is signed.
In most companies, that energy is already being spent on one thing: maintenance. The frameworks talk about data portability, ecosystem lock-in, and per-seat pricing pressure. What they rarely account for is the accumulated operational burden those systems create. It’s distributed, largely unwritten, politically embedded, and accumulating by the day. Any software system older than three years has deposited that burden across an organization like sediment. You don’t just replace the software. You replace the permission models and the audit trails and the internal documentation and the tribal knowledge about why certain fields exist and the integrations two contractors built in 2017 and the reporting logic the CFO specifically requested and the onboarding flows HR rebuilt twice and the bots that depend on some poorly documented API and the one engineer who understands how most of it connects.
Security reviews don’t scaffold themselves. Neither do access controls, data residency clauses, compliance artifacts, change management plans, or the forty-seven browser-based approvals required to spin up a new vendor in a regulated industry.
Increasingly, the code is the easy part.
How Castles Fall (Rarely Through the Front Gate)
If the maintenance burden buys incumbents time, the real question is how that time runs out.
The frontal assault, where a well-funded AI-native startup builds a demonstrably superior product and enterprises switch en masse, is the least likely failure mode and the one that receives almost all the attention. It’s the version everyone models. It’s also the version that looks better on a slide. Decisive overthrow plays better than incremental erosion.
Real castles rarely fall to brute force at the front gate. They fall when the defenders misunderstand the threat and miscalculate what comes next.
Gates aren't always forced open. Sometimes the garrison walks out. The engineers who understand the data schema, who know which integrations are architecturally critical and which are vestigial, who carry fifteen years of institutional knowledge about why certain decisions were made, are all being recruited aggressively right now. Many of them leave for entirely legitimate reasons: better pay, more interesting problems, less bureaucracy, the chance to build something new rather than maintain something old. What walks out with them is the map of the castle, not just the labor.
The code stays behind. The knowledge of how it actually works doesn't.
Slow siege is another mechanism. Existing customers stay because migration is painful and because the people who would manage it have other priorities. New customers, facing a real choice between an incumbent built for 2012 and an AI-native alternative built for now, make different decisions. Net revenue retention drifts down. Logo growth softens. The castle walls are intact and the base isn’t visibly leaving wholesale. By the time the compounding becomes unmistakable in the financials, the window for a decisive or dramatic response has likely closed.
Then there's fatigue miscalculation. Some incumbents overestimate how tired customers are of switching and close off so aggressively they trigger departures. Others make the opposite mistake. They underestimate how tired their own people have become, worn down by bureaucracy and rough edges and ignored requests, and assume inertia will persist indefinitely.
Energy inside organizations is finite. It shows up as a shared drive folder nobody has opened in two years but everyone is afraid to delete. Or: a weekly status meeting that exists because canceling it would require a conversation nobody wants to have. Or: an internal wiki last updated three years ago that answers your question, if you were asking three years ago. Replacing a system costs political capital, management attention, and the patience to ask hundreds of people to change how they work. The audible groan in a team meeting when someone announces another new tool isn't trivial. It's the sound of people who've already absorbed enough change for one quarter.
But living with a system that no longer fits has its own cost. It shows up as small workarounds, duplicated effort, quiet frustration. Over time, that drag accumulates. When enough of it piles up, the balance shifts faster than anyone expected.
Finally, castles fall when the relief force that was supposed to arrive just doesn’t. Every siege assumes reinforcements are on the way. For incumbents, the relief force is the AI roadmap, the belief that the enterprise product team can move fast enough to compete with AI-native startups building from zero. The companies that execute are probably those that already had strong product culture, relatively low technical debt, and leadership that recognized the threat early rather than bolting features onto aging architecture and issuing a press release. A few will actually do it. The question is whether the product culture and architectural headroom required were built before the pressure arrived, because they're nearly impossible to build under it.
Castles rarely fall because attackers breached the front gate. They fall because defenders misjudged supply lines, morale, and time.
What the Siege Looks Like From Inside the Castle
The openness of the SaaS era made strategic sense when code was expensive. Platforms made themselves permeable because permeability aggregated developer energy. APIs were less a gesture of generosity than a mechanism for distribution.
When AI reduces the cost of building new software, the incentives behind openness shift. For an incumbent under pressure, selective closure starts to look rational: repriced APIs, narrower export capabilities, contract language that redistributes liability, integrations that make adjacent systems less substitutable, sales teams renegotiating multi-year agreements while leverage still exists.
That’s what raising the drawbridge looks like.
Epic has been doing exactly this in healthcare for years, criticized for it by everyone from independent physicians to the ONC to technology journalists who find it villainous. It has historically charged for interoperability between its own installations. Yet patients fill out the same intake form every visit, at every provider, on the same platform, sometimes by hand, in 2026. Its market position hasn't been meaningfully affected.
The switching cost in hospital electronic health record (EHR) systems is so catastrophic that customers can absorb a great deal before departure becomes rational. The interesting question is whether this works the same way for companies whose customers have more options.
The drawbridge doesn’t only block attackers. It blocks the organizational energy required to even attempt departure. It buys time by making the act of leaving feel like exactly the kind of project nobody wants to run this quarter.
This strategy requires at least two conditions. Customers must perceive no alternative worth the labor cost of switching. And the incumbent must read its own leverage honestly rather than believing its own story.
The Offensive Data Argument, Examined Honestly
The optimistic argument from incumbents is about data. Workday holds years of compensation history. ADP processes payroll for a significant portion of the U.S. workforce. The promise is that this accumulated data creates advantages a startup can't easily reproduce from scratch.
The honest question is whether incumbents can actually use what they have. That part is less clear. CrowdStrike's security event stream is probably genuinely usable: high volume, consistently structured, labeled by outcomes. Workday's compensation history is likely murkier. Salesforce's CRM records look powerful on a slide, but in practice they're often a dense stack of custom fields, inconsistent definitions, and fields called things like 'Account_Type_FINAL_v3' that nobody can explain but everyone is afraid to delete. The problem usually isn’t a lack of data but deciding which parts of it actually matter to the machines meant to do something with it.
"Has a lot of data" isn't the same as "has clean, model-ready data." From the outside, we're mostly guessing which one any given incumbent actually has.
And even when the data is technically there, getting anything useful out of it becomes its own project. Five years of inconsistent ERP entries have to be reconciled. v4, v5, and v5_REVISED circulate at the same time. The “required” fields turn out not to have been required at all.
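The gap between "has data" and "has model-ready data" is easy to sketch. The snippet below is a toy illustration, not anyone's real schema: three generations of an "account type" field, each with its own informal vocabulary, and a reconciliation rule (newest field wins) that is itself a judgment call someone has to make and defend.

```python
# Hypothetical illustration: three "account type" fields that accumulated
# over years of CRM customization. All field names and values are invented.
records = [
    {"Account_Type": "Enterprise", "Account_Type_v2": None, "Account_Type_FINAL_v3": None},
    {"Account_Type": None, "Account_Type_v2": "ENT", "Account_Type_FINAL_v3": None},
    {"Account_Type": None, "Account_Type_v2": None, "Account_Type_FINAL_v3": "enterprise "},
    {"Account_Type": "SMB", "Account_Type_v2": "smb", "Account_Type_FINAL_v3": "Mid-Market"},
]

# Which field wins when they disagree? Here the newest one does --
# an assumption, not a law, and exactly the kind of call nobody documented.
PRIORITY = ["Account_Type_FINAL_v3", "Account_Type_v2", "Account_Type"]
CANON = {
    "ent": "enterprise",
    "enterprise": "enterprise",
    "smb": "smb",
    "mid-market": "mid-market",
}

def resolve(record):
    """Pick the highest-priority populated field and normalize its value."""
    for field in PRIORITY:
        raw = record.get(field)
        if raw:
            return CANON.get(raw.strip().lower(), "UNKNOWN")
    return "UNKNOWN"

resolved = [resolve(r) for r in records]
print(resolved)
# Note the last row: the older "SMB" value is silently overridden by the
# newer field. Whether that's correct is a business question, not a code one.
```

Four rows and one rule already raise a question no model can answer on its own. Multiply by a few hundred custom fields and a decade of turnover, and "we have the data" starts to mean something much weaker.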
The data moat isn't separate from the maintenance burden. In most companies, it's the same sediment under a different name.
The Microsoft Template, Briefly
When people talk about incumbents surviving platform shifts, Microsoft is the example they usually reach for. The story sounds tidy in hindsight. It wasn't.
It navigated the internet era through strategies that led to antitrust proceedings. It missed mobile, or nearly did. For a real stretch, it looked like a company whose main products were the blue screen of death and material for other people's jokes.
Most companies don't survive that long looking that bad. Most leadership teams don't enjoy the years it takes to find out if they will. Survival took capital, timing, and a tolerance for looking out of step for years.
But perhaps most interesting is the Azure pivot. Microsoft didn’t defend Windows as the center of gravity. It shifted value down into infrastructure. Windows and Office continued to exist, but they stopped being the strategic core. The compliance trust, enterprise contracts, and long-term relationships accumulated around Azure instead. In doing so, Microsoft effectively became the trebuchet before anyone else could aim it at them.
Surviving isn’t the same as winning. And surviving for a while isn’t the same as thriving. Platform shifts play out over operational time, not the compressed arc we prefer in retrospect.
What the Shakeout Actually Looks Like
If fatigue and maintenance really matter, the shakeout won’t look like a graveyard filling up overnight. It'll look like compression: fewer dominant platforms, more selectively closed ecosystems, a long tail of AI-native tools, and perhaps depressed multiples that persist longer than markets expect.
The fatigue moat holds only until AI-native vendors can absorb not just functionality, but compliance overhead, institutional trust, and the relationships incumbents have spent years building.
The trebuchet against the drawbridge isn't better UX. It may be an LLM-assisted migration. The moment AI can reliably read your legacy documentation, infer integration maps, reconstruct undocumented permission models, and auto-generate the scaffolding that currently takes months of human coordination, the metabolic cost of switching drops sharply. When that happens, fatigue stops protecting incumbents and starts entrenching whoever successfully absorbs the migration burden.
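What "infer integration maps" might mean can be gestured at with a toy sketch. Everything below is invented for illustration: the system names, the config fragments, and the crude parsing rule. A real LLM-assisted tool would do far fuzzier inference across thousands of files; the point is only that the first draft of the dependency map, which today lives in someone's head, becomes a generatable artifact.

```python
import re
from collections import defaultdict

# Invented config fragments standing in for a sprawl of real config files.
configs = {
    "payroll":      "export_target=ledger; notify=slack_bridge",
    "ledger":       "source=payroll; report_sink=bi_tool",
    "bi_tool":      "reads=ledger",
    "slack_bridge": "",
}

def infer_edges(configs):
    """Naively infer 'A depends on B' edges: any known system name
    mentioned in A's config is treated as a dependency of A."""
    names = set(configs)
    graph = defaultdict(set)
    for system, text in configs.items():
        for token in re.split(r"[=;\s]+", text):
            if token in names and token != system:
                graph[system].add(token)
    return graph

graph = infer_edges(configs)
# A first-draft integration map -- the scaffolding a migration team
# currently spends weeks reconstructing by interviewing people.
for system in sorted(graph):
    print(system, "->", sorted(graph[system]))
```

Even this crude pass surfaces the shape of the problem: `payroll` feeds two systems, `ledger` sits in the middle of everything, and `slack_bridge` is a leaf that might be vestigial. The expensive part was never the graph traversal; it was getting the edges written down at all.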
Knowing which outcome dominates requires levels of detail that frameworks are designed to abstract away.
My honest view is that historically, the hardest thing in software wasn’t writing code. It was building everything around the code: the compliance frameworks, the organizational embedding, the distributed trust real buyers depend on.
Whether those structures are enough to win a prolonged siege is uncertain.
The thing about castles is that people eventually stopped living in them. Some attackers won, some didn't, but the logic that made the walls worth defending disappeared, and everyone quietly moved somewhere else. The companies that don’t navigate this well may not disappear cleanly. They may become quieter, smaller versions of themselves, persisting on inertia while their castle gradually empties from within and someone storms the postern gate.
It’s possible the castle metaphor is already obsolete, and we just haven't admitted it yet. The more interesting incumbents won't defend the walls at all. They'll abandon them, carrying their data, customer relationships, and compliance infrastructure into whatever configuration makes sense next. The drawbridge is what most companies know how to build. Reinvention requires admitting the walls no longer matter.
AI improves week by week. Human institutions change over years. The gap between those speeds may matter more than any feature comparison. The winners may not look like defenders or attackers, but like whoever reorganizes fastest once the walls stop defining the terrain.
Footnotes
The "SaaSpocalypse" talk and the "AI Scare Trade" framing say more about how markets price uncertainty than about how large companies actually behave. Multiples can compress overnight. Institutions rarely change that fast.
The category call is easier than the company call. Whether enterprise SaaS as a model faces structural pressure is a reasonable bet. Which specific companies survive that pressure depends on variables outsiders can't price: how much organizational memory lives in people versus systems, how far customers sit from their own software decisions, whether leadership recognized the threat early enough to matter. So markets make the category call and wait.
A repricing of expectations is not the same thing as a structural collapse in how companies actually buy and use software. Those are different clocks. One runs on story. The other runs on systems and process.
We're all speculating about when, or whether, those clocks converge.
Most writing about AI and organizations conflates two questions that deserve separate treatment: What can AI actually do? And what do organizations do with those capabilities once they exist? They're related but they run on different timelines and often have different answers.
For most of the modern software era, engineering talent has been the biggest constraint on product velocity. Tools that can take a short written request and return working code challenge that constraint directly. Steve Yegge, Maggie Appleton, and Matt Shumer are all describing, in different registers, a world where that threshold has already been crossed and institutions that don't engage will be blindsided.
I find their arguments genuinely compelling. This essay is about the second question, which is slower and messier and gets abstracted away right when it starts to matter most.
This isn’t a hypothetical. In every enterprise migration plan I’ve ever seen, there’s always a section labeled something like "Dependencies and Integration Mapping" that begins confidently and ends with a list of items marked TBD that are still TBD at go-live.
The sediment isn’t the result of laziness. It’s the residue of real work done under real constraints by people who were usually trying to do the right thing. Behind every undocumented system is someone who wanted to document it and ran out of time, budget, or organizational patience. Behind every groan in a team meeting is someone who tried to fix the problem, got a migration approved, and couldn’t make it stick.
The incumbent benefits from all of it. Nobody planned it that way.
The frontal assault even has a cousin: the internal build. Not a new entrant displacing an incumbent, but an organization deciding to stop buying altogether. The build-versus-buy debate is as old as enterprise software itself, and AI is reopening it.
The build option doesn’t dissolve the maintenance problem. It relocates it, usually somewhere with less visibility and fewer people paid to care about it.
If generating functional software gets cheap enough, why buy at all? Some organizations are already asking this. A few are even acting on it. But the custom tool someone vibe-coded in a weekend becomes the thing nobody fully understands two years later. The engineer who built it leaves. The documentation is thin. The integrations turn out to matter in ways nobody mapped. You’ve traded vendor lock-in for internal lock-in, and internal lock-in is often worse because there’s no support contract and no one to call.
The standardization pressure inside a large organization compounds this. A 10,000-person company doesn’t let individual teams choose their own tools. Governance moves slowly and conservatively by design. The idea that Amazon would let each team independently build its own Workday replacement is almost comically at odds with how large organizations actually make software decisions.
The companies that survive platform shifts tend to share one of two things: a genuine capacity for clear-eyed honesty about what customers actually need versus what the company has historically provided, or a distribution advantage large enough to give them room to develop that honesty before the market forces the question.
Neither is common. Organizations are collections of people with careers, incentives, and motivated reasoning, operating inside cultures that have spent years telling themselves stories about their own importance.
In 2015, I would have predicted that Microsoft would fade into irrelevance. Here we are.
The Office of the National Coordinator for Health Information Technology. I used to be very interested in electronic medical records. Then I lost interest, in part thanks to Epic.
There are two economic stories about AI and enterprise software that usually get told as one. The first is margin expansion: AI makes existing software more productive, the same seat count does more, fewer humans are required to operate the system. That's probably already showing up in some earnings calls and it's a reasonable near-term bet for incumbents.
The second is category displacement. Agents doing work don't buy seats. They consume compute and tokens. If that transition takes hold, the competitive question stops being incumbent versus AI-native startup and becomes something stranger: who captures value when the pricing model itself changes? The software layer doesn't disappear, but the entity collecting the rent might.
The first story is bullish for incumbents. The second is a different siege entirely. They're often discussed as the same story, which makes it hard to see where one ends and the other begins.
"AI-native" is doing a lot of work in this essay and all the others out there. It's worth being honest about what it does and doesn't mean.
At the architectural level it means something real: built with AI at the core rather than bolted on later. Clean schemas and coordination-first design may age better than retrofitting intelligence onto legacy workflows. That difference matters.
What it doesn't mean is operating outside enterprise constraints. AI-native companies still sell seats, undergo security reviews, negotiate with procurement, and need reference customers and internal champions. The terrain they're entering is the same terrain incumbents learned to survive on.
Meanwhile incumbents aren't inert. They're folding AI into existing distribution, trust, and data reservoirs they've spent years building. If they can simplify their own architecture fast enough, they can build the trebuchet themselves.
The outcome probably depends less on the label and more on who absorbs complexity best. That could go either way.
There’s another possibility here that’s harder to see. The most durable software in an AI-saturated environment might not be the tools with the best screens but those that don’t rely on screens very much at all.
Systems that coordinate other systems. Humans supervising instead of clicking through forms.
If that shift takes hold, the advantage likely goes to whoever can treat their product less like an application and more like a coordination layer. That could be an AI-native startup. It could also be an incumbent willing to hollow out its own interface.
Which is a separate argument from the fatigue one, but it intersects with it. Migration still costs energy. Maintenance still matters. But if the interface stops being where most of the value lives, some of what incumbents treat as strategic may not be.
Jack Dorsey sent a note to Block in February 2026 announcing a reduction from over 10,000 people to just under 6,000, attributing it explicitly to "intelligence tools" enabling a fundamentally different way of building and running a company. That's either a CEO publicly reorganizing around what's coming rather than waiting for it to arrive, or a convenient narrative for a company whose post-pandemic bets (crypto, blockchain) didn't pan out. Possibly both.
The fatigue framework may underestimate leaders willing to move fast when they believe the alternative is worse. It may also underestimate how often AI becomes the explanation for decisions that had other causes.
The capability story is genuinely compelling, which is part of the problem. I use these systems constantly. The curve of what they can do is steep, and getting steeper. There are weeks where I feel the pull of the bigger narrative, where the organizational friction I've spent years watching starts to feel like it might just dissolve.
The capability story has plenty of writers. The institutional metabolism question is the one that gets abstracted away right when it starts to matter most.
It’s possible I focus on maintenance and fatigue because that’s the terrain I know best. I’ve spent years inside systems that move slowly and resist clean resets. From that vantage point, every siege looks gradual.
Someone building from scratch might look at the same incumbents and see brittleness rather than durability. They might be right.
When the future turns ambiguous, I reach for cleaner stories than the detail justifies. Clean narratives feel like control. Even a dire shakeout story is easier to hold than “it depends on a thousand things.”
The confidence I express here probably arrived before the detail that would fully justify it.
A postern gate is the small side entrance to a castle, usually guarded but not reinforced like the main walls. In software, the adjacent workflows are the postern gate: the scheduling tool, the reporting layer, the communication integration, the things that sit alongside the core product and seem too minor to defend seriously. An AI-native competitor doesn’t breach the main walls but gradually absorbs those side entrances until the traffic quietly reroutes. The core product becomes optional not because it was defeated but because organizational life stopped flowing through it. At that point someone runs the migration math, and for the first time it comes back reasonable.
That mechanism depends on the migration math eventually becoming survivable. In the most sediment-heavy organizations it may never become so. Tribal knowledge lives in Bob’s head, integration logic was never written down, and the original contractors are long gone. The burden isn’t really a switching cost so much as a permanent condition.
When the breach becomes obvious, it appears sudden, even though the conditions that made it possible were in place long before. Or they weren’t, and the castle simply empties.
| Published | 27 February 2026 |
|---|---|
| Reading time | 21 min |
| Tags | ai, automation |
I’d welcome your thoughts on this essay. Send me a note →
