In which the function refuses to fit on the org chart.
The Invisible Engine: Why BizOps Builds Belief, Not Just Plans
Borrowed Authority
BizOps can't force anyone to do anything. The gap between the influence the role carries and the authority it holds is where most of the interesting problems live.
Every model, every framework, every planning template, every definition you propose depends entirely on voluntary adoption. If a team opts out quietly, the whole thing collapses. And teams are very good at quiet opt-outs. They fill in your template with placeholder numbers and keep the real numbers in their own spreadsheets, their own heads. They attend your planning meeting and then hold a second meeting afterward to discuss what they truly think. They use your terminology in cross-functional settings and their own terminology everywhere else, which means the alignment you thought you'd achieved exists only in the rooms you're present for, which is a smaller subset of the rooms that matter than you'd like to believe.
What BizOps usually inherits are half-built systems. A QBR template that nobody fills in with conviction. An "operating cadence" that was announced with great enthusiasm in September and has been dragging through implementation since October, with the kickoff slides still referencing a fiscal year that ended two quarters ago. KPIs that were defined during the Series B and haven't been verified against anything resembling the current business since. The scaffolding looks solid from a distance. You can point to it in a board deck and say "we have a process." Then someone leans on it, and it buckles.
I once inherited a planning process that, on paper, looked comprehensive. There were templates for every department, a timeline that mapped the entire quarter, shared definitions for the twelve metrics that mattered most, and an executive sponsor who genuinely cared about the outcome. The problem was that the templates had been designed by someone who had left eighteen months earlier, and in the interim the company had shifted its business model, acquired a smaller company, reorganized two departments, and changed CROs. The templates still referenced product lines that no longer existed. The timeline assumed a board meeting schedule that had been moved. Three of the twelve metric definitions were subtly wrong in ways that only became visible when you tried to reconcile them across departments. Everything about the system looked functional. Almost nothing about it worked.
This is the environment BizOps operates in: systems built for a company that no longer exists, shaped by tradeoffs no one fully remembers making, inherited by someone who has to decide whether to repair what’s broken and perpetuate the misalignment, or rebuild and spend political capital they haven’t yet earned.
The Way In
The way in is usefulness. Always usefulness. Not elegant frameworks or impressive strategy decks or the kind of thinking-from-first-principles that looks great in a consulting deliverable and terrible in a real-world environment where people need an answer by Thursday.
The headcount plan that matches both the budget and what recruiting can actually fill. The forecast that explains why last quarter missed and, more importantly, why this quarter won't miss in the same way. The pricing model that handles most edge cases without requiring the CEO to arbitrate each one. The board deck where the appendix and the narrative slides actually tell the same story, which sounds like a low bar until you've seen how many board decks fail to clear it.
It's granular, frequently tedious work. You are not building strategy but debugging operations, and the debugging requires a tolerance for the kind of work that makes people's eyes glaze over in meetings, which is often part of why nobody else has done it.
That usefulness buys access. Suddenly you're in rooms where decisions are still liquid, where the outcome hasn't hardened yet. People start to bring problems to you early rather than late, which is the single most important shift in the lifecycle of BizOps, because early problems can be shaped and late problems can only be managed. A VP mentions in passing that they're thinking about restructuring their team, and they mention it to you before announcing it, which means you can model the downstream impact on three other teams before the announcement causes confusion. A product lead shares a concern about a dependency that hasn't made it to the roadmap yet, and because you know the dependency exists, you can flag it during planning rather than discovering it mid-quarter when it's too late to adjust.
But access alone is not enough. Access without trust can feel like observation rather than influence. And usefulness, if it doesn't mature into something deeper, remains transactional: the organization uses you when it needs you and routes around you when it doesn't.
Building Belief
Belief shows up in behavior. Not in words, not in meeting notes, not in the "great job on the planning template" Slack message that arrives and means nothing. Belief shows up when a sales manager updates the forecast unprompted because they know it drives the hiring plan and they want to make sure the hiring plan reflects reality. When Product includes cross-team dependencies in their roadmap because they've seen Engineering actually adjust when dependencies are surfaced early. When Finance stops maintaining a shadow model because the official one has been right, or at least usefully wrong, three quarters in a row.
I want to stay on that phrase, "usefully wrong," because I think it captures something essential about how operational systems earn trust. A forecast is not valuable because it's perfect. Perfection is a fantasy that the forecasting profession promotes and that nobody who has actually built a forecast believes in. A forecast is valuable because, when it misses, you can trace why. The assumptions are documented. The inputs are transparent. The miss itself becomes diagnostic rather than merely disappointing. An organization that can look at a forecast miss and say "we overestimated expansion revenue because we assumed a renewal rate that didn't account for the pricing change in Q2" has learned something. An organization that looks at the same miss and says "the forecast was off" has learned nothing, and will likely miss again in the same way, for the same reasons, next quarter.
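To make "usefully wrong" concrete, here's a minimal sketch of what a traceable forecast can look like. The names, rates, and helper function are all hypothetical, but the structural idea is real: assumptions live as named, documented inputs, so when actuals land, the miss can be attributed to a specific assumption instead of dissolving into "the forecast was off."

```python
# A minimal sketch of a traceable forecast. All figures and names are
# hypothetical; the point is that assumptions are documented inputs.
from dataclasses import dataclass

@dataclass
class Assumption:
    name: str
    value: float
    rationale: str  # why this number, agreed on when the forecast was built

def expansion_forecast(base_arr: float, renewal: Assumption,
                       expansion: Assumption) -> float:
    """Expansion revenue from base ARR, a renewal rate, and an expansion rate."""
    return base_arr * renewal.value * expansion.value

renewal = Assumption("renewal_rate", 0.92,
                     "Trailing 4-quarter average; does NOT reflect the Q2 pricing change")
expansion = Assumption("expansion_rate", 1.15,
                       "Historical upsell rate on renewed accounts")

forecast = expansion_forecast(10_000_000, renewal, expansion)

# When actuals land, the miss is attributable to a specific documented input
# rather than to the forecast as an undifferentiated whole.
actual_renewal = Assumption("renewal_rate", 0.85, "actual, post pricing change")
miss = expansion_forecast(10_000_000, actual_renewal, expansion) - forecast
print(f"forecast: {forecast:,.0f}  miss attributable to renewal rate: {miss:,.0f}")
```

The code is almost insultingly simple, which is the point: traceability is a discipline about where numbers and their rationales live, not a modeling technique.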
A hiring plan matters only if it feels achievable to the recruiters and managers who have to execute it. A pricing framework works only if the sales team believes it covers their actual deal landscape rather than a theoretical one. A planning template gets filled in honestly only if the people filling it in believe that honest answers will lead to better outcomes rather than uncomfortable conversations about why their department is underperforming. The system is alive only when people trust it enough to stop working around it, when they defend it in rooms you aren't present in, when someone new joins the company and learns the process from a colleague rather than from you.
That last signal is, I think, the clearest measure of belief: when the system propagates without your involvement. When a director explains the planning process to their team without referencing your name or your function. When sales ops defends the forecast methodology to a skeptical new hire and can explain not just the how but the why. When the CFO stops asking for a backup model because the primary one has earned enough trust that a backup feels redundant. These are all forms of organizational belief, and they accumulate slowly, and they are extraordinarily easy to destroy and extraordinarily hard to rebuild.
Organizational Debugging
Here is the part of BizOps that I find most intellectually interesting and most difficult to explain to people outside the function: the diagnosis of recurring organizational failures.
Every company has them. Patterns that repeat quarter after quarter despite being identified, discussed, post-mortemed, and theoretically addressed. Deals that slip at the end of every quarter. Headcount plans that never match actuals. Engineering timelines that are always optimistic by the same margin. Product launches that create support ticket surges nobody anticipated. These are not mysteries. The people inside the pattern can usually describe it with precision. What they can't do, because they're inside it, is see the system that produces it.
I once spent two months trying to understand why deals consistently slipped in the last two weeks of every quarter. The sales team attributed it to "buyer behavior." Finance attributed it to "pipeline hygiene." The CRO attributed it to "process discipline." Everyone had a theory, and every theory located the problem in someone else's domain.
The actual cause, which took weeks of tracing to uncover, was a combination of three things that no single team could see: the legal review process took an average of nine business days but was only budgeted for five in the deal timeline; the pricing approval workflow required sign-off from a VP who was routinely traveling in the last week of the quarter; and the contract template had a clause that triggered procurement review at the customer's end, adding another week that was never accounted for in the close date. Each of these was known to someone. None of them were known to anyone together. The "slip" was not a failure of discipline or hygiene or buyer behavior. It was an emergent property of three processes that had been designed independently and never reconciled.
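If it helps to see the arithmetic, here's that diagnosis as a sketch. The durations below are hypothetical stand-ins rather than the real figures, but the structure is the point: no single gap looks fatal, and only the sum, which no individual team ever computes, produces the slip.

```python
# An illustrative decomposition of the "slip" as an emergent property of
# independently designed processes. All durations are hypothetical.

budgeted = {
    "legal_review": 5,          # days assumed in the deal timeline
    "pricing_approval": 2,
    "customer_procurement": 0,  # never accounted for at all
}

observed = {
    "legal_review": 9,          # actual average
    "pricing_approval": 7,      # VP traveling in the last week of the quarter
    "customer_procurement": 5,  # triggered by the contract clause
}

# No single gap looks fatal; the sum is the slip everyone argued about.
gaps = {step: observed[step] - budgeted[step] for step in budgeted}
total_slip = sum(gaps.values())

for step, gap in gaps.items():
    print(f"{step}: +{gap} business days")
print(f"unbudgeted total: +{total_slip} business days past the close date")
```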
This kind of diagnosis, the ability to see across functional boundaries and trace a symptom back to its distributed causes, is what I think of as organizational debugging. And it's the work that BizOps is most uniquely positioned to do, because BizOps sits at the intersection of enough functions to see the whole system while being embedded enough in each to understand the local logic. The VP of Sales can see the sales process. The VP of Legal can see the legal process. The VP of Finance can see the approval workflow. Only someone who moves between all three can see how they interact, and the interaction is where the failure lives.
The uncomfortable part of organizational debugging, and here is where I have to be honest again, is that the diagnosis often implicates systems that someone built with good intentions, sometimes systems that I helped build. The pricing approval workflow that adds a week to every deal? I designed it, six months earlier, because the CEO was concerned about margin erosion. The legal review timeline that nobody budgeted correctly? I set the expectation at five days because the general counsel told me it "should" take five days, and I didn't push back because I was trying to maintain a relationship with a stakeholder I'd need later. The contract clause that triggers procurement review? I knew about it and didn't flag it during the last template revision because the revision was already behind schedule and over budget and I wanted to ship something rather than nothing.
The debugger contributed to the bugs. This is more common than I'd like to admit, and I think it's more common across BizOps than anyone in BizOps would comfortably acknowledge.
How It Erodes
BizOps rarely fails dramatically. The failure mode is erosion, and erosion is harder to spot and harder to reverse than a quick collapse because it doesn't produce the kind of crisis that would force a response.
It starts with local optimizations, each one sensible in isolation. Product keeps a Notion roadmap because the official planning tool misses their dependencies and asking for a feature update from IT would take two quarters. Finance adjusts assumptions in their own spreadsheet because the shared model can't handle multi-year deals and building that capability would require three weeks of work that nobody has approved. Sales dusts off a territory map from two quarters ago because it's "simpler" than the one you built, and by "simpler" they mean it doesn't ask them to justify their territory assignments with data.
Each of these workarounds makes sense at the local level. The person creating it is not trying to undermine the system. They're trying to get their work done within the constraints of a system that doesn't quite fit their needs. The problem is that each workaround creates a fork in organizational reality, and the forks multiply. Soon QBRs become reconciliation exercises where the first thirty minutes are spent figuring out why different teams are showing different numbers. Leadership meetings become translation sessions where someone has to explain that when Product says "on track" they mean something different than when Engineering says "on track." Board decks require parallel versions because the version that tells a coherent story uses numbers that don't match the version that's technically accurate.
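To make the fork concrete, here's a deliberately trivial sketch of that reconciliation exercise. The teams, metrics, and numbers are all invented; the shape of the meeting is not.

```python
# A sketch of the first thirty minutes of a QBR once forks exist: two
# sources report the "same" metrics, and someone has to find the gaps.
# All names and numbers are hypothetical.

shared_model = {"q3_new_arr": 4_200_000, "q3_churn_arr": 310_000}
product_fork = {"q3_new_arr": 4_650_000, "q3_churn_arr": 310_000}

for metric in shared_model:
    gap = product_fork[metric] - shared_model[metric]
    status = "matches" if gap == 0 else f"diverges by {gap:,}"
    print(f"{metric}: official {shared_model[metric]:,} vs fork "
          f"{product_fork[metric]:,} ({status})")
```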
By the time the erosion is visible, everyone has their own process running alongside the official one, and the official process continues to exist because official processes in organizations have a zombie-like durability that is almost admirable, but it runs on compliance rather than belief, and compliance without belief is just paperwork with a due date.
The hardest part of the erosion, and this is the confession I've been building toward across both of these essays, is that it often feels like success right up until it doesn't. The junk drawer fills because people trust you to handle things. The models go unused because you're already onto the next one. The access grows because leadership finds you helpful. Each symptom of drift registers, in the moment, as evidence that you're doing a good job, and the realization that you've been optimizing for being needed rather than being effective arrives late, if it arrives at all. The difference between those two things, between being needed and being effective, is the difference between a function that serves the organization and a person who has confused their own indispensability with organizational health.
If you are the only person who understands the system, that's a warning sign. The system is fragile, regardless of how well it works, because the system's survival depends on your continued presence. Which means you have built something that serves your career more than it serves the organization, even though you did it for the best of reasons, even though every step along the way felt like the responsible choice.
Rebuilding
Rebuilds don't start with announcements or resets or the kind of all-hands message that begins with "I'm excited to share our new planning framework." They start with artifacts. Small, visible, undeniable.
Pick the metric everyone argues about. Map every definition that exists across the company. Publish a source of truth and get three teams to use it. Not all the teams, because you won't get all the teams, and waiting for all the teams is a form of perfectionism that guarantees nothing gets done. Three teams. That's enough to create a reference point, a gravitational center that other teams can orient toward when they're ready, which some of them won't be for quarters, and that's fine.
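As a sketch of what "publish a source of truth" can mean at its smallest: one contested metric written down as an explicit, versioned definition with an owner, a named source, and an honest record of the divergences it replaces. Everything below, the metric, the owner, the divergences, is hypothetical and illustrative.

```python
# A minimal sketch of a published source of truth for one contested metric.
# All contents are hypothetical; the properties are what matter: explicit,
# versioned, owned, and referencable by every team.

METRIC_DEFINITIONS = {
    "net_revenue_retention": {
        "definition": ("Ending ARR from accounts active 12 months ago, "
                       "divided by their ARR 12 months ago. Includes expansion "
                       "and contraction; excludes new logos."),
        "source_of_truth": "billing system, not CRM",
        "owner": "BizOps",
        "version": "2025-07",
        "known_divergences": [
            "Sales previously included upsells booked but not yet live",
            "Finance previously excluded accounts on legacy contracts",
        ],
    },
}

def describe(metric: str) -> str:
    """Render one definition the way it would appear in a shared doc."""
    d = METRIC_DEFINITIONS[metric]
    return (f"{metric} (v{d['version']}, owner: {d['owner']})\n"
            f"  {d['definition']}\n"
            f"  source: {d['source_of_truth']}")

print(describe("net_revenue_retention"))
```

The format matters far less than the properties: one definition, one owner, one named source, published where three teams can point at the same text.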
Name the failure specifically. Not "the planning process needs improvement," which is the kind of sentence that means nothing and changes nothing. Something specific: "The old forecast couldn't handle multi-year deals, so it missed every Q4 by the same amount, and here's how the new version fixes that." Specific honesty builds more trust than a generalized reset, because a reset implies that everything was wrong, which makes people defensive, whereas a specific diagnosis implies that most things were working and one thing wasn't, which makes people curious.
Make the win visible. Show what changed in terms people care about: the forecast landed within five percent for the first time. The hiring plan matched actuals. The pricing model covered ninety percent of deal structures without exception requests. These are not exciting metrics. They are exactly the kind of metrics that make BizOps work boring to describe and essential to experience.
And expect the gauntlet. Teams will stress-test the new system with every edge case they can find, not because they want it to fail but because they need to know whether it can hold before they abandon their workarounds. Treat the stress-testing as engagement rather than resistance, because it is: the team that throws edge cases at your model is a team that is considering trusting it, and the team that ignores your model entirely is a team that has already decided not to.
Recovery is slow. Each visible fix adds back a layer of trust. Enough layers, and the system holds again. But the trust never returns to its previous level, because once people have experienced a system failing, they maintain a readiness to revert that wasn't there before, a background process of skepticism that never fully terminates, and the best you can hope for is that the skepticism operates as quality assurance rather than sabotage.
The Disappearance
When BizOps works, and I mean genuinely works rather than just running, the system stops drawing attention. Definitions hold. Planning cycles complete without drama. Forecasts converge across teams rather than diverging. The board deck is one version. The arguing in meetings is about what to do, not about whose numbers are right, which is a higher and more productive form of argument.
The proof of a working system is silence. Not the silence of suppression, where people have given up and are simply going through the motions, but the silence of integration, where the process has been absorbed into the way people work and no longer requires someone to champion it.
I have a test for this that I think about often. I call it the vacation test, though I've never had the courage to run it deliberately. The test is: step away during a critical planning moment. Take a week off during budget season. Don't check Slack. When you come back, look at what happened. Did the process stall until you returned? Did it run but produce a lower-quality outcome? Or did it run, produce a reasonable outcome, and even improve in small ways that nobody thought to tell you about because nobody associated the improvement with your absence?
The first result means you're still essential, which feels good and means the system is fragile. The second means the system is functional but dependent on your maintenance, which is sustainable but not scalable. The third means the system has been genuinely absorbed, which is the goal, and which also means that you have, in a meaningful sense, made yourself unnecessary to the thing you built, which is a strange kind of professional success that the culture of BizOps has never quite figured out how to celebrate or reward.
The clearest signal is language. When people stop asking, "What does BizOps want?" and start saying, "Based on our model..." or "According to our process..." That shift, from your system to their system, is quiet. You might not even notice it happening. But it means the work has been absorbed into the company's operating rhythm. The model isn't BizOps' model anymore. The template isn't BizOps' template. The definitions aren't BizOps' definitions. They belong to the people who use them, and the people who use them have forgotten, or never knew, that someone had to fight for them.
That's the paradox at the center of this work. You build something, you tend it, you defend it through rounds of skepticism and stress-testing and political maneuvering, and if you've done it well, the ultimate evidence of success is that nobody remembers you were involved. The system runs. Decisions move. Tuesday feels ordinary. And the belief, at that point, isn't in BizOps at all. It's in the system itself. Which is both exactly the goal and, if you're the person who built it, a peculiar kind of loneliness that nobody warned you about.
Adoption can't be forced. The only path is building systems so useful, so trusted, that teams choose them over their alternatives. And the clearest proof that you've succeeded is the moment when the system doesn't need you to explain, to defend, or to be present at all.
Footnotes
The quiet opt-out is, in my experience, the single most underestimated threat to any operational system, and it's almost never discussed because it's almost never visible. A loud opt-out, the kind where someone says "this process doesn't work for us and we're not using it," can be addressed. You can have a conversation. You can negotiate. You can adapt the system. A quiet opt-out looks like compliance. The template gets filled in. The meeting gets attended. The metric gets reported. The process, by every visible indicator, is functioning. What you don't see is that the numbers in the template are rounded estimates rather than actuals, that the meeting attendance is physical but not intellectual, that the metric is being reported from a different source than the one everyone agreed on. The system appears healthy. The data it produces is unreliable. And the gap between appearance and reliability widens slowly enough that by the time someone notices, the workarounds have hardened into permanent alternatives that will be extremely difficult to retire.
The half-life of a planning system is roughly one major organizational change. A reorg, a leadership transition, an acquisition, a business model shift: any of these can render a previously functional system obsolete in weeks. The system continues to run, but the assumptions it was built on no longer reflect reality. The people still using it start to notice that the outputs don't match their experience, and they start building workarounds, and within a quarter you're back to parallel processes and reconciliation meetings.
I've watched this cycle three times at three different companies, and the pattern is remarkably consistent. The only variable is how long it takes someone to admit that the old system needs to be rebuilt rather than patched, which in retrospect typically takes one quarter longer than it should because admitting the system is now broken feels like admitting that the work you did building it was wasted, even though the work wasn't wasted, it just served a company that has since changed.
The distinction between access and trust is, I think, one of the most important and least discussed dynamics in BizOps. Access means you're in the room. Trust means your input shapes the outcome. You can have access without trust, which looks like being invited to executive staff meetings where you take notes but nobody asks your opinion, or being CC'd on board deck drafts with the implicit understanding that your role is to check the math rather than challenge the narrative. Access without trust is surveillance from the organization's perspective and performance from yours: you're performing relevance by being present without actually influencing anything. The transition from access to trust happens through accumulated credibility, through being right often enough and diplomatic enough about being right that people start to treat your input as signal rather than noise. But the accumulation is slow, and the organization's patience for it is limited, and more than one BizOps person I've known (including me, at one point) has mistaken access for trust and been surprised when the access didn't translate into influence.
The behavioral signals of belief are subtle enough that I've started keeping an informal mental catalog. A sales manager who updates the forecast before being asked, rather than waiting for the reminder Slack message that everyone pretends not to resent. A product lead who includes engineering dependencies in their planning document without being told to, because they've seen the downstream benefit of doing so. A finance partner who stops maintaining their backup spreadsheet, which is the equivalent of a trust fall in corporate finance. Each of these behaviors represents a person who has decided that the system serves them, and their decision was not made through persuasion or mandate but through repeated experience of the system being accurate, fair, and responsive to their reality. This is what I mean when I say that belief can't be installed. It can only be earned, interaction by interaction, quarter by quarter, until the cumulative weight of positive experience tips the balance from skepticism to participation.
The deal-slip diagnosis took two months, which is roughly six weeks longer than any stakeholder expected and roughly four weeks shorter than it actually needed. The pricing approval bottleneck was the easiest to confirm and the hardest to raise, because the bottleneck was a person, not a process, and telling a VP that their travel schedule is costing the company closed revenue is a conversation that requires more political capital than most people have available in their first two quarters. I ended up framing it as a "timing optimization" rather than a "you are the problem" conversation, which worked, though I am aware that "framing" is doing a lot of diplomatic work in that sentence.
The procurement clause was stranger. Nobody on our side knew it triggered a review on the customer's side because nobody on our side had ever been a procurement officer. The clause had been in the template for years, added during a period when the company sold primarily to mid-market customers who did not have formal procurement functions. The customer base had shifted upmarket. The clause hadn't. It took three calls with customer contacts who were willing to explain their internal process before I could even name the problem, and each of those calls required a warm introduction from a sales rep who trusted me enough to let me talk to their account.
This kind of work, finding the path to the answer that does not trigger defensive reactions along the way, is a significant percentage of actual BizOps work and a near-zero percentage of what gets discussed about BizOps work.
I suspect the debugger-contributes-to-the-bugs dynamic is endemic to any function that operates across teams or a full organization.
When you're the person designing processes that multiple teams have to follow, every design decision creates constraints that will eventually bind everyone else. The five-day legal review assumption was wrong, but it was wrong because I optimized for a timeline that leadership wanted rather than a timeline that legal could truly deliver. The pricing approval workflow added friction, but it added friction because I designed it to prevent a problem (margin erosion) that the CEO cared about, and the prevention created a different problem (deal slippage) that the CEO didn't know about until the quarter ended. Each decision was defensible. Each decision contributed to a failure that I later had to diagnose and try to correct.
I don't think there's a way to avoid this entirely. The best you can do is stay close enough to the systems you've built to notice when they start generating the kinds of problems they were designed to prevent, and honest enough to admit when the diagnosis points back to you.
The zombie durability of official processes is one of the more fascinating features of organizational life. I have seen planning templates that no longer correspond to any real business unit continue to be distributed, filled in, and reviewed for years after the business unit was dissolved. I have seen QBR formats that reference metrics the company stopped tracking persist because nobody had the authority or the inclination to declare them obsolete. The process continues because the process is scheduled, and the schedule generates the behavior, and the behavior generates the appearance of function, and nobody asks whether the function is real because asking would require admitting that the last three quarters of QBRs were performative, and performative QBRs that nobody acknowledges as performative are, in some organizations, preferable to the honest conversation about what should replace them.
The distinction between being needed and being effective is, I've come to believe, the central practical question of BizOps. A function that is needed is one that the organization can't operate without. A function that is effective is one that makes the organization operate better. These sound similar, and in the early stages of BizOps they're often the same thing. The divergence happens when the function becomes so embedded in organizational processes that removing it would cause disruption, regardless of whether the function is actually improving outcomes.
At that point, the function's continuation is driven by dependency rather than value, and the person running the function (me, more than once) has a choice: invest in making the system self-sustaining, which means training others, documenting processes, and gradually reducing your own involvement; or maintain the dependency, which guarantees your relevance but caps the organization's maturity. It's a misalignment of incentives that most companies never notice, let alone grapple with. I've chosen both answers. The fact that the two incentives diverge is, I think, a design flaw in how companies evaluate operational roles, and I don't have a solution for it beyond naming it and hoping that the naming helps.
In my experience, the stress-testing phase is the most psychologically demanding part of any BizOps rebuild. You've spent weeks building something you believe in, and now a room full of people whose cooperation you need is actively trying to find all of its flaws.
The temptation is to interpret every challenge as resistance and respond defensively, which is the fastest way to lose the room's trust and the second-fastest way to end up with a system that nobody uses. The more productive interpretation, which I've had to learn and relearn multiple times, is that the challenges are the team's way of testing whether the system can handle their reality. When a sales manager says "this doesn't work for multi-year deals," they're not attacking the system. They're describing a constraint that the system needs to accommodate or lose credibility with everyone in the room who has a multi-year deal. The rebuild succeeds when you treat edge cases as design requirements rather than complaints, and when the team watches you adapt the system in real time and concludes that the person building it is listening rather than imposing.
That conclusion, fragile and hard-won, is the foundation of everything that follows.
Running the vacation test deliberately was hard at first. The prospect of returning to find that everything ran smoothly without me was, and still is, if I'm honest, more threatening than the prospect of returning to find that everything fell apart.
With time, I've gotten better at this. The impulse to keep scanning for what could break is a hard one to unplug. But you have to.
The vacation test reveals a tension in my own relationship with work that I suspect is very common in Operations teams, one rarely discussed: the desire to be unnecessary competes with the desire to be valued, and the desire to be valued usually wins, not because it's the right impulse but because it's the louder one. The healthiest practitioners I've worked with are the ones who genuinely celebrate when the system runs without them, who treat their own obsolescence as the ultimate deliverable. I aspire to that. I'm not always there. And the gap between aspiration and practice is, I think, worth naming honestly, because the alternative is a profession of people who talk about building self-sustaining systems while quietly ensuring that the systems always need one more adjustment that only they can make.
The impulse shows up. You notice it. You work with it.
