Essay · 4 May 2025

In which finance oscillates, smooths, and ultimately isn't a veto function but a learning system.

Finance Is the Feedback Loop

The Smoothing

Forecasts get smoothed. I’ve participated in that smoothing. That's not the same as inventing numbers, even though the word sounds...suboptimal. The distinction matters; it's the entire point.

The quarter had come in uneven, which is a polite way of describing what happens when sales bookings land in unpredictable lumps and the revenue recognition schedule you built in October starts to look like a document written by someone who lived in a more orderly universe, three universes over. Sales believed the gap would close, which is what Sales always believes, because Sales operates on a fundamentally different theory of time than Finance does.

Our particular variance line looked volatile in a way that would invite questions we didn’t yet have a coherent explanation for, and if you've ever presented a volatile variance line without clean answers to someone senior, you know that the subsequent forty-five minutes of your life will be consumed by a kind of Socratic interrogation that generates no insight but a great deal of anxiety, after which someone will recommend that you "get closer to the drivers," which is the governance equivalent of a doctor telling you to "be healthier."

When volatility appears, every part of the system looks for coherence. Boards want clarity, executives want confidence, the finance org wants stability. The instinct to smooth doesn’t originate in any one room. It’s ambient.

So we adjusted the weighting, tempered the swing: smoothed. Smoothing can make interpretation easier, but it can also delay detection. It narrows the amplitude of the signal.
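To make the mechanics concrete (with entirely invented numbers, not anything from a real forecast): a minimal exponential smoother. The point is only that the heavier the smoothing, the smaller the reported swing, and the longer a genuine level shift takes to show up.

```python
# Toy illustration: exponential smoothing narrows amplitude and delays detection.
# All numbers are invented; 'alpha' is the weight given to each new observation.

def smooth(series, alpha):
    out = [series[0]]
    for x in series[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

# A flat-ish series with one genuine level shift starting at index 6.
raw = [100, 104, 97, 101, 99, 103, 80, 79, 81, 78]

light = smooth(raw, alpha=0.8)   # mostly trusts each new observation
heavy = smooth(raw, alpha=0.2)   # mostly trusts the prior trend

# Three periods after the shift, the heavily smoothed line still reads
# ~88 while the raw data has been saying ~80 the whole time.
print(round(light[-1], 1), round(heavy[-1], 1))  # 78.6 88.1
```

Nothing in that sketch is dishonest; both lines are faithful transformations of the same data. The heavy line is simply slower to admit that the world changed.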

That's the slippery part: in the moment, it's almost impossible to distinguish the noise you're filtering out from the signal you're damping. That distinction sits at the exact point where organizations either learn or don't. Only later do you wonder whether the system was damping signals it should have examined directly.

Assumptions are arguable; that's what makes them assumptions. Once you start modeling scenarios instead of reporting raw deltas, you're describing a possible world rather than the actual one. So you document the assumptions, make them visible, and keep the numbers within a defensible range.

The January bookings shortfall that looks like random variance might be the first indication of a market shift. The March spike that smooths the quarter might be pull-forward that's borrowing from next quarter. You don't know. You can't know, not yet. And the act of smoothing removes precisely the volatility that would have forced the conversation that might have helped you figure it out. It isn’t fabrication. It’s something subtler and, in the long run, potentially more costly: reducing the system’s ability to sense what’s actually happening by making it appear less volatile than it is.

This is where the tidy story about finance stops matching the texture of the work. The tweetable version says: measure, compare, adjust, improve. Numbers in, insight out.

In practice, the numbers arrive already shaped. What counts, what gets weighted, what gets labeled variance and what gets labeled timing, all of it reflects small human decisions made under pressure. By the time a figure reaches the slide, it has already passed through judgment.

At one company, we ran a monthly forecasting competition. The rules were almost comically simple: forecast five high-level numbers for the upcoming month. Revenue, expenses, cash, headcount, one or two others depending on the quarter. Five numbers. Not segment-level detail, not unit economics, not a probability-weighted scenario analysis. Just five big numbers that, in theory, the team should be able to predict with reasonable accuracy given that we spent approximately all of our waking hours immersed in the data that produced them.

We could get close. Collectively, the team could usually land within a reasonable band. But someone was always off, and not always the same person, and not always on the same number. The misses rotated. The person who nailed revenue would whiff on cash. The person who called expenses would miss headcount because a late-month hire start date slipped by a day and crossed the period boundary. Five numbers. A team whose entire professional existence revolved around understanding them. And still the exercise produced an irreducible uncertainty that humbled everyone who participated.
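The "someone was always off" pattern has a simple arithmetic backbone. Assume, purely for illustration, that each of the five numbers independently lands within the acceptable band 85 percent of the time; the chance of all five landing at once is already under half.

```python
# Invented illustration: per-number hit rate vs. all-five hit rate.
per_number_hit_rate = 0.85           # assumed probability of landing in band
all_five = per_number_hit_rate ** 5  # assumes the five misses are independent
print(round(all_five, 3))            # 0.444 — miss at least one more often than not
```

The independence assumption is generous (a miss on revenue often drags cash with it), but the shape of the result holds: being good at each number individually is not the same as being good at all of them together.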

That competition captured something essential about finance that the "lower expenses, grow revenue faster" crowd never reckons with: even the simplest version of this job is hard. Not intellectually complex in the way that, say, derivatives pricing is hard, but hard in the way that predicting weather is hard, where the system is deterministic in theory and chaotic in practice and the gap between those two things is where all the interesting problems live. If five smart people who live inside the data can't consistently predict five summary numbers one month out, maybe the confident pronouncements about financial strategy that fill earnings calls and board decks and analyst reports deserve a little more skepticism than they typically receive.

The uncomfortable question is whether the clarity we’re creating is costing us signal.

The Loop

Finance often gets cast as the department of rejection slips and red ink. The people who say no. The late arrival at the meeting who ruins the fun by pointing out that the plan is too expensive.

That framing misses something fundamental about the architecture, and I realize I'm about to describe a feedback loop, which risks sounding like every other LinkedIn post about "learning organizations" and "continuous improvement," so let me try to be specific about what I actually mean rather than gesturing at a concept and trusting the jargon to do the work.

Finance is not outside the system evaluating it. Finance circulates through it. Accounting keeps the historical record: what happened, documented with enough precision that a stranger could reconstruct the sequence of events. FP&A simulates possible futures: what might happen if these seventeen assumptions hold, which they won't, but the exercise of building the model forces a kind of structured honesty about dependencies that wouldn't happen otherwise. Strategic finance ties resources to bets: given what we believe, here is what we should fund, and here is what we should stop funding, and here is the conversation about the difference between those two categories that will take three hours and leave everyone slightly irritated. Systems and data connect signal to action, or are supposed to, which is a qualification I'll return to.

Together they form a feedback loop that, in its healthiest version, works roughly like this: you design a plan (bet), allocate resources to it (execute), surface what actually happened against what you expected (detect), interpret the gap between plan and reality (analyze), and update the next cycle accordingly (adjust). When the loop runs, decisions compound into learning. A decision leads to an outcome, the system remembers how it got there, and the next time you face a similar decision you're working from accumulated evidence rather than accumulated confidence, which are extremely different things.
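None of this is anyone's real process, but a minimal sketch (all numbers invented) shows what "closing the loop" means mechanically: the unexplained part of the variance feeds the next plan instead of being absorbed into the narrative.

```python
# Minimal sketch of the bet -> execute -> detect -> analyze -> adjust cycle.
# Every name and number here is a hypothetical stand-in, not a real process.

def run_cycle(plan, actuals, tolerances):
    variance = {k: actuals[k] - plan[k] for k in plan}            # detect
    unexplained = {k: v for k, v in variance.items()
                   if abs(v) > tolerances.get(k, 0)}              # analyze
    # adjust: unexplained gaps update the next plan rather than
    # being smoothed away (0.5 is an arbitrary learning rate)
    next_plan = {k: plan[k] + unexplained.get(k, 0) * 0.5 for k in plan}
    return next_plan, unexplained

plan = {"revenue": 100, "expenses": 60}
actuals = {"revenue": 88, "expenses": 61}
tolerances = {"revenue": 5, "expenses": 3}

next_plan, signals = run_cycle(plan, actuals, tolerances)
print(signals)  # {'revenue': -12} — the gap that deserves curiosity
```

The stalled version of the loop is the same code with the `unexplained` dictionary thrown away: the plan rolls forward unchanged, and the variance becomes a story rather than an input.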

When the loop stalls, motion continues but learning stops. And the stall, this is the critical part, almost never looks like a stall. It looks like normal operations.

Where It Breaks

This is where it starts to fray. The profession tells itself that it is the truth-teller, the guardian of rigor, the adult supervision. That story obscures the degree to which we participate in the system’s failure to learn.

There’s another problem that rarely gets discussed because it’s boring and often invisible, and yet it consumes an astonishing percentage of actual finance and operations work: before you can learn anything from the numbers, you have to structure the data that produces them. Structuring data is, depending on the month, somewhere between 60 and 95 percent of the job, and it is the part nobody mentions when they talk about "the job."

I mean this literally. The feedback loop I just described, the elegant cycle of design, execution, detection, analysis, and adjustment, assumes that you can actually connect a decision to its outcome, that when you fund a bet, you can later trace the spending to the result and understand what happened. In theory this is straightforward. In practice it requires chart of accounts design, cost center hierarchies, tagging conventions, allocation methodologies, and data pipeline maintenance that can be so tedious to build and so fragile to maintain that most organizations do them poorly and then wonder why their financial analysis feels thin. You cannot learn from data you cannot structure and keep structured. You cannot structure data that lives in fourteen different systems with inconsistent naming conventions and no shared identifier. And the time it takes to solve these problems, to build the infrastructure that makes the feedback loop possible in the first place, is time that nobody outside the connective functions, the people paid to worry about it, sees or values, because the output is not insight but plumbing, and plumbing is invisible until it breaks.

More than once, I’ve received plans after the bet was already socially committed. I’ve reviewed the assumptions, identified the gaps, and recognized that the room’s appetite was for confirmation rather than interrogation. The analysis still gets built. The question becomes whether you spend political capital reopening the question or documenting the risk and returning to it later. Most organizations reward the second approach more consistently than the first.

Variance gets rationalized instead of examined. A miss against forecast triggers not curiosity but explanation, and there is a meaningful difference between those two responses. Curiosity asks: what did we misunderstand about the system? Explanation asks: what story can we tell that makes this miss feel less alarming? Both responses have passed through my hands, and explanation is easier, faster, and far more rewarding in the short term, because it allows the room to move on. The problem is that moving on is exactly what prevents the loop from closing. Every rationalized variance is a signal the system set aside, and the cumulative effect of ignored signals is a company that has very detailed financial reports and very little understanding of its own economics.

Dashboards circulate but don't shape behavior. At one company, I spent three weeks building dashboards that were, by any technical standard, excellent: clean visualizations, real-time data, appropriate drill-down capability. Sent them to distribution lists of thirty or forty people, every week, on schedule. And I suspected, though I couldn’t yet prove it, that the dashboards were being referenced more than relied upon. They were being used to prepare for meetings, which is different. The dashboard was not a learning tool. It was a performance prop: something you glanced at before a review so you could reference the right numbers when asked, and the difference between a dashboard that teaches and a dashboard that props up the appearance of rigor is the difference between a functional feedback loop and a very expensive screensaver.

Forecasts get polished until they look coherent rather than accurate. Which returns us to where we started, to the smoothing.

Oscillation

Feedback loops can sharpen judgment. They can also overcorrect, and the overcorrection pattern in finance is so consistent across organizations that it probably deserves its own corporate taxonomy.

After a painful miss (a quarter where revenue came in meaningfully below forecast, a product bet that burned cash without producing results, a market shift that rendered the plan obsolete), Finance tightens. The response is fast and comprehensive: hiring slows, discretionary spend requires additional approval, forecast assumptions get stress-tested with a rigor that would have been useful six months ago but is now being applied retrospectively to a plan that has already failed. Experiments shrink. Risk tolerance contracts. The system protects itself.

Signal improves. Ambition contracts.

And this is what makes corporate finance genuinely difficult in a way that "lower expenses, grow revenue faster" completely fails to capture. Finance sits at the junction of resources and belief, which means it has enormous power to shape what the organization attempts. If Finance flows cleanly, if the feedback loop runs and signal moves through the system without being smoothed or rationalized or ignored, the organization senses faster. It learns. It adjusts before the adjustment becomes an emergency.

If Finance clots, if signal gets blocked or blurred at any of the dozen points where small human decisions can mute it, parts of the organization go numb before anyone names the problem. And the numbness is self-reinforcing, because the parts that go numb stop sending useful signal, which means the feedback loop degrades further, which means Finance compensates by tightening controls (since it can't trust the signal, it restricts the spend), which means the organization becomes simultaneously less informed and more constrained, which is roughly the worst possible combination.

The goal is calibration, not control. And calibration requires a kind of ongoing attention that is fundamentally incompatible with the way most organizations think about finance, which is as a periodic reporting function (close the books, present the results, build the next forecast) rather than a continuous sensing system.

When It Works

The healthiest version of the loop is quieter than people expect, and I want to dwell on that quietness because it's the most counterintuitive thing about good corporate finance.

Good finance doesn't look like sophisticated analysis or dramatic interventions or brilliant forecasting. It looks like a model cell turning red early enough that the room leans forward with curiosity instead of bracing for impact. Or a conversation where someone says "the variance is interesting" and means it, genuinely finds it interesting rather than threatening, because the system has enough trust and enough historical context to treat a miss as information rather than failure.

I want to stay on this for a moment, because the relationship an organization has with its misses tells you almost everything about whether the feedback loop is working. In the best rooms I've been in, a plan miss that isn't a crisis is actually one of the most interesting things that can happen in finance. You forecasted X, you got Y, nobody's in trouble, the cash is fine, and now you have this genuine puzzle: where did the model's understanding of the business diverge from reality? Was the assumption wrong, or was the assumption right and something else changed? Did the miss reveal a dependency you hadn't mapped, or a sensitivity you'd underweighted? These are good questions. They're the questions that, if you actually pursue them, make the next forecast better and the next bet smarter.

In most rooms, though, a miss triggers defensiveness regardless of whether it's a problem. The instinct is to explain it away, file it under "timing" or "one-time item," and move on. And the reason is straightforward: most organizations can't distinguish between "the plan was wrong" and "you were wrong," and so admitting that the plan missed feels personal in a way that makes honest examination almost impossible. The feedback loop needs something that gets called "psychological safety" in management literature, but which in practice is simpler and rarer than that: the freedom to be wrong without being punished for it. Our forecasting competition worked because there were no stakes. When the stakes are a board meeting, people smooth. When the stakes are your annual review, people explain. The loop requires exactly the kind of honesty that career incentives discourage, and no amount of organizational design has solved that tension in any company I've worked for.

There's a loneliness to this that I don't think gets acknowledged enough. Being the person in a room who wants to stay with a miss, who finds the variance genuinely interesting and wants to understand it rather than explain it away, while everyone else is visibly eager to move to the next slide. You can feel the room's patience thinning. You can see the CFO glancing at the clock. And you learn, over time, to read the room's tolerance for curiosity the way a comedian reads a crowd, and to ask one fewer question than you want to, and to save the real examination for later, alone, in a spreadsheet nobody asked for.

Finance doesn't earn trust by blocking bad ideas. You can block bad ideas all day and the organization will simply route around you, which I've also seen happen and which is its own kind of instructive humiliation. Finance earns trust by helping the company see what happened clearly enough to choose what happens next with something approaching real understanding rather than sophisticated guessing.

Cash keeps you in the game. A working feedback loop helps you learn how to play it. And the distance between those two things, between survival and understanding, between staying alive and actually getting better, is where the real work of corporate finance happens, in the space that "grow revenue, lower expenses" has never once illuminated.

Footnotes

Sales time is a genuinely fascinating psychological phenomenon. In Sales time, deals that are "two weeks out" remain two weeks out for months. Pipeline that is "soft but real" exists in a quantum state of probability that collapses only when Finance tries to book it.

I don't say this to criticize Sales, whose optimism is necessary and probably correct more often than Finance gives them credit for. I say it because the temporal mismatch between Sales confidence and Finance precision is one of the most reliable sources of organizational friction, and it almost never gets discussed as a structural problem. It's usually discussed as a Sales accuracy problem, which is a framing that Finance finds convenient and Sales finds insulting, and neither framing is entirely wrong.

Smoothing usually feels correct and responsible, and almost never malicious. The narrative becomes easier to defend, the volatility looks contained, and the board spends its time discussing strategy rather than interrogating why January's bookings were 40% below plan while March was 30% above.

The cost is that early warning signals can get muted. By the time the trend becomes unambiguous enough to survive the smoothing, the window for intervention has often narrowed considerably. I have never seen a post-mortem that identified smoothing as a root cause, but I have seen many situations where earlier, noisier data would have prompted earlier, less painful adjustments.

Exercising judgment remains difficult.

There's a version of this that I find almost poignant: the CFO who presents a clean financial narrative to the board not because they're hiding anything but because they genuinely believe that clarity requires coherence, that their job is to make sense of the noise rather than to transmit it. The instinct is good. The effect, cumulatively, is a board that has never seen the actual texture of the business's financial reality, only the curated version, and therefore can't help even when they want to.

The forecasting competition revealed something else, too: how differently people model the same business in their heads. One person's forecast might assume that a large renewal would close on the 28th because it always had; another person, aware of a conversation with the customer that hadn't been widely shared, would assume it would slip to the following month. Both forecasts were reasonable. Both reflected genuine understanding. They just reflected different slices of information, and the gap between them was a map of how unevenly knowledge distributes itself across even a small team. The exercise was humbling in the best possible way, because it made visible the degree to which "knowing the business" is always partial, always perspectival, and never as complete as the confident voice presenting the quarterly forecast might suggest.

I once spent the better part of a quarter reconciling two systems that defined "customer" differently. One system counted parent accounts. The other counted billing entities. The same company could be one customer or seven depending on which report you pulled, which meant that every metric downstream of "number of customers" (average revenue per customer, customer acquisition cost, churn rate) was subtly unreliable in a way that nobody noticed until two teams presented contradictory analyses in the same meeting and the room spent forty-five minutes arguing about numbers before someone realized they were using different denominators. The data structuring problem isn't glamorous. It also isn't optional. And the time I spent fixing it was time I wasn't spending on the analysis the business actually needed, which is the kind of tradeoff that finance people make constantly and discuss never.
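A toy version of that reconciliation, with made-up names and revenue figures: the same rows produce a different customer count, and therefore a different per-customer metric, depending on which grain you count at.

```python
# Toy illustration: one dataset, two grains, two "truths".
# All entity names and revenue figures are invented.

billing_rows = [
    {"billing_entity": "Acme GmbH",  "parent": "Acme",   "revenue": 10},
    {"billing_entity": "Acme KK",    "parent": "Acme",   "revenue": 5},
    {"billing_entity": "Acme Inc",   "parent": "Acme",   "revenue": 25},
    {"billing_entity": "Globex LLC", "parent": "Globex", "revenue": 40},
]

total_revenue = sum(r["revenue"] for r in billing_rows)

customers_by_entity = len({r["billing_entity"] for r in billing_rows})  # 4
customers_by_parent = len({r["parent"] for r in billing_rows})          # 2

# "Average revenue per customer" silently disagrees between the two reports.
print(total_revenue / customers_by_entity)  # 20.0
print(total_revenue / customers_by_parent)  # 40.0
```

Neither number is wrong. They answer different questions, and the forty-five-minute argument happens when nobody in the room knows which question their slide is answering.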

The phrase "pick your battles" comes up constantly in finance, and I've started to notice that it almost always means "choose not to fight this one." I can count on one hand the number of times I've seen a finance leader genuinely pick a battle, meaning push back on a plan with enough force and evidence to change the outcome. The social mechanics of corporate life make it extraordinarily difficult to be the person who says "this plan can't work at this price" when the plan has already been announced internally and the team has already been hired. You're not only delivering analysis at that point. You are delivering a challenge to a decision that others have already aligned around, and organizations have a predictable resistance to reopening alignment once it has already formed.

I once spent three weeks building a unit economics dashboard that tracked customer acquisition cost, payback period, and lifetime value by cohort, with automated weekly refreshes. I was genuinely proud of it. Six months later, during a strategic review, the head of product asked a question about customer payback period that the dashboard answered on its second tab. Nobody in the room referenced the dashboard. Nobody had opened it in weeks. The data existed. The learning didn't. The system had motion without understanding, and I had contributed to the illusion that motion and understanding were the same thing by building a tool that produced the appearance of insight without the organizational behavior that converts insight into action.

I have been in rooms where a single bad quarter reshaped policy for years. Expense controls hardened into culture. Experiments required layers of approval that accumulated like geological strata, each one added during a crisis and never removed afterward because removing a control feels risky in a way that adding one doesn't. The system became safer and smaller at the same time, and the people who stayed adapted to the new constraints, and the people who couldn't adapt left, and gradually the organization selected for a kind of temperament that valued caution over ambition, and nobody planned that, nobody wanted it, but the feedback loop delivered it all the same.

The forecasting competition worked, probably, because the consequences of being wrong were a Slack emoji and some gentle ribbing, not a performance review. In that environment, people were willing to show their real assumptions, not just their polished conclusions. When the stakes rise to a board meeting or an annual review, the incentives change. Forecasts get pressure-tested and communicated more cautiously. Assumptions get stress-tested in private before they’re exposed in public. The instinct is understandable. The effect is that honesty becomes selective.

Which suggests that the barrier to candid finance isn’t competence or even courage. It’s the architecture of consequences: who absorbs the blame when a number misses, and whether “the plan was wrong” quietly becomes “you were wrong.” In most organizations, the two are rarely separated. And so the instinct to stabilize, explain, or soften results persists not because finance people lack integrity, but because the system attaches cost to visible error.

I watched a product team route around a Finance review process by reframing their project as a "pilot" that didn't require standard budget approval. The pilot lasted eighteen months, cost more than the original proposal would have, and was eventually absorbed into the operating budget without anyone formally approving it. Finance had successfully blocked the proposal and completely failed to prevent the spend. The system learned exactly the wrong lesson: that Finance could be bypassed rather than engaged, and that the cost of bypassing was lower than the cost of the approval process itself. That was at least partly Finance's fault, and I include myself in that.

