Note · 14 September 2025

In which the numbers don't lie, exactly.

The Numbers on the Wall

The spreadsheet showed customer acquisition cost at forty-seven dollars. Then someone changed how we allocated event marketing spend, and it became seventy-three. A blended attribution model landed it somewhere in between. Three methods and three numbers. All defensible.
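The three figures can be reconstructed with toy numbers. This is a hypothetical sketch, not the actual spreadsheet: the spend amounts, customer count, and the 50/50 blend are invented to show how three defensible allocation choices produce three different CACs.

```python
# Illustrative figures only — chosen so the three methods land at
# forty-seven, seventy-three, and a blended number in between.

spend = {
    "paid_ads": 94_000,
    "event_marketing": 52_000,
}
new_customers = 2_000

# Method 1: event marketing treated as brand spend, excluded from CAC.
cac_narrow = spend["paid_ads"] / new_customers

# Method 2: event marketing allocated fully to acquisition.
cac_full = (spend["paid_ads"] + spend["event_marketing"]) / new_customers

# Method 3: a blended model — here, half of event spend counts.
cac_blended = (spend["paid_ads"] + 0.5 * spend["event_marketing"]) / new_customers

print(cac_narrow, cac_full, cac_blended)  # 47.0 73.0 60.0
```

Each line of arithmetic is correct. The disagreement lives entirely in which costs cross the boundary.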

We spent two hours in that conference room arguing about which one belonged on the slide. Twenty minutes in, or thereabouts, I knew nobody was going to win. Not because the math was wrong. It was doing exactly what we asked it to do. Each version answered a slightly different question, and each made a slightly different part of the story visible.

What we were really arguing about was attribution. Which team had created value and which team had burned it. Who would carry the next quarter’s pressure. The number on the slide would decide more than a ratio. It would decide whose story became official.

Still, I pushed for the version I preferred.

Part of it was conviction. I believed the method I supported reflected reality more honestly. But part of it was self-interest. It made my team’s work look better. Both were true. Only one of them had urgency.

That’s what makes framing slippery. You can understand that you’re choosing a boundary and still want yours drawn in permanent ink.

The same pattern showed up again with a marketing campaign that produced record-low CAC from an email blast. The room felt lighter immediately. Leaders nodded. Efficiency, documented and ready for the quarterly review.

What didn’t come up was how the list had been built. Months earlier, we had funded an expensive direct-mail campaign to assemble that audience. The email result was real. It was also downstream from a cost no one felt like re-litigating.

The email looked efficient because the expensive part had already been amortized into memory. The cost still existed. It just no longer had a line item that could protest.
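The same arithmetic, with invented numbers: whether the email CAC is remarkable or unremarkable depends entirely on whether the earlier list-building cost is inside the frame.

```python
# Hypothetical figures. The "record-low" CAC counts only the send;
# the direct-mail campaign that built the list sits outside the frame.

email_send_cost = 1_200
list_building_cost = 48_000  # the direct-mail campaign, months earlier
conversions = 400

cac_as_reported = email_send_cost / conversions
cac_with_history = (email_send_cost + list_building_cost) / conversions

print(cac_as_reported, cac_with_history)  # 3.0 123.0
```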

I noticed. I let it stand.

Correcting it would have required a detour into attribution logic that no one wanted to take. It would have complicated a win. It would have shifted the mood in a room that preferred clarity to caveats. So the number stayed clean.

Metrics rarely mislead through deception. More often they mislead through convenience. A number fits. The room relaxes. The slide advances.

The boundary determines who gets blamed. That makes it social long before it is mathematical.

When those boundaries move quietly from quarter to quarter, comparison becomes theater. You can’t learn from a trend that keeps redefining itself. Without a stable frame, every improvement looks plausible and every failure looks contextual.

The most useful metric experience I’ve had came from watching a company almost repeat itself. A new VP proposed a campaign strategy that sounded sharp. Someone else pulled up an old dashboard. The same approach had been tried eighteen months earlier. The pattern was unmistakable because the measurement hadn’t changed.

Nothing about the metric was perfect. It couldn’t explain why the campaign had failed or whether conditions were different now. What it offered was continuity. A record stable enough that enthusiasm couldn’t rewrite history.

Consistency is not about accuracy. It is about constraining revisionism.

A number tracked the same way over time becomes a kind of institutional memory. More reliable than whoever happens to be in the room. More stubborn than confidence.

Danger creeps in when measurement starts masquerading as the thing itself.

Lifetime value becomes future cash rather than a model. Growth rate dominates the all-hands while unit economics thin underneath. Gross margin rises because costs were cut in places that will resurface later as churn. The dashboard brightens. The system weakens.
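A minimal sketch of why lifetime value is a model rather than cash. Assuming the simplest geometric retention model (expected customer lifetime is one over monthly churn, a common textbook simplification, not any particular company's method), a small shift in the churn assumption moves the "value" a great deal:

```python
# Toy LTV model: margin per month divided by monthly churn.
# All inputs are illustrative assumptions, not real figures.

def ltv(monthly_margin: float, monthly_churn: float) -> float:
    # Expected lifetime in months is 1 / churn under geometric retention.
    return monthly_margin / monthly_churn

print(ltv(40, 0.05))  # 800.0
print(ltv(40, 0.07))  # ~571.4 — two points of churn erase a quarter of the "value"
```

The output is only as solid as the churn assumption feeding it, which is precisely what gets forgotten when the number starts being treated as future cash.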

The people closest to the work usually feel the discrepancy first. They sense it in customer conversations or in the tone of internal debates. By the time the metric reflects the change, the behavior behind it has already shifted.

We ran an engagement survey once that came back strong. Leadership circulated the results that afternoon. Two months later, three of our best people left within the same week.

They had scored everything favorably.

Nothing in the survey was technically wrong. “Fine” was the honest answer for someone who had already decided to leave and didn’t want to trigger a conversation. The instrument didn’t fail. It created a clean surface to hide behind.

The danger isn’t that silence can’t be measured. It’s that once you try to measure it, you convert it into performance. You give people a sanctioned way to answer without revealing anything.

If you aren’t paying attention, the spreadsheet won’t rescue you.

Over time I’ve stopped thinking of metrics as scorecards. They feel closer to stethoscopes.

A stethoscope doesn’t cure anything. It amplifies a signal. But what it amplifies depends on where you place it, and who is allowed to hold it.

The instrument holds steady long enough for judgment to form. It makes something faint audible. It does not decide what you do with what you hear.

Good metrics preserve continuity long enough for patterns to become visible. They allocate accountability by defining what counts and what doesn’t. They record choices in a way that makes them harder to quietly revise later.

They cannot prevent convenience. They cannot eliminate bias. They cannot force honesty.

They can only make forgetting harder.

And still, repetition happens. Leadership changes. Frames shift. Allocation models evolve. The cohort gets redefined. The number moves, and for a while it looks like progress.

Then you’re back in another conference room, deciding whether to redraw the boundary or admit you’ve seen this movie before.

The metric won’t stop you from choosing the convenient frame.

It will remember what the last one cost.


I'd welcome your thoughts on this essay. Send me a note.
