Calibration Debt
The liability hidden inside the organizational singularity.
This week Peter Diamandis published “The Organizational Singularity Is Here.” It circulated fast, for good reason: the math is sharp and mostly right. But I kept returning to one sentence he never wrote, the one about where the next generation of orchestrators actually comes from. This essay is what that missing sentence turned into.
There is a specific feeling I have started noticing in myself when I read things lately. An essay, a memo, a proposal. The surface is fine. The structure is competent. The sentences do their job. But a quieter signal is missing or faint: the one a careful reader learns to detect, the sense that the writer understands what they are saying at a depth below the words.
Sometimes I close the page and assume I will come back to it. Sometimes, if I am honest, I move past it because I cannot articulate what is bothering me. The writer is not lazy. The AI is not hallucinating. The output is plausible. It is the kind of wrong that requires you to have done the thing yourself, many times, to detect.
I keep thinking that feeling names something important. What follows is an attempt to name it.
The math that works
Peter Diamandis published a piece this week arguing that we have crossed a threshold. He calls it the organizational singularity. One person plus AI beats a hundred people without it. Companies shrink to twenty percent of their former size while entrepreneurs launch them at five times the rate. Coase’s law, the insight that firms exist because coordinating work inside an organization is cheaper than transacting for it on the open market, is effectively dead. The new job description is orchestrator: strategic judgment, creative synthesis, relationship cultivation, AI fleet management.
I build AI products for a living. That matters here. What Diamandis is describing lines up with what I see from inside the industry. The margins are real. The speed is real. The category of founder he describes is emerging. His forecast is in the ballpark.
So I am not writing to correct it. I am writing because there is a prerequisite hidden inside it, and the prerequisite is finite.
This is not an argument about mass unemployment or the shape of the coming social contract. Those arguments exist elsewhere and they matter. What I want to name is a quieter, more structural problem, one that sits underneath the labor-market question and will still be there after it is solved.
Calibration debt
Diamandis lists the new job requirements: strategic judgment, creative synthesis, relationship cultivation, AI orchestration. He treats them as a job description, as if a person could take them up the way a nineteenth-century worker could learn to operate a lathe.
But orchestration is not a primitive skill. It is a meta-skill. It presupposes that you already know, in your body, what good execution looks like in the domains you are orchestrating. A person who has never written a marketing campaign cannot tell whether the AI just wrote a good one. A person who has never debugged a production incident cannot tell whether the agent is chasing a symptom or a cause. A person who has never negotiated a serious deal cannot tell whether the AI’s draft proposal is promising or embarrassing.
Hubert Dreyfus spent his career arguing this point in a different context. In his five-stage model of skill acquisition, the novice follows rules and the expert no longer does. Expertise, for Dreyfus, is not rule-following but pattern recognition built from thousands of specific past cases. The expert does not consult a checklist. The expert perceives a situation as already similar to situations they have been inside. “The expert’s skill,” he wrote, “has become so much a part of him that he need be no more aware of it than he is of his own body.”
What AI lets you do is operate at the output level of an expert without going through the stages. What it does not let you do is evaluate that output at the level of an expert. The gap between the two is a liability. I want to give it a name.
Call it calibration debt. It is the inverse of technical debt. You accumulate calibration debt any time you deploy AI to produce work beyond your ability to evaluate it. The output ships. The shortfall is invisible. The debt is paid, eventually, by someone, in the form of decisions that looked right on the surface and turned out to be wrong at a depth only an expert could have felt.
Unlike financial debt, calibration debt cannot be refinanced. It can only be repaid through the slow, embodied practice that was skipped. There is no instrument for borrowing someone else’s judgment at scale.
What an editor notices
An experienced editor, the kind who has spent twenty years on prose, reads a paragraph that a junior colleague has drafted with an AI assistant. The editor does not read the paragraph twice. They do not run it through a checklist. They look at it for perhaps four seconds and say, quietly, “something here is off.” Then they sit with it and articulate, over the next several minutes, what is off: a rhythmic tell, a claim that overshoots the evidence, a cadence that sounds like confidence but is actually camouflage. The junior could not have produced that paragraph alone. The editor could not have produced that diagnosis without the twenty years. Both of them used the word paragraph to mean very different things.
Every domain has this scene. I have seen its analogue in engineering review, in clinical handoff, in product strategy conversations. The expert does not out-reason the AI. The expert recognizes a wrongness the AI cannot feel, because recognition is not a computation the AI is performing.
The first-generation orchestrator
Here is the unspoken condition of Diamandis’s forecast. Many of the orchestrators who make his thesis plausible right now got their judgment the old way. They spent a decade, often more, inside the kind of organization he is cheering the unbundling of. Their judgment was calibrated against mentors they will not provide to anyone. Their taste was shaped by sitting next to someone who already had taste. The AI-native company they now run is not evidence that the orchestrator role can be produced at scale. It is evidence that a subset of the first generation, trained in the old structures, can operate the new ones extraordinarily well.
That qualifier matters. Not everyone in my generation carries this deposit. Seniority is not the same as calibration. Some people spent their decade doing things that did not deposit judgment; others were lucky in their mentors and their specific assignments. The population of first-generation orchestrators is smaller than the population of people being displaced, which is one of the reasons the economic transition Diamandis describes will be uglier than his framing suggests. The new role is not available simply because the old one disappeared.
I catch myself in the pattern too. Whatever I can evaluate now about AI-generated work was deposited in me by slow years of doing the work the hard way, including years in the academy that I sometimes describe as a detour but that were clearly not a detour. They gave me the ability to feel when a philosophical argument is cheating. That feeling is not teachable in the abstract. It was laid down by hundreds of specific bad arguments I had to learn to spot, case by case, with a teacher nearby who could see what I could not yet see.
The firm did something structurally similar for its knowledge workers. Not on purpose. As a side effect. A junior marketer sat next to a senior marketer. A junior engineer watched a staff engineer triage an outage. A junior analyst learned, in the tenth meeting of a specific engagement, why the client’s framing of the problem had been subtly wrong the whole time. The firm was not good at producing output efficiently. It was accidentally good at producing judgment. It was a calibration apparatus that happened to be wrapped around a revenue engine.
If you remove the firm and keep the revenue engine, you keep the output and remove the calibration. That is the trade we are making, and most of the people making it cannot see the second term.
What I do not know
Now the honest part. Everything I have said so far reads as worry, and worry is not the only register available to the argument.
The apprenticeship apparatus is historically contingent. Guilds gave way to firms. Firms may give way to something we cannot yet picture. It is plausible, and I think more than plausible, that the generation losing its calibration inheritance will build the replacement, precisely because they will have to. New forms of transmission could emerge from the same conditions that are dismantling the old ones. Building in public across open repositories. Live feedback loops from audiences larger than any internal team. Peer structures that do not look like any employer. Formal systems for embodied apprenticeship that are deliberately designed rather than accidentally produced, because the accident is no longer available.
Any of these could work, and work better than the firm ever did. I cannot rule it out, and I should not. I could be wrong, and if I am I will look, in hindsight, like someone who worried about the wrong thing.
What I am more confident about is narrower. The old transmission structures are being dismantled. The replacement has not yet taken clear shape. And the people most enthusiastic about the dismantling are, in many cases, the ones whose own judgment was formed inside what they are dismantling. That last observation is what keeps me from treating the transition as automatically benign. Self-blindness to one’s own formation is not evidence that the formation was unnecessary.
The ledger underneath
Diamandis’s forecast reads as a celebration. It is also, quietly, a ledger. On one side, the visible gains: margins, speed, scale, the disappearance of overhead. On the other side, the invisible liability: calibration debt, accumulating silently in every organization whose output now exceeds its members’ ability to evaluate that output. The liability does not show up in any quarterly report. It shows up, years later, in decisions that looked fine on the surface and were not.
I am not asking anyone to freeze the transition. The transition is happening. I am asking for the ledger to be named.
What I would say to anyone who reads Diamandis and feels the thrill of his forecast is this. Ask one additional question. Not “how do I become an orchestrator” but “how did I become the kind of person who could orchestrate anything at all, and what would it cost for the people around me to arrive at the same capacity?” The answer is rarely comfortable. The structures that made you possible are probably not structures anyone designed. They were accidental, and we are unbundling the accident.
I am building my fleet. Most of my friends are building theirs. The math is real.
The question I keep returning to is whether any of us are also building the thing that will produce the next generation’s ability to tell whether the fleet is heading in the right direction. Someone will. I do not yet know who, or what form it will take, and I notice that the ledger is being drawn up now, silently, in our names.
Sources & further reading
Hubert L. Dreyfus and Stuart E. Dreyfus, Mind over Machine (1986). The five-stage model of skill acquisition, and the argument that expertise is pattern recognition rather than rule-following.
Peter H. Diamandis, “The Organizational Singularity Is Here” (Metatrends, 2026). The forecast this essay is in conversation with.
Ronald Coase, “The Nature of the Firm” (1937). The theory of the firm as a response to transaction costs. Diamandis reads it as superseded; this essay suggests at least one thing the firm was doing cannot yet be bought on the market.
