The Friction That Was Thinking
The forklift takes everything. Even the weights you needed.
A follow-up to What Won’t Cross. That piece made the general case: formation happens in friction, and the gym for thinking has to be built. This one walks inside and picks up the equipment.
The grain changed direction and the plane told me before my eye could.
I was finishing the face of a walnut panel with a hand plane. A plane wants to be pushed with the grain. When you find the direction, the shaving comes off in a single continuous curl, the surface is left glassy, and there is nothing in particular to think about. When the grain turns, the tool stops curling and starts tearing. The change arrived in my wrists first. A hesitation in the blade. A small sound that was not the right sound. No tearout was yet visible on the face of the panel, and I had already lifted the plane and flipped the board.
That lesson was not written anywhere. It was in the pushback.
Matthew Crawford, who left a think-tank job to open a motorcycle repair shop, observed that a repairman has to begin each job by getting outside his own head and noticing things: he has to look carefully and listen to the ailing machine. The machine is not cooperative. It does not return a clean answer to a clean query. You have to attend to it on its own terms — it will not reveal what it is doing to someone who is not fully present. It resists, and in the resistance it tells you what it is actually doing, which is almost never what the manual said it was doing.
The wood resists. The room you are teaching resists. The patient whose story does not fit the template resists. In every practice worth its name, something refuses to cooperate, and that refusal is not an obstacle to the work. It is the teacher.
I wrote about what formation is and why it does not cross from one medium to the next. That essay named the loss. This one is about what it would take to replace what was lost.
The loop without the middle
Every practice you care about moves through a cycle: a cue, a routine, a reward. You see a gap in the argument; you work out the position; it holds under scrutiny. You see a patient; you form an impression; it guides the treatment. The cue fires, the routine runs, the reward arrives.
But the routine is never only a behavior. It is the part of the work that forms you. Working out the position is where the thinker learns what she actually believes. Forming the impression is where the physician learns what the chart cannot carry. Something is deposited in the person doing the work: a sensitivity to cases of this kind, an instinct for when a case is not of this kind, a feel for where the work has to be held more carefully than its outputs require.
That deposit is the practice. The reward is just what tells you the loop closed.
Here is what is happening. A system is inserted between the cue and the reward, executing the routine for you. The gap is registered. The position arrives. The reward is collected. The loop closes faster than it ever has.
And nothing forms.
Think of the last time you felt a genuine shift in your own understanding. It did not happen when a perfectly formatted document arrived in your inbox. It happened when a paragraph you were writing refused to resolve, and you had to sit there for twenty minutes until you found the word that fixed it, or when a piece of code failed three times and you had to trace the logic back to the premise you didn’t know you had assumed. That frustration was not the cost of the thought. That frustration was the thought occurring.
The completed loop is what the habit theorists describe without knowing they are describing a problem. The cue fires, the reward arrives, and the middle, the part where resistance would have deposited something in you, has been bypassed. You will not notice it for a while. Skilled practitioners do not lose their skill overnight. The lawyer who has read ten thousand briefs does not forget how to read briefs when she stops writing them herself. The senior engineer who spent a decade debugging race conditions does not forget race conditions when the first pass comes from a model. The loss is downstream: the practice that does not form in the next generation, and the attunement that stops being refreshed in this one. I have called this specific liability calibration debt in another essay; here I want to examine what produces it, and what would prevent it.
What a gym actually is
A gym is a designed environment for the production of difficulty. Not for the output of difficulty, not for trophies or race times, but for difficulty itself, as the point. The gym produces no artifact. What it produces, over time, is the person. And it is engineered, deliberately, to prevent anyone from finding a shortcut.
Good gym design has four features. Equipment that is adjustable, so the load can match your current capacity and grow with it. A protected structure, so the session cannot be cancelled just because there are easier things to do. Other people who take it seriously, because the social environment either reinforces the practice or quietly erodes it. And someone who knows the difference between productive struggle and actual injury, who will let you sit with the hard thing for as long as it is building you and step in only at genuine failure.
We have not built this for thought.
What we have instead is an accelerating pressure to close every loop as fast as possible, because the closed loop is the deliverable, and every open loop looks, from the outside, like wasted time. The structures that used to protect cognitive time under load have been eliminated one by one: the uninterrupted afternoon, the paper-only draft, the meeting where the senior actually read your memo. Nobody called it a loss, because the loops kept closing and the rewards kept arriving.
These four gym features translate directly into design for knowledge work, and I will come back to each of them. But first it helps to understand, precisely, what the load is.
Load and duration
In strength training, time under tension refers to how long a muscle is under load during a set. A person who swings a barbell up with momentum and drops it has moved the weight, but the muscle was under tension for perhaps two seconds. A person who lifts the same weight slowly, pauses at the top, and lowers it under control has kept the muscle under tension for six or eight seconds. The second person lifted less impressively. The second person built more.
The completed loop is the cognitive equivalent of the momentum rep. The cue fires, the system swings the output into place, the loop closes. Time under cognitive tension: near zero.
This is not just an elegant analogy. The brain is physical tissue. Synaptic plasticity, the biological formation of new neural pathways, requires metabolic energy; it is expensive for the body, and the body does not pay that cost without cause. The resistance of a hard problem is the signal that tells the tissue to adapt. When the machine removes the cognitive load, it is not just saving time. It is depriving the brain of the very demand that would have built a new circuit.
Designing for cognitive time under tension means holding the loop open deliberately. Not because you enjoy inefficiency. Because the time you spend sitting with the unresolved problem, with the argument that is not working, with the data that refuses your model, is the time in which your capacity is being physically built. The delay is not a bug. The open loop is where you were being made into someone who can handle the next version of this problem.
I notice this in my own practice. I read long books and take notes on them, revising the notes as I go. I listen to long-form podcasts, sometimes three or four hours, without skipping, without “multi-tasking”. Sometimes, I will run the material through a quick summary pass first, the kind an AI can generate in seconds, to know whether the conversation is worth the investment before I make it. But the summary is a filter, not a substitute. Once I decide the material is worth the full engagement, I give it the full engagement, every digression and tangent, every place where the argument stalls and recovers. The summary told me whether to sit down to the meal. The summary is not the meal. What the full engagement deposits is not the information, which the summary could have delivered. It is the cognitive paths that duration forces me down, the thoughts I arrive at only because I was required to follow the argument long enough to find them. I could not have reached those thoughts on purpose. I could only have been led there by the material, over time, under sustained load.
That is deliberate cognitive time under tension. The summary pass is the momentum rep. The full engagement is the slow, controlled rep: the mind stays under load, the resistance persists, and something is deposited that the summary would not have deposited.
The difficulty curve
The body adapts. If you lift the same weight every week, the stimulus stops producing growth. The load has to increase. Trainers call this progressive overload: the deliberate, incremental raising of difficulty to keep pace with developing capacity.
AI, as currently deployed, does the opposite. It flattens the difficulty curve. The hard argument becomes as tractable as the routine one. The complex analysis becomes as easy as the simple one. The junior professional’s cognitive load stays constant regardless of how demanding the material actually is, because the system absorbs the surplus difficulty before she encounters it. This is the equivalent of going to the gym every day and lifting the same five-pound weight for years.
A 2025 study of thoracic oncology tumor boards found that AI recommendations agreed with the human clinical team seventy-six percent of the time. The twenty-four percent where they disagreed is the part of the work where clinical judgment still lives: a radiologist remembering a case that ended badly, a surgeon sensing that the imaging does not tell the whole story, the pause where someone asks a question the data would not have prompted. If the junior clinician is trained on a workflow in which the summary is already written, the likely diagnosis is already named, and the first-pass analysis is already done, she will never encounter the load of the twenty-four percent. She will lift the same weight for her entire residency. The seniors will carry the hard cases until the seniors retire. Then there will be physicians who were never overloaded, and who therefore never grew.
The same logic runs through any practice where difficulty is not uniformly distributed. Legal analysis. Code review. Strategic synthesis. The hard cases are where the load actually is. A workflow that smooths difficulty to a constant floor does not produce experts. It produces practitioners who are permanently competent at the mode and permanently fragile at the tails, which is precisely where the work matters most.
The result is a condition you can already feel inside organizations: phantom competency. A junior worker whose outputs are flawless but who feels an intense, quiet impostor syndrome because she knows her capacity has not grown. She is shipping senior-level work while feeling like an intern, acutely aware that she could not reproduce her own deliverables if the network went down. The organization measures the artifact and reports success. The person measures her own formation and feels hollow.
The forklift and the spotter
We have been treating AI as a forklift. Its job is to move the cognitive freight: take the rough argument, deliver the polished version; take the raw data, deliver the summary; take the intake, deliver the differential. The forklift does not care about the person standing next to it. It cares about the load.
A gym does not use forklifts. A gym uses spotters.
A spotter stands behind the bench while you press. The spotter does not lift the weight. The spotter watches you struggle with it, lets you sit in the tension for as long as the tension is productive, and touches the bar only when you reach genuine failure. Even then, the spotter applies the minimum force needed to keep the bar moving. The entire point is that you take the weight. The spotter’s job is to make sure the weight does not kill you, not to spare you from it.
The difference between a forklift and a spotter is the difference between two relationships to difficulty. The forklift assumes difficulty is waste. The spotter assumes difficulty is the product. One optimizes for the artifact. The other optimizes for the person.
Almost everything in the current conversation about AI in professional life is organized around the forklift model. How do we move the cognitive load faster, with fewer errors, at lower cost? That question is not wrong. There is an enormous quantity of cognitive freight that should be moved by machine: formatting, scheduling, boilerplate, summaries of meetings you were not in. Let the forklift take it.
The problem is that the forklift does not know the difference between freight and exercise. It picks up everything. And once it picks up the exercise, the person standing next to it gets weaker without noticing, because the loop still closes, the deliverables still ship, and the reward still arrives.
Consider what happens when a system architect reaches for a whiteboard at the hardest moments of a project. The whiteboard is slower than the screen. The marker cannot undo. There is no copy-paste. She is not choosing it because she enjoys the smell of dry-erase. She is choosing it because the constraints of the medium force her to synthesize before she executes. The inability to undo is the load. The slowness is the tension. She is acting as her own spotter: using the minimum level of tool support that keeps the weight on her, and reaching for the more capable tool only when she genuinely cannot proceed without it.
That is what the spotter model looks like when it is working. Not refusing the machine. Not performing analog virtue. Choosing, deliberately, the level of support that keeps the difficulty on the person.
What the gym requires
Back to the four features of good gym design, and what they mean for a knowledge worker.
Adjustable load: The work a person encounters must include difficulty proportional to their developing capacity, not smoothed to a constant floor by automated assistance. The twenty-four percent of hard clinical cases is not a problem to be resolved before it reaches the trainee. It is the training. Routing hard cases around junior practitioners in the name of efficiency is the organizational equivalent of removing all the heavy weights from the gym and replacing them with five-pound dumbbells, forever.
Protected time: The cognitive session cannot be cancelled because there are easier things to do. Unassisted drafts. Paper-only reviews. Meetings where the primary documents were read by humans, not summarized by machines. These are artificial constraints in exactly the same way the barbell is artificial: they exist to produce a condition that the environment no longer generates on its own. The cognitive gym is engineered resistance. It produces nothing except the person.
Serious community: The social environment reinforces the practice rather than the shortcut. If everyone around you submits AI-drafted work without resistance, the person who drafts her own has no signal that the practice is worth the time. Institutions that want formation have to make the practice legible and reward the attempt, not only the output, because the output is the one thing the machine can also produce.
A knowledgeable spotter: Someone who can calibrate exactly how much help to withhold, and for how long, before the load becomes damage rather than development. This is not the same as a mentor who gives advice. It is the rarer thing: someone whose own formation is intact enough that they can distinguish productive struggle from waste, and who has the discipline not to intervene too soon. In The Thought You Didn’t Have, I traced this in the specific case of writing: the senior who lets the junior’s draft stay clumsy long enough for the junior to find out what she thinks is doing something that the senior who improves the draft immediately is not doing.
None of these are anti-technology propositions. The gym is not against transport. A cognitive gym is not against AI. It is a designed environment for a specific kind of formation that automation does not produce and cannot substitute, in the same way that a running track is a designed environment for a kind of fitness that the car does not produce and cannot substitute.
The choice being made right now
Every organization deploying AI at scale is currently making a choice about this, whether or not it has named the choice. It can deploy AI as a forklift, moving every available load, optimizing every loop, closing every gap between need and output. It will have faster, smoother, cheaper production for as long as the people running it still carry inherited capacity. Or it can deploy AI as a spotter: present at every point of genuine failure, absent at every point of productive struggle, calibrated to keep the weight on the person for as long as that is where the building happens.
The second option is harder to design, harder to measure, and harder to defend in a quarterly review. It also produces people whose judgment can be trusted when the situation has no precedent and the machine has no useful answer. That situation is arriving faster than the forklift model can handle.
The wood taught me by resisting the blade. The long book teaches me by refusing to summarize itself. The long essay teaches me by refusing to resolve in the first three sections. These are not inefficiencies. They are reps.
For any practice you care about, the question is where the resistance still is, and who is meeting it. If the difficulty is still there and you are still the one encountering it, the practice is forming you. If the difficulty has been absorbed and something else is meeting it on your behalf, the loop is completing without you. There are practices that should complete without you. Formatting should. Boilerplate should. Summaries of meetings you were not in should. But the practices whose resistance was building you are your weights. The inconvenience is the load.
The question is not whether to use the forklift. You will. So will everyone around you. The question is whether anyone has decided which weights are yours to carry, and whether the organization you work inside has any interest in protecting that decision, or whether it is, right now, quietly picking up everything it can reach.
---
Sources & further reading
Matthew B. Crawford, Shop Class as Soulcraft (2009). The argument that skilled manual work is not a lower form of cognition but its own form of it. The description of the attention the ailing machine compels is paraphrased early in this essay.
Jonadas, What Won’t Cross (2026). The companion essay that names the formation stage as the part of professional work that does not survive automation. This essay assumes that argument and asks the design follow-up.
Jonadas, Calibration Debt (2026). The argument that firms were accidental calibration apparatuses, and that AI is dismantling the apparatus while retaining the revenue engine. The junior-professional and senior-executive patterns here draw on that framework.
Jonadas, The Thought You Didn’t Have (2026). The specific case of writing as formation practice: what AI quietly takes is not the voice but the process by which the writer was being made into someone with a voice.
K. Anders Ericsson et al., “The Role of Deliberate Practice in the Acquisition of Expert Performance,” Psychological Review (1993). The research behind the popular “10,000 hours” formulation. Ericsson’s actual argument, which the popular version dropped, was that the key variable was deliberate practice under feedback and resistance, not hours alone. Time under tension, not time spent.
Hubert Dreyfus and Sean Dorrance Kelly, All Things Shining (2011). On poiesis, the craftsman’s way of bringing things out at their best. Dreyfus’s wider work on skill acquisition and embodied expertise stands behind the completed-loop argument here.
Journal of Clinical Medicine, Large Language Models in Multidisciplinary Tumor Boards (2025). The seventy-six percent agreement figure between AI recommendations and human clinical teams.

