The Machine-Readable Self
We became readable. What stays unread is the work.
Today, May 1, is Labor Day in much of the world. It feels like the right day to publish a piece I have been struggling with for a while, about what work might still be, once we let the machines have the parts they were always going to take.
It is almost three in the afternoon. You are about to approve a request that has crossed your screen a thousand times before. The fields are complete. The pattern matches. The customer is authorized, the dollar amount is below your threshold, the workflow has done its job. Your finger is already moving toward the approval button when, for half a second, it stops.
Nothing dramatic has happened. You did not catch an error. You did not remember a rule. You hesitated because something in the request, the timing, the wording, the order of the line items, the sender’s tone in the comment field, resembled another request that ended badly. You will not remember which one. You will probably not even know, in the next minute, that you stopped. But the hand knew before the mind.
Most office work is full of these tiny recognitions. The body knows where the friction is before the report does. The eye sees the dropdown and remembers the exception. The hand refuses, for half a second, before any reason has been produced. None of this feels dramatic from inside the day. It feels like getting through it.
We have also learned to perform some of this. The mouse jiggle to stay “active” on a chat app. The careful keyword in a status update so the pipeline reads it as priority. The way a ticket is phrased so the queue routes it past the colleague known to slow it down. We have already begun to format our presence for the systems that watch.
What changes now is that the format has a new reader.
Last week, TechCrunch reported, following Reuters, that Meta plans to capture employee mouse movements, clicks, keystrokes, and navigation patterns to train AI agents that will perform the same kinds of computer work the employees are doing. A few months earlier, OpenAI began rolling out memory features that, with the user’s permission, capture the rhythms of a person’s day across applications to give the model a longer continuity. The first records the work life. The second records the rest of life. The first is corporate; the second is voluntary. The difference between what the company coerces and what the user accepts is becoming difficult to feel.
The immediate scandal is surveillance. It should be. But there is something colder underneath the privacy problem.
The project makes sense.
If work has become a sequence of movements across interfaces, then record the movements. If the task lives in the path from ticket to spreadsheet to dashboard to approval flow, then capture the path. If a job can be shown as a choreography of clicks, pauses, fields, and confirmations, then the worker becomes, at least at that level, a training interface. The employee is not replaced after the work has been understood. The employee is used to define what counts as understanding the work.
That is the more disturbing thought. AI is not making us machine-readable from the outside. It is finding the parts of us that have already been rewritten for machines.
Operational legibility
The most disturbing thing about machine-readable work is not the recording. It is what the recording teaches us about how to appear. The person learns, slowly and without any single moment of consent, to present herself first in the form a machine can process. A biography becomes a data structure wearing a human name. The work trains a metaphysics.
A person becomes machine-readable when the important parts of her work can be captured as operational traces. Not the work as she experiences it. Not the responsibility she bears. Not the years that make one pause different from another. The trace: move here, click this, reject that, approve this, escalate there, leave behind a sequence another system can replay.
This is the next step beyond what I called The Writing That Was Never Yours. In that essay, I examined role-prose: the language we speak as a function, the mask we wear to be heard by the institution. Role-prose is what we say to be legible to other humans inside the system. Operational legibility is deeper. It is not what we say, but the residue of what we do. It is the click, the pause, the sequence of movements captured without our conscious authorship. Role-prose is the mask; operational legibility is the fingerprint left on the glass.
Elizabeth Anscombe, in her short and exact book on intention, drew a distinction the office never made and that the AI age is forcing us to remember. The same physical movement, she argued, can be many actions, depending on the description under which the action falls. A man pumping water can be poisoning the household, supplying the vegetable garden, exercising his arm, or finishing his shift. Which of these the action is depends on the description that the agent can own. The log captures the gesture. The description is what has an owner.
Operational legibility erases the description. It preserves sequence and discards the reason the sequence was worth performing.
This did not begin with AI, or even with computers. Frederick Winslow Taylor’s scientific management, in 1911, made the worker visible as motion, timing, output, and deviation from the prescribed method. The body entered management as something to be decomposed. Later, the office learned to do the same to attention. Email timestamps, ticket queues, CRM fields, calendar density, productivity dashboards, version histories, response-time metrics: each made some part of professional life easier to count and compare.
Sometimes this was liberating. A system that depends entirely on someone’s private memory breaks when that person leaves. A bureaucracy, at its best, protects us from favoritism. A record can make power answerable. A process can keep a claim from vanishing because the wrong person was absent that day.
Legibility is not the enemy.
The danger begins when the trace stops serving the work and starts replacing our sense of what the work is. The dashboard does not show what happened. It shows what the dashboard can register. The ticket does not contain the case. It contains the case in the form the queue can route. The keystroke log does not capture a worker thinking. It captures the shadow thinking leaves when it passes through software.
That shadow is useful. It is also not the person.
Meta’s project is a nearly perfect image of the conversion. The employee is not asked to describe the work in language, with reasons, doubts, and responsibilities. The employee is asked to keep working while the system learns from the residue. The model does not need to know what the pause meant if it can learn where pauses usually occur. It does not need the lived weight of the decision if it can reproduce the sequence by which the decision normally appears.
Operational legibility is the condition in which the residue becomes enough.
The resume was already a parser
You can feel the same conversion in the modern job search.
A resume once promised, at least ideally, to present a life to a reader: not the whole life, of course, but a shaped account of a professional path. It had omissions, emphasis, tact. It asked someone to see a pattern in the years.
Then the reader became a parser.
Applicants learned the new etiquette quickly. Use standard headings. Remove unusual formatting. Repeat the exact keywords from the job description. Do not describe the work in the terms that are most accurate if the system is looking for a different phrase.
Do not be interesting before you are ingestible.
The two words name two registers, and the order is the whole problem. Interesting is the register a human uses for another human: a turn of phrase, a strange path, a shape of attention. Ingestible is the register a human uses for a parser: a pattern that survives tokenization, a keyword density, a header the software has been trained to recognize. The career advice, harmless on its surface, is metaphysics in disguise. It says: the parser comes first. Be readable, then, if there is room left, be a person.
There is rarely room left.
No one experiences this as philosophy. It is advice passed from recruiter to candidate, from career coach to student, from anxious applicant to anxious applicant. The metaphysics arrives without anyone having authored it.
The same training happens on platforms. A creator begins with a voice, which is to say a way of standing in relation to an audience. Then the graph begins to teach. This opening holds retention. This sentence loses the room. This face travels. This duration dies. The creator adjusts, then adjusts again, and slowly the work begins to arrive pre-shaped for distribution.
In machine learning, reinforcement learning from human feedback trains a model by rewarding some outputs and discouraging others. On platforms, the direction reverses. The algorithm trains the human. A person performs reinforcement learning on herself until the voice becomes easier for the system to carry.
What looks like self-expression is often already self-translation.
The resume and the creator feed are earlier versions of the Meta story. First, a person learns to present herself as a readable object. Then a machine becomes good at reading that object. Then we call the machine uncanny, as if the uncanniness began with it.
The manager as router
Inside organizations, the conversion is quieter because it looks like competence.
The manager opens the dashboard and sees the red items first. A stale ticket. A customer sentiment score. A delayed handoff. An owner missing from a field. A graph moving in the wrong direction. The world arrives already sorted by the system’s sense of salience.
The dashboard offers the relief of reduced complexity. By turning a messy human situation into a glowing red alert, it desaturates the weight of judgment. The dashboard is a filter that makes action possible while making judgment optional.
But the interface has a way of becoming the room.
After a while, the manager knows the work through the traces it leaves: the escalation, the field, the metric, the update. The habit becomes bodily. Eyes go first to the red number. Fingers move toward the routing action. The question “what happened here?” is quietly replaced by “what state should this object move to next?”
This is where the manager becomes legible as a router.
Some routing should be automated. There is no dignity in moving a ticket from one queue to another when the reason is already settled. Let the machine do that. The problem is that many organizations have made it hard to tell which moments were merely procedural and which only looked procedural because nobody had time to understand them.
An AI agent arrives, and the question becomes painfully reasonable: if the job is moving traces across interfaces, why does a person need to be there?
The answer cannot be that a person is always needed. That would be false. The answer has to be more exact. A person is needed where the trace is insufficient: where the case changes meaning when held against a history no field contains, where the metric improved because the underlying practice degraded, where the correct action would be wrong because this customer, this colleague, this student, this patient is not a token of the usual type.
The manager’s real work begins where routing fails. But if the institution only sees routing, it will train the manager to become what it can see.
The old name for this condition
Heidegger helps because he does not begin with the gadget. He begins with a way the world is allowed to appear.
In “The Question Concerning Technology,” he argues that modern technology is not merely a collection of instruments. It is a mode of revealing. Under Gestell, or enframing, things show up primarily as resources to be ordered, optimized, stored, and used. A river appears as hydroelectric capacity. A forest appears as timber inventory. A field appears as yield. The thing has not vanished. It has been made to appear in the aspect under which it can be mobilized.
That is why the Meta story feels less like a new scandal than a completed sentence. The worker appears as usable behavior. The day appears as training data. The pause appears as a statistical feature. The body at the computer appears as an input stream.
Heidegger’s word matters because “dehumanization” is too blunt. It makes the problem sound like a sentimental reduction of human richness by cold machines. The actual process is stranger. The human being remains present, active, clever, adaptive, overworked, sometimes even proud. But the system receives her under one aspect: as something whose usable pattern can be extracted.
That is enframing with a reader attached.
AI does not invent the frame. It reads what the frame has prepared. It is powerful exactly where the world has already been formatted as standing-reserve: the email, the status update, the keyworded resume, the optimized post, the ticket path, the dashboard action, the sequence of clicks through a business process. We keep asking whether the machine can imitate human work. A better question is how much of human work was already forced to imitate a machine-readable form.
The answer is uncomfortable because it is uneven. There are parts of our work we should be relieved to surrender. There are also parts we surrendered years ago while still performing them ourselves. The completed view is the precondition for a question that, until now, was not really available.
What the clearing reveals
Heidegger, near the end of the same essay, says something less quoted than Gestell and more important. The danger of enframing, he claims, is also the only place the saving power can show itself. “The danger, when grasped as the danger, becomes the saving power.” The phrasing sounds mystical and is not. He is saying that there is no escape from the danger to some unspoiled outside. There is only the difference between living inside the danger without seeing it and seeing it as the danger that it is. The seeing is already a turning.
What was making the seeing hard, all this time, was that we ourselves were performing the operational work. As long as the worker, the manager, the applicant, and the creator were the ones running on the dashboard, the dashboard could be confused for the work. We could pretend the metric, the parsable resume, the optimized post, and the ticket queue were what we did. We were the ones doing them, after all. We bled into them. They felt like work because they cost work.
AI changes that, not by replacing us but by separating us. When the agent runs on the dashboard, the dashboard stops being mistaken for the work. The keystroke log, captured by the model, is at last visible as keystrokes. The ticket path is at last visible as a path. The parsable resume is at last visible as a parse. We can finally see, because we are no longer the ones generating the residue, that the residue was never the building. It was scaffolding.
That is the saving power. Not a relief. An exposure.
The exposure forces a question that the era of the dashboard had quietly suppressed. If the operational was not the work, what was?
The question feels strange because the answer was always present and always demoted. The professional vocabulary of the late twentieth century filed it under headings like “soft skill,” “people side,” “judgment call,” “executive function,” “the human element.” The phrasing is telling. Soft, by contrast with the hard countable work. People side, as if the rest of the work were not. Judgment call, in the singular, as a moment of exception, rather than as a continuous capacity. The vocabulary made the actual work sound peripheral. The periphery sounded peripheral because the operational center had grown large enough to occupy the foreground.
The center was empty.
What was always there, behind the operational, is what every profession, in its older self-understanding, knew it was for. The doctor who has a sense of this patient, not patient-as-token. The teacher who hears that this student is asking a different question than the one the syllabus expects. The lawyer who recognizes that the case cannot be resolved by the precedent because the precedent assumed something that does not hold here. The manager who, from a way the meeting went silent, knows that the team has stopped trusting the project. The engineer who, looking at a build, feels that something is wrong before the test catches it. These are not exceptions to the work. They are the work, when the work is taken seriously. The operational was the way they got reported.
Acknowledgment as the form of work
Naming what was demoted requires a vocabulary older than management theory, kept alive by philosophers who refused the terms of the bureaucratic century. Stanley Cavell named it most carefully. He called it acknowledgment: the comportment in which one treats another person as that person, in their irreducible particularity, rather than as a token of a type. Acknowledgment is not interpretation; it is the stance from which interpretation can happen. It is not knowledge of the other; it is the openness in which the other can come to be known. It is what is missing from the most well-supported algorithmic decision and what is present, often without being noticed, in the briefest exchange between two people who actually see each other.
Acknowledgment is not the only thing the operational hid, but it is the one that gathers the others.
Around it sits judgment, in something like the Aristotelian sense of phronesis: the capacity to make a call you can stand behind, in a particular situation, when the rule does not by itself determine what to do. Not “decision under uncertainty as ranked by the model,” which is something a model can do well. The act of standing for the outcome.
Around it sits attention, in something like Simone Weil’s sense: looking at a situation until you actually see it, rather than classifying it until you can act on it. AI classifies endlessly. It does not, and cannot, attend, because attention requires that the attender be capable of being changed by what she attends to.
Around it sits responsibility, in the older sense in which someone’s name is annexed to a decision and they remain in a position to be addressed about it. Not the audit trail. The person who, when something goes wrong, can say I, and here is why, and here is what I knew, and here is what I would do differently.
These are not soft skills. They are the working content of being a person in a profession. The operational era did not invent them. It compressed them, demoted them, called them the human side, and made the rest the dashboard. AI, by absorbing the dashboard, can return them to the center, but only if we accept the surrender of the operational rather than competing with the machine on its ground.
The new measure of work is not how many cases you processed but how many you actually saw. Not how many tickets you closed but how many you closed in a way you could defend in front of a different person tomorrow. Not how many drafts you shipped but how many of them were yours under a description you can own. The shift is from quantity of trace to density of acknowledgment.
This is also why “perform humanity harder,” the obvious response to the Meta story, fails. The performance of authenticity is itself parsable. Platforms reward it; employers template it; the market turns it into a content strategy by lunch. Acknowledgment cannot be performed for the system because acknowledgment is not for the system. It is for the other person. Its illegibility is not a strategy; it is the form of the act.
Can Gelassenheit scale?
Heidegger had a name for the comportment that uses technology without being used by it. He called it Gelassenheit, usually translated as releasement: a yes-and-no said simultaneously to the technical world. Yes, we use the radio, the airplane, the calculator, the agent. We depend on them. We do not retreat into a Luddite refusal. And no, we do not let any of these devices tell us what our being is. We let the machine be a machine; we do not let it become our metaphysics. Heidegger paired this with what he called openness to the mystery: the recognition that even the technical world rests on a kind of disclosure that is not itself technical.
The yes-and-no is harder to live than it sounds. While the operational still flowed through human hands, it was always tempting to identify with what the hands were doing, because the hands were ours. Saying “yes” to the technical infrastructure and “no” to the metaphysical claim was a distinction without a clean cut. The infrastructure and the work were tangled. Now the cut becomes available. AI takes the operational; we can finally say yes to the operational as operational, without confusing it with the work, and no to the suggestion that the operational was ever the measure of who we were.
This is the saving power, said in the other vocabulary. The danger of enframing, when grasped as the danger, opens the clearing in which Gelassenheit becomes practicable. Not because Heidegger was naively right about technology, but because the consummation of Gestell is what makes its outline finally visible.
Heidegger himself did not believe this could scale. He imagined Gelassenheit as a small, contemplative, “here and now and in little things” practice, lived on country paths and in Black Forest stillness. He did not think a society could organize itself around it. He may have been wrong, or he may have failed at imagination. The reason to think he was wrong is precisely the one he could not have foreseen: the technological precondition for a Gelassenheit at scale did not exist for him. It begins to exist now.
A Gelassenheit at scale would not look like Gestell at scale. Gestell scales by uniformity, by making everything appear in the same aspect. Gelassenheit scales differently, by multiplication of singular acts. A society of acknowledgments scales the way a society of friendships scales: not by metric, but by texture. Many particular relations, each carrying its own weight, none reducible to a sum.
The popular version of this proposition is older than the AI debate. Star Trek has been running it for sixty years. The Federation handed the operational over to computers a long time ago. The replicator handles supply; the ship handles routing; the sensors handle parsing. Picard does not approve tickets. He deliberates about the Prime Directive. He sits with a grieving officer. He has a conversation with Data about what it means to be a person. The crew does the work that only crew can do: the call, the recognition, the attention to a particular situation in which a particular person is at stake. The starship is the precondition for the captain. The dashboard is what frees the deliberation, not what replaces it.
We mistake the show for utopia and miss that it is a coherent social proposition. A society in which most human work is non-operational is a society that has decided, deliberately, to let the operational be operational. It is not a fantasy of leisure. It is a fantasy of seriousness: the seriousness about each other that the operational era was too busy to maintain.
Whether such a society is achievable is a different question. There is nothing in the technology that requires us to reorganize this way. Capital can choose, and may choose, to keep humans inside the operational, racing against the machine. The platforms can choose, and have chosen, to monetize the performance of authenticity rather than the practice of acknowledgment. The danger does not save anyone automatically. It only gives us, for the first time, a clearing in which the work of saving could be done.
What the day already contains
Return to the scene at three in the afternoon.
You hesitated. You did not know why. The system, if it was watching, recorded the delay, time-stamped the next action, and moved on. Tomorrow it will use a few thousand such delays, summed across thousands of employees, to teach an agent how often the human at this stage of this workflow takes a beat. The agent will learn. It will not learn what the beat was for.
You knew. Or rather, your hand knew, and a part of you that does not separate from the hand knew with it. The request did not look like the others, in some way you could not list, and the not-looking-like was a piece of work you had spent years acquiring without noticing, and that work had a name. It was acknowledgment. The request was, briefly, a particular request, made by a particular person, against a history you held without naming. You looked. You did not classify. You attended.
The era of the dashboard called that the soft side of the job. The era of AI exposes it as the side that is left when the dashboard goes to the machine. The hand pausing over the trackpad is not a vestige of an older way of working. It is the early form of a newer one, in which the operational has been delegated and the human is being asked to do, deliberately and at last, the thing that human work was always doing badly under the disguise of the operational.
The clearing is opening. It does not save us. It puts us in front of a question we had been allowed to avoid for a hundred years.
What is the work, if it is not what the dashboard could measure?
The day already has the answer in it. The pause was the work. The next call you make in which you actually see the person is the work. The decision you can stand behind, against the metric, is the work. The acknowledgment is the work. Heidegger’s “here and now and in little things” was never a smaller scale than the operational. It was the only scale at which acknowledgment is even possible. What changes now is that there may be room for many such here-and-nows, distributed, multiplied, taking up the foreground that the operational has finally vacated.
The question is whether we will claim the room.
---
Sources & further reading
TechCrunch / Reuters, “Meta will record employees’ keystrokes and use it to train its AI models” (2026). Reporting on Meta’s workplace data-capture initiative for training AI agents, including mouse movements, clicks, keystrokes, and navigation patterns.
OpenAI, “Pulse and continuity in personal AI” (2026). Public materials describing memory and continuity features that capture aspects of the user’s daily activity to inform the model across sessions.
Martin Heidegger, “The Question Concerning Technology” (1954). The central source for Gestell, or enframing, and for the formulation of the saving power that grows where the danger is.
Martin Heidegger, Discourse on Thinking (1959). The text in which Gelassenheit, releasement toward things and openness to the mystery, is laid out as the comportment that says yes-and-no to the technical world without retreating from it.
Frederick Winslow Taylor, The Principles of Scientific Management (1911). The early management dream of decomposing labor into measurable methods, timings, and outputs.
G. E. M. Anscombe, Intention (1957). The short and exact account of intentional action as action under a description that the agent can own.
Stanley Cavell, “Knowing and Acknowledging,” in Must We Mean What We Say? (1969), and The Claim of Reason (1979). The development of acknowledgment as a stance toward another person that is prior to interpretation, and that no body of evidence can substitute for.
Simone Weil, “Reflections on the Right Use of School Studies with a View to the Love of God” (1942). The classical account of attention as a moral act.
Maurice Merleau-Ponty, Phenomenology of Perception (1945). The background for the claim that skilled action is bodily contact with a meaningful world.
Jonadas, The Writing That Was Never Yours (2026). Companion essay on role-prose: professional writing that had already separated itself from authorship before AI arrived.
Jonadas, The Friction That Was Thinking (2026). Companion essay on the forms of resistance that build judgment rather than merely slowing output.

