The Engineering of Intent, Chapter 2: Cognitive Load and Material Disengagement
This is Part 2 of a series walking through my book The Engineering of Intent. In the previous chapter, we set up the Triadic Relationship Model — the CMDP view of software development where human, agent, and codebase each play a distinct role. This chapter is about what happens to the human when the agent is doing most of the typing.
The Failure Mode With No Dramatic Name
The most dangerous failure mode of AI-native development has no dramatic name. The book calls it material disengagement, borrowing a term from ethics. It’s the condition in which the engineer retains the title of author while having ceased to meaningfully engage with what is being authored.
This is the chapter I most want junior engineers to read and the one I most want senior engineers to re-read. It’s about attention — the finite, vulnerable, under-examined resource on which the entire discipline depends.
Reading Code Has Changed. You Might Not Have Noticed.
When a 400-line diff arrives in three seconds, there is no tractable way to read it token by token. You have to scan for shape. Does the diff touch the right files? Is the dependency direction correct? Are the names in the codebase’s idiom? Does the test coverage match the behavior change?
Senior engineers have always read this way. AI-native development forces it on everyone. That’s empowering, but it’s also demanding — holistic reading is a real, trainable skill, and most teams have not yet decided to train it.
Impressionistic scanning (in five passes)
The chapter walks through a technique I call impressionistic scanning — looking at a diff the way one might look at a painting, first from a distance, then up close on details that seem off. Five passes, in this order:
- The file list. Is the set of changed files surprising? A change to a payment handler should not touch the email templates.
- Imports and dependency changes. New imports are the most reliable signal of hidden commitments.
- The test diff. If behavior changed and no test changed, there are two possibilities: either the diff doesn't actually change behavior, or the behavior change is untested. Resolve that ambiguity before reading a single line of production code.
- The shape of the new functions. Long, deeply nested, duplicated conditionals — these are the fingerprints of a shortcut.
- Names. Bad names are the fingerprints of weak intent.
Only then do you read specific blocks in detail — only the ones the scan flagged. The approach is paradoxically faster and more rigorous than line-by-line reading. I won’t spoil the full treatment, but most engineers get measurably better at it within two weeks of deliberate practice.
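The first two passes lend themselves to partial automation. Here's a minimal sketch of a first-pass scanner over a unified diff; the heuristics (substring matching for test files, a simple regex for imports) are my illustrative placeholders, not the book's method:

```python
import re

def scan_diff(diff_text: str) -> dict:
    """First-pass impressionistic scan: surface the file list,
    new imports, and whether any tests changed. The heuristics
    are illustrative, not a complete review policy."""
    # Pass 1: the file list, pulled from unified-diff headers.
    files = re.findall(r"^\+\+\+ b/(.+)$", diff_text, re.MULTILINE)
    # Pass 2: new imports -- added lines that bring in a dependency.
    new_imports = [
        line[1:].strip()
        for line in diff_text.splitlines()
        if line.startswith("+")
        and not line.startswith("+++")
        and re.match(r"\+\s*(import|from)\s", line)
    ]
    # Pass 3 setup: did any test file change at all?
    tests_touched = any("test" in f for f in files)
    return {
        "files": files,
        "new_imports": new_imports,
        "tests_touched": tests_touched,
    }

diff = """\
--- a/payments/handler.py
+++ b/payments/handler.py
@@ -1,3 +1,4 @@
+import smtplib
 def charge(order):
     pass
"""
report = scan_diff(diff)
print(report["files"])         # ['payments/handler.py']
print(report["new_imports"])   # ['import smtplib']
print(report["tests_touched"]) # False
```

A payment handler pulling in `smtplib` with no test changes is exactly the kind of surprise the scan is meant to flag before you read any lines in detail.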
The Autocomplete Trap
There’s a specific, seductive failure mode the chapter names the autocomplete trap. The agent produces code that’s locally plausible. It compiles. It passes the tests that were written before the change. It looks, on impressionistic scan, like the kind of thing one would have written oneself. So you accept it. And then another piece. And then another.
Hours later, you look up and realize a subsystem has grown by two thousand lines, none of which you have deeply engaged with. The subsystem works. It might even be well-designed. But you could not, if asked, defend any specific decision in it. You are no longer the author. You are an approver. And when the subsystem breaks in production, the learning that would normally have happened during authorship did not happen. You’re debugging a stranger’s code.
“The autocomplete trap is insidious because it feels like productivity. Engagement must scale with generation. When generation is cheap, engagement becomes the scarce resource — and the engineer who cannot pace their engagement cannot pace their project.”
The chapter opens with the story of a founder who built, in a single weekend, a 40,000-line prototype of a logistics platform. He raised on the demo on Monday. By Thursday the codebase was unmaintainable — not vaguely, but precisely: he could not reliably make any change without breaking something distant. Two different modules had independently invented an “Order” type with subtly different fields. Three configuration systems fought each other. The tests all passed, because each test had been generated alongside the feature it tested and shared its assumptions. Recovery cost him a month of runway and a week of sleep.
Decision Fatigue Is the Silent Killer
Modern agentic workflows produce a stream of small decisions. Accept this diff? Approve this plan? Is this test good enough? Every decision carries a cognitive cost, and costs accumulate.
The parallels from other domains are sobering. Parole boards grant parole less often late in the day. Doctors order unnecessary tests more often after long shifts. I haven’t seen a good academic study on AI-coding decision fatigue yet, but the anecdotal evidence from teams I’ve worked with is unambiguous: the diffs that cause production incidents are disproportionately approved in the last two hours of the workday.
The defense is structural, not heroic. Don’t rely on willpower to stay sharp; design the workflow so that low-stakes decisions are auto-approved and high-stakes decisions are concentrated in the windows where you’re fresh. Treat decision quality as a resource to be budgeted, not a personality trait.
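One way to make the defense structural is a triage gate in front of the review queue. The path lists, line thresholds, and the 16:00 cutoff below are hypothetical; the point is that the routing rules, not willpower, decide when a diff gets your attention:

```python
from datetime import datetime

# Hypothetical risk tiers -- real thresholds are team-specific.
LOW_RISK_PATHS = ("docs/", "README")
HIGH_RISK_PATHS = ("payments/", "auth/", "migrations/")

def triage(changed_files, lines_changed, now=None):
    """Route a diff: auto-approve the trivial, send the risky to a
    fresh-attention window, and defer everything else past a
    late-day cutoff instead of deciding while depleted."""
    now = now or datetime.now()
    if all(f.startswith(LOW_RISK_PATHS) for f in changed_files) and lines_changed < 20:
        return "auto-approve"
    if any(f.startswith(HIGH_RISK_PATHS) for f in changed_files):
        return "queue-for-morning-review"
    if now.hour >= 16:  # budget exhausted: no new decisions today
        return "defer-to-tomorrow"
    return "review-now"

print(triage(["docs/intro.md"], 5))          # auto-approve
print(triage(["payments/handler.py"], 120))  # queue-for-morning-review
```

Note that high-risk paths are routed before the time check: a payments change never gets approved at 5 p.m. no matter how alert you feel.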
The Seven Habits of Engaged Engineers
The chapter catalogs seven habits shared by the engineers who do not fall into the autocomplete trap. I’ll name them; the book has the full treatment of each.
- Verbalize. When reading a diff, talk to yourself or a rubber duck. Silent reading is too easy to fake.
- Predict before you look. Before opening a test file, ask yourself what tests you would expect to see. Then compare.
- Run the thing. Check out the branch. Start the server. Click the button. A bug that slips through a read often trips over a click.
- Ask the agent to explain. A good agent answers in a way that reveals its assumptions. A bad answer is a signal to dig deeper.
- Keep a small personal log. A few lines per session. The log becomes a compressed memory that resists disengagement.
- Take breaks before big decisions. Respect your cognitive budget.
- Reject more than you accept. The engineers producing the highest-quality output often have 3:1 rejection-to-acceptance ratios or higher. They’re not accepting their way to victory. They’re rejecting their way to it.
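Two of these habits — the personal log and the rejection ratio — compose naturally. A minimal sketch of what a session log might look like; the JSON-lines format and the ratio math are my guesses at a workable implementation, not the book's:

```python
import json
from datetime import datetime, timezone

class SessionLog:
    """A few lines per session: what was accepted, what was
    rejected, and why. The compressed memory that resists
    disengagement."""

    def __init__(self):
        self.entries = []

    def record(self, decision: str, summary: str):
        assert decision in ("accept", "reject")
        self.entries.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "decision": decision,
            "summary": summary,
        })

    def rejection_ratio(self) -> float:
        """Rejections per acceptance (habit 7's 3:1 benchmark)."""
        rejects = sum(1 for e in self.entries if e["decision"] == "reject")
        accepts = len(self.entries) - rejects
        return rejects / accepts if accepts else float("inf")

    def dump(self) -> str:
        return "\n".join(json.dumps(e) for e in self.entries)

log = SessionLog()
log.record("reject", "agent duplicated Order type in shipping module")
log.record("reject", "tests mirrored the implementation's own assumptions")
log.record("reject", "config change added a third settings source")
log.record("accept", "retry logic on webhook delivery, verified by hand")
print(f"{log.rejection_ratio():.1f}")  # 3.0
```

The ratio is a lagging indicator, not a target to game; its value is that a week of entries makes disengagement visible in your own records.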
The Explain-It-Back Test
A simple, cheap practice the chapter closes with: before merging any non-trivial agent-generated change, close the diff. Open a blank document. Explain the change in your own words. If you can’t, you don’t understand it well enough to merge it.
The test takes five minutes. It prevents the majority of material-disengagement failures. It is almost never done, because it feels redundant — you just read the code, of course you can explain it. Actually trying reveals how much of the “reading” was pattern-matching rather than comprehension.
Next up — Chapter 3: Context Momentum and Path Dependence. Once you’re engaged, a second problem arises: the first prompt you give the agent has outsized influence over everything that comes after. Chapter 3 is about how context drifts, how conventions rot, and how to keep a codebase steerable even when it’s being touched by a thousand agent sessions a month.
📖 Want the full picture?
The chapter covers the full five-pass impressionistic scanning method, the seven anti-patterns of disengagement (including the “I’ll read it later” merge and the model-as-oracle trap), the on-call engineer case study where material disengagement turned into a 4 a.m. incident, and the two-week comprehension sprint that saves fintech teams from unnecessary rewrites.
Sho Shimoda
I share and organize what I’ve learned and experienced.