The Engineering of Intent, Chapter 9: Advanced Context Engineering

This is Part 9 of a series walking through my book The Engineering of Intent. In the previous chapter, we looked at the four architectural pillars — Vibes, Specs, Skills, Agents. This chapter zooms in on Specs in their most concrete form: the context the agent actually reads on every turn.


The Highest-Leverage Activity in AI-Native Development

Context engineering is the practice of shaping the information an agent sees at the start of each session. Done well, it is the highest-leverage activity in AI-native development. Done poorly, it is the single largest source of quality problems.

Chapter 9 walks through the concrete artifacts that make context engineering a practice rather than a vibe: the Context Pack, the Layered Prompt, and the economics that determine how much of each you can actually afford to ship.


The Context Pack

A Context Pack is a curated set of files loaded into every agent session for a project. It includes the Specs, the conventions, and a carefully chosen sample of recent code. It should be stable enough to be memorized, small enough to fit comfortably in the model’s working context, and current enough to be trusted.
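As a rough illustration, a Context Pack can be as little as an ordered manifest of files assembled at session start, with a hard size ceiling so "small enough to fit comfortably" is enforced rather than hoped for. The file names and character budget below are hypothetical, not from the book:

```python
from pathlib import Path

# Hypothetical pack manifest: Specs first, then conventions, then code samples.
PACK_FILES = [
    "specs/overview.md",
    "docs/conventions.md",
    "examples/recent_service.py",
]
MAX_PACK_CHARS = 32_000  # rough proxy for "fits comfortably in working context"

def load_context_pack(root: str) -> str:
    """Concatenate the pack files in order, failing loudly if the pack is too big."""
    parts = []
    for rel in PACK_FILES:
        path = Path(root) / rel
        parts.append(f"## {rel}\n{path.read_text()}")
    pack = "\n\n".join(parts)
    if len(pack) > MAX_PACK_CHARS:
        raise ValueError(f"Context Pack too large: {len(pack)} chars")
    return pack
```

Failing loudly on size is deliberate: it forces the pruning decision back onto a human instead of silently shipping a bloated pack.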

The Layered Prompt

Three layers, always in this order: System (the immutable rules), Context (the pack), Task (the specific ask). Teams that respect this layering produce reliable outputs. Teams that collapse the layers into one long prompt produce erratic outputs. The book has the full template.
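A minimal sketch of the layering (the section markers and wording here are illustrative placeholders, not the book's full template):

```python
def build_prompt(system_rules: str, context_pack: str, task: str) -> str:
    """Assemble the three layers in the fixed order: System, Context, Task."""
    return "\n\n".join([
        f"# System\n{system_rules}",   # immutable rules, never edited per task
        f"# Context\n{context_pack}",  # the Context Pack for this project
        f"# Task\n{task}",             # the specific ask for this session
    ])
```

Keeping the layers as separate arguments makes it structurally hard to collapse them into one long prompt, which is exactly the failure mode the layering exists to prevent.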


More Context Is Not Better Context

“Every token in the context costs latency and money. A team ran an A/B test: half of sessions got a full 25K-token Context Pack, half got a curated 8K-token version with only current conventions and the five most relevant code examples. The curated version produced higher-quality code by every measure the team tracked — at one-third the cost.”

Context is economic. A disciplined team monitors token cost per session and prunes aggressively. Common wins: remove examples the agent is ignoring, remove historical notes describing abandoned directions, summarize rather than quote.
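One way to make that monitoring concrete is a per-session size report that ranks pack sections by estimated cost. The four-characters-per-token ratio is a common rule of thumb for English text, and the budget figure is an assumption, not a number from the book:

```python
def estimate_tokens(text: str) -> int:
    """Crude estimate: roughly 4 characters per token for English prose and code."""
    return len(text) // 4

def pack_report(sections: dict[str, str], budget_tokens: int = 8_000) -> list[str]:
    """Rank sections by size and flag everything past the cumulative budget."""
    sized = sorted(sections.items(), key=lambda kv: estimate_tokens(kv[1]), reverse=True)
    report, total = [], 0
    for name, text in sized:
        tokens = estimate_tokens(text)
        total += tokens
        flag = "  <- prune candidate" if total > budget_tokens else ""
        report.append(f"{name}: ~{tokens} tokens{flag}")
    return report
```

Running a report like this per session turns "prune aggressively" from a slogan into a standing item in code review.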

💡 Key idea: The best Context Pack test is the one-page onboarding test. Can a new engineer, reading only the pack, make a correct change on their first day? If yes, the pack is adequate. If no, the pack is lying about your codebase. Treat the Context Pack as human onboarding material that happens to also be consumed by machines.

The Three Anti-Patterns

  • The kitchen sink pack — everything included because nobody wants to decide what’s relevant. Agents drown.
  • The stale pack — written six months ago, never updated. Agents reference removed APIs.
  • The prose-heavy pack — long, discursive explanations where concise rules would do. Agents read past the actual constraints.
⚠ Warning: Context Rot — the gradual divergence between what the pack says and what the codebase actually is — is the quietest bug in your agent workflow. The symptom is agents producing subtly wrong code that references patterns no longer in use. The cause is discipline failure. The cure is ruthless: update the pack with every material change. Make it part of the definition of done.
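One cheap guard against Context Rot is a CI check that every identifier the pack mentions still exists somewhere in the codebase. The backtick-quoting convention and the Python-only scan below are assumptions for the sketch, not a prescription from the book:

```python
import re
from pathlib import Path

def find_rotten_references(pack_text: str, repo_root: str) -> list[str]:
    """Return backtick-quoted names in the pack that appear nowhere in the repo."""
    names = set(re.findall(r"`([A-Za-z_][A-Za-z0-9_.]+)`", pack_text))
    source = "\n".join(p.read_text() for p in Path(repo_root).rglob("*.py"))
    return sorted(n for n in names if n not in source)
```

Wired into CI so the build fails when the list is non-empty, this makes "update the pack" part of the definition of done mechanically, not just culturally.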

Next up — Chapter 10: The Five-Layer Quality Gate Stack. Once the context is right and the plan is confirmed, Part IV of the book turns to what catches the inevitable mistakes before they reach production. Chapter 10 walks through the defense-in-depth stack — linting, types, security scans, test synthesis, and agentic E2E — and how to tune each layer for your team’s risk tolerance.


📖 Want the full picture?

The chapter covers the full Context Pack template, the three-layer prompt structure with examples, the A/B test methodology so you can run your own, the three anti-patterns with concrete fixes, and the one-page onboarding test as a practical quality gate for your context.

Get The Engineering of Intent on Amazon →

2026-04-25

Sho Shimoda

I share and organize what I’ve learned and experienced.