The Engineering of Intent, Chapter 7: The GenDD Execution Loop

This is Part 7 of a series walking through my book The Engineering of Intent. In the previous chapter, we looked at orchestration — how many agents can work together when the task is mechanical enough. This chapter opens Part III of the book: the methodology that ties the whole thing together. Generative-Driven Development, or GenDD.


A Five-Step Loop That Replaces Your Ceremony Set

Generative-Driven Development replaces the traditional Agile ceremony set with a tighter, five-step loop: Context → Plan → Confirm → Execute → Validate. The loop is fractal. It applies at the level of a line change, a feature, or a sprint. The cadence varies. The structure does not.
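The shape of the loop, and the human/AI handoff at each step, can be written down directly. A minimal sketch — every name here (the `Context` fields, the callables) is illustrative, not an API from the book:

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    # Step 1 — the human-provided seed (fields are illustrative)
    intent: str
    constraints: list = field(default_factory=list)
    acceptance_criteria: list = field(default_factory=list)

def gendd_loop(context, propose_plan, confirm, execute, validate):
    """One pass of Context → Plan → Confirm → Execute → Validate.
    The callables stand in for the agent (propose_plan, execute)
    and the human (confirm, validate)."""
    plan = propose_plan(context)      # Step 2: Plan (AI)
    if not confirm(plan):             # Step 3: Confirm (Human)
        return None                   # rejected — revise the seed, try again
    change = execute(plan)            # Step 4: Execute (AI)
    if not validate(change):          # Step 5: Validate (Human + AI)
        return None                   # gates failed — do not merge
    return change                     # merge

# Toy run with trivial stand-ins:
ctx = Context(intent="add retry to the payments client",
              constraints=["no new dependencies"])
result = gendd_loop(
    ctx,
    propose_plan=lambda c: ["1. add retry wrapper", "2. update client tests"],
    confirm=lambda plan: all("as needed" not in step for step in plan),
    execute=lambda plan: {"diff": "retry wrapper", "tests": "updated"},
    validate=lambda change: change["tests"] == "updated",
)
```

The fractal property falls out of the shape: the same function applies to a line change or a sprint, because only the callables change, never the structure.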

Chapter 7 walks through each step, explains what goes wrong when it's skipped, and works a full example through the complete loop. Here's the compressed tour.


Step 1 — Context (Human)

The human provides the seed. This is not a specification — specifications will be derived. It’s the authoritative starting point: the intent, the constraints, the acceptance criteria, and the relevant parts of the existing state.

Good context is scoped tightly. Too broad, and the agent has to guess which of many interpretations is intended. Too narrow, and it starves the agent of the information it needs to avoid collisions with existing code. My rule: include everything the agent could not reasonably be expected to discover on its own in the time available. Domain vocabulary, non-obvious constraints, pointers to exemplar code. Skip general knowledge the model already has and cheaply discoverable facts like file layout.
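That scoping rule can be made concrete with a hypothetical context seed. Every field and value below is invented for illustration; the point is what gets included versus what is deliberately left out:

```python
# A hypothetical context seed. The scoping rule decides each field:
# everything the agent could not cheaply discover on its own is in;
# general knowledge and discoverable facts are deliberately out.
context_seed = {
    "intent": "Add idempotency keys to the refund endpoint",
    "constraints": [
        "Keys must survive a process restart (no in-memory store)",
        "No breaking changes to the public refund API",
    ],
    "acceptance_criteria": [
        "Replaying the same refund request returns the original response",
    ],
    # Domain vocabulary the model would otherwise have to guess at:
    "domain_vocabulary": {"idempotency key": "client-supplied dedupe token"},
    # A pointer to exemplar code, not a pasted copy of it:
    "exemplar_code": ["payments/capture.py"],
    # Deliberately absent: file layout, HTTP basics — cheaply
    # discoverable or already known to the model.
}
```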


Step 2 — Plan (AI)

Before a single character of production code is written, the agent produces a plan. A numbered list of steps. An explicit enumeration of the files to be touched. A stated set of assumptions.

A good plan is surprisingly boring. It reads like a pull request description before the code exists. It is also falsifiable — each step either will or will not achieve its stated outcome, and the human can say so without reading any code.

💡 Key idea: A bad plan has tells. “Refactor as needed” without specifying what. “Update tests” without specifying which. A step that requires a migration but doesn’t include the migration. These tells are reasons to reject the plan and ask for a revision, not reasons to proceed and hope.
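Some of these tells are mechanical enough to lint for before a human ever reads the plan. A minimal sketch — the regex patterns are illustrative examples, not a complete tell catalog:

```python
import re

# Patterns matching the tells above: vague steps like "refactor as
# needed", or "update tests" that never names a test.
TELLS = [
    r"\brefactor as needed\b",
    r"\bupdate tests\b(?!.*\btest_)",  # "update tests" without naming any
]

def plan_tells(plan_steps):
    """Return the steps that match a known tell; an empty list means
    no automatic reason to reject (the human still reads the plan)."""
    hits = []
    for step in plan_steps:
        for pattern in TELLS:
            if re.search(pattern, step, flags=re.IGNORECASE):
                hits.append(step)
                break
    return hits

bad_plan = ["1. Add the new endpoint",
            "2. Refactor as needed",
            "3. Update tests"]
flagged = plan_tells(bad_plan)  # flags steps 2 and 3
```

The lint only rejects fast on the obvious tells; it does not replace the human read, it just saves a round trip.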

Step 3 — Confirm (Human)

The human reads the plan and either confirms, modifies, or rejects. This is the highest-leverage moment in the loop, and the moment most teams under-invest in.

Confirmation is not rubber-stamping. It’s an active act of intent validation. Is the plan consistent with the actual goal? Does it honor the hard constraints? Does it respect the existing architecture? If I were a senior reviewer seeing this PR description for the first time, would I expect this plan to produce a good change?

“Bugs caught at confirmation are orders of magnitude cheaper than bugs caught at validation. Five minutes spent on a plan saves thirty minutes on a bad execute. Thirty minutes spent on a plan saves a week of debugging a wrong-headed feature.”
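The four confirmation questions stay honest when they are treated as an explicit checklist that must be answered, not skimmed. A small sketch — `answer` stands in for the human's judgment and is invented here for illustration:

```python
# The confirmation questions from the section above, as data.
CONFIRM_CHECKLIST = [
    "Is the plan consistent with the actual goal?",
    "Does it honor the hard constraints?",
    "Does it respect the existing architecture?",
    "Would a senior reviewer expect this plan to produce a good change?",
]

def confirm(plan, answer):
    """Return True only when every question gets a yes — anything
    less is a modify or a reject, never a rubber stamp."""
    return all(answer(question, plan) for question in CONFIRM_CHECKLIST)

# A skeptical reviewer rejects; confirmation is never the default.
verdict = confirm(["1. do everything"], answer=lambda q, p: False)
```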


Step 4 — Execute (AI)

The agent executes the confirmed plan. In a mature workflow, this step is hands-off: the human isn’t watching; the agent isn’t asking unnecessary questions. Execution produces a diff, corresponding test updates, and any required documentation changes. Complete, reviewable artifact out.

If the agent encounters something the plan did not anticipate — a missing file, a missing dependency, a test broken before the change — it should stop and surface the issue, not silently adapt. Silent adaptation is where drift is born. The loop is designed to bring surprises back to the human.
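The stop-and-surface rule amounts to raising an exception rather than falling back. The helper and path below are hypothetical; real agents surface deviations through their own channels:

```python
from pathlib import Path

class PlanDeviation(Exception):
    """Raised when execution meets something the confirmed plan
    did not anticipate."""

def execute_step(files_to_touch):
    """Sketch of the stop-and-surface rule (hypothetical helper):
    a missing file is raised back to the human, never silently
    created or worked around."""
    for path in files_to_touch:
        if not Path(path).exists():
            raise PlanDeviation(
                f"Plan assumed {path} exists, but it does not. "
                "Stopping for a revised plan instead of adapting silently."
            )
        # ... apply the planned edit to this file ...

try:
    execute_step(["payments/refund_client.py"])  # hypothetical path
except PlanDeviation as surprise:
    print(surprise)  # the surprise goes back to the human
```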


Step 5 — Validate (Human + AI)

Validation is both automated and human. The automated part runs the quality gate stack (Chapter 10). The human part is the impressionistic scan from Chapter 2. Only when both pass does the change merge.

Validation is the second most under-invested step, after Confirm. Teams that treat validation as “did the tests pass?” miss the point. The human job at validation is the meta-check: did we verify the right things?
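One way to keep the meta-check from collapsing into "did the tests pass?" is to make it a separate, explicit gate. A minimal sketch with invented gate names — the real automated stack is the book's Chapter 10 material:

```python
def validate(change, gates, human_meta_check):
    """Step 5 sketch. `gates` is a list of (name, predicate) — the
    automated stack. `human_meta_check` is the human's question,
    "did we verify the right things?", not "did the tests pass?"."""
    failures = [name for name, check in gates if not check(change)]
    if failures:
        return False, failures
    if not human_meta_check(change):
        return False, ["meta-check: verified the wrong things"]
    return True, []

# Invented gate names for illustration:
gates = [
    ("unit tests", lambda c: c["tests_passed"]),
    ("lint", lambda c: c["lint_clean"]),
]
change = {"tests_passed": True, "lint_clean": True, "covers_criteria": True}
ok, failures = validate(change, gates,
                        human_meta_check=lambda c: c["covers_criteria"])
```

Note that the meta-check can fail even when every automated gate passes — exactly the case the paragraph above warns about.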


From One-Hour Cycles to Ten-Minute Cycles

⚠ Worth noting: A payments team I worked with was averaging one-hour cycle times — from intent to shippable change. After six weeks of disciplined GenDD practice, that number dropped to ten minutes on routine changes and twenty-five minutes on complex ones. The improvements came almost entirely from Confirm and Validate. Context and Plan were unchanged. Execute was actually slightly slower because the agent was forced to stop and ask more often. The net effect was a large win.

That last detail is important. GenDD is not about speeding up the agent. It’s about investing the human’s time in the places where it compounds — Confirm at the start, Validate at the end — and letting the middle run fast. Teams that try to speed up Execute end up in the autocomplete trap. Teams that invest at the edges of the loop get the velocity they wanted without the debt.


Next up — Chapter 8: The Four Pillars of AI Architecture. The GenDD loop sits on top of four architectural primitives — Vibes, Specs, Skills, and Agents — and most teams over-invest in one pillar and under-invest in the others. Chapter 8 is about how to tell which pillar you’re short on and how to rebalance.


📖 Want the full picture?

The chapter works a complete example through the five-step loop end to end, shows the exact structure of a good plan vs. a bad one, covers the validation meta-check in depth, and presents the full payments-team case study with the before/after cycle-time breakdown.

Get The Engineering of Intent on Amazon →

2026-04-23

Sho Shimoda

I share and organize what I’ve learned and experienced.