# Original PRD + Architecture (detailed context)
PRD: "Users can update email. Verify before replacing.
      Send confirmation to BOTH old and new addresses.
      If new email bounces, keep old email.
      Limit: 3 verification attempts per day."

Arch: "SendGrid templates. 32-byte tokens in Redis, 24h TTL.
       User table: email_pending column for staged changes."

# After synthesis step
"Implement email update with verification.
 Use SendGrid and Redis for tokens."

# What got lost
- Send to BOTH addresses (security)
- Bounce handling (edge case)
- Rate limit (abuse prevention)
- 24h TTL (timing)
- email_pending column (schema)

Every time you pass context through an LLM, you lose information. The synthesized version sounds clean but dropped five implementation details. An agent building from it would have to guess.
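For contrast, here is roughly what an implementation honoring all five details looks like. This is an illustrative Python sketch, not the methodology's code: an in-memory class stands in for Redis (SETEX/GET/INCR semantics), a plain `send` callback stands in for SendGrid, and function names like `request_email_change` are hypothetical. Only `email_pending`, the 32-byte token, the 24h TTL, and the 3/day limit come from the docs above.

```python
import secrets
import time

DAY = 86_400  # seconds

class MemoryStore:
    """In-memory stand-in for Redis SETEX/GET/INCR so the sketch runs anywhere."""
    def __init__(self):
        self.data = {}  # key -> (value, expires_at)

    def setex(self, key, ttl, value):
        self.data[key] = (value, time.time() + ttl)

    def get(self, key):
        value, expires_at = self.data.get(key, (None, 0))
        return value if expires_at > time.time() else None

    def incr_daily(self, key):
        value, expires_at = self.data.get(key, (0, time.time() + DAY))
        self.data[key] = (value + 1, expires_at)
        return value + 1

store = MemoryStore()

def request_email_change(user, new_email, send):
    # PRD: limit 3 verification attempts per day.
    if store.incr_daily(f"attempts:{user['id']}") > 3:
        raise RuntimeError("verification attempt limit reached")
    # Arch: stage the change in email_pending; the old email stays live.
    user["email_pending"] = new_email
    # Arch: 32-byte token in (stand-in) Redis with a 24h TTL.
    token = secrets.token_urlsafe(32)
    store.setex(f"verify:{token}", DAY, user["id"])
    # PRD: confirmation goes to BOTH old and new addresses.
    send(user["email"], "Your email address is being changed")
    send(new_email, f"Confirm your new address: {token}")
    return token

def confirm_email_change(user, token, bounced=False):
    if store.get(f"verify:{token}") != user["id"]:
        raise RuntimeError("invalid or expired token")
    if bounced:
        # PRD: if the new email bounces, keep the old one.
        user["email_pending"] = None
    else:
        user["email"] = user["email_pending"]
        user["email_pending"] = None
    return user["email"]
```

Every branch here maps to one of the five dropped details. An agent working from the synthesized spec would have to invent each of them.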

Option A: Direct (less loss)
Product Requirements + Spec → Code Agent → Code

Option B: Extra step (more loss)
Product Requirements → Synthesis Agent → Spec → Code Agent → Code

Option B looks cleaner. You're "polishing" the inputs. But every LLM pass loses information. Option B is strictly worse.

## Why Each Step Loses Information

Shannon's data processing inequality: if X → Y → Z forms a Markov chain, then I(X; Z) ≤ I(X; Y). Processing can only preserve or destroy information, never create it.
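The inequality can be checked numerically on a toy chain. In this sketch, Y is a faithful copy of a 2-bit source X, and Z is a "summary" that keeps only the first bit; the mutual-information helper is generic:

```python
import math

def mutual_information(joint):
    """I(A;B) in bits, from a dict {(a, b): probability}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0) + p
        pb[b] = pb.get(b, 0) + p
    return sum(p * math.log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

# X: a uniform 2-bit source. Y = X (agent reads the docs verbatim).
# Z = first bit of Y (a "summary" that drops the second bit).
xs = [(0, 0), (0, 1), (1, 0), (1, 1)]
joint_xy = {(x, x): 0.25 for x in xs}     # Y is a perfect copy of X
joint_xz = {(x, x[0]): 0.25 for x in xs}  # Z keeps only the first bit

i_xy = mutual_information(joint_xy)  # 2.0 bits
i_xz = mutual_information(joint_xz)  # 1.0 bit
assert i_xz <= i_xy  # I(X;Z) <= I(X;Y): summarizing cannot add information
```

No downstream processing of Z can recover the dropped bit; the only fix is to go back to X.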

What the synthesis agent does to your docs:

- Summarizes sections (detail lost)
- Paraphrases "must" into "should" (meaning shifted)
- Drops "irrelevant" details (edge cases lost)
- Guesses at ambiguities (sometimes wrong)

Cascading drift:
PRD + Arch → "Unified Spec" (some info lost) → "Task Breakdown" (more lost) → Code (working from degraded intent).

The synthesis agent can't know what the code agent needs. It compresses based on general heuristics. Details it drops as noise might be exactly what handles an edge case downstream.

"But My Docs Are Messy"

Modern LLMs handle messy documents fine. When you "clean up" docs through an LLM, you're not removing noise. You're removing information you can't distinguish from noise.

## When Extra Steps Help vs Hurt

The exception: steps that add genuinely new information.

| Adds information (good) | Loses information (bad) |
| --- | --- |
| Human resolves conflict between PRD and arch | LLM "unifies" two docs into one |
| Domain expert adds implicit constraints | LLM summarizes for "clarity" |
| Generating tests (new artifact) | Restructuring docs to "help the agent" |
| Validation that flags contradictions | Paraphrasing requirements |

A human writing a spec in the Matrix Methodology adds signal: mapping "user enters address" to "REST controller with @Valid request body." That mapping didn't exist in either source doc. An LLM "unifying" documents can only rearrange what's already there, losing some of it along the way.
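One cheap form of validation that adds signal rather than losing it: mechanically flag which tracked requirement terms from the source docs never reached the synthesized spec. Case-insensitive keyword matching is a crude, hypothetical heuristic, not a real contradiction checker, but it catches every drop from the opening example:

```python
def missing_details(source_docs, synthesized_spec, tracked_terms):
    """Return the tracked terms that appear in the source docs but are
    absent from the synthesized spec (case-insensitive substring match)."""
    source = " ".join(source_docs).lower()
    spec = synthesized_spec.lower()
    return [t for t in tracked_terms
            if t.lower() in source and t.lower() not in spec]

prd = ("Users can update email. Verify before replacing. Send confirmation "
       "to BOTH old and new addresses. If new email bounces, keep old email. "
       "Limit: 3 verification attempts per day.")
arch = ("SendGrid templates. 32-byte tokens in Redis, 24h TTL. "
        "User table: email_pending column for staged changes.")
spec = ("Implement email update with verification. "
        "Use SendGrid and Redis for tokens.")

dropped = missing_details(
    [prd, arch], spec,
    ["both", "bounces", "3 verification", "24h TTL", "email_pending"])
print(dropped)  # flags all five dropped details; "Redis" would pass
```

Crucially, this check flags losses for a human to fix; it never rewrites the docs itself.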

## The Rule

Optimal pipeline = shortest pipeline.
Your intent → Agent → Output. With 200K-token context windows, your docs probably fit. Every extra LLM step is another round of telephone.
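A quick sanity check before reaching for a synthesis step. The ~4 characters/token ratio used here is a rough rule of thumb for English text, not a tokenizer count, and the function name is illustrative:

```python
def roughly_fits(docs, context_window=200_000, chars_per_token=4):
    """Estimate the token count of raw docs and whether they fit the window."""
    est_tokens = sum(len(d) for d in docs) // chars_per_token
    return est_tokens, est_tokens <= context_window

# Two docs totalling 400K characters: an estimated ~100K tokens.
tokens, fits = roughly_fits(["x" * 300_000, "y" * 100_000])
print(tokens, fits)  # well under 200K: feed both docs in directly
```

If the estimate says the raw docs fit, synthesis buys you nothing and costs you detail.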

Feed agents the rawest context you can. Don't preprocess for clarity. Don't synthesize for convenience. Fewer steps, richer context, less telephone.