Context Loss and How to Prevent It

You told the AI exactly what you wanted. An hour later, it forgot. Here’s why that happens and how to prevent it.

Working with Claude Code on a large refactoring project, I was migrating a Node.js codebase from callbacks to async/await. Early in the session, I established the pattern: use async/await consistently, wrap legacy callback APIs with util.promisify, and maintain backward compatibility by keeping the original function signatures.
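
To make that concrete, the convention looked roughly like this. (loadConfig is an invented example, not a function from the actual codebase, and the backward-compatibility details around signatures are trimmed for brevity.)

    const util = require('util');
    const fs = require('fs');

    // Before: the legacy callback style being migrated away from
    function loadConfig(path, callback) {
      fs.readFile(path, 'utf8', (err, raw) => {
        if (err) return callback(err);
        callback(null, JSON.parse(raw));
      });
    }

    // After: legacy API wrapped once with util.promisify, callers use async/await
    const readFile = util.promisify(fs.readFile);

    async function loadConfig(path) {
      const raw = await readFile(path, 'utf8');
      return JSON.parse(raw);
    }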

The conversation was long. Dozens of files. At some point, the context window filled up and the system compacted older content into a summary.

After the compaction, Claude continued working. It converted callbacks correctly. It even added the pattern to the project’s CLAUDE.md file. But in the final batch of files, it started using .then() chains instead of async/await — violating the convention we’d established hours earlier.

The irony: Claude was documenting a coding convention while simultaneously violating it. The connection between “this is our pattern” and “apply the pattern to what I’m touching” got lost in the compaction.
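
In code terms, using the same invented example from above, the drift looked something like this: still promise-based, still functionally correct, just not the style we had agreed on.

    // Later files: works, but uses a .then() chain instead of async/await
    const readFile = util.promisify(fs.readFile);

    function loadConfig(path) {
      return readFile(path, 'utf8').then((raw) => JSON.parse(raw));
    }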

Why context loss happens

AI assistants have finite context windows. When conversations get long, something has to give. The system summarizes or drops older content to make room for new content.

Summarization captures the gist but loses nuance. “User wants async/await pattern with promisify wrappers” might summarize to “refactoring to modern async patterns” — accurate but not actionable. The specific convention disappears.

Dropping is worse. Older messages simply aren’t available to the model anymore. Instructions given early in a long session may not exist in the assistant’s view by session end.

This isn’t a bug — it’s architecture. Context windows are large but not infinite. Trade-offs have to be made.

What survives compaction

Not all context is equally fragile. Some information reliably survives:

Explicit file content. If the instruction is in a CLAUDE.md file that gets loaded at session start, it survives. The file is read fresh each time.

Recent conversation. The most recent messages are preserved. Instructions given just before the action are safer than instructions given an hour ago.

System prompts. Content in the system layer persists for the session. (You don’t control this layer directly, but it exists.)

Repeated information. If you’ve mentioned something multiple times across the conversation, it’s more likely to appear in summaries.

What gets lost

Early instructions. Things you said at the start of a long conversation are most vulnerable to compaction.

Nuanced rules. “Use async/await except for these specific legacy APIs that must remain callbacks” compresses worse than “always use async/await.”

Implicit connections. “Apply this pattern to everything you touch” is implicit. If the summary doesn’t make the connection explicit, the assistant may not either.

One-time mentions. Information stated once, early on, is least likely to survive.

Prevention strategies

Write it to a file. If a coding convention matters for more than the immediate moment, add it to CLAUDE.md or a project documentation file. File content gets loaded fresh; conversation content gets compacted.
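
What such an entry might look like (illustrative wording, not the actual contents of my file):

    ## Async conventions
    - Use async/await for all asynchronous code; no .then() chains.
    - Wrap legacy callback APIs with util.promisify.
    - Keep original function signatures for backward compatibility.
    - Apply these conventions to every file touched during the refactor.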

Repeat critical instructions. When starting a new phase of work, restate the important constraints. “Remember, we’re using async/await with promisify wrappers” takes seconds and prevents errors.

Use explicit task lists. When you give the AI a coding pattern during a multi-file refactor, include “apply [pattern] to all modified files” as an explicit task. Don’t assume the connection is obvious.

Verify before committing. Before accepting changes, grep for violations. If you specified async/await, run grep -r "\.then(" src/ to check for Promise chains. The AI may have missed instances.
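
The same check can run as a small script, say in a pre-commit hook. This is a sketch; the src/ directory and .js extension are assumptions about the project layout.

    // check-async-style.js: flag .then( chains under src/ for manual review (sketch)
    const fs = require('fs');
    const path = require('path');

    // Recursively collect .js files under a directory
    function walk(dir, files = []) {
      for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
        const full = path.join(dir, entry.name);
        if (entry.isDirectory()) walk(full, files);
        else if (entry.name.endsWith('.js')) files.push(full);
      }
      return files;
    }

    let violations = 0;
    for (const file of walk('src')) {
      fs.readFileSync(file, 'utf8').split('\n').forEach((line, i) => {
        if (line.includes('.then(')) {
          console.log(`${file}:${i + 1}: ${line.trim()}`);
          violations += 1;
        }
      });
    }
    process.exit(violations > 0 ? 1 : 0);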

Treat CLAUDE.md as an active checklist. When adding a convention to the file, immediately check if current work violates it. The rule and the fix should happen together.

The failure mode in detail

Here’s how my incident unfolded:

  1. I established the async/await convention in conversation
  2. Claude acknowledged it
  3. We continued refactoring dozens of files
  4. Context compacted
  5. I asked to add the convention to CLAUDE.md
  6. Claude added it correctly
  7. Claude continued refactoring the remaining files using .then() chains

The summary likely captured something like “User established async patterns — convention added to CLAUDE.md.” Accurate, but it didn’t prompt Claude to check whether the files being refactored follow the convention.

The fix would have been any of:

  • Adding the convention to CLAUDE.md earlier, so it was loaded with each session
  • Restating the convention when we resumed refactoring
  • Including “verify async/await consistency” as an explicit task
  • Grepping for .then( violations before committing

What this means for workflows

Long conversations with AI assistants require different habits than short ones.

For short interactions — ask a question, get an answer — context loss isn’t an issue. The conversation fits comfortably in the window.

For long sessions — multi-file refactors, complex features, extended debugging — you need to account for compaction. The assistant’s view of the conversation changes over time. What you said two hours ago may not exist anymore.

The practical adjustment: externalize important context. Move it from conversation (transient) to files (persistent). The CLAUDE.md pattern exists precisely for this reason.

Checking your own work

When reviewing AI-assisted changes, especially after long sessions, check for:

Consistency violations. Patterns that were stated but not applied consistently across all changes.

Early instruction drift. Did the assistant follow early instructions, or did behavior change over time as context compacted?

Implicit connection failures. When you stated a convention, did you also explicitly connect it to the work being done? If not, verify that the assistant made the connection on its own.

A quick verification pass catches most context loss errors. The assistant did most of the work correctly — you’re just checking for the gaps.

The meta-lesson

Context loss is a feature of how AI assistants work, not a failure of any particular interaction. Knowing it happens lets you design workflows that survive it.

The investment in context infrastructure — CLAUDE.md files, project documentation, explicit task lists — isn’t about the AI’s limitations. It’s about building systems that work reliably across sessions, across compaction events, across the natural gaps in any long conversation.

Trust the capability. Design for the architecture.


This article is part of the synthesis coding series. For more on how AI memory works, see How Claude’s Memory Actually Works (And Why CLAUDE.md Matters).


Rajiv Pant is President of Flatiron Software and Snapshot AI, where he leads organizational growth and AI innovation. He is former Chief Product & Technology Officer at The Wall Street Journal, The New York Times, and Hearst Magazines. Earlier in his career, he headed technology for Condé Nast’s brands including Reddit. Rajiv coined the terms “synthesis engineering” and “synthesis coding” to describe the systematic integration of human expertise with AI capabilities in professional software development. Connect with him on LinkedIn or read more at rajiv.com.