In a project where 85% of code was AI-generated, the team spent their first week writing code by hand. That’s not a contradiction — it’s the pattern that made everything else work.
The most counterintuitive lesson from successful AI-assisted development: you have to write code by hand first.
I’ve seen this pattern repeatedly in my own work and in every serious case study I’ve encountered. Teams that try to generate everything from day one struggle. Teams that invest upfront in human-built foundations scale smoothly. The difference isn’t luck — it’s architecture.
The failed approach
The instinct when starting an AI-assisted project is obvious: describe what you want, let the AI generate it. Why write code yourself when the machine can do it faster?
OpenAI’s Sora team tried this. Their initial prompt: “Build the Sora Android app based on the iOS code. Go.”
The result was technically functional. The AI produced working code. But the product experience fell short. Single-shot generation produced unreliable results. The codebase lacked coherence. They had to start over.
I’ve made the same mistake. In early experiments with Claude Code, I’d direct Claude to generate complete features without establishing patterns first. Claude’s output worked in isolation but fought the existing codebase. Integration became painful. Each new feature felt like it was written by a different person with different opinions about how things should work.
What foundation-first looks like
The Sora team’s second approach: humans write the architecture by hand for the first week. No AI code generation. Just four engineers building the foundational patterns manually.
What they built:
- Dependency injection setup
- Navigation architecture
- Authentication flow
- Base networking layer
- A few representative features, implemented end-to-end
Critically, they documented patterns as they built them. Not for future humans reading documentation — for the AI that would scale the implementation.
Their reasoning, quoted directly: “The idea was not to make ‘something that works’ as quickly as possible, rather to make ‘something that gets how we want things to work.’”
The first week wasn’t about shipping features. It was about establishing what “correct” looked like in their codebase.
Why AI can’t do this
AI generates code by pattern matching against training data. It produces plausible code that resembles what it’s seen before. But it doesn’t know what you value. It doesn’t know your team’s conventions. It can’t infer your product strategy or architectural preferences from a prompt.
Think of it like hiring a senior engineer on their first day. They know how to code. They’ve built systems before. But they don’t know how your team works. They can’t guess your naming conventions, your error handling philosophy, your opinions about state management. They need examples.
The foundation-first pattern provides those examples. When you build representative features by hand, you’re creating a reference library for the AI. Every pattern you establish becomes something the AI can extrapolate from.
“Build this settings screen using the same architecture and patterns as this other screen you just saw” — that instruction works.
“Build a settings screen” — that produces generic code that may or may not fit your system.
The patterns I establish manually
When starting a new AI-assisted project, I build these things by hand before letting AI generate code:
Project structure. Where do files go? How are modules organized? What’s the naming convention? I create the directory structure and put a few real files in place.
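As a concrete illustration, a minimal layout I might lay down by hand before any generation begins (directory names here are illustrative, not prescriptive):

```text
src/
  features/
    settings/        # one folder per feature
  net/               # base networking layer lives here
  state/             # state-management conventions
tests/
  features/          # mirrors src/features/
CLAUDE.md            # the patterns file the AI reads at session start
```

Even a skeleton this small answers the questions the AI would otherwise guess at: where new files go and what sits next to what.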
Core abstractions. The base classes, interfaces, or types that other code will extend. If I’m building an API, the request/response patterns. If I’m building a CLI, the command structure. These set the shape everything else follows.
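To make this tangible, here is a minimal sketch of one such core abstraction in Python: a uniform result envelope that every service call returns. The `ApiResult` name and its fields are hypothetical conventions for illustration, not from any project described above.

```python
from dataclasses import dataclass
from typing import Generic, Optional, TypeVar

T = TypeVar("T")


@dataclass
class ApiResult(Generic[T]):
    """Hypothetical convention: every service call returns this envelope,
    so callers never need try/except around individual requests."""
    ok: bool
    data: Optional[T] = None
    error: Optional[str] = None

    @classmethod
    def success(cls, data: T) -> "ApiResult[T]":
        return cls(ok=True, data=data)

    @classmethod
    def failure(cls, error: str) -> "ApiResult[T]":
        return cls(ok=False, error=error)
```

Once one real feature uses this shape end-to-end, "return an `ApiResult` like the profile service does" is an instruction the AI can follow exactly.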
One complete feature. Not a stub — a real feature that touches every layer: database to API to UI, where applicable. This shows how the pieces connect.
CLAUDE.md with explicit patterns. I document the conventions in a file the AI reads automatically. Not general principles — specific patterns with examples.
Test patterns. How do tests work in this project? What assertions do we use? What’s the fixture strategy? One well-written test file teaches the AI how to write the rest.
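A sketch of what such a reference test file might establish, assuming hypothetical conventions of one factory per entity, arrange/act/assert sections, and behavior-named tests (the names and data are invented for illustration):

```python
# test_users.py -- reference test file establishing the project's conventions.

def make_user(**overrides):
    """Factory with sensible defaults; tests override only the fields
    that matter to the behavior under test."""
    user = {"id": 1, "name": "Ada", "active": True}
    user.update(overrides)
    return user


def test_inactive_users_are_excluded_from_roster():
    # Arrange: one active user, one inactive user
    users = [make_user(), make_user(id=2, name="Bob", active=False)]
    # Act: build the roster the way the feature code would
    roster = [u["name"] for u in users if u["active"]]
    # Assert: only the active user appears
    assert roster == ["Ada"]
```

One file like this, written by hand, tells the AI more about your testing philosophy than a page of abstract guidance.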
The time investment
The foundation phase typically takes 10-20% of project time. For a four-week project, that might be 3-4 days of manual work before AI generation kicks in.
This feels slow. The temptation is to start generating immediately and fix problems as they arise. But the math doesn’t work out. Problems compound. Inconsistent patterns create integration friction. Debugging AI-generated code that fights itself takes longer than building the foundation right.
The Sora team spent one week on foundations, then shipped in three weeks. If they’d skipped the foundation work, they estimate they’d still be debugging.
What goes in CLAUDE.md
The foundation includes documentation the AI reads at session start. I keep a CLAUDE.md file in each project with:
Architecture overview. Two paragraphs explaining what this project does and how it’s structured. Not comprehensive documentation — just enough context that the AI understands the system.
Explicit patterns. “We use X for state management. Here’s an example.” “Error handling follows this pattern. Here’s how.” Specific, not abstract.
File organization. Where do new files go? What’s the naming convention? How do modules relate?
What not to do. Patterns we’ve explicitly rejected. Approaches that look reasonable but don’t fit our system. The AI will suggest these otherwise.
I update the file as the project evolves. I document patterns that emerge during development. When mistakes keep happening, I address them with explicit guidance.
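Pulling those four parts together, a CLAUDE.md might look something like this sketch (every path, module name, and rule below is hypothetical, standing in for whatever your project's real conventions are):

```markdown
# CLAUDE.md

## Architecture
Short overview: what this service does and how a request flows through it.
Two paragraphs, not a design doc.

## Patterns
- State management: use the store in `src/state/`; `src/state/session.py`
  is the canonical example to copy.
- Error handling: every service call returns a result envelope; never raise
  across module boundaries.

## File organization
- Features live in `src/features/<name>/`, one folder per feature.
- Tests mirror the source tree under `tests/`.

## What not to do
- No ad-hoc global singletons; inject dependencies instead.
- Do not add a second HTTP client; extend the one in `src/net/`.
```

The "what not to do" section earns its keep: it heads off the plausible-looking suggestions the AI would otherwise make.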
Signs you skipped the foundation
You’ll know if you jumped into AI generation too quickly:
Inconsistent patterns. Different features handle the same problem differently. State management varies. Error handling varies. Naming varies.
Integration friction. AI-generated components don’t connect smoothly. You spend more time adapting generated code than you would have spent writing it.
Repeating the same corrections. You fix the same issue in every AI-generated feature. The AI keeps making mistakes you’ve corrected before.
Debugging unfamiliar code. The AI generated something that works but you don’t understand how. When it breaks, you’re reading foreign code.
If you’re experiencing these, the fix is to stop generating, establish patterns manually, document them, and then resume generation on a better foundation.
Foundation-first in existing codebases
The pattern applies to new projects, but what about adding AI to existing codebases?
The foundation already exists — you just need to document it. Before letting AI generate code in an established project, write a CLAUDE.md that captures:
- How this codebase is organized
- Key patterns and conventions
- What the AI should know before generating anything
The documentation effort pays off immediately. The AI stops fighting your patterns and starts extending them.
The meta-pattern
Foundation-first is really about control. AI can generate vast amounts of code quickly, but humans still need to direct what gets generated. The foundation phase establishes that direction.
Without it, you’re not collaborating with AI — you’re cleaning up after it. With it, the AI becomes a multiplier of your architectural decisions rather than a source of new problems.
The investment is worth it.
This article is part of the synthesis coding series. For the case study that inspired this pattern analysis, see What OpenAI’s Sora Build Teaches Us About Synthesis Coding.
Rajiv Pant is President of Flatiron Software and Snapshot AI, where he leads organizational growth and AI innovation. He is former Chief Product & Technology Officer at The Wall Street Journal, The New York Times, and Hearst Magazines. Earlier in his career, he headed technology for Condé Nast’s brands including Reddit. Rajiv coined the terms “synthesis engineering” and “synthesis coding” to describe the systematic integration of human expertise with AI capabilities in professional software development. Connect with him on LinkedIn or read more at rajiv.com.