From Tool to Team: Managing AI as Distributed Engineering

“It felt less like using a tool and more like managing a team.” That quote from OpenAI’s Sora team captures a shift that changes everything about how engineers work.

There’s a moment in AI-assisted development where the mental model has to change. You stop being an engineer who uses an AI tool. You become an engineering leader who manages AI agents doing the work.

The difference isn’t merely semantic. It changes what skills matter, what bottlenecks emerge, and how you think about productivity.

The tool model

In the tool model, AI is an accelerator. You write code; the AI helps you write faster. You have a problem; the AI suggests solutions. The work remains yours. The AI is a sophisticated autocomplete that occasionally surprises you with good ideas.

This model works for individual tasks. Write a function. Debug an issue. Refactor a module. The AI helps with each task, and you move to the next.

But the model breaks at scale.

The team model

In the team model, you’re not doing the work — you’re directing agents who do the work. You define objectives. You establish patterns. You review output. You integrate changes. The AI agents handle implementation.

OpenAI’s Sora team ran multiple Codex sessions simultaneously during their build. One worked on playback. Another on search. Another on error handling. Another on tests. Each session operated independently on its assigned area.

At that point, the engineers weren’t writing code. They were:

  • Defining what each agent should build
  • Providing context so agents understood the codebase
  • Reviewing what agents produced
  • Resolving conflicts between parallel changes
  • Making architectural decisions that agents couldn’t make

That’s not tool usage. That’s team management.

Brooks’s Law applies

Fred Brooks observed that adding programmers to a late project makes it later. Communication overhead grows faster than productive capacity. Nine women can’t make a baby in one month.

The same principle applies to AI agents.

You can’t simply spin up more Codex or Claude sessions and expect linear speedup. Each session needs context. Each produces changes that may conflict with others. Someone has to integrate everything. Someone has to maintain architectural coherence.

More agents means more coordination overhead. At some point, the overhead exceeds the benefit of parallelism.

The Sora team understood this. They didn’t run unlimited sessions. They ran enough to parallelize distinct workstreams while maintaining their ability to coordinate output. The constraint wasn’t compute cost — it was human capacity to direct and integrate.

The bottleneck shift

When you move from tool model to team model, bottlenecks shift.

Tool model bottleneck: How fast can I write code with AI assistance?

Team model bottleneck: How fast can I make decisions, provide context, and integrate changes?

Implementation speed stops being the constraint. The constraint becomes:

  • Speed of architectural decisions
  • Quality of context you provide
  • Efficiency of your review process
  • Coherence of integration across parallel workstreams

If you’re still thinking about AI as a way to type code faster, you’re optimizing the wrong thing. Code generation isn’t the bottleneck. Direction is.

Conductor, not player

The metaphor I keep returning to: engineer as conductor rather than orchestra member.

A conductor doesn’t play every instrument. They don’t even play one instrument during the performance. Their job is different: set the tempo, cue the sections, maintain coherence, ensure the parts combine into a unified whole.

That’s the team model of AI-assisted development. You’re not writing the code. You’re ensuring the AI-generated code serves your architectural vision. You’re coordinating parallel workstreams. You’re making the judgment calls that determine whether the output is good.

The skills transfer surprisingly well. If you’ve managed engineering teams, you know how to:

  • Define clear objectives
  • Provide sufficient context for independent work
  • Review work without micromanaging
  • Resolve conflicts between contributors
  • Maintain technical coherence across a team

Those skills apply directly to managing AI agents. The agents are faster, cheaper, and more available than human engineers — but they still need direction.

What changes practically

If you’re moving from tool model to team model:

Invest in context infrastructure. CLAUDE.md files, CONTEXT.md files, architectural documentation that agents read at session start. This is like onboarding for new team members — you invest upfront so they can work independently.
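As a sketch of what such a context file might contain, here is a hypothetical CLAUDE.md. The project details, commands, and conventions are illustrative assumptions, not taken from any real codebase:

```markdown
# CLAUDE.md — agent context for this repository (hypothetical example)

## Project overview
A TypeScript web service. Source lives in `src/`, tests in `tests/`.

## Commands
- Build: `npm run build`
- Test: `npm test`
- Lint: `npm run lint` (run before proposing changes)

## Conventions
- Follow the existing error-handling pattern in `src/errors.ts`.
- New modules need unit tests in `tests/` mirroring the source path.
- Do not add new dependencies without flagging them for human review.

## Architecture notes
- All database access goes through the repository layer in `src/db/`.
- Public API shapes are defined in `src/api/types.ts`; keep them stable.
```

The point is the same as an onboarding document for a new hire: commands, conventions, and architectural boundaries stated once, so each agent session starts with the context it needs instead of rediscovering it.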

Develop your review process. You’ll review more code than you write. Get efficient at scanning for problems, verifying correctness, checking for coherence with the broader system.

Think in parallel workstreams. What can be done independently? How do the pieces fit together? Where are the integration points? This is project planning, applied to AI sessions.

Budget time for integration. Parallel work creates merge conflicts — logical, if not literal. Changes to the same system from different sessions need reconciliation. Plan for it.

Maintain architectural authority. The AI suggests. You decide. Don’t defer decisions to the AI just because it sounds confident. Your judgment about what fits your system is irreplaceable.

The uncomfortable part

Here’s what makes some engineers uncomfortable: in the team model, you don’t get to write as much code.

If your identity is wrapped up in being someone who writes code, managing AI agents feels like loss. You’re not doing the thing you’re good at. You’re supervising machines that do the thing you’re good at.

But consider what you’re trading. You still make the architectural decisions. You still determine what gets built and how. You still own the quality of the output. What you’re giving up is the mechanical part — typing, syntax, boilerplate. What you’re keeping is the judgment part — design, tradeoffs, coherence.

That’s not a bad trade for most engineering work.

When to use which model

Both models remain valid. The question is which fits the work.

Use the tool model when:

  • You’re learning a new technology (you need to write the code to understand it)
  • The task is small and self-contained
  • You’re in exploratory mode, not sure what you’re building yet
  • The work requires your specific expertise in real-time

Use the team model when:

  • The project is large enough to parallelize
  • You have clear patterns for agents to follow
  • Implementation is the bottleneck, not design
  • You’re building in a codebase with established conventions

The transition often happens mid-project. You start in tool mode to establish foundations, then shift to team mode to scale implementation. The foundation-first pattern I’ve written about elsewhere is really about knowing when this shift should happen.

The skill investment

Managing AI agents is a skill. Like any skill, it takes practice.

The engineers who thrive in this model are learning:

  • How to write context documentation that agents can use
  • How to break large tasks into parallelizable work
  • How to review AI-generated code efficiently
  • How to integrate changes without losing coherence
  • How to maintain architectural vision across many contributors

These are management skills applied to a new kind of contributor. They’re worth developing.


This article is part of the synthesis engineering series. For related content on AI-native workflows, see Synthesis Project Management.


Rajiv Pant is President of Flatiron Software and Snapshot AI, where he leads organizational growth and AI innovation. He is former Chief Product & Technology Officer at The Wall Street Journal, The New York Times, and Hearst Magazines. Earlier in his career, he headed technology for Condé Nast’s brands including Reddit. Rajiv coined the terms “synthesis engineering” and “synthesis coding” to describe the systematic integration of human expertise with AI capabilities in professional software development. Connect with him on LinkedIn or read more at rajiv.com.