Yesterday evening, I was walking through Midtown Manhattan with my friend Mark Vlasic, who happened to be in town. The streets were crowded with the usual mix of tourists, professionals rushing between meetings, and delivery workers weaving through the pedestrian traffic. We were catching up, talking about work, life, the usual—when Mark asked me a question that cut through the casual conversation: “Will artificial intelligence destroy humanity?”
We stopped at a corner, waiting for the light to change, and I realized this wasn’t just small talk. It’s the kind of question that I suspect many people are quietly wondering about, but Mark had the courage to just ask it directly.
That question usually leads to either breathless panic or dismissive hand-waving. But as we continued walking, weaving around clusters of people taking photos in Times Square, I started to answer him. My thinking on this had crystallized in a way that felt different from most of the AI doomsday scenarios I’ve read. So I want to share what I told Mark, because I think we might be worried about the wrong thing.
Breaking Down the Question
As we walked north on Sixth Avenue, I suggested to Mark that we break down his question into parts and address them one by one. The most immediate concern that dominates headlines is economic: AI will replace all human jobs, leading to mass unemployment and societal collapse. It’s a fear I hear constantly, and it’s the one I disagree with most strongly.
The Iron Man Suit for Your Brain: Current AI as Amplifier
Here’s what I’ve observed from the current and foreseeable evolution of AI: it’s a remarkably powerful tool, but in its present incarnation it’s still a tool. Even if it’s an intelligence, it’s an intelligence that supports and works alongside humans. The present form of AI benefits enormously from having a human drive the process.
To be clear: I’m not arguing that AI can never replace humanity or that conscious superintelligence isn’t a concern. That’s a topic many others have explored in depth. My point is more specific and more immediate—I see AI on a spectrum from brain amplification tool to fully conscious super-intelligence. What concerns me is that this Iron Man suit for the brain can cause tremendous harm long before we reach the conscious super-intelligence stage.
Software Engineering: The Human Driver Still Matters
Take software engineering, for example. This is an area where I have direct experience, and where many people assume AI will simply replace programmers entirely. Yes, there are cases where non-programmers can use tools like Claude Code, Cursor, Replit, or other AI-powered coding assistants to describe what they want and have AI build simple applications. Games like Tetris or Snake, or variations of already-solved problems—sure, AI can handle those reasonably well.
But once you venture into sophisticated applications with unique, specific needs—things that aren’t just remixes of existing solutions—the dynamic changes dramatically. Building a custom financial modeling system, creating a novel security architecture, developing domain-specific business logic that reflects years of organizational knowledge—these require a human programmer to drive the AI.
I’ve experienced this myself repeatedly. I’ll be working on a project with Claude or another AI assistant, and we can move incredibly fast on standard implementations. The AI suggests code, I review it, we iterate. It’s like pair programming with someone who has read every programming manual ever written. But when we hit a truly novel problem—something that requires understanding subtle business requirements, making architectural tradeoffs, or reconciling conflicting constraints—I need to be the one steering.
The difference between a product that actually meets your needs and an almost-but-not-quite solution is human expertise guiding the process. The AI can generate code faster than any human could type, but it can’t make the judgment calls about what code should be written. It can’t push back when a requirement doesn’t make sense. It can’t see around corners to anticipate how a system will need to evolve.
I plan to write a full blog post exploring this dynamic in depth, because I think it’s crucial for understanding how AI will actually transform work rather than eliminate it. The key insight is this: AI makes excellent programmers dramatically more productive, while making it possible for novices to build simple tools. And the gulf between those two outcomes is enormous.
Journalism and Content Creation: Quality Still Requires the Human Touch
The same pattern holds in journalism and media. It doesn’t make sense for a media company to just unleash AI agents to generate content fully autonomously. I’ve seen this firsthand in my years leading technology and product at major news organizations. AI can absolutely help with research—finding sources, pulling relevant statistics, identifying trends in data. It excels at grammar checking, editing for clarity, and even suggesting structural improvements.
AI can also personalize content for different audiences and platforms in ways that would be prohibitively time-consuming for humans. You can take a deeply reported piece and create versions optimized for your website, tailor it for social media platforms with different character limits and audience expectations, adapt it into video scripts, or generate podcast discussion points. The AI can help you reach different demographics by adjusting tone, complexity, and framing while maintaining factual accuracy.
But here’s what AI can’t do: it can’t conduct original investigations. It can’t develop trusted sources over years of relationship-building. It can’t make editorial judgments about what stories matter and why. It can’t provide the lived experience and cultural understanding that gives journalism its authority and resonance. It can’t feel the weight of responsibility that comes with publishing something that might affect people’s lives.
High-value journalism is still human-generated, AI-assisted work. It’s not AI-generated slop churned out for ad revenue; it’s human creativity, judgment, and expertise amplified by AI capabilities. The journalists I know who are most excited about AI aren’t worried about being replaced—they’re excited about having more time for actual reporting instead of transcribing interviews or formatting content for multiple platforms.
Again, this deserves a much deeper exploration that I’ll save for a future post. But the pattern is clear: AI augments human capability rather than replacing it, at least in the current and foreseeable iterations of the technology.
Amplifying Excellence and Amplifying Stupidity
This led me to share an analogy with Mark as we walked through Manhattan streets teeming with fellow humans: current AI is like an Iron Man suit for your mind. It amplifies your capabilities and makes you significantly more effective at what you do.
But here’s the critical caveat, the one I gestured at for emphasis as we walked: AI is also an amplifier for human stupidity, inefficiency, and bad solutions. I’ve watched people use AI to make bigger mistakes than they would have made alone. Whether they’re reviewing legal documents, seeking advice, or brainstorming solutions, AI’s tendency to be highly agreeable can help them confidently march in the wrong direction, just more efficiently.
I’ve come to think of it this way: AI chatbots turn a fool into a 10x fool. Meanwhile, tools like Claude Code make a thoughtful, experienced, productive engineer into a 10x engineer. The amplification works in both directions.
AI chatbots turn a fool into a 10x fool. Claude Code makes a thoughtful, experienced engineer a 10x engineer.
And here’s the uncomfortable truth I’ve learned from personal experience—both my own mistakes and those I’ve observed in others: 1.5x fools are more harmful than 1x fools, but 10x fools are outright dangerous. Give someone who lacks judgment a powerful amplifier, and they’ll cause proportionally more damage with the same flawed thinking.
Human Expertise Remains Essential: Why Jobs Will Transform, Not Vanish
So the threat from AI’s current evolution isn’t economic in the traditional sense. Human jobs will change and evolve, certainly. But because AI functions as an amplifier rather than a replacement, humans remain essential for creating products and solutions that actually make sense for humans. The sword still needs a hand to wield it, even if it’s sharper and more lightsaber-esque than ever before.
बंदर के हाथ में तलवार: A Sword in the Hand of a Monkey
Which brings me to what I actually told Mark is the concerning part. When I was working through these ideas with my friend Arun Lal, who was visiting NYC recently, he gave me a perfect phrase for what I was describing. He said it in Hindi: “बंदर के हाथ में तलवार” (bandar ke haath mein talwar)—a sword in the hand of a monkey.
बंदर के हाथ में तलवार — a sword in the hand of a monkey
That’s exactly what AI represents: an incredibly powerful tool, perhaps even a weapon, in hands that may not be equipped to handle it responsibly.
The real threat from AI isn’t from AI itself. It’s from human beings who could use AI destructively.
Even with all the guardrails, safety measures, and content policies, humans will find ways to jailbreak systems, ask questions indirectly, and get the information or assistance they’re seeking. What a misguided or depressed human being, someone in crisis or consumed by harmful ideology, can do with AI is genuinely frightening. Their thinking, problem-solving ability, and capacity for action are hugely enhanced by AI.
Right now, people who want to harm themselves or others have their impact somewhat contained by limited resources, knowledge, and capabilities. They can’t easily cause widespread damage. But with AI as a mind amplifier, individuals could devise innovative ways to cause tremendous harm.
Mark actually brought up wildfires in California during our conversation. He pointed out that many devastating wildfires are caused by a single arsonist or even just one person making a careless mistake—discarding a cigarette, using equipment improperly, or in some cases, deliberately lighting a fire for whatever misguided reason they had. One individual, one moment, massive destruction affecting thousands of people and destroying millions of dollars in property.
That example stuck with me because it illustrates how much damage one person can cause even without AI. Now imagine that same person with access to a super-intelligent advisor that could help them identify exactly where and when to start a fire for maximum impact, or devise methods that would be harder to trace, or coordinate multiple simultaneous events.
The Asymmetric Advantage of Attackers: Why Defense Must Be Perfect
I use another analogy to describe this: it’s like a football game (soccer, for my fellow US Americans) where one side is trying to prevent goals and the other is trying to score. The goalkeeper, no matter how skilled, can block ten shots, twenty shots, a hundred shots. But the attacking side only needs to get through once.
In the case of AI safety, the “goalkeepers” are the researchers, policymakers, and engineers trying to build robust guardrails. They have to succeed every single time. But if the “goal” is to cause massive harm to society, bad actors only need to succeed once. And there are enough troubled, angry, or desperate people in the world that the odds start to feel uncomfortably high.
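To put rough numbers on that goalkeeper problem, here’s a minimal sketch in Python, with every probability made up purely for illustration: it simply shows how quickly a near-perfect, per-attempt defense erodes once attempts accumulate.

```python
# A purely illustrative back-of-envelope sketch -- every number here is an
# assumption made up for the sake of the argument, not an estimate.
# If each independent attempt to cause serious harm slips past every
# safeguard with probability p, the chance that at least one of n
# attempts gets through is 1 - (1 - p)**n.

def chance_of_at_least_one_breakthrough(p: float, n: int) -> float:
    """Probability that at least one of n independent attempts succeeds."""
    return 1 - (1 - p) ** n

# Even a defense that stops 99.9% of attempts (p = 0.001) erodes as
# attempts accumulate.
for attempts in (100, 1_000, 10_000):
    odds = chance_of_at_least_one_breakthrough(0.001, attempts)
    print(f"{attempts} attempts -> {odds:.1%} chance of at least one success")
# Roughly 9.5%, 63.2%, and essentially 100%, respectively.
```

The exact figures are beside the point; the shape of the curve is what worries me. A defense that works 99.9% of the time per attempt still loses eventually if the shots keep coming.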
Beyond Conventional Weapons: The Threat of Mass Manipulation and Civilizational Vulnerabilities
Here’s where it gets even more unsettling: the threat isn’t just about creating conventional weapons. A malicious actor using AI doesn’t necessarily need to build a nuclear device or engineer a biological weapon, though those remain possibilities. They could figure out methods of mass manipulation to cause civil unrest, pit groups against each other, spark conflicts or even civil wars.
As AI becomes an increasingly powerful mind amplifier, the concern escalates. Some lone individual—someone like the Unabomber, or any person who finds themselves in a dark enough mental state—could potentially devise methods to cause civilization-level damage.
This is where Mark and I landed on an interesting connection to the Fermi Paradox. One proposed solution to the question “Where are all the aliens?” is that civilizations inevitably destroy themselves before they develop the technology for interstellar travel. Many people who discuss this theory assume AI itself becomes sentient and destroys its creators.
But I think there’s a more immediate and perhaps more likely scenario: civilizations create powerful enough AI that it gives individuals the power to wipe out their entire society before that civilization reaches the stars. It’s not that AI becomes conscious and decides to destroy humanity. It’s that AI-as-amplifier gives individual humans enough power to trigger civilization-ending events.
Think about it this way: currently, the knowledge and technology required for mass destruction is restricted, either by expertise requirements, resource access, or institutional controls. But what happens when an AI mind-amplifier is available to most people? When everyone potentially has access to civilization-ending insights and strategies?
not nuclear chain reactions but human societal chain reactions
These threats wouldn’t necessarily be physical weapons in the conventional sense. They could be mass manipulation tools that exploit social fault lines. They could be strategies that set off not nuclear chain reactions but human societal chain reactions. Someone might discover through AI assistance that a particular chemical reaction could destroy the ozone layer, or find some way to disrupt Earth’s magnetic field. These sound far-fetched, but that’s exactly the point.
Mark made a great observation here—it’s like James Bond villains with their elaborate schemes to destroy the world. Those cartoonish movie supervillains with their absurdly specific plans to deplete the ozone layer or trigger some bizarre chain reaction always seemed comical. But with a super-intelligent AI as a brainstorming partner, those once-impossible schemes might become achievable. An AI could help someone identify obscure vulnerabilities in our interconnected systems and devise methods to exploit them that no individual could have conceived alone.
The Dangerous Middle Ground: Tool AI Before Conscious AI
What I want to emphasize—and this is where my thinking diverges from many AI safety discussions—is that this threat exists before AI reaches consciousness or true superintelligence.
Most AI doomsday scenarios focus on the moment when AI becomes conscious or develops goals misaligned with human welfare. That’s a valid concern, and I’m not dismissing it. But I’m identifying a different, more immediate risk: the period when AI is an extraordinarily powerful amplifier tool but not yet a conscious entity.
Think of it as a spectrum. On one end, you have AI as a tool. On the other end, a conscious being with its own agency and goals. We’re currently somewhere in the middle, and moving toward the latter. The scenario I’m concerned about happens in this middle zone—when AI is powerful enough to help individuals devise civilization-ending schemes, but before it’s conscious enough to potentially have safeguards, values, or self-preservation instincts that might prevent such outcomes.
We could face destruction from “Tool AI” well before we need to worry about “Conscious AI.”
The Amplification Paradox: Powerful Tools, Fallible Humans
This ties into something I’ve written about before on my blog: AI as an amplifier and augmentation for humans. I’ve argued that the threat to jobs and employment isn’t from AI itself, but from humans amplified by AI. That observation isn’t unique to me—many others have made similar points. But this conversation with Mark helped me see the deeper implications.
We’re building increasingly powerful amplifiers and distributing them widely, while the humans wielding those amplifiers remain as fallible, troubled, and occasionally malicious as ever. The technology amplifies our capabilities without necessarily amplifying our wisdom, judgment, or ethical reasoning in proportion.
Where Do We Go From Here?
Mark and I discussed all of this, and we didn’t arrive at easy answers. The solutions to this problem aren’t obvious. Do we restrict access to AI? That seems both difficult to enforce and potentially counterproductive—these tools also amplify human creativity, problem-solving, and beneficial innovation. Do we focus on improving AI safety and alignment? Absolutely, but as the goalkeeper analogy suggests, defense has to be perfect while offense only needs one success.
The answer may lie in addressing the human element more directly—better mental health support, reducing social polarization, building more resilient and equitable societies so fewer people find themselves in the desperate circumstances that lead to destructive acts. But those are civilizational challenges we’ve struggled with long before AI entered the picture.
What I’m certain of is this: the conversation about AI risk needs to expand beyond “will robots take our jobs” and even beyond “will superintelligent AI turn against us.” We need to grapple with the uncomfortable reality that we’re giving powerful amplification tools to a species that includes both the brilliant and the troubled, the wise and the foolish, the constructive and the destructive.
As I told Mark, I don’t think AI will destroy humanity. But I do think AI might give humanity enough power to destroy itself, one amplified individual at a time. And that threat exists right now, today, in this awkward middle phase of AI development where the tools are incredibly powerful but still fundamentally driven by human intent.
It’s not the AI we need to worry about. It’s ourselves, amplified.
A note on the Iron Man analogy: After publishing this post, I learned that Andrej Karpathy and others have also used the Iron Man suit metaphor in discussing AI. Karpathy’s June 2025 YC talk frames it around product design — building “Iron Man suits” (partial autonomy tools) rather than “Iron Man robots” (fully autonomous agents). My usage here developed from thinking about AI as a cognitive amplifier, with particular attention to how it amplifies destructive human tendencies as readily as constructive ones. Given that Iron Man is perhaps the most culturally prominent example of human-technology augmentation, independent discovery of this analogy isn’t surprising. The specific concerns I raise — about amplified human malice being the threat, not AI itself — remain distinct from most discussions of AI risk.