I just watched my friend Richard Socher — founder of you.com — in conversation with Peter Diamandis and Salim Ismail. The video is at the end of this post. I’ve worked in AI and NLP for decades and have served as an advisor to you.com since its founding, so I have some context for the topics they covered.
The current state of AI models
The conversation starts with an analysis of Grok 3’s capabilities versus GPT-4 and Gemini. What strikes me about these benchmarking discussions is how fast the goalposts have shifted: capabilities that seemed impossible in 2020 are now table stakes.
I saw this acceleration firsthand leading technology teams as CTO at The New York Times and Chief Product & Technology Officer at The Wall Street Journal. We went from basic NLP systems to AI applications that could analyze complex text and generate meaningful content. Even for those of us building AI for decades, the pace still catches me off guard.
Defining artificial general intelligence
Richard’s exploration of what constitutes AGI is the part I keep thinking about. The question has moved from philosophy seminar to practical engineering: what benchmarks matter, what capabilities count.
I agree that AGI isn’t about passing discrete tests. It’s about adaptability across domains without human hand-holding. The progression from narrow AI to general capabilities isn’t linear — it’s expanding in multiple dimensions at once: understanding, reasoning, application.
The open source debate
The open-source versus closed-source discussion hit home for me. I’ve navigated this tension in various leadership roles and watched it shape how AI actually gets built.
Open source models like Llama create an ecosystem where innovation gets democratized and collective effort accelerates improvement. Closed-source models benefit from concentrated resources and coordinated development — often more polished results.
This isn’t academic. It determines who benefits from AI and how these technologies get deployed. I’ve long argued for approaches that balance innovation with responsibility. Neither absolute openness nor complete closure serves society well.
AI as co-scientist
When Richard talks about AI as a “co-scientist,” I think of conversations with my friend Bryan McCann, you.com’s CTO. Bryan, a longtime collaborator of mine, has been building approaches that augment human intelligence rather than just automating tasks.

AI accelerating scientific breakthroughs by spotting patterns in massive datasets or generating novel hypotheses — that’s a real shift in how discovery happens. Not replacing scientists. Multiplying what they can do: more possibilities explored, more variables considered, problems solved that would otherwise stay stuck.
The infrastructure build-out
Microsoft’s investments in AI data centers point to something easy to overlook: the sheer infrastructure required. These investments aren’t about today’s capabilities — they’re laying foundation for advances we can’t yet imagine.
It reminds me of the railroad networks of the 19th century, or the internet backbone of the 1990s. Physical infrastructure as the enabler of change. We’re seeing the same pattern with AI: computational capacity as the limiting factor for what’s possible.
The emergence of AI agents
Richard’s discussion of agentic AI grabbed my attention. When I wrote about you.com’s launch back in 2021, I highlighted how their AI approach would make search more intuitive and personalized. What we’re seeing now with agentic systems is that vision coming true — AI that doesn’t just answer questions but anticipates needs and acts on them.

The shift from passive tools to proactive agents changes how we think about AI. Not systems that execute predefined tasks, but partners that understand context, recognize goals, and figure out how to achieve them.

The convergence with robotics
The section on humanoid robots and companies like Figure AI points to another development: AI merging with physical embodiment. The implications go well beyond technical specs.
When AI can interact with the physical world through robots, we unlock possibilities for automation and assistance that used to be science fiction. But we also face new questions about safety, labor markets, and how humans and machines will work together. These deserve serious thought.
From vision to reality: ARI’s launch
On the same day as the interview, Richard announced ARI (Advanced Research & Insights) — what he calls “the first professional-grade deep research agent purpose-built for business.” This puts the agentic AI concepts from the interview into practice.
What stands out: ARI analyzes 400 sources simultaneously, ten times more than comparable tools, and breaks complex queries into multiple research steps in real time. This is the kind of AI augmentation that changes knowledge work.
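The decompose-then-fan-out pattern described here can be sketched in a few lines. This is a hypothetical illustration, not ARI’s actual implementation: `decompose` and `analyze` stand in for the LLM planning and source-reading calls a real research agent would make.

```python
# Hypothetical sketch of a deep-research agent's control loop:
# decompose a query into research steps, then fan out over many
# sources in parallel. Names and logic are illustrative only.
from concurrent.futures import ThreadPoolExecutor

def decompose(query: str) -> list[str]:
    """Stand-in for an LLM planner that breaks a query into sub-questions."""
    return [f"{query}: background", f"{query}: current data", f"{query}: outlook"]

def analyze(source: str, step: str) -> str:
    """Stand-in for fetching and summarizing one source for one step."""
    return f"finding from {source} for '{step}'"

def research(query: str, sources: list[str]) -> dict[str, list[str]]:
    report: dict[str, list[str]] = {}
    for step in decompose(query):                # sequential research plan...
        with ThreadPoolExecutor(max_workers=8) as pool:  # ...parallel source reads
            report[step] = list(pool.map(lambda s: analyze(s, step), sources))
    return report

result = research("EU battery market", [f"source-{i}" for i in range(400)])
print(len(result), "steps;", sum(len(v) for v in result.values()), "source analyses")
```

The point of the sketch is the shape of the work: a planner turns one question into several, and each step reads hundreds of sources concurrently, which is where the days-to-hours compression comes from.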
“The research time has dropped from a few days to just a few hours,” says Dr. Dennis Ballwieser of Wort & Bild Verlag. That compression of research cycles shows how AI is reshaping intellectual work — far beyond simple automation.
The timing makes a point from Richard’s interview concrete: theoretical capabilities are becoming practical applications faster than most people expect.
Looking forward
What I noticed most about Richard’s conversation is the balance — technical depth alongside broader implications. That balance is why I joined you.com as an advisor and investor at its founding: building serious technology while thinking carefully about impact.

As AI moves from research project to real force, keeping that balance matters more. The questions aren’t just about what’s technically possible. They’re about what’s desirable for society and how we handle the transition to a world where AI is woven into daily life.
I recommend watching the full interview. Richard combines technical expertise with entrepreneurial experience in a way that clarifies both where we are and where we’re headed. Having been involved with you.com from the beginning, I’ve watched Richard and Bryan’s vision consistently anticipate how AI technology would evolve.
Did Richard Socher invent prompt engineering?
I asked You.com’s competitor, Perplexity.AI: “Did Richard Socher invent prompt engineering?” Here’s a condensed version of the response. Read Perplexity.AI’s full response with citations.
Richard Socher is widely recognized as having invented prompt engineering and is often called “the father of prompt engineering.” He’s credited with bringing neural networks into natural language processing.
Socher is acknowledged for “inventing the most widely used word vectors, contextual vectors, and prompt engineering.” He has over 200,000 citations, placing him among the top five most cited AI researchers.
His credentials: Chief Scientist and EVP at Salesforce (where he reportedly invented prompt engineering), CEO/CTO of MetaMind (acquired by Salesforce in 2016), Ph.D. in computer science from Stanford. He’s now founder and CEO of You.com and co-founder and managing partner of AIX Ventures.
Wikipedia notes that prompt engineering became more prominent after ChatGPT launched in 2022, but Socher’s contributions predate that mainstream adoption — his foundational NLP work laid the groundwork for modern prompt engineering techniques.
The interview
What are your thoughts on the future of AI and agentic systems? I’d like to hear your perspectives in the comments.
Rajiv Pant is President at Flatiron Software and Snapshot AI, where he leads organizational growth and AI innovation while serving as a trusted advisor to enterprise clients on their AI transformation journeys. He is also an early investor in and senior advisor to you.com, an AI-powered search engine founded by Richard Socher and Bryan McCann. With a background spanning CTO roles at The Wall Street Journal, The New York Times, and other major media organizations, Rajiv brings deep expertise in language AI technology leadership and digital transformation. He writes about artificial intelligence, leadership, and the intersection of technology and humanity.
