Collective Intelligence: Making AI Work for Everyone

Insights from Cooper Union’s panel on AI, inclusivity, and the future of human-machine collaboration

Last week, I joined a diverse group of thinkers and practitioners at Cooper Union’s Benjamin Menschel Civic Projects Lab for a thought-provoking panel on “Collective Intelligence.” The conversation centered on a question that has become increasingly urgent in our AI-powered world: How do we ensure that artificial intelligence serves as a force for democratization rather than a technology that widens existing divides?


A Meeting of Diverse Perspectives

The evening assembled an interdisciplinary cohort of voices at the forefront of AI’s societal implications. Dr. Jason Edward Lewis brought profound insights on indigenous knowledge systems and AI, challenging conventional Western frameworks of intelligence. Dr. Olivier Oullier presented groundbreaking work using brain-computer interfaces to make technology accessible to people with physical and cognitive disabilities. Lady Mariéme Jamme shared her remarkable work with “I Am The Code,” which has brought coding skills to 90,000 young women across refugee camps and underserved communities in Africa.

The panel also included Professor Ben Aranda from Cooper Union, who examines AI through the lens of generative design and architectural principles; Robin Miller, who tackles the challenge of scaling AI innovations across African tech ecosystems; Dr. Vukosi Marivate, whose work focuses on computational approaches to developing AI for underrepresented languages; and Hitesh Wadhwa from Google Cloud, who offered practical frameworks for organizations adopting AI.

Mokena Makeka, Director of the Civic Projects Lab, framed our discussion not as a technical exploration of AI capabilities, but as a civic conversation about centering humanity in the development and deployment of these powerful technologies. This framing resonated deeply with me – too often, AI discussions gravitate toward technical specifications rather than human implications.

Deep Dive: The Setting and Context of the Cooper Union Panel

Cooper Union’s Civic Projects Lab represents an important bridge between academia, industry, and community. Located at 41 Cooper Square in New York City, the Lab embodies the institution’s commitment to public good and civic engagement.

The “Collective Intelligence” theme reflects a growing recognition that AI development must incorporate diverse perspectives to be truly beneficial. As Mokena Makeka noted in his introduction, the Lab’s work centers on asking “what does it mean for society, what does it mean in terms of social interaction, what does it mean for bringing people along who might not necessarily have the same access to either technology, data, or capital as other societies?”

This civic framing provided the perfect backdrop for discussing how AI might serve as a democratizing force rather than reinforcing existing power structures. The panel’s composition itself modeled this approach, bringing together voices from multiple continents, disciplines, and professional contexts.

The timing of this panel also matters. We’re at a critical inflection point where the patterns and practices established now will likely determine whether AI becomes the most democratizing technology in human history or another force that concentrates power among those who already have it.

How I Used AI to Prepare for This Panel

Before diving into the substance of my contribution, I want to share something meta about my preparation process. I used Ragbot.AI with Anthropic Claude to help me prepare for this panel discussion about AI. This experience offered a practical demonstration of one of my core beliefs: AI serves best as an augmentation of human capability, not a replacement for it.

First, I provided Ragbot and Claude with the context of the panel, the panelists, and the core themes. Ragbot already has my personal knowledge base. The AI analyzed this information and suggested a framework for my talk, identifying key points that would resonate with the audience and complement the other speakers’ perspectives. It created a structured outline with timing suggestions and key message points.

However, this was just the beginning of the process, not the end. I took this framework and adapted it substantially based on my own expertise, experiences, and convictions. I reorganized points, added personal examples, and infused the structure with my authentic voice and values. The final talk represented my thinking, shaped by my years of experience in media technology and AI development.

This collaboration exemplifies what I call “intelligence augmentation” rather than “artificial intelligence” – using AI as a thinking partner that can help organize and expand our own ideas rather than replacing human creativity and judgment. The outline provided a useful starting structure, but the substance and delivery came from my own expertise and perspective.

Deep Dive: The Symbiotic Process of AI-Human Collaboration in Content Creation

My preparation process illustrates a symbiotic relationship between human expertise and AI capabilities:

  1. Contextual understanding: I provided Claude with the background information and parameters of the event, something the AI couldn’t have known on its own.
  2. Information organization: Claude processed this information and proposed a logical structure, helping organize thoughts across multiple dimensions.
  3. Creative expansion: The AI suggested potential talking points based on patterns in my previous work and the panel themes.
  4. Critical assessment: I evaluated these suggestions, discarding some, modifying others, and incorporating my own examples and insights.
  5. Authentic synthesis: The final talk represented my unique perspective, while benefiting from AI’s ability to quickly process and organize information.

This process differs dramatically from both pure human creation (which might have been more time-intensive but no more authentic) and pure AI generation (which would have lacked the depth of personal experience and conviction). Instead, it represents a third path: human-AI collaboration that amplifies human capabilities while maintaining human agency and authenticity.

This approach also demonstrates how AI can serve as an equalizer. Not everyone has access to a team of speechwriters or content strategists, but increasingly, people with varying resources can access AI tools that provide similar support functions, leveling the playing field in terms of content creation and idea refinement.

AI as an Equalizer: My Perspective

My journey through media technology leadership—from The New York Times to The Wall Street Journal to Hearst, and now in my role as President at Flatiron Software and Snapshot AI—has given me a unique vantage point on how technology can either reinforce privilege or democratize opportunity.

I believe AI has the potential to be one of the most powerful equalizers in human history—perhaps even more significant than the internet itself. While the web democratized access to information, AI can democratize expertise and capabilities that were previously available only to the privileged few.

Consider this: Before AI, having assistants, specialized experts, and custom solutions required substantial resources. Today, someone with limited formal education but a powerful idea can use AI to write sophisticated code, craft professional communications, or build complex systems—capabilities that would have required years of specialized training or significant financial resources just a few years ago.

This democratization extends beyond individuals to organizations and communities. At Flatiron Software, we’ve seen startups use AI to compete with enterprises that have hundreds of engineers. We’ve witnessed community organizations leverage AI to create sophisticated data analysis that previously would have required dedicated data science teams.

However, this equalizing potential isn’t guaranteed. It depends entirely on how we build, govern, and distribute AI systems. If AI remains concentrated in the hands of a few large technology companies or accessible only to those with significant resources, it will simply reinforce existing power structures rather than democratizing capability.

Deep Dive: Historical Perspective on Technology as Equalizer

To understand AI’s equalizing potential, it’s helpful to examine previous technological revolutions and their social impacts:

The Printing Press: Initially, printing technology dramatically reduced the cost of book production, but literacy remained limited to elites for centuries. The full democratizing effect wasn’t realized until mass education expanded.

The Internet: The early internet promised information democratization, but the digital divide between those with and without access created new inequalities. As access expanded, new barriers emerged in the form of digital literacy and the ability to create, not just consume, content.

Mobile Computing: Smartphones have been perhaps the most successful democratizing technology, reaching billions of people across economic strata. In many developing regions, mobile technology leapfrogged traditional infrastructure, enabling services from banking to healthcare to reach previously excluded populations.

AI stands at a similar inflection point. Like these previous technologies, its democratizing potential depends on three factors:

  1. Accessibility: Who can use these tools and at what cost
  2. Usability: How much specialized knowledge is required to leverage them
  3. Agency: Whether users can adapt and modify the technology to their specific needs

What makes AI potentially more equalizing than previous technologies is its ability to lower the expertise barrier across multiple domains simultaneously. Where the internet provided access to information but still required human expertise to apply it, AI can provide both information and application assistance, essentially democratizing expertise itself.

However, this also creates more profound risks if access becomes stratified or if AI systems embed and amplify existing biases. The stakes of getting this right are correspondingly higher.

Three Principles for Inclusive AI

During the panel, I outlined three principles that should guide our approach to creating more inclusive AI systems. These aren’t abstract ideals but practical design and distribution philosophies I’ve applied in my own work and advocated for in the broader AI ecosystem.

1. Resource Consciousness

We need to design AI systems with constrained environments in mind first, not as an afterthought. This means prioritizing efficiency and accessibility in how models are built and deployed.

DeepSeek provides an excellent example, demonstrating that powerful AI capabilities don’t necessarily require the massive computational resources that only large companies can afford. We’re seeing similar progress with models being optimized to run locally on phones and other relatively limited devices.

Building for constrained environments first ensures that AI benefits aren’t limited to those with the most resources. It’s a design philosophy that sees the diversity of technology access as a primary consideration, not an edge case.

In practice at Flatiron Software, this principle guides our technical architecture decisions. We optimize our systems to work effectively across a range of computing environments, including regions with limited bandwidth and older devices. This often makes our systems better for everyone—more efficient, more responsive, and more resilient.

Deep Dive: The Technical Side of Resource Consciousness

Resource consciousness in AI development requires specific technical approaches:

Model Distillation and Compression: Larger models can be distilled into smaller ones that maintain most capabilities while requiring fewer resources. Techniques like knowledge distillation, quantization, and pruning can reduce model size by orders of magnitude.
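To make the storage-for-precision trade-off concrete, here is a minimal sketch of symmetric 8-bit quantization. This is my own illustration, not code from any production system: each float weight is mapped to a signed byte plus one shared scale factor, so storage drops by roughly 4x (versus float32) at the cost of a small, bounded rounding error.

```python
# Illustrative sketch of symmetric 8-bit quantization (not production code).

def quantize_8bit(weights):
    """Map float weights to signed-byte integers plus a shared scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0
    quantized = [round(w / scale) for w in weights]  # each value fits in int8
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights; error is at most scale / 2."""
    return [q * scale for q in quantized]

weights = [0.12, -0.98, 0.45, 0.03]
q, scale = quantize_8bit(weights)
approx = dequantize(q, scale)
# Each weight now needs one byte instead of four,
# with a rounding error bounded by half the scale factor.
```

Real quantization pipelines add per-channel scales, calibration data, and sometimes quantization-aware training, but the underlying trade-off is the same one this toy version shows.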

Edge Computing Optimization: Moving AI computation to edge devices reduces bandwidth requirements and latency. This requires specialized model architectures and inference optimization techniques.

Differential Architecture: Systems can be designed to operate at different capability levels depending on available resources, degrading gracefully rather than failing completely in resource-constrained environments.
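Graceful degradation can be sketched in a few lines. This is a hypothetical example (the model names and memory budgets are invented): the system picks the most capable model that fits the resources at hand, and degrades to a remote call rather than failing outright.

```python
# Hypothetical example of graceful degradation; model names and
# memory budgets are invented for illustration.

MODELS = [  # ordered from most to least capable
    {"name": "large", "memory_mb": 8000},
    {"name": "medium", "memory_mb": 2000},
    {"name": "small", "memory_mb": 500},
]

def select_model(available_mb):
    """Pick the most capable model that fits the device's memory,
    degrading to a remote API call when nothing fits locally."""
    for model in MODELS:
        if model["memory_mb"] <= available_mb:
            return model["name"]
    return "remote-api"  # degrade to a network call rather than fail
```

The same pattern generalizes to capability tiers: a constrained device gets a smaller model with reduced features instead of no service at all.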

Asynchronous Processing: For applications in low-connectivity environments, systems can be designed to function offline and synchronize when connectivity is available.
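The offline-first pattern can be sketched as a simple queue (again, an illustrative example rather than any particular product's implementation): requests that fail while disconnected are held locally and retried when connectivity returns.

```python
# Illustrative offline-first queue (a sketch, not a specific product).

class OfflineQueue:
    def __init__(self, send_fn):
        self.send_fn = send_fn  # may raise ConnectionError when offline
        self.pending = []

    def submit(self, item):
        """Send immediately if possible; otherwise hold the item locally."""
        try:
            self.send_fn(item)
        except ConnectionError:
            self.pending.append(item)

    def sync(self):
        """Retry held items once connectivity returns, keeping any
        that still fail for the next sync attempt."""
        still_pending = []
        for item in self.pending:
            try:
                self.send_fn(item)
            except ConnectionError:
                still_pending.append(item)
        self.pending = still_pending
```

Production systems layer on persistence, deduplication, and conflict resolution, but this captures the core idea: the application keeps working without a network and reconciles later.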

The technical challenges of resource consciousness present significant research opportunities. Engineers trained primarily on resource-rich environments often lack experience optimizing for constraints. Developing more efficient algorithms and architectures not only serves the goal of inclusion but often leads to technical innovations that benefit all users.

Additionally, resource consciousness has environmental benefits. The carbon footprint of large AI models has received increasing scrutiny. More efficient models and deployment strategies reduce energy consumption and align AI development with sustainability goals.

2. Participation Beyond Access

Using AI tools is just the beginning. True democratization means enabling people to participate in the creation, adaptation, and governance of AI systems.

Even if someone lacks formal education in machine learning or computer science, they should be able to contribute to and benefit from AI development. Open-sourcing not just models but also workflows, agents, and even sophisticated prompts can dramatically lower the barriers to participation.

This is what motivated me to build and open-source Ragbot.AI—a personal assistant I initially created for my own use when I found commercial systems too limited. By making it available to anyone, I hoped to give others the ability to build on this foundation rather than starting from scratch.

Participation extends beyond technical contribution to include how AI systems are governed and evaluated. Communities most affected by AI applications should have meaningful input into how these systems are deployed in their contexts. This participatory approach leads to better systems and more equitable outcomes.

Deep Dive: Creating Participatory AI Ecosystems

Building truly participatory AI ecosystems requires addressing several dimensions:

Technical Literacy: We need educational approaches that demystify AI without requiring deep technical knowledge. Platforms like “I Am The Code” demonstrate how coding education can reach populations traditionally excluded from technology fields.

Contribution Mechanisms: Open source provides one pathway for participation, but we also need structures that allow contribution through:

  • Data curation and annotation
  • Evaluation and testing
  • Use case identification
  • Feedback on model behavior
  • Documentation and education

Governance Structures: True participation includes decision-making power. Models for community governance of technology include:

  • Multi-stakeholder oversight boards
  • Community data trusts
  • Cooperative ownership models
  • Participatory design processes

Economic Models: Sustainable participation requires economic models that share value with contributors. Possibilities include:

  • Micropayments for training data or evaluations
  • Cooperative ownership structures
  • Grant funding for community-based development
  • Marketplaces for contributions from non-traditional developers

The challenge is designing systems that maintain quality and coherence while incorporating diverse contributions. Traditional software development often relies on hierarchical structures and gatekeeping mechanisms that can reinforce existing power dynamics. Participatory AI needs new models that balance openness with effectiveness.

At Flatiron Software, we’re exploring models where domain experts without technical backgrounds can contribute meaningfully to AI system development through structured feedback mechanisms and contextual knowledge sharing.

3. Knowledge Sovereignty

The data sets and cultural knowledge used to build AI systems should remain accessible to the communities from which they originate. When AI systems incorporate knowledge from diverse sources, there’s an ethical imperative to ensure that value flows back to those sources.

This is particularly important for media and creative content. Large language models benefit enormously from the work of journalists, writers, and other content creators. In return, AI developers should find ways to support the continued creation of human-generated content—because without it, the models themselves will eventually stagnate.

Knowledge sovereignty also extends to cultural and indigenous knowledge. Dr. Lewis’s work on indigenous AI frameworks offers important insights here—AI systems trained on cultural knowledge should respect the protocols and values of the cultures from which that knowledge derives.

At ScalePost.AI, which I advise, we’re working on models that ensure media companies and content creators maintain sovereignty over their content while still enabling beneficial AI applications. This bi-directional value flow is essential for sustainable AI ecosystems.

Deep Dive: Implementing Knowledge Sovereignty in Practice

Knowledge sovereignty requires mechanisms for:

Attribution and Provenance: Technical systems that track the sources of knowledge incorporated into AI models. This includes both direct attribution (quoted content) and influence tracking (learned patterns).

Compensation Models: Systems for compensating knowledge creators when their work creates value through AI. Examples include:

  • Licensing arrangements for training data
  • Revenue sharing based on influence tracking
  • Investment in sectors producing valuable training data
  • Direct funding of knowledge creation

Cultural Protocols: Recognition that knowledge exists within cultural contexts with specific protocols governing its use. This means:

  • Obtaining appropriate permissions before incorporating cultural knowledge
  • Respecting restrictions on certain types of knowledge
  • Acknowledging the communal ownership of cultural knowledge
  • Ensuring benefits flow back to knowledge-originating communities

Data Rights Frameworks: Legal and technical frameworks that give individuals and communities control over how their data is used in AI systems. These might include:

  • Data trusts for communal management
  • Consent mechanisms that extend to derivative works
  • Rights to access and influence systems trained on one’s data
  • Technical mechanisms to remove specific data from models

The challenge lies in implementing these principles without creating prohibitive friction in the development process. At ScalePost.AI, we’re working on technical approaches that can navigate these complexities efficiently, making ethical data use practically feasible at scale.

Knowledge sovereignty becomes particularly complex with generative AI, which learns patterns rather than storing specific content. Determining attribution and appropriate compensation in these contexts requires new approaches that go beyond traditional intellectual property frameworks.

Edited Transcript of My Panel Contribution

Below is an edited transcript of my contribution to the panel discussion, refined for clarity while preserving the substance of my remarks:


“I believe AI represents a potential force for democratization unlike anything we’ve seen before. My journey through media technology leadership—from The New York Times to The Wall Street Journal to Hearst, and now at Flatiron Software and Snapshot AI—has shown me how technology can either reinforce privilege or democratize opportunity.

The most privileged people typically benefit first from technological advances, but there’s also hope for equalization. I witnessed this with the internet—the knowledge gap between advanced societies and rural developing regions was enormous. You couldn’t find an Encyclopedia Britannica in a village in a developing country, but as the web expanded, that same information became accessible to anyone with connectivity.

AI takes this a step further. Even if you’re in a developing country where you have access through your phone but don’t speak English fluently, AI can now bridge that language gap. This makes AI potentially not just an incremental step but a huge leap forward in democratizing access.

At Flatiron Software and with our product Snapshot, we’ve observed how AI can equalize workplace dynamics. Junior developers with good ideas but less formal education can now use language models and other AI tools to build sophisticated software that previously required years of specialized training.

Open sourcing is crucial to this democratization. I’m encouraged that companies like Meta have made their models available openly, and many Chinese companies have followed suit. At Flatiron, we’re committed to open sourcing much of our software.

I built a personal AI assistant for my own use because I found out-of-the-box language models limited. When I completed it, I realized others could benefit and decided to open source it. This exemplifies how we can share AI benefits—if you solve a need for yourself, why not make it available to everyone?

What makes this moment unique is that even if I open source something that works for my specific context, others can now use AI to customize it for their needs without extensive technical knowledge. This removes a significant barrier that previously limited the impact of open source projects.

To make AI truly inclusive, I propose three core principles:

First, resource consciousness—designing AI systems that work in constrained environments. DeepSeek demonstrated that we can achieve powerful capabilities without requiring massive computational resources only large companies can afford.

Second, participation beyond access—ensuring people can contribute to and adapt AI systems, not just use them. This means open sourcing not just models but workflows, agents, and even sophisticated prompts.

Third, knowledge sovereignty—ensuring data and cultural knowledge used to build AI systems remain accessible to communities they came from, with value flowing back to those sources.

AI represents a unique opportunity to make all humans more equal through technology augmentation. When humans are augmented by AI, innate abilities and formal education become less determinative of what people can accomplish. This could be the first technology that truly makes capabilities more equal across humanity.”

AI in Practice: Bridging Divides

Beyond principles, it’s important to examine concrete examples of how AI can bridge divides when developed with inclusion in mind.

At Flatiron Software, our work helps enterprises transform their technology operations through AI-powered solutions. We’ve seen firsthand how AI can be a dramatic equalizer in the workplace. For instance, a junior developer who joined a team after a mid-career change from an unrelated field was able to contribute complex code within weeks using AI assistance—work that would have taken months or years of traditional learning.

Through Snapshot AI, our engineering intelligence platform, we’ve observed how AI can identify and elevate talent that might otherwise be overlooked. The platform analyzes engineering work not just for volume or velocity, but for innovation and impact, often surfacing valuable contributions from team members who didn’t graduate from prestigious institutions or follow traditional career paths.

As an advisor to ScalePost.AI since its founding, I’ve worked to ensure that AI development incorporates diverse content sources rather than just dominant mainstream publications. This helps address one of the core challenges in AI today: ensuring that models don’t simply reinforce existing information hierarchies but represent a truly global perspective.

Deep Dive: Measuring AI’s Impact on Equality

How do we measure whether AI is actually serving as an equalizer? This requires metrics beyond traditional technical benchmarks:

Capability Transfer: How effectively does AI transfer capabilities to users with different backgrounds? Can a user without formal training accomplish tasks previously requiring specialized education?

Access Distribution: Who is using advanced AI tools, and what barriers prevent wider adoption? This includes analyzing usage across geographic, economic, and demographic dimensions.

Contribution Diversity: Who is contributing to AI development, and are these contributions valued equally? This extends beyond code contributions to include data, evaluation, and use case development.

Outcome Gaps: Are the benefits of AI-enabled systems distributed equitably, or do they disproportionately benefit certain groups? This requires analyzing downstream impacts in areas like healthcare, education, and economic opportunity.

Agency Metrics: Do users have meaningful control over AI systems? Can they modify, customize, and direct these systems to meet their specific needs?

At Flatiron Software, we’re developing frameworks to measure these dimensions across our projects. Initial findings suggest that AI can significantly compress learning curves in technical domains, but barriers around awareness, basic digital literacy, and initial access remain significant challenges.

The data also shows that different types of AI systems vary dramatically in their equalizing effects. Systems designed with participation in mind from the beginning show much stronger democratizing impacts than those retrofitted for broader access after development.

The Power of Open Source in AI Democratization

Perhaps the most powerful lever we have for democratizing AI is open source. When Meta/Facebook open-sourced their LLM work, it created a ripple effect that has accelerated innovation across the field. The same principle applies at every level of the AI stack—from foundational models to the tools and workflows built on top of them.

Open-sourcing isn’t just about code availability. It’s about enabling customization. Today, if I build and open-source a tool that works for my specific needs, others can adapt it for their own contexts with relative ease—often by simply providing AI systems with instructions for the necessary modifications.

This represents a fundamental shift in how technology diffuses. Previously, technologies developed in resource-rich environments required significant expertise and investment to adapt to different contexts. AI-enabled customization dramatically lowers this barrier, allowing innovations to spread more rapidly and equitably.

The open-source approach also creates powerful feedback loops. As diverse users adapt and improve systems for their specific contexts, the entire ecosystem benefits from these innovations. This stands in contrast to closed systems, where improvements remain proprietary and benefits accrue primarily to the original developers.

Deep Dive: Beyond Traditional Open Source

Traditional open source software faces several limitations as an equalizing force:

  1. Technical barriers: Contributing to most open source projects requires substantial programming expertise
  2. Resource requirements: Running and modifying advanced software often demands significant computational resources
  3. Contribution asymmetry: Most projects receive the majority of contributions from a small number of elite developers
  4. Customization challenges: Adapting open source software to specific needs traditionally requires deep technical knowledge

AI is transforming this landscape in several ways:

Code Generation: AI assistants can generate modifications to open source code based on natural language descriptions, allowing non-programmers to customize software

Contribution Assistance: AI tools can help newer developers format their contributions properly, understand project guidelines, and improve code quality

Documentation Generation: AI can create and translate documentation, making projects more accessible across language barriers

Adaptation Automation: Systems can automatically adapt software to different environments, reducing the expertise needed for deployment

Novel Contribution Types: AI enables new forms of open source contribution beyond code, including:

  • Prompt libraries and engineering techniques
  • Training datasets and evaluation frameworks
  • Use case documentation and feedback
  • Fine-tuning specifications and parameters

At Flatiron Software, we’re exploring “AI-mediated open source” models where contributions can come from domain experts without programming expertise. The AI acts as a translation layer, converting domain knowledge into technical implementations while maintaining the collaborative, distributed nature of traditional open source.

This approach dramatically expands who can meaningfully participate in technology development, potentially transforming open source from a relatively elite technical activity to a truly mass-participation model.

Looking Forward: Collective Intelligence in Action

The Cooper Union panel reinforced my belief that the term “collective intelligence” is not just an academic concept but a practical necessity. AI systems built on narrow perspectives will inevitably produce narrow results. Only by incorporating diverse viewpoints—across cultures, disciplines, and lived experiences—can we create AI that truly serves humanity in all its complexity.

We’re at a pivotal moment. The patterns established now will likely determine whether AI becomes the most democratizing technology in human history or another force that concentrates power among those who already have it.

For this reason, I believe the work of all the panelists—from Lady Jamme’s coding education for refugee girls to Dr. Oullier’s brain-computer interfaces for people with disabilities to Dr. Lewis’s indigenous AI frameworks—represents essential pieces of a larger puzzle. By bringing these diverse perspectives into conversation, we begin to outline what truly inclusive AI might look like.

My commitment, both through my companies and personal projects, is to push toward democratization—creating and advocating for AI systems that amplify human potential across all segments of society. Because ultimately, the most powerful form of collective intelligence will be one where everyone has the opportunity to contribute and benefit.

Deep Dive: Building an Agenda for Democratizing AI

Moving from principles to action requires a concrete agenda. Here are key initiatives that can advance AI democratization:

Educational Transformation

  • Develop AI literacy curricula for diverse educational contexts
  • Create learning pathways that don’t require traditional technical backgrounds
  • Build educational AI assistants that adapt to different learning styles and contexts

Infrastructure Development

  • Invest in computing infrastructure in underserved regions
  • Develop models optimized for resource-constrained environments
  • Create shared computing resources for communities to fine-tune and adapt models

Governance Innovation

  • Establish multi-stakeholder governance models for AI development
  • Create participatory processes for setting AI research priorities
  • Develop community oversight mechanisms for deployed AI systems

Economic Models

  • Design compensation systems for diverse forms of AI contribution
  • Create funding mechanisms for AI development aligned with community needs
  • Build marketplaces that value non-traditional AI expertise

Technical Research Priorities

  • Advance multilingual capabilities beyond dominant languages
  • Develop better techniques for smaller, more efficient models
  • Create tools for easier customization and adaptation of AI systems

At Flatiron Software, we’re focusing particularly on tools that enable domain experts without technical backgrounds to effectively direct and customize AI systems. We believe this represents a crucial bridge between technical capability and broad participation.

The challenge is substantial, but so is the opportunity. By approaching AI development with democratization as a primary goal rather than an afterthought, we can create a technological revolution that truly expands human capability across all segments of society.

Photos from the evening: Rajiv and Mokena; Rajiv and Marieme; Vukosi and Rajiv; Robin and Rajiv; Mokena, Barbara (fellow YGL), and Rajiv.


This panel was hosted by Mokena Makeka, Director of the Benjamin Menschel Civic Projects Lab at Cooper Union on May 7, 2025. The lab serves as a bridge between academia, business, and society, focusing on work that serves the greater good.

Rajiv Pant is President at Flatiron Software and Snapshot AI, where he leads organizational growth and AI innovation while serving as a trusted advisor to enterprise clients on their AI transformation journeys. He is also an early investor in and senior advisor to you.com, an AI-powered search engine founded by Richard Socher and Bryan McCann. With a background spanning CTO roles at The Wall Street Journal, The New York Times, and other major media organizations, Rajiv brings deep expertise in language AI technology leadership and digital transformation. He writes about artificial intelligence, leadership, and the intersection of technology and humanity.