On January 20, 2026, a three-month-old AI startup with no product announced a $480 million seed round at a $4.48 billion valuation. The company: Humans&, founded by former researchers from OpenAI, Anthropic, xAI, Google DeepMind, and Meta.
The round represents the second-largest seed in venture capital history, trailing only Mira Murati’s Thinking Machines Lab ($2 billion at $12 billion in July 2025).
What makes Humans& interesting isn’t just the staggering valuation. It’s the philosophical rejection of where AI is headed. While OpenAI, Anthropic, and Google race to build autonomous agents, systems designed to work independently and replace human labor, Humans& is betting $480 million on the opposite: AI that makes people better at working together.
The Founding Team: Elite AI Lab Escapees
Humans& was founded in September 2025 by five co-founders bringing deep credentials from the world’s leading AI labs:
- Eric Zelikman (CEO): former xAI researcher who worked on Grok-2 pretraining and reinforcement learning; among the first employees at Elon Musk’s xAI.
- Andi Peng: former Anthropic research scientist who worked on reinforcement learning and post-training for Claude 3.5 through 4.5; left specifically because she disagreed with Anthropic’s push toward autonomous AI.
- Georges Harik: Google employee #7, who built Google’s first advertising systems (AdWords, AdSense), contributed to Gmail, initiated Google Docs, and led the Android acquisition; now an investor and co-lead on the funding round.
- Yuchen He: former xAI researcher who worked on Grok chatbot development.
- Noah Goodman: Stanford professor of psychology and computer science and former Google DeepMind researcher; brings interdisciplinary expertise bridging cognitive science and AI.
The broader team of approximately 20 employees includes alumni from OpenAI, Meta, the Allen Institute for AI, MIT, and Stanford.

The $480 Million Round: Who Invested
The January 2026 funding was led by SV Angel (Ron Conway’s firm) and Georges Harik (co-founder).
Major investors include:
- NVIDIA (strategic partnership for hardware/software)
- Jeff Bezos
- GV (Google Ventures)
- Emerson Collective (Laurene Powell Jobs)
- Forerunner, S32, DCVC, Felicis, CRV
- Individual investors: Anne Wojcicki (23andMe), Marissa Mayer (ex-Yahoo CEO), Bill Maris (ex-GV CEO), Thomas Wolf (Hugging Face), Igor Babuschkin (ex-OpenAI)
At $4.48 billion post-money valuation, Humans& became a unicorn on day one. The 9.3x capital multiple aligns with other elite AI lab spinouts (Thinking Machines at 6x, Unconventional AI at ~9.5x).
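The capital multiples quoted above are simply post-money valuation divided by the amount raised, which can be sanity-checked in a few lines (the figures are the ones reported in this article):

```python
# Capital multiple = post-money valuation / amount raised.
# Figures (in billions of USD) are the ones reported in the article.
rounds = {
    "Humans&": (4.48, 0.48),           # $4.48B post-money on a $480M seed
    "Thinking Machines": (12.0, 2.0),  # $12B post-money on a $2B seed
}

for name, (valuation, raised) in rounds.items():
    multiple = valuation / raised
    print(f"{name}: {multiple:.1f}x")
```

Running this reproduces the roughly 9.3x multiple for Humans& and the 6x multiple for Thinking Machines.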
The NVIDIA partnership is particularly significant—providing access to cutting-edge GPU infrastructure, the primary bottleneck for training large-scale AI models.
What Humans& Is Building
Despite raising nearly half a billion dollars, product details remain scarce. The company describes itself as a “human-centric frontier AI lab” building AI that “serves as a deeper connective tissue that strengthens organizations and communities.”
The vision: Think “an AI version of instant messaging”—but where AI actively facilitates group collaboration rather than executing individual tasks.
The technical problem: What Zelikman calls the “stranger problem.” Current AI interactions feel like repeatedly meeting someone new because models lack long-term memory. Every conversation starts from scratch.
Humans& wants to train models that:
- Remember conversations across sessions
- Ask clarifying questions proactively
- Build persistent understanding of users and teams
- Facilitate group decision-making (e.g., helping teams reach consensus on logos, strategies, hiring)
- Coordinate between multiple humans and AI agents on complex tasks
Key innovations:
- Long-horizon and multi-agent reinforcement learning
- Persistent memory that survives across sessions
- User understanding (adapting to communication styles and preferences)
- Proactive coordination rather than reactive prompting
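To make the “stranger problem” concrete, here is a minimal, hypothetical sketch of what session-persistent memory plus proactive clarification might look like. Nothing here comes from Humans&’s actual (unreleased) system; every name and the JSON-file persistence are illustrative stand-ins for learned model state:

```python
import json
from pathlib import Path

# Hypothetical stand-in for persistent team memory; a real system would
# presumably use learned model state, not a JSON file on disk.
MEMORY_FILE = Path("team_memory.json")

def load_memory() -> dict:
    """Restore what earlier sessions learned about the user or team."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return {"preferences": {}, "open_questions": []}

def save_memory(memory: dict) -> None:
    """Persist the memory so the next session does not start from scratch."""
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

def handle_turn(memory: dict, message: str) -> str:
    """Respond using remembered context; ask proactively when context is missing."""
    if "logo" in message and "brand_color" not in memory["preferences"]:
        # Proactive clarification instead of guessing (the article's logo example).
        memory["open_questions"].append("brand_color")
        return "Before I draft logo options: what is your brand color?"
    return f"Working with remembered preferences: {memory['preferences']}"
```

The point of the sketch is the contrast with today’s single-task interactions: state survives across sessions, and the assistant asks before acting rather than only reacting to prompts.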
Product launch: Planned for “early this year” (2026), targeting enterprise decision-making, research collaboration, project management, and consumer family coordination.
The Philosophical Disagreement: Automation vs. Augmentation
To understand Humans&, you need to understand the broader AI debate.
Current industry consensus (OpenAI, Anthropic, Google): Build autonomous agents that complete 8-hour coding tasks overnight, conduct comprehensive research, write entire documents, and make decisions independently. If AI can replace a $150,000/year employee, the economic value is obvious.
The Humans& counter-argument: This approach has fallen into a “task-centric trap”—optimizing for individual tasks while neglecting the emotional intelligence required for human collaboration. As the company states: “No one really accomplishes anything alone. Progress happens when we understand one another, build trust, and work together.”
Current AI systems are trained for single-user, single-task interactions. They don’t understand group dynamics or consensus-building.
The structural tension: The automation approach assumes AI replaces workers (reducing costs). The augmentation approach assumes AI makes workers more productive (same team, more output). In theory both can be true. In practice, enterprise incentive structures nearly always favor cost reduction and headcount elimination.
This is the challenge Humans& must navigate. The company can build collaborative AI, but whether organizations use it for augmentation versus replacement depends on incentives Humans& doesn’t control.
Competition: A Crowded Field
Humans& faces competition from established players and startups:
- Anthropic’s Claude Cowork – Launched January 2026, brings Claude capabilities to team productivity across departments
- Google’s Gemini in Workspace – AI integrated into Gmail, Docs, Meet with 1+ billion users
- OpenAI’s multi-agent systems – Multiple GPT instances collaborating on complex projects
- Microsoft Copilot – Deeply integrated into Microsoft 365
- Granola – $43M raised specifically for AI-powered meeting collaboration
Humans& differentiation: Training models from the ground up for collaboration as a first-class task, not adding features to existing platforms. Whether this foundational approach delivers better results than feature integration remains unproven.
The Cautionary Tale: Thinking Machines Lab
Any discussion of Humans& requires mentioning Thinking Machines Lab, founded by former OpenAI CTO Mira Murati. The parallels are striking:
- Both raised historically large seed rounds (Thinking Machines $2B, Humans& $480M)
- Both founded by elite AI lab veterans
- Both emerged with no products, just vision and pedigree
- Both valued at multi-billion-dollar unicorn status
But Thinking Machines’ trajectory offers a warning. After raising $2 billion in July 2025, the company:
- Lost multiple co-founders including CTO Andrew Tulloch
- Suffered an employee exodus in January 2026 when researchers returned to OpenAI
- Saw planned follow-on funding stall
- Became a cautionary tale about hype-driven AI funding
What went wrong: No clear product direction, talent poaching by competitors, internal disagreements over strategy, and investor impatience when progress lagged expectations.
Does Humans& face similar risks? Absolutely. But there are key differences: a smaller, focused team (20 vs. 50+ employees), a clearer technical thesis, a product launch planned for Q1 2026, and a more modest valuation that creates less pressure.
What Comes Next
Humans& faces the challenge every mega-funded startup confronts: turning capital and pedigree into products people use.
Immediate priorities:
- Ship first product early 2026
- Expand from 20 to 50-100 employees
- Train initial models using NVIDIA partnership
- Secure enterprise pilot customers
- Establish product-market fit
Realistic scenarios:
Success (20% probability): Breakthrough collaborative AI becomes essential team infrastructure. Scales to hundreds of millions in revenue, IPOs or acquired for $10-20B.
Moderate Success (35% probability): Solid product serving niche market (consulting, research). $50-200M annual revenue, acquired for $2-4B by Slack/Notion/Microsoft.
Struggle/Pivot (30% probability): Initial product doesn’t gain traction. Multiple pivots before finding product-market fit. Valuation stagnates.
Failure (15% probability): Team fractures, talent poached, technical approach flawed. Burns through capital without traction. Winds down or sells for <$1B.
Why This Matters
Regardless of success, Humans& represents a visible fork in AI development. For three years, the industry converged on autonomous agents. This is the first well-funded attempt at a fundamentally different approach.
If Humans& succeeds, it proves:
- Collaboration-first training produces differentiated capabilities
- Enterprises prefer augmentation over automation when given real alternatives
- Making teams more effective exceeds the value of headcount reduction
- AI that “works with humans” can scale to large markets
If Humans& struggles, it validates the industry consensus: autonomous AI is correct, and collaborative positioning is marketing that doesn’t translate into superior products.
Either way, at $480 million with an elite team, the experiment gets a fair test.
The Bottom Line
Humans& launched with three advantages: pedigree (founders who built ChatGPT/Claude/Grok), capital ($480M seed), and philosophy (AI should augment humans, not replace them).
The first two aren’t unique. Thinking Machines had both and struggled. The third is more interesting. If “human-centric AI” is just marketing, competitors will add similar features. But if the founders genuinely believe autonomous AI is the wrong path, that conviction could drive better decisions.
The challenge: philosophy doesn’t build products. The team now faces the unglamorous work of turning vision into software people actually use, software that works better than alternatives from companies with 100x more resources.
Will collaboration-first AI training justify a $4.48 billion valuation? Or will Humans& discover pedigree and capital aren’t sufficient to overcome incumbents with distribution, data, and scale?
We’ll find out when the product launches. Until then, Humans& represents the most serious bet yet that AI’s future isn’t full automation—it’s humans and machines working better together.
And if the founders are wrong? At least they’ll have lost $480 million trying to prove AI should empower people, not replace them.
That’s more interesting than most failures.