NVIDIA CEO Says OpenClaw Did in 3 Weeks What Linux Took 30 Years to Achieve

At this week's Morgan Stanley Technology, Media & Telecom Conference, NVIDIA CEO Jensen Huang called out OpenClaw as "probably the single most important release of software, probably ever."
If you look at OpenClaw and the adoption of it, Linux took some 30 years to reach this level. OpenClaw in, what is it, 3 weeks, has now surpassed Linux. It is now the single most downloaded open source software in history, and it took 3 weeks.
The statement is remarkable on its own. But when you unpack what Huang actually said across the full session, the picture gets much bigger.
From Queries to Actions
Huang framed OpenClaw as the clearest proof that AI is undergoing a fundamental paradigm shift. The era of chatbots answering questions is giving way to agents that perform real work.
The last prompt was queries. This prompt is actions. They're tasks. Do something for me.
The previous wave of AI interaction was built on retrieval: "what is," "when is," "who is." OpenClaw flipped that into execution: "create," "do," "build," "write." A user sends a message on WhatsApp or Slack, and an autonomous agent handles the rest: scheduling meetings, pulling reports, updating a CRM, triggering workflows, and summarizing the outcome. No new tab. No manual intervention.
This distinction between queries and tasks is the single biggest driver of the compute explosion NVIDIA is banking on.
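The query-to-task distinction can be sketched in a few lines. The tool names below (`calendar.schedule`, `crm.update`, `chat.reply`) are hypothetical, not OpenClaw's actual API, and a real agent would plan its steps with a model rather than hard-code them; this is only an illustration of the paradigm shift.

```python
# Illustrative sketch of the query-to-action shift. Tool names are hypothetical.

def answer_query(question: str) -> str:
    """Old paradigm: retrieve information and respond with text."""
    return f"Here is what I found about {question!r}."

def run_task(instruction: str) -> list[str]:
    """New paradigm: decompose an instruction into tool calls and execute them."""
    # A real agent would derive these steps from the instruction via a model;
    # here they are fixed to show the shape of an execution trace.
    steps = [
        ("calendar.schedule", "sync with sales team"),
        ("crm.update", "log meeting outcome"),
        ("chat.reply", "send summary to user"),
    ]
    log = []
    for tool, arg in steps:
        log.append(f"{tool}({arg!r}) -> ok")  # execute and record each tool call
    return log

print(answer_query("Q4 revenue"))           # a query ends with an answer
for line in run_task("prepare the Q4 sales review"):
    print(line)                             # a task ends with work performed
```

The key difference: a query terminates in text for a human to act on, while a task terminates in side effects, which is why each task fans out into many model and tool invocations.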
The Numbers Behind the Hype
OpenClaw's growth is not just a talking point. The numbers are staggering.
Peter Steinberger published the first version of Clawdbot in November 2025 as a weekend hack, connecting a chat app with Claude Code. It took him about an hour. Following a trademark dispute with Anthropic, the project was renamed Moltbot on January 27, 2026, and then OpenClaw three days later.
By February 24, OpenClaw had surpassed Linux on GitHub. By March 1, it overtook React to become the most starred non-aggregator software project on GitHub with over 250,000 stars. React held that position unchallenged for years and took over a decade to reach those numbers. OpenClaw did it in roughly 60 days.
On February 14, Steinberger announced he would be joining OpenAI, and the project would be moving to an open source foundation. Over 600 contributors and 35,000 forks later, the project shows no sign of slowing down.
If you look at the line even in semi-log, this thing is straight up. It's vertical. It looks like the Y-axis.
Why OpenClaw Matters to NVIDIA
Huang described AI as a "five layer cake," a framework he first laid out at Davos earlier this year. The five layers are:
- Energy and power generation at the base
- Chips and computing infrastructure
- Cloud data centers
- AI models
- Applications at the top
Huang emphasized that "this layer on top, ultimately, is where economic benefit will happen." OpenClaw and AI agents sit squarely in that applications layer, demonstrating how AI embedded in a hyper-personalized environment can replicate human workloads from simple prompts.
OpenClaw's rapid adoption was not driven by technical complexity. It showed the world that AI has use cases that directly impact everyday life, making repetitive tasks dramatically easier. For companies like NVIDIA, agents like OpenClaw have created compute demand the industry never anticipated.
1,000x Token Consumption
With agents now performing bulk web searches, image generation, complex analysis, and other workloads, token consumption has risen by 1,000 times, according to Huang. But the real multiplier is even larger.
A single person using a chatbot might generate 50 queries per day. An autonomous agent performing the same job function might generate 50,000 API calls per day. When fleets of these agents run continuously in the background across an enterprise, consumption scales to roughly 1 million times the tokens of simple conversational queries.
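The arithmetic behind those multipliers is easy to make explicit. The figures below are the illustrative ones from the text plus an assumed fleet size; they are back-of-the-envelope values, not measured data.

```python
# Back-of-the-envelope check of the token multipliers described in the text.

chat_queries_per_day = 50        # one person, conversational chatbot use
agent_calls_per_day = 50_000     # an agent doing the same job function

# Per-user multiplier: agent calls vs. chat queries.
per_user_multiplier = agent_calls_per_day / chat_queries_per_day
print(f"{per_user_multiplier:,.0f}x per user")

# Assumed enterprise fleet running continuously in the background.
agents_per_enterprise = 1_000
fleet_multiplier = per_user_multiplier * agents_per_enterprise
print(f"{fleet_multiplier:,.0f}x vs. a single chat user")
```

The 1,000x figure is per user; the million-fold figure only emerges once an assumed fleet of always-on agents is layered on top, which is the step that makes the demand curve structural rather than incremental.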
The amount of compute in our company that we need has just got skyrocketed.
This has created what Huang describes as a structural vacuum: no matter how large hardware deployments become, they remain supply-constrained as long as agentic AI keeps spreading into human workloads. NVIDIA posted $68.1 billion in Q4 FY2026 revenue, beating guidance by roughly $3 billion, and guided Q1 FY2027 at $78 billion. Supply commitments rose from $50.3 billion to $95.2 billion quarter over quarter.
In this new world of AI, compute is revenues. Without compute, there's no way to generate tokens. Without tokens, there's no way to grow revenues.
The $700 Billion Buildout Is Just the Start
The five largest US cloud and AI infrastructure providers, Microsoft, Alphabet, Amazon, Meta, and Oracle, have collectively committed to spending between $660 billion and $690 billion on capital expenditure in 2026. That is nearly double 2025 levels.
Huang's message to investors is blunt: this is not the peak.
This new way of doing computing is not going to go back. Businesses are going to be building out this capacity from this point forward and continue to expand from here.
He has characterized the current AI buildout as "the largest infrastructure buildout in human history." If these five companies continue doubling capex annually, spending could reach $2.8 trillion by 2028. The agentic AI wave, driven by projects like OpenClaw, is a major reason why.
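A quick sanity check on that projection, taking the upper end of the committed 2026 range and assuming two more annual doublings (an assumption of the article's scenario, not a forecast):

```python
# Rough check of the capex projection using the figures in the text.
capex = 690e9                 # upper end of the committed 2026 range, in dollars

for year in (2027, 2028):
    capex *= 2                # assume annual doubling continues
print(f"${capex / 1e12:.1f}T in 2028")
```

Two doublings of roughly $690 billion land at about $2.8 trillion, matching the figure cited.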
What This Means for Compute
Hopper and Blackwell focused on training workloads. With the Vera Rubin platform, unveiled at CES 2026, NVIDIA plans to address agentic AI constraints head-on.
Rubin promises a 5x boost in inference performance and a 10x reduction in token costs for Mixture of Experts models compared to Blackwell. The platform introduces a new category of processor called Rubin CPX, the first CUDA GPU purpose-built for massive-context AI, where models reason across millions of tokens of knowledge at once.
Perhaps most significant for the agentic era is Inference Context Memory Storage (ICMS), an AI-native infrastructure tier built into the Rubin platform through the BlueField-4 DPU. ICMS establishes a pod-level context memory layer: an Ethernet-attached, flash-based tier optimized specifically for ephemeral, latency-sensitive KV cache. This allows autonomous agents to maintain memory over much longer interactions than was previously possible, solving one of the core bottlenecks of running always-on agents at scale.
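To see why a dedicated KV-cache tier matters at million-token contexts, here is a footprint calculation. The model dimensions are assumptions for a generic 70B-class transformer with grouped-query attention, not published Rubin or ICMS specifications.

```python
# KV-cache footprint per token for an assumed 70B-class model (illustrative).
n_layers = 80        # transformer layers
n_kv_heads = 8       # key/value heads under grouped-query attention
head_dim = 128       # dimension per attention head
dtype_bytes = 2      # fp16/bf16 storage

# Each token stores one key and one value vector per KV head, per layer.
bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * dtype_bytes

context_tokens = 1_000_000
cache_gb = bytes_per_token * context_tokens / 1e9
print(f"{bytes_per_token} bytes/token, ~{cache_gb:.0f} GB for 1M tokens")
```

Even with grouped-query attention shrinking the cache, a single million-token context runs to hundreds of gigabytes, far beyond one GPU's HBM, which is the bottleneck a flash-based, pod-level tier is meant to absorb.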
Given the massive compute-to-token imbalance, demand for Rubin should be enormous.
Enterprise Adoption Is Already Moving
According to a 2025 KPMG global pulse survey, 91% of leaders anticipate AI will significantly improve operations within two years, and 65% are already piloting AI agents. Platform-based AI adoption is projected to reduce per-workflow costs by 40% to 70%.
But adoption is not waiting for enterprise approval. CrowdStrike and CyberArk have both published reports noting that 22% of organizations already have employees running OpenClaw without formal authorization. The security implications of autonomous agents acting with delegated authority are significant, and companies like Steptoe, Lyzr, and MintMCP are rushing to build governance frameworks around it.
This is both the opportunity and the challenge. OpenClaw is not a controlled rollout. It is a grassroots movement that enterprises are now racing to catch up with.
What Comes Next
GTC 2026 runs March 16 to 19. Huang will present NVIDIA's full agentic AI roadmap, including Rubin production timelines and enterprise agent deployment frameworks. If the Morgan Stanley session was the thesis, GTC is the product launch.