Recent talks by Nvidia CEO Jensen Huang focus on AI infrastructure and data centers, not a specific pricing or tracking plan built around "output units."

In short: Huang's recent public talks describe AI as major new infrastructure; a future explicitly built around "output units" does not appear in them.
Some commentary has described Huang as outlining a future based on the "production, consumption and monetisation of output units." But in recent public sources from early 2026, including coverage tied to Davos and Nvidia's GTC keynote, that exact idea and wording do not appear.
Instead, Huang's consistent message is that AI is becoming basic infrastructure, like electricity grids or the internet. He frames it as a multi-layer build-out that has to scale all at once, and he often describes five linked layers: energy, chips, cloud data centers (warehouses of computers), AI models (the trained systems), and applications (the tools people use).
A recurring example is what Nvidia calls "AI factories," meaning large data centers designed mainly to train and run AI. Think of them as industrial plants, except that what they "produce" is AI results, such as answers, images, or predictions. He also points to Nvidia's long-running CUDA software ecosystem (tools that help developers use Nvidia chips) and to newer data libraries meant to handle both neat tables of data and messier information like text.
It is possible that "output units" is being used as loose shorthand for measurable AI work, such as tokens (small chunks of text) or paid usage. If Nvidia later adopts such a metric for pricing or reporting, it would likely show up in product notes, earnings materials, or official speeches.
Source: Financial Times