NVIDIA’s Vision For AI Factories – ‘Major Trend in the Data Center World’

NVIDIA kicked off the Data Center World 2025 event this week in Washington, D.C., with a bold vision for the future of AI infrastructure.
In his keynote, Wade Vinson, NVIDIA’s chief data center engineer, introduced the concept of AI-scale data centers: massive, energy-efficient facilities built to meet the soaring demand for accelerated computing. NVIDIA envisions sprawling “AI factories” powered by Blackwell GPUs and DGX SuperPODs, supported by advanced cooling and power systems from Vertiv and Schneider Electric.
“There is no doubt that AI factories are a major trend in the data center world,” said Vinson.
Completing phase one of an AI factory in Texas
Vinson pointed to the Lancium Clean Campus that Crusoe Energy Systems is building near Abilene, Texas. As he explained:
- The first phase of this AI factory is largely complete: 200 MW in two buildings.
- The second phase will expand the site to 1.2 GW and should be completed by mid-2026.
- The design includes direct-to-chip liquid cooling, rear-door heat exchangers, and air cooling.
- The expansion will add six buildings, bringing the facility to four million square feet.
- Ten gas turbines will be deployed to provide on-site power.
Additionally, each building will operate up to 50,000 NVIDIA GB200 GPUs in NVL72 rack systems on a single integrated network fabric, advancing the frontier of data center design and scale for AI training and inference workloads.
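The build-out figures above can be sanity-checked with some back-of-envelope arithmetic. The phase capacities and building counts come from the article; the per-building average is a rough derived estimate, not a stated design figure.

```python
# Back-of-envelope arithmetic for the Abilene build-out, using figures
# cited in the article (derived values are rough estimates only).

phase1_mw = 200          # phase one: 200 MW across two buildings
phase2_total_mw = 1200   # phase two expands the site to 1.2 GW
total_buildings = 2 + 6  # two existing buildings plus six additional ones

added_capacity_mw = phase2_total_mw - phase1_mw
avg_mw_per_building = phase2_total_mw / total_buildings

print(f"Capacity added in phase two: {added_capacity_mw} MW")
print(f"Average per building at full build-out: {avg_mw_per_building:.0f} MW")
```

At full build-out, that works out to an average of roughly 150 MW per building, an order of magnitude beyond a typical enterprise data center.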
Vinson said some AI factories will leverage on-site power, while others will take advantage of sites where power is already available. He pointed to old mills, manufacturing sites, and retail facilities that are already plugged into the grid.
For example, an old mall in San Francisco can be converted to an AI factory in months, rather than the many years required to complete new-build construction and obtain utility interconnects and permits. Such sites often have large roofs that can be used for solar power arrays.
Reconfiguring existing data centers into AI factories
What about existing data centers? Aging structures may struggle to accommodate NVIDIA gear and AI applications, but Vinson believes many colocation facilities (colos) are well positioned to transition into AI factories.
“Any colo built in the last 10 years has enough power and cooling to become an AI factory,” he said. “AI factories should be looked upon as a revenue opportunity rather than an expense.”
He estimates that AI could boost business and personal productivity by 10% or more, adding $100 trillion to the global economy.
“It represents a bigger productivity shift than happened due to the wave of electrification around the world that started about 100 years ago,” said Vinson.
Planning is key to AI factory success
Vinson cautioned those interested in building or running their own AI factories about the importance of planning. Power, cooling, and layout decisions need to be weighed before construction begins, and modeling is vital.
He touted NVIDIA’s Omniverse simulation tool as one way to correctly plan an AI factory. It uses digital twin technology to enable comprehensive modeling of data center infrastructure and design optimization. Failing to model in advance and simulate many possible scenarios can lead to inefficiencies in areas such as energy consumption and can extend construction timelines.
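A toy illustration of the kind of what-if modeling a digital twin enables: comparing the annual energy footprint of two candidate cooling designs before anything is built. The IT load and PUE values below are hypothetical, and this is ordinary Python rather than Omniverse code; real digital-twin models are far richer.

```python
# Hypothetical scenario comparison of the sort a digital twin supports.
# All numbers here are illustrative assumptions, not NVIDIA figures.

def annual_energy_gwh(it_load_mw: float, pue: float) -> float:
    """Total facility energy per year given IT load and PUE."""
    hours_per_year = 8760
    return it_load_mw * pue * hours_per_year / 1000  # GWh

it_load_mw = 100  # assumed IT load for one building

# Compare two design scenarios before construction starts.
baseline = annual_energy_gwh(it_load_mw, pue=1.5)   # air-cooled design
liquid = annual_energy_gwh(it_load_mw, pue=1.15)    # direct-to-chip liquid cooling

savings_pct = 100 * (baseline - liquid) / baseline
print(f"Baseline: {baseline:.0f} GWh/yr, liquid-cooled: {liquid:.0f} GWh/yr")
print(f"Modeled savings: {savings_pct:.1f}%")
```

Running such comparisons across many scenarios is exactly the kind of pre-construction modeling Vinson argues prevents energy inefficiencies and schedule slips.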
“Simulations empower data centers to enhance operational efficiency through holistic energy management,” said Vinson.
For example, many data center veterans may find it challenging to shift from traditional concepts of racks, aisles, and servers to GPU gear surrounded by liquid cooling and dense power distribution equipment.
AI factory designs will contain far more power and cooling equipment relative to server racks, so layouts will be radically different: GPU-powered SuperPODs generate much more heat than the equipment in typical data centers.
“Expect significant consolidation of racks,” said Vinson. “Eight old racks might well become one future rack with GPUs inside. It is essential to develop a simplified power and cooling configuration for the racks inside AI factories, as these will be quite different from what most data centers are used to.”
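Vinson’s “eight old racks become one” point can be sketched numerically. The per-rack power figures below are assumptions for the sake of illustration (roughly typical of an air-cooled CPU rack and a liquid-cooled GB200 NVL72-class rack), not numbers from the article.

```python
# Illustrative consolidation arithmetic. The per-rack power draws are
# assumed round numbers, not figures stated in the article.

legacy_rack_kw = 15    # assumed air-cooled CPU rack
gpu_rack_kw = 120      # assumed liquid-cooled GB200 NVL72-class rack

racks_consolidated = gpu_rack_kw / legacy_rack_kw
print(f"One {gpu_rack_kw} kW GPU rack draws as much power as "
      f"{racks_consolidated:.0f} legacy {legacy_rack_kw} kW racks")
```

The power that once spread across eight racks now concentrates in a single cabinet, which is why the cooling and power-distribution layout of an AI factory looks so different from a conventional data hall.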