The U.S. AI Data-Center Boom: who’s building what, where, and with whose hardware

As AI moves from research labs into production, a wave of hyperscale AI data-center projects is reshaping the U.S. industrial map. Tech giants, cloud providers and specialized “neoclouds” are racing to add hundreds of megawatts — and in some cases gigawatts — of dedicated AI compute. Below are the headline projects (ongoing or recently completed) that are driving that shift.


1) OpenAI — “Stargate” (Abilene, Texas & expanding US footprint)

Owner / Partners: OpenAI, Oracle, SoftBank (partners on Stargate program)
What it is: Stargate is OpenAI’s ambitious AI infrastructure program — Abilene, Texas is the flagship U.S. site and OpenAI has announced multiple additional U.S. Stargate sites as part of expansion. The program targets gigawatt-scale AI capacity across multiple sites.
Capacity / scale: OpenAI has signaled plans to bring Stargate's U.S. footprint to multiple gigawatts; its announcements reference nearly 7 GW of planned capacity when U.S. and further global sites are combined.
Compute / server providers: Stargate facilities run large fleets of accelerator hardware, with major server and interconnect suppliers among the partners. The program is structured as an industry cooperation across cloud and systems providers.
Source: OpenAI announcement on Stargate expansion. 


2) Amazon Web Services — Project Rainier (New Carlisle, Indiana & other sites)

Owner: Amazon / AWS
What it is: AWS’s Project Rainier is a large AI compute cluster campaign that has moved from announcement to operational stages; Amazon publicly describes it as one of the world’s largest AI compute deployments.
Capacity / scale: AWS reported deploying nearly half a million Trainium2 chips in one large cluster and expects to scale further across regions. Anthropic and other customers are already running production AI workloads on Rainier.
Compute / server providers: Trainium2 (AWS-designed AI accelerators) are the core compute; Amazon deploys them at hyperscale in its own server designs.
Source: AWS Project Rainier announcement and coverage. 


3) Microsoft — Major US AI campus (Wisconsin + national capacity deals)

Owner: Microsoft (Azure / Microsoft CoreAI)
What it is: Microsoft has publicly committed to building world-class AI data-center capacity in the U.S.; its flagship AI build in Wisconsin was described by Microsoft at launch as "the world's most powerful AI datacenter." Microsoft is also investing heavily in partnerships with neoclouds to secure GPU supply.
Capacity / scale: Microsoft reports adding multiple gigawatts of capacity globally and has committed multi-billion-dollar investments in Wisconsin (an initial build of roughly $3.3B, later expanded). Microsoft has also signed large access deals for hundreds of thousands of GPUs with neocloud providers.
Compute / server providers: Microsoft runs Nvidia GPUs for many AI workloads and has secured access to large volumes of Nvidia GB300 class accelerators via deals with neoclouds and hardware partners.
Source: Microsoft blog and reporting on Microsoft’s Wisconsin AI datacenter and supplier deals. 


4) Google — Major Texas expansion for AI capacity

Owner: Alphabet / Google Cloud
What it is: Google announced a reported $40 billion investment to build new data centers in Texas focused on cloud and AI compute expansion. The plan covers multiple large campuses in West Texas and the Panhandle.
Capacity / scale: A multi-site, large-scale investment designed to expand Google Cloud and Vertex AI capacity in the U.S.
Compute / server providers: Google uses its own TPU and GPU infrastructures and partners for server systems. New builds are intended to expand Google’s internal AI training and serving capacity.
Source: Reuters & DataCenter Magazine reporting on Google’s $40B Texas investment. 


5) xAI (Elon Musk) — Tennessee site (Memphis area)

Owner: xAI (Elon Musk)
What it is: xAI built a high-power AI training site in Tennessee. The project has received scrutiny for rapid expansion of on-site gas turbine capacity and permitting issues. xAI’s build highlights how some AI firms are deploying local generation to secure energy for heavy compute.
Capacity / scale: Reports indicate hundreds of megawatts of on-site generation (turbines) and large GPU fleets for Grok model training.
Compute / server providers: xAI runs large custom GPU clusters (reportedly Nvidia-based); exact server vendors vary. The project demonstrates the close coupling of power generation and AI data-center operations.
Source: Reuters coverage of xAI’s Tennessee operations and permitting controversies. 


6) CoreWeave, Nebius, Lambda and other “neocloud” providers (multiple sites, USA)

Owners: CoreWeave, Nebius, Lambda, Nscale, et al. (independent AI infrastructure firms)
What they are: Specialist providers that design and run GPU-optimized data centers for AI customers. These neoclouds are rapidly expanding capacity and signing large GPU purchase/placement deals with hyperscalers.
Capacity / scale: Deals reported in the tens of thousands to hundreds of thousands of GPUs; some neoclouds have multi-billion-dollar expansion plans (e.g., CoreWeave’s multi-billion project in Pennsylvania).
Compute / server providers: Primarily Nvidia accelerators (GB300 series, etc.) and bespoke rack designs from leading OEMs and systems integrators. Microsoft’s deals for hundreds of thousands of GB300s with neocloud partners were recently reported.
Source: Industry reporting and analysis of Microsoft-neocloud and CoreWeave plans. 


7) Meta, Cologix and other hyperscale developments (US campuses)

Owners: Meta (Facebook), Cologix, and other hyperscale operators
What they are: Meta continues to expand heavy AI capacity (including large planned campuses such as Richland Parish, LA), while data-center real-estate firms (Cologix, QTS, etc.) are building or planning large campuses capable of supporting AI workloads. These projects are often multi-billion-dollar investments spanning millions of square feet of IT space.
Capacity / scale: Projects range from several hundred MW to multi-GW campus ambitions; Meta’s large planned sites are among the costliest single projects.
Compute / server providers: Meta designs its own Open Compute servers and teams with major OEMs for rack builds; these campuses invariably consume massive volumes of copper and require high-density power distribution and advanced cooling.
Source: Industry reporting on Meta large projects and lists of top planned data centers. 


What ties these projects together (and why copper matters)

  1. Power density & distribution — AI training clusters draw huge sustained power. Copper busbars, high-amp cabling and robust power distribution systems are required to feed racks and scale to megawatt densities per room. (See Microsoft, AWS and OpenAI projects for scale). 

  2. Cooling infrastructure — Liquid cooling loops and heat exchangers in modern AI racks rely heavily on copper for thermal transfer and piping. 

  3. Networking & short-reach interconnects — While long-haul links use fiber, short-reach, shielded copper and hybrid copper/fiber assemblies remain common inside campuses and for PDUs (power distribution units). 
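The power and cooling points above can be made concrete with some back-of-the-envelope arithmetic. The figures below are illustrative assumptions, not vendor specifications: a hypothetical 120 kW liquid-cooled AI rack, a 415 V three-phase feed at 0.95 power factor, a conservative ~2 A/mm² busbar current density, and a 10 °C coolant temperature rise.

```python
import math

def rack_feed_current(rack_kw, line_voltage=415.0, power_factor=0.95):
    """Three-phase line current in amps: I = P / (sqrt(3) * V * PF)."""
    return rack_kw * 1000.0 / (math.sqrt(3) * line_voltage * power_factor)

def busbar_area_mm2(current_a, amps_per_mm2=2.0):
    """Copper busbar cross-section from a conservative ~2 A/mm^2
    current-density rule of thumb."""
    return current_a / amps_per_mm2

def coolant_flow_lpm(rack_kw, delta_t_c=10.0, cp_j_per_kg_k=4186.0):
    """Water flow (litres/min) to carry rack_kw away at a delta_t_c rise:
    m_dot = Q / (cp * dT), then kg/s converted to L/min (1 kg ~ 1 L)."""
    kg_per_s = rack_kw * 1000.0 / (cp_j_per_kg_k * delta_t_c)
    return kg_per_s * 60.0

if __name__ == "__main__":
    kw = 120.0  # hypothetical high-density AI rack
    amps = rack_feed_current(kw)
    print(f"{kw:.0f} kW rack: ~{amps:.0f} A per feed, "
          f"~{busbar_area_mm2(amps):.0f} mm^2 of copper busbar, "
          f"~{coolant_flow_lpm(kw):.0f} L/min of coolant")
```

Even this single-rack sketch (roughly 175 A feeds, tens of square millimetres of busbar copper, and on the order of 170 L/min of coolant flow) shows why copper demand compounds quickly when a campus scales to thousands of racks.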


Quick takeaways for investors and traders

- The U.S. is experiencing an AI data-center build cycle of unprecedented scale; projects span hyperscale clouds (Amazon, Microsoft, Google), AI firms (OpenAI, xAI), and specialized neoclouds. 

- These builds are capital-intensive and power-hungry — accelerating demand for copper (power distribution, cooling and networking) as well as other critical metals.

- Contractors, regional utilities and large server OEMs and systems integrators are central: expect multi-year procurement pipelines for copper cabling, busbars, switchgear, and liquid-cooling hardware.
