Global AI and HUMAIN Partner to Accelerate Sovereign AI with Large-Scale NVIDIA-Powered AI Data Centers
Partnership with HUMAIN strengthens Global AI's position as the world's leading provider of sovereign AI infrastructure.
[SAN JOSE, Calif., March 16, 2026] — Global AI, an NVIDIA Cloud Partner and American provider of sovereign AI infrastructure, today announced that it has completed its deployment of NVIDIA GB300 NVL72 systems at its Endicott facility in New York, where it now operates the largest NVIDIA GB300 NVL72 cluster in the state. The company also announced plans to deploy the NVIDIA Vera Rubin platform across its U.S. data center footprint.
The continued capacity expansion of the Endicott facility in New York, including the planned NVIDIA Vera Rubin NVL72 deployment, marks Global AI’s next step in building secure, high-density sovereign AI infrastructure for model training, large-scale inference, and sovereign-cloud integration within fully secure ecosystems. Building on the company’s access to the latest GPU technologies, including NVIDIA GB300 NVL72, the planned Vera Rubin deployment underscores Global AI’s ability to deliver cutting-edge, high-performance AI compute.
“These deployments underscore Global AI’s compute-first strategy and disciplined execution,” said Michael Jeter, Director and Head of Worldwide Sales at Global AI. “With our success in New York and our upcoming Vera Rubin rollout, we are extending our sovereign infrastructure platform to support clients through the next frontier of reasoning-intensive workloads, while maintaining the operational discipline, security posture, and data-sovereignty controls our customers require.” He added, “In an era where public LLMs risk diluting differentiation, Global AI provides the sovereign infrastructure to keep proprietary intelligence under enterprise control, ensuring your competitive edge doesn’t become the market’s baseline.”
“The NVIDIA Vera Rubin platform is engineered with extreme co-design across six new chips to power agentic AI and reasoning-intensive workloads within a secure, sovereign infrastructure,” said Dave Salvator, Director of Accelerated Computing Products at NVIDIA. “Global AI’s deployment of NVIDIA Blackwell, and their commitment to the Vera Rubin platform, clearly demonstrates their focus on delivering performant, sovereign AI at scale.”

Global AI’s deployment program focuses on the seamless integration of compute, networking, storage, and liquid cooling to ensure every subsystem is engineered for sustained high utilization. The company works with leading technology infrastructure provider Supermicro (SMC), whose high-performance server platforms underpin the rack-scale architecture supporting Global AI’s NVIDIA-powered deployments.
This approach centers on high-density performance, extending from the company’s current NVIDIA GB300 NVL72 deployment in New York to a roadmap that incorporates NVIDIA Vera Rubin NVL72 racks. Once deployed, these Vera Rubin-based systems will provide the core acceleration for the next generation of training and inference workloads.
The NVIDIA Vera Rubin platform represents a step change in how large-scale AI systems are designed and operated. It advances rack-scale AI by integrating CPUs, GPUs, and high-bandwidth interconnect in a single system. This architecture supports higher utilization, lower latency, and more predictable performance for large-scale AI workloads — helping customers improve performance per watt, reduce orchestration complexity, and scale efficiently without stitching together disjointed systems.
Global AI is a U.S.-based, vertically integrated sovereign AI infrastructure company and the world’s first sovereign AI hyperscaler, designing, building, powering, and operating single-tenant, air-gapped AI data centers that enable nations and enterprises to develop and deploy artificial intelligence within their own jurisdictions. Global AI’s fully integrated model spans land, energy, construction, advanced liquid cooling, and GPU-dense compute, ensuring complete physical and operational separation across the entire stack. With infrastructure deployments across the United States, Global AI is executing a disciplined expansion strategy toward 1 gigawatt of critical capacity by the end of 2029, delivering secure, compliant, and sovereign AI infrastructure at national scale.
Media Contact
Raeda Saraireh
Head of Marketing & Communications, Global AI
Raeda.Saraireh@globalai.com