• As demand grows for AI processing closer to end users, a collaboration between Prime Group and Hanwha is deploying distributed, energy-aware data centers by repurposing existing buildings across the U.S.
• Hanwha will provide power generation, storage, and energy management software that optimizes each facility's power use in real time. This model addresses two challenges at once: bringing AI infrastructure closer to demand, and ensuring that distributed facilities operate efficiently without straining local power grids.
• For data center customers, the result is facilities that come online faster, respond with lower latency, and operate at a lower cost per token.
As AI expands from training to real-time applications, infrastructure demands are diversifying. While hyperscale data centers remain central to AI development, a growing number of use cases benefit from localized processing closer to end users.
Inference — the stage where trained AI models are put to use — is expected to account for roughly two-thirds of all AI compute by 2026, up from a third in 2023. While training involves building models using vast datasets over extended periods, inference is what happens when those models are applied to real-world tasks, such as converting spoken language into text, or interpreting sensor data to trigger automated responses. These tasks are handled by servers running AI models inside data centers, where incoming data is processed and results are generated in real time.
Much of this activity will continue to run in large data centers, particularly for applications that are less sensitive to delays. But applications like computer vision, speech recognition, and industrial automation require low latency and local responsiveness that remote facilities are not designed to provide.
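The latency claim above can be made concrete with a back-of-envelope bound. The figures below are illustrative and not from the collaboration: signal propagation in fiber alone puts a floor on round-trip time that grows with distance to the data center, before any compute happens.

```python
def min_round_trip_ms(distance_km: float) -> float:
    """Lower bound on network round-trip time from propagation delay alone.

    Light in optical fiber travels at roughly two-thirds the speed of light,
    about 200,000 km/s (i.e. 200 km per millisecond). Real paths add routing,
    queuing, and serialization delay on top of this physical floor.
    """
    fiber_speed_km_per_ms = 200.0
    return 2 * distance_km / fiber_speed_km_per_ms

# A facility 2,000 km away adds at least ~20 ms of round-trip delay;
# one 50 km away adds only ~0.5 ms.
print(min_round_trip_ms(2000.0))
print(min_round_trip_ms(50.0))
```

For interactive workloads such as speech recognition or machine-vision feedback loops, tens of milliseconds of unavoidable transit time is the gap that localized processing closes.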
This is driving demand for a different kind of infrastructure: distributed, modular, and energy-aware systems. One emerging model describes these as “Micro AI Factories,” small, standards-based facilities designed to run AI workloads near the point of demand.
How is AI infrastructure evolving?
Urban and suburban areas are where demand for localized AI processing is highest, but they are also where new construction is hardest. Land is scarce, permitting is slow, and grid capacity is often constrained. Building a traditional data center from the ground up in a dense metro area can take years, by which point the demand curve has moved. At the same time, existing infrastructure in these areas — storage facilities, logistics centers, and commercial buildings — often sits on sites with underutilized power capacity.
In the U.S., a collaboration between Prime Group, a nationwide real estate investment firm, and Hanwha is addressing this through a distributed model that combines existing real estate, an intelligent energy management system (EMS), and on-site power generation and energy storage.
Prime Group is repurposing existing buildings, including storage facilities and logistics centers, into modular data centers. Its current portfolio covers approximately 95% of the U.S. population, with available power capacity and existing grid connections that significantly reduce time to deployment compared to greenfield construction. “Our national footprint and available power capacity create a compelling opportunity to enhance our assets through targeted energy and data infrastructure applications,” says Robert J. Moser, CEO of Prime Group.
Hanwha Qcells provides an AI-powered EMS that has been independently certified for safety by UL Solutions, a global safety testing authority. The system optimizes how these facilities consume, store, and distribute power in real time. Battery energy storage systems (BESS), supplied by TransGrid Energy, another Hanwha subsidiary, stabilize supply by absorbing excess energy during low-demand periods and releasing it during peaks. The system balances operational needs against grid conditions, enabling distributed facilities to operate efficiently while maintaining grid resilience.
“Reliable, energy-aware operations are essential for distributed AI,” explains Dr. Youngchoon Park, President of Grid & Energy Services at Qcells. “Our optimization technologies support Prime Group’s efforts to deploy responsive, grid-aligned AI facilities across the country.”
The system uses Watt Schema, an open-source ontology that standardizes how energy and data systems interact. This replaces the fragmented, multi-vendor approach common in current energy management systems, streamlining operations across the network. Microsoft Azure provides the AI and cloud layer. Together, these components create a distributed, energy-aware network designed for real-time applications.
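The Watt Schema specification itself is not reproduced here. Purely as an illustration of what a shared ontology buys an operator, the sketch below normalizes a vendor-specific telemetry payload into one typed record that downstream energy and data systems can consume uniformly. Every field, type, and payload name here is invented for this example and is not part of the actual schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum


class PointType(Enum):
    # Hypothetical point classes; the real ontology defines its own taxonomy.
    BATTERY_SOC = "battery_state_of_charge"
    SITE_LOAD = "site_load"
    GRID_IMPORT = "grid_import"


@dataclass(frozen=True)
class TelemetryPoint:
    """One standardized reading, regardless of which vendor produced it."""
    site_id: str
    point: PointType
    value: float
    unit: str
    timestamp: datetime


def normalize_vendor_reading(raw: dict) -> TelemetryPoint:
    """Map one vendor's ad-hoc payload into the shared record.

    The keys in `raw` ("facility", "kind", "val", "ts") stand in for
    whatever a particular device actually emits.
    """
    return TelemetryPoint(
        site_id=raw["facility"],
        point=PointType(raw["kind"]),
        value=float(raw["val"]),
        unit=raw.get("unit", "kW"),
        timestamp=datetime.fromtimestamp(raw["ts"], tz=timezone.utc),
    )
```

The payoff of this pattern is that optimization logic is written once against `TelemetryPoint`, rather than once per vendor integration.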
Why is this fundamentally an energy challenge?
Distributed AI infrastructure does not just redistribute compute; it redistributes energy demand. Each facility draws power from a local grid, and without intelligent energy management, adding data center loads risks destabilizing it.

This is why the energy layer is not an afterthought. AI-driven optimization lets each facility respond to grid conditions in real time, drawing power when it is abundant and releasing stored energy when it is scarce. Without that coordination, distributed data centers would simply add new pressure to already constrained local grids.
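The charge-when-abundant, discharge-when-scarce behavior described above can be sketched as a simple threshold rule. This is a minimal illustration only: the thresholds, power ratings, and interface are assumptions, not Qcells' actual EMS logic, which would also account for forecasts, prices, and battery health.

```python
from dataclasses import dataclass


@dataclass
class Battery:
    capacity_kwh: float
    soc_kwh: float = 0.0       # current state of charge
    max_rate_kw: float = 250.0  # charge/discharge power limit


def dispatch_step(batt: Battery, grid_headroom: float, hours: float = 1.0) -> float:
    """One control step of a hypothetical peak-shaving rule.

    grid_headroom: fraction of local grid capacity currently unused (0..1).
    Returns the battery's net power draw from the grid in kW
    (positive = charging from the grid, negative = discharging to the site).
    """
    if grid_headroom > 0.5:
        # Power is abundant: absorb energy, limited by rate and remaining room.
        p = min(batt.max_rate_kw, (batt.capacity_kwh - batt.soc_kwh) / hours)
        batt.soc_kwh += p * hours
        return p
    if grid_headroom < 0.2:
        # Grid is strained: release stored energy to offset the site's load.
        p = min(batt.max_rate_kw, batt.soc_kwh / hours)
        batt.soc_kwh -= p * hours
        return -p
    return 0.0  # neutral band: hold charge
```

Run over a day of grid conditions, a rule like this shifts the facility's net draw away from peak hours, which is what lets a distributed site coexist with a constrained local grid.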
Can this model scale?
Dense urban centers around the world are facing similar constraints: limited land, constrained grids, and growing demand for local AI processing. Wherever logistics or storage assets exist in high-density areas, the same repurpose-and-optimize model can apply.
Qcells’ open-source energy management framework is designed to support interoperability across different markets and operators, enabling the approach to extend beyond a single deployment. The U.S. rollout serves as a proof point, but the same approach can apply wherever existing real estate and available power capacity align with growing demand for localized AI processing.
As AI applications diversify, the infrastructure that supports them will need to be distributed, energy-aware, and fast to deploy. Repurposing existing assets avoids the bottlenecks of new construction while meeting demand where it is highest. The energy layer (storage, optimization, and grid coordination) is what makes distributed AI infrastructure cheaper than traditional options and operationally viable at scale.