At 40 kilowatts per rack, NVIDIA’s latest data center deployments generate heat faster than conventional air cooling can remove it. That’s not hyperbole. It’s a hard engineering limit now dictating billion-dollar infrastructure bets, and it’s why on May 04, 2026, LG’s CEO Ryu Jae-cheol sat across from Madison Huang, NVIDIA’s Senior Director of Product Marketing for Omniverse and Robotics, in a Seoul boardroom where the real topic wasn’t chips or capital, but heat.
Key Takeaways
- NVIDIA’s AI data centers now operate at power densities where standard air cooling fails, forcing partnerships with firms like LG for advanced thermal management.
- LG is positioning its commercial HVAC units as mission-critical infrastructure for AI, aiming to become a recurring revenue supplier inside NVIDIA’s ecosystem.
- The 40 kW/rack threshold marks the point where server performance throttles without liquid or hybrid cooling, destroying ROI on high-end GPUs.
- LG’s CLOiD robot and ‘Affectionate Intelligence’ platform depend on low-latency inference — a challenge that extends from data centers to edge hardware.
- Discussions remain exploratory; no investment amounts or timelines have been formalized.
The Physics Problem No One Wants to Talk About
Everyone’s focused on model size, training runs, and multimodal benchmarks. But the real bottleneck in 2026 isn’t algorithms — it’s thermodynamics. NVIDIA’s data center business posted record revenue, yes, but that success is literally hitting a wall: the physical capacity of air to carry heat away from packed server racks.
At 40 kW per rack, air alone can no longer move heat out fast enough. Hotspots develop in seconds. Thermal throttling kicks in. Performance drops. And that $30,000 H100? It’s no longer delivering peak TFLOPS. That’s not a software bug. That’s physics winning.
Traditional data centers run at 5–15 kW per rack. The jump to 40 kW isn’t incremental; it changes the engineering problem outright. LG’s commercial HVAC division, which spent years optimizing climate systems for skyscrapers and transit hubs, now sees a new market: AI-grade thermal regulation. Their pitch? Re-engineer cooling not for human comfort, but for silicon survival.
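To see why, run the numbers. The sensible-heat equation Q = m_dot × c_p × ΔT says how much air must flow through a rack to carry a given heat load at a given inlet-to-outlet temperature rise. A quick sketch with textbook air properties and an assumed 15 K rise:

```python
# How much airflow does it take to remove rack heat with air alone?
# Sensible heat: Q = m_dot * c_p * delta_T. Illustrative values; real
# designs vary with altitude, humidity, and the achievable delta-T.

AIR_CP = 1005.0       # specific heat of air, J/(kg*K)
AIR_DENSITY = 1.2     # kg/m^3, roughly sea level at 20 C
CFM_PER_M3S = 2118.9  # cubic feet per minute per one m^3/s

def required_airflow_cfm(rack_kw: float, delta_t_k: float = 15.0) -> float:
    """Volumetric airflow needed to absorb rack_kw of heat at a given
    inlet-to-outlet temperature rise."""
    mass_flow_kg_s = rack_kw * 1000.0 / (AIR_CP * delta_t_k)
    return mass_flow_kg_s / AIR_DENSITY * CFM_PER_M3S

for kw in (5, 15, 40):
    print(f"{kw:>2} kW/rack -> ~{required_airflow_cfm(kw):,.0f} CFM")
# 5 kW needs ~590 CFM, routine for rack fans; 40 kW needs ~4,700 CFM
# through a single rack, which is where air stops being practical.
```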
This is a recognized industry problem. ASHRAE’s (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) TC 9.9 committee develops the thermal guidelines for data processing environments that operators use to size cooling systems and head off thermal bottlenecks.
The heat involved is enormous: a single large data center can draw as much electrical power as a small city, and nearly all of that power ends up as heat that must be removed. That is why advanced cooling systems, like those LG is offering, are becoming increasingly important in the industry.
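The city comparison is easy to make concrete. Both numbers below are illustrative assumptions (a 100 MW facility and roughly 1.2 kW of average continuous draw per household), not figures from any report:

```python
# Rough scale comparison: one hyperscale campus vs. residential demand.
FACILITY_MW = 100.0   # assumed facility power draw
HOUSEHOLD_KW = 1.2    # assumed average continuous draw per household

homes = FACILITY_MW * 1000.0 / HOUSEHOLD_KW
print(f"~{homes:,.0f} households")  # ~83,000, i.e. a small city
# Essentially all of that electrical power leaves the building as heat.
```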
LG’s Play: From Appliances to AI Infrastructure
LG isn’t trying to build AI chips. It’s smarter than that. Instead, the company is embedding itself into the supply chain as a thermal partner — a critical but often overlooked layer beneath the compute stack. At CES 2026, LG quietly showcased high-efficiency HVAC units designed specifically for AI data centers. These aren’t repurposed office coolers. They’re engineered for rapid heat exchange, variable load response, and integration with server telemetry.
The goal is simple: let data centers pack more GPUs into the same floor space without melting down. That’s valuable. And it’s recurring. Unlike selling a one-off appliance, thermal systems require maintenance, monitoring, and upgrades. LG wants long-term contracts, not flash-in-the-pan hardware sales.
This isn’t just about cooling. It’s about positioning. By aligning with NVIDIA now, LG aims to become a default infrastructure vendor — the same way Cummins is to backup power or Schneider Electric is to data center power management.
There is precedent for this kind of positioning. In 2022, Microsoft and Schneider Electric announced a strategic partnership on data center infrastructure, integrating Schneider Electric’s power and cooling solutions with Microsoft’s Azure Stack Edge offerings to help customers run their facilities more efficiently.
LG CNS and the Smart Infrastructure Push
Supporting this shift, LG’s IT services subsidiary, LG CNS, is a sponsor at this year’s IoT Tech Expo North America. That’s not a coincidence. It’s a signal. The company is aggressively expanding its footprint in smart infrastructure, linking thermal systems, edge compute, and IoT monitoring into a single enterprise offering.
Imagine a data center where LG’s cooling units adjust airflow in real time based on GPU utilization, pulling data directly from NVIDIA’s fleet management APIs. That’s the vision: closed-loop environmental control optimized for AI workloads, not human occupancy.
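A minimal sketch of what that loop could look like, reading GPU telemetry through NVIDIA’s NVML via the pynvml bindings; the CoolingUnit class and its set_airflow method are hypothetical stand-ins for whatever interface an LG unit would actually expose:

```python
# Closed-loop cooling sketch: poll GPU telemetry with NVML (pynvml)
# and nudge a cooling unit's airflow toward a target temperature.
import time

import pynvml

TARGET_TEMP_C = 65   # assumed per-GPU temperature ceiling
GAIN = 0.02          # proportional gain; tuned per installation

class CoolingUnit:
    """Hypothetical vendor interface, not a real LG API."""
    def __init__(self):
        self.airflow = 0.5                       # normalized 0..1

    def set_airflow(self, value: float):
        self.airflow = min(1.0, max(0.0, value))

def control_step(unit: CoolingUnit, handles) -> int:
    # Use the hottest GPU in the rack as the control signal.
    hottest = max(
        pynvml.nvmlDeviceGetTemperature(h, pynvml.NVML_TEMPERATURE_GPU)
        for h in handles
    )
    error = hottest - TARGET_TEMP_C
    unit.set_airflow(unit.airflow + GAIN * error)  # simple P controller
    return hottest

if __name__ == "__main__":
    pynvml.nvmlInit()
    handles = [pynvml.nvmlDeviceGetHandleByIndex(i)
               for i in range(pynvml.nvmlDeviceGetCount())]
    unit = CoolingUnit()
    while True:
        temp = control_step(unit, handles)
        print(f"hottest GPU {temp} C -> airflow {unit.airflow:.2f}")
        time.sleep(5)
```

A production controller would be PID-tuned and fed rack inlet sensors rather than die temperatures, but the shape of the loop is the same.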
As the industry shifts toward edge-based and decentralized architectures, this kind of smart infrastructure becomes more valuable, and LG CNS is positioning itself to sell it as an integrated offering rather than as individual components.
The Robot in the Room: Why Edge Inference Matters
The talks with NVIDIA aren’t just about data centers. They extend to the edge — specifically, LG’s CLOiD robot. Unveiled recently, CLOiD features two seven-degree-of-freedom arms and hands with five individually actuated fingers. It’s designed to handle delicate household tasks, from pouring coffee to picking up fragile objects.
But here’s the catch: to do that safely, it needs inference with a latency budget measured in milliseconds. The robot must process visual data, query local vector databases to identify object properties, and calculate grip force, all in real time. Any delay or miscalculation risks breaking a glass, or worse.
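In practice, “real time” means a hard deadline on the whole perception-to-grip pipeline. A minimal sketch of deadline enforcement, assuming a 15 ms end-to-end budget; the stage functions are placeholders, not CLOiD internals:

```python
# Enforce a hard latency budget across a perception-to-grip pipeline.
import time

BUDGET_S = 0.015  # assumed 15 ms end-to-end budget

# Placeholder stages; real versions would run a vision model, query an
# embedding index, and solve for contact forces.
def detect_object(frame):    return {"frame": frame, "object": "glass"}
def query_properties(obs):   return {**obs, "fragile": True, "friction": 0.3}
def compute_grip_force(obs): return {**obs, "grip_newtons": 2.5}

def run_pipeline(frame):
    deadline = time.monotonic() + BUDGET_S
    result = frame
    for name, stage in [("perceive", detect_object),
                        ("lookup", query_properties),
                        ("plan", compute_grip_force)]:
        result = stage(result)
        if time.monotonic() > deadline:
            # Missing the deadline is a safety event, not a perf metric:
            # abort the grasp instead of acting on stale state.
            raise TimeoutError(f"latency budget blown in stage '{name}'")
    return result

print(run_pipeline(frame=b"raw-camera-bytes"))
```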
That kind of responsiveness depends on two things: efficient edge hardware and strong digital twin infrastructure. LG has the hardware. What it lacks — at least for now — is the simulation backbone to train and validate complex manipulation tasks at scale. That’s where NVIDIA’s Omniverse platform enters the conversation.
Digital Twins and the Latency Wall
Physical AI — machines that act in the real world — can’t be trained purely on real-world data. It’s too slow, too dangerous, too expensive. Digital twins simulate environments where robots learn safely before deployment. NVIDIA’s Omniverse is one of the few platforms capable of high-fidelity physics simulation at scale.
If LG integrates with Omniverse, it could simulate thousands of kitchen scenarios — slippery mugs, wobbly tables, curious pets — and train CLOiD to adapt. But that simulation requires massive compute, which loops back to the original problem: you can’t run complex digital twins without stable, high-density data centers.
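The standard technique here is domain randomization: perturb the simulated world’s parameters so the trained policy survives contact with the real one. A toy sketch of the scenario-sampling side; the parameter names and ranges are invented for illustration, and a real pipeline would hand them to a physics simulator rather than print them:

```python
# Domain randomization in miniature: sample thousands of perturbed
# "kitchen" scenarios for simulated grasp training.
import random

def sample_scenario(rng: random.Random) -> dict:
    return {
        "mug_friction":   rng.uniform(0.05, 0.6),  # slippery to grippy
        "table_tilt_deg": rng.uniform(0.0, 3.0),   # wobbly surfaces
        "mug_mass_kg":    rng.uniform(0.2, 0.6),
        "distractor":     rng.choice(["none", "pet", "hand", "clutter"]),
        "lighting_lux":   rng.uniform(50, 1000),
    }

rng = random.Random(42)  # fixed seed for a reproducible curriculum
scenarios = [sample_scenario(rng) for _ in range(10_000)]
print(scenarios[0])
```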
It’s a full-stack dependency: better robots need better simulation. Better simulation needs more compute. More compute needs better cooling. And better cooling? That’s where LG comes in.
According to a report by MarketsandMarkets, the digital twin market was projected to grow from $3.1 billion in 2020 to $48.2 billion by 2026, a compound annual growth rate (CAGR) of 58 percent. That growth is driven by the adoption of Industry 4.0 technologies and the demand for more efficient operations.
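Those figures are straightforward to sanity-check:

```python
# Compound annual growth rate implied by the forecast.
start_bn, end_bn, years = 3.1, 48.2, 6   # 2020 -> 2026, in $B
cagr = (end_bn / start_bn) ** (1 / years) - 1
print(f"CAGR ~ {cagr:.0%}")  # ~58%
```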
By the Numbers
- 40 kW/rack: Current power density threshold where air cooling fails
- 7 degrees of freedom: Per arm on LG’s CLOiD robot
- 5 actuated fingers: Per hand on CLOiD, enabling fine manipulation
- ~15 ms: Approximate end-to-end latency budget for safe physical interaction in dynamic environments
- LG CNS: Sponsor at IoT Tech Expo North America 2026, expanding enterprise reach
The Bigger Picture
As the industry moves toward edge-based and decentralized architectures, thermal management stops being a facilities detail and becomes a gating factor in how much AI can actually be deployed. LG is positioning itself to capitalize on that shift with thermal solutions built for AI-scale loads.
The LG-NVIDIA talks are the clearest example yet. Working together, the two companies could clear the thermal bottlenecks that currently limit AI and edge computing deployments.
The future of AI is not just models and algorithms; it is the hardware and infrastructure that supports them, and that is the layer LG is betting on.
What This Means For You
If you’re building AI systems, especially those involving physical actuation or edge deployment, the LG-NVIDIA talks should be a wake-up call. Compute isn’t just about model size or training time. It’s about infrastructure stability. Your model might run fine in the cloud today, but if the data center overheats, performance degrades — silently. You won’t get a warning. Just slower inference, dropped tasks, failed predictions.
For developers, this means paying attention to the hardware stack beneath your code. Thermal throttling isn’t logged in your error tracker. But it’s there, eating your performance. And if you’re deploying robots or edge devices, latency isn’t just a QoS issue — it’s a safety issue. The difference between a successful grasp and a shattered vase might be 15 milliseconds of pipeline delay.
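You can at least make the silent failure visible. NVML reports the GPU’s current throttle reasons as a bitmask; a minimal check using the pynvml bindings (constant names follow NVML’s clocks-throttle-reasons API; confirm them against your installed version):

```python
# Detect active thermal throttling that never shows up in app logs.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

reasons = pynvml.nvmlDeviceGetCurrentClocksThrottleReasons(handle)
thermal = (pynvml.nvmlClocksThrottleReasonSwThermalSlowdown
           | pynvml.nvmlClocksThrottleReasonHwThermalSlowdown)

if reasons & thermal:
    print("GPU is thermally throttled right now: silent performance loss")
pynvml.nvmlShutdown()
```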
The bottom line: the era of treating AI as pure software is over. Physical constraints are now part of the development stack. You’re not just writing models. You’re designing systems that interact with, and depend on, the real world’s limits.
So here’s the question: if the biggest obstacle to smarter robots isn’t AI, but air conditioning, what else have we been ignoring?
Sources: AI News, original report


