AI Native Data Centres: Hype or the Making of a Sovereign Compute Era?

Analytics India Magazine (Smruthi Nadig)

In India’s technology corridors, a new phrase is now casually thrown around by founders, cloud providers, system integrators, and even state governments: AI-native data centres. The term promises a future where Indian startups don’t just buy compute, but build it. The hype is loud, the stakes are real, and whoever controls the infrastructure will control the next epoch of the AI economy.

Whether ‘AI-native’ is a marketing label or a structural shift is not an academic debate. It determines how Indian startups train models, deploy inference, scale workloads, and wrestle with spiralling cloud bills. Most critically, it decides whether India becomes a buyer of intelligence or a builder of it.

From rented compute to building intelligence

For the past decade, India’s software ecosystem has been engineered around public cloud consumption. The assumption was that infrastructure would be someone else’s problem, namely AWS, Azure, or Google Cloud. This approach worked when workloads were CPU-driven, ephemeral, or latency-tolerant. AI has broken that model.

Founders across India are learning the hard way that GPU-heavy training workloads, distributed inference pipelines, and low-latency interconnects cannot simply be “scaled” by switching instance types. The economics and thermodynamics do not bend.

A Bengaluru-based software developer who works with startups, speaking on condition of anonymity, described the difference: “AI data centres have way higher power and thermal requirements for compute and, in turn, cooling. Nvidia has a new line of data centre clusters that it sells directly. And these clusters also require much, much higher bandwidths to communicate between nodes.”

His logic is simple: “GPUs consume more power. The more power they consume, the more waste heat they produce, and the more cooling capacity they require. We are talking about multiple GPUs working together to train ever-increasingly large models.”

Traditional data centres built for web servers and storage arrays are not designed to handle 50-100 kW per rack, nor fabric-level bandwidths measured in terabytes per second.
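
That chain of reasoning is easy to put into numbers. Here is a minimal back-of-envelope sketch, assuming roughly 700 W per accelerator, 8-GPU servers, and eight servers to a rack; every figure is an illustrative assumption, not a vendor specification:

```python
# Illustrative rack power estimate. Every figure below is an
# assumption for the sketch, not a vendor specification.

GPU_TDP_W = 700            # assumed per-GPU thermal design power
GPUS_PER_SERVER = 8
SERVER_OVERHEAD_W = 2500   # assumed CPUs, NICs, fans, PSU losses
SERVERS_PER_RACK = 8

server_w = GPUS_PER_SERVER * GPU_TDP_W + SERVER_OVERHEAD_W
rack_kw = SERVERS_PER_RACK * server_w / 1000

print(f"Per-server draw: {server_w / 1000:.1f} kW")   # ~8.1 kW
print(f"Per-rack draw:   {rack_kw:.1f} kW")           # ~64.8 kW
```

Since virtually all of that electrical input leaves the rack as heat, the cooling plant must remove the same tens of kilowatts, continuously, for every rack on the floor.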

The numbers are staggering. “The latest and greatest offering by Nvidia (GB200 (B200)) has HBM3e with a combined total bandwidth of 576 terabytes per second. That is unimaginably high. Their own interconnect (NVLink) can do 140TB/s,” the developer said.  
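
To put terabytes per second in perspective, here is a small sketch of how long it takes merely to move one copy of a large model’s weights at different link speeds. The 70B-parameter model and the link figures are assumptions chosen for illustration, loosely echoing the numbers quoted above:

```python
# Time to move one full copy of a model's weights at various link
# speeds. Model size and bandwidths are illustrative assumptions.

MODEL_PARAMS = 70e9               # assumed 70B-parameter model
BYTES_PER_PARAM = 2               # bf16 weights
model_bytes = MODEL_PARAMS * BYTES_PER_PARAM   # ~140 GB

links_gb_per_s = {
    "100 GbE (commodity Ethernet)": 100 / 8,
    "NVLink-class fabric (~140 TB/s)": 140_000,
    "Aggregate HBM3e (~576 TB/s)": 576_000,
}

for name, gb_per_s in links_gb_per_s.items():
    ms = model_bytes / (gb_per_s * 1e9) * 1e3
    print(f"{name}: {ms:,.2f} ms")
```

Eleven seconds on commodity Ethernet versus a fraction of a millisecond on an NVLink-class fabric is the difference between starving GPUs and feeding them.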

Startups no longer want to just rent racks

The real story isn’t that mega companies are building bigger machine rooms. It’s that India’s startup community, traditionally allergic to hardware investment, is crossing into infrastructure design.

According to Invenia’s CEO and whole-time director, Pankaj Malik, the digital infrastructure and IT services company partners with a growing group of AI infrastructure builders developing GPU clusters and training environments. They focus on providing high-density, GPU-ready connectivity that ensures low latency and high throughput for efficient distributed AI training.

Their support includes scalable network architectures that utilise automation, GIS-based planning, and repeatable design templates, allowing builders to scale clusters based on demand without large upfront investments.

Invenia also offers edge and hybrid capabilities for GPU clusters that reduce latency and enhance performance, along with integrated managed services for seamless deployments. 

“This helps AI-infra builders accelerate their rollout cycles, reduce time-to-train, and operate more reliably, while keeping their teams focused on building AI capabilities rather than managing the underlying infrastructure,” Malik added. 

This marks the first cultural break from cloud dependency since the SaaS boom. Infrastructure is becoming a form of strategic ownership, not an operating expense. The price of GPU training on international clouds has made the economics of “renting intelligence” untenable.

AI-native means architecture, not rebranding

Krishna Bhatt, founder and CEO of Webuters Technologies, which provides digital transformation, AI-powered solutions, digital commerce, and managed IT services, dismisses the idea that AI-native data centres are just a marketing stunt. In his words, “The term ‘AI-native data centres’ largely refers to facilities designed with specific aims to fulfil the high demands of AI workloads, but remain data centres at their core.”

But the core difference is architectural. “What differentiates them, even within India’s fast-growing data centre market, is the way infrastructure is adapted for AI’s unique needs, especially in terms of energy efficiency and scalability,” he further explained. 

The shift in data centre architecture is structural, moving beyond cosmetic changes to support the demands of modern AI and high-performance computing. It involves GPU-dense racks engineered explicitly for continuous training and inference workloads, which demand far higher power allocation per rack than conventional designs.

Furthermore, it requires extremely low-latency networking to enable efficient multi-node distributed training, while incorporating advanced cooling systems optimised to withstand India’s specific challenges of high heat and humidity.

Bhatt said, “It is a structural shift to meet new performance and efficiency imperatives that traditional data centres in India are not built for.”

Liquid cooling is inevitable

If AI inference is a drizzle, AI training is the monsoon. That monsoon generates brutal thermal loads. Liquid cooling, immersion systems, and phase-change technologies are no longer add-ons, but baseline requirements for the Indian market.

“Liquid cooling pods and immersion racks are increasingly seen as essential for AI compute environments, particularly in India’s climate, where high ambient temperatures push traditional cooling close to its limits,” Bhatt highlighted.
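
The underlying physics is simple heat transfer. A minimal sketch, assuming a 100 kW rack (the upper end cited earlier) cooled by water with an assumed 10 °C temperature rise, gives a feel for the flow rates involved:

```python
# Coolant mass flow needed to remove a rack's heat:
# m_dot = Q / (c_p * dT). Rack load and dT are assumed values.

RACK_HEAT_W = 100_000      # assumed 100 kW rack
CP_WATER = 4186            # J/(kg*K), specific heat of water
CP_AIR = 1005              # J/(kg*K), specific heat of air
DELTA_T = 10               # assumed temperature rise, K

water_kg_s = RACK_HEAT_W / (CP_WATER * DELTA_T)
air_kg_s = RACK_HEAT_W / (CP_AIR * DELTA_T)

print(f"Water flow: {water_kg_s:.2f} kg/s (~{water_kg_s * 60:.0f} L/min)")
print(f"Air flow:   {air_kg_s:.1f} kg/s of a medium ~800x less dense")
```

Roughly 2.4 kg/s of water does what would take about 10 kg/s of air, and air’s far lower density makes the volumetric gap enormous. That is why liquid wins at these densities, especially when the ambient air is already hot.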

He added that beyond energy, there is operational reliability: “AI workloads operating continuously in Indian data centres will have greater dependability and uptime.”

Manoj Dhanda, founder of Utho Cloud, an Indian public cloud provider, told AIM that instead of repackaging the conventional cloud model, Utho is intentionally architecting environments for GPU-first workloads. 

Crucially, Utho positions itself as the inverse of the hyperscalers on price, sovereignty, and ownership, pitching its prices at “70% less than AWS”.

That delta is transformative for a startup training a model over 30 days. In cloud economics, one month of GPUs can decide whether a business pivots, raises capital, or crashes.
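
A hypothetical 30-day run shows how that delta compounds. The cluster size and the $3 per GPU-hour hyperscaler rate below are assumptions, not quoted prices; only the claimed 70% discount comes from the pitch above:

```python
# Hypothetical 30-day training run. Rates and cluster size are
# assumptions; only the 70% discount reflects the claim above.

GPUS = 64
HOURS = 30 * 24
HYPERSCALER_RATE = 3.00                       # assumed USD per GPU-hour
DISCOUNTED_RATE = HYPERSCALER_RATE * 0.30     # "70% less"

hyperscaler_bill = GPUS * HOURS * HYPERSCALER_RATE
discounted_bill = GPUS * HOURS * DISCOUNTED_RATE

print(f"Hyperscaler bill: ${hyperscaler_bill:,.0f}")   # $138,240
print(f"Discounted bill:  ${discounted_bill:,.0f}")    # $41,472
print(f"Capital freed:    ${hyperscaler_bill - discounted_bill:,.0f}")
```

For a seed-stage company, that difference is another training run, or another quarter of runway.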

Fibre, bandwidth, and the missing piece

GPU clusters are not islands. They are distributed organisms whose intelligence depends on low-latency inter-node communication. Without that, training implodes.
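
A back-of-envelope sketch shows why. Under an idealised ring all-reduce, each optimiser step pays roughly 2(N-1)/N times the gradient volume divided by per-node bandwidth in communication time; the model size, node count, and link speeds below are illustrative assumptions:

```python
# Ideal ring all-reduce time per training step, ignoring latency:
# t = 2 * (N - 1) / N * bytes / bandwidth. All sizes assumed.

MODEL_PARAMS = 7e9                  # assumed 7B-parameter model
GRAD_BYTES = MODEL_PARAMS * 2       # bf16 gradients
NODES = 16

def allreduce_seconds(n_bytes, nodes, bw_gb_s):
    """Idealised ring all-reduce time in seconds."""
    return 2 * (nodes - 1) / nodes * n_bytes / (bw_gb_s * 1e9)

for label, bw in [("25 GbE", 25 / 8), ("400 Gb/s fabric", 400 / 8)]:
    t = allreduce_seconds(GRAD_BYTES, NODES, bw)
    print(f"{label}: {t:.2f} s of communication per step")
```

At 25 GbE every step stalls for seconds; on a 400 Gb/s fabric the same exchange takes about half a second, which is why serious clusters are designed around the interconnect first.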

This is where network infrastructure builders like Invenia-STL Networks enter the story. They are not selling racks. They are building the connective tissue for India’s sovereign compute ambitions.

“Beyond traditional data-centre design, we are creating a tightly interconnected AI fabric that links compute, storage, and networking across edge and core environments with predictable performance and scalability,” Malik told AIM. 

The AI-native conversation matures here. Cooling and GPUs are hardware, while interconnect is sovereignty.

Are AI-native data centres marketing hype?

The hype exists, but it does not diminish the reality.

The ‘AI-native’ label becomes marketing when applied to traditional web-hosting facilities with a few GPUs bolted in. But, when purpose-built design guides power density, cooling, interconnect, and workload orchestration, the phrase describes a new class of infrastructure.

Bhatt summarised that AI-native data centres are a response to operational and economic imperatives, not a trend. The alternative is strategic dependence: if Indian AI startups continue paying Western cloud providers to train their models, they export capital, talent, and data know-how. India becomes the sandbox, not the architect.

AI-native data centres push the country toward owning the means of intelligence production.
