Why Responsible AI Demands Both Trust and Compute Ownership
Analytics India Magazine (Ankush Das)
Artificial Intelligence now influences decisions across sectors, but not all decisions carry the same weight. A chatbot’s casual error may be forgivable. In finance or healthcare, however, a single wrong prediction can cost billions, or even a life.
This is why experts argue that regulated industries require responsible AI: systems designed for trust and accountability from the ground up.
Bhaskarjit Sarmah, head of AI research at Domyn, a composite AI platform for designing, deploying, and orchestrating AI agents, explained the stakes in an exclusive interaction with AIM.
“Nobody in the world can make an AI with that much accuracy… but the question is, how do I know which one is a mistake and which one is not,” he said.
Responsible AI, in his view, goes beyond fine-tuning existing large language models. It requires infrastructure, domain-specific training, and an open approach to data ownership.
The Risk of Generic Models
Most mainstream AI systems are trained to be general-purpose. While this approach works for broad tasks, it falls short when precision and trust are non-negotiable.
“We cannot use ChatGPT for financial services. Sometimes this model hallucinates. Sometimes it is generic and offers biased output,” said Sarmah, adding that ChatGPT never tells you when its output can or cannot be trusted.
This is where domain-specific models come in. By training language models directly on financial or healthcare data, researchers can reduce risks of hallucination and bias.
But domain specificity is not enough on its own. Sarmah stresses that enterprises also need to control the full AI stack (data, models, and deployment), especially when sensitive information is involved.
Why Compute Ownership Matters
Training responsible AI requires enormous computing power, which remains a bottleneck for most countries. Sarmah draws a distinction between renting GPU clusters from big providers and owning infrastructure outright.
“At BlackRock, I never had the chance to train language models from scratch. It requires massive compute investment, which nobody has in India,” he said.
This lack of sovereign compute capacity means many organisations depend on closed providers, often moving sensitive data outside local networks. By contrast, owning compute enables enterprises and governments to train and deploy models within controlled environments, ensuring privacy and accountability.
Domyn, where Sarmah now leads AI research, offers one example of how this can be done.
The company has partnered with NVIDIA to build Colosseum, a supercomputer in southern Italy with a peak performance of 115 exaFLOPS.
From its India-based team, Domyn is training foundation models from scratch on this infrastructure, something Sarmah notes is not happening elsewhere in the country.
Across Europe, Asia, and the US, governments are recognising the same need and pouring billions into national AI supercomputers. The message is consistent: responsible AI is not only about software safeguards; it also depends on who owns the hardware.
The EU’s AI Act extends the regulatory scope to include hardware, requiring organisations to identify both the software and hardware components of AI systems and ensure their safety.
Meanwhile, China is moving toward self-reliance by subsidising domestic AI chip production and aiming for smart computing infrastructure independence by 2027, underlining the strategic role of hardware ownership.
Responsible AI as a Global Imperative
The idea that stands out from Sarmah’s reflections is not about one company’s product, but about the direction AI may need to take.
Regulated industries require AI systems that are not only accurate but also transparent, explainable, and accountable. This involves building models from scratch, publishing methods to detect bias, and enabling users to understand why outputs can or cannot be trusted.
“The point is that we are not using AI blindly, we are using it responsibly,” Sarmah said. His words echo a broader challenge: as AI spreads into sensitive sectors, the conversation must shift from speed and scale to responsibility and trust.