Thoughtworks India CTO Warns Against the Naive Rush to Turn APIs into MCP Servers
Analytics India Magazine (Ankush Das)

Artificial Intelligence (AI) is reshaping enterprise technology far beyond algorithms. While AI adoption is accelerating across enterprises, the rush to embed agents into every workflow has introduced a new wave of technical missteps.
The AI wave has forced enterprises to rethink everything from infrastructure automation to testing practices, said Bharani Subramaniam, Thoughtworks CTO for India and the Middle East. He cautioned that despite the rise of agents and code generation, the industry is still learning where autonomy ends and human oversight begins.
In a conversation with AIM, Subramaniam explained how accelerated compute, agentic systems, and autonomous workflows are pushing organisations into a new phase of real production adoption, not mere experimentation.
Workloads Become Heterogeneous
AI infrastructure now spans specialised chips, GPU fleets and advanced orchestration layers, he said, adding that the automation layer is finally catching up. Cloud providers and hardware vendors are coming up with tools to manage GPU-heavy workloads that behave very differently from traditional microservices.
Where infrastructure-as-code became standard for CPU clusters, the same discipline is only now arriving for AI workloads. He cited Thoughtworks' latest ‘Technology Radar’ report, where the company describes the phenomenon in detail.
He explained that the current reality in accelerated computing, driven by the intense demand for GPUs, is that organisations often need to source these resources from multiple providers. For instance, a client might use GPU-equipped workstations in their own data centre and also procure GPUs from one or more cloud providers (e.g., AWS, Azure, GCP).
Subramaniam added that securing a large number of GPUs from a single vendor is extremely challenging right now. To address this multi-vendor environment, new infrastructure platforms have emerged.
One such platform, SkyPilot, is used by teams to manage workloads across disparate GPU sources, say, five from Azure, three from AWS, five from GCP, or even specialised GPU cloud providers. This capability demonstrates how infrastructure automation in the AI space has adapted to the complexities of the present-day market.
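As a rough sketch of what such multi-provider orchestration looks like, a SkyPilot task definition declares the resources a job needs and lets the scheduler place it on whichever configured cloud has capacity. The accelerator type, counts, and scripts below are illustrative assumptions, not details from the interview:

```yaml
# Illustrative SkyPilot task file (task.yaml) -- values are assumptions.
# SkyPilot chooses among the clouds the user has configured (e.g. AWS,
# Azure, GCP) to satisfy this resource request at the best price.
resources:
  accelerators: A100:1   # request one NVIDIA A100 GPU, any provider

setup: |
  pip install -r requirements.txt

run: |
  python train.py
```

A job like this would typically be launched with `sky launch -c my-cluster task.yaml`; the same declarative file works unchanged whether the GPUs end up being provisioned on Azure, AWS, or GCP.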
Software Testing Is Evolving
Amid anxieties about AI automating testers out of the picture, Subramaniam offered a grounded view.
The number of tests may increase dramatically, but human oversight becomes even more crucial. “If a human team wrote it, they would have written 200 tests. [But] the AI model would have written 2000 tests, right?” he quipped.
“You still need to spend time to vet, is it valid? not valid? and fine-tune the generation.”
That vetting is extra work even though AI generated the tests, he continued. Still, it would not be as tedious as writing the tests “yourself”.
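His point about vetting can be sketched as a toy Python filter. Everything below (the oracle function, the generated cases) is invented for illustration: duplicated generated tests are dropped, and cases whose expected values disagree with a trusted reference are flagged for human review rather than silently accepted.

```python
# Hypothetical sketch: triaging AI-generated test cases before they
# join the suite. Names and data are illustrative assumptions.

def reference_add(a, b):
    # Trusted oracle (assumed to exist); real systems might use a
    # reference implementation or a human-reviewed spec instead.
    return a + b

# Imagine these (inputs, expected) pairs came from a code-generation model.
generated_cases = [
    ((1, 2), 3),
    ((1, 2), 3),   # exact duplicate of the first case
    ((0, 0), 0),
    ((2, 2), 5),   # wrong expectation -- needs human review
]

def vet(cases, oracle):
    seen, valid, flagged = set(), [], []
    for inputs, expected in cases:
        if inputs in seen:
            continue                      # drop duplicate inputs
        seen.add(inputs)
        if oracle(*inputs) == expected:
            valid.append((inputs, expected))
        else:
            flagged.append((inputs, expected))   # human reviews these
    return valid, flagged

valid, flagged = vet(generated_cases, reference_add)
print(len(valid), len(flagged))   # prints: 2 1
```

The model may well emit 2,000 cases instead of 200, as Subramaniam notes; a mechanical pass like this shrinks the pile, but the flagged cases still land on a human's desk.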
Efficiency rises, he said, but not to a point where humans disappear. “With AI, maybe you need three or four engineers to do the job… but we have not attained a level where I don’t need any humans.”
Subramaniam also warned of “complacency with AI-generated code,” an anti-pattern Thoughtworks flagged in its industry landscape report. Developers may implicitly accept AI-suggested solutions without evaluating alternatives.
“If you give [a] problem to any model…you are given a solution, whether that solution is good or not.”
“And implicitly, you get biased to accept that it is a valid solution, because then you have to review to prove this is wrong, right?” he said.
Agentic Workflows, Not Autonomous Agents
Despite the buzz around AI agents, Subramaniam stressed that true autonomy is rare in the enterprise. Most implementations remain tightly guided.
“They are more of a workflow where the actual path is pre-determined by a human,” he said. Agents can take non-deterministic steps within constraints, but cannot, for instance, wander off and perform actions outside a loan-processing workflow.
He said, “In the spirit of becoming agentic, some organisations rush to convert these APIs to be an MCP server, and then give agents access to these APIs. I would call it very naive.”
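The distinction he draws, a workflow whose path is pre-determined by a human versus an agent handed raw API access, can be illustrated with a minimal Python sketch. The step order, action vocabulary, and the stand-in `model_choice` function are all hypothetical, assumed only for illustration:

```python
# Hypothetical sketch of a guided "agentic workflow": the path is fixed
# in code; the model only makes bounded decisions inside each step.
# Function names and the fake model call are illustrative assumptions.

ALLOWED_ACTIONS = {"request_documents", "approve", "reject"}

def model_choice(context):
    # Stand-in for an LLM call; a real system would query a model here.
    return "approve" if context["credit_score"] >= 700 else "request_documents"

def loan_workflow(application):
    # The sequence of steps is pre-determined by a human. The agent can
    # act non-deterministically *within* a step, but it cannot reorder
    # steps or perform actions outside the loan-processing domain.
    context = {"credit_score": application["credit_score"]}
    action = model_choice(context)
    if action not in ALLOWED_ACTIONS:       # hard guardrail on outputs
        raise ValueError(f"disallowed action: {action}")
    return action

print(loan_workflow({"credit_score": 720}))   # prints: approve
print(loan_workflow({"credit_score": 640}))   # prints: request_documents
```

By contrast, the naive pattern Subramaniam criticises skips the guardrail layer entirely: existing APIs are wrapped as MCP tools and the agent is trusted to call them appropriately, with no human-defined path constraining what it may do.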
Takeaways for Leaders
Most Thoughtworks clients, he said, are now past the POC stage. “We are very much in the third phase where it’s not pilot, it’s actual production that they are using AI for,” he said.
Indian enterprises, especially in regulated sectors, are already shaping the sovereign AI landscape.
Strict data-localisation requirements are pushing companies to self-host inference inside the country.
“We have to ensure that the requests stay within India,” he noted.
Subramaniam summarised the conversation with three takeaways for business leaders. “Raise the level of literacy,” he said, stressing that AI understanding cannot remain confined to tech teams.
Second, enterprises “can only be as successful in AI as your trust in your old data,” calling for cleaner, more mature data systems. Finally, leaders must “reimagine what customer experience you want to give,” before deciding where agents and AI meaningfully fit.