The SaS Flywheel Spins, but Can Indian IT Keep Control?

Analytics India Magazine (Smruti S)

Indian IT is entering a new era defined by AI-driven workflows, but the technology powering this shift is far from foolproof. 

The Services-as-Software (SaS) flywheel, described by HFS Research, envisions large language models accelerating development, agentic AI orchestrating workflows, and vibe coding translating intent into deployable code. 

It is pitched as a model of seamless co-creation between humans and machines, promising unprecedented speed and automation. Yet, speed is only part of the story. The flywheel also demands a profound transformation in how Indian IT delivers, manages, and governs software.

Jobs in Flux

AI-first pipelines are reshaping delivery roles. Developers are evolving into AI curators and orchestrator managers, guiding machine output rather than writing every line. Testers are becoming verification strategists responsible for setting compliance guardrails. Project managers are being recast as governance officers tasked with overseeing AI debt ledgers, traceability, and audit trails.

But these shifts are not without risks.

Junior developers and testers may lose opportunities to build foundational skills if AI tools handle most of the basics. “Skill gaps are a real concern,” says Sreejith M, COO of Zero One Consulting. “If AI automates too much, it will leave younger engineers without the grounding they need to spot flaws later.”

AI-driven testing and deployment also require significant workflow re-engineering. In regulated industries, AI-generated documentation must still undergo human review. “AI pipelines can feel too black-box for firms that rely on rigorous traceability, documentation, and compliance standards,” said Balaramanan, a senior IT professional who works as an AVP-information security at a renowned IT firm.

Where Indian IT Stands

This transition begins from a position of strength. Indian IT has decades of maturity in delivery, process-driven SDLC, cost efficiency, and one of the world’s largest talent pools skilled in enterprise software, cloud migration, testing, and infrastructure. 

Yet, adoption of AI-first delivery remains cautious.

“Most large firms are experimenting with AI-assisted coding pods, vibe coding pilots, and agent-driven QA,” said Balaramanan, adding, “Infosys and Cognizant market ‘AI-first delivery,’ but adoption remains patchy, concentrated in innovation hubs.”

The gaps are equally stark. Balaramanan mentioned that governance frameworks for managing traceability, compliance debt, and explainability remain underdeveloped. And workforce reskilling is proving difficult: most engineers are checklist-driven executors rather than curators of AI output. 

As Somak Roy, VP at Maximl and former IT analyst, warned: “Generative AI is often treated as magic, somehow exempt from the rules of machine learning. That’s not sustainable.”

Fragile Code and Testing Risks

The biggest risks show up in coding and testing. Balaramanan warned of “fragile, non-explainable code; hidden dependencies and compliance blind spots.” Without deliberate design, artifacts like test logs and traceability matrices don’t exist, undermining quality and auditability.

Roy added that overconfidence in generative AI will lead to “inevitable technical debt as tools replace foundational engineering practices.”

Vibe coding illustrates this fragility. Though marketed as revolutionary, it has delivered uneven results. “The tools are not yet mature, and there have been setbacks when Vibe Coding was used without proper validation,” said Sreejith. “Unvalidated code could create long-term issues.”

He argued current implementations are closer to advanced snippet generators — “like enhanced Stack Overflow or Reddit search” — than true automation platforms. In practice, only skilled engineers capable of spotting flaws can extract real gains, while others risk pushing brittle, error-prone code into production.

Even in testing, where AI shows the clearest productivity boost, risks abound. AI now assists in test case creation, scenario validation, and execution. But as Sreejith cautioned: “Blindly trusting AI-generated tests or code may lead to quality and security failures.” 

Over-reliance risks propagating errors into production, introducing vulnerabilities that are difficult to detect or fix.
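The caution above — do not blindly trust AI-generated tests or code — can be made concrete with a simple review gate that runs independently written, human-authored checks against AI output before accepting it. This is an illustrative sketch, not a tool named in the article; every function name here is hypothetical:

```python
from typing import Callable


def review_gate(candidate: Callable, checks: list) -> dict:
    """Run human-authored checks against an AI-generated function.

    The candidate is accepted only if every independent check passes;
    the report doubles as an audit trail for the decision.
    """
    report = {}
    for name, check in checks:
        try:
            check(candidate)  # each check raises AssertionError on failure
            report[name] = "pass"
        except AssertionError as exc:
            report[name] = f"fail: {exc}"
    report["accepted"] = all(v == "pass" for v in report.values())
    return report


# A hypothetical AI-generated helper with a subtle gap: it never strips punctuation.
def ai_generated_slugify(text: str) -> str:
    return text.lower().replace(" ", "-")


# Human-authored property checks encode intent the generator was never told about.
def check_lowercase(f):
    assert f("Hello World") == "hello-world", f("Hello World")


def check_punctuation(f):
    assert f("hi, there!") == "hi-there", f("hi, there!")


report = review_gate(ai_generated_slugify,
                     [("lowercases", check_lowercase),
                      ("strips punctuation", check_punctuation)])
# The flawed candidate passes the obvious check but fails the stricter one,
# so it is not accepted.
```

The point is the direction of verification: humans write the checks and the machine’s output is the subject under test. Reversing that, as the article warns, lets errors propagate straight into production.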

Commercial Pressures and Shadow Adoption

Adoption, however, is not always top-down. “Developers have begun using these tools independently, without management’s knowledge,” Sreejith observed. “Organisations are now working to channelise this usage by formally adopting AI and developing guidelines around it.” 

This shadow adoption reflects AI’s inevitability — and the lack of structured governance to manage it.

Balaramanan highlighted structural barriers in this context. Waterfall-driven program managers resist “blurred phase” AI models. Traditional commercial frameworks, such as per-FTE (full-time equivalent) billing, do not align easily with vibe coding and AI agents. Audit challenges persist, as Indian IT relies heavily on traceability and thorough documentation, whereas AI-first pipelines can appear opaque and difficult to scrutinise. Verification in AI-first workflows becomes ongoing rather than gated at the user acceptance testing phase.

Hybrid strategies are emerging to bridge this gap. AI-accelerated development can be positioned as a “fast-track” lane within traditional waterfall projects, retaining governance layers such as documentation, checkpoints, and audit logs while generating them automatically through AI debt ledgers, traceability tags, and continuous verification reports. This approach allows firms to maintain compliance with client audit requirements while delivering at AI speed.
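The article names “AI debt ledgers” and “traceability tags” without defining their shape. One plausible minimal form — every field name here is our assumption, not an industry standard — is an append-only record of provenance for each AI-generated artifact, from which client-facing audit logs can be emitted automatically:

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass
class LedgerEntry:
    """One provenance record in a hypothetical AI debt ledger."""
    artifact: str       # path of the AI-generated file
    model: str          # which model produced it
    prompt_sha256: str  # hash of the prompt, keeping the log compact and non-sensitive
    reviewer: str       # human who signed off
    verified: bool      # whether it passed the continuous-verification stage
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())


def record(ledger: list, artifact: str, model: str,
           prompt: str, reviewer: str, verified: bool) -> LedgerEntry:
    """Append a traceability entry; the return value can tag the commit."""
    entry = LedgerEntry(
        artifact=artifact,
        model=model,
        prompt_sha256=hashlib.sha256(prompt.encode()).hexdigest(),
        reviewer=reviewer,
        verified=verified,
    )
    ledger.append(entry)
    return entry


ledger: list = []
record(ledger, "src/billing.py", "some-llm", "generate an invoice parser",
       "a.kumar", True)

# The audit log falls out of the ledger automatically, at AI speed.
audit_log = json.dumps([asdict(e) for e in ledger], indent=2)
```

Because the entries are generated as a side effect of delivery rather than written after the fact, the governance layer keeps pace with the “fast-track” lane instead of gating it.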

At the same time, commercial pressure is mounting. Clients are eager to leverage AI to cut costs, pushing firms to deliver faster and with leaner teams. Roy warned this is compounding risk: “Clients are incentivising corners that increase technical debt. The result could be brittle software that fails under complex or untested conditions.” 

A High-Stakes Experiment

For Indian IT, the Services-as-Software Flywheel is both a promise and a gamble. It offers speed, automation, and new models of co-creation. But it also introduces systemic risks: fragile code, compliance blind spots, inadequate validation, and skill erosion.

Hybrid strategies offer partial bridges but do not solve the deeper issues of governance, reproducibility, and technical reliability.

The takeaway is clear: Indian IT’s process-driven model is being forced into a high-stakes experiment with immature tools.

Or, as Roy puts it: “This will show up in some time.”
