Most Enterprise AI Failures Start with ‘Let’s Build it Ourselves’
Analytics India Magazine (Mohit Pandey)
Enterprises love the idea of building their own AI. It looks bold. Sounds strategic. Feels visionary. Boards nod, investors cheer, executives take credit. Despite the recent viral MIT study claiming that 95% of GenAI projects fail in production, there is no escaping the fact that GenAI projects are now the default.
But how enterprises go about adopting GenAI involves a learning curve.
Jaspreet Bindra, co-founder of AI&Beyond, a company that helps organisations build AI and tech literacy, told AIM that enterprises often mistake in-house generative AI builds for a “badge of vision and innovation.” It may look bold and strategic, but he argued that success in AI is less about who builds it and more about how quickly it delivers business value.
“Most organisations do not need to reinvent the AI stack from scratch; they need to harness it to solve real problems.”
Armand Ruiz, VP of AI platform at IBM, wrote on LinkedIn: “Most enterprise AI failures start with the same decision: ‘Let’s build it ourselves.’ It sounds bold. Strategic. Cost-effective. But the problem is that most are not building a moat. Instead, they are building a distraction… you don’t get extra credit for reinventing the stack. You get fired when your GenAI pilot dies in procurement.”
That framing reflects reality inside most organisations. The speed of AI doesn’t match the speed of enterprise IT.
Foundation models shift every three weeks. Tooling churns every three months. Yet companies expect internal teams to build those AI capabilities in months. The result isn’t strategy. It’s sabotage.
In Bindra’s view, partnerships and collaborations with the right ecosystem players, whether startups, hyperscalers, or domain experts, allow enterprises to leapfrog experimentation and accelerate adoption while focusing on their real differentiator: the business challenge they are solving.
“The winners in GenAI will not be those who spend years building in isolation, but those who intelligently combine their core strengths with external expertise to move from pilots to production at scale,” he said.
The Failure Rates Don’t Lie
The numbers behind enterprise AI are brutal. RAND research from 2022 puts AI project failure rates above 80%. IDC and CIO surveys from 2023 push that higher, reporting that nearly 88% of AI pilots never reach production.
McKinsey calls this “pilot purgatory”—proof of concept after proof of concept, with no deployment in sight.
Avinash Kumar, senior PM at Workato, a platform that unites AI agents and search with data, apps and workflows, highlighted the real killer: time. “By the time you’ve built something internally, the ecosystem has moved three generations ahead.”
MIT Sloan’s 2024 study cut deeper: internally built AI tools reached full deployment only about a third of the time. When external partners were involved, success rates doubled.
But then Google added confusion.
In its 2025 ROI of AI report, it claimed 88% of executives at “agentic AI” early-adopter organisations were already seeing ROI, compared to 74% across all organisations. That contradicted MIT’s findings.
Nitin Aggarwal, senior director of generative AI at Microsoft, pointed out the discrepancy, noting that Google’s sample size was larger but its conclusion clashed with MIT’s.
That contradiction sums up enterprise AI today: noisy data, mixed results, wins and failures scattered inside the same firms. One team celebrates success while another buries its pilot. Neither proves the larger thesis, but the broader pattern is hard to ignore—DIY AI is harder, slower, and riskier than buying or partnering.
Why DIY Breaks Down
Every internal AI project starts with a demo. A couple of engineers hack together open weights and orchestration. The demo looks good enough to impress leadership. Budgets get allocated. The board gets a slide.
Then reality sets in.
Infrastructure isn’t production-ready. The model drifts. Governance demands explainability. Compliance flags data risk. Suddenly, the AI project isn’t about customer value—it’s about Kubernetes configs, uptime guarantees, and GDPR audits.
The team that built the prototype can’t run a production system at enterprise scale. By the time leaders realise it, millions are gone. Often what’s left is a brittle, outdated system—or nothing at all.
Gartner’s numbers are telling. Custom systems can exceed $10,000 per user per year in support costs, while SaaS AI tools stay in the low hundreds.
Industry practitioners don’t sugarcoat it. Michael Coté, senior member of technical staff at VMware, wrote in his newsletter, “You’ll spend 12 months building your own platform. It’ll barely work—if at all—and cost $2 million in staffing. No ROI. A third of the functionality you promised.”
With models and frameworks moving at breakneck speed, in-house builds fall behind almost immediately.
So What Works?
Skeptics point back to Google’s report: if 88% of early adopters see ROI, doesn’t that prove building can work?
Not exactly. Google’s “agentic AI organisations” are already deeply tied into its cloud ecosystem. They may “build,” but it’s on top of Google’s APIs, managed services, and models. That’s not the same as trying to stand up model hosting from scratch.
Some leaders still insist on owning the stack. Coley Perry, a transformation executive, disagrees with the buy-first thesis. “In Formula 1 they don’t outsource the race team. You must own your race ops end to end, and know what it is, what it does and how it works intimately,” he said.
This is where Aggarwal’s critique matters. Both MIT and Google can be right. AI outcomes aren’t uniform. One company’s breakthrough doesn’t erase another’s collapse.
What it shows is that leadership choices matter. CIOs and CTOs must make bets with incomplete information. The only safeguard is to fail cheap. A $200,000 SaaS experiment is a lesson. A $20 million internal build that never ships is a career-ending disaster.