Why AI implementations fail: Key takeaways from The Business Show London
A huge thank you to everyone who visited the Imobisoft stand and attended our presentation at The Business Show London 2025. It was inspiring to see so much energy in the room, but the conversations we had confirmed a common theme. While interest in AI is at an all-time high, successful implementation remains a significant challenge.
For those who couldn’t make it to the talk, or for those who want a recap of the key points, we wanted to share the insights we presented on why AI implementations fail.
If you are currently embarking on an AI implementation or just considering it, the landscape can be confusing. On one side, you have industry figures like Sam Altman from OpenAI predicting massive transformation, AGI, and autonomous agents.
But when we contrast that hype against the real world, the data tells a different story.
The reality of AI failure
The premise that there is a large-scale failure of AI implementations appears to be true. Just look at the statistics we shared during the talk:
- MIT examined 300 public deployments involving $30-40 billion in investment; 95% of executives reported zero return.
- RAND and S&P Global have estimated the failure rate of AI initiatives as high as 80%.
- IDC Research found that for every 33 Proof of Concepts (PoCs) for AI, only 4 graduated to production. That is an 88% attrition rate.
- S&P Global noted that 42% of businesses have simply abandoned their AI projects.
This is not about staff simply logging into ChatGPT; we are talking about AI projects that build applications to solve specific business problems.
However, we need to define “failure” carefully. Implementing AI is innovation, and innovation is hard.
Generative AI is still a very new technology; ChatGPT was only released publicly on November 30, 2022. Given that we still see failures in mature software stacks, it is natural to expect a higher rate of failure in such a nascent technology.
That said, there are common pitfalls that you can avoid.
The 5 common reasons for failure
In our presentation, we highlighted five key areas where projects go wrong:
1. Poor problem definitions
We often see companies starting with an AI model looking for a use case, rather than a business problem to resolve. This approach often leads to AI being applied in areas to which it is not suited.
If a process has a fixed logical sequence, standard software automation is often better, with more consistent reproducible outputs than AI.
2. Data issues
“Garbage in, garbage out” applies just as much to AI as any other system. Data preparation can take up half the project time. Crucially, it is pointless to spend resources cleansing historic data if you don’t implement new data governance to ensure future data meets the right standards.
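To make the governance point concrete, here is a minimal sketch of what "new data governance" can look like in practice: validating records at the point of entry so future data meets the same standards as the cleansed historic data. The field names and rules are hypothetical, purely for illustration.

```python
# Illustrative data-governance gate: check each incoming record against
# simple rules before it reaches the AI pipeline. Field names and rules
# are hypothetical examples, not a prescription.

REQUIRED_FIELDS = {"customer_id", "order_date", "amount"}

def validate_record(record: dict) -> list[str]:
    """Return a list of governance violations for one record."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    amount = record.get("amount")
    if amount is not None and (not isinstance(amount, (int, float)) or amount < 0):
        errors.append("amount must be a non-negative number")
    return errors

def partition(records):
    """Split records into clean ones and rejects (with reasons)."""
    clean, rejects = [], []
    for record in records:
        errors = validate_record(record)
        if errors:
            rejects.append((record, errors))
        else:
            clean.append(record)
    return clean, rejects
```

The key design point is that rejects are quarantined with reasons rather than silently dropped, so the governance process produces feedback on where upstream data quality is slipping.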
3. Lack of business alignment
AI is often viewed as an IT project rather than a business transformation. While IT must be involved, they should rarely be the lead. Without executive sponsorship and cross-functional collaboration, PoCs will fail to scale.
4. Organisational and cultural barriers
It is impossible to introduce successful AI if you don’t bring your staff on the journey. We are all bombarded with messaging that AI will take our jobs.
Fear and resistance, which are present in any business change, are amplified in this environment. It is up to your leadership to guide staff through that fear and resistance; without employee buy-in, your project will struggle to gain traction.
5. Unrealistic expectations
AI is not a panacea for all ills. The promise of AGI creates unrealistic expectations that we have to battle against. We do not believe you should try to build entirely self-determining AI solutions. All system designs must retain an element of Human in the Loop (HITL).
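A common way to retain Human in the Loop is to act automatically only on high-confidence outputs and route everything else to a person. The sketch below illustrates that pattern; the threshold value and labels are hypothetical, not figures from any of our projects.

```python
# Illustrative Human-in-the-Loop routing: auto-apply only predictions
# above a confidence threshold; send the rest for human review.
# The threshold value (0.85) is a hypothetical example.

CONFIDENCE_THRESHOLD = 0.85

def route(prediction: str, confidence: float) -> dict:
    """Decide whether a prediction is auto-applied or escalated."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"action": "auto", "prediction": prediction}
    return {
        "action": "human_review",
        "prediction": prediction,
        "reason": f"confidence {confidence:.2f} below {CONFIDENCE_THRESHOLD}",
    }
```

In practice the threshold becomes a business decision, not a technical one: lowering it increases automation but shifts risk away from human oversight.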
Technical pitfalls
Beyond the strategic issues, there are specific technical reasons why prototypes fail to scale.
- Simplified prototypes: Models that work well as prototypes often need significantly more work to scale. A functioning prototype can give a misleading signal about the effort required to reach a live model.
- Integration with legacy systems: Many firms are set up to build only the AI component but lack the skills to build the middleware or APIs needed to integrate with existing legacy infrastructure.
- Poor model monitoring: There is a dangerous assumption that once a model is live, the work is done. Just like any other software, you need support and maintenance for your AI model.
- Error rate expectations: At PoC level, where you are normally working with limited parameters and curated data sets, your model may have a low error rate and be significantly faster than the current process. As you scale, the speed often holds, but error rates tend to rise as you introduce more parameters and real-world data sources. There comes a point where the gain in speed no longer justifies the increase in error rate.
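The speed-versus-error trade-off in the last point can be framed as a simple break-even calculation. All the figures below are hypothetical, purely to show the shape of the reasoning: the value of time saved per item versus the cost of fixing each additional error.

```python
# Illustrative break-even sketch for the speed-vs-error trade-off.
# All figures (time saved, error rates, costs) are hypothetical.

def net_benefit(items: int, seconds_saved_per_item: float,
                extra_error_rate: float, cost_per_error: float,
                value_per_second: float = 0.01) -> float:
    """Value of time saved minus the cost of the extra errors introduced."""
    time_value = items * seconds_saved_per_item * value_per_second
    error_cost = items * extra_error_rate * cost_per_error
    return time_value - error_cost

# PoC scale: curated data keeps the extra error rate low, so the
# speed gain clearly wins.
poc = net_benefit(items=1_000, seconds_saved_per_item=30,
                  extra_error_rate=0.01, cost_per_error=5.0)

# Production scale: real-world data pushes the extra error rate up;
# the same speed gain no longer covers the cost of correcting errors.
prod = net_benefit(items=1_000, seconds_saved_per_item=30,
                   extra_error_rate=0.08, cost_per_error=5.0)
```

Running the numbers makes the pitfall visible before go-live: the process is just as fast in both cases, but the net benefit flips from positive to negative once the error rate rises.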
A tale of two projects: Our learnings
To bring these points to life, here are two contrasting case studies from our own experience.
The failure (3 years ago): We built a “Sustainability AI” PoC for a global petrochemical company. It was technically brilliant: it benchmarked sustainability reporting and predicted Bloomberg Index scores.
- Why it failed: The goal was poorly defined, the operational team refused access to necessary data, and sustainability lacked internal “clout.” Despite success at board presentations, the Global IT Director killed the project because it hadn’t originated as an IT project.
The success (current): Fast forward to today, and we are running a project for a water services engineering company using a completely different approach.
- Why it works: We spent a month refining the business use case first. We have a steering group including IT, Operations, Finance, HR, and the CEO. We investigated their existing systems first to ensure seamless integration. We are ready to deploy by the end of the year.
To summarise: how to navigate the maze
If you want your project to survive the 88% attrition rate, here is our advice:
- Innovation is hard: Plan for some failure.
- Data prep: Leave plenty of time and resources for it.
- Define the use case: Ensure you have clear KPIs that measure the right things.
- Review legacy systems: Do this before you start, to stop surprises later.
- Cross-functional teams: Ensure the whole business has input, not just IT.
- Challenge expectations: Bring your people with you.
Did you miss us at the show?
If you want to make sure that your AI project makes it from PoC to production, get in touch with the team at Imobisoft today. We are here to help you navigate the pitfalls and deliver real ROI.