Valliance AI Summary
Enterprises are pouring an average of £39.2M a year into AI, yet fewer than half of projects deliver value. Valliance's research reveals "pilotitis": endless experiments stalled by poor metrics, low adoption, and misaligned consulting models. The fix is a value- and people-first approach, with clear outcomes and real capability transfer.
Executive Summary
AI failure isn't about the technology. It's about people and value.
We surveyed 1,000 senior leaders at Europe's largest enterprises and found that, although investment is accelerating (27% year-over-year growth, averaging £39.2M annually), value remains hard to come by.
Average time to value is 5.9 months, stretching to 6.6 months for organisations stuck in pilot mode. What's more, less than half (45%) of projects define success metrics from the start, creating the conditions for failure before work even begins.
However, the problem isn't the projects themselves but the way they're being executed. The most mature organisations run more pilots (48%) than early adopters do, proving that experimentation itself isn't wrong. Rather, experimentation is being mismanaged, with misaligned expectations from the start.
These dynamics explain what we call pilotitis: endless experiments without a measurable framework for value.
The average annual investment across UK and Netherlands enterprises sits at £39.2M. Going further, organisations with technology budgets exceeding £100M invest £50.1M annually in AI alone, and they have been increasing this spend by 30% year-over-year.
AI now represents more than half of IT expenditure for high-spend organisations
Sector patterns reveal where enthusiasm is highest. IT and technology companies invest £45.8M annually on average, followed closely by the finance, insurance, and legal sectors at £44.2M. This demonstrates that enterprise AI projects are more than fleeting experiments; they are substantial commitments reflecting genuine belief in AI's potential.
Yet outcomes lag behind the investment. Only 50% of projects deliver against their stated success metrics, with that number dropping to 43% for pilot-stage organisations. The gap between spending and results points to a deeper problem: businesses are over-indexing on technology and external support rather than value and adoption. In other words, they're buying activity when they should be buying outcomes.
Consultancies capture a significant share of this spend. On average, 21% of AI budgets go to external consultants, translating to £8.4M per enterprise annually. Across UK enterprises alone, that's more than £66.1 billion spent on consultancy services every year, mostly without demonstrable ROI.
Enterprises spend an average of £8.4M per year on consultancies to support AI projects – yet less than half demonstrate success
The billable hour model ensures that consultants profit regardless of whether or not success is achieved. They simply aren’t incentivised to seek out value, and this, in turn, exacerbates the overall pilotitis problem.
40% of AI initiatives across our survey are pilots or experiments by design, rising to nearly half (48%) in mature organisations with established AI programmes. In fact, mature organisations run more pilots than early adopters – but this isn't a sign of immaturity. It's proof that experimentation itself isn't the problem. The problem lies in what happens next, or more accurately, what doesn't happen.
Pilotitis correlates directly with poor value conversion. Businesses stuck in pilot stage report lower success rates (43% vs. 50% overall), longer time to value (6.6 months vs. 5.9 months), and weaker ROI (only 20% report strong ROI compared to 76% in mature organisations). Projects stall between proof of concept and production. Lessons aren’t learned or applied elsewhere and therefore capability doesn't scale.
Businesses stuck in pilot stage report strong ROI in just 20% of AI projects
CEOs are the most likely to support experimentation (48% of their initiatives are pilots), suggesting cultural endorsement from the top is essential and that the lack of it may even stall maturity. When the leadership team treats pilots as learning exercises without clear pathways to production, the business mirrors that behaviour. Experimentation becomes the goal rather than the means to a solution.
Legacy consulting models contribute directly to pilotitis. They reward effort over outcomes, so there's no incentive to move fast or shut down failing experiments. They push preferred vendors and technology platforms regardless of fit, creating lock-in without value. They fail to develop internal capability, ensuring continued dependency. And they undervalue adoption and workflow integration, treating deployment as success rather than usage.
Only 45% of AI projects set success metrics upfront. This is a systemic blind spot that undermines ROI from the start.
Without measurement, outcomes can only be based on subjective opinions. Teams argue about whether projects succeeded and budgets get allocated based on enthusiasm rather than evidence. As this continues, trust erodes, because nobody can prove whether AI actually works. Leaders in pilot-stage organisations report the lowest trust in AI (82%) compared with 95% in mature organisations. This gap exists because mature organisations measure, learn, and iterate based on hard evidence rather than gut feel.
Directors and senior managers set metrics at or below the average rate (45% and 41% respectively), revealing operational gaps: projects are being initiated without proper success criteria. By the time leadership asks for results, it's too late to retrofit measurement frameworks. The project either gets extended (generating more billable hours for consultants) or quietly shelved.
Trust erodes because nobody can prove whether AI actually works.
82% — leadership trust in AI at pilot-stage organisations
95% — leadership trust in AI within mature organisations
Just 45% of AI projects have defined success metrics
ROI patterns underscore what's at stake here. Mature organisations report strong ROI in 76% of projects; for pilot-stage organisations, that number is just 20%. The difference isn't budget or technology: it's discipline around defining, measuring, and optimising for value from day one.
Enterprise AI projects fail when technology is prioritised over measurable outcomes. In turn, enterprise AI adoption falls further and further behind.
Only a third (35%) of leaders across UK and Netherlands enterprises actively use AI tools daily, despite the wide availability of ChatGPT, Copilot, and Gemini. This raises the question: if leadership isn't using AI, why would anyone else?
Only 35% of enterprise leaders actively use AI tools
Usage patterns tie directly to value performance. Mature organisations show 45% daily usage among leaders while pilot-stage organisations show just 27%. The gap explains why mature organisations achieve better outcomes: their people actually use AI in their day-to-day work, not just in isolated experiments.
Workers can achieve demonstrable gains when empowered correctly. Harvard Business School research shows 25% faster output and 40% higher-quality work when AI is integrated properly into workflows. But "properly" requires more than simple access to AI-based tools. People have to understand how to use the technology in the right way to improve the outcome of their work. This requires hands-on guidance, community learning, shared examples, and the right encouragement from the top. Tool drops and surface-level training don't work.
Leaders cite people-related barriers as top challenges: people and education, AI trust and risk, value measurement gaps. These aren't technical problems; they're human ones. AI projects fail not because AI is incapable, but because organisations fail to put people and measurable value at the centre of transformation.
This is why so many solutions go underused, why capability doesn't build, and why pilotitis persists despite rising investment.
One in three enterprises stuck in pilot mode doesn't trust its consultants to deliver value. The top consultant-related challenges explain why:
Leaders say consultants are tech-focused rather than outcome-focused. Poor knowledge transfer means internal teams can't sustain or scale what consultants build.
Pricing and costs are a constant concern, particularly when billable hour models incentivise longer timelines. Adding to this, consultants’ lack of AI-native expertise means the organisations they serve are funding their learning curve. Insufficient capability leads to continued dependency, and a lack of trust is especially pronounced in the Netherlands and among CEOs, undermining the entire relationship.
The consultant business model depends on maximising billable hours, but this approach doesn’t align with generating proven ROI. Consultants stretched thin across multiple projects deliver mediocre work under pressure to hit targets that have nothing to do with results. These traditional consultancies are only selling time, and that has nothing to do with outcomes.
When asked what would increase confidence in consulting partners, leaders gave clear answers that align directly with how Valliance operates.
These are basic expectations that traditional models can't deliver because the economics don't work. When consultants profit from confusion and extended timelines, transparency and speed become liabilities to their business model.
AI projects only work when people can use AI confidently, consistently, and effectively in their day-to-day. When people understand what good AI use looks like, trust increases and capability begins to scale. Celebrating early wins and sharing learnings across teams is essential to embedding AI as the way work gets done, not just as a side experiment.
Too many organisations still start with tools rather than the problems that need to be solved, creating misalignment between investment and business priorities.
A value-first approach requires identifying high-value use cases (not just interesting ones), setting measurable objectives upfront, designing for workflow integration rather than standalone outputs, iterating quickly with real-world feedback loops, and ensuring internal capability transfer so value grows over time.
When people learn to use AI confidently, value becomes measurable
When value is defined and visible, adoption accelerates. When both are aligned, trust increases and so does investment.
Higher AI maturity correlates directly with faster time to value (5.3 months vs. 6.6 months), higher success rates (56% vs. 43%), higher trust (95% vs. 82%), stronger ROI (76% vs. 20%), and higher leadership usage (45% vs. 27%). This shows what happens when organisations get the fundamentals right: clear metrics, leadership adoption, internal capability, and value-driven delivery.
The AI maturity journey: early stage → pilot stage → scaling stage → maturing stage
Mature organisations treat pilots as rapid learning cycles with clear gates to production, rather than endless science projects conducted for the sake of it. They measure obsessively and kill what doesn't work quickly, invest in their people, and partner with firms whose success depends on their success, not on how many hours they can bill.
AI trust is rising overall (84% saw trust increase year-over-year), yet this increase is lowest among pilot-stage organisations (71%), underscoring the need for disciplined transformation beyond experimentation. Rising investment alone won't close the value gap, and neither will better technology or conducting more pilots. What works is aligning incentives, measuring what matters, empowering people, and building internal capability.
The consulting industry is being forced to confront a simple truth: clients want partners who share their risk and their upside, not vendors who profit from confusion. The billable hour era is ending because AI is automating much of what firms once sold as expertise. What remains is the hardest and most valuable work: helping organisations define value, build capability, and actually deploy AI that gets used.
This is what Valliance was built for.