INDUSTRY CASE STUDY · Q1 2026 · 6-MIN READ

Where 30% of AI projects die, and what survives.

Independent research from Gartner, BCG, and McKinsey now agrees on the failure mode. The path through it is narrow, but clearly documented.

TL;DR
  • Gartner forecasts that ~30% of generative-AI projects will be abandoned after proof-of-concept by end of 2025. Failure modes are consistent across studies.
  • BCG finds only 26% of companies move past AI PoCs to capture value at scale. The other 74% remain in the "experimentation trap".
  • Programs that survive share three operational traits: use-case-first selection, evaluation discipline from week one, and operator-led design.

The Gartner press release in July 2024 made it official: at least 30% of generative AI projects will be abandoned after proof-of-concept by the end of 2025. The reasons it cited (poor data quality, inadequate risk controls, escalating costs, unclear business value) were not surprises. What was new was the specificity of the failure mode.

BCG's October 2024 study went further. Surveying enterprises across sectors, it found that only 26% have developed the capabilities to move beyond AI proofs-of-concept and capture value at scale. The remaining three-quarters are stuck in what BCG calls the "experimentation trap": running pilots, generating slides, accumulating sandbox tools, and making no measurable change to operator workflows.

The pattern of programs that survive

Across the same studies, the survivors share three operational traits.

i. Use cases are selected before tools

Programs that succeed start with a friction the operator can name: invoice classification, contract review, customer-call summarization, data extraction from unstructured documents. The model and the platform are chosen after the use case is defined. In stalled programs, the order is reversed: someone procures Copilot or GPT first, and the team begins searching for problems it might solve.

ii. Evaluation is built in from week one

Successful AI deployments run automated test suites (accuracy benchmarks, hallucination detection, edge-case behaviour) from the first iteration. McKinsey's 2024 State of AI report shows organizations that invest in evaluation harnesses are substantially more likely to move from pilot to production. Without evaluation, AI is hope. With it, AI is engineering.
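What such a harness looks like in practice can be sketched in a few lines. This is a minimal illustration, not any vendor's API: `classify_invoice` is a hypothetical stand-in for the real model call, and the allowed-label check is a cheap proxy for one class of hallucination (the model inventing a category that doesn't exist).

```python
# Minimal evaluation-harness sketch for an AI classification step.
# `classify_invoice` is a hypothetical placeholder for the real model call.

ALLOWED_LABELS = {"utilities", "travel", "software", "office-supplies"}

def classify_invoice(text: str) -> str:
    # Placeholder logic; a real deployment would call the model here.
    return "software" if "license" in text.lower() else "utilities"

def evaluate(cases):
    """Run labeled cases; report accuracy and out-of-vocabulary outputs.

    Any label outside ALLOWED_LABELS is flagged as a hallucination
    and never counts as correct, regardless of the gold label.
    """
    correct, hallucinations = 0, []
    for text, gold in cases:
        pred = classify_invoice(text)
        if pred not in ALLOWED_LABELS:
            hallucinations.append((text, pred))
        elif pred == gold:
            correct += 1
    return {"accuracy": correct / len(cases), "hallucinations": hallucinations}

cases = [
    ("Annual software license renewal", "software"),
    ("Electricity bill for March", "utilities"),
]
report = evaluate(cases)
# A regression gate like this is what turns the harness into discipline:
# a deploy that drops below the baseline fails loudly, from week one.
assert report["accuracy"] >= 0.9, "below accuracy baseline; block deploy"
```

The point is not the fifteen lines of code; it is that the labeled cases, the baseline threshold, and the hallucination check exist before the first production deploy, so every model or prompt change is measured against the same bar.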

iii. Operators are in the design loop

BCG's research is explicit on this: AI initiatives co-designed with the operators whose work will change show meaningfully greater value capture than top-down deployments. The reason is simple. Adoption is the rate-limiting step. If the operator distrusts the AI's output or finds it slower than the manual workflow, the deployment will fail regardless of the model's underlying capability.

What the data does not say

None of the published research argues that AI is a poor investment. Quite the opposite: McKinsey's 2024 update found 72% of organizations now use AI in at least one business function, up from 55% the prior year. Adoption is no longer the question. The question is whether a given AI program lands in the 26% that capture value or the 74% that do not.

The methodology that puts a program in the 26% is not exotic. It is the same operating discipline that has made non-AI digital programs succeed for two decades: diagnose first, design with the operator, measure against the baseline, ship to production, then iterate.

SOURCES  ·  Gartner, "Hype Cycle for Generative AI", July 2024  ·  BCG, "Where's the Value in AI?", October 2024  ·  McKinsey, "The State of AI in Early 2024", May 2024
