Why do so many promising AI projects end up stuck in pilot purgatory? You've probably seen the MIT study highlighting a 95% failure rate for GenAI pilots, a statistic that can be alarming when you're considering investments in customer experience or automation.
Despite this, those remain the right areas to focus on. The high failure rate isn't due to poor AI models or regulations, and halting your innovation efforts is not the right takeaway from the statistic.
The reason is rarely the technology. Our internal research shows it's the playbook: too many companies approach AI with a flawed, linear model that is destined to fail.
We recently gathered multiple retail leaders for our "Commerce Conversations" event in collaboration with Arthur Hunt, a top business consulting company, to move past the hype and get to the root of this problem. A more effective model emerged from those discussions. Successful AI adoption does not work like a project with a start and end date. It works like a flywheel: a self-reinforcing system where each component helps drive the next.
This flywheel has three parts: People, Process, and Tech & Techniques. Let’s get into all three below.
The hardest part of getting a flywheel moving is the first push. In AI adoption, that push is overcoming human resistance. In our experience, AI is viewed as a huge opportunity right up until implementation. That's when the concept starts to feel optional internally, a "nice-to-have" technology. Overall, there's a lot of misunderstanding about the full scope of AI as a technology and what it can deliver in terms of cost savings and business outcomes.
That is why, before a pilot project can be approved, you must address the full scope of the pain point you're trying to solve, as well as the organizational inertia that kills innovation, both among the people doing the work and the stakeholders demanding proven ROI. This is not (yet) a technology challenge.
This inertia is a complex reaction rooted in specific, rational fears. It’s a combination of:
Insights from the event pointed to a clear playbook for getting the AI adoption flywheel moving internally. It’s less about the technology itself and more about managing the human element of change.
Make a visible career promise. Directly counter the fear of irrelevance with a clear plan. Invest in visible upskilling and reskilling programs to show people exactly how their careers will evolve with AI, not be replaced by it.
As Alina Sîrbu from Arthur Hunt highlighted during the session:
Once you have initial buy-in, it's time to strategically address the fear of a massive, upfront investment and the fear of compromising data security. The process must be both pragmatic (to avoid the budget trap) and trustworthy (to address security concerns).
Before any pilot, you must address the data. Retailers often have vast amounts of unstructured and siloed data, which makes even the most advanced AI ineffective. The first step is to transform this unused data into a valuable resource by integrating information from internal systems, like CRM, ERP, and e-commerce platforms, into a centralized platform. This includes first-party data, which offers a competitive advantage but must be handled in compliance with GDPR.
A top cause of AI project failure is poor input data. Before a pilot, conduct a small-scale data audit to ensure your information is clean and reliable.
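Such an audit does not require heavy tooling. Below is a minimal sketch of what a first pass might look like in Python, checking a CSV export for missing values and duplicate keys. The column names and sample data are invented for illustration; a real audit would run against your own CRM or product exports.

```python
import csv
import io
from collections import Counter

def audit_rows(rows, key_field):
    """Report missing values per column and duplicate keys in a dataset."""
    missing = Counter()
    keys = Counter()
    for row in rows:
        for field, value in row.items():
            if value is None or str(value).strip() == "":
                missing[field] += 1
        keys[row[key_field]] += 1
    duplicates = {k: n for k, n in keys.items() if n > 1}
    return {"rows": sum(keys.values()),
            "missing": dict(missing),
            "duplicates": duplicates}

# Illustrative sample: a tiny product export with one missing price
# and a duplicated SKU.
sample = io.StringIO(
    "sku,name,price\n"
    "A1,Widget,9.99\n"
    "A2,Gadget,\n"
    "A1,Widget,9.99\n"
)
report = audit_rows(csv.DictReader(sample), key_field="sku")
print(report)
```

Even a report this simple gives the pilot team a concrete baseline: how many records exist, which fields are unreliable, and which identifiers collide before any model sees the data.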
Security is non-negotiable. A pilot project is a test of your organization's trustworthiness. A security breach, even in a small-scale PoC, can destroy all momentum and validate the fears of skeptics.
The first essential step is to classify your data, clearly separating public information from confidential customer data, sales figures, and employee details. This classification is crucial because it allows you to choose the right AI environment from a clear security hierarchy, ensuring your approach is aligned with regulations like the AI Act and GDPR.
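To make the idea concrete, here is an illustrative sketch of how a classification might drive the choice of AI environment. The field names, sensitivity tiers, and environment labels are assumptions for illustration only, not a compliance framework; your legal and security teams define the real tiers.

```python
# Map data fields to sensitivity tiers, then let the tier determine
# which AI environments may process them. All labels are illustrative.

SENSITIVITY = {
    "product_description": "public",
    "store_opening_hours": "public",
    "sales_figures": "confidential",
    "customer_email": "personal",   # GDPR-relevant personal data
    "employee_salary": "personal",
}

# The stricter the tier, the fewer environments are acceptable.
ALLOWED_ENVIRONMENTS = {
    "public": ["public cloud API", "private cloud", "on-premises"],
    "confidential": ["private cloud", "on-premises"],
    "personal": ["on-premises"],
}

def environments_for(field):
    # Unknown fields default to the strictest tier, never the loosest.
    tier = SENSITIVITY.get(field, "personal")
    return ALLOWED_ENVIRONMENTS[tier]

print(environments_for("product_description"))
print(environments_for("customer_email"))
```

The design choice worth copying is the default: anything unclassified is treated as the most sensitive category until someone explicitly decides otherwise.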
As discussed during our event, this security hierarchy offers different levels of control:
The consensus from every leader in the room was to abandon the idea of a massive, multi-year AI project. The goal is a low-risk Proof of Concept (PoC) with a clear, measurable ROI that can be demonstrated in a single quarter. This approach directly counters the common objection, "it will be cheaper in a year," by proving that the value generated from a successful pilot today far outweighs the potential cost savings of waiting.
Leaders we’ve talked to are already running these small-scale, high-impact pilots to solve specific business pains across the entire retail spectrum:
When securing buy-in after a PoC by presenting a pilot's ROI, translate metrics into tangible outcomes. Instead of just "cost savings," present "X hours saved per employee per week" or "a Y% reduction in out-of-stock items." This makes the value concrete and relatable to different stakeholders.
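The translation itself is back-of-the-envelope arithmetic. The sketch below shows the shape of the calculation; every number in it is invented for illustration, and you would substitute figures measured during your own PoC.

```python
# Translate pilot metrics into stakeholder-friendly outcomes.
# All figures are invented for illustration.

minutes_saved_per_task = 6          # measured during the PoC
tasks_per_employee_per_week = 50
employees_in_pilot = 20
avg_hourly_cost = 15                # EUR, fully loaded

hours_saved_per_employee = minutes_saved_per_task * tasks_per_employee_per_week / 60
weekly_team_savings = hours_saved_per_employee * employees_in_pilot * avg_hourly_cost

print(f"{hours_saved_per_employee:.0f} hours saved per employee per week")
print(f"EUR {weekly_team_savings:.0f} saved per week across the pilot team")
```

"5 hours saved per employee per week" lands very differently with a store operations lead than an abstract percentage does, which is the whole point of the translation.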
For a pilot to deliver that quick win, the team executing it must get high-quality results from the AI tools. This is what makes the flywheel spin faster with each rotation. Better execution reinforces the business case and accelerates adoption. A successful execution framework has two core parts: choosing the right technology for the job and using the right technique to get results.
The first step is a strategic choice. The question is not "what technology is most advanced?" but "which technology best aligns with our pilot's goals and our company's context?" Do you leverage existing, powerful AI engines (Buy or Adapt), or do you invest in creating something entirely new from scratch (Build)? Each path has significant implications for cost, speed, risk, and potential competitive advantage.
This involves understanding the trade-offs:
This path demands a high implementation cost, extensive data engineering, deep ML expertise, and significant infrastructure. It also often comes with a low success rate: done without proper planning, it becomes a high-risk R&D or "moonshot" experiment.
Within either approach, you must also choose the right type of AI model for the job. This involves matching the model’s capability to the business problem you aim to solve:
As our Data & AI Director, Radu Săndulescu, explains:
The right choice depends entirely on the PoC's objectives. A deep understanding of your business needs, combined with data analysis, determines which model will create the most value.
Getting value, especially from GenAI models and tools, also depends on the quality of your inputs. Generic prompts lead to generic outcomes that erode confidence.
The solution is to equip your team with a framework for directing these tools, not just using them. The C-T-P-F Method is a simple but powerful tool for execution.
Our tip? Once your team masters the basics, introduce Few-shot Prompting. By providing the AI with 2-3 examples of the exact output you want (e.g., three social media posts following a specific Hook-Benefit-CTA format), you can teach it more complex patterns, dramatically improving the quality and consistency of its responses.
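In practice, few-shot prompting just means assembling the examples into the prompt ahead of the real request. The sketch below shows one way to do that for the Hook-Benefit-CTA pattern mentioned above; the example posts and wording are invented, and the resulting string would be sent to whatever GenAI tool your team uses.

```python
# Assemble a few-shot prompt: 2-3 examples of the exact output format
# teach the model the Hook-Benefit-CTA pattern. Example posts are invented.

EXAMPLES = [
    ("winter boots sale",
     "Hook: Cold feet? Not this season.\n"
     "Benefit: Waterproof boots at 30% off keep you warm for less.\n"
     "CTA: Shop the sale before Sunday."),
    ("free delivery week",
     "Hook: Your cart just got lighter.\n"
     "Benefit: Free delivery on every order this week, no minimum.\n"
     "CTA: Order today and pay nothing for shipping."),
]

def build_few_shot_prompt(topic):
    parts = ["Write a social media post in the Hook-Benefit-CTA format.", ""]
    for example_topic, example_post in EXAMPLES:
        parts.append(f"Topic: {example_topic}")
        parts.append(example_post)
        parts.append("")
    parts.append(f"Topic: {topic}")
    return "\n".join(parts)

print(build_few_shot_prompt("loyalty program launch"))
```

Because the examples live in one place, the team can refine them once and every prompt built from the template improves at the same time.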
This article is the direct result of a pragmatic conversation between Zitec, our partners at Arthur Hunt Romania, and 24 leaders from the Romanian retail sector. During the “Commerce Conversations" event, we facilitated an open dialogue focused on the real challenges of implementing AI, from debunking the most common AI myths to prompt engineering and securing internal buy-in. The insights presented here represent a synthesis of the practical solutions and lessons learned, shared directly by those navigating this transformation day by day.