Why Agentic AI Pilots Stall and How to Fix Them

In the world of enterprise technology, Oscar Vail has seen countless waves of innovation promise to revolutionize business. As a technology expert with deep experience in emerging fields from quantum computing to robotics, he has a rare ability to cut through the hype and ground futuristic concepts in practical reality. Today, we sit down with Oscar to discuss the latest boardroom buzzword: agentic AI. He unpacks why so many initial projects are stalling and provides a clear-eyed roadmap for success, touching on the critical groundwork of infrastructure overhaul, the life-or-death importance of data quality, the necessity of human oversight in an automated world, and what a truly mature, interconnected system of AI agents will look like.

Many businesses treat agentic AI as a simple upgrade, like a more advanced chatbot. What are the key distinctions between agentic and conventional AI? Please share a practical example of the strategic shift required to integrate it successfully, beyond a “bolt-on” approach.

That’s the core of the problem right there. People see this new technology and think it’s just another tool to plug in. But agentic AI is a fundamental shift in capability. Where a conventional AI might perform a discrete task, like sorting invoices based on a set of rules, an agentic AI acts with intent to achieve a goal. It could take that sorted invoice, approve the payment, flag an anomaly for review, and then update the compliance systems autonomously. This isn’t a “bolt-on” upgrade; it’s an entirely new operational paradigm. To make it work, you can’t just drop it on top of existing processes. You have to weave it directly into the enterprise fabric, connecting it to the right data and workflows, which demands a deep, contextual understanding of how your business actually functions.
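The invoice example above can be sketched in a few lines of code. This is an illustrative toy, not any vendor's API: the function names, the anomaly threshold, and the "systems" dictionary are all hypothetical, and a real agent would call live payment and compliance systems rather than return a list of action labels.

```python
def conventional_sort(invoice: dict) -> str:
    """Conventional AI / rules engine: one discrete task, then stop."""
    return "po_backed" if invoice.get("po_number") else "non_po"

def agentic_process(invoice: dict, systems: dict) -> list[str]:
    """Agentic sketch: act with intent toward a goal across several steps.

    The agent sorts the invoice, then decides what to do next -- approve,
    or flag for review -- and closes the loop by updating compliance.
    (All thresholds and system names here are illustrative assumptions.)
    """
    actions = [f"sorted:{conventional_sort(invoice)}"]
    if invoice["amount"] > systems["anomaly_threshold"]:
        actions.append("flagged_for_review")   # anomaly -> human review
    else:
        actions.append("payment_approved")     # routine -> autonomous step
    actions.append("compliance_log_updated")   # always record the outcome
    return actions
```

The point of the sketch is the shape of the control flow: the conventional function ends after its one task, while the agentic version chains decisions toward a goal and touches multiple downstream systems.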

Given that many enterprises run on siloed legacy systems, an agentic AI often lacks the data needed to perform well. What are the first concrete steps a company should take to overhaul its infrastructure? Could you walk us through a brief roadmap for creating a unified environment?

Infrastructure is almost always the biggest, most immediate stumbling block. When you consider that a staggering 80% to 90% of all enterprise data is unstructured and locked away in different systems, you start to see the scale of the challenge. Think of a government agency where processes and content are spread across decades-old applications. Asking an AI to make an informed decision in that environment is like asking it to assemble a puzzle with most of the pieces missing. The roadmap begins with investing in cloud-native foundations. The goal is to build interoperable content platforms that unify that fragmented information. It’s not the headline-grabbing part of the AI revolution, but this groundwork is essential to creating a seamless environment where an agent can access complete, real-time data and perform its job without making flawed or partial decisions.

In a field like healthcare, an AI agent’s recommendation could be flawed if it pulls from incomplete patient data. How should an organization begin auditing its vast unstructured data? What key governance practices are essential before trusting an AI with critical decision-making?

Healthcare is the perfect, high-stakes example of why data quality is non-negotiable. An agent supporting a clinician needs to pull from medical histories, lab results, and imaging data simultaneously. If any one of those pieces is missing, misaligned, or inaccurate, the recommendation it produces could be dangerously flawed. The very first step, before you even think about full deployment, is a comprehensive data audit. You absolutely must gain a firm, unvarnished understanding of where your unstructured data is. You need to know what you have, where it lives, and, most importantly, how it’s governed. This initial audit informs the governance framework you build, which must be embedded from day one to cover everything from regulatory compliance to ethics and operational control, ensuring you don’t hand over critical decision-making power to an AI that is flying blind.
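A first pass at the "know what you have and where it lives" step can be as simple as an inventory scan. The sketch below is a minimal starting point under stated assumptions: it covers only a single file tree (real audits also span email, SaaS applications, imaging archives, and databases), and the extension list is an illustrative guess at what counts as unstructured content.

```python
from collections import Counter
from pathlib import Path

# Illustrative set of unstructured formats -- adjust to your estate.
UNSTRUCTURED = {".pdf", ".docx", ".msg", ".tiff", ".txt", ".pptx"}

def inventory(root: str) -> Counter:
    """Tally unstructured files by extension under one directory tree."""
    counts: Counter = Counter()
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix.lower() in UNSTRUCTURED:
            counts[path.suffix.lower()] += 1
    return counts
```

Even this crude tally is useful: it turns "we have a lot of documents somewhere" into a concrete map that the governance framework can then attach ownership, retention, and access rules to.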

In financial services, an agent might flag a compliance issue but a human makes the final call. How do you design an effective “human-in-the-loop” model? What guidelines help teams decide where to draw the line between AI autonomy and necessary human oversight?

This is one of the biggest misconceptions—that agentic AI is about removing people. The most effective and trustworthy implementations actually blend autonomy with human oversight. In financial services, you can have an agent that’s incredibly efficient at verifying documents and drafting initial compliance reports, which accelerates workflows tremendously. However, the final call on a high-risk case or a flagged anomaly must still rest with a human expert. Designing this model is about identifying the points of highest risk and lowest confidence. The guideline is to automate the predictable, high-volume tasks but preserve human judgment for nuanced, high-consequence decisions. This balanced approach builds trust in the technology and maintains accountability, allowing you to scale autonomy gradually as the system proves itself and your team’s confidence grows.
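That guideline, automate the predictable but escalate high-risk or low-confidence cases, reduces to a small routing rule. The sketch below is a hypothetical illustration: the category names, the confidence floor, and the high-risk set are assumptions standing in for an organization's actual risk policy.

```python
from dataclasses import dataclass

# Hypothetical policy values -- set by the risk function, not the model vendor.
CONFIDENCE_FLOOR = 0.90
HIGH_RISK = {"sanctions_hit", "large_transfer", "pep_match"}

@dataclass
class AgentFinding:
    case_id: str
    category: str      # e.g. "routine_kyc", "sanctions_hit" (illustrative)
    confidence: float  # agent's self-reported confidence, 0..1

def route(finding: AgentFinding) -> str:
    """Automate the predictable; escalate high-risk or low-confidence cases."""
    if finding.category in HIGH_RISK:
        return "human_review"      # high consequence: always a human call
    if finding.confidence < CONFIDENCE_FLOOR:
        return "human_review"      # low confidence: don't let the agent guess
    return "auto_approve"          # predictable, high-volume work
```

Scaling autonomy gradually then becomes a matter of tuning these thresholds as the system earns trust, rather than re-architecting the workflow.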

As the technology matures, we may see networks of AI agents coordinating across workflows. How would this change daily operations in a complex organization like a hospital, and what new challenges in transparency and interoperability does this interconnected model present for leaders?

That’s where the real breakthrough will be—when we move from isolated agents to interconnected systems. In a hospital, imagine one agent surfacing a patient’s complete history, another managing surgical scheduling in real-time, and a third flagging potential billing issues, all coordinating and contributing to a single, shared context for the clinical team. This would be transformative. But it introduces major new challenges. The first is transparency. Leaders will demand that these agents “show their work”—what data they used, what reasoning they followed, and what compliance checks they applied. Without that audit trail, you’ll never trust them with high-value work. The second challenge is interoperability. Organizations will need the flexibility to integrate agents powered by different models and switch providers as their needs evolve, all within a hybrid or multi-cloud environment. A locked-in, proprietary system just won’t cut it.
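The "show their work" requirement implies that every agent action emits a structured, auditable record. One minimal way to sketch that, with field names that are purely illustrative rather than any standard, is an append-only log of JSON lines:

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

# Hypothetical audit record -- the fields mirror the three questions
# leaders will ask: what data, what reasoning, what compliance checks.
@dataclass
class AgentDecisionRecord:
    agent_id: str
    action: str
    data_sources: list          # what data the agent used
    reasoning: str              # what reasoning it followed
    compliance_checks: list     # which checks it applied
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_log_line(self) -> str:
        """Serialize to one JSON line for an append-only audit log."""
        return json.dumps(asdict(self), sort_keys=True)
```

Because each line is self-describing, records from agents built on different models remain comparable, which speaks to the interoperability point: the audit trail shouldn't depend on any one provider's proprietary format.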

What is your forecast for agentic AI?

My forecast is that agentic AI is currently in its adolescence. It’s powerful, full of potential, but also prone to mistakes if not guided properly. We’re in that difficult but necessary transition phase, much like the early days of cloud computing, where the initial hype is meeting the hard reality of implementation. The organizations that ultimately succeed won’t be the ones that adopted it the fastest, but those that prepared the best. By taking the time to align strategy, modernize their infrastructure, clean their data, and embed strong governance, they will move from shaky experiments to genuine transformation. It’s a marathon, not a sprint, but the payoff will be genuinely intelligent systems that could truly reshape how work gets done for the next generation.
