96% of leaders say agentic AI is critical, yet only 23% are ready to support it. Here is what’s holding the enterprise back.
Agentic AI has moved beyond the hype phase of copilots that summarize meetings and draft emails. The real shift is underway as autonomous agents analyze, decide, and take action across your digital environment, from security investigations to incident response to customer experiences.
It’s an epoch-level change in how work gets done. But here’s the uncomfortable truth we see every day: AI ambition is sky-high, but infrastructure readiness is not. That gap has a cost: telemetry and token volumes are exploding faster than budgets can absorb them, making infrastructure spend the next hard conversation of the AI era.
The readiness gap: Ambition vs. architecture
To get a clean read on where enterprises really stand, Cribl partnered with Harvard Business Review Analytic Services to survey global business leaders.
The signal is loud and clear: 96% of organizations say agentic AI will be critical to their strategy within the next two years. Yet just 23% say they have a formal strategy and the infrastructure to support it today.
That disconnect, the AI readiness gap, is a leading indicator that the status quo has been disrupted and reality is only starting to settle in. This isn’t a story about a lack of interest, talent, or use cases. It’s about something much more fundamental. The premise is simple: we’re trying to run agentic AI on architectures that were built for dashboards and humans typing in queries. At some point, the stack just can’t keep up. It’s like strapping a rocket engine to a minivan: technically it moves, but nothing underneath was built for that speed.
Where the stack starts to crack
Who doesn’t love a good pilot? They’re small in scale, they do really cool things, and they give you a glimpse of the nirvana that is a fully agentic future. Unfortunately, as organizations move from pilots to production, they’re discovering that their telemetry foundations are the real bottleneck. The HBR report surfaces three common problems we hear from customers:
The telemetry surge: 76% of leaders expect AI to significantly increase telemetry volumes, and 80% already recognize that their infrastructure needs to change. Legacy observability and security stacks were never designed for this kind of agent-driven load.
The budget shock: 47% of organizations already report that infrastructure costs for AI are materially higher than expected. When every additional log, trace, or metric drives more ingest, more indexing, and more storage, and every new prompt spins up more LLM tokens, AI becomes a tax, not a tailwind. This helps explain why, in the HBR research, 82% expect a financial hit just to meet the infrastructure demands of agentic AI.
The ROI barrier: Everyone says they want AI outcomes. But without the right telemetry, you can’t measure performance, quality, or risk. That’s the number one reason AI projects stall out in pilots. You can’t prove value on black-box systems.
The pattern is the same: AI is not failing. The underlying architecture is.
Telemetry: From afterthought to critical AI fuel
One of the most important findings in the report is how leaders are starting to rethink telemetry.
For years, telemetry has been data you collect “just in case,” to dig through after an incident: a critical forensics tool.
In an agentic AI world, however, that “just in case” data becomes far more important. It is the basis for predictive modeling, letting agents anticipate activity and behavior across the business, from better customer service and user experience to a more proactive security posture and more adaptive systems.
While agents build, grow, and learn from historical data, they also depend on telemetry for real-time context in their decision-making. For example, telemetry lets an agent distinguish “normal” behavior, such as a recent patch to the firewalls that protect customer data, from an actual attack on the systems that house that data. If any of this data is locked in a silo by proprietary software, it creates the same blind spots for your agentic systems as it would for any human analyst tasked with managing them. The more telemetry feeding your agentic systems, the more often they will solve the right problem at the right time, and the fewer things they will miss.
When the pricing model of a system becomes a structural ceiling on the amount of data agentic AI can access, you start to understand why telemetry is the most critical data in your business. When you can’t explain or govern decisions properly, your business loses trust with teams, regulators, and, most importantly, customers. The organizations breaking out of this trap are rethinking their data infrastructure for an AI-first world. In the HBR research, leaders say their top AI challenge is unprepared data architectures, and that the most urgent upgrades are to data quality, integration, governance, and storage, not just more tools or hardware.
What “ready” actually looks like
In the HBR research, the organizations that are ahead of the curve don’t describe themselves as having “more AI.” They describe themselves as having a different foundation. That foundation has three characteristics:
Control: They treat telemetry as a first-class workload, not an unbounded liability. They route, filter, and shape data before it hits expensive systems, so ingest, storage, and token spend stay within guardrails. They tier data across real-time, training, and archival paths so agents get what they need in real time while cheaper storage still supports “just in case” investigations and model training. And they enforce governance and policy at the data layer, not one tool at a time.
Context: They add semantic understanding on top of raw telemetry so agents and humans can actually reason. The platform knows which fields matter, normalizes data across systems, understands how services relate, and maps current signals back to historical incidents, tickets, and runbooks. It aligns telemetry with code changes, chat war rooms, and documentation to explain what changed, when, and why it matters to the business. The result is telemetry as a shared resource that agents can query at high speed and concurrency, with the context needed to solve problems quickly.
Choice: They refuse to bet their future on a single vendor stack. Instead, they opt for open, interoperable architectures that support a multi-model, multi-vendor AI future. This means that telemetry can flow to and be queried from any system, not locked into one platform’s economics or one model’s context window.
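To make the “Control” idea above concrete, here is a minimal Python sketch of route/filter/tier logic applied to telemetry before it reaches an expensive downstream system. This is an illustrative toy, not any vendor’s API: every function, field, and tier name here is an assumption, chosen only to show the shape of the pattern.

```python
# Toy telemetry router: decide where each event goes BEFORE paying for it.
# Tier names and field names are illustrative assumptions, not a real product API.

SEVERITY_ORDER = {"debug": 0, "info": 1, "warn": 2, "error": 3}

def route_event(event, min_realtime_severity="warn"):
    """Pick a tier for one telemetry event.

    'realtime' -> hot store feeding agents (expensive, fast)
    'archive'  -> cheap storage for "just in case" forensics and training
    'drop'     -> noise nobody should pay to ingest
    """
    sev = event.get("severity", "info")
    if sev not in SEVERITY_ORDER:
        return "drop"  # malformed events are treated as noise
    if event.get("source") == "healthcheck":
        return "drop"  # high-volume, low-value chatter
    if SEVERITY_ORDER[sev] >= SEVERITY_ORDER[min_realtime_severity]:
        return "realtime"
    return "archive"

def shape(event):
    """Strip fields agents never use, shrinking ingest and storage cost."""
    keep = {"ts", "severity", "source", "message"}
    return {k: v for k, v in event.items() if k in keep}

events = [
    {"ts": 1, "severity": "error", "source": "firewall",
     "message": "denied", "raw_payload": "x" * 100},
    {"ts": 2, "severity": "debug", "source": "app", "message": "tick"},
    {"ts": 3, "severity": "info", "source": "healthcheck", "message": "ok"},
]

tiers = {"realtime": [], "archive": [], "drop": []}
for e in events:
    tiers[route_event(e)].append(shape(e))
```

The point of the sketch is the ordering: routing and shaping happen at the data layer, once, rather than being re-implemented per tool downstream.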
AI initiatives don’t die because the vision is wrong. They die because the foundation can’t carry the weight. If you get the telemetry architecture right, the AI part gets dramatically easier, and a lot more durable.
Benchmark yourself: Are you ready for the agentic era?
If you’re serious about agentic AI, you need an honest look at where you stand. That’s why we partnered with HBR Analytic Services on this research. It’s not a vendor pitch. It’s a reality check from global leaders who are in the same fight.
Download the full HBR Analytic Services Report to see how your organization compares, where the biggest readiness gaps are, and what leaders are doing differently.
Take the Agentic AI Infrastructure Benchmark to see how you stack up against leaders surveyed by HBR. Understand how your telemetry, data architecture, and operating model compare, and where to focus next.
Agentic AI is here. Whether it becomes your unfair advantage or your next uncontrolled cost center comes down to the telemetry foundation you choose to build on.