Artificial intelligence is reshaping how organisations use data. Where dashboards and periodic reports once served business intelligence needs, the rise of autonomous AI agents makes real‑time insight from business data a critical prerequisite and places it at the heart of strategy. AI is not an incremental addition but a fundamental shift, one that puts new demands on data platforms, particularly in terms of speed and concurrency.
This structural change reflects more than a trend: it marks a turning point in enterprise operations. According to recent industry analysis, 2026 is poised to be the year enterprise AI moves from experimentation to trusted, embedded workflows — with nearly half of enterprise applications expected to incorporate task‑specific AI agents. Yet success depends on integration and accountability, not just model innovation.
Rethinking data architecture for agentic workloads
Traditional cloud‑based data warehouses were designed for batch processing and human‑guided analysis, updating data on schedules measured in hours rather than seconds. With the rise of AI, “on demand” data is becoming increasingly central to business strategy, and this shift exposes the shortcomings of those legacy architectures: optimised for batch processing and periodic reporting, they create barriers to scaling AI initiatives.
The fundamental difference lies in how AI systems interact with data. Rather than issuing a handful of queries during ad‑hoc analysis, AI agents generate continuous, layered, and concurrent requests as they explore context, reason, and act: a seemingly simple question posed to an AI assistant in natural language can fan out into dozens of simultaneous database queries. Traditional systems were not engineered for this pattern of use, and the result is increased latency, unpredictable costs, and platform bottlenecks.
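To make that pattern concrete, here is a minimal sketch of the fan‑out in Python. The question, the SQL strings, and run_query() are illustrative placeholders rather than any real agent framework; the point is simply that one user request becomes a burst of parallel database calls.

```python
# Minimal sketch: one natural-language question fanning out into a burst of
# concurrent SQL queries as an agent gathers context before answering.
# run_query() and the query texts are hypothetical placeholders.
from concurrent.futures import ThreadPoolExecutor
import time

def run_query(sql: str) -> str:
    # Stand-in for a real database call via whatever driver you use.
    time.sleep(0.05)  # simulate network + execution latency
    return f"rows for: {sql}"

# "Why did EMEA revenue dip last week?" might decompose into several
# scoped look-ups that the agent issues at once:
queries = [
    "SELECT sum(revenue) FROM orders WHERE region = 'EMEA' AND week = 'this'",
    "SELECT sum(revenue) FROM orders WHERE region = 'EMEA' AND week = 'last'",
    "SELECT product, sum(revenue) FROM orders WHERE region = 'EMEA' GROUP BY product",
    "SELECT count() FROM support_tickets WHERE region = 'EMEA' AND week = 'this'",
]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(queries)) as pool:
    results = list(pool.map(run_query, queries))
elapsed = time.perf_counter() - start

# Issued in parallel, the burst costs roughly one round trip instead of four,
# but only if the backing store can absorb the concurrency.
print(f"{len(results)} queries answered in {elapsed:.2f}s")
```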
To meet these demands, organisations are redesigning their data platforms around what some practitioners call an “agentic data stack”. This approach prioritises real‑time responsiveness, high concurrency, and the ability to sustain thousands of simultaneous operations without performance degradation.
Integrating transactions and analytics
Another emerging architectural theme is the fading boundary between transactional systems and analytical workloads. Operational systems like customer databases and logistics platforms capture the events of the business; analytical systems uncover patterns, trends, and insights from those events. Historically, these layers were separated: transactional systems fed nightly or hourly extracts into analytics platforms, creating delays and fragmentation. AI challenges this separation.
Real‑time decision‑making requires data that is current, contextual, and capable of supporting continuous feedback loops. As noted earlier, AI systems do not interact with data the way human analysts do: a single natural‑language query can trigger dozens of database queries simultaneously.
Legacy systems struggle to efficiently manage these bursts of activity. As a result, latency increases and costs can escalate unexpectedly. This dynamic has led to tighter integration between operational and analytical layers, enabling systems to deliver insights without heavy pipeline dependencies.
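As a hedged illustration of that tighter integration, the sketch below writes an operational event and queries it analytically a moment later, with no extract step in between. The table, columns, and localhost instance are assumptions made for the example; the calls follow the open‑source clickhouse‑connect Python client.

```python
# Sketch: operational write and analytical read against the same store,
# with no batch pipeline in between. Assumes a ClickHouse instance on
# localhost and a pre-created logistics_events table with this schema.
from datetime import datetime
import clickhouse_connect

client = clickhouse_connect.get_client(host="localhost")

# Operational write: a shipment event lands as it happens.
client.insert(
    "logistics_events",
    [["order-1042", "dispatched", datetime.now()]],
    column_names=["order_id", "status", "event_time"],
)

# Analytical read: the new row is immediately visible to aggregation,
# so a feedback loop can act on it without waiting for an ETL run.
result = client.query(
    "SELECT status, count() FROM logistics_events "
    "WHERE event_time > now() - INTERVAL 1 HOUR GROUP BY status"
)
print(result.result_rows)
```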
Security and sovereignty remain core concerns
Alongside performance, AI adoption raises fundamental questions about data security and sovereignty. Organisations increasingly demand that AI models operate using proprietary data within clearly defined governance boundaries. Without careful control, sensitive information can inadvertently be exposed to third-party providers or cross jurisdictions where compliance risks are heightened.
Meeting that demand requires a deliberate approach to data classification, storage, and access controls, ensuring infrastructure choices align with both regulatory and operational expectations. This theme resonates with broader industry discussions about sovereign cloud and data residency requirements, especially under evolving European and global data protection frameworks.
Observability for autonomous systems
The ability to understand system behaviour over time is evolving. Traditional observability tools aggregate logs and metrics for human consumption, often summarising data for dashboards, whereas AI agents demand detailed, historical, and unaggregated data to correlate events across time and resolve incidents autonomously.
Within Site Reliability Engineering (SRE), the traditional separation between metrics, logs, and traces is fading for the same reason: aggregated logs and sampled data are often insufficient to detect patterns across time, and autonomous incident resolution requires access to granular, complete, and historically consistent data.
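As a rough example of the kind of question this enables, the query below scans a month of raw events to find minutes where errors spiked and how widely they spread. The raw_logs schema is hypothetical; countIf, uniqExact, and toStartOfMinute are standard ClickHouse SQL functions, and the same correlation would be lossy against sampled or pre‑aggregated logs.

```python
# Sketch: an incident-correlation query over raw, unaggregated logs.
# Assumes a ClickHouse instance on localhost and a hypothetical raw_logs
# table with log_time, level, and host columns.
import clickhouse_connect

client = clickhouse_connect.get_client(host="localhost")

sql = """
SELECT
    toStartOfMinute(log_time) AS minute,
    countIf(level = 'ERROR')  AS errors,
    uniqExact(host)           AS hosts_affected
FROM raw_logs
WHERE log_time > now() - INTERVAL 30 DAY
GROUP BY minute
HAVING errors > 100
ORDER BY minute
"""
# Every minute that breached the threshold, with its blast radius; an agent
# can walk this result to line spikes up with deploys or alerts over time.
for minute, errors, hosts in client.query(sql).result_rows:
    print(minute, errors, hosts)
```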
This shift brings observability closer to the core data platform, making it foundational rather than peripheral and enabling richer insights into system behaviour, anomaly detection, and operational costs.
The wider context: data readiness and organisational success
The importance of foundational data capabilities is reflected across industry research. Analysts report that a majority of AI initiatives struggle to scale not because of model quality, but due to fragmented, inconsistent, or poorly governed data. Research popularised as the “GenAI Divide” suggests that as many as 95% of AI investments fail to move beyond experimentation when data quality and continuity are inadequate.
This challenge, and the response to it, is now influencing how enterprises prioritise infrastructure investments. Organisations that build robust, responsive, and governed data foundations are better positioned to scale AI workflows into mission‑critical operational systems.
The strategic imperative for data leaders
The shift toward AI‑driven systems is more than an upgrade cycle; it reflects a fundamental change in how data is consumed and acted upon. For data leaders, the practical starting point is to align infrastructure with real use cases: measure how existing systems perform under the pressure of real‑time, concurrent workloads and identify areas where simplicity and quality drive clear benefits.
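One lightweight way to begin that measurement is sketched below: replay a representative query at increasing concurrency and watch what happens to tail latency. The run_query() stub stands in for a real call to your own platform.

```python
# Sketch: measure p95 latency as concurrency grows. Replace run_query()
# with a real call against a representative query on your data platform.
from concurrent.futures import ThreadPoolExecutor
import statistics
import time

def run_query() -> float:
    start = time.perf_counter()
    time.sleep(0.05)  # stand-in for the real query round trip
    return time.perf_counter() - start

for concurrency in (1, 8, 64):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [pool.submit(run_query) for _ in range(concurrency * 4)]
        latencies = [f.result() for f in futures]
    p95 = statistics.quantiles(latencies, n=20)[18]  # 95th percentile
    print(f"concurrency={concurrency:>3}  p95={p95 * 1000:.1f} ms")
```

If p95 climbs sharply as concurrency rises, the platform is a candidate for the kind of re‑architecture described above.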
At its core, this transition is a structural decision — one that determines whether organisations make a manageable investment now or face a more costly and disruptive migration later.
About Arno van Driel, VP EMEA at ClickHouse
With more than two decades of experience in enterprise software and cloud technologies, Arno van Driel serves as vice president for EMEA at ClickHouse, where he leads regional go-to-market strategy across Europe, the Middle East, and Africa. He works with CIOs and data leaders to modernise analytical data platforms, enabling real-time insight across observability, customer analytics, and AI-driven use cases. Arno brings a pragmatic perspective on scaling data systems for performance, resilience, and cost control, and focuses on helping organisations turn fast-moving data into reliable, actionable intelligence.
About ClickHouse
ClickHouse is a global, open-source, column-oriented database management system that generates analytical data reports in real time using SQL queries. It is an Online Analytical Processing (OLAP) database designed for high-performance analytics and unprecedented concurrency.

