From hype to staying power in a world of geopolitical turmoil, new regulations and generative promises

In 2026, enthusiasm about (generative) AI in Dutch organizations is undiminished. At the same time, the tone has changed. Whereas the past few years were mainly characterized by exploration, experimentation and "seeing what can be done" with generative AI, the focus is now shifting to a much more fundamental question: what can it demonstrably deliver, at what risk, and under what conditions can we scale it up responsibly?

The AI hype has given way to a mature reality. Executives, CIOs and data professionals are discovering that successful deployment of AI is not primarily a technological issue, but a strategic and organizational task. It is no longer a question of whether you need to do something with AI, but where, why, under what preconditions, and who remains ultimately responsible.

In that force field, we believe three major developments are emerging that together define the playing field for Data & AI in 2026. We'd first like to take you through them at a high level. Later we will deepen these trends with statements, examples and our vision on these developments. But first: what do we see happening?

 

1. From experimentation to targeted and responsible deployment of (generative) AI 

Responsible AI use requires more than just technical solutions. It is about finding the right balance between innovation and diligence. The basis for this lies in three pillars: fairness and ethics, safety and transparency, and sustainable quality.

The first development we see is the shift from "AI as an end" to "AI as a means." Organizations have now experienced that generative AI is impressive in demos, but erratic in production. Chatbots that seemingly know everything suddenly turn out to be unreproducible, unexplainable and difficult to control in critical processes. 

This creates a sharp distinction between two worlds: 

  • AI as human assistant 

    Think of code assistants, text summarizers, search helpers and agents that do prep work. This is where the big productivity gains lie in 2026. The risks are manageable as long as humans remain ultimately responsible. 

  • AI as an engine of process automation 

    Once AI makes decisions in underwriting, compliance, scheduling or customer interaction, the game changes. Then robustness, reproducibility, explainability, bias control and auditability become crucial. "Human in the loop" becomes not a slogan but a design principle. Especially in high-risk contexts - such as healthcare, education and government - responsible use of AI becomes not a nice-to-have, but a prerequisite for trust, compliance and social legitimacy. 
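The "human in the loop" design principle can be made concrete in code. The sketch below is purely illustrative and not a prescribed implementation: the `Decision` fields, the confidence threshold and the routing rules are all assumptions. It shows one common pattern: the model may act autonomously only when its confidence is high and the case is not flagged as high-risk; everything else goes to a human reviewer, and every routing is logged for auditability.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str          # e.g. "approve" / "reject"
    confidence: float     # model's self-reported confidence, 0..1
    high_risk: bool       # domain flag: healthcare, compliance, ...

AUTO_THRESHOLD = 0.95     # hypothetical policy value, set per use case

def route(decision: Decision, audit_log: list) -> str:
    """Return 'auto' or 'human' and record the routing for audit."""
    if decision.high_risk or decision.confidence < AUTO_THRESHOLD:
        channel = "human"  # human remains ultimately responsible
    else:
        channel = "auto"
    audit_log.append((decision.outcome, decision.confidence, channel))
    return channel
```

The point of the sketch is that the gate is part of the architecture, not an afterthought: high-risk cases never reach the autonomous path, and the audit log makes every routing decision reproducible.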

AI offers unprecedented opportunities, but without a solid business case it remains an experiment. Many organizations struggle with persistent myths that lead to unrealistic expectations. It is important to understand that chatting with an AI is not the same as automating processes; the latter requires a different approach. Transparency in decisions (explainability) and clear governance are essential to maintain trust. And above all, ethics should never be sacrificed in the race to innovate.

Later, we will elaborate on the practical implications of this development. The key message: value is created not by the smartest model, but by the best-designed chain of people, data, model and process, guided by a clear vision and data strategy.


2. From borderless cloud to strategic choices about sovereignty and regulation

In parallel, the foundation under Data & AI is shifting. For years, "cloud-first" was taken for granted. In 2026, that is no longer dogma but a consideration. 

Geopolitical tensions, the CLOUD Act, European legislation and the advent of the AI Act are making organizations aware of one crucial question: where does our data reside, and which legal regime does it fall under? This choice is no longer a technical detail but a strategic decision. Data sovereignty has become a governance issue.

On top of that, the cost model of AI and data platforms is becoming increasingly burdensome. Generative models, vector databases, real-time pipelines and experiments at scale make cloud bills unpredictable. FinOps, hybrid architectures and even a reassessment of on-premise solutions are the logical consequence.

We will also deepen this trend later by zooming in on the danger of vendor lock-in, managing cloud costs with FinOps, a move back to on-premise, and the impact of international laws and regulations. The common thread: technological freedom is giving way to conscious dependencies. 


3. From pioneering to professionalizing the Data & AI organization

The third, often implicit but all-important development is the organizational maturation of the field itself. 

AI can no longer be "bolted on" by an innovation team or a standalone data science group. Especially in sectors with high social impact, but in fact in all organizations, this requires structural embedding of ethics, explainability, monitoring and compliance in the governance of Data & AI. In 2026, we see successful organizations investing in: 

    • Clear ownership and governance; 
    • Bridge roles such as the Data & AI translator; 
    • An AI-ready data model to make information analysis more accessible;
    • Architectural principles and central frameworks for low-code and no-code; 
    • Ethical decision-making that goes beyond legal box ticking; 
    • Policies around BYO-AI and shadow use; 
    • Integration of FinOps, security, privacy, compliance and model governance (fairness, explainability, monitoring) in the data domain.

In the follow-up articles you will read more about the need for translators, low-code swamps, chatting with your data, BYO-AI, the importance of ethics, shades of gray in regulations and breaking through black-and-white thinking. The message: Data & AI is becoming a regular, business-critical domain, with the same demands for professionalism as finance or risk.


From technology to direction: the choices for 2026 

Together, these three developments form the force field in which organizations must make their choices in 2026. Not as separate technological decisions, but as coherent issues about value, risk, responsibility, trust and direction. 

Curious? In the next article in this series, we take it a step further. Using thought-provoking statements and recognizable practical examples, we help you look beyond the hype and sharpen your view of what Data & AI really mean for your strategy, architecture, organization and governance.

We will discuss the choices that matter, the pitfalls we often encounter in practice, and the questions you must ask when deploying AI in high-risk applications. How do you guarantee fairness and explainability? How do you ensure sustainable quality over the long term? And how do you confidently build solutions that are not only innovative, but also responsible and future-proof?

From our broad view of the entire data value chain - from strategy to engineering, from analytics to responsible AI application - we share insights that help you make better decisions today for tomorrow.

 

If you work with Data & AI, you won't want to miss the following articles. Keep an eye on the website or sign up below to automatically receive follow-up articles and other news about Data & AI in your mailbox!
