The Dutch AI & Algorithms Report: a call to action.
On 5 March 2026, the Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP) published the sixth edition of the Rapportage AI & Algoritmes Nederland (RAN), providing a periodic overview of the effects and risks of algorithms and AI in the Netherlands. The AP's AI Impact Barometer has deteriorated since the last edition, and the AP makes clear – through its website and a radio appearance by the current chairman Aleid Wolfsen – that both government and private organisations need to step up their efforts. That said, the picture is not entirely bleak. The regulatory framework exists, the expectations are increasingly clear, and organisations that engage seriously with compliance now are well-positioned to get ahead of the curve.
The Barometer: A Governance Gap, Not a Governance Collapse
The AP, as coordinating supervisory authority on algorithms and AI, tracks nine indicators of AI governance health using a colour-coded framework. The Winter 2025/2026 barometer shows four pillars rated as making insufficient progress: (1) frameworks and powers for AI supervision; (2) harmonised and practically applicable standards; (3) registration and transparency of algorithms and AI systems; and (4) visibility of incidents and embedding of lessons learned.
It is worth noting that several of these shortcomings are within the government's own territory and that the AP is not entirely without fault here either. As the coordinating supervisory authority, the AP bears some responsibility for the pace at which national governance frameworks have developed. Pointing to red indicators in a barometer it largely designed, while enforcement capacity remains limited and the implementing legislation for the EU AI Act has yet to be adopted, raises legitimate questions about the balance between signalling urgency and delivering results. Organisations can be forgiven for finding the current regulatory environment somewhat uncertain.
That said, the underlying message is valid: AI deployment is accelerating, and governance is not keeping pace. For organisations, the practical takeaway is to focus on what is within their own control.
The AP finds rising risks in the use of AI
Governments and companies are investing heavily across the entire AI value chain: in the development of AI products and models, but also in the underlying infrastructure, including cloud services, chips, and energy supply. The AP identifies a shift in attention away from safety and values-driven development towards protecting and reinforcing competitive positions, warning of an exponential rise in the risks of AI.
This is a real tension, and it is playing out at European level as much as at national level. Many Member States, including the Netherlands, have missed the deadlines for establishing supervision under the EU AI Act. Civil society organisations have also expressed concerns about possible delays and the risks these pose for the level of protection in the EU.
For organisations, this context is actually somewhat reassuring: the regulatory landscape is still being shaped, and there remains a genuine window of opportunity to engage with standard-setting processes and provide input on implementing legislation. Early engagement with compliance is not just a legal obligation; it is a competitive advantage.
AI in Recruitment
The most extensive thematic chapter in the RAN covers AI in recruitment and selection. The AP states that it will focus on clarifying requirements in the sector and on making applicants more aware of their rights in this area. Rather than a cause for concern, organisations should read the chapter as a clear map of what good practice looks like and where the gaps currently are.
The report identifies that bias can accumulate across all phases of the recruitment process: from AI-drafted vacancy texts and algorithmic vacancy placement through to CV screening and assessment tools. Although the bias in any individual phase may be minimal, the AP warns that it can build up into significant bias across the entire chain.
The good news is that the regulatory requirements, while demanding, are clear. Under the EU AI Act, AI systems used for recruiting and selecting persons are classified as high-risk, with full compliance requirements applying from August 2026. This includes requirements on human oversight, data quality, transparency, and the ability to explain outcomes to candidates. Providers and deployers of such systems have time to prepare, but that preparation needs to start now. Similarly, emotion-recognition systems used in the workplace have been prohibited since 2 February 2025. If your organisation is using such tools, addressing this is straightforward: stop, and document that you have done so.
The AI literacy requirement, also in force since 2 February 2025, requires organisations deploying AI to ensure that relevant employees have sufficient knowledge and understanding of the systems they use. This is achievable through targeted training programmes, or training delivered by external legal counsel, and it simultaneously reduces legal risk and improves the quality of human oversight.
The AP's demands of the new cabinet
The AP makes specific demands of the new cabinet, calling for swift implementation of the AI Act. This means adopting the Dutch implementing legislation, appointing supervisory authorities, structuring funding for supervision, and providing clarity on the application of the rules. In addition, the AP believes that the Netherlands should urge Europe to swiftly conclude the discussions on postponing and simplifying the regulations.
The AP's calls for urgency have been consistent across several editions of the RAN — yet the pace of progress on implementing legislation and supervisory infrastructure has remained slow. The new cabinet will need to weigh these demands against a crowded legislative agenda. In the meantime, organisations cannot afford to wait for perfect regulatory clarity before acting.
What this means for organisations
For organisations deploying or considering AI, the practical action points are straightforward:
- Audit your AI systems. Identify which systems are likely to qualify as high-risk under the EU AI Act and begin preparing for the August 2026 compliance deadline.
- Review your classification decisions. Registering an AI system as a non-AI algorithm to avoid high-risk obligations is not a safe strategy. Supervisory attention to misclassification is increasing.
- Address prohibited tools. If your organisation still uses emotion-recognition systems in the workplace, discontinue them and document that you have done so.
- Invest in AI literacy. The legal obligation to ensure AI-literate staff is also an opportunity to build internal governance capacity that reduces risk across the board.
Conclusion
The AP is right that the pace of governance development needs to accelerate, though it might usefully reflect on its own role in that acceleration. With implementing legislation still in development and standards still being shaped, there is a genuine opportunity for organisations to help shape the framework by which they will ultimately be governed, and to gain a competitive advantage by doing so.