"Moving Forward Responsibly”, the Dutch DPA's vision on Generative AI

Article
NL Law

The Dutch Data Protection Authority (Autoriteit Persoonsgegevens, or AP) has fired a regulatory shot across the bow of generative AI. On February 4, the regulator published its vision document Verantwoord Vooruit: AP-visie op generatieve AI ("Moving Forward Responsibly: AP's Vision on Generative AI"), setting out how organisations can develop and deploy generative artificial intelligence lawfully under the GDPR (AVG in Dutch).

The document is more than an intellectual exercise. It signals the AP's regulatory priorities for AI chatbots, image generators, and other general-purpose AI models. The regulator's message is clear: innovation is welcome, but not at the expense of fundamental rights. Below, we unpack the AP's key insights and explain what businesses must do now to align with the AP's expectations on purpose limitation, risk assessment, and AI governance.

Key safeguards expected by the AP

To guide responsible AI deployment, the AP proposes a set of technical and organisational safeguards closely aligned with GDPR principles and emerging best practice:

  1. Transparent system design and operation
    Transparency is a leitmotif of the AP's vision. AI developers and providers should be transparent, ensuring that generative AI applications are clearly recognisable as such and lend themselves to further scrutiny where desired, for example by sharing (the results of) assessment criteria on request. This may include providing documentation (such as "model cards") that explains the AI's capabilities, limitations, and potential biases; a minimal illustration follows this list. Transparency fosters trust and is essential for accountability – and for complying with GDPR obligations around information provision and fairness.
  2. Risk assessments and mitigation
    Before and during AI deployment, conduct thorough risk assessments – such as Data Protection Impact Assessments (DPIAs) and Fundamental Rights Impact Assessments (FRIAs) under the AI Act – and implement measures to mitigate identified risks. The AP expects organisations to map out the privacy, bias, and safety risks that generative AI may pose, and to address them proactively. This echoes both the GDPR's "privacy by design" mandate and the upcoming EU AI Act's requirements for risk management.
  3. Clear purpose limitation and legal basis
    Organisations must define a specific, explicit purpose for any processing of personal data by AI, identify a valid lawful basis under the GDPR for that processing, and monitor adherence to that purpose throughout the processing. The AP expects organisations to articulate why data is being processed (e.g., "to train a model that performs function X") and to respect the principle of purpose limitation. Vague, open-ended data collection for AI projects will not pass muster.
  4. Controlled environments and robust data governance
    The AP urges organisations to maintain control over the environments in which AI systems operate and the data they process. Practically, this could mean hosting models on secure, EU-based infrastructure, enforcing strict access controls, and applying comprehensive data governance policies to monitor AI usage. By containing generative AI within well-governed IT environments, businesses can prevent unauthorised access and data breaches, and ensure compliance with data residency and security requirements.
  5. Lawfulness from development through deployment
    The AP's report makes it clear that both the development of AI models and their deployment in applications must comply with the GDPR. This means AI developers need a legitimate basis to collect and use personal data for training, and AI service providers must ensure any personal data processed by their generative AI (for instance, user inputs or AI-generated content that includes personal data) is handled lawfully.
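
By way of illustration only – the AP's document does not prescribe a format – a minimal sketch of the kind of "model card" mentioned under safeguard 1 might look as follows, here expressed as a small Python data structure. All field names and values are hypothetical assumptions, not regulatory requirements.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """Minimal transparency record for a generative AI model (illustrative only)."""
    model_name: str
    version: str
    intended_purpose: str                  # supports GDPR purpose limitation
    capabilities: list[str] = field(default_factory=list)
    limitations: list[str] = field(default_factory=list)
    known_biases: list[str] = field(default_factory=list)
    training_data_summary: str = ""        # high-level description, no personal data
    contact: str = ""                      # channel for questions and complaints

card = ModelCard(
    model_name="support-chatbot",
    version="1.2.0",
    intended_purpose="Answer customer questions about product X",
    capabilities=["Dutch- and English-language Q&A"],
    limitations=["May produce inaccurate answers; not intended for legal advice"],
    known_biases=["Under-represents non-EU product variants"],
    training_data_summary="Public product documentation and anonymised FAQs",
    contact="privacy@example.com",
)

# Share the card on request, in line with the AP's transparency expectation.
print(json.dumps(asdict(card), indent=2))
```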

By implementing these safeguards, organisations can demonstrate that they operate within the bounds of the GDPR (and the EU AI Act). The AP's vision is one of "lawful AI by design" – where privacy and compliance measures are baked into the AI lifecycle from development to deployment and ongoing use.

Expected responsibilities: purpose limitation, risk assessment, and AI governance

The vision document repeatedly underscores that organisations bear responsibility for ensuring their use of generative AI is purpose-specific, risk-managed, and well-governed. The AP expects businesses to establish robust internal governance spanning the entire AI lifecycle. In practical terms, this means:

  • Think first, deploy second. Clearly define what purpose the AI serves and process personal data only as necessary for that purpose (adhering to the GDPR's purpose limitation principle).
  • Conduct Data Protection Impact Assessments (DPIAs) or equivalent risk assessments, such as the Fundamental Rights Impact Assessment (FRIA), to identify how the AI might affect individuals' rights or pose ethical issues, and address those risks up front. This might involve setting guardrails on AI outputs (to prevent disinformation or discrimination) or deciding against deploying AI in high-risk scenarios without additional safeguards.
  • Maintain continuous governance across the AI's lifecycle – from design and training through deployment to monitoring and updates. Governance encompasses documenting decisions, establishing oversight committees or AI councils, and being prepared to explain and justify the AI's functioning (linking to GDPR requirements such as transparency and the "right to an explanation" in automated decisions).
  • Demonstrate accountability to regulators. Organisations should keep records of AI systems, audit logs of AI outputs, and clear policies on acceptable AI use (a minimal logging sketch follows this list). The AP's vision also touches on user-facing measures – for instance, ensuring end-users know when they are dealing with AI and providing channels to request human intervention or contest important AI-driven decisions.
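
To make the record-keeping point concrete, the sketch below shows one possible shape for an audit log of generative AI outputs. It is a minimal illustration under our own assumptions – the field names, the pseudonymisation approach, and the choice to log sizes rather than content are not drawn from the AP's document.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

# JSON-lines audit log written to a plain file; a production deployment
# would likely use an append-only or tamper-evident store instead.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO, format="%(message)s")

def pseudonymise(user_id: str) -> str:
    """Hash the user identifier so the log holds no direct identifiers."""
    return hashlib.sha256(user_id.encode()).hexdigest()[:16]

def log_ai_output(user_id: str, model: str, purpose: str, prompt: str, output: str) -> None:
    """Record one generative AI interaction for later audit or contestation."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": pseudonymise(user_id),
        "model": model,
        "purpose": purpose,             # ties each use back to a defined purpose
        "prompt_chars": len(prompt),    # log sizes, not content, to limit data held
        "output_chars": len(output),
        "human_review_available": True, # a channel for human intervention exists
    }
    logging.info(json.dumps(record))

log_ai_output("customer-42", "support-chatbot-1.2.0",
              "customer support for product X",
              "How do I reset my device?",
              "Hold the power button for 10 seconds.")
```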

In summary, the AP expects a holistic governance approach: clear purpose definition, rigorous risk assessment, transparent operations, and ongoing control. Businesses that embed these practices will be better positioned to satisfy both the AP's expectations and the forthcoming obligations of the EU AI Act (which will impose requirements on high-risk AI systems covering risk management, transparency, human oversight, and more).

Upcoming AP guidance and coordination under the AI Act

To help steer generative AI in the right direction, the AP is rolling out further guidance and tools in 2026:

  • Final guidance on generative AI and data protection: the AP will publish final guidance that will likely answer thorny legal questions around training data, the use of pre-trained models, and the application of GDPR principles to AI – providing much-needed clarity for companies building or deploying AI in the Netherlands.
  • AI Helpdesk (AI Loket): The AP is launching an AI helpdesk for generative AI, allowing developers and users to pose questions and share concerns. This support desk will give businesses a direct line to the regulator for advice on difficult issues, whilst helping the AP keep its finger on the pulse of real-world AI developments.
  • AI Regulatory Sandbox: In line with the EU AI Act's requirement that each Member State establish at least one regulatory sandbox by August 2026, the AP – together with other regulatory authorities such as the Dutch Authority for Digital Infrastructure (Rijksinspectie Digitale Infrastructuur, or "RDI") – has proposed a Dutch AI sandbox to foster compliant innovation. In such a sandbox, AI developers can experiment with new technologies under regulatory guidance, receiving feedback on compliance during the development phase. Regulators, in turn, gain insight into emerging AI trends and can refine appropriate safeguards before AI systems reach the mass market.
  • Coordination role under the EU AI Act: the AP is poised to assume a coordination role under the EU AI Act. We can expect the AP to lead or heavily participate in the national AI supervisory body once the AI Act's supervisory provisions apply, ensuring that privacy and data protection considerations are fully integrated into AI oversight.

What can we expect?

The AP's vision on generative AI heralds an era of more assertive and structured oversight of AI technologies. Businesses can expect the AP to follow up with concrete guidelines in 2026, for example through its periodic risk reports, clarifying how to develop and deploy generative AI in compliance with the GDPR. We also anticipate increased engagement from the AP – through its AI helpdesk, industry roundtables, and the regulatory sandbox – to coach and, where necessary, correct organisations on AI best practices.

On the enforcement front, the AP has identified AI as a top priority for 2026 and beyond. We may see the AP scrutinising deployments of generative AI more closely – for instance, investigating how a company uses a chatbot service or whether a model was trained on EU personal data without a valid legal basis. The AP's 2026–2028 strategy mentions stepping up interventions against mass surveillance and risky AI, suggesting that non-compliant AI uses (especially those with significant societal impact) will attract regulatory action. At the same time, the AP is positioning itself as a guide for responsible AI use, so businesses can expect not only strictness but also support in the form of templates (risk assessment checklists, model transparency frameworks) and forums to discuss AI governance.