NL — Country Profile

The Netherlands

35 TOTAL · 7 OFFICIAL SOURCES · 13 TOPIC AREAS

Law / Act: 2 · Policy / Guidance: 2 · National Strategy: 2 · Standard / Framework: 2 · Working Paper: 6 · Court Case: 6 · News / Press: 1 · Other: 14

Topic areas: Agentic AI · Chips & Data Centres · Compute · Cybersecurity · Data Privacy & Protection · Defense & National Security · Generative AI · Health & Life Sciences · Judicial & Law Enforcement · Liability & Accountability · National Strategy · Online Safety & Child Protection · Sandbox
Court Case · ✓ Official

ECLI:NL:RBNNE:2025:4814

Lawyer used ChatGPT in proceedings before the Rechtbank Noord-Nederland. Fabricated: Case Law | The pleading note referred to a Hoge Raad judgment cited by the lawyer as 'HR 19 December 2003', ECLI:NL:HR:2003:AL8444 (NJ 2004/75); the court noted a date discrepancy (5 December 2003), that the ECLI could not be traced, and that the asserted legal rule was absent. Outcome: ..

Court: Rechtbank Noord-Nederland · Party: Lawyer · Tool: ChatGPT
19 November 2025 · Judicial & Law Enforcement · Generative AI · Liability & Accountability
↗ Link available
Court Case · ✓ Official

ECLI:NL:RBGEL:2025:9423

Lawyer appeared before the Rechtbank Gelderland. Fabricated: Case Law | Plaintiff's counsel cited several CRvB rulings with ECLI numbers that the court could not locate; some cited statements appear not to exist and were therefore rejected. Outcome: Court found several cited rulings non-existent or irrelevant, rejected reliance on that case law, and dismissed the appeal.

Court: Rechtbank Gelderland · Party: Lawyer
6 November 2025 · Judicial & Law Enforcement · Generative AI · Liability & Accountability
↗ Link available
Court Case · ✓ Official

The Boys of Rockanje v. Dwaard

Lawyer appeared before the Rotterdam D. Fabricated: Case Law | The court noted that the party cited fabricated ECLI numbers.

Court: Rotterdam D. · Party: Lawyer
27 August 2025 · Judicial & Law Enforcement · Generative AI · Liability & Accountability
↗ Link available
Court Case · ✓ Official

X BV in Z v. Tax Inspector

Lawyer used ChatGPT in proceedings before The Hague CA. Misrepresented: Exhibits & Submissions | Party submitted ChatGPT-derived statements; the court disregarded them because the underlying question/prompt was unknown, so the content lacked reliable provenance. Outcome: Arguments rejected; no formal sanction but severe judicial criticism.

Court: The Hague CA · Party: Lawyer · Tool: ChatGPT
26 June 2024 · Judicial & Law Enforcement · Generative AI · Liability & Accountability
↗ Link available
Court Case · ✓ Official

X BV in Z v. Tax Inspector

Lawyer used ChatGPT in proceedings before The Hague CA. Outcome: Arguments discounted; no formal sanction but strong judicial criticism.

Court: The Hague CA · Party: Lawyer · Tool: ChatGPT
5 March 2024 · Judicial & Law Enforcement · Generative AI · Liability & Accountability
↗ Link available
National Strategy

Dutch Government unveils $82m AI Hub Plan in Groningen

The Dutch government has announced a €70 million (approximately $82 million) investment to build a national AI hub in the city of Groningen. The initiative, dubbed the "AI Factory", is expected to integrate research, education, and enterprise applications into a single facility, aimed at advancing the Netherlands’ role in the European AI landscape. The facility will be built on the site of a former Niemeyer tobacco factory, symbolically repurposing an industrial relic into a hub for future technologies. It will house a supercomputer designed to train large-scale AI models and support a wide range of public and private AI initiatives. Construction is projected to start in 2026.

29 June 2025 · National Strategy
↗ Link available
Law / Act

The Netherlands to take sector-by-sector approach to EU AI Act enforcement

It is reported that the Netherlands is taking a sector-by-sector, iterative approach to enforcing the EU AI Act, according to Sven Stevenson of the Dutch data protection authority. Stevenson emphasised (1) that enforcement will evolve, initially concentrating on prohibited AI practices and gradually addressing high-risk systems; (2) that regulators need to engage directly with AI use cases and treat guidance as living documents; and (3) that the Dutch authorities ultimately aim to support developers in understanding and meeting compliance obligations.

2 June 2025 · Data Privacy & Protection
↗ Link available · Secondary source
Other

DeepSeek banned from civil servants' computers

It is reported that digitalisation minister Zsolt Szabó has banned Dutch civil servants from using the Chinese AI service DeepSeek due to concerns over data security.

6 February 2025 · Data Privacy & Protection · Cybersecurity · Generative AI
↗ Link available
Court Case

Netherlands Court of Audit releases findings of investigation into government use of AI

The Netherlands Court of Audit has released a report about its investigation into the use of AI by central government. The reported findings included (1) AI is not yet widely used within central government: most systems are still experimental, and a clear majority of organisations (88%) use no more than 3 systems; (2) government organisations have not weighed the opportunities of more than half their AI systems against the risks; (3) there is an incentive for organisations to classify their systems as "low risk", and only 5% of the systems have been published in the Algorithm Register; and (4) the organisations that use AI the most are the police and the Employee Insurance Agency, with 23 and 10 systems respectively.

16 October 2024 · Sandbox · Data Privacy & Protection
↗ Link available
Other

The Netherlands has partnered with UNESCO to address AI

The Netherlands is the latest country to partner with UNESCO's Social and Human Sciences Sector on AI regulation. The partnership between UNESCO and the Dutch Authority for Digital Infrastructure will analyse design processes for AI that comply with EU regulation. To that end, UNESCO is preparing a report for the Dutch authorities that will outline a set of best practices, and will organise institutional training sessions on AI.

6 October 2023 · Data Privacy & Protection
↗ Link available
Other

Dutch AP warns autonomous AI agents expose users to cyber dangers

The Dutch Data Protection Authority (AP) has issued a formal warning against the use of OpenClaw and similar autonomous AI agents due to significant cybersecurity risks, including data breaches and account takeovers. These open-source systems often require full access to a user's computer and programs to function, creating a "Trojan Horse" scenario where the assistant can execute tasks without explicit consent. Security researchers have identified that approximately 20% of available plugins on the platform contain malware designed to steal credentials or cryptocurrency, while the framework itself is vulnerable to indirect prompt injection through hidden commands in websites and messages. Furthermore, critical vulnerabilities in the software allow for remote code execution and the exposure of sensitive files, such as financial records and identity documents. In response to these threats, the AP has urged both organizations and individuals to avoid using such experimental systems on devices containing private data and has called for these autonomous agents to be strictly regulated under the EU AI Act.

12 February 2026 · Agentic AI · Data Privacy & Protection · Cybersecurity
↗ Link available
National Strategy

Dutch AP presents vision for values-based generative AI

The Autoriteit Persoonsgegevens (AP) has presented a strategic vision on generative AI that emphasises the necessity of safe, responsible, and rights-compliant deployment within the Netherlands. As the market experiences a surge in AI applications primarily from US technology firms, the AP warns that increasing European dependence threatens digital autonomy amidst shifting geopolitical landscapes. With adoption rates reaching 23% among the general Dutch population—and significantly higher among youth—the AP highlights urgent societal risks, including the emergence of non-consensual deepfake imagery, unreliable mental health chatbots, and the premature integration of AI into critical sectors without adequate impact assessments. To mitigate these threats, the AP rejects undesirable "Wild West," "bunker," or "missed opportunity" future scenarios in favour of a "values at work" approach, which advocates for innovation grounded in the GDPR and the AI Act. The AP calls for organizational transparency, rigorous risk assessment, and a decentralized development landscape to ensure that the rapid integration of AI into education, healthcare, and government remains a controlled advancement of democracy rather than an unregulated social experiment.

4 February 2026 · Generative AI · Data Privacy & Protection
↗ Link available
Other

Dutch DPA flags risks in EU proposals on AI and data rules

In its media statement, the Dutch Data Protection Authority (DPA) said that while it supports the European Commission's ambition to simplify digital regulations through its official Omnibus proposal to amend laws like the GDPR and the EU AI Act, this must not come at the expense of human rights, particularly the right of people to decide what happens to their personal data. The DPA stressed that due diligence and care must be paramount, arguing that such crucial laws should not be hastily amended without first investigating the profound consequences for citizens, businesses, and regulators, and emphasized that legal certainty must be maintained with clear and enforceable rules for the EU. It concluded that innovation and legal protection can coexist, but new applications involving data and AI require clear safeguards to ensure people retain control over their data and respect for privacy and human autonomy.

21 November 2025 · Data Privacy & Protection · Generative AI
↗ Link available
Other

Data Protection Authority issues a report examining risks of using AI chatbots as voting aids for elections

The Netherlands Data Protection Authority (DPA) has issued a report warning against the use of AI chatbots as voting aids due to their propensity for delivering strongly biased and polarized political advice, which dangerously misrepresents the Dutch fifteen-party system. In an experiment using fictional voter profiles, the DPA found that two parties, GroenLinks-PvdA and PVV, overwhelmingly dominated first-place recommendations, effectively funneling left-leaning voters toward the former and right-leaning voters toward the latter, thus oversimplifying the political landscape while rarely suggesting other established parties. The DPA stresses that, unlike traditional voting aids, chatbots lack transparency, neutrality, and verifiability due to opaque biases embedded in their underlying models, and despite providers' claimed safeguards, they offered voting advice in nearly every test query. Classifying AI systems that influence elections as high-risk under the EU AI Act, the DPA urges developers to implement effective safeguards and advises citizens against using these tools for electoral guidance.

21 October 2025 · Data Privacy & Protection · Generative AI · Online Safety & Child Protection
↗ Link available
Other

Dutch AP and ACM warn companies against overusing AI, citing ‘one of the biggest annoyances’ people are facing

The Dutch Data Protection Authority (AP) and the Authority for Consumers & Markets (ACM) have emphasised that organisations using chatbots in their customer service must always give customers the option to speak with a human representative. They also stress the importance of transparency, requiring organisations to clearly indicate when a chatbot is being used and to ensure that these bots do not provide incorrect, evasive, or misleading information. The regulators highlight existing consumer laws mandating direct and effective communication, noting that many companies fail to meet these standards. With the EU AI Act set to enforce new transparency obligations from 2 August 2025, the AP and ACM are calling for additional clear guidelines on the design of AI chatbots to ensure they are fair, recognisable, and accessible. This call to action is driven by a significant increase in complaints about chatbots, including poor or incorrect responses and the difficulty consumers face in reaching human agents when needed. Additionally, the regulators warn of privacy and security risks associated with chatbots, as they can be exploited to access confidential information, potentially leading to data breaches.

2 October 2025 · Data Privacy & Protection · Cybersecurity · Generative AI
↗ Link available
Working Paper

Dutch DPA publishes report on AI emotion recognition systems

The Dutch Data Protection Authority (AP) has published its fifth edition report on AI and algorithms, highlighting the growing use yet contested effectiveness of AI emotion recognition systems across sectors, including customer service, healthcare, and wearables. The report warns that these systems risk infringing fundamental rights, including privacy and autonomy, and stresses that organisations must critically assess them, deploy them transparently, and secure consent when using them. The report highlighted that since February 2025, AI emotion recognition has been banned in education and workplaces in the Netherlands, with broader regulation and political debate ongoing. The AP also emphasises the need for mature AI governance through mandatory algorithm registration, audits, and bias reduction. It also noted progress on harmonised AI standards and the increasing role of AI in national strategies amid evolving European AI regulation frameworks.

15 July 2025 · Data Privacy & Protection · Health & Life Sciences
↗ Link available
Working Paper

Dutch DPA launches consultation on emotion AI

The Dutch Data Protection Authority has opened a public consultation on the societal use of emotion AI to inform its upcoming AI and Algorithm Risks Report. The consultation applies to emotion recognition technologies, including those embedded in consumer devices or used during customer service interactions, to interpret emotional states using biometric data (e.g. facial expressions or heart rate). The consultation aims to explore the practical risks and applications of such systems, excluding those in workplace or educational settings, where their use has been prohibited since 2 February 2025. The consultation closes on 6 May 2025.

18 April 2025 · Data Privacy & Protection · Cybersecurity
↗ Link available
Working Paper

DPA opens consultation on tools for meaningful human intervention in algorithmic decision-making

The Dutch Data Protection Authority has initiated a public consultation on tools for meaningful human intervention in algorithmic decision-making. The outline focuses on meaningful human intervention in automated decision-making, distinguishing between substantive and symbolic human oversight under the GDPR and the Law Enforcement Directive (LED). The consultation closes on 6 April 2025.

6 March 2025 · Data Privacy & Protection · Agentic AI
↗ Link available
Working Paper

Dutch DPA launches consultation on prohibited AI systems used for criminal risk assessment

The Dutch Data Protection Authority has launched a consultation in relation to the EU AI Act prohibition of AI systems that assess an individual’s likelihood of committing a criminal offence solely based on profiling or personality traits. The Authority seeks input from stakeholders to clarify the enforcement and interpretation of this prohibition. The consultation closes on 3 April 2025.

20 February 2025 · Data Privacy & Protection
↗ Link available
News / Press

Dutch privacy watchdog to launch investigation into China's DeepSeek AI

It is reported that the Netherlands' privacy watchdog (AP) will launch an investigation into Chinese AI firm DeepSeek's data collection practices and urged Dutch users to exercise caution with the company's software.

1 February 2025 · Data Privacy & Protection
↗ Link available
Other

Dutch DPA calls for input on prohibition on AI systems for social scoring

The Dutch Data Protection Authority (DPA) has called for input on the EU AI Act prohibition of AI systems used for social scoring. The call ends on 7 February 2025.

18 December 2024 · Data Privacy & Protection
↗ Link available
Other

Dutch Data Protection Authority reports on AI system risks and necessary design requirements

The Dutch Data Protection Authority (AP) has issued final advice on a report on the risks associated with AI systems and the necessary design requirements to mitigate these risks in the Netherlands. The report (1) notes that current democratic control of AI systems is insufficient and that enhanced oversight and governance are required, especially in local governments, which use the most AI systems; and (2) recommends updating the national AI strategy to address modern AI challenges, increasing coordination among stakeholders and taking timely action on emerging risks for responsible AI development and deployment.

7 November 2024 · Data Privacy & Protection · Cybersecurity
↗ Link available
Other

Dutch DPA calls for input on prohibition of AI systems for emotion recognition in the workplace and education institutions

The Dutch Data Protection Authority (DPA) has called for input on the EU AI Act prohibition of AI systems for emotion recognition in the workplace and education institutions. At a later stage, input will also be sought on other prohibitions. The call ends on 17 November 2024.

31 October 2024 · Data Privacy & Protection
↗ Link available
Policy / Guidance

Dutch DPA releases guidance on manipulative, deceptive and exploitative AI systems

The Dutch Data Protection Authority has released guidance on manipulative, deceptive and exploitative AI systems, subject to public input. For background, Article 5 of the AI Act prohibits certain AI systems posing an unacceptable level of risk, a provision which enters into force on 2 February 2025. The DPA is responsible for setting out criteria for the application of these prohibitions, namely those on manipulative and deceptive systems and on exploitative systems. The guidance provides definitions of the different elements and criteria for these prohibitions. The consultation closes on 17 November 2024.

27 September 2024 · Data Privacy & Protection · Online Safety & Child Protection
↗ Link available
Other

Dutch DPA publishes Artificial Intelligence and Algorithmic Risks Report Summer 2024

The Dutch Data Protection Authority has published the "Artificial Intelligence and Algorithmic Risks Report, Summer 2024". The Report (1) highlights the growing integration of AI in society and emphasises the need for vigilance from stakeholders due to challenges in assessing AI control and risks; (2) addresses a number of possible AI risks, including inadequate oversight and discrimination risks in profiling; (3) points to possible measures, including AI literacy, mandatory algorithm registration, and improved regulation; and (4) discusses the potential impacts of the implementation of the EU AI Act.

11 September 2024 · Data Privacy & Protection
↗ Link available
Other

Dutch DPA imposes a fine on Clearview because of illegal data collection for facial recognition

The Dutch Data Protection Authority (Dutch DPA) has imposed a fine of €30.5 million on Clearview AI for a range of breaches of the GDPR after confirming that its facial recognition database contained images of Dutch citizens. The DPA has warned it has ordered an additional penalty of up to €5.1 million that will be levied on Clearview AI for continued non-compliance. The DPA has also warned Dutch organisations to not use Clearview AI (which is now prohibited), with hefty fines for non-compliance.

3 September 2024 · Data Privacy & Protection
↗ Link available
Working Paper

Data Protection Authority issues report on design requirements to mitigate risks in AI

The Dutch Data Protection Authority (AP) has issued a report on the risks associated with AI systems and the necessary design requirements to mitigate these risks in the Netherlands. The AP’s report notes: (1) the need for careful deployment of AI systems due to the potential for incidents, (2) the importance of public awareness and vigilance regarding AI risks, (3) the necessity for organizations to understand and manage AI risks before implementation, and (4) the recommendation for algorithm registration by government and semi-public organizations to ensure transparency and accountability.

18 July 2024 · Data Privacy & Protection · Cybersecurity
↗ Link available
Policy / Guidance

Dutch DPA publishes guidance on facial recognition

The Data Protection Authority (DPA) has published guidance on the legal framework for facial recognition. The DPA states that facial recognition is generally prohibited, with a few exceptions (e.g. personal or domestic purposes). The guidance also provides clarity on how GDPR rules on biometric data apply to facial recognition technology.

2 May 2024 · Data Privacy & Protection
↗ Link available
Standard / Framework

Dutch Data Protection Authority publishes guidelines on data scraping

The Dutch data protection authority has published guidelines for data scraping by private individuals and organizations. The guidelines differentiate data scraping from a search engine and apply to both data scraping and web crawling. Organizations processing sensitive personal data must still meet GDPR requirements for processing such data, regardless of the purpose for processing it.

1 May 2024 · Data Privacy & Protection
↗ Link available · Secondary source
Working Paper

Data Protection Authority inquiry into AI and algorithm risks in democratic processes

The Data Protection Authority (DPA) in the Netherlands opened a consultation on AI and algorithm risks in democratic processes until 12 April 2024. The consultation will feed into an upcoming report on AI and algorithm risks in the Netherlands.

15 March 2024 · Data Privacy & Protection
↗ Link available
Other

Second AI and Algorithmic Risks Report released

The Dutch Data Protection Authority released its 'second AI and Algorithmic Risks Report' which highlights the urgent need for better risk management and incident monitoring. The report recommends a comprehensive strategy (a national master plan) that includes human control and oversight, secure applications and systems, and strict rules to ensure that organisations are in control.

18 January 2024 · Data Privacy & Protection · Cybersecurity
↗ Link available
Other · ✓ Official

US introduces Pax Silica Initiative (with coalition of other countries)

The US Department of State has launched the Pax Silica Initiative, a coalition of countries to build a "secure, prosperous, and innovation driven silicon supply chain—from critical minerals and energy inputs to advanced manufacturing, semiconductors, AI infrastructure, and logistics". The initiative aims to reduce coercive dependencies, protect the materials and capabilities foundational to AI, and ensure aligned nations can develop and deploy transformative technologies at scale. The inaugural Pax Silica Summit convenes counterparts from Japan, the Republic of Korea, Singapore, the Netherlands, the United Kingdom, Israel, the United Arab Emirates, and Australia. Countries will partner on securing strategic stacks of the global technology supply chain, including, but not limited to: software applications and platforms, frontier foundation models, information connectivity and network infrastructure, compute and semiconductors, advanced manufacturing, transportation logistics, minerals refining and processing, and energy. Countries affirmed a shared commitment to: (1) pursue projects to jointly address AI supply chain opportunities and vulnerabilities in priority critical minerals, semiconductor design, fabrication and packaging, logistics and transportation, compute, and energy grids and power generation; (2) pursue new joint ventures and strategic co-investment opportunities; (3) protect sensitive technologies and critical infrastructure from undue access or control by countries of concern; and (4) build trusted technology ecosystems, including ICT systems, fiber-optic cables, data centers, and foundational models and applications.

11 December 2025 · Compute · Chips & Data Centres
↗ Link available
Law / Act

All EU countries back Dutch coalition for Chips Act 2.0

It is reported that all 27 EU member states have endorsed the Semicon Coalition's declaration for a revised Chips Act 2.0, presented by Dutch Minister Vincent Karremans to the European Commission in Brussels. This initiative, initially launched in March 2025 by the Netherlands and eight other countries, outlines five strategic priorities: enhancing cooperation within the semiconductor ecosystem, aligning and accelerating investment approaches, developing a robust talent pipeline, promoting sustainable production practices, and strengthening international partnerships while maintaining European strategic autonomy. The coalition aims to address previous criticisms of the 2023 European Chips Act by fostering a more focused and efficient strategy to bolster Europe's semiconductor industry.

29 September 2025 · National Strategy
↗ Link available · Secondary source
Standard / Framework

Data protection authorities adopted joint statement on building trustworthy data governance frameworks to encourage development of innovative and privacy-protecting AI

Twenty (20) data protection authorities, including those of Australia, Belgium, Canada, France, Germany, Hong Kong, Ireland, Italy, Korea, the Netherlands, New Zealand, Luxembourg, Spain, and the UK, adopted a joint statement on building trustworthy data governance frameworks to encourage the development of innovative and privacy-protecting AI. The statement recognises the opportunities and risks of AI, including discrimination, misinformation, and hallucination from inappropriate data use, and stresses embedding privacy by design, strong governance, and transparency. The statement commits to clarifying lawful grounds for AI training data and exchanging information on proportionate safety measures. It also focuses on monitoring technical and societal impacts with contributions from non-governmental organisations, public authorities, academia, and businesses, and on reducing legal uncertainties through regulatory sandboxes and best practice sharing.

17 September 2025 · Data Privacy & Protection · Generative AI · Sandbox
↗ Link available · Secondary source
Other · ✓ Official

UN General Assembly adopts Resolution A/C.1/79/L.43 on military AI, as proposed by the Netherlands and South Korea

The United Nations (UN) General Assembly First Committee has adopted Resolution A/C.1/79/L.43 which outlines a framework for international standards governing AI in the military domain, including its procurement, operation and decommissioning. The resolution, proposed by the Netherlands and South Korea, addresses risks such as accountability for AI errors, trust in AI decision-making, and the prevention of an AI-driven arms race.

6 November 2024 · Defense & National Security
↗ Link available