AI’s Arrival in Dutch Courts · European Law Blog

Artificial intelligence tools are making waves across the legal sector. ChatGPT in particular is all the hype, with debates still ongoing about its potential role in courtrooms: from assisting judges in drafting opinions, to lawyers relying on AI-generated arguments, and even parties submitting ChatGPT-produced evidence. As a report on Verfassungsblog suggests, the “robot judge” is already here, with Colombian judges using ChatGPT to write full verdicts. In the UK, appeal judge Lord Justice Birss described ChatGPT as “jolly useful” for providing summaries of an area of law. Meanwhile, in China, AI is embraced in courtrooms, with the “Little Sage” (小智) system handling entire small claims proceedings.

In the Netherlands, ChatGPT made its judicial entrance when a lower court judge caused controversy by relying on information provided by the chatbot in a neighbour dispute over solar panels. The incident triggered significant discussion among Dutch lawyers about the impact of AI on litigants’ rights, including the right to be heard and party autonomy. However, several other judgments have also mentioned or discussed ChatGPT.

In this blog post, I will look at all six published Dutch verdicts referencing ChatGPT use, whether by the judge or by litigants, and explore whether there is any common ground in how AI is approached. I will also sketch the EU-law context surrounding AI use in courts, and consider the implications of this half-dozen rulings for current efforts by the Dutch judiciary to regulate the use of AI.

ChatGPT in Court: Promises and Pitfalls

Before delving into the specific judgments, it is helpful to understand why ChatGPT is drawing so much attention in court.

Legal applications of AI are not new. For decades, the field of AI and Law has researched the possibilities of so-called ‘expert systems’, based on logical models representing human legal reasoning, to replace certain elements of legal decision-making. Today, such systems are deployed on a large scale in, for example, social security and tax administration. More recently, however, the data-driven approach to legal AI has caused a revolution. By combining large datasets (Big Data) with machine learning techniques, AI systems can learn from statistical correlations to make predictions. This enables them to predict the likelihood of recidivism or use previous case law to forecast outcomes in new cases.
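
The contrast between the two approaches can be made concrete with a minimal Python sketch. Everything in it (the eligibility rule, the features, the toy data) is invented for illustration and is not drawn from any deployed system:

```python
from sklearn.linear_model import LogisticRegression

# 1. Expert-system approach: a hand-written rule encoding
#    (hypothetical) eligibility logic in explicit legal terms.
def eligible_for_benefit(income: float, dependants: int) -> bool:
    return income < 25_000 or (income < 35_000 and dependants >= 2)

# 2. Data-driven approach: a model that learns statistical
#    correlations from the outcomes of past (toy) cases.
past_cases = [[20_000, 0], [40_000, 1], [30_000, 3], [50_000, 0]]
past_outcomes = [1, 0, 1, 0]  # 1 = granted, 0 = denied
model = LogisticRegression().fit(past_cases, past_outcomes)

print(eligible_for_benefit(24_000, 1))   # rule-based answer: True
print(model.predict([[24_000, 1]])[0])   # learned prediction on the same facts
```

The first approach is transparent but rigid; the second picks up patterns no one wrote down, which is precisely what makes it both powerful and, as discussed below, opaque.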

Large Language Models (LLMs) such as ChatGPT follow similar principles, training on vast, internet-scraped textual datasets and deploying machine learning and natural language processing to predict, in essence, the most probable next word in a sentence. ChatGPT can instantly generate responses to complex questions, draft documents, and summarize vast amounts of legal text. As a remarkable byproduct, it can appear to perform quite well in legal tasks such as research assistance, and it has even proven capable of passing first-year American law exams.
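
Stripped of the neural networks and billions of parameters that real LLMs use, the underlying mechanism amounts to predicting the most probable continuation observed in training text. A deliberately toy bigram counter illustrates the statistical principle only:

```python
from collections import Counter, defaultdict

# Toy "training corpus"; real models train on internet-scale text.
corpus = "the court held that the court ruled that the judge held".split()

# Count which word follows which: a simple bigram model.
next_word = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word[current][following] += 1

def predict(word: str) -> str:
    # Return the most frequent continuation seen in training.
    return next_word[word].most_common(1)[0][0]

print(predict("the"))  # -> "court" ("court" followed "the" twice, "judge" once)
```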

Yet these possibilities come with risks. ChatGPT can produce inaccurate answers (so-called “hallucinations”) and lacks real-time access to private legal databases. Research by Dahl et al. demonstrated that ChatGPT-4 generated erroneous legal information or sources in 43% of its responses. In a now-infamous incident, a New York lawyer was reprimanded after ChatGPT cited non-existent case law. Moreover, the technology is akin to a black box: due to the complex nature of neural networks and the vast scale of training data, it is often difficult, if not impossible, to trace how specific outputs are generated. Finally, bias can arise from incomplete or selective training data, leading to stereotypical or prejudiced output. Over- or underrepresentation in the input data affects the system’s outcomes (garbage in, garbage out).
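
The “garbage in, garbage out” point is easy to demonstrate with invented numbers: a naive model trained on skewed data simply reproduces the skew, with unearned confidence.

```python
from collections import Counter

# Invented, deliberately skewed training data: group B is barely represented.
training = [("A", "granted")] * 90 + [("B", "denied")] * 10

seen: dict[str, Counter] = {}
for group, outcome in training:
    seen.setdefault(group, Counter())[outcome] += 1

def predict(group: str) -> str:
    # Majority outcome per group, however thin the underlying evidence.
    return seen[group].most_common(1)[0][0]

print(predict("A"))  # "granted", based on 90 examples
print(predict("B"))  # "denied", based on just 10, yet stated just as firmly
```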

Despite these significant caveats, the following Dutch judgments show how AI is increasingly making its appearance in courtrooms, potentially shaping judicial discourse and practice. First, the use of ChatGPT by a judge is discussed, followed by the cases in which litigants used the chatbot.

ChatGPT in Action: From the Bench to the Bar

A.    Judicial Use Cases

1.     Gelderland District Court (ECLI:NL:RBGEL:2024:3636)

In this neighbour dispute over rooftop construction and diminished output from solar panels, the court in Gelderland used the chatbot’s estimates to approximate damages. It held:

“The district court, with the assistance of ChatGPT, estimates the average lifespan of solar panels installed in 2009 at 25 to 30 years; it therefore puts that lifespan here at 27.5 years … Why it does not adopt the amount of € 13.963,20 proposed by the claimant in the main action has been sufficiently explained above and in footnotes 4–7.” (at 5.7)

The judge, again relying on ChatGPT, also held that insulation material thrown off the roof was no longer usable, thus awarding damages and cleanup costs. (at 6.8)

B.    Litigant Use Cases

2.     The Hague Court of Appeal (ECLI:NL:GHDHA:2024:711)

In this appellate tax case concerning the official valuation of a hotel, the Court of Appeal in The Hague addressed arguments derived from ChatGPT. The appellant had introduced AI-generated text to contest the assessed value, but the court found the reference unpersuasive. It held:

“The arguments put forward by the party that were derived from ChatGPT do not alter [the] conclusion, particularly because it is unclear which prompt was entered into ChatGPT.” (at 5.5)

3.     The Hague Court of Appeal (ECLI:NL:GHDHA:2024:1771)

In a tax dispute regarding the registration tax (BPM) on an imported Ferrari, the Court of Appeal in The Hague rejected the taxpayer’s reliance on ChatGPT to identify suitable comparable vehicles. The claimant had asked ChatGPT to list vehicles that shared a similar economic context and competitive position with the Ferrari in question, resulting in a list of ten luxury cars. The court explicitly dismissed this approach, considering that while AI might group vehicles based on general economic context, this method does not reflect what an average consumer (a human) would consider genuinely comparable. (at 5.1.4)

4.     District Court of The Hague (ECLI:NL:RBDHA:2024:18167)

In this asylum dispute, the claimant (an alleged Moroccan Hirak activist) argued that returning to Morocco would expose him to persecution because the authorities there routinely monitor protesters abroad. As evidence of this state surveillance, his lawyer cited a ChatGPT response. The court dismissed the argument as unfounded:

“That the claimant’s representative refers at the hearing to a response from ChatGPT as evidence is deemed insufficient by the court. First, because the claimant has submitted neither the question posed nor ChatGPT’s answer. Moreover, the lawyer admitted at the hearing that ChatGPT also did not provide any source references for the answer to the question posed.” (at 10.1)

5.     Amsterdam District Court (ECLI:NL:RBAMS:2025:326)

In this European Arrest Warrant case, the defence submitted a Polish-language report about prison conditions in Tarnów, hoping to demonstrate systemic human rights violations. However, at the hearing, counsel acknowledged having used ChatGPT to translate the report into Dutch, and the court found this insufficient. Lacking an official translation or a version from the issuing authority itself, the court ruled that it could not verify the authenticity and reliability of the AI-generated translation. As a result, the ChatGPT-based evidence was dismissed and no further questions were put to the Polish authorities about the Tarnów prison. (at 5)

6.     Council of State (ECLI:NL:RVS:2025:335)

In a dispute over a compensation claim for diminished property value, the claimant (Aleto) attempted to show that the expert’s use of certain comparable properties was flawed and submitted late-filed material derived from ChatGPT. The Council of State ruled:

“During the hearing, Aleto explained that it obtained this information through ChatGPT. Aleto did not submit the prompt. Furthermore, the ChatGPT-generated information did not provide any references for the answer given. The information also states that, for an accurate assessment, it would be advisable to consult a valuer with specific knowledge of industrial sites in the northern Limburg region.” (at 9.2)

Double Standards: AI Use by the Court vs. the Litigants

How to make sense of these rulings? Let’s start with the ruling in which the judge used ChatGPT to estimate damages, which generated by far the most debate. Critics of the ChatGPT use in this case argue that the judge essentially introduced evidence that the parties had not discussed, thereby running counter to fundamental principles of adversarial proceedings, such as the right to be heard (audi et alteram partem), and to the requirement under Dutch civil law that judges base decisions solely on facts introduced by the parties or on so-called “facts of general knowledge” (Article 149 of the Dutch Code of Civil Procedure).

A comparison has been drawn to prior case law involving judges who independently searched the internet (the so-called “Googling judge”). As a key criterion, the Dutch Supreme Court has ruled that parties must, in principle, be given the opportunity to express their views on internet-sourced evidence. However, others are less critical of the judge’s ChatGPT use, pointing out that judges have considerable discretion in estimating damages under Article 6:97 of the Dutch Civil Code.

Looking more closely at the way the judge used ChatGPT in this case, it remains unclear whether the parties were afforded the opportunity to contest the AI-generated results. Nor does the ruling specify what prompts were entered or the precise answers ChatGPT produced. That stands in stark contrast to the Colombian judge’s use of ChatGPT, described in Juan David Gutiérrez’s Verfassungsblog post, where the court fully transcribed the chatbot’s responses, comprising 29% of the judgment’s text.

It also contrasts paradoxically with the judicial attitude towards litigants’ own ChatGPT submissions. In fact, three reasons have been cited by Dutch judges for rejecting ChatGPT-generated evidence:

  1. It is unclear which prompt was entered into ChatGPT;

  2. The claimant has not provided ChatGPT’s answers;

  3. The ChatGPT-generated text did not cite sources.

In other cases, ChatGPT use was dismissed because ChatGPT’s views were seen as incomparable to those of the average consumer, or because the translation was deemed unreliable.

Returning to the Dutch judge’s use of ChatGPT, it appears to suffer from the very same shortcomings that courts have identified as grounds for rejecting ChatGPT-based evidence when introduced by the parties. Such double standards, in my opinion, point to an urgent need to develop consistent guidelines.

Emerging Guidelines for Judicial AI Use?

Although new guidance documents on AI use by judges and lawyers are emerging in jurisdictions such as the EU, the UK, New Zealand, and the Flemish Region, they rarely spell out explicit requirements for introducing AI-generated content in legal proceedings, and instead emphasize general principles such as acknowledging the limitations of AI and maintaining attorney-client privilege. By contrast, the Dutch case law so far suggests at least three elements that can shape best practices: (1) ensuring the prompt is disclosed, (2) requiring that ChatGPT’s full answers are shared, and (3) demanding proper references.

While such requirements align with widely recognized principles of transparency and reliability, they alone may not suffice. The Dutch response to the judge’s use of AI reflects deeper concerns about fair trial guarantees and the importance of human oversight, which bring the matter into the realm of constitutional significance. Consequently, when judges employ LLMs to assist in decision-making, they must be vigilant as to the impact on parties’ rights to be heard, to submit evidence, and to challenge evidence.

Furthermore, it is important to consider how the requirements of the EU AI Act come into play when LLMs such as ChatGPT are used by judges and litigants. Annex III of the AI Act qualifies certain judicial applications of AI as ‘high risk’, triggering the AI Act’s most stringent obligations, such as the requirement for deployers to conduct a Fundamental Rights Impact Assessment (Article 27), as well as, inter alia, obligations regarding data governance (Article 10), human oversight (Article 14), and transparency (Article 13). These obligations come into play with regard to:

“AI systems intended to be used by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or to be used in a similar way in alternative dispute resolution” (Annex III(8)(a)).

As the corresponding recital notes, this high-risk qualification serves to “address the risks of potential biases, errors and opacity” (Rec. 61).

However, the classification becomes problematic for AI systems that serve more than strictly legal purposes, like LLMs. LLMs like ChatGPT fall under the umbrella of general-purpose AI (GPAI) systems, meaning they are highly capable and powerful models that can be adapted to a wide variety of tasks. In practice, the very same LLM might be employed to increase efficiency in organizational processes (in principle a low-risk application), yet also be employed to assist judicial authorities in “researching and interpreting facts and the law”. Whether a given use case falls under ‘high-risk’ can therefore be a fine line, even though it matters greatly for the applicable obligations.
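
How fine that line is can be shown with a schematic sketch. This is my own simplified reading of Annex III(8)(a), not an official classification procedure, and the purpose strings are invented labels:

```python
# Toy rendering of the Annex III(8)(a) distinction for one and the same
# general-purpose model; real classification requires case-by-case legal
# assessment, not a lookup table.
HIGH_RISK_JUDICIAL_USES = {
    "researching and interpreting facts and the law",
    "applying the law to a concrete set of facts",
    "use in a similar way in alternative dispute resolution",
}

def annex_iii_8a(purpose: str) -> str:
    if purpose in HIGH_RISK_JUDICIAL_USES:
        return "high-risk: FRIA (Art. 27), data governance (Art. 10), oversight (Art. 14)"
    return "not high-risk under Annex III(8)(a), e.g. organisational efficiency"

# The same LLM, two deployments, two very different regulatory regimes:
print(annex_iii_8a("applying the law to a concrete set of facts"))
print(annex_iii_8a("summarising internal meeting notes"))
```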

Moreover, even when AI use in court would not meet the criteria for high-risk classification, certain general provisions still apply, such as Article 4 on AI literacy. The AI Act’s inclusion of rules for AI systems capable of generating, inter alia, text (Article 50(2) AI Act) further complicates matters by imposing transparency requirements in a machine-readable format. It remains undefined how this transparency obligation might operate in, for example, ad hoc use of ChatGPT by judges and litigants. In that regard, the transparency obligation is directed primarily at providers, who must implement technical solutions, rather than at the users themselves.

Finally, with regard to privacy and data protection, the EU AI Act simply refers back to the general framework laid down by the General Data Protection Regulation ((EU) 2016/679), the ePrivacy Directive (2002/58/EC), and the Law Enforcement Directive ((EU) 2016/680). However, the data-driven approach inherent to machine-learning systems like ChatGPT, involving the massive processing of (personal) data, opens up a Pandora’s box of novel privacy risks.

As Sartor et al. emphasize in their study for the European Parliament, this approach “may lead […] to a massive collection of personal data about individuals, to the detriment of privacy.” Yet, from a privacy perspective, the AI Act does not itself lay down a framework to mitigate these risks, and it remains questionable whether existing regulations suffice. In particular, precisely how the GDPR applies to AI-driven applications remains subject to ongoing guidance by data protection authorities. It goes without saying that, in sensitive contexts such as judicial settings, ensuring compliance with the GDPR’s requirements is vital for preserving public trust.

Prospects for a Dutch Judicial AI Strategy

On a concluding note, the jurisprudence discussed above carries direct implications for the Dutch judiciary’s forthcoming AI strategy. The Council for the Judiciary’s 2024 Annual Plan mentions the development of such a strategy by mid-2024, yet no document had been published by the end of that year. In a letter to the Dutch Senate, the State Secretary for Digitalization and Kingdom Relations stated that the judiciary intends to present its AI strategy in early 2025; however, no such strategy has been made public to date.

What is clear from the rulings so far is that AI and LLMs are increasingly finding their way into courtrooms. Yet the current situation is far from ideal. It reflects a fragmented patchwork of double standards and legal uncertainty concerning a technology that intersects directly with constitutional guarantees such as the right to a fair trial, including the right to be heard and equality of arms.

In light of this, it seems essential that any overarching AI policy be accompanied by clear and practical guidelines for judicial AI use. Based on the developments reviewed in this post, three elements appear especially urgent:

1.     Establishing appropriate preconditions for AI use by both judges and litigants (thus avoiding double standards while safeguarding parties’ fundamental rights and due process);

2.     Introducing a clear risk categorization in line with the EU AI Act (taking advantage of lower-risk applications while exercising caution with high-risk ones); and

3.     Ensuring robust data privacy and protection.

Although it may be tempting to adopt AI tools simply because they are “jolly useful”, disregarding these principles jeopardizes the trust and legitimacy needed for a responsible integration of AI within the judicial system. If AI is to become part of the future of the courts, then the time to lay down the rules is now.

D.G. (Daan) Kingma holds an LLB in International and European Law from the University of Groningen and a master’s in Asian Studies from Leiden University (both completed cum laude). He is currently pursuing a legal research master’s (LLM) at the University of Groningen, focusing on the intersection of technology, privacy, and EU digital law.
