Justice, Justice You Shall Automate?

How Artificial Intelligence is subverting the pursuit of justice in Israel

By Mordechai Sones

The Torah portion of Shoftim contains one of the most foundational injunctions in Jewish jurisprudence: “צֶדֶק צֶדֶק תִּרְדֹּף” – “Justice, justice you shall pursue.” Rashi explains the repetition of the word “justice” as a profound command: one must pursue justice relentlessly, through just and righteous means. The method of pursuit is as crucial as the outcome.

This command by the Creator poses a direct and urgent challenge to the modern State of Israel as it navigates the integration of Artificial Intelligence into its legal sphere. The premature and inadequately regulated adoption of AI, a tool that is inherently opaque, prone to error, and devoid of human conscience, represents not merely a new method, but a fundamental subversion of this holy pursuit. It threatens to replace the nuanced, human striving towards righteousness with the cold, and often flawed, logic of the machine.

Prevailing policy prioritizes economic innovation over essential legal safeguards, leading to demonstrable failures in the administration of justice. A stark contrast has emerged between the secular court system, which is grappling with the tangible consequences of AI misuse, and the Rabbinical Batei Din, which largely reject AI’s role in judicial reasoning on deeply rooted theological grounds. As matters stand, AI adoption subverts the pursuit of true justice by introducing systemic bias, eroding human discretion and public trust, and creating a profound accountability vacuum.

A Deliberate Gamble on Innovation Over Precaution

The challenges of AI within Israel’s legal system are a direct consequence of a national policy that consciously prioritizes technological and economic ambitions over robust legal and ethical safeguards. This permissive environment, characterized by a preference for “soft law,” sets the stage for the injustices documented in courtrooms across the country.

In December 2023, the government cemented a strategy of forgoing formal, rigid AI legislation, viewing it as a potential impediment to the nation’s powerful high-tech sector, which contributes 18% of its GDP. The policy’s primary objective is to foster AI advancement, with safeguarding human rights presented as a parallel, but not overriding, concern.

This strategic choice has created a fundamental disconnect between the executive branch’s economic priorities and the judiciary’s need for clear, enforceable rules. The government’s “soft law” approach effectively outsources the work of setting AI standards to the courts, forcing them into a reactive posture.

The consequences are severe. In the absence of binding rules, legal practitioners and even state agencies have experimented with powerful but flawed AI tools, leading directly to the submission of fabricated evidence and fictitious laws in court.

The 2023 AI Policy champions “Responsible Innovation” and “explainability” as core principles. Yet, the hands-off regulatory approach simultaneously allows for the unfettered development of “black box” systems whose inner workings are opaque.

This is the “explainability paradox”: the government’s policy espouses a critical safeguard for justice while cultivating an environment that ensures this safeguard cannot be enforced. The direct result is the situation sharply criticized by an Israeli District Court, where the police used a predictive AI tool that operated as a “black box,” with no one able to explain how it reached its conclusions, making effective judicial review impossible.

The Machine in the Courtroom: A Record of Missteps and Rebuke

The negative impact of AI on the Israeli justice system is not speculative but a documented fact. The most glaring failure has been the repeated submission of court filings based on AI-generated “hallucinations”—entirely fabricated case law and fictitious legal statutes.

This pattern of misconduct has escalated from individual attorneys to the Israel Police. In one instance, an attorney submitted pleadings to the Jerusalem Magistrates’ Court citing fabricated rulings. The crisis came to a head in February 2025, when the Supreme Court heard an appeal from a Sharia Court decision in which the argument was based almost entirely on fabricated judgments.

The case involving the Israel Police is particularly alarming. In May 2025, during a hearing at the Hadera Magistrates’ Court, the police submitted an argument citing legal clauses that simply did not exist. The defendant’s attorney correctly suspected the use of ChatGPT, a suspicion the police representative was forced to admit was correct. Judge Ehud Kaplan reacted with shock, stating, “If I thought I had seen everything in the 30 years I have been on the bench, I must have been wrong.” This incident reveals a dangerous combination of technological illiteracy and professional irresponsibility from a state actor.

Beyond fabricated law lies the more subtle threat of algorithmic bias. In a personal injury lawsuit in the Haifa Magistrates’ Court, an AI-generated summary of medical records was disqualified not because it was demonstrably false, but because of the inherent risk that the AI could create “new, processed content tailored to the needs of the party using them,” subtly influencing an expert’s judgment. This established a crucial precedent: the potential for undetectable bias can be sufficient grounds for inadmissibility.

Corrosion of Trust

The very fabric of Israeli society is woven with trust—trust in the State’s institutions, media, and legal system. Technology, particularly AI, is proving to be a powerful corrosive agent against this trust. Just as a simple Google Translate glitch can turn a routine police report into an international incident by mistranslating an innocent bystander’s post as “attack them,” the uncritical use of AI in law destroys the assumption of veracity that underpins the entire judicial process.

When a lawyer or police officer can, with a few keystrokes, generate and submit a legal argument based on non-existent laws, the court is no longer a forum for truth-seeking.

This technological carelessness finds fertile ground in a legal culture already accustomed to sidelining due process, most notably through the state’s use of administrative detention, where individuals are imprisoned without trial and judicial oversight is often rendered moot. In an environment so permissive of such a profound departure from the rule of law, it is little wonder that the new, more subtle erosion of justice represented by AI has been so easily and carelessly accepted.

The courtroom becomes a theater of digital illusion, and public confidence plummets. Each AI “hallucination” submitted as fact is a small tear in the social contract, leaving citizens to wonder if they are governed by law or by algorithmic whim.

This erosion of trust is not a bug but a feature of a system that moves too fast and with too much complexity for human oversight to keep pace. The allure of speed and efficiency creates a powerful temptation to abdicate responsibility to the machine.

As seen in other high-stakes domains, from aviation to finance, the increasing autonomy of AI systems creates an “inescapable risk.” When these complex, self-learning systems fail, they fail in unpredictable and often catastrophic ways, and the distributed nature of their creation makes it nearly impossible to assign accountability.

This is the accountability vacuum now emerging in Israeli law: when an algorithmic error leads to a miscarriage of justice, who is to blame? The developer? The user? The machine itself? Without clear answers, there can be no true recourse, and therefore, no justice.

Beit Din and the Algorithm: Halachic Impasse

While the secular legal system struggles with the misapplication of AI, Israel’s religious court system, the Batei Din, presents a different case. Here, there are no documented instances of a dayan being caught using AI for legal rulings. Their resistance to AI stems from a profound understanding of the nature of halakha (Jewish law).

A significant body of rabbinic opinion argues that AI is, by its very nature, incompatible with rendering a halakhic ruling. Halakha is not a binary legal code; rulings depend on intangible human factors. Justice, in this view, requires an empathetic engagement that a machine cannot replicate.

Furthermore, a correct ruling is derived not only from written texts but from mesorah—the vast body of unwritten tradition passed down orally—and shimush, the practical wisdom gained through long apprenticeship. An AI has no access to this lived, embodied wisdom. Finally, traditional Jewish thought holds that a qualified human judge receives divine assistance—siyata dishmaya—to arrive at a just ruling, a grace not extended to an algorithm.

However, the religious courts are not entirely insulated from AI’s disruptive influence. In a significant case, an attorney was caught misusing AI in an attempt to challenge a religious court’s ruling within the secular legal system. The lawyer’s appeal to the Supreme Court was based almost entirely on AI-hallucinated precedents. This incident highlights how the risks of AI can still affect and undermine the religious courts, even if the technology is not used by the judges themselves, by corrupting the appellate process in the secular system.

Are We Just Machines Now?

The uncritical embrace of technology risks a profound dehumanization, a flattening of human experience into machine-readable data. We are becoming primitive moderns, armed with godlike technological power but increasingly disconnected from the truth and wisdom that give it meaning. In a world mediated by luminous screens and dead algorithms, we are losing the ability to truly hear one another. Communication becomes a transaction, an exchange of information rather than a meeting of souls.

This is precisely the danger AI introduces into the courtroom. It tempts the legal system to treat human beings—with their complex histories, emotions, and moral struggles—as mere bundles of data points to be processed. A judge’s wisdom, a lawyer’s empathy, a litigant’s plea for understanding—these are the very things that cannot be digitized. By outsourcing legal reasoning to a machine, we are not just risking error; we are risking our own humanity.

Pursuing True Justice

The evidence from Israel’s national policy, secular courtrooms, and religious legal philosophy converges on a single, troubling conclusion: Artificial Intelligence, in its current form and under the prevailing regulatory regime, fundamentally subverts the pursuit of justice. The injunction “Justice, justice you shall pursue” is not a call for perfect, automated outcomes. It is a call for a deeply human process—a relentless, conscientious, and empathetic struggle toward what is right. It demands that we use just means to achieve just ends.

An algorithm cannot “pursue.” It can only process. A machine cannot weigh the unquantifiable essence of a human life. It cannot understand mercy, context, or repentance. To place our faith in such a tool is to abandon the very pursuit the Torah commands. To realign the integration of technology with the core principles of justice, a fundamental shift is required: from a “soft law” approach to binding legislation for high-stakes sectors; from advisory opinions to mandatory professional education; and from a naive embrace of innovation to a mature, critical engagement with its moral limits.

The pursuit of justice is the work of human hands, human minds, and human hearts working in partnership with the Creator, according to His law.

As AI’s footprint grows, so does the urgency to protect this essential truth, lest Israel’s courts become mere processors of data, ravaging the souls they blindly judge while remaining deaf to the Torah’s call to pursue justice, justly.
