
AI in the Courtroom: Experts’ Use of Artificial Intelligence Tools in Legal Proceedings

January 30, 2025

Background: In an interesting decision handed down by the Haifa Magistrates Court,[1] Judge Tali Merom expressed a clear-cut position against providing a court-appointed expert with a medical summary that had been prepared using artificial intelligence software.

The plaintiff, a woman who had been injured in three traffic accidents, filed a claim for compensation, and one of the defendant insurance companies submitted to the court-appointed expert a document prepared by an AI software program called DigitalOwl.

According to the ruling, the DigitalOwl software program is a technological application designed to analyze complex medical documents, which can read medical documents presented to it and produce a document that includes chronological summaries, colored highlighting of relevant keywords, and hyperlinks to underlying raw documents.
All of these features are designed to make reading documents easier and to highlight important information.

However, the plaintiff claimed that such practice created a deliberate bias that was liable to prejudice the objectivity of the expert’s judgment, so that the expert would rely on the conclusions offered by the artificial intelligence, instead of examining the underlying raw materials himself and reaching an independent conclusion.

The court accepted the plaintiff’s motion, and prohibited the presentation of the AI-generated summary to the court-appointed expert.
Judge Merom determined that AI-generated documents raise a real concern that the expert’s judgment could be unwittingly influenced:

“The use of AI-based documents, which highlight certain information in a colorful and structured way, raises an apprehension that the expert’s judgment may be unwittingly influenced and the objectivity of his opinion compromised,” she noted in the decision.

The judge also warned that allowing the presentation of such documents could create a “slippery slope”, in which new technologies would prejudice the independence of court-appointed experts.
Therefore, it was decided that the expert would ignore the AI-generated summary.

Critique and ramifications: According to Regulation 8 of the Traffic Accident Victims Compensation Regulations (Experts), 1986, each party must submit to the expert appointed by the court “all documents regarding the medical treatment that he received and regarding the examinations that he underwent for the purpose of such treatment, which relate to the matter in dispute, provided that he does not provide the expert with a medical opinion.”
Reading this regulation as proscribing the submission of processed material, as opposed to underlying raw documents, to the court-appointed expert makes it possible to arrive at the outcome reached by the Haifa court.

Would the outcome have been the same, had an expert retained by one of the parties used material prepared for him by artificial intelligence software?
Would it have been the same in a case where the matter at hand had not been a medical issue but had dealt with a specific technological field?
The spirit of the decision is a vindication for those who answer in the affirmative: the expert is always limited to examining the primary raw materials and drawing his own conclusions, because relying on an artificial intelligence tool to examine those materials is liable to distort his judgment.

Nevertheless, it is our view that a better approach would be to allow an expert to use artificial intelligence tools while forming his opinion, subject to several conditions.

First, the artificial intelligence tool should be such as to allow the expert to fulfill his duty and explain to the court and the parties how he reached his conclusion.
This follows from combining the explainability requirement applicable to artificial intelligence systems with the expert’s legal obligations under existing law.
In this regard, the expert would do well to record his conversations with the artificial intelligence tool, including the queries he entered into the system – and the answers he received.

Second, the expert must represent, and convincingly argue, that he has properly examined the content generated by the artificial intelligence tool and has determined, based on his professional experience, that he stands behind an opinion that relies on such use.
In particular, the expert will have to examine whether the artificial intelligence correctly understood the content of the raw material and did not “hallucinate” things that do not appear in it on the way to its conclusions.

Third, due disclosure must be required – both regarding the very use of artificial intelligence and regarding the entity that provided the artificial intelligence tools — inter alia, in order to ensure the absence of a conflict of interest.

The first requirement above should also be borne in mind by developers of systems intended for use in the legal world and in the courtroom.
Transparent and clear protocols must be developed for the use of AI tools in legal proceedings.
It is important to ensure maximum transparency and explainability regarding the manner in which the technology, and the information created through it, is used.
Care should be taken to keep original versions of documents alongside processed versions, for audit and follow-up purposes.

Indeed, on the one hand, care must be taken not to make reckless and unrestrained use of innovative technological tools, which merely mirror the information on which they were trained, including the multitude of biases inherent in it. On the other hand, caution should not be exaggerated to the point of missing the benefits inherent in the use of artificial intelligence. Paraphrasing the court’s comment, one may ask: when a flesh-and-blood lawyer highlights certain information in the pleadings in one way or another, might this not unconsciously bias the judge’s judgment?

Given that we trust the court-appointed expert to decide on a complex technological matter, it stands to reason that we can also trust him to expertly examine the quality of the product he received from the artificial intelligence tool.

By the same token, we may ask: do we really wish to reach a situation in which a doctor who makes use of an artificial intelligence-based decision support system is deemed to be acting improperly by that fact alone, regardless of the quality of the information he receives and the professionalism of his own decision-making process?

And last but not least: several years ago, a petition to cancel an arbitration award was heard in court, after it emerged that the arbitrator, a retired judge, had been assisted by a family member, a lawyer by profession, in preparing the award.
The court dismissed the petition, ruling that an arbitrator was entitled to receive assistance as long as he acted according to his best judgment and he was the one who ultimately decided to issue the arbitration award.
The requirement of transparency can be fulfilled more easily when a professional uses a computerized tool whose queries and answers are documented than when the professional is assisted by flesh-and-blood people, the workings of whose minds remain unfathomable.

To summarize, in light of the continuous development and improvement of artificial intelligence tools, it is only natural that expert witnesses will also resort to using them.
However, these expert witnesses – and courts and the parties to whom they provide their services – should be aware that technological innovation is not exempt from criticism, and in some cases may even raise complex ethical issues.

In addition, although the decision at hand did not examine in depth the artificial intelligence tool that had been used, and contented itself with rejecting the generated output solely on account of the concern it expressed, it is quite possible that in the future the artificial intelligence tools themselves, and the manner of their use, will be subjected to the scrutiny of the various courts. It is even reasonable to assume that some parties will agree in advance that experts on their behalf may use an agreed-upon artificial intelligence tool, in order to ensure, to the greatest possible extent, freedom from allegations and hallucinations.[2]


[1] CC 41416-12-23 Plonit v Clal Insurance Company Ltd et al (Nevo, December 9, 2024).

[2] In England, for example, the use of artificial intelligence tools has been allowed for the purpose of making decisions regarding the scope of the material to be presented to the opposing party in document discovery procedures.