Read it Yourselves: Experts in Legal Proceedings May Not Rely on Work Products of Artificial Intelligence Tools
February 09, 2025
Background: In an interesting decision handed down by the Haifa Magistrates Court,[1] Judge Tali Merom expressed a clear-cut position against providing a court-appointed expert with a medical summary prepared using artificial intelligence.
The plaintiff, a woman injured in three traffic accidents, filed a compensation claim. The Court appointed an expert and instructed the parties to provide him with the pertinent materials, such as medical records and the like, on the basis of which the expert could prepare his report. One of the defendants, an insurance company, sent the expert, alongside these pertinent materials, a document prepared by an AI tool called DigitalOwl, described as a summary of the medical records provided to him.
According to the ruling, DigitalOwl is an AI-powered platform designed to analyze complex medical records and produce a document that includes chronological summaries, colored highlighting of relevant keywords, and hyperlinks to underlying records. These features are designed to make reading documents easier and highlight important information.
The plaintiff was unenthused and asked that the AI-made summary be excluded from the materials to be reviewed by the expert. She claimed that the expert did not need the AI-made summary to begin with, as he was perfectly capable of reading the actual materials and reaching his own conclusions, and added that the expert’s reliance on this AI-made summary might bias him, since a medical summary created by an AI tool could not be relied on as neutral.
The court accepted the plaintiff’s motion, and prohibited the presentation of the AI-made summary to the court-appointed expert, writing that “[t]he use of AI-based documents, which highlight certain information in a colorful and structured way, raises a concern that the expert’s judgment may be unwittingly influenced and the objectivity of his opinion compromised”.
The judge also warned that allowing the presentation of such AI-made documents could create a “slippery slope” where new technologies will prejudice the independence of court-appointed experts. Therefore, it was decided that the expert may not be presented with the AI-generated summary.
Critique and ramifications: According to Regulation 8 of the Traffic Accident Victims Compensation Regulations (Experts), 1986, each party must provide to the expert appointed by the court “all documents regarding the medical treatment that he received and regarding the examinations that he underwent for such treatment, which relate to the matter in dispute, provided that he does not provide the expert with a medical opinion.” Viewing this regulation as proscribing the provision of processed material, rather than underlying raw documents, to the court-appointed expert makes it possible to arrive at the outcome reached by the Haifa court.
Would the outcome have been the same if an expert retained by one of the parties had used materials prepared for them by AI? What if the matter at hand had not been a medical issue but rather dealt with a specific technological field? The spirit of the decision supports those who would not have experts review AI-made materials: the expert is always limited to examining the primary raw materials and drawing conclusions from them, because reliance on an artificial intelligence tool to review those materials is liable to distort the expert’s judgment.
Nevertheless, a better approach would allow an expert to use artificial intelligence tools while forming their report, subject to several conditions.
First, the artificial intelligence tool should allow the expert to fulfill their duty to explain to the court and the parties how they reached their conclusions. This follows from combining the explainability requirement often imposed on artificial intelligence systems with the legal obligations of an expert under existing law. In this regard, the expert would do well to record their interactions with the artificial intelligence tool, including the queries they entered into the system and the answers they received.
Second, the expert must make a representation and a convincing argument that they have adequately examined the content generated by the artificial intelligence tool and have determined, according to their professional expertise, that they stand behind an opinion based on such use. In particular, the expert will have to examine whether the artificial intelligence correctly understood the content of the raw material and did not “hallucinate” things that do not appear in it in reaching its conclusions.
Third, due disclosure must be required – both regarding the very use of artificial intelligence and regarding the entity that provided the artificial intelligence tools – inter alia, to ensure the absence of a conflict of interest.
The first requirement above should also be considered by developers of systems intended for use in the legal world and the courtroom. Transparent and clear protocols for using AI tools in legal proceedings must be developed. It is important to ensure maximum transparency and explainability regarding how the technology – and the information created through it – is used. Care should be taken to keep original versions of documents alongside processed versions for audit and follow-up purposes.
Indeed, on the one hand, care must be taken not to make reckless and unrestrained use of innovative technological tools, which merely mirror the information on which they were trained, including the multitude of biases inherent in it. But on the other hand, the requisite degree of caution should not be exaggerated, lest the benefits inherent in using artificial intelligence be missed. Paraphrasing the court’s comment, one may ask whether, when a flesh-and-blood lawyer highlights certain information in one way or another in the pleadings, this does not likewise risk unconsciously biasing the judge’s judgment.
Given that we trust the court-appointed expert to decide on a complex technological matter, it stands to reason that we can also trust them to expertly examine the quality of the product they received from the artificial intelligence tool.
In much the same way, we would not want to fault a doctor for using an artificial intelligence-based decision support system before reviewing the quality of the information they received and the professionalism of their decision-making process.
And last but not least: several years ago, a motion to cancel an arbitral award was heard in court after it became known that the arbitrator, a retired judge, had been assisted by a family member, a lawyer by profession, in drafting the award. The court dismissed the motion, ruling that the arbitrator was entitled to receive assistance as long as he acted according to his best judgment and was the one who ultimately decided to issue the award. The requirement of transparency can be fulfilled more easily when a professional uses a computerized tool whose queries and answers are documented than when the professional is assisted by flesh-and-blood people, the workings of whose minds remain unknown.
To summarize, in light of the continuous development and improvement of artificial intelligence tools, it is only natural that expert witnesses will turn to them. However, these expert witnesses – and those courts and parties to whom they provide their services – should be aware that technological innovation is not exempt from criticism and, in some cases, may even raise complex ethical issues.
In addition, although the decision at hand did not examine in depth the artificial intelligence tool that had been used, and contented itself with rejecting the generated output solely on the basis of the concern expressed, it is not impossible that in the future the artificial intelligence tools themselves, and the way they are used, will be subjected to the scrutiny of the various courts of law. There may even be parties who will agree in advance that the experts on their behalf may use an agreed-upon artificial intelligence tool, to ensure, to the greatest possible extent, freedom from allegations and from hallucinations.[2]
[1] CC 41416-12-23 Plonit v Clal Insurance Company Ltd et al (Nevo, December 9, 2024).
[2] In England, for example, the use of artificial intelligence tools has been allowed for the purpose of making decisions regarding the scope of the material to be presented to the opposing party in document discovery procedures.
This article is provided for general information only. It is not intended as legal advice or opinion and cannot be relied upon as such. Advice on specific matters may be provided by our group’s attorneys.