US judge terminates case after discovering self-represented claimants used deepfake video evidence
A US court has struck out a case after determining that videos submitted as evidence were deepfakes, in a ruling that marks a new frontier for artificial intelligence (AI) in the courtroom. The judge, sitting in California, issued a ‘terminating sanction’ after finding that two videos presented by self-represented claimants had been fabricated using generative AI.
Victoria Kolakowski, a judge of the Superior Court of California, made the decision after examining the videos submitted by the claimants, who were seeking summary judgment. According to Judge Kolakowski, the videos showed clear signs of manipulation, including a robotic quality of expression, monotone speech, unusual word choices, and mouth movements that did not match the audio. In one of the videos, she noted that the “mouth flap did not match the words being spoken,” which convinced her that the footage was artificially generated.
Additionally, the judge identified a looping video feed and other discrepancies that pointed to the use of generative AI. Her suspicions deepened when Maridol Mendones, one of the claimants, acknowledged that some of the witnesses shown in the videos were either deceased or could not be contacted.
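The looping feed is notable because it is one of the few deepfake tells that can be checked mechanically rather than by eye. Purely as an illustration (this is not a tool used in the case, and the function name, file path, and threshold below are invented for the example), here is a minimal Python sketch that flags footage containing verbatim repeated frames by hashing each downscaled frame; real forensic tools rely on perceptual hashing and far more robust signals, since lossy re-encoding defeats exact-match checks like this one.

```python
# Toy loop detector: hash each downscaled frame and report what fraction
# of frames are verbatim repeats, one crude signal of a looped feed.
import hashlib
import sys

import cv2  # OpenCV: pip install opencv-python


def find_repeated_frames(path: str, size: tuple[int, int] = (64, 64)) -> float:
    """Return the fraction of frames whose exact pixel content appeared earlier."""
    cap = cv2.VideoCapture(path)
    seen: set[str] = set()
    total = repeats = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Downscale and convert to grayscale before hashing. Frames only
        # collide when they are bit-identical repeats; compression noise
        # would break this, which is why real tools use perceptual hashes.
        small = cv2.cvtColor(cv2.resize(frame, size), cv2.COLOR_BGR2GRAY)
        digest = hashlib.sha1(small.tobytes()).hexdigest()
        if digest in seen:
            repeats += 1
        seen.add(digest)
        total += 1
    cap.release()
    return repeats / total if total else 0.0


if __name__ == "__main__":
    ratio = find_repeated_frames(sys.argv[1])
    print(f"{ratio:.1%} of frames are exact repeats")
    if ratio > 0.2:  # arbitrary threshold chosen for this sketch
        print("High repeat ratio: footage may contain a looped segment")
```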
The case also involved a photograph, purportedly taken by a Ring doorbell camera, which was found to be “materially altered” with poor editing, such as the background being in black and white while the subject appeared in colour. Although the judge suspected more evidence had been doctored, she noted that she lacked the time, funding, and expertise to investigate further.
Judge Kolakowski ultimately decided against referring the claimants for criminal prosecution, stating that it would be “too severe and not sufficiently remedial.” Instead, she imposed the terminating sanction, ruling that the claimants had violated the trust of the court and of the defendants by submitting AI-generated evidence. The sanction, she said, would serve as a deterrent, sending a clear message to litigants that the court has “zero tolerance” for attempts to pass off deepfakes as legitimate evidence.
The ruling is seen as a significant moment in the evolving intersection of AI and the legal system. Judge Scott Schlegel of Louisiana’s Fifth Circuit Court of Appeal described the case in a blog post as a warning shot for the future. He highlighted the rapid pace of AI development, stressing that while current AI forgeries can still be identified, the technology is advancing quickly, and courts will need to find new ways to cope with increasingly sophisticated AI-generated fakes.
The federal Advisory Committee on Evidence Rules is considering a new rule (proposed Rule 707) that would require AI-generated evidence to meet the same reliability standards as expert testimony, though Judge Schlegel noted that even this might not have prevented the situation with the self-represented litigants in the Mendones case.