"Courts Were Already Getting Video Evidence Wrong. AI Will Make That Look Like A Warm-Up." Article Reflection No. 174 (3/1/2026)
- Mary

In the Forbes article "Courts Were Already Getting Video Evidence Wrong. AI Will Make That Look Like A Warm-Up.," journalist Lars Daniel discusses how surveillance video footage has been used in the court system and the lack of federal standards for handling this form of evidence. He points to the R. v. Benn case, in which a trial judge relied solely on his own visual perception to conclude that a man in the video evidence was the defendant, without applying any scientific method or consulting forensic experts. The stakes are high: according to the article, over 80% of court cases in the U.S. present video as evidence (a figure from the University of Colorado Boulder's Visual Evidence Lab). Daniel emphasizes the need to bring both standards and advanced technology into the court process. He also warns that generative AI exacerbates the issue by creating what appears at first glance to be improved clarity but is not actually more accurate; instead, he encourages turning to the original footage from the device the video was recorded on.
I wonder what the process of creating these standards for video evidence processing will look like. The part of this article that stands out to me most is the discussion of how AI can undermine the accuracy of evidence and potentially lead to wrongful convictions. In an article I read a while ago, I learned that emerging technologies such as AI can make the virtual filing system more efficient and thus support investigations. In this article, I learned how AI applications can harm the justice system by facilitating the use of potentially deceptive evidence. How will court systems balance AI use across these different areas, investigations on one hand and evidence presentation in court on the other, to ensure that the most accurate evidence is presented?
