
Is Artificial Intelligence Undermining The Legal System?

Posted 7th March 2022 by Dr Lance Eliot | Last updated 18th July 2024

Let’s consider an interesting potential legal case that might arise sooner than you think.

A prosecutor announces the filing of charges against a well-known figure. This riles up ardent fans of the popular person. Some of those fans are adamant that their perceived hero can do no wrong and that any effort to prosecute is abundantly unfair, misguided, and altogether a travesty of justice.

Protests ensue. Rowdy crowds show up at the courthouse where the prosecutor is typically found. In addition, protesters even opt to stand outside the home of the prosecutor and make quite a nuisance of themselves, attracting outsized TV and social media attention. Throughout this protest storm, the prosecutor stands firm and states without reservation that the charges are entirely apt.

All of a sudden, a news team gets wind of rumours that the prosecutor is unduly biased in this case. Anonymously provided materials appear to show convincingly that the prosecutor wanted to go after the defendant for reasons other than the purity of the law. The trove includes text messages and emails attributed to the prosecutor, along with video snippets in which the prosecutor clearly makes inappropriate and unsavoury remarks about the accused.

 Intense pressure mounts to get the prosecutor taken off the case. Likewise, similar pressure arises to get the charges dropped.

What Should Happen?

Well, imagine if I told you that the text messages, the emails, and the video clips were all crafted via the use of AI-based deepfake technologies. None of that seeming “evidence” of wrongdoing, or of at least inappropriate actions by the prosecutor, is real. It certainly looks real, though. The texts use the same style of text messaging that the prosecutor normally uses. The emails have the same written style as other emails by the prosecutor.

And the most damning of the materials, those video clips, clearly show the prosecutor’s face, and the words spoken are in the prosecutor’s own voice. You might have been willing to assume that the texts and the emails could be faked, but the video seems to settle the matter. This is the prosecutor caught on video saying things that are utterly untoward in this context. Yet all of it could readily be prepared via today’s AI-based deepfake technology.

I realise it might seem far-fetched that someone would use such advanced technology simply to get the prosecutor to back down. The thing is, access to deepfake-creation tools is becoming as easy as falling off a log. There is nothing expensive about it. You can readily find those tools online via any ordinary Internet search.

You also don’t need to be a rocket scientist to use those tools. You can learn to use them in an hour or less. I dare say, a child can do it (and they do). The AI takes care of the heavy lifting for you.

Lest you think that the aforementioned scenario about the prosecutor is outsized and won’t ever happen, I bring to your attention a recently reported circumstance that made headlines: cybercriminals planted incriminating evidence on the devices of a lawyer who is a human rights defender.

This is perhaps more daunting than the prosecutor scenario in that the so-called incriminating evidence was inserted into the electronic devices customarily used by the lawyer. When the devices were inspected, the disreputable materials seemed to have been created by the lawyer. Unless you knew how to look carefully into the detailed bits and bytes, it would appear that the attorney had indeed created the scandalous materials.
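To make “looking into the bits and bytes” a bit more concrete, here is a minimal first-pass triage sketch in Python that lists filesystem timestamps for a directory of suspect files. The directory path and cutoff date are hypothetical placeholders, and a determined attacker can forge timestamps, so treat any anomalies as leads for a proper forensic examiner, not as proof of planting.

```python
# First-pass triage: list filesystem timestamps for a set of suspect files.
# The directory and cutoff date below are hypothetical placeholders; attackers
# can forge timestamps, so treat anomalies as leads, not as proof.
from datetime import datetime, timezone
from pathlib import Path

SUSPECT_DIR = Path("/evidence/suspect_files")        # hypothetical path
CUTOFF = datetime(2021, 6, 1, tzinfo=timezone.utc)   # hypothetical date

for f in sorted(SUSPECT_DIR.rglob("*")):
    if not f.is_file():
        continue
    modified = datetime.fromtimestamp(f.stat().st_mtime, tz=timezone.utc)
    flag = "  <-- modified after cutoff" if modified > CUTOFF else ""
    print(f"{f}  modified={modified:%Y-%m-%d %H:%M}{flag}")
```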

According to the news coverage, this took place in India and was part of an ongoing plot by cybercriminals carrying out an Advanced Persistent Threat (APT) style of cyberattack against all manner of civil rights defenders. The evildoers are targeting attorneys, reporters, scholars, and just about anybody they believe ought not to be engaged in noteworthy legal-oriented civil rights work.

The presumed intent of the planted content is to discredit those involved in human rights cases. By seeding the targeted computers with untoward materials, the attackers can stage a startling reveal at just the right time, branding the unsuspecting victim as a villain or making them appear to have committed some crime or misconduct that undercuts their personal and professional efforts as a civil rights proponent.

You never know what evil might lurk on your own electronic devices (keep a sober eye on your smartphone, laptop, personal computer, and so on).

Using AI To Make Lawyers Look Like Crooks

The incident reported as occurring in India could happen anywhere in the world. Given that your electronic devices are likely connected to the Internet, a cyber break-in can feasibly be carried out by someone in their pyjamas on the other side of the globe. Make sure all of your cybersecurity protections are enabled and kept up to date (this won’t guarantee avoiding a break-in, though it reduces the odds). Also run ongoing scans of your devices to try to detect any adverse implants early.
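As a concrete (though simplified) illustration of such scans, the Python sketch below records SHA-256 hashes of files into a baseline and, on later runs, reports any files that are new or changed. The monitored directory and baseline filename are assumptions for illustration only; dedicated file-integrity monitoring tools do this far more robustly.

```python
# Minimal file-integrity check: record SHA-256 hashes on the first run,
# then report new or changed files on later runs.
# The monitored directory and baseline path are illustrative assumptions.
import hashlib
import json
from pathlib import Path

WATCHED_DIR = Path.home() / "Documents"            # assumed directory to watch
BASELINE = Path.home() / ".integrity_baseline.json"

def hash_file(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

current = {str(p): hash_file(p) for p in WATCHED_DIR.rglob("*") if p.is_file()}

if BASELINE.exists():
    baseline = json.loads(BASELINE.read_text())
    for path, digest in current.items():
        if path not in baseline:
            print(f"NEW FILE: {path}")
        elif baseline[path] != digest:
            print(f"CHANGED: {path}")
else:
    print("No baseline found; recording one now.")

BASELINE.write_text(json.dumps(current, indent=2))
```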

There was no reported indication of whether the planted materials were made by hand or via an AI-based deepfake system. Text messages and emails can easily be prepared by hand; there is no necessity to use an AI system for that. Video deepfakes, by contrast, are far less likely to be done by hand. You would pretty much need a reasonably good AI-based deepfake tool to pull that off. If a deepfake is crudely prepared, the victim can expose the videos as fakery with relative ease.

We all know that video and audio are the most powerful of deepfake productions. You can usually argue persuasively that texts and emails did not originate with you. The problem with video and audio is that society is enamoured of what it can see with its own eyes and hear with its own ears. People are only now wrestling with the realisation that they should not take at face value the video and audio they come across. Old habits of immediate acceptance are hard to overcome.

It used to be that the AI used for deepfakes was quite crude. You could watch a video and, with a scant modicum of inspection, realise that it must be a fake. No more. Today’s AI generators that produce deepfake video and audio are getting really good at the fakery. Nowadays, the only practical way to reveal a fake video tends to involve using AI to do so. Yes, ironically, there are AI tools that can examine a purported deepfake and attempt to detect whether fakery was used in the making of the video and the audio (there are telltale trails sometimes left in the content).
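As a small taste of those telltale trails, the Python sketch below applies classical error-level analysis (ELA) to a still image, such as a frame extracted from a suspect video. This is a simple forensic heuristic rather than the AI-based detectors described above, the input filename is a placeholder, and it assumes the Pillow library is installed. Regions edited after the original JPEG compression often recompress differently and show up as brighter areas.

```python
# Classical error-level analysis (ELA): recompress an image as JPEG and
# amplify the per-pixel differences. Edited regions often carry a different
# "error level" than the rest of the image. A heuristic, not a verdict.
# Requires Pillow; the input filename is a hypothetical placeholder.
from PIL import Image, ImageChops

ORIGINAL = "suspect_frame.jpg"   # hypothetical input image

img = Image.open(ORIGINAL).convert("RGB")
img.save("ela_tmp.jpg", "JPEG", quality=90)   # recompress at a known quality

diff = ImageChops.difference(img, Image.open("ela_tmp.jpg"))

# Scale the differences so edited regions stand out as brighter areas.
max_diff = max(band.getextrema()[1] for band in diff.split()) or 1
ela = diff.point(lambda px: min(255, px * (255 // max_diff)))
ela.save("ela_result.png")
print(f"Max error level: {max_diff}; inspect ela_result.png for bright regions")
```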

This AI versus AI gambit is an ongoing cat and mouse game. Improvements are continually being made in the AI that produces deepfakes, and meanwhile, improvements are equally being made in the AI that tries to ferret out deepfakes. Each tries to keep a step ahead of the other.

Final Thoughts

So, be on the watch for AI-based deepfake materials being produced about you.

This won’t be happening on any widespread basis in the near term. On the other hand, in a few years the likelihood of AI-based deepfakes being used in a nefarious way against attorneys, judges, and likely even juries is going to grow. Ease of use, low cost, and awareness are all that it takes for evildoers to employ AI-based deepfakes for foul purposes, especially if a few successes get touted as having undercut the wheels of justice in any notable fashion.

You should also be on your toes about AI-based deepfakes underpinning evidence that parties attempt to introduce at trial. Do not be caught off-guard. You can decidedly bet that both criminal and civil trials will soon enough be deluged with evidence that might or might not be crafted via AI-based deepfakes. The legal wrangling over this is going to be constant, loud, and will add a hefty new wrinkle to how our courts and our court cases get handled.

About the author: Dr Lance Eliot is globally recognised for his expertise on AI & Law. He serves as a Stanford University Fellow affiliated with the Stanford Center for Legal Informatics and as the Chief AI Scientist at Techbrium Inc. His writings have amassed over 5.6 million views, including his ongoing and popular Forbes column. Formerly a professor at the University of Southern California (USC), he has been a top tech executive, a worldwide CIO/CTO, and most recently was at a major Venture Capital firm. His books on AI & Law are highly praised and ranked in the Top 10.
