
ChatGPT: Weighing AI’s Risks and Rewards for Law Firms

Posted: 28th April 2023 by Steve Whiter

ChatGPT and similar artificial intelligence projects took the world by storm in 2022, and their use by law firms may not be long in coming.

Steve Whiter, director at Appurity, offers his advice to potential early adopters of this technology on the challenges they should be aware of and the risks they will be taking.

The way we communicate, do business and even complete simple tasks is changing – all thanks to artificial intelligence (AI). And while AI tools have existed for some time, interest in this new technology recently soared when OpenAI released its artificial intelligence chatbot, ChatGPT.

ChatGPT captured the public’s imagination overnight. Its ability to generate copy at speed, complete research tasks and even participate in humanlike conversations opens up multiple operational possibilities for businesses and organisations across the globe. Law firms are no exception.

A report released by Thomson Reuters in April 2023 surveyed 440 lawyers across the UK, US and Canada about their attitudes towards, and concerns about, ChatGPT and generative AI in law firms. The survey found that 82% of respondents believe ChatGPT and generative AI can be “readily applied to legal work.” The bigger question, of course, is an ethical one. Should law firms and their employees use ChatGPT and generative AI for legal work? “Yes”, replied 51% of the survey’s respondents.

Many firms are cautious about the growing use of ChatGPT. They understand that the tool may streamline operational processes, but they are worried about how they can leverage the benefits of AI in a way that is secure, upholds confidentiality and privacy requirements and, crucially, remains ethical. Can ChatGPT be used by law firms to aid productivity? What are the relevant risks? For all partners and fee earners thinking about how to use ChatGPT or other AI tools at their firm, here are the key considerations:

Accuracy, Bias, and Ethical Concerns

AI has the potential to assist lawyers with a range of tasks. Automating clerical work, legal research and even drafting briefs could significantly improve a firm’s productivity and efficiency. However, any such use of AI comes with risks. And as sophisticated as ChatGPT may be, it is not always accurate.


For starters, AI tools are known to fabricate information. These ‘hallucinations’ are concerning precisely because they go undetected: a user has no way of knowing when ChatGPT has produced completely false information, because that content is not flagged as wrong, incorrect or missing crucial context. The only way a user can guarantee the accuracy of any AI output is by verifying the information themselves. So while there may be some operational gains in time or cost savings when relying on AI to take over menial tasks, these benefits may be counteracted by the need for a human element: a user who checks and verifies all of the AI’s outputs.
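To make that human element concrete, here is a minimal Python sketch of a human-in-the-loop workflow in which every AI draft remains unverified until a named reviewer signs it off. The generate_draft stand-in and all names are illustrative assumptions, not any particular vendor’s API or any firm’s actual process.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIDraft:
    prompt: str
    content: str
    verified: bool = False
    reviewer: Optional[str] = None
    reviewed_at: Optional[datetime] = None

def generate_draft(prompt: str) -> AIDraft:
    # Stand-in for a real model call (e.g. a request to the ChatGPT API).
    return AIDraft(prompt=prompt, content=f"[model output for: {prompt}]")

def sign_off(draft: AIDraft, reviewer: str) -> AIDraft:
    # A named human takes explicit responsibility for the content.
    draft.verified = True
    draft.reviewer = reviewer
    draft.reviewed_at = datetime.now(timezone.utc)
    return draft

draft = generate_draft("Summarise the limitation period for simple contract claims.")
assert not draft.verified      # nothing leaves the firm while unverified
sign_off(draft, reviewer="A. Associate")
```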

A related concern inherent in language processing tools is their bias – something that even the best fact-checker may not be able to mitigate. How a language processing tool is trained determines what it outputs. The people who build the tool, and the decisions they make about where and how its training data is sourced, are therefore critical to the information a user receives. This bias is not necessarily malicious, but it will be present – especially when the tool is asked to deliver ‘opinions’ or make human-like decisions. There may well be future regulatory requirements around the use of language processing in law that firms will have to adhere to in order to tackle the difficult task of eliminating bias.

Accuracy and bias concerns also go hand in hand with ethical considerations. Lawyers must serve the best interests of their clients – can they still do so if they are relying more heavily on AI to deliver content and complete tasks? And what does it mean for the profession as a whole if lawyers spend their time fact-checking the work done by language processing tools? Lawyers go through rigorous training and are bound by strict regulations; they have an ethical obligation to uphold professional standards, while ChatGPT does not. But it is the firms themselves that will be held liable if content from ChatGPT is used inappropriately. The malpractice implications could be huge.

Implications for Client Confidentiality

Firms must keep their clients’ data confidential and secure. This is an existential obligation: mishandling or misusing data could violate data protection laws or industry codes of conduct. The problem with AI tools is that users often do not know what happens to the data they input. Relinquishing control of data in this way is a risk that firms really should not take.

Before using any AI tool to assist with legal work, firms should understand exactly how inputted data is processed and used. Where is this data stored? Is it shared with third parties? What security systems are in place to ensure that the risk of data leaks is minimised? Firms already have multiple systems and processes in place to protect their clients’ data, with separate approaches for data stored on premises, in the cloud and across multiple devices. With the introduction of AI tools, it is no longer enough for firms just to secure their own infrastructures. Are there processes in place to protect specifically against a data leak or misuse of data by AI technology?
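One such process worth illustrating is a pre-submission filter that strips obvious client identifiers before a prompt ever leaves the firm. The Python sketch below is a toy example under stated assumptions: the client register and patterns are invented, and commercial data loss prevention tools are far more sophisticated.

```python
import re

CLIENT_NAMES = {"Example Holdings Ltd", "J. Bloggs"}   # hypothetical client register
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def redact(text: str) -> str:
    # Replace known client names and email addresses before submission.
    for name in CLIENT_NAMES:
        text = text.replace(name, "[CLIENT]")
    return EMAIL.sub("[EMAIL]", text)

print(redact("Draft a letter to J. Bloggs (j.bloggs@example.com) about the merger."))
# Draft a letter to [CLIENT] ([EMAIL]) about the merger.
```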


Firms might want to consider how their digital communications policies and procedures could be extended to language processing tools like ChatGPT. Where fee earners and partners currently use SMS or WhatsApp to communicate with clients, their messages should be backed up, managed, and secured. A firm’s IT team should also have a complete record of all messages sent via modern communication methods. Firms might consider adopting the same approach to AI. Keeping comprehensive registers of all data that is shared with language processing tools is the minimum.
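As an illustration of what such a register might look like in practice, the short Python sketch below appends one JSON record for every exchange with an external AI tool. The file name and fields are assumptions made for the example, not a prescribed schema.

```python
import json
from datetime import datetime, timezone

REGISTER = "ai_prompt_register.jsonl"   # assumed location of the firm's register

def log_ai_exchange(user: str, tool: str, prompt: str, response: str) -> None:
    # Append one record per exchange so the IT team holds a complete log.
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "prompt": prompt,
        "response": response,
    }
    with open(REGISTER, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_ai_exchange("a.associate", "ChatGPT",
                "Summarise clause 4 of the draft SPA.", "[model output]")
```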

Prioritising Cybersecurity

Cybersecurity concerns should be front and centre for any firm considering using language processing tools. When any new tool or technology is introduced to a firm’s workflow, it must be treated as a potential attack vector and secured accordingly. And if a user does not know exactly who has authority over the tools and technology they use for work, or how those tools hold, manage and potentially manipulate data, then they are leaving the door open to vulnerabilities.

ChatGPT’s advanced language capabilities mean that well-articulated emails and messages can be generated almost instantaneously. Bad actors can leverage this to create sophisticated phishing messages or even malicious code. While ChatGPT will not explicitly create malicious code, where there’s a will, there’s a way, and hackers have already discovered how to use ChatGPT to write scripts and malware strains.

As newer AI tools emerge, firms will need to remain vigilant and educate their lawyers about the risks at hand and everyone’s responsibility to protect themselves and the firm against potential attacks. Firms might need to conduct more in-depth security awareness training, or even invest in new technologies to combat AI-generated phishing attempts. Some newer, more advanced malware protection tools scan all incoming content, flagging or quarantining anything that looks suspicious or shows signs of a malicious footprint.
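For illustration only, the toy Python check below shows the scan-and-quarantine idea in its crudest form: flag messages containing simple phishing indicators. Real products rely on far richer signals (sender reputation, machine learning classifiers, attachment sandboxing) than these invented patterns.

```python
import re

# Invented indicators; real scanners use far richer signal sets.
SUSPICIOUS_PATTERNS = [
    re.compile(r"https?://\d{1,3}(?:\.\d{1,3}){3}"),          # raw-IP links
    re.compile(r"urgent|verify your account", re.IGNORECASE), # pressure phrases
]

def triage(message: str) -> str:
    # Quarantine anything matching a known-suspicious pattern.
    if any(p.search(message) for p in SUSPICIOUS_PATTERNS):
        return "QUARANTINED"
    return "DELIVERED"

print(triage("URGENT: verify your account at http://192.0.2.1/login"))  # QUARANTINED
print(triage("Agenda for Thursday's completion meeting attached."))     # DELIVERED
```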


AI natural language processing tools may well transform how we work forever. By leveraging the advanced capabilities of ChatGPT and other AI innovations, businesses are not far away from automating clerical or low-value tasks. However, as is the case when any new tool or technology is touted as the next big thing in business, potential adopters and users must be aware of both the risks and rewards. Partners and their firms must think critically about whether their infrastructures are ready for this disruptive tech, and how they can stay protected against any new security risks and threats. In doing so, we can embrace the AI revolution and make it a success for firms, partners, fee earners and clients.


Steve Whiter, Director

Appurity Limited

Clare Park Farm, Unit 2 The Courtyard Upper, Farnham GU10 5DT

Tel: +44 (0)330 660 0277

E: info@appurity.co.uk


Steve Whiter has been in the industry for 30 years and has extensive knowledge of secure mobile solutions. For over 10 years, Steve has worked with the team at Appurity to provide customers with secure mobile solutions and apps that not only enhance productivity but also meet standards such as ISO and Cyber Essentials Plus.

Appurity is a UK-based company that offers mobile, cloud, data and cybersecurity solutions and applications to businesses. Its staff draw upon a wealth of in-depth knowledge of industry-leading technologies to help clients develop secure and efficient mobile strategies. Working closely with technology partners including Lookout, NetMotion, Google, Apple, Samsung, BlackBerry and MobileIron/Ivanti, Appurity delivers mobile initiatives to customers across multiple verticals such as legal, financial, retail and the public sector.
