and procedures could be extended to language processing tools like ChatGPT. Where fee earners and partners currently use SMS or WhatsApp to communicate with clients, their messages should be backed up, managed, and secured. A firm's IT team should also have a complete record of all messages sent via modern communication methods. Firms might consider adopting the same approach to AI: keeping comprehensive registers of all data shared with language processing tools is the minimum.

Prioritising Cybersecurity

Cybersecurity concerns should be front and centre for any firm considering using language processing tools. It goes without saying that when any new tool or technology is introduced to a firm's workflow, it must be treated as a potential attack vector and secured accordingly. If a user does not know exactly who has authority over the tools and technology they use for work, or how these tools hold, manage and potentially manipulate data, then they are leaving the door open to vulnerabilities.

ChatGPT's advanced language capabilities mean that well-articulated emails and messages can be generated almost instantaneously. Bad actors can leverage this to create sophisticated phishing messages or even malicious code. While ChatGPT will not explicitly create malicious code, where there's a will there's a way, and hackers have already discovered how to use it to write scripts and malware strains. As other, newer AI tools emerge, firms will need to remain vigilant and educate their lawyers about the risks these tools present and about everyone's responsibility to protect themselves and the firm against potential attacks. Firms might need to conduct more in-depth security awareness training, or even invest in new technologies to combat AI-generated phishing attempts. Some newer, more advanced malware protection tools scan all incoming content, flagging or quarantining anything that looks suspicious or shows signs of a malicious footprint.

AI natural language processing tools may well transform how we work forever. By leveraging the advanced capabilities of ChatGPT and other AI innovations, businesses are not far away from automating clerical and low-value tasks. However, as with any new tool or technology touted as the next big thing in business, potential adopters and users must be aware of both the risks and the rewards. Partners and their firms must think critically about whether their infrastructures are ready for this disruptive tech, and about how they can stay protected against new security risks and threats. In doing so, we can embrace the AI revolution and make it a success for firms, partners, fee-earners, and clients.

About Steve Whiter

Steve Whiter has been in the industry for 30 years and has extensive knowledge of secure mobile solutions. For over 10 years, Steve has worked with the team at Appurity to provide customers with secure mobile solutions and apps that not only enhance productivity but also meet standards such as ISO and Cyber Essentials Plus.

About Appurity

Appurity is a UK-based company that offers mobile, cloud, data and cybersecurity solutions and applications to businesses. Its staff draw upon a wealth of in-depth knowledge of industry-leading technologies to help clients develop secure and efficient mobile strategies. Working closely with technology partners including Lookout, NetMotion, Google, Apple, Samsung, BlackBerry and MobileIron/Ivanti, Appurity delivers mobile initiatives to customers across multiple verticals such as legal, financial, retail and the public sector.

Contact

Steve Whiter, Director
Appurity Limited
Clare Park Farm, Unit 2 The Courtyard Upper
Farnham GU10 5DT
Tel: +44 0330 660 0277
E: info@appurity.co.uk
www.appurity.co.uk