ChatGPT is having a moment; its free public availability has propelled the possibilities of artificial intelligence (AI) into the mainstream and the public’s consciousness. Yet the technology that underpins this fascinating tool had been around long before its 2022 launch.
According to Liz Pinnock from RSM and Jeannie Marie Paterson from the University of Melbourne, the global legal sector has been benefiting from the use of AI for some time. In this article, they discuss how AI can be utilised by lawyers to enhance client services, the risks of its use, and what’s next for the tech transforming the legal landscape.
The legal sector’s digital awakening
Back in 2018, the Law Society of England and Wales published an article outlining six ways the global legal sector was already using AI in day-to-day work, including practice management automation, predictive coding, legal research and voice recognition. In the same year, Harvard Law School published an article stating that “Within a few years, AI will be taking over (or at least affecting) a significant amount of work now done by lawyers. 39% of in-house counsel expect that AI will be commonplace in legal work within ten years.” Both articles were published four years before ChatGPT even existed.
Jeannie Marie Paterson, Professor of Law and Co-Director of the Centre for Artificial Intelligence and Digital Ethics (CAIDE) at the University of Melbourne, commented, “Although tech in the legal sector has been around for some time, it has predominantly been used for what is sometimes called process automation, for example helping lawyers to manage large volumes of documents in due diligence or discovery.
“The use of AI in the legal space has evolved over time and we are now seeing lawyers using a variety of digital tools to make their workflow more efficient. For example, contracting software provides lawyers with the ability to find categories of clauses in large volumes of documents quickly and efficiently, or even identify the kinds of clauses that are missing from a draft contract presented to them. Lawyers may be using simple rules-based chatbots or expert systems to onboard new clients or guide compliance obligations under regulatory or licensing regimes.”
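To make the clause-finding idea concrete, here is a deliberately simplified sketch of the kind of check Paterson describes: keyword rules tag the clauses found in a draft and flag the categories that appear to be missing. Every category, pattern and sample clause below is an illustrative assumption; commercial contracting software relies on trained models rather than keyword lists.

```python
import re

# Hypothetical keyword rules for a few common clause categories.
# Real contract-analysis tools use trained models, not keyword lists.
CLAUSE_RULES = {
    "confidentiality": re.compile(r"\bconfidential(ity)?\b", re.IGNORECASE),
    "termination": re.compile(r"\bterminat(e|ion)\b", re.IGNORECASE),
    "indemnity": re.compile(r"\bindemnif(y|ication)\b", re.IGNORECASE),
    "governing law": re.compile(r"\bgoverning law\b", re.IGNORECASE),
}

def categorise_clauses(contract_text: str) -> dict:
    """Tag each paragraph of a contract with the clause categories it matches."""
    found = {category: [] for category in CLAUSE_RULES}
    for paragraph in contract_text.split("\n\n"):
        for category, pattern in CLAUSE_RULES.items():
            if pattern.search(paragraph):
                found[category].append(paragraph.strip())
    return found

def missing_clauses(contract_text: str) -> list:
    """List clause categories with no match, for a lawyer to review."""
    found = categorise_clauses(contract_text)
    return [category for category, hits in found.items() if not hits]

# A sample draft with no indemnity clause.
draft = (
    "Each party shall keep the other's information confidential.\n\n"
    "Either party may terminate this agreement on 30 days' notice.\n\n"
    "This agreement is subject to the governing law of England and Wales."
)
print(missing_clauses(draft))  # ['indemnity']
```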
Liz Pinnock, Director and Head of Group Legal at RSM South Africa, added, “AI has certainly taken the legal industry by storm. Various sectors of the industry such as traditional law firms, legal consultants and even in-house legal departments are now looking to AI-powered solutions to increase efficiencies.
“Although AI has previously been in use for discovery purposes during litigation, the legal industry is now looking at other ways in which to use AI. We have seen countries such as the United States, Germany, the United Kingdom, Australia, and China become the front runners in the use of AI in the legal industry in recent years.
“At RSM, we believe in being at the forefront of technological developments. In our South African offices, for example, we have used the technology to formulate fraud risk assessments for clients in the financial services and multimedia industries. The AI has been helpful in providing industry-specific information, which we then review and expand on for our clients in these assessments. AI-powered software can also be used to analyse large volumes of data and contracts, summarising them and raising red flags that legal professionals can then explore further.”
The generative language capacity of ChatGPT raises the potential for AI to augment lawyers’ skills even further. But there are a number of challenges and hurdles to overcome.
AI risks in legal work
A 2023 survey by the Thomson Reuters Institute, which gathered insights from more than 440 lawyers at large and midsize law firms in the United States, the United Kingdom, and Canada, found that lawyers are taking a cautious approach to the use of AI. 62% of the lawyers surveyed (80% of partners or managing partners) said that they had concerns about the use of ChatGPT and generative AI in their work. Liz commented, “There are a number of risks associated with the use of AI in legal work. These include both the quality of the output and the potential for bias. The output is based on, and is only as good as, the data on which humans train the AI solution, which, if biased or incorrect, can lead to inaccurate results.
“As the AI requires access to data to analyse and base its output on, there are significant risks of data security and intellectual property breaches. Confidential data and intellectual property may be entered into AI solutions in early-stage use, particularly for research purposes. This may undermine the security of the data, including personal data, and/or the value of the intellectual property, as it then becomes publicly available and therefore easily accessible to other users, including competitors.
“There is also a real risk of legal professionals becoming overly reliant on AI solutions, which is a problem because AI lacks human reasoning. Legal professionals are bound by ethical obligations, and the irresponsible use of AI may result in civil liability as well as reputational damage.
“The key principles for legal professionals to exercise when using AI solutions are critical thinking, professional scepticism, independent judgment, and responsible use. These principles will ensure that legal professionals apply their minds whenever they use AI solutions.”
Jeannie added, “It is imperative that industry professionals consider the risks of AI technologies in their client work now, because the consequences if they don’t could be huge. If a legal firm gives its client incorrect information and the client relies on it, the firm is liable for the losses incurred, which could be significant. Lawyers need to understand that by sharing AI-generated information with their clients, they are effectively endorsing it, and they therefore need to take full responsibility for the quality of that advice.
“So legal professionals will need to become adept not only at using the technology but at scrutinising its outputs before they are used in live client projects. They will also need to become more aware of other risks associated with AI, such as privacy, confidentiality, bias, and discrimination. We have seen instances where sensitive client data has been leaked, and this risk is heightened when using AI without appropriate controls in place to manage data security.”
A striking example of the risks AI can introduce into legal work was reported in May 2023, when a lawyer in the United States found himself in hot water with the courts while representing a man in a personal injury lawsuit. He had submitted a federal court filing that cited at least six cases that simply did not exist. The reason for this unusual oversight? The lawyer had relied on the AI chatbot ChatGPT to compile the details in the filing and failed to verify the authenticity of the references it used. Unfortunately for the lawyer, his firm and their client, the cases ChatGPT inserted into the draft were fictional; the chatbot had fabricated the sources to make its response sound more authentic.
According to experts, the technology is notorious for generating inaccurate information, a phenomenon sometimes called hallucination. This arises, Jeannie Marie Paterson explains, ‘not because the AI has some malevolent purpose, but because it is in very simple terms producing content by predictions drawn from its vast training data set’.
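Paterson’s explanation can be made concrete with a toy example. In the sketch below, the ‘model’ is a hand-written lookup table of invented probabilities, but the generation loop mirrors the basic principle of a large language model: it repeatedly appends whichever continuation is most probable given the preceding words, and nothing in the process checks whether the resulting sentence is true.

```python
# A toy next-word predictor: invented probabilities standing in for a large
# language model's training data. Nothing here checks facts.
NEXT_WORD_PROBS = {
    ("the", "court"): {"held": 0.6, "dismissed": 0.3, "adjourned": 0.1},
    ("court", "held"): {"in": 0.7, "that": 0.3},
    ("held", "in"): {"Smith": 0.5, "Jones": 0.5},  # plausible-sounding names,
    ("in", "Smith"): {"v.": 0.9, "and": 0.1},      # not real citations
    ("Smith", "v."): {"Jones": 0.8, "State": 0.2},
}

def generate(prompt, steps):
    """Greedily append the most probable next word after each word pair."""
    words = list(prompt)
    for _ in range(steps):
        options = NEXT_WORD_PROBS.get(tuple(words[-2:]))
        if not options:
            break
        # Pick the highest-probability continuation; truth plays no role.
        words.append(max(options, key=options.get))
    return " ".join(words)

print(generate(["the", "court"], steps=5))
# Output: "the court held in Smith v. Jones" (fluent, confident, and invented)
```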
Similar complaints have been made about Google’s competing product, Bard. OpenAI, the company that created the technology behind ChatGPT, offers a tool to detect inaccuracies in AI-generated outputs, but it only has a 20% accuracy rate at spotting the errors. OpenAI and others are now offering business customers the opportunity to train the foundation model on their own data for more accurate outputs. Jeannie observes that the ‘need for caution remains, as the technology is still only predictive’. She continued, “Managing the risk of inaccuracies in the use of AI technology comes back to the importance of robust risk management strategies and careful design of how the human and the AI interact.
“One key risk management strategy is to ensure the people using the technology talk to the IT team about how it works. They will then have more of an understanding that the AI is, at most, mimicking human intelligence by applying mathematics to large volumes of data. From this perspective, it is not surprising that a tool like ChatGPT may produce fantastic prose and sound eloquent in its answers while the information those answers are based on has no empirical validity when interrogated by an expert in the field.
“Additionally, the role of the human in scrutinising the use of AI tools shouldn’t be left to chance. Firms need to have a clear and evolving position on how they work with new technologies, and processes for ensuring that stance is embedded across all aspects of their practice.”
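One way to stop that scrutiny being left to chance, and to guard against exactly the fabricated-citation failure described above, is a simple verification gate: an AI-drafted filing does not proceed until every citation in it has been matched against a trusted source. The sketch below is purely illustrative; the citation pattern and the hardcoded ‘verified’ set are stand-ins for the authoritative case-law databases a firm would actually query.

```python
import re

# Stand-in for a firm's trusted sources; a real workflow would query an
# authoritative case-law database rather than a hardcoded set.
VERIFIED_CITATIONS = {
    "Donoghue v Stevenson [1932] AC 562",
}

# Hypothetical pattern for "Party v Party [year] reporter page" citations.
CITATION_PATTERN = re.compile(r"[A-Z][a-z]+ v\.? [A-Z][a-z]+ \[\d{4}\] [A-Z]+ \d+")

def unverified_citations(draft: str) -> list:
    """Return every cited case a human must verify before the draft is used."""
    return [c for c in CITATION_PATTERN.findall(draft)
            if c not in VERIFIED_CITATIONS]

draft = (
    "As in Donoghue v Stevenson [1932] AC 562, a duty of care arises. "
    "See also Bloggs v Acme [2019] XY 123."  # a citation the AI invented
)
flagged = unverified_citations(draft)
if flagged:
    print("Do not file. Verify these citations first:", flagged)
```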
Critical skills for a digital future
According to the Law Society’s report, ‘Future Skills for Law’, automation, changing client demands and new generations of workplace tech are likely to significantly change the type of talent required by law firms and in-house legal departments in the future.
Jeannie commented, “As mentioned, the emergence of this AI technology in industries such as professional services requires some understanding of, and the appetite to use, maths and statistics. For some lawyers, this may not be an appealing thought, because historically they haven’t needed these skills in their work. Looking ahead, lawyers will need a preparedness to engage with data and data-driven technologies such as AI, and an ability to adapt as the technology changes.”
She also emphasises the element of thoughtful design in how AI tools are positioned to genuinely augment human skills: “We shouldn’t assume AI is replacing humans. Nor should we assume the human is merely the rubber stamp for the AI. Rather, I think the interesting question is how we build effective human and computer collaborations, in a way that allows us to get the best out of both players: the technology and the human professional.
“This would mean resetting our perception of what an AI solution like ChatGPT is. Instead of considering it a fact checker or a tool to retrieve public information, we should look at the extent to which we can extend human expertise using the capacity of these technologies. At the same time, we could be honing the special human skills of the professional: for example, using the data to really understand the client’s business, give them a competitive edge, and build rapport and loyalty with them.
“The key is to engage the next generation of professionals: those starting their careers in the legal sector who have grown up with technology firmly at the centre of their daily lives and studies. The partners and senior managers driving decision-making at firms may have more of a learning curve, so new talent needs to be brought into the conversation early on to help influence and drive the transformation.”
Liz added, “There does appear to be a perception in the legal fraternity that the use of AI will require the engagement of costly technological experts. This perception may be valid due to, amongst other reasons, the overwhelming number of legal AI solutions on the market with various functionalities.
“In reality though, the cost will depend on several factors including, but not limited to, the solutions deployed, the level of support required, the level of customisation required, the amount of training needed and the number of users. The deployment of AI will inevitably require monetary investment; however, legal analysts agree that as adoption of AI solutions grows over time, costs are likely to decrease significantly.
“Ultimately, future legal experts need to be creative problem solvers with the ability to work efficiently and adapt to change. Routine legal work that is more administrative by nature will be done by AI. So, legal experts will need to be well versed in the use of technology to aid in quality and timely decision-making as well as providing sound advice. Also importantly, future legal experts will have to remain critical thinkers and think independently of technological tools.”
The law is playing catch-up
Commenting on whether international regulation has kept pace with the acceleration of AI technology, Jeannie said, “The ethical considerations of how AI is used around sensitive data also need to be factored in. The law hasn’t quite caught up with this yet, but we do know that ethical frameworks and standards are being developed globally, and these will need to be woven into a firm’s risk management systems, governance strategy and employee training programmes.”
Liz added, “We are seeing lawmakers in several countries where AI is developing start to respond to the risks posed by AI. This is being achieved by either extending the applicability of existing laws (such as data privacy laws) to consider such risks or developing new laws to address these risks. For example, the United States is currently seeking public comments on accountability measures for AI.
“Additionally, some international associations are developing frameworks and guidelines to combat the risks, such as the proposed regulatory framework on AI being developed by the EU, which will follow a risk-based approach. Interestingly, Stanford University’s 2023 AI Index showed that 37 AI-related bills were passed into law globally in 2022, with the United States in the lead, having passed nine. This is an indication that AI will become increasingly integrated into our laws in the future.”
How AI is shaping legal services of the future
Commenting on how AI will develop in the legal sector in the future, Jeannie said, “It’s difficult to predict where the technology is going to go from here since we have seen a rapid evolution in just the last 12 months alone, but the genie is definitely out of the bottle. We can expect the technology to become more nuanced to particular industries over time. Now is the time for professional service firms to consider how they can use it effectively and what it will mean for client relationships. It’s not going to replace lawyers or consultants but it’s going to require a change in the way those professionals deliver their service.
“We can expect lawyers to adopt the technology in ways that are beneficial to them and to their clients, but they will need to be very transparent about where and how they are using it in order to protect the client’s trust that is so often hard won.”
Liz concluded, “It is reasonable to conclude that AI will dominate the legal sector in the next five to ten years, and that legal professionals who are not using AI to increase efficiencies will fall behind significantly, to the point of becoming redundant. AI may also make certain job functions and positions redundant, such as legal researchers and highly administrative legal roles that don’t require the application of legal knowledge. On a positive note, new roles will also be created, such as that of the legal AI specialist.
“Ultimately, AI can advance growth in the sector: driving huge efficiencies, streamlining cross-border legal work, enhancing the quality of legal work, and building closer relationships between lawyer and client. But these benefits will only be realised if lawyers understand and can mitigate the risks involved.”
Find out more about RSM’s global legal services here.