ARTIFICIAL INTELLIGENCE AND THE LAZY LAWYER: Will the Machine Truly Replace Human Expertise in the Legal Arena?

AUTHORS:  Zama Ngcobo & Tamlynn Caelers-Avis

Artificial Intelligence, commonly known as AI, is a trending topic of discussion. The conversations surrounding AI range from its benefits to its potential for misuse to its inherent weaknesses. WMN Inc. previously issued an article about the positive uses of AI within the construction industry; however, it is important to note that any discussion of AI must also include the necessary caveats that lawyers are legally obligated to provide.

This article, however, will focus on the use of AI within the legal fraternity and whether AI can replace lawyers.  

Let’s start with the best-known AI application currently in use, and the one that has probably garnered the most publicity, both positive and negative – ChatGPT. ChatGPT is an application that allows an individual to type in a request or prompt and receive a generated response based on the information on which it has been trained. For instance, if you are drafting a letter and need it to sound more professional, you can input your sentence and ask ChatGPT to provide a more businesslike alternative. From there, you can amend the response to suit your purposes. If used correctly, this can help improve vocabulary and language skills.

A crucial point in this discussion is to emphasize the responsible use of ChatGPT and to recognize its limitations. A trend emerged where people would ask ChatGPT a question on a specific topic and ChatGPT would produce an entire response, which would then be submitted as an answer or presented as fact. The obvious danger of this is that ChatGPT can only answer based on what it has been trained on, and it does not guarantee factual accuracy when doing so. Further, universities have already failed students for using the application to complete assignments and homework, and are being forced to rethink their policies on plagiarism as more and more students turn to the application to draft articles and reports. Given that the application rewords existing sources without affording them the appropriate credit, these students are unintentionally committing plagiarism.

Students looking for shortcuts, while not admirable, is hardly newsworthy. This year, however, there have been two cases of professionals seeking a shortcut through the use of ChatGPT and facing serious consequences as a result.

In the case of Mata v. Avianca, Inc., No. 1:2022cv01461 - Document 55 (S.D.N.Y. 2023) (the New York case), the public was given a lesson in the importance of the responsible use of AI tools like ChatGPT in legal proceedings. In June 2023, two lawyers fighting for their client’s case to proceed used ChatGPT to provide the judge with a ten-page brief setting out various cases that supported their client’s position. However, the lawyers failed to determine whether those cases actually existed, instead choosing to accept ChatGPT’s assurance that the cases could be found in reputable databases. As a result, the judge hearing the matter fined the two attorneys $5,000 (five thousand US dollars) and ruled that the lawyers had acted in bad faith and had “made acts of conscious avoidance and false and misleading statements to the court”. However, the judge also stated that “Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance”. Nonetheless, he added, “existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings”. Thus, the issue was not the use of ChatGPT itself, but that the lawyers in question failed to verify and check the results presented by ChatGPT and, more so, failed to appreciate what Professor Moses of the UNSW Allens Innovation Hub so succinctly put: “[ChatGPT] has no truth filter at all. It's not a search engine. It's a text generator working on a probabilistic model.” So, as another lawyer at the firm pointed out, it was a case of ignorance and carelessness rather than bad faith. However, as any attorney knows, being ignorant and careless can be enough to attribute bad faith and negligence to an attorney, which carries its own set of consequences.

The 2023 incident cited above serves as a cautionary tale, but it would seem that the public embarrassment of the New York attorneys was not enough to deter the use of ChatGPT by some attorneys closer to home in South Africa. In July 2023, approximately a month after the debacle in the New York federal court, attorneys in the Johannesburg Regional Court, litigating on behalf of their client, a Ms Parker, against body corporate trustees, were also slapped on the wrist for using ChatGPT to generate citations in support of their case. In this instance, the parties had been granted a postponement to conduct additional research on whether or not a body corporate could be sued for defamation, a question which could have disposed of the matter. The attorneys for Parker sent their opposing attorneys a list of eight cases in support of their client’s position; however, the attorneys for the body corporate were unable to locate any of them. The advocate for Parker admitted that her attorneys had used ChatGPT to provide citations and case law, without any further research being done.

What set this case apart from the New York case was that the cases were not submitted to the magistrate but were presented solely to the opposing party. So, while there was a delay in the case, the magistrate took the view that there was no deliberate attempt to mislead the court and that the attorneys were merely “overzealous and careless”. A cost order, calculated on the attorney-client scale for the period between 28 March 2023 and 22 May 2023, was imposed against Parker. The magistrate also opined that the embarrassment caused to the attorneys, coupled with the costs requested by the Defendant for the time wasted, constituted a just penalty. It is noteworthy that, given attorneys’ and advocates’ roles as officers of the court who are meant to serve justice, it seems strange that the magistrate did not deem the attorneys’ actions an attempt to mislead the court – especially in light of a postponement granted on the strength of the attorneys’ assertion that pertinent and relevant case law existed, an assertion subsequently proven to be inaccurate, which caused delays and wasted the time of the Johannesburg Bar Library, called upon to assist through its case retrieval efforts.

As at the date of this article, it appears that attorneys are now doing their research, checking citations, and not simply taking ChatGPT’s responses at face value. The actions of the attorneys in the situations described above are not actions that should be associated with the legal profession. Attorneys are expected to always act in the best interests of justice and their clients, and are held to a higher standard than the average person. It remains to be seen whether these attorneys will be taken to task by clients who may have a claim of negligence against them, especially where costs were ordered against a party due to the attorney’s actions.

It is important to mention that the use of ChatGPT as a tool to assist in research and drafting is not the issue; as noted by various people, including the judges quoted above, technology advances and it is there to be used. The idea that a person can type in a topic or sentence and obtain some sort of base from which to start researching or drafting is fantastic. However, this requires a significant amount of understanding as to what is being researched and how to extract what is useful and factual from what is not.

In this regard, the idea of AI being used by lawyers to research more efficiently and draft better letters – more plain-language and concise letters that the layperson can understand – is great. However, at this stage, we are in the infancy of artificial intelligence. Currently, law firms are researching how to streamline template creation and enhance customer service through the use of artificial intelligence. However, with great power comes great responsibility, and just because artificial intelligence can comb through massive amounts of data and understand (to a degree) how interactions take place does not mean that lawyers can simply hand over their drafting and client relations to an application. While there are law firms that have successfully introduced artificial intelligence into case management (the drafting of templates, etc.) and developed 24/7 client response systems (which still require human response and intervention at times), artificial intelligence still requires a remarkable amount of time and investment from its users – whether lawyers preparing for cases or trying to assist clients, or clients trying to prepare themselves before they go and see a lawyer in the first place.

In addition to this investment of time, confidentiality is a key component of the attorney-client relationship, and the use of artificial intelligence to assist lawyers has the potential to cause some very uncomfortable and perilous breaches of confidentiality if not implemented correctly. This raises further questions about how AI technology works, and obliges legal practitioners to use secure AI platforms with strong encryption protocols, so that practitioners can be assured that sensitive legal content processed by the AI tool is safeguarded. Thus, responsible use of AI in the legal field extends to a thorough understanding of, and inquiry into, how the technology works – and, more critically, an awareness of the AI security features offered, such as the monitoring of data transmission and the evaluation of storage protocols. Legal practitioners also need to review the terms of service or user agreements that the AI provider outlines. This offers practitioners a means to ensure that the provider’s security practices align with their confidentiality requirements.

Also, while certain legal consultants have managed to develop systems in which simple legal questions are answered, the fact that human beings still have to be consulted on more complex matters indicates that lawyers are unlikely to be entirely replaced by a machine. Certain questions need to be mulled over and require capabilities beyond those of a computer. On a more “cuddly”, or interpersonal, note, people don’t always want their lawyers to be soulless textbooks; they seek trust and rapport established through human interaction. Clients, as a matter of preference, may want a human being sitting across the table to listen and engage, whether to validate or challenge the plausibility of their case.

Therefore, it is overly simplistic to say that artificial intelligence within the legal profession is troublesome or may cause issues – it may in fact streamline case management and assist in expeditious service delivery. It is, however, the abuse of artificial intelligence and the laziness and/or carelessness of its users that can definitely create issues.

So, does or will artificial intelligence pose a threat to the legal profession? The authors do not believe so. Lawyers practise one of the oldest professions in the world for a reason – we serve to fill a gap in oversight capabilities that cannot be easily or exhaustively filled; we serve and protect justice while offering comfort, and it is infinitely harder to receive comfort from something that does not have a physical form. Further, the law, which is studied by people around the world, is ever-changing, so knowledge is ever-growing. The practice of law is not just about “knowing” the law; it is about knowing how to interpret it, knowing when it is applicable, and knowing when it is practical to fight or defend a legal matter. While artificial intelligence is ever-evolving, it seems unlikely that it will ever reach a stage where it will replace the human elements of the law which are so important.