Artificial Intelligence and the Practice of Law in the 21st Century
6.19.2023
The legal system has never been quick to embrace change. Whether it is electronic research, electronic filing, video depositions or artificial intelligence, most of the legal system will be dragged into the future kicking, screaming and holding on to whatever antiquated methods it can sink its claws into. This is, of course, true not just of the courts or the county clerk’s office but also of law firms. It is not even a question of money, as many big law firms could easily spend what’s necessary to truly improve their efficiency and profitability with the newest tech. Surprisingly, it is often the smallest firms, with the smallest budgets, that embrace technology, realizing that they need to innovate to survive. Additionally, it is often the small firms that can experiment with various tech solutions to optimize efficiency, while the bureaucratic process of implementing technology at the big firms often defaults to the tried-and-true solution before any implementation talks can even start.
To make life more interesting, the universe threw us a curveball in 2020. Offices were shuttered, and the world went remote. We were all forced to turn to technology to find solutions to problems we previously did not even consider. Some of those solutions are here to stay (like expanded electronic filing), while others may or may not gain a foothold now that we are all able to go places again (like the dreaded cattle call and in-person depositions). At the very least, just about every lawyer was forced to learn to video conference, do basic work with PDFs, including e-filing, and become much more comfortable with e-signing. So where does our technological toolbox go from here?
It would be difficult to find anyone now who opposes electronic filing. That is now a technology that we have accepted as functional and convenient. The jury is still out on virtual depositions and trials, with many lawyers arguing that it’s more difficult to examine a witness or get a sense for the judge or jury when everything is remote. That may very well be true. The counterargument is that it substantially reduces the costs of litigation, which may be just as true.
However, lawyers are now facing a deluge of technological solutions, many of which are overlapping and overwhelming and whose purposes are often questionable. What’s worse is that much of this technology only works well if both sides are using it, such as AI-based contract negotiation and drafting tools or virtual deposition software, and some of it would have to be formally adopted by the courts before it can be used; for example, New York State courts have adopted Microsoft Teams.
While these challenges are surmountable, we are also being offered tools that include predictive and generative AI. These tools are shiny and exciting and, for those of us who are into tech, are simply droolworthy. But do they actually work and produce a net benefit in the practice of law? To varying extents.
Predictive AI has been around for quite a while, and most of us began using it without really acknowledging its existence. What is it? One example is when you type a text message on your smartphone and the messaging app offers to complete the word you started typing, having “predicted” what word you were intending to use. The technology then got better, and eventually you saw proposed words under where you were typing, as your device actually tried to predict the most likely candidates for each next word. Was it always correct? Of course not. But often, you could put together a simple responsive text message just by using the predictive offers for each next word. It wouldn’t necessarily end up being what you wanted to say, but it would be contextually appropriate and grammatically correct. In other words, predictive AI used what was said to you and what you started responding with as clues to what the most likely next words could be.
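To make the idea concrete, here is a deliberately over-simplified sketch in Python of the same principle: count which word most often follows each word in a body of text, then “predict” by suggesting the most frequent follower. Real keyboards and AI models are vastly more sophisticated, but the underlying logic of guessing the next word from what came before is the same.

    # Toy next-word "prediction": tally which word follows each word in some
    # sample text, then suggest the most frequent follower. This is only an
    # illustration of the principle, not how any real keyboard or model works.
    from collections import Counter, defaultdict

    sample = (
        "please send the contract today "
        "please send the contract tomorrow "
        "please review the contract carefully"
    )

    following = defaultdict(Counter)
    words = sample.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1  # record each observed "current -> next" pair

    def suggest(word):
        """Return the word most often seen after `word`, or None if unseen."""
        candidates = following.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    print(suggest("please"))  # "send" (seen twice, versus "review" once)
    print(suggest("the"))     # "contract"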
Another great example of predictive AI is when your smartphone reviews all your photos and prepares proposed collections or collages for you, based on what it expects you would want to see, using your photographing and photo viewing history.
Have we accepted predictive AI into our lives without knowing it? Most likely. What’s interesting to note, though, is that we have all accepted it to varying degrees and often not by our own choice, but by the terms of service we agreed to with any number of tech companies. In practice, predictive AI, like anything that tries to give you an intelligent output, benefits from input: for the best predictive experience, you would want the system to be “trained” on as much of your material as possible. Obviously, if Google reads everything I ever write, it will have an easier time predicting the next word in what I am writing, based on my writing style, than if it only had two emails to work with. So, in effect, everything we do to maintain control and security over our documents and communications also reduces the predictive AI’s ability to help us, and the tradeoff becomes a choice between efficiency and security. As lawyers, we must walk that fine line very carefully and make sure that our sensitive data does not become part of a public AI.
If that wasn’t confusing enough, out comes generative AI. In many ways, generative AI is still a form of predictive AI, but on a much grander scale. While my SMS app will take a stab at offering me proposed short answers, a generative AI can do so much more. You can ask it to write a poem about the dangers of technology in the style of Edgar Allan Poe, and it will produce one. It’s difficult to believe that AI has any level of “creativity,” though, so we need to look at the math behind this. In essence, it’s the same idea as the predictive AI that suggests one word at a time. However, generative AI is “trained” on a wide range of outside sources. So now, as it writes the poem you requested, each next word that it selects must make it through quite a few predictive conditions, such as being topical and responsive to the query, being in the “style” of Edgar Allan Poe, rhyming and so on. In other words, this is the natural evolution of predictive AI. How good is it? Well, mileage may vary . . .
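For those curious what “asking” a generative model looks like under the hood of these tools, the sketch below uses OpenAI’s Python client as it existed in mid-2023; the exact model name and call signature vary by provider and library version, so treat it as illustrative rather than definitive.

    # Illustrative only: request the Poe-style poem described above from a
    # hosted generative model. Assumes the OpenAI Python client circa
    # mid-2023 and an API key in the OPENAI_API_KEY environment variable.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Write a short poem about the dangers of technology "
                       "in the style of Edgar Allan Poe.",
        }],
    )

    print(response.choices[0].message.content)  # the model's "predicted" poem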
More than anything, it is important to understand that generative AI is not ready for prime time. In other words, it is a tool and cannot be relied on to create a finished product. In the end, we all know that we, as attorneys, are the ones responsible for the documents we generate, and neither a court nor a client will be willing to give you a break because your AI messed up. If you are the adventurous type and want to embrace the cutting edge, just be aware that it’s quite sharp.
This, of course, raises the question of what you can use generative AI for right now in your practice area, and the potential pitfalls of doing so. Using AI to generate templates for transactional documents or general documents is quite easy at this point. You can ask it to prepare a draft, which will by no means be final, and save yourself from typing or pasting in all the basic clauses. However, you should still review and edit. In my personal experience, I have found that AI can generate documents at about the same level as a law student in his or her 2L year. Good enough to call a first draft, sure, but certainly nothing I would send to a client or opposing counsel without first reviewing every word. AI is impressively useful if you want a few proposed versions of a contract clause. Often, if I find that a client or opposing counsel is unhappy with the wording of a clause, I will drop it into a generative AI and ask it to provide me with five alternative wordings. I review all five to make sure there are no substantive differences, edit as needed and then send them off to the recipient to pick one. Often, the same clause restated a different way is more acceptable to the reader, and the AI can generate the five versions in under a minute, whereas I would waste 15 minutes of my time (and my client’s money) doing that same task by hand. Essentially, generative AI is excellent at providing templates.
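If you want to standardize that request, one low-risk approach is to assemble the prompt yourself and paste it into whichever AI tool you use. The helper below is a hypothetical convenience function, not any vendor’s API; it only builds the text of the request.

    # Hypothetical helper that assembles the "five alternative wordings"
    # request described above. It only builds prompt text; paste the result
    # into your generative AI tool, then review every alternative yourself.
    def alternative_wordings_prompt(clause, n=5):
        return (
            f"Rewrite the following contract clause in {n} different ways. "
            "Keep the legal substance identical; change only the wording. "
            "Number each alternative.\n\n"
            f"Clause:\n{clause}"
        )

    sample_clause = (
        "Tenant shall not assign this lease or sublet any portion of the "
        "premises without the prior written consent of Landlord."
    )
    print(alternative_wordings_prompt(sample_clause))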
However, you must remember that the AI is simply predicting what you want to hear, based on its library of available data. As such, its persuasive argument skills are quite good, but limited to arguments previously made. In other words, it is not all that useful for writing legal arguments, since it can’t treat cited authorities as entire blocks of thought, but rather takes elements from various sources and mashes them together. It also does this with the citations themselves, often producing references to cases that don’t exist.
Finally, remember that the various generative AI tools are still in beta and often have terms of service that give the company that owns that AI bot unlimited use of your query data, so be sure not to form your queries in such a way that you breach attorney-client privilege or other confidentiality requirements. For example, if you’re having it draft a medical record demand under HIPAA, enter fictitious names, social security numbers, birthdates and the like. In fact, if you don’t provide that information to the AI, it will generally prepare a template letter, with fields (like [INSERT BIRTHDATE HERE]) right in the letter, so that you can populate the sensitive data manually.
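One simple way to honor that practice is to let the AI produce the bracketed template and then fill the sensitive fields locally, on your own machine. In the sketch below, the placeholder names are hypothetical; match them to whatever bracketed fields appear in your actual draft.

    # Fill an AI-generated template locally, so real client data never goes
    # into the AI query itself. The placeholder names here are hypothetical;
    # use whatever bracketed fields your draft actually contains.
    template = (
        "Re: Request for medical records of [INSERT PATIENT NAME HERE], "
        "date of birth [INSERT BIRTHDATE HERE], pursuant to HIPAA."
    )

    client_data = {
        "[INSERT PATIENT NAME HERE]": "Jane Doe",
        "[INSERT BIRTHDATE HERE]": "January 1, 1980",
    }

    letter = template
    for placeholder, value in client_data.items():
        letter = letter.replace(placeholder, value)  # substitution happens locally

    print(letter)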
Where does AI go from here? In the last few years, we have seen staggering advances in this technology; whereas a year ago it was barely able to do the basics, it can now produce an entire essay, article or even book that will, at least on its face, appear well-written. Some law firms have been using chatbots that simulate human conversational capabilities to provide information regarding services offered by their lawyers, the status of their clients’ proceedings or simply a firm’s contact details, opening hours or the steps to schedule an appointment with an attorney. A newer and more powerful chatbot, ChatGPT, built by OpenAI on its GPT-3 series of language models, seems to be able to perform more complex tasks, including, potentially, legal research, by scanning through large amounts of text data and providing relevant information on a given topic, and legal analysis, by providing suggestions and insights based on its understanding of the relevant legal principles and precedent.[1] In February, a big law firm announced the introduction of an AI chatbot called Harvey to help its lawyers draft contracts and prepare documents for mergers and acquisitions.[2]
The takeaway is not that you should grab onto generative AI with both hands, as it is still quite dangerous to let loose. But neither is the takeaway to fear the technology. For the best results, start playing around with generative AI on simple tasks first, such as having it prepare a collections letter for you or a demand letter for unpaid rent – something simple and easy to check over. Start using AI to generate individual clauses for you when working on a contract or rider. Check its work, always. As you get the hang of what the AI is good at versus where its weaknesses are, you will become more comfortable with using it.
Remember, in the end, AI is a tool, and, like any tool, it will make your job easier. However, we are nowhere near the level of technology where the tool will replace you and the human element of relationships with clients (an empathetic attorney decreases attrition, builds relationships and drives client satisfaction). Your biggest risks are (a) adopting the technology without oversight and cross-checking its work – the equivalent of hiring law student interns and declaring their first draft to be the final product, without attorney review; and (b) not adopting the technology at all, which over time will make you less efficient and therefore less price-competitive than your colleagues, as you will still be doing repetitive tasks by hand.
Alexander Paykin is managing director and owner of The Law Office of Alexander Paykin in New York City and Long Island. He focuses on commercial and real estate litigation and complex transactions. He is co-chair of the Committee on Technology and the Legal Profession and serves on the Civil Practice Law & Rules, Law Practice Management, and Law, Youth and Citizenship committees of NYSBA and also serves on the Legal Technology Resource Committee, Productivity & Knowledge Strategy Committee, Pro Bono & Public Service Committee of the ABA and is a member of the ABA TechShow Board. He regularly teaches CLEs for NYSBA on technology and the practice of law.
[1] See Andrew Perlman, The Implications of ChatGPT for Legal Services and Society, The Practice, Harvard Law School, March/April 2023, https://clp.law.harvard.edu/knowledge-hub/magazine/issues/generative-ai-in-the-legal-profession/the-implications-of-chatgpt-for-legal-services-and-society.
[2] See Arthur Piper, ChatGPT and the Legal Profession, International Bar Association, April/May 2023, https://www.ibanet.org/ChatGPT-and-the-legal-profession.