Last week I attended the New Jersey Bar Association’s Annual Meeting, which provides required continuing education training for lawyers. The main theme of this year’s meeting was Artificial Intelligence, or A.I. And it was a thoroughly frightening experience. Now I don’t pretend to be any sort of computer expert, much less an authority on A.I., but as a lawyer, I’m concerned. Most of the stories we’ve been hearing about A.I. involve the manipulation of images and the creation of what appear to be real events which never happened.
Then there was the advent of ChatGPT, which apparently can be asked to create works that ought to be the fruits of human labor, such as poems and term papers. That was bad enough, but the seminars I attended were about the use of A.I. technology in the field of legal services. The more I learned about this new technology’s intrusion into the practice of law, the less I liked it. From what I learned, A.I. has some relatively innocuous uses for legal practitioners. For instance, it’s apparently quite good at cataloging, organizing, and keeping track of documents. That’s useful in multi-defendant or class-action lawsuits where millions of documents may be relevant to the case. But even in those cases, I foresee difficulties.
Once a computer search program decides which documents are relevant and which are not, that becomes the universe of evidence in the case. Whether human effort might have uncovered relevant documents that evaded the search terms, or that were disguised in some way to prevent their discovery, will never be known, to the detriment of the plaintiffs.
I suppose it’s the inevitable extension of our computer-run society. I’ve been worried about this for years. When no one bothers to read a book or to find out the truth for himself, then reality is whatever your phone or computer tells you it is. The facts no longer matter, and if history is altered, no one is the wiser. It’s scary, and now it’s coming to my profession too.
We had an ethics lecture that featured a story about a personal injury lawsuit that bounced from state to federal court. The plaintiff’s attorneys filed a legal brief on behalf of their client. The lawyers used an A.I. program to write the brief. The only problem was that neither the opposing lawyers nor the judge could locate any of the cases cited by the plaintiff in his computer-generated brief. The court then directed the lawyers to certify that the cases they cited were genuine legal opinions. That’s the equivalent of a judge telling a lawyer, “Please prove to me that you’re not a lying shit.” So what the dumb bastard lawyers did was ask the A.I. program, “Are the cases you gave us real cases?” Of course, the A.I. program told them, “Sure they’re real.” So the stupid asses signed an affidavit swearing the cases were real. When it became apparent that they were fake, the shit hit the fan, and the lawyers wound up being sanctioned.
That’s an extreme example of A.I.-generated idiocy, but there are less startling implications. The legal profession is unfortunately populated with a goodly number of, shall we say, fly-by-night, corner-cutting practitioners. Apart from a few true shysters and thieves, these are mostly well-meaning, but not too smart, practitioners who flit from court to court trying to make a living. The temptation to save time and money by having A.I. do your work may be one they are unable to resist. That will get them in trouble with clients and courts, and cheapen the quality of court filings.
As I understand the A.I. process, a lawyer can scan a brief into the system, and A.I. will “improve” it for him. The “improvement” process consists of comparing what was put in with thousands, or perhaps millions, of similar filings, and then changing it so that it looks more like what the computer has determined is “the norm.” In other words, it will make the brief look like thousands of other briefs. If you’re a layperson trying to engage in the unlicensed practice of law, maybe that’s a good thing. If you’re an authentic legal practitioner, it’s tantamount to malpractice, and it tells me the lawyer doesn’t know how to write.
I write appellate briefs on behalf of defendants who have been convicted of crimes. Ninety-six percent of all appeals are rejected. If I want to be among the 4% of appellants who are successful, then I don’t want the brief I file with the court to look like the thousands of others the court has seen before. I need my case to stand out. Filing some standardized, homogenized product sells my client short, and can serve only to move lawyers as a class closer to disrepute.
However, we were assured by the lecturers, A.I. is coming to the law and there’s no stopping it. The entire profession may change. A partner in a big law firm told us that A.I. programs can review legal documents just about as well as junior associate lawyers do. The implication then becomes: why should we hire junior associates to do what the machine does just as well? The big firms pay these fledgling lawyers upwards of $200,000 a year. The clients may refuse to pay hundreds of dollars an hour for the work of the associates, and insist on A.I. instead. That may save money in the short term, but fewer associates hired means fewer lawyers who learn their trade by working their way up the ladder.
Here’s the bottom line as I see it. A.I. can cover up for poorly educated new lawyers. But as more and more legal product becomes A.I. generated, the standard of legal practice diminishes. With less human effort, we’ll go from the unlicensed practice of law to the uninspired practice of law. Simply put, if you’re looking for standardized mediocrity, use A.I. If you prefer legal representation inspired by the professional judgment of a real lawyer, then avoid A.I. and retain an attorney who still knows how to think.