There’s a saying that the first rule of lawyering is that the answer is always “it depends.”
So when the Wall Street Journal asks “Would You Trust A Lawyer Bot With Your Legal Needs?” it kind of glosses over this rule. I’ve learned from experience that lawyers and the public freak out over robot lawyering, but the more you peel back the artificial narrative of robot lawyering, the more mundane and inevitable it becomes.
Asa Fitch’s new article lays out the basic groundwork in the legal automation game, highlighting major players like Joshua Browder of DoNotPay and explaining the ethical challenges that face anyone entering the field.
But to the question presented in the WSJ headline? There’s a lot more to unpack first.
Would you entrust a personal-injury claim, divorce settlement or high-stakes contract to an algorithm? A growing number of apps and digital services are betting you will, attracting millions of Silicon Valley investment dollars but raising questions about the limits and ethics of technology in the legal sphere.
Did they pass an in-person bar exam during a pandemic? Because I’m told that’s the only way to trust any legal advice.
Would I trust a bot to litigate an injury claim? No. Would I trust a bot to screen an injury claim to figure out if there’s anything there, or if I’m best served settling, or if a class exists that can get me the recovery I want? Sure. Divorce settlements are probably a no-go unless you’re looking at a Vegas stripper you met 15 hours ago while twisted on Rum Rickeys. A high-stakes contract? That’s actually one of the most fascinating questions, because automating contract work is already happening, sometimes under the auspices of Biglaw. Bots may not be replacing attorney conference calls, but they’re certainly working out the boundaries of what constitutes market language.
The latter point is alluded to when the article cites Professor Drew Simshaw’s 2018 paper on the subject:
Drew Simshaw, a law professor at Gonzaga University, wrote in a 2018 paper that the legal profession had to balance the benefits of access to do-it-yourself justice offered by apps like DoNotPay against ethical concerns that arise when they veer into doing legal work autonomously. “The legal profession’s advocacy for crippling restrictions on legal self-help solutions could potentially stunt the development of the larger AI revolution in law in ways that would ultimately favor large firms over the public interest,” he wrote.
Lawyer AI is coming, but is it coming to get you out of parking tickets or to get Goldman Sachs a 4 percent better return on a multibillion-dollar long-term deal? Because the latter is happening no matter what.
“As soon as there’s some complexity or some resistance by the system, the automation is unable to handle it,” says Ryan Calo, a law professor at the University of Washington whose specialties include robotics and automation. “There’s an impression that this app will help you navigate the legal system, but it will only help you to a small extent.”
But this is the problem with the whole legal field. Someone walks in with a “simple 50-50 estate split” and walks out an extra in a high school production of Bleak House. Lawyers rarely have the crystal ball necessary to predict when a 95 percent routine transaction is going to turn into major litigation. It’s just a question of whether a consumer is paying a lawyer from the jump or only after the bot runs into an unexpected wall. That’s why smart lawyers should be leaning into bots as screening devices and integrating them into their practices — it may forfeit some expensive hours, but being in place as the go-to counsel in the marginal cases where things go wrong would seem to be worth it. Some attorneys probably disagree.
It all depends.
Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter if you’re interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.