Two years ago, Deloitte predicted that artificial intelligence (AI) would automate about one-third of the jobs in the legal sector within the next twenty years.
Meanwhile, Eyal Dechter, a cognitive scientist at MIT, recently judged that AI is still “many, many generations” away from being able to complete the types of tasks that lawyers do. “It’s not that computers can’t replace lawyers because lawyers do really complicated things,” Dechter said. “It’s because lawyers read and talk to people. It’s not like we’re close. We’re so far.” So which is it? Are lawyers on the brink of extinction, or have the capabilities of AI been vastly oversold?
The answer depends on how you think about AI. The truth is this: At its current stage, AI is great for handling repetitive tasks and surfacing anomalies in large amounts of data. It’s not ready to handle complex or unpredictable situations.
AI’s progress in medicine provides the best example of this dynamic. In recent years, researchers have created AI models that can beat humans at specialized medical tasks. For example, one year ago, Stanford researchers developed an algorithm that could detect pneumonia from chest X-rays faster and more accurately than radiologists.
Dig a bit deeper, though, and you’ll see why these types of results may be hard to replicate in the legal field. In the Stanford study, the algorithm was trained on over 110,000 chest X-rays that had previously been analyzed by humans. For algorithms to “learn” the right answers over time, they typically must be trained on tens of thousands of prior records. Those records, in turn, must first be created and verified by humans.
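To make the dependency on human-labeled data concrete, here is a minimal sketch of supervised learning using a toy nearest-centroid classifier and hypothetical data (not the Stanford model, which was a deep neural network). The point it illustrates is simply that a model can only “learn” answers that humans have already supplied as labels:

```python
def train(records):
    """Compute the average feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in records:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            s[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in s] for label, s in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest to the new example."""
    def sq_dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(centroids, key=lambda label: sq_dist(centroids[label]))

# Hypothetical human-labeled records: (feature vector, label).
# Real systems need tens of thousands of these, each verified by a person.
labeled = [
    ([0.9, 0.8], "anomaly"), ([0.8, 0.9], "anomaly"),
    ([0.1, 0.2], "normal"),  ([0.2, 0.1], "normal"),
]
model = train(labeled)
print(predict(model, [0.85, 0.9]))  # → anomaly
```

With only four labeled examples the model memorizes almost nothing useful; scale the same loop to 110,000 verified records and you get the Stanford-style result. Without those human-verified labels, there is nothing to learn from.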
In real life, most legal teams lack access to the quantity of data needed to successfully train an AI algorithm, particularly for more complex legal problems. What’s more, when it comes to things like business-critical deals, lawyers are judged on quality—perfection—in addition to their ability to do things quickly. That’s why AI tends to be better for surfacing possible issues than delivering definitive judgments.
So while it’s true that AI will help automate routine legal tasks, existing AI capabilities have a long way to go before they can reliably handle high-value assignments. Bill Fenwick, co-founder of Fenwick & West LLP, notes that on top of all the technical challenges to implementing AI in law, the biggest challenge may be creating public trust in AI’s reliability. In other words, it’s not clear whom you should trust when humans and algorithms disagree.
What does all this mean for your legal team’s approach to technology? As a general rule, workflow solutions should precede AI almost everywhere technology is being introduced. That’s because workflow technologies solve a problem every legal team has—scaling, or doing more with fewer resources—while AI can only be applied to specific, repetitive tasks. Implementing workflow solutions can save you thousands of hours on simple tasks like handling NDAs, collecting signatures from contract counterparties, and forecasting contract volumes. For contract-related tasks that are more variable or involve more liability, AI solutions need to be close to perfect—because if they aren’t, they’ll likely do more harm than good.
Still, it would be naive to think that AI won’t improve over time. While we can’t predict the pace of AI advancement, we do know that AI models are only as effective as the data that’s fed into them. In the coming years, the legal teams that stand to benefit the most from AI will be the ones with the best data management practices.
In the legal industry, as in others, the winners will be the ones who have access to large databases with accurate, well-structured information, as this is the type of information necessary for developing AI models. You can prepare your team and company for long-term AI success by making sure you move your contracts from documents to databases and by standardizing the way you collect contract information across your organization. As more powerful AI capabilities become available, you’ll know that you can provide quality data as an input for AI algorithms—data that’s specific to your company, your industry, and your regulatory regime. Trying to implement automated technologies before establishing a data foundation is not only premature, it exposes your company to an unacceptable amount of risk.
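What “moving contracts from documents to databases” looks like in practice can be sketched with a small example. The schema and field names below are hypothetical (this is not Ironclad’s actual data model); the point is that once every contract is captured the same way, the records are both queryable today and usable as AI training data later:

```python
import sqlite3

# A toy, in-memory contract database with a standardized schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE contracts (
        id             INTEGER PRIMARY KEY,
        counterparty   TEXT NOT NULL,
        contract_type  TEXT NOT NULL,   -- e.g. 'NDA', 'MSA'
        effective_date TEXT NOT NULL,   -- ISO 8601, so dates sort correctly
        value_usd      REAL             -- NULL where no value applies
    )
""")

# Standardized intake: every contract is recorded with the same fields,
# rather than living as an unstructured document in a shared drive.
rows = [
    ("Acme Corp", "NDA", "2019-01-15", None),
    ("Globex LLC", "MSA", "2019-03-02", 250000.0),
]
conn.executemany(
    "INSERT INTO contracts (counterparty, contract_type, effective_date, value_usd) "
    "VALUES (?, ?, ?, ?)",
    rows,
)

# Structured records make simple questions trivial to answer.
ndas = conn.execute(
    "SELECT counterparty FROM contracts WHERE contract_type = 'NDA'"
).fetchall()
print(ndas)  # → [('Acme Corp',)]
```

The same consistency that makes this query trivial is what makes the data trainable: an algorithm can only learn from fields that are collected the same way across every record.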
At Ironclad, we believe companies shouldn’t invest in AI until they know that their data is properly managed. That’s why we’ve designed our software to help you build a robust data foundation upon which to layer and train AI. To learn more, you can request a demo here—we’d love to hear from you. In the meantime, you can breathe easy—with or without AI, lawyers aren’t going anywhere.
Ironclad is not a law firm, and this post does not constitute or contain legal advice. To evaluate the accuracy, sufficiency, or reliability of the ideas and guidance reflected here, or the applicability of these materials to your business, you should consult with a licensed attorney. Use of and access to any of the resources contained within Ironclad’s site do not create an attorney-client relationship between the user and Ironclad.