You’ve probably seen a dozen AI contract review demos in the past year. Most looked impressive, and a few may have even made it into a pilot.
But these demos usually have one glaring omission: what happens when the system encounters your playbook—the one written for humans, full of implicit context—and tries to apply it across a real deal queue, on real third-party paper, inside the workflow your team actually uses.
That’s where most AI redlining tools break.
That’s why today, we’re announcing the general availability of Jurist Redlining Agent with Playbooks.
Upload your playbook—or pull it directly from your CLM—open a contract, and click “Redline.” Jurist can generate a complete, sourced, reviewable first pass in under five minutes. Every change is grounded to your documented positions so there’s no rebuilding institutional knowledge from scratch, and no relying on memory.
Getting there required rethinking the system architecture, not because generating redlines is technically hard, but because doing it accurately in real legal context is a different problem entirely.
The results speak for themselves: user satisfaction climbed from ~70% to 91.5% over the past several months,* and Jurist now outperforms leading general-purpose LLMs** by an average of 31 percentage points across four redlining quality metrics, detailed below. And we’re not done.
What the redlining experience looks like
Jurist’s UX reflects a simple principle: lawyers should spend time on judgment, not mechanics.
Two features illustrate this:
Counterparty-ready comments
Jurist pulls explanations directly from your playbook—or generates them—so each redline includes a clear, defensible rationale.
Position toggling
Lawyers can switch between Preferred, Fallbacks, or no redline directly in the workflow. Negotiation isn’t linear, and the system reflects that.
The long-term advantage of Jurist isn’t just what it generates; it’s where it lives. Because Ironclad sits at the center of the contract lifecycle, Jurist has access to context that standalone tools never will: your templates, your negotiation history, the record of how deals actually close. That compounding advantage is what separates a redlining tool from a redlining system.
Before Jurist, a first-pass redline could take anywhere from 30 minutes to a couple of hours depending on complexity. With the Redlining Agent and playbooks, we’re now getting to a solid first draft in minutes. That’s allowed our team to spend less time on mechanical edits and more time focusing on risk, strategy, and supporting the business.
Raquel Rodriguez, Assistant General Counsel, AES
Why we built with legal expertise from day one
The core question every lawyer asks is simple: Will this actually work on my contracts, with my playbook?
Answering that required more than applying AI to redlining. It meant building a system that embeds legal knowledge directly into how it operates.
A multi-agent system built around legal reasoning
Jurist’s redlining system is built as a pipeline of specialized agents, each responsible for a distinct part of the workflow. Playbooks are converted into explicit rules. Contracts are segmented clause by clause. Each clause is evaluated in context: whose paper it is, what’s being negotiated, and what position your team would actually take.
Three principles shaped the architecture:
- Specialized agents, not generalized ones. Each stage of the workflow – playbook parsing, rule extraction, clause classification, redline generation – is handled independently, with task-specific instructions.
- Legal reasoning, not approximation. The system mirrors how experienced lawyers work: identify the rule, assess the context, select a position, and surface fallbacks. This process was co-designed with legal SMEs.
- Precise control over negotiation posture. The hardest problem in contract review isn’t flagging issues; it’s choosing the right position. Jurist evaluates and applies positions based on defined playbook logic, not generic heuristics.
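To make the staged design concrete, here is a minimal, purely illustrative sketch of such a pipeline in Python. All names and types are assumptions for illustration – the keyword-based classifier is a toy stand-in for clause classification – and none of this is Ironclad’s actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    """One explicit playbook rule (toy structure, for illustration only)."""
    clause_type: str
    keywords: list                  # naive stand-in for clause classification
    preferred: str                  # preferred replacement language
    fallbacks: list = field(default_factory=list)
    rationale: str = ""

@dataclass
class Redline:
    clause: str
    suggestion: str
    rationale: str

def classify(clause: str, rules: list):
    """Map a clause to the first rule whose keywords appear in it."""
    text = clause.lower()
    for rule in rules:
        if any(k in text for k in rule.keywords):
            return rule
    return None

def review(clauses: list, rules: list) -> list:
    """Run the staged workflow: classify each clause, then redline it
    against the matched rule, leaving compliant clauses untouched."""
    redlines = []
    for clause in clauses:
        rule = classify(clause, rules)
        if rule is None:
            continue                # no applicable rule: leave as-is
        if rule.preferred.lower() in clause.lower():
            continue                # already compliant: respect the language
        redlines.append(Redline(clause, rule.preferred, rule.rationale))
    return redlines
```

In a production system each stage would be a separate agent with task-specific instructions; here the stages are collapsed into plain functions only to show how rules and clauses flow through the pipeline.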
What this looks like in practice
Consider a limitation of liability clause capped at $1,000. Jurist flags it as non-compliant, proposes a fees-paid cap aligned with the playbook, explains why the current cap is insufficient, and surfaces a fallback (e.g., a $2M cap) if the counterparty resists.
It produces a complete, defensible position grounded in your playbook, following the same reasoning path a lawyer would take manually.
That level of nuance is what makes position selection one of the hardest problems in AI redlining, and what separates usable output from noise.
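The preferred-then-fallback behavior in the liability example can be reduced to a small piece of playbook logic. A hedged sketch – the function and its inputs are hypothetical, not Jurist’s API:

```python
from typing import Optional

def select_position(preferred: str, fallbacks: list,
                    rejected: set) -> Optional[str]:
    """Walk the playbook's positions in priority order, skipping any the
    counterparty has already rejected; None means escalate to a human."""
    for position in (preferred, *fallbacks):
        if position not in rejected:
            return position
    return None
```

For the liability example: with nothing rejected, the preferred fees-paid cap is proposed; once the counterparty rejects it, the $2M fallback surfaces; and if every documented position is exhausted, the clause escalates rather than the system improvising a new one.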
How we define and measure quality
To bring AI redlining to production, we needed a clear definition of what “good” looks like.
We evaluate every redline across four dimensions:
- Accurately grounded: Does the redline achieve the legal intent of the playbook rule?
- Completeness: Are all applicable clauses and rules addressed?
- Succinctness: Are changes targeted, or does the system over-redline?
- Respect for compliant language: Does it leave already-compliant clauses untouched?
That last dimension is critical. Inaccurate redlining erodes trust: if lawyers have to second-guess every suggestion, the tool creates work instead of removing it.
When those four dimensions hold up in production, the numbers follow. According to our group of testers*, 99% of contracts reviewed by Jurist are accurately grounded in the provided playbook or contract, and 98% of the time, Jurist’s selections are aligned with defined organizational standards, correctly navigating between primary and fallback positions.
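Dimensions like these lend themselves to a simple pass-rate harness. A sketch under assumed names – the dimension keys and scoring are illustrative, not Ironclad’s evaluation code:

```python
# Four quality dimensions, one boolean judgment per redline per dimension
# (keys are hypothetical labels for the dimensions described above).
DIMENSIONS = ("accurately_grounded", "completeness", "succinctness",
              "respects_compliant_language")

def pass_rates(judgments: list) -> dict:
    """Aggregate per-redline boolean judgments into a pass rate per dimension."""
    n = len(judgments)
    return {d: sum(1 for j in judgments if j[d]) / n for d in DIMENSIONS}
```

Tracking each dimension separately, rather than one blended score, is what makes failure modes like over-redlining visible instead of averaged away.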
And we’ve heard from customers what this means for them:
We trust Jurist to produce a thoughtful and reliable first pass that’s aligned with our standards. From there, our attorneys refine and tailor the output based on context.
Raquel Rodriguez, Assistant General Counsel, AES
The playbook problem no one talks about
Playbooks are the foundation of accurate, consistent redlining. They encode your positions, risk thresholds, and institutional knowledge – the difference between generic output and real legal judgment. That’s precisely why we built Jurist around your playbook, not a proprietary rule format.
One of the clearest lessons from Jurist’s early access program (EAP) was this: output quality is only as strong as the playbook logic behind it. Most enterprise playbooks were written for humans. They rely on implicit context (“negotiate as appropriate”), unstated assumptions (“if this is a Fortune 500 customer…”), and reasoning that isn’t structured as explicit rules. The document comes with you; the logic often needs sharpening.
That’s why, with our generally available launch, we’re including a playbook optimization step. The more structured your playbook, the more precisely Jurist can execute your strategy: your preferred positions, fallback triggers, and negotiation posture become levers it can apply on your behalf.
Our legal SME team—made up of real lawyers—works with customers to refine that logic, clarifying positions, tightening fallback triggers, and removing ambiguity. We validate outputs against a “golden set” of manually redlined agreements before certifying a playbook for production. The goal isn’t just to make Jurist work; it’s to make it work like your best lawyer does.
The real value of playbook optimization is that it makes a customer’s legal judgment machine-readable. Once properly established, Jurist can apply the best thinking across every contract, thereby saving time and creating business efficiencies legal teams actively seek.
Omid Moezzi, Legal SME, Professional Services, Ironclad
For legal teams looking to do more with what they have
The case for Jurist isn’t about replacing lawyers; it’s about giving them leverage where they need it most. That balance – standardization plus attorney control – is the design principle behind every decision we made in building this system.
For legal teams looking to scale without adding headcount, this is a meaningful step forward. Jurist helps standardize repetitive work while still keeping attorneys in control of the final product.
Raquel Rodriguez, Assistant General Counsel, AES
What’s next
We know that adoption starts with meeting lawyers where they work. Around 70% of contract review still happens in Microsoft Word, which is why we’re bringing Jurist directly into Word with our Early Access add-in, delivering playbook-driven redlines as tracked changes for select customers right now.
The next step is Precedent. Playbooks define intended positions; precedent reflects what’s actually been accepted. Within Jurist, when reviewing a clause, lawyers will see similar language from previously executed agreements in Ironclad, drawn from a curated set of contracts that admins and lawyers control. This gives immediate context on both policy and practice, without digging through past deals or relying on individual memory.
Get started with Jurist Redlining Agent with Playbooks today, or learn more in our upcoming webinar.
*Statistics are based on controlled external user testing of 66 contracts and 473 individual redlines across 5 lawyers, as of April 2026. Individual results, including time savings, accuracy, and playbook alignment, may vary based on contract complexity, user-defined playbook settings, and user inputs.
**Jurist outperformed GPT, Claude Opus, and Gemini, accessed via their respective APIs with default settings, across all four quality metrics highlighted above (accurately grounded, completeness, succinctness, respect for compliant language), by margins ranging from 3 to 67 percentage points. Tested internally on 53 contract/playbook pairs. Results may vary.
Ironclad is not a law firm, and this post does not constitute or contain legal advice. To evaluate the accuracy, sufficiency, or reliability of the ideas and guidance reflected here, or the applicability of these materials to your business, you should consult with a licensed attorney.



