AI Responsibility

Innovation With Accountability

There’s no question: rapidly evolving AI technology is quickly shaping, and reshaping, our future. At Ironclad, we recognize the need for responsible AI development that takes a unique, holistic, and thoughtful approach. Our journey toward responsible AI is a shared one: we collaborate internally, with industry experts like OpenAI, and with our community members to ensure that the technology we create builds trust and aligns with the values we live and work by.

And we want to share some of that thinking with you.

AI Responsibility Principles

Safe and Powerful AI Development

We built Ironclad AI to empower humans, not replace them. That’s why we take a precise approach to AI development, building for highly accurate, reliable, and intended outcomes. This gives customers confidence that we have the right features to meet their evolving needs, with the right product safeguards embedded to protect them.

Today, this development approach enables user oversight with: 

  • Metadata verification in Ironclad Smart Import 
  • Prompts to accept or reject suggested changes in Smart Detect 
  • Full redline suggestion visualizations in AI Assist based on predefined Playbooks

See how Ironclad AI works →

AI Governance

All of Ironclad AI’s features are the result of extensive research and collaboration among our product, legal, IT, and security teams. Together, this cross-functional group provides the technical expertise and business coordination needed to navigate the challenges AI presents thoughtfully, safely, and inclusively.

Their efforts ensure that we develop:

  • The right permission controls for all AI features, including Ironclad Smart Import, AI Assist, and AI Playbooks
  • A fully fleshed-out public policy on responsible AI use and regulation
  • Best-in-class security certifications

Review our security certifications →

Transparent Data Security

Ironclad AI is trained on a carefully curated set of public and proprietary data, and the AI data warehouse it draws from supports the privacy and security infrastructure that our customers’ organizations require. With full transparency into how the models work, customers can see when and why AI is used, and can offer critical feedback to our product teams.

Other steps taken to ensure data privacy include: 

  • A Zero Data Retention agreement for any subprocessor’s AI models, such as GPT-4
  • Anonymization of sensitive data types such as PII, credit card numbers, PHI, or even select Clauses (see the sketch after this list)
  • Scoped User Authentication, ensuring that the AI only leverages data the user has access to
  • The deployment of an advanced encryption solution in which customers hold the top-level encryption key
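
To make the anonymization step concrete, here is a minimal, hypothetical sketch of redacting common PII patterns from text before it is shared with an external model. The patterns, placeholder tokens, and function names are illustrative assumptions only, not Ironclad’s actual implementation, which relies on far more robust detection.

```python
import re

# Illustrative only: a few regex patterns for common PII types.
# A production system would use much stronger detection (e.g. named-entity
# recognition, checksum validation of card numbers, locale-aware formats).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def anonymize(text: str) -> str:
    """Replace detected PII with typed placeholders before any external call."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    raw = "Contact Jane at jane.doe@example.com or 415-555-0134; card 4111 1111 1111 1111."
    print(anonymize(raw))
    # Contact Jane at [EMAIL] or [PHONE]; card [CREDIT_CARD].
```

The key design point is that redaction happens before data leaves the trusted boundary, so a subprocessor never sees the raw values.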

Read the Ironclad AI documentation → 

Contract AI FAQ

How does Ironclad protect my data?

Ironclad complies with worldwide security standards to protect your contracts and data. All data is encrypted in transit and at rest using industry-standard AES encryption. Our legal team works closely with customers to define and manage strict data access, privacy, and usage restrictions that support their needs when using or creating AI features. We adhere to all data security, transfer, and access requirements when handling sensitive customer data.
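
As a general illustration of what AES encryption at rest looks like (not a description of Ironclad’s internal systems), the sketch below encrypts and decrypts a record with AES-256-GCM using the open-source Python cryptography package; the key, record contents, and identifiers are placeholders.

```python
# Generic AES-256-GCM example using the open-source `cryptography` package.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_record(key: bytes, plaintext: bytes, associated_data: bytes) -> bytes:
    """Encrypt a record with AES-256-GCM; the random nonce is prepended to the ciphertext."""
    nonce = os.urandom(12)  # 96-bit nonce, unique per encryption
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, associated_data)
    return nonce + ciphertext

def decrypt_record(key: bytes, blob: bytes, associated_data: bytes) -> bytes:
    """Split off the nonce and decrypt; raises an exception if the data was tampered with."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, associated_data)

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)
    blob = encrypt_record(key, b"contract terms...", b"contract-id-123")
    assert decrypt_record(key, blob, b"contract-id-123") == b"contract terms..."
```

Customer-held-key arrangements, like the one mentioned above, layer key management on top of primitives like this so that data cannot be decrypted without a key the customer controls.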

How do Ironclad’s subprocessors use my data?

Ironclad’s contracts with our subprocessors prohibit them from using Ironclad customer data to train their own models. All subprocessors process customer data only for the purpose of rendering Ironclad services.

How do I learn more about Ironclad’s security protocols?

Use our Security Portal to learn about our security posture and to request access to our security documentation.