Ironclad Blog

How We Regulate AI Use at Ironclad

May 10, 2023

This post was co-authored by Ironclad Senior Counsels Nicole Dobias and Michael Ohta. Download the full AI policy here.

At Ironclad, we’re excited about the potential of new generative artificial intelligence technologies to transform the way we work. However, as a company full of lawyers, we are also aware of the risks that accompany the explosion in AI services. That’s why we are taking a proactive approach to ensure our employees use these groundbreaking technologies responsibly while maintaining our staunch commitment to protecting data privacy for our customers and ourselves. 

An Internal Policy on Generative AI Use

Our new policy governs employees' use of generative AI services such as ChatGPT for Ironclad work. For clarity, by "generative AI" we refer specifically to products and services powered by large language models that ingest prompts and use them to generate content. In the interest of transparency and information-sharing, we have decided to share the full text of the policy below this post.

Our approach to regulating internal AI use has three main themes: 

  • Confidentiality – Ironclad classifies data according to our data classification matrix. Any data with a classification of confidential or higher cannot be fed into prompts by default. This ensures that customer data or sensitive Ironclad data is not inadvertently disclosed or used to train these systems. 
  • Responsible Use – Employees are responsible for AI-generated content as if it were their own and must carefully review content for accuracy and any issues that would negatively affect third parties. 
  • Service-specific Review – Not all generative AI companies are created equal. Whether employees should use a specific service can depend on how that service approaches data processing, compliance, and legal terms.   
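The confidentiality rule above can be sketched as a simple pre-prompt guard. The classification levels, function names, and threshold below are illustrative assumptions, not Ironclad's actual data classification matrix or implementation:

```python
# Hypothetical sketch of the confidentiality rule: data classified
# "confidential" or higher may not be included in generative AI prompts.
# The level names and ordering are assumptions for illustration.

# Ordered from least to most sensitive (assumed levels).
LEVELS = ["public", "internal", "confidential", "restricted"]

def may_use_in_prompt(classification: str) -> bool:
    """Return True only if the data is classified below 'confidential'."""
    return LEVELS.index(classification) < LEVELS.index("confidential")

def build_prompt(text: str, classification: str) -> str:
    """Refuse to construct a prompt from data that is too sensitive."""
    if not may_use_in_prompt(classification):
        raise ValueError(
            f"Data classified '{classification}' may not be sent "
            "to a generative AI service"
        )
    return text
```

In practice such a check would sit in front of any internal tooling that forwards text to an external AI service, so the default is to block sensitive data rather than rely on each employee remembering the rule.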

How We Implement Our Policy to Mitigate Risk 

From this starting point, when employees identify a vendor offering generative AI services or integrations that they would like to use, our security and legal teams evaluate how the specific system works and review the governing legal terms to determine the level of risk. We have found significant variation between systems. Ironclad ensures that any vendor leveraging generative AI is scrupulous about its use of customer data: we determine what data will be processed, how it will be used, and what safeguards are in place to protect it. Employees may use a vendor's generative AI tools only if we are satisfied with the answers to these questions.

We hold ourselves to the same high standards in our approach to developing our own integrations with generative AI tools. Ironclad recently announced AI Assist™, which lets users instantly generate redlines to documents in our platform. This feature leverages a partnership with OpenAI, one of the leading AI platforms. When evaluating our partnership, we thoroughly vetted OpenAI to ensure we were satisfied with their security posture and the legal terms protecting our and our customers’ data. For example, text fed into our AI Assist™ tool is protected by a data processing agreement with OpenAI and is not used to train OpenAI’s large language models.

Looking to the Future 

While our focus is on mitigating risks to our business, we are also mindful of not over-regulating the use of generative AI. We believe in the value provided by these tools and we look forward to as-yet-unknown use cases that our employees will develop. 

Generative AI is still in its infancy, and Ironclad is committed to staying at the forefront of responsible use as the technology evolves. We expect to update this policy over time, potentially frequently. We hope that by sharing it, we will continue the conversation about the most effective ways to manage AI risk in business. Discussion and feedback are welcome.

Download Ironclad’s Generative Artificial Intelligence Policy
