Responsible AI

Our values-driven approach to ethical, responsible artificial intelligence

At Icertis, our definition of success includes living by our FORTE values: Fairness, Openness, Respect, Teamwork, and Execution. Our commitment to FORTE extends to how we apply AI.

Why AI Is Critical for Contract Intelligence

Icertis was early to recognize the power of artificial intelligence applied to contracts and the contract management lifecycle. Our customers consistently report high satisfaction with our AI products, with outcomes including better business performance, better compliance, and better commercial partnerships.

These customers include some of the largest enterprises with rigorous IT security testing. Thanks to Icertis’ thoughtful approach to AI, an overwhelming majority opt in to allow their anonymized contract data to be used in the Icertis data lake.

Tested & Trusted AI

2018

Year Icertis first delivered AI products to customers

75%

of Icertis customers require AI ethics audits

95%

of Icertis customers opt in to the Icertis data lake

Infusing FORTE into Our AI Practices

As we continue to innovate for our customers, we remain keenly aware of the increased responsibility and care that must be exercised to ensure that our use of AI and data is ethical and responsible.

The Icertis AI Policy encompasses a set of principles grounded in our FORTE values, coupled with a governance process to ensure we follow these principles as we design and deploy AI.  

Our Responsible AI Principles

  • Fairness

    AI is applied toward, and results in, just and fair outcomes.

  • Openness

    We are open and transparent regarding how and when we use AI.

  • Respect

    Any AI deployment respects the privacy, property, legal, and human rights of all entities and individuals impacted.

  • Teamwork

    We engage a cross-functional leadership team responsible for maintaining and administering this Responsible AI Policy and proactively educating Icertis employees on our AI policies.

  • Execution

    We retain human oversight, reliability, and security at all critical stages of AI deployment to ensure adherence to our policies, including approval of a use case, review of results prior to deployment, and regular follow-up while a use case is active.

Putting our principles to work

To make our policy actionable, we adhere to a process of Map, Evaluate, Mitigate.

Mapping entails applying what we know or have discovered about a given use case to our principles and goals. We then evaluate whether we are meeting those goals and, where we are not, take steps to mitigate.

Who’s responsible for Responsible AI

Every Icertian (as we call Icertis employees) is responsible for upholding our AI principles.

Administration of the policy and the Map – Evaluate – Mitigate process is overseen by a cross-functional team drawn from Icertis’ executive leadership:


Monish Darda
Chief Technology Officer and Co-founder

Todd Smith
Chief Legal Officer, General Counsel

Rajan Venkitachalam
Vice President and Chief Information Security Officer

These executives are joined by leadership from the product team to oversee our policy. Governance of the execution of our AI Policy also includes oversight from our Board of Directors, who approved this policy.

Read in full

Icertis Responsible AI Policy
