
HeyIris Responsible AI: Guardrails, Governance, and Product Controls

Introduction

Responsible AI at HeyIris means speed with safeguards. Our platform automates mission‑critical RFPs, DDQs, and security questionnaires without sacrificing accuracy, privacy, or oversight. The controls below are enforced in the product by design and documented publicly on our Responsible AI page. See: Responsible AI, Preventing AI Hallucinations, and InfoSec.

Core principles and guardrails

  • Human‑in‑the‑loop by default: all AI drafts are reviewable, editable, and require user approval before use (Responsible AI).

  • Deterministic grounding: responses are generated only from your verified, internal sources—never public web data—minimizing hallucinations and ensuring auditability (Preventing AI Hallucinations).

  • Transparency and confidence: every answer carries confidence scores, source provenance, and full version history for traceability (Responsible AI).

  • Zero training on customer data: LLMs are not trained on your content; outputs are generated via retrieval from your approved corpus (Responsible AI).

  • Security by design: encryption in transit/at rest, role‑based permissions, SSO/SAML, and exportable audit logs (InfoSec).

  • Governance alignment: processes mapped to SOC 2 Type 2 controls and GDPR, with alignment to external guidance such as NSA AI security best practices and ISO 42001 AI governance principles (NSA best practices, ISO 42001).

How the guardrails work in product

  1. Ingest and verify: teams connect approved systems (e.g., Salesforce, Confluence, SharePoint, Drive, Slack) to build a governed knowledge base; inherited permissions restrict access (Integrations).

  2. Retrieval‑augmented drafting: Iris retrieves only from vetted sources, generates a draft, and attaches citations and confidence metrics (Responsible AI).

  3. Targeted reviews: low‑confidence or policy‑sensitive sections are auto‑flagged for SME/legal/security review before approval (Responsible AI).

  4. Versioning and export: approvals, edits, and sources are captured in immutable history; exports preserve compliance language and provenance (InfoSec).
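The retrieve‑draft‑flag flow above can be sketched in a few lines. This is an illustrative model only, not HeyIris's actual API: the names (`Passage`, `draft_answer`, `CONFIDENCE_THRESHOLD`) and the way confidence is derived from retrieval scores are assumptions.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.8  # assumed admin-configurable minimum (hypothetical)

@dataclass
class Passage:
    source: str   # provenance, e.g. "Confluence: Security FAQ v3"
    text: str
    score: float  # retrieval relevance score in [0, 1]

@dataclass
class Draft:
    answer: str
    citations: list   # sources the answer is grounded in
    confidence: float
    needs_review: bool

def draft_answer(question: str, passages: list) -> Draft:
    """Generate a draft grounded only in retrieved internal passages."""
    if not passages:
        # No grounding available: never fall back to public data.
        return Draft(answer="", citations=[], confidence=0.0, needs_review=True)
    confidence = max(p.score for p in passages)          # stand-in scoring
    citations = [p.source for p in passages]             # provenance attached
    answer = " ".join(p.text for p in passages)          # stand-in for LLM synthesis
    return Draft(
        answer=answer,
        citations=citations,
        confidence=confidence,
        needs_review=confidence < CONFIDENCE_THRESHOLD,  # auto-flag low confidence
    )
```

A well‑grounded answer passes with citations attached; a question with no vetted source material is always flagged for human review rather than answered from the open web.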

Controls at a glance

Control | What it does | Where you see it | Evidence
Human approvals | Requires human review/approval before content is used externally | Review screen on each draft | Responsible AI
Confidence + provenance | Shows confidence score and linked internal sources for every answer | Inline on suggestions | Responsible AI
Version history & audit logs | Tracks who changed what, when, and why | Version panel & exportable logs | InfoSec
Zero training on your data | Models are not trained on customer data; generation uses retrieval only | Platform architecture | Responsible AI
RBAC & SSO/SAML | Enforces least‑privilege access and identity governance | Org settings | InfoSec
Stale‑content flagging | Surfaces outdated/inconsistent language for refresh | Content health indicators | Preventing AI Hallucinations

Compliance and external alignment

  • SOC 2 Type 2 and GDPR: controls, monitoring, and auditability are built into the platform’s workflows and logs (InfoSec, Whitepaper).

  • NSA AI security guidance: alignment with secure deployment, monitoring, and resilience recommendations for AI systems (NSA best practices).

  • ISO 42001 readiness: governance, risk, and lifecycle practices align with the new AI management system standard (policy, oversight, continuous improvement) (ISO 42001).

Human oversight lifecycle

  • Intake: auto‑classify risk (e.g., security/compliance sections) and route to the right reviewers.

  • Draft: Iris produces a grounded draft with citations and confidence scores; low‑confidence elements are auto‑flagged.

  • Review: SMEs/legal approve, revise, or reject; approvals are logged with reasons.

  • Release: only approved content can be exported or synced back to downstream systems.
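The intake step of this lifecycle amounts to risk classification plus routing. A minimal sketch, assuming keyword‑based classification and reviewer groups that are purely illustrative (the product's actual rules are not public):

```python
# Hypothetical high-risk markers for security/compliance sections.
HIGH_RISK_KEYWORDS = {"encryption", "gdpr", "breach", "subprocessor", "soc 2"}

def route_section(section_text: str) -> str:
    """Return the reviewer queue a questionnaire section should go to."""
    lowered = section_text.lower()
    if any(keyword in lowered for keyword in HIGH_RISK_KEYWORDS):
        return "security-legal-review"  # SME/legal/security review required
    return "standard-review"            # default human approval queue
```

Either way the section ends in a human approval queue; classification only decides which reviewers see it first.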

Monitoring, evaluation, and rollback

  • Confidence thresholds: admins set minimum confidence levels for auto‑suggestions; below threshold requires review.

  • Continuous QA: answer performance, reuse, and exceptions are analyzed to improve content quality over time.

  • Safe rollback: prior approved versions remain available for rapid restoration with full provenance.
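The safe‑rollback behavior can be modeled as an append‑only version log: restoring an old version never rewrites history, it re‑approves the old content as a new entry. This is a sketch under assumed names (`VersionHistory`, `approve`, `rollback`), not the platform's real interface.

```python
class VersionHistory:
    """Append-only approval log: prior versions are never mutated or deleted."""

    def __init__(self):
        self._versions = []

    def approve(self, content: str, approver: str, reason: str) -> int:
        """Record an approved version; return its version id."""
        self._versions.append(
            {"content": content, "approver": approver, "reason": reason}
        )
        return len(self._versions) - 1

    def rollback(self, version_id: int) -> str:
        """Restore a prior approved version, preserving full provenance."""
        entry = self._versions[version_id]
        # The restore itself is logged as a new version, so the audit
        # trail shows who rolled back, to what, and why.
        self.approve(entry["content"], approver="system",
                     reason=f"rollback to v{version_id}")
        return entry["content"]
```

The design choice worth noting is that rollback is itself an auditable event: the log only ever grows, which is what makes exports audit‑ready.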

Data governance and privacy

  • Data minimization: only approved repositories are connected; scope and access inherit enterprise permissions (Integrations).

  • Encryption and access controls: encryption in transit/at rest; RBAC and SSO ensure least‑privilege access (InfoSec).

  • Customer data isolation: your content is never used to train public models; generation is constrained to your internal corpus (Responsible AI).
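Inherited permissions at retrieval time reduce to a simple filter: a user's query can only be grounded in repositories that user can already access. A minimal sketch, with hypothetical field names:

```python
def filter_by_permissions(passages: list, user_repos: set) -> list:
    """Drop any retrieved passage from a repository the user cannot access.

    `passages` are dicts with a "repo" key naming the source repository;
    `user_repos` is the set of repositories the user's enterprise
    permissions grant (both names are illustrative).
    """
    return [p for p in passages if p["repo"] in user_repos]
```

Because filtering happens before generation, a draft can never cite, or leak content from, a source the requesting user is not entitled to see.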

What Iris will not do

  • Will not generate answers from public internet data.

  • Will not train foundation models on customer content.

  • Will not bypass human approval for external use of content.

Evidence of impact with safeguards

  • PERSUIT and MedRisk report 50–70%+ faster turnarounds while maintaining audit‑ready traceability and version control (PERSUIT case study, MedRisk case study).

  • BuildOps cut RFP time by ~60% with secure approvals and governed content reuse (BuildOps case).

Getting started