Why “internal‑only” matters
AI that can reference the public internet risks inaccuracies and noncompliant statements. Iris is designed to ground every response in your verified, internal sources with confidence scoring, traceable provenance, and full revision history. For background on our guardrails and transparency model, see Responsible AI and our primer on preventing hallucinations with verified data in Preventing AI Hallucinations.
What you will configure
- Source scope: closed‑corpus Retrieval‑Augmented Generation (RAG) limited to approved repositories (e.g., Drive, SharePoint, Confluence, Salesforce).
- Indexing & provenance: version‑aware indexing with citations to source documents and audit trails.
- Model behavior: deterministic generation, persona/voice controls, and safe defaults.
- Confidence thresholds: automated policies based on answer confidence.
- Reviewer gates: routing to Legal, Security, and SMEs when confidence or content category requires oversight.
- Permissions & data handling: role‑based access control (RBAC), least privilege, and workspace scoping.
- Audit & compliance: exportable logs, approval history, and evidence links for SOC 2/GDPR programs.
Prerequisites
- Connect only the repositories you want the AI to query via Integrations (e.g., Google Drive, SharePoint, Confluence, Salesforce, Slack). Keep sensitive libraries in separate, permissioned workspaces.
- Ensure your security documentation is current (e.g., SOC 2, ISO 27001, HIPAA). See InfoSec & Security Questionnaires for centralizing policies, reports, and controls.
Step‑by‑step: Restrict AI to approved content
1) Limit the corpus to internal sources only
- In your admin policy, disable public web retrieval and enable “Approved repositories only.”
- Scope access by workspace and tags (e.g., “Legal‑Approved,” “Security‑Evidence,” “Public Marketing”).
- Tip: keep drafts in a “Sandbox” tag that is explicitly excluded from retrieval.
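The scoping rules above can be sketched as a small policy check. This is a minimal illustration, not Iris's actual configuration schema: the field names (`public_web_retrieval`, `included_tags`, etc.) and the `in_scope` helper are assumptions.

```python
# Hypothetical admin retrieval policy; field names are illustrative,
# not Iris's real configuration schema.
RETRIEVAL_POLICY = {
    "public_web_retrieval": False,
    "approved_repositories_only": True,
    "included_tags": {"Legal-Approved", "Security-Evidence", "Public Marketing"},
    "excluded_tags": {"Sandbox"},
}

def in_scope(doc_tags, policy=RETRIEVAL_POLICY):
    """A document is retrievable only if it carries at least one approved
    tag and none of the excluded ones (e.g., "Sandbox" drafts)."""
    tags = set(doc_tags)
    if tags & policy["excluded_tags"]:
        return False
    return bool(tags & policy["included_tags"])
```

Note the order of the checks: an exclusion tag always wins, so a document tagged both “Legal‑Approved” and “Sandbox” stays out of the corpus.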
2) Build a trusted knowledge map
- Import policies, certifications, past RFPs, FAQs, product specs, and templated answers using Integrations.
- Normalize content into Q&A and evidence snippets; map to frameworks (SOC 2, ISO 27001, NIST, HIPAA) for questionnaire reuse. See Security Questionnaire (Glossary).
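One way to picture a normalized “evidence snippet” is as a small record that pairs a Q&A with its citation and framework mappings. The schema below is an assumption for illustration; only the framework labels come from the article.

```python
# Illustrative shape for a normalized Q&A evidence snippet; the schema
# and example values are hypothetical, not an Iris data model.
def make_snippet(question, answer, source, frameworks):
    return {
        "question": question,
        "answer": answer,
        "source": source,                  # document title + section, for citation
        "frameworks": sorted(frameworks),  # e.g., SOC 2, ISO 27001, NIST, HIPAA
    }

snippet = make_snippet(
    "Is customer data encrypted at rest?",
    "Yes. AES-256 at rest per the Encryption Policy, section 3.2.",
    {"title": "Encryption Policy", "section": "3.2"},
    {"SOC 2", "ISO 27001"},
)
```

Keeping the citation and framework tags on every snippet is what lets the same answer be reused across SOC 2, ISO 27001, and similar questionnaires without re-verification.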
3) Enforce retrieval and provenance rules
- Require source citations on every suggested answer (document title + section link).
- Prefer the newest version when duplicates conflict; log conflicts for owner review.
- Enable stale‑content detection so outdated answers are flagged automatically (described in How Iris Improves Accuracy).
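The newest-version and stale-content rules can be sketched in a few lines. This is a hedged illustration of the logic, assuming a simple document record with a `modified` date and `text`; the 365-day staleness window is an example value, not an Iris default.

```python
from datetime import date, timedelta

def resolve_duplicates(versions, stale_after_days=365, today=None):
    """Pick the newest version of a document; report older conflicting
    copies (for owner review) and whether the winner itself is stale.
    `versions` is a list of dicts with 'modified' (date) and 'text'."""
    today = today or date.today()
    newest = max(versions, key=lambda v: v["modified"])
    conflicts = [v for v in versions
                 if v is not newest and v["text"] != newest["text"]]
    stale = (today - newest["modified"]) > timedelta(days=stale_after_days)
    return newest, conflicts, stale

winner, conflicts, stale = resolve_duplicates(
    [{"modified": date(2021, 1, 1), "text": "old wording"},
     {"modified": date(2024, 6, 1), "text": "new wording"}],
    today=date(2024, 7, 1),
)
```

The point is that conflicts are logged rather than silently discarded, so content owners can retire or merge the older copy.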
4) Tune generation controls for determinism
- Set generation mode to “deterministic” (low variability) and require inline citations.
- Select persona/voice presets to maintain brand and compliance tone.
- Blocklists: add disallowed phrases (e.g., roadmap promises) and require approved alternatives.
- Reference policy: allow direct quotes only from “Legal‑Approved” and “Security‑Evidence” tags.
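A pre-flight check combining the blocklist and citation-tag rules might look like the sketch below. The example phrases and the `check_draft` helper are hypothetical; only the tag names come from the article.

```python
# Hypothetical pre-flight draft check mirroring the controls above.
BLOCKLIST = ["we guarantee", "will be available in q"]   # e.g., roadmap promises
ALLOWED_QUOTE_TAGS = {"Legal-Approved", "Security-Evidence"}

def check_draft(text, citation_tags):
    """Return a list of policy issues; an empty list means the draft passes."""
    issues = []
    for phrase in BLOCKLIST:
        if phrase in text.lower():
            issues.append(f"blocked phrase: {phrase!r}")
    if not citation_tags:
        issues.append("missing citation")
    elif not set(citation_tags) <= ALLOWED_QUOTE_TAGS:
        issues.append("citation from non-approved tag")
    return issues
```

Running such a check before an answer ever reaches a reviewer keeps the gates in step 5 focused on judgment calls rather than mechanical violations.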
5) Configure confidence thresholds and reviewer gates
Use confidence to drive workflow. Pair with RBAC from Iris Permissions.
- ≥ 0.85: auto‑suggest; contributor can accept with acknowledgment; provenance is captured automatically.
- 0.60–0.84: route to the content owner (Product, Legal, Security) to approve or edit.
- < 0.60: block submission; require an SME rewrite with citations.
- Category overrides: always require Legal approval for indemnity and limitation of liability; Security approval for encryption and incident response/business continuity/disaster recovery (IR/BC/DR); Finance approval for pricing and discounting.
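The thresholds and overrides above amount to a small routing function. The sketch below uses the article's recommended defaults; the category keys and return strings are illustrative, not Iris's API.

```python
def route(confidence, category=None):
    """Map a draft answer to a workflow action using the thresholds and
    category overrides above. Category names are hypothetical labels."""
    overrides = {
        "indemnity": "Legal", "limitation_of_liability": "Legal",
        "encryption": "Security", "incident_response": "Security",
        "pricing": "Finance", "discounting": "Finance",
    }
    if category in overrides:                 # overrides win regardless of score
        return f"require {overrides[category]} approval"
    if confidence >= 0.85:
        return "auto-suggest (capture provenance)"
    if confidence >= 0.60:
        return "route to content owner"
    return "block; require SME rewrite with citations"
```

Note that category overrides are checked first: a 0.95-confidence pricing answer still goes to Finance.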
6) Lock down access with RBAC
- Restrict who can view, suggest, approve, and export by project, library, and even individual question. See Iris Permissions.
- Enforce least privilege and SSO/SAML. All changes and exports should be logged (visible in audit reports and Responsible AI).
7) Enable audit & compliance evidence
- Turn on “Approval required” for regulated categories; store the approver, timestamp, and diff.
- Include the source‑file hash/ID with each export; enable compliance‑pack export for questionnaires (SOC 2 report, DPA, pen test, policies) from your InfoSec hub.
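An audit record tying an exported answer to its source evidence could look like the sketch below. The field names and the `export_record` helper are assumptions for illustration, not Iris's log format; the idea is simply that each export carries a content hash, approver, and timestamp.

```python
import hashlib
from datetime import datetime, timezone

def export_record(answer, source_bytes, approver):
    """Hypothetical audit record: hash the source file so exported
    evidence can later be matched byte-for-byte to what was approved."""
    return {
        "answer": answer,
        "source_sha256": hashlib.sha256(source_bytes).hexdigest(),
        "approver": approver,
        "exported_at": datetime.now(timezone.utc).isoformat(),
    }

record = export_record("AES-256 at rest.", b"policy-v3 contents", "legal@acme.example")
```

A content hash (rather than only a filename) is what makes the trail useful in a SOC 2 or GDPR review: if the policy document changes later, the old export still provably references the version that was approved.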
8) Test and calibrate
- Run a dry run on a past RFP/DDQ: confirm the sources used, confidence distributions, and review load.
- Tune thresholds and category overrides until reviewer workload and risk tolerance are balanced.
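During calibration it helps to see what share of answers each threshold band would route where. A minimal sketch, assuming you have the confidence scores from a dry run:

```python
def review_load(confidences, auto=0.85, block=0.60):
    """Estimate the fraction of answers that would auto-suggest, go to
    review, or be blocked under candidate thresholds (article defaults)."""
    n = len(confidences)
    review = sum(1 for c in confidences if block <= c < auto)
    blocked = sum(1 for c in confidences if c < block)
    return {"auto": (n - review - blocked) / n,
            "review": review / n,
            "blocked": blocked / n}

load = review_load([0.9, 0.7, 0.5, 0.88])
```

Re-running this with different `auto`/`block` values makes the workload/risk trade-off concrete before you change the live policy.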
Recommended defaults (quick start)
| Setting | Default | Why |
|---|---|---|
| Corpus scope | Internal repositories only; no web retrieval | Eliminates unsourced claims |
| Retrieval policy | Latest‑version preference; required citation | Ensures currency and traceability |
| Generation mode | Deterministic; brand persona on | Stable, on‑brand outputs |
| Confidence thresholds | ≥0.85 auto‑suggest; 0.60–0.84 SME review; <0.60 block | Aligns effort to risk |
| Category gates | Legal (T&Cs); Security (controls); Finance (pricing) | Prevents high‑risk errors |
| RBAC | Least‑privilege with question‑level permissions | Reduces exposure & drift |
| Audit | Approvals + export logs on by default | Readiness for SOC 2/GDPR |
Reviewer gates playbook
- Security questionnaires (SIG/CAIQ/HECVAT): require Security approval for encryption, identity, and incident response; auto‑approve policy summaries at ≥ 0.85 confidence with citations. See InfoSec.
- Contracts & legal clauses: force Legal approval on limitation of liability, indemnity, data use, and DPA references; block non‑approved boilerplate. Cross‑reference Responsible AI.
- Pricing/discounts/SOWs: require Finance/Legal co‑approval for non‑standard terms; require SOW linkage to the deliverables library.
Permissions model examples
| Role | View | Suggest | Approve | Export |
|---|---|---|---|---|
| Sales AE | Project docs | Answers ≥0.85 | — | Proposal (after approvals) |
| SE/SME | Tech libraries | All tech | Tech content | Proposal (after approvals) |
| Security | Sec libraries | Sec answers | Security categories | Evidence packs |
| Legal | Legal libraries | Legal answers | Legal categories | Final documents |
| Admin | All | All | Policy exceptions | Full exports |
Refer to Iris Permissions for question‑level controls and exportable permission logs.
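The role matrix above can be expressed as data with a simple lookup. The structure and the `can` helper are hypothetical (a subset of the table, for illustration):

```python
# A subset of the role matrix above as data; structure is illustrative.
PERMISSIONS = {
    "Sales AE": {"approve": set(),                 "export": {"proposal"}},
    "Security": {"approve": {"security"},          "export": {"evidence_pack"}},
    "Legal":    {"approve": {"legal"},             "export": {"final_document"}},
    "Admin":    {"approve": {"policy_exception"},  "export": {"full"}},
}

def can(role, action, target):
    """True if `role` may perform `action` on `target`; unknown roles
    and actions default to denied (least privilege)."""
    return target in PERMISSIONS.get(role, {}).get(action, set())
```

Defaulting unknown roles and actions to "deny" is the least-privilege posture the article recommends.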
Operating procedures (SOPs)
- Content ownership: assign owners per library (Legal, Security, Product, Marketing). Quarterly reviews are recommended (see checklists in RFP Checklist).
- Stale‑content queue: review platform‑flagged items weekly; deprecate or update them.
- Incident process: if a bad answer ships, freeze exports, roll back to the last approved version, and update the knowledge base with a corrective entry.
Monitoring & KPIs
Track in analytics or dashboards:
- Confidence distribution (median, p10/p90) by category.
- % of responses with citations; % auto‑approved vs. reviewer‑approved.
- Cycle time: time‑to‑first‑draft; reviewer SLA adherence.
- Accuracy proxies: redlines per section; security follow‑ups; questionnaire acceptance rate. For measurement ideas, see Proposal Analytics.
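The confidence-distribution KPI can be computed with the standard library alone. A minimal sketch, assuming you can export per-answer confidence scores:

```python
import statistics

def confidence_kpis(confidences):
    """Median plus p10/p90 of answer confidence, as listed above.
    `quantiles(n=10)` returns the nine deciles p10..p90."""
    deciles = statistics.quantiles(confidences, n=10, method="inclusive")
    return {"median": statistics.median(confidences),
            "p10": deciles[0],
            "p90": deciles[-1]}

kpis = confidence_kpis([i / 10 for i in range(11)])  # 0.0, 0.1, ..., 1.0
```

Tracking p10 (the weak tail) per category is usually more actionable than the median: a sagging p10 in one category points at thin or conflicting source content there.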
Security & compliance checklist
- Data handling: zero data leakage; no model training on your data. See Responsible AI.
- Access: SSO/SAML, MFA, and RBAC with least privilege; question‑level permissions via Iris Permissions.
- Encryption & audit: encryption in transit and at rest; exportable audit logs; evidence‑pack linkage (SOC 2, DPA, pen test, policies) from InfoSec.
Troubleshooting & FAQs
- The AI returns “no answer available.” Ensure the source library is connected and included in corpus scope; check tags and permissions; re‑index the library via Integrations.
- Confidence is consistently low (< 0.60). Content may be thin or conflicting; consolidate sources, add Q&A summaries with citations, and retag per framework.
- Review queues are overloaded. Lower the auto‑suggest threshold (e.g., from 0.85 to 0.80) only after content quality improves, so more high‑confidence answers clear without review; use category overrides to focus expert time.
- Legal/Security says “citations missing.” Enforce “required citation” in the generation policy and limit allowed citation tags to “Legal‑Approved”/“Security‑Evidence.”
Change management
- Start with one high‑impact flow (e.g., security questionnaires). Roll out reviewer gates and RBAC, then expand to RFPs/DDQs.
- Train teams on approving vs. editing vs. exporting; reinforce “no public‑web sources.” For program materials, see Industry Guides & Playbooks and the Whitepaper.
Related resources
- Governance & safety: Responsible AI
- Access controls: Iris Permissions
- Security hub: InfoSec
- Integrations catalog: Integrations
- Accuracy & hallucinations: Preventing AI Hallucinations
- Case studies: Case Studies