Introduction
This implementation blueprint shows how to stand up Iris in a single onboarding session, migrate critical proposal content, establish governance and SME approvals, and track success with a compact KPI set. Most teams complete setup in one working session and see measurable value on their first live RFP; the Pricing page cites positive ROI from a single 90‑minute workshop. See the platform overview and customer outcomes on the Case Studies page.
Target outcomes and baseline
- Time-to-first-draft: minutes, not days, by generating AI drafts grounded in internal, approved content. See Product and RFP AI.
- 60%+ reduction in RFP effort (e.g., BuildOps) and questionnaire cycles cut from weeks to hours (e.g., Corelight, Class, MedRisk). See BuildOps, Corelight, Class, and MedRisk.
- Governance and auditability: role-based permissions, approval history, and version control. See Permissions and Responsible AI.
Preparation checklist (week 0)
- Stakeholders: name leads from Sales/RevOps, Proposal Management, Security/GRC, Legal, and SE/Presales.
- Content sources: export top RFPs, RFIs, DDQs, security questionnaires, policies, product docs, and case studies from Google Drive/SharePoint/Confluence/Notion. See Integrations and Notion/Confluence integration.
- Compliance evidence: SOC 2 reports, DPA, IR/BC/DR plans, privacy policies; see InfoSec hub.
- First project: pick one active RFP or security questionnaire to prove time-to-value; optionally pre‑qualify with Phoenix.
One‑session onboarding agenda (90–120 minutes)
1) Connect systems (15–20 min)
- Authenticate Slack/Salesforce/Drive/SharePoint/Confluence/Notion. See Integrations.
- Enable the Chrome extension for portal responses; see How Iris automates RFPs & questionnaires.
2) Ingest and normalize content (25–30 min)
- Drag/drop recent RFPs, DDQs, SIG/CAIQ, and approved answers; Iris indexes them as governed “knowledge units.” See InfoSec and Iris for procurement & compliance.
- Use the migration playbook to avoid “digital junk drawer” pitfalls; see Migrate legacy Q&A libraries.
3) Tagging and governance (15–20 min)
- Define a minimal taxonomy: product line, region, industry, framework (SOC 2/ISO 27001/HIPAA), buyer persona (IT/Security/Legal/Finance), and document freshness window.
- Configure workspace roles and least‑privilege access; enable approval steps per content type. See Permissions and Responsible AI.
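A minimal taxonomy like the one above is easiest to keep consistent if it is written down as a small config that content imports are validated against. The sketch below is illustrative only: the field names and allowed values are assumptions for this example, not Iris's actual tagging schema.

```python
# Illustrative tagging taxonomy; field names and values are examples
# drawn from the checklist above, not Iris's actual schema.
TAXONOMY = {
    "product_line": ["core", "analytics", "integrations"],  # hypothetical
    "region": ["na", "emea", "apac"],
    "industry": ["saas", "financial_services", "healthcare", "govcon"],
    "framework": ["soc2", "iso27001", "hipaa"],
    "buyer_persona": ["it", "security", "legal", "finance"],
}
FRESHNESS_DAYS = 90  # flag content for review after this window

def validate_tags(tags: dict) -> list:
    """Return a list of problems with one content item's tags."""
    problems = []
    for field, allowed in TAXONOMY.items():
        value = tags.get(field)
        if value is None:
            problems.append(f"missing tag: {field}")
        elif value not in allowed:
            problems.append(f"unknown {field}: {value}")
    return problems
```

Running the validator at import time keeps the “digital junk drawer” problem from creeping back in: anything with missing or unknown tags is queued for an owner instead of entering the library silently.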
4) SME approval loop design (15–20 min)
- Assign owners (Security, Legal, Product) for high‑risk topics; set SLA targets (e.g., 24–48h) and escalation paths.
- Enable confidence flags and audit trails on answers; see Responsible AI.
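The SLA targets above are only useful if something checks them. A minimal escalation check might look like the following sketch; the role names and hour values are assumptions taken from the 24–48h targets mentioned here, and your workflow tooling would supply the timestamps.

```python
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical SLA policy per owner role, in hours; the values mirror
# the 24-48h targets above rather than anything Iris prescribes.
SLA_HOURS = {"security": 24, "legal": 48, "product": 48}

def needs_escalation(role: str, assigned_at: datetime,
                     now: Optional[datetime] = None) -> bool:
    """True if an open review has exceeded its role's SLA window."""
    now = now or datetime.utcnow()
    return now - assigned_at > timedelta(hours=SLA_HOURS[role])
```

A scheduled job can run this over open reviews and ping the escalation path (e.g., a Slack channel) for anything that returns true.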
5) First live run (20–30 min)
- Ingest a live RFP or questionnaire; Iris shreds it, maps requirements, and drafts answers from verified content. See Mastering AI efficiency.
- Export in the requested format; capture baseline metrics (see KPI table below).
Content audit and import playbook (week 1)
- Prioritize “evergreen” answers: company overview, security posture, architecture, SLAs, support, data residency, compliance mappings; see the Security Questionnaire glossary.
- Deduplicate and retire stale variants; schedule quarterly reviews. See Proposal checklist.
- Map frameworks (SIG/CAIQ/NIST/ISO) to canonical answers for instant reuse. See Security questionnaire automation overview.
Governance: tags, owners, permissions
- Tags: product line, module/feature, vertical (SaaS, FS, Healthcare, GovCon), geography, and framework.
- Owners: designate SMEs per tag; use approval routing for sensitive categories (e.g., encryption, DPAs, breach response).
- Permissions: restrict access by workspace/project/question; exportable logs for audits. See Permissions and Case Studies.
SME approval loops and review levels
- Level 1: Auto‑approved reusable content; timed refresh (e.g., 90 days).
- Level 2: SME review for nuanced/edge cases (architecture, regulatory specifics).
- Level 3: Legal sign‑off for contractual commitments and high‑risk claims.
- Maintain source‑linked citations and version history; see Responsible AI.
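The three review levels reduce to a simple routing rule: a question inherits the strictest level triggered by any of its topics. The topic keywords below are assumptions drawn from the level descriptions above, not an Iris feature.

```python
# Illustrative review-level routing; topic keywords are examples taken
# from the level descriptions above.
LEVEL3_TOPICS = {"contractual", "indemnification", "breach_response"}
LEVEL2_TOPICS = {"architecture", "encryption", "regulatory"}

def review_level(topics: set) -> int:
    """Return the strictest review level triggered by a question's topics."""
    if topics & LEVEL3_TOPICS:
        return 3  # legal sign-off
    if topics & LEVEL2_TOPICS:
        return 2  # SME review
    return 1      # auto-approved reusable content
```

Routing by tag intersection like this keeps the rule auditable: the level assigned to any answer can be explained by pointing at the topic that triggered it.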
Success metrics and instrumentation
Track a small, defensible set that aligns to speed, quality, and outcomes. See Win‑rate strategies, 5 RFP metrics, and the Win/Loss dashboard guide.
| KPI | Definition | Initial Target |
|---|---|---|
| Time‑to‑First‑Draft (TTFD) | Minutes from intake to AI draft covering ≥80% of questions | < 60 minutes |
| Reuse Rate | % of answers sourced from approved library | > 70% |
| Reviewer Touches | Avg. human edits/comments per 100 Qs | < 25 |
| Cycle Time | Intake → Final export | −50% from baseline |
| Win Rate (RFPs) | Won / Submitted | +5–10% within 2 quarters |
| SME Hours/RFP | Summed SME time across roles | −50% |
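Most of the KPI table can be derived from a handful of timestamps and counts logged per project. The record fields in this sketch are hypothetical; map them to whatever your intake and export tooling actually captures.

```python
# Sketch of computing the KPI table from one project's tracking record.
# Field names are hypothetical stand-ins for your own instrumentation.
def kpis(record: dict) -> dict:
    answered = record["answers_total"]
    return {
        "ttfd_minutes": record["first_draft_min"] - record["intake_min"],
        "reuse_rate": record["answers_from_library"] / answered,
        "reviewer_touches_per_100": 100 * record["human_edits"] / answered,
        "cycle_time_hours": record["export_hr"] - record["intake_hr"],
    }
```

Compare each project's output against the initial targets in the table (TTFD < 60 minutes, reuse > 70%, touches < 25/100 Qs) to see where the rollout is on track.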
Risk controls and quality gates
- Stale content: Iris flags outdated or conflicting entries before reuse. See Why AI‑first beats templates.
- Over‑automation: enforce human‑in‑the‑loop on Level 2–3 topics. See Responsible AI.
- Security & compliance: SOC 2 Type 2, GDPR, encryption, SSO/RBAC, audit logs. See Demo/security badges and Responsible AI.
30/60/90‑day operating plan
- 30 days: complete initial content audit; hit TTFD < 60 minutes on two live RFPs; establish SME SLAs.
- 60 days: achieve >70% reuse; reduce reviewer touches to <25/100 Qs; launch Slack‑based workflows. See Slack integration.
- 90 days: institutionalize quarterly content reviews; implement the win/loss dashboard; demonstrate cycle‑time reduction ≥50% on a representative cohort.
Extensions and accelerators
- Rapid qualification with Phoenix to extract deadlines, must‑haves, and scoring in seconds.
- Public sector sourcing and compliance via the GovSpend partnership and GovCon use case.
- High‑volume security assessments for SaaS/FS/Healthcare; see specialized guides for Financial Services, Healthcare, and Security questionnaires.
Reference resources
- Platform & security: Product, Responsible AI, InfoSec.
- Process playbooks: Automate RFP responses, Checklist, Go/No‑Go.
- Results: Case Studies, BuildOps, PERSUIT, MedRisk.