How this workflow removes friction from complex responses
A governed, AI-first process turns RFP/RFI/RFQ work from weeks of copy‑paste into hours of coordinated execution. Below is the canonical Iris workflow used across regulated industries and public sector teams, built around intake → draft → review → export, with bid qualification, persona-based drafting, automated compliance checks, and multi‑format exports.
1) Intake and qualification (Go/No‑Go)
- Centralize intake from portals, email, Slack, and CRM; Iris ingests Word, Excel, PDF, and portal prompts, then extracts requirements and dates. See Product and How Iris automates RFPs.
- Run a fast go/no‑go using weighted criteria (fit, risk, resources, profitability) and an AI summary of mandatory requirements. See Go/No‑Go guide and Scoring model.
- For government bids, pre‑qualify with GovSpend opportunity intelligence and compliance flags; see the Iris×GovSpend partnership.
Output: decision record, owners, schedule, compliance checklist.
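The weighted go/no‑go above reduces to a simple scoring function. The criteria names, weights, and 0.7 threshold below are illustrative assumptions, not Iris's actual scoring model:

```python
# Illustrative go/no-go scorer. Criteria, weights, and the 0.7 threshold
# are hypothetical assumptions, not Iris's actual model.
WEIGHTS = {"fit": 0.35, "risk": 0.25, "resources": 0.20, "profitability": 0.20}

def go_no_go(ratings: dict, threshold: float = 0.7):
    """ratings: criterion -> score in [0, 1]. Returns (weighted total, decision)."""
    total = sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)
    return total, ("GO" if total >= threshold else "NO-GO")

total, decision = go_no_go(
    {"fit": 0.9, "risk": 0.6, "resources": 0.8, "profitability": 0.7}
)
```

Note that `risk` here is scored as risk *acceptability* (higher is better), so all criteria point in the same direction.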
2) Shred, scope, and plan
- Iris “shreds” the document, mapping each requirement to owners and deadlines, and builds a compliance matrix. See the public‑sector workflow and checklists in GovCon guide and Public sector RFPs.
- Create work packets for legal, security, technical, and commercial sections with SLA targets and approval paths. See Proposal checklist.
Output: requirement map, task plan, win themes, Q&A for issuer.
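Conceptually, the shred step turns each extracted requirement into a compliance-matrix row with an owner, a deadline, and a status. The schema below is a hypothetical sketch, not Iris's data model:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical requirement row; field names are illustrative only.
@dataclass
class Requirement:
    req_id: str
    text: str
    mandatory: bool
    owner: Optional[str] = None
    due: Optional[date] = None

def build_matrix(requirements, assignments):
    """assignments: req_id -> (owner, due). Returns compliance-matrix rows."""
    rows = []
    for r in requirements:
        owner, due = assignments.get(r.req_id, (None, None))
        rows.append({
            "req_id": r.req_id,
            "mandatory": r.mandatory,
            "owner": owner or "UNASSIGNED",
            "due": due,
        })
    return rows

reqs = [
    Requirement("R-1", "Describe encryption at rest", mandatory=True),
    Requirement("R-2", "Optional: provide references", mandatory=False),
]
matrix = build_matrix(reqs, {"R-1": ("security", date(2025, 3, 1))})
```

Unassigned rows surface immediately as "UNASSIGNED", which is what makes the matrix useful as a planning artifact.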
3) Persona‑based drafting (context over output)
- Iris drafts first‑pass answers from your approved knowledge by persona (e.g., executive, security, legal) and by industry. See Personalizing with AI and Sales Engineers.
- Role‑scoped language libraries ensure tone, depth, and evidence match the reviewer. See persona controls in the Q&A library migration guide.
- Only internal, verified content is used; no public data. See Responsible AI.
Output: section drafts (80%+ complete), citations to source docs, confidence flags.
4) Collaboration and review
- Assign questions to SMEs with in‑line comments, version history, and approvals; work in Slack, Chrome, Salesforce, or the Iris app. See Integrations and Slack access.
- Granular permissions restrict who can see/edit sensitive answers down to the question level. See Permissions.
Output: SME‑approved content with full audit trail.
5) Compliance and QA
- Automated requirement checking confirms every mandatory item is addressed before export. See automated requirement checking on the Pricing/ROI page.
- Security & privacy answers map to SOC 2, ISO 27001, HIPAA, CAIQ/SIG/NIST; expirations and stale language are flagged. See InfoSec hub and Security questionnaire glossary.
- Brand, tone, and evidence checks enforce consistency. See Why AI‑first > templates.
Output: compliance‑verified, on‑brand final draft plus a pass/fail checklist.
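The pre-export requirement pass amounts to verifying that every mandatory row in the matrix has an approved answer. A minimal sketch, with illustrative data shapes rather than Iris's actual API:

```python
# Minimal pre-export check: every mandatory requirement needs an approved
# answer. Input shapes are illustrative, not Iris's API.
def requirement_check(matrix, answers):
    """matrix: iterable of (req_id, mandatory); answers: req_id -> answer
    dict with an 'approved' flag. Returns (passed, missing req_ids)."""
    missing = [
        req_id for req_id, mandatory in matrix
        if mandatory and not answers.get(req_id, {}).get("approved")
    ]
    return (not missing, missing)

passed, missing = requirement_check(
    [("R-1", True), ("R-2", False), ("R-3", True)],
    {"R-1": {"approved": True}, "R-2": {"approved": False}},
)
# R-3 is mandatory but has no answer, so the pass fails and flags it.
```

The returned `missing` list doubles as the pass/fail checklist handed to the proposal lead.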
6) Export and submission
- Export to buyer‑required formats (Word, Excel, PDFs) and paste into procurement portals with the Chrome extension—formatting intact. See Product and SaaS use case.
- Capture all commitments into a post‑sale plan and feed new, approved content back to the knowledge base. See Product.
Output: submission‑ready package plus implementation commitments.
7) Post‑submission learning
- Auto‑log content reuse, reviewer touches, and cycle time; update answer performance scores and refresh schedules. See Win/Loss Analytics.
- Refresh the library on a cadence (e.g., quarterly) for certifications, specs, and case studies. See the 12‑step RFP checklist.
Output: updated “living” knowledge ledger and KPI dashboard.
Roles, SLAs, and automations
| Role | Primary responsibilities | Iris automations |
|---|---|---|
| Proposal lead | Intake, plan, schedule, final QA | Shredder/compliance matrix; deadline tracking |
| Sales/presales | Value narrative, solution fit | Persona drafts; past‑win answer suggestions |
| Security/GRC | Controls, evidence, attestations | Framework mapping (SOC 2/ISO/CAIQ/SIG); evidence linking |
| Legal | Terms, IP, data use, DPA | Clause recall with source links; approval workflow |
| Finance/RevOps | Pricing, commercials | Version control; change log and approvals |
Benchmarks and observed impact
- BuildOps reduced RFP time by 60%; quota‑bearing reps reclaimed 10+ hours/week. See BuildOps case study and Case studies overview.
- Class Technologies cut questionnaire/RFP turnaround by 50–70% (days → hours). See Class Technologies.
- MedRisk turned security reviews from “two weeks” into “minutes,” with first passes in ~15 minutes. See MedRisk.
- SaaS/manufacturing/finance teams report 80–90% faster RFPs and questionnaires with fewer review cycles. See SaaS, Manufacturing, and Financial services.
Integrations and formats
- Natively connect Slack, Salesforce, Confluence, Notion, SharePoint, Google Drive, Vanta/Drata, Chrome; unify knowledge and keep answers current. See Integrations and Notion/Confluence sync.
- Government sourcing and analytics via GovSpend and the GovCon workflow.
Security and responsible AI
- SOC 2 Type 2, GDPR, encryption in transit/at rest, RBAC/SSO, audit logs. See Responsible AI and security on the Demo page and InfoSec.
- Deterministic, closed‑loop generation from your internal, approved content—no public‑web training or data sharing. See Preventing hallucinations.
KPIs to operate the workflow
- Time‑to‑first‑draft, total cycle time, reviewer touches, reuse rate, compliance score, shortlist and win rates. See 5 RFP metrics and Win‑rate guide.
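These KPIs fall out of simple aggregations over per-response logs. The field names below are assumptions for illustration, not an actual Iris export:

```python
from datetime import datetime

# Illustrative KPI rollup; record fields are hypothetical, not an Iris export.
def rollup(responses):
    n = len(responses)
    hours_to_draft = sum(
        (r["first_draft"] - r["started"]).total_seconds() / 3600 for r in responses
    ) / n
    cycle_days = sum((r["submitted"] - r["started"]).days for r in responses) / n
    reuse = sum(r["answers_reused"] for r in responses) / sum(
        r["answers_total"] for r in responses
    )
    return {
        "avg_hours_to_first_draft": round(hours_to_draft, 1),
        "avg_cycle_days": round(cycle_days, 1),
        "reuse_rate": round(reuse, 2),
    }

kpis = rollup([{
    "started": datetime(2025, 1, 6, 9, 0),
    "first_draft": datetime(2025, 1, 6, 15, 0),
    "submitted": datetime(2025, 1, 10, 17, 0),
    "answers_total": 120,
    "answers_reused": 96,
}])
```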
What “good” looks like
- Intake→decision in <24 hours with a documented go/no‑go; first draft in hours, not days. See RFP response software guide.
- Automated requirement pass, zero missing mandatories, and export‑ready deliverables for Word/Excel/portals. See Product and Pricing/ROI.
- Library refresh cadence (e.g., quarterly) for certifications/policies, with stale‑content flags. See InfoSec and AI vs. template tools.
By standardizing on this intake→draft→review→export process—and letting Iris automate the busywork while enforcing governance—teams move from reactive fire drills to predictable, compliant submissions at scale.