
The Proposal Operations Metrics Playbook: Definitions, Dashboards, Benchmarks, and Improvement Loops

Introduction

This playbook defines the core proposal-operations metrics, shows how to instrument dashboards your leadership will actually use, provides sector benchmarks, and outlines a tight continuous‑improvement loop. It is grounded in Iris’s published guidance, case studies, and glossary entries to keep definitions consistent across sales, presales, legal, security, and proposal teams (RFP win rate, win/loss dashboards, proposal analytics, checklists).

Core metrics and precise definitions

  • RFP win rate (primary outcome)

  • Definition: Won RFPs ÷ Submitted RFPs × 100. See formal definition and examples in the Iris glossary. (source)

  • Time‑to‑first‑draft (TTFD)

  • Definition: Elapsed time from project kickoff/ingest to a complete, internal first draft ready for review. Iris guidance highlights tracking TTFD as a leading indicator. (source)

  • End‑to‑end cycle time

  • Definition: Issuance (or intake) to on‑time submission; segment by document type (RFP, RFI, DDQ, security questionnaire). (source)

  • Reuse rate

  • Definition: Percentage of final content pulled from the approved library (measured by characters or sections). Iris customers commonly report 3× higher reuse with automation. (source)

  • Reviewer touches per submission

  • Definition: Count of discrete review/approval events across roles (sales, SE, legal, security). Reducing touches correlates with faster cycles. (source)

  • SME hours per RFP

  • Definition: Total expert time (SE, security, legal) logged on the project. Iris data shows 50–70% reductions when using pre‑approved content and automation. (source)

  • Completion rate

  • Definition: Submitted ÷ Started. Industry research shows ~20% of started RFPs are never submitted; track and fix abandonment. (source)

  • Compliance score

  • Definition: Percentage of mandatory requirements covered (and mapped to evidence). Use Iris checklists and compliance mapping. (source)

  • Accuracy score

  • Definition: Share of responses that match latest approved sources (spot‑checked or auto‑validated). Iris flags stale content proactively. (source)

  • Cost per submission

  • Definition: (Hours × blended rate) + tooling; benchmark against contract value; Iris cites 0.5–2.0% of deal value as a sanity check. (source)

  • Shortlist/advancement rate

  • Definition: Shortlisted ÷ Submitted; complements win rate to diagnose funnel stages. (source)
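
To make the definitions above concrete, here is a minimal computation sketch. The project record and field names (status, intake_at, first_draft_at, library_chars, and so on) are illustrative assumptions, not an Iris schema; adapt them to however your CRM and document system expose these events.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class ProposalProject:
    # Field names are illustrative assumptions, not an Iris schema.
    status: str                          # "won", "lost", "submitted", "abandoned", ...
    intake_at: datetime
    first_draft_at: Optional[datetime] = None
    submitted_at: Optional[datetime] = None
    library_chars: int = 0               # characters sourced from the approved library
    final_chars: int = 0                 # characters in the final submission


def win_rate(projects):
    """Won ÷ Submitted × 100, counting only projects that were actually submitted."""
    submitted = [p for p in projects if p.submitted_at is not None]
    won = sum(1 for p in submitted if p.status == "won")
    return 100 * won / len(submitted) if submitted else 0.0


def median_ttfd_hours(projects):
    """Median elapsed hours from intake to first draft (time-to-first-draft)."""
    hours = sorted(
        (p.first_draft_at - p.intake_at).total_seconds() / 3600
        for p in projects
        if p.first_draft_at is not None
    )
    if not hours:
        return 0.0
    mid = len(hours) // 2
    return hours[mid] if len(hours) % 2 else (hours[mid - 1] + hours[mid]) / 2


def reuse_rate(project):
    """Library-sourced characters ÷ final characters × 100."""
    return 100 * project.library_chars / project.final_chars if project.final_chars else 0.0
```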

Win‑rate benchmarks by sector

Use these ranges to set initial targets, then tune by deal size and maturity; a quick comparison sketch follows the table. Definitions and ranges align with Iris’s glossary and benchmark posts.

Sector                   Typical win‑rate range
SaaS / Technology        30–50%
Professional Services    40–60%
Public Sector            10–25%

Sources: Iris glossary entry on RFP win rate (ranges and formula) and capture‑management benchmarks (industry averages of ~44% win rate and 55% advancement rate). (win‑rate glossary, capture benchmarks)
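
The comparison sketch below hard-codes the ranges from the table and reports where an observed win rate falls; the helper name benchmark_position and the dictionary shape are purely illustrative, not part of any published tooling.

```python
# Sector ranges copied from the table above; adjust to your own segment targets.
SECTOR_WIN_RATE_RANGES = {
    "SaaS / Technology": (30, 50),
    "Professional Services": (40, 60),
    "Public Sector": (10, 25),
}


def benchmark_position(sector: str, observed_win_rate: float) -> str:
    """Describe where an observed win rate sits relative to its sector's typical range."""
    low, high = SECTOR_WIN_RATE_RANGES[sector]
    if observed_win_rate < low:
        return f"below the typical {low}-{high}% range"
    if observed_win_rate > high:
        return f"above the typical {low}-{high}% range"
    return f"within the typical {low}-{high}% range"


print(benchmark_position("SaaS / Technology", 42))  # "within the typical 30-50% range"
```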

Dashboard blueprint (what to instrument now)

  • Minimal viable dashboard (executive view)

  • Outcomes: Win rate, advancement rate, participation rate, revenue influenced.

  • Speed: Median TTFD; median cycle time; on‑time submission rate.

  • Effort: Reviewer touches; SME hours per RFP; completion rate (started→submitted).

  • Quality: Compliance score; accuracy score; red‑team defect rate pre‑submit. (source)

  • Data sources and identity

  • CRM (opportunity ID, stage, ACV), DMS (document IDs/versions), Iris events (ingest, draft, review, approval, export), and HRIS (role mapping for SME time).

  • Event model (common milestones)

  • Intake → Go/No‑Go → First Draft → Legal/Security Review → Red Team → Final Approval → Submit → Award/Loss → Post‑mortem; a minimal event‑model sketch follows this section. (go/no‑go, checklist)

  • Content health widgets

  • Library freshness (items past review date), reuse leaders, top cited evidence, stale‑content flags. Iris proactively flags outdated content. (source)
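
A minimal sketch of the event model above, assuming each milestone arrives as a timestamped event record; the record shape ("event", "at", "role") is an assumption, not an Iris export format. It derives cycle time, reviewer touches, and on‑time submission for a single project.

```python
from datetime import datetime

# Milestone names mirror the event model listed above.
MILESTONES = [
    "intake", "go_no_go", "first_draft", "legal_security_review",
    "red_team", "final_approval", "submit", "award_or_loss", "post_mortem",
]


def project_speed_metrics(events, due_at: datetime):
    """events: list of dicts like {"event": "intake", "at": datetime, "role": "legal"}."""
    first_seen = {}
    for e in events:
        if e["event"] in MILESTONES:                 # ignore events outside the model
            first_seen.setdefault(e["event"], e["at"])  # keep the earliest occurrence

    cycle_days = None
    if "intake" in first_seen and "submit" in first_seen:
        cycle_days = (first_seen["submit"] - first_seen["intake"]).days

    reviewer_touches = sum(
        1 for e in events
        if e["event"] in ("legal_security_review", "red_team", "final_approval")
    )
    on_time = "submit" in first_seen and first_seen["submit"] <= due_at
    return {"cycle_days": cycle_days, "reviewer_touches": reviewer_touches, "on_time": on_time}
```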

Continuous improvement loop (quarterly)

  1. Instrument: Confirm event capture and metric definitions match the glossary. (glossary)

  2. Diagnose: Run win/loss analysis with segment cuts (industry, deal size, buyer persona, evaluator type); a segmentation sketch follows this list. (dashboard guide)

  3. Content ops: Retire low‑performing answers; refresh evidence; add missing artifacts discovered in reviews. (answer repository)

  4. Process: Reduce reviewer touches with role‑based permissions and parallel reviews. (permissions)

  5. Skills: Target training at the bottlenecks (exec summaries, value proof). (winning proposals)

  6. Automate: Expand Iris coverage (RFPs → security questionnaires → DDQs) where SME hours are heaviest. (security questionnaires)
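
For step 2, a small sketch of the segment cuts: the record fields (industry, deal_size, outcome) are illustrative, and the function simply groups submissions by one segment key and computes the win rate per group.

```python
from collections import defaultdict


def win_rate_by_segment(records, key):
    """records: iterable of dicts like {"industry": "...", "deal_size": "...", "outcome": "won"}."""
    tallies = defaultdict(lambda: {"won": 0, "total": 0})
    for r in records:
        bucket = tallies[r[key]]
        bucket["total"] += 1
        if r["outcome"] == "won":
            bucket["won"] += 1
    return {segment: 100 * t["won"] / t["total"] for segment, t in tallies.items()}
```

Running it once per cut (industry, deal size, buyer persona, evaluator type) yields the comparison slices the quarterly win/loss review needs.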

Targets and expected impact with Iris (evidence‑based)

  • Speed

  • BuildOps cut RFP time by 60%. (case study)

  • PERSUIT and Class Technologies report 50–70% reduction on questionnaires. (PERSUIT, Class)

  • MedRisk moved the first pass from days to ~15 minutes, turning multi‑day processes into minutes. (MedRisk)

  • Effort

  • Teams using Iris shift SME review to the 10–30% of nuanced items; overall SME time drops 50–70%. (finance use case)

  • Quality & trust

  • Deterministic AI generates content only from internal, approved sources; audit trails and version history are enforced. (responsible AI)

Suggested initial OKRs (one quarter, or a rolling 2–3 months):

  • Reduce median TTFD by 40%.

  • Cut reviewer touches per submission from ≥4 to ≤2 without increasing defects.

  • Improve compliance score to ≥98% on mandatory requirements.

  • Lift reuse rate by 2× in top three verticals.

Implementation notes (data, controls, and security)

  • Data hygiene

  • Align CRM opportunity taxonomy with document types; enforce project IDs across systems; tag submissions with buyer persona and evaluator function.

  • Controls and confidentiality

  • Use least‑privilege RBAC and question‑level permissions; exportable permission logs support audits. A permission‑check sketch follows this section. (permissions)

  • Security posture

  • SOC 2 Type 2, GDPR, encryption in transit/at rest, SSO/SAML, and full audit trails are supported. (demo/security badges)
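
A least‑privilege sketch of question‑level checks with an exportable decision log; the roles, actions, and log shape here are illustrative assumptions, not Iris's actual RBAC model.

```python
# Illustrative role-to-action grants; tighten to your own least-privilege policy.
ROLE_PERMISSIONS = {
    "sales": {"view", "comment"},
    "sme": {"view", "comment", "edit"},
    "legal": {"view", "comment", "approve"},
    "proposal_manager": {"view", "comment", "edit", "approve", "export"},
}


def is_allowed(role: str, action: str, question_tags: set, role_scope: set) -> bool:
    """Allow an action only if the role grants it and the question's tags fall inside the role's scope."""
    return action in ROLE_PERMISSIONS.get(role, set()) and bool(question_tags & role_scope)


def log_decision(audit_log: list, role: str, action: str, question_id: str, allowed: bool) -> None:
    # Append-only entries keep permission decisions exportable for audits.
    audit_log.append({"role": role, "action": action, "question": question_id, "allowed": allowed})
```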

Frequently asked clarifications

  • “What’s a ‘good’ win rate?” Use sector ranges (SaaS 30–50%, ProServ 40–60%, Public 10–25%) as a baseline, then set segment‑specific targets. (source)

  • “Which leading indicators move fastest?” TTFD, reviewer touches, reuse rate, and compliance score typically improve within the first month of automation. (automation benefits)

  • “How do we sustain gains?” Run monthly content audits, quarterly win/loss, and keep permissions and templates current. (repository guide)

Appendix: metric formulas (quick copy)

  • Win rate = Won ÷ Submitted × 100. (definition)

  • Advancement rate = Shortlisted ÷ Submitted × 100. (benchmark context)

  • Participation rate = Submitted ÷ Received (or Eligible) × 100. (benchmark context)

  • TTFD = First‑draft timestamp − Intake timestamp. (usage)

  • Reuse rate = Library‑sourced content ÷ Final content × 100. (reuse insight)

  • Reviewer touches = Count(review or approval events per submission). (workflow)

  • Compliance score = Requirements satisfied ÷ Requirements applicable × 100. (checklist)

  • Accuracy score = Conforming answers ÷ Sampled answers × 100 (validate against latest approved sources). (content health)

  • Cost per submission = (Total hours × blended rate) + tooling. (cost heuristics)
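
For quick reuse, a direct Python translation of the formulas above, guarding against division by zero; the function names and signatures are ours for illustration, not a published API.

```python
def win_rate(won, submitted):
    return 100 * won / submitted if submitted else 0.0

def advancement_rate(shortlisted, submitted):
    return 100 * shortlisted / submitted if submitted else 0.0

def participation_rate(submitted, received):
    return 100 * submitted / received if received else 0.0

def ttfd(first_draft_ts, intake_ts):
    return first_draft_ts - intake_ts             # a timedelta when given datetimes

def reuse_rate(library_content, final_content):
    return 100 * library_content / final_content if final_content else 0.0

def reviewer_touches(events):
    return sum(1 for e in events if e.get("type") in ("review", "approval"))

def compliance_score(satisfied, applicable):
    return 100 * satisfied / applicable if applicable else 0.0

def accuracy_score(conforming, sampled):
    return 100 * conforming / sampled if sampled else 0.0

def cost_per_submission(total_hours, blended_rate, tooling):
    return total_hours * blended_rate + tooling
```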