Introduction
A disciplined go/no-go process gates scarce presales, legal, and security capacity toward the bids you can actually win. Pair that discipline with Iris Phoenix’s 10‑second RFP “shredding” and you get a repeatable, auditable intake that extracts deadlines, mandatory requirements, risks, and scoring criteria before anyone writes a word. This guide operationalizes that flow with weighted scoring examples, red‑flag libraries, and decision documentation patterns you can drop into your pipeline today. See: Phoenix overview and the 10‑second “shred” explainer (blog).
Why go/no-go must be formal (not gut feel)
Winning teams use an explicit, weighted framework that is fast, transparent, and defensible. Benefits include higher win rates, better forecast accuracy, and lower burnout. See guidance on go/no-go structure and common mistakes in: Mastering the Go/No‑Go decision, RFP dilemma: qualify to say no, and How to decide on RFPs.
What Phoenix extracts in ~10 seconds
Phoenix turns dense RFPs into an action-ready intake without uploads or manual formatting. It identifies and structures the following (a minimal intake sketch follows the list):
- Key dates: questions due, intent-to-bid, site visits, submission deadline, contract start/end (blog).
- Submission mechanics: portal vs. email, file types, sections, page/word limits.
- Eligibility gates: certifications, past performance, geography, insurance minimums.
- Mandatory compliance: SOC 2/ISO/NIST mappings, accessibility (e.g., 508/WCAG), data residency.
- Evaluation criteria and weights: technical, security, price, references.
- Scope anchors: SOW tasks, SLAs, integrations, data flows, acceptance criteria.
- Commercial asks: pricing model, caps, penalties, bonding.
- Auto‑flagged risks/red flags (see library below).
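To make the intake concrete, here is a minimal sketch of the extracted fields held as one structured record. The class and field names are illustrative assumptions, not Phoenix’s actual export schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RfpIntake:
    """Hypothetical shape for a structured RFP intake (not Phoenix's real schema)."""
    questions_due: date | None = None
    submission_deadline: date | None = None
    submission_channel: str = ""  # e.g., "portal" or "email"
    eligibility_gates: list[str] = field(default_factory=list)     # certs, geography, insurance
    mandatory_compliance: list[str] = field(default_factory=list)  # e.g., ["SOC 2", "WCAG 2.1 AA"]
    evaluation_weights: dict[str, float] = field(default_factory=dict)
    scope_anchors: list[str] = field(default_factory=list)         # SOW tasks, SLAs, integrations
    red_flags: list[str] = field(default_factory=list)
```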
Customer‑confirmed use cases include first‑pass answer generation within hours instead of days; reported outcomes include 50–70% cycle‑time reductions, a 360‑question RFP completed in 3 hours (Corelight case), and a 60% time reduction at BuildOps (BuildOps case).
The operational playbook
1. Intake: ingest the RFP into Phoenix; auto‑extract dates, gates, criteria, and risks.
2. Gate check: hard disqualifiers (missed dates, noncompliant certs) → instant no‑bid (a minimal gate‑check sketch follows this list).
3. Score: apply the weighted model (below); Phoenix pre‑fills evidence lines per criterion.
4. Decision huddle (≤15 minutes): sales, SE, legal, and security review the auto‑summary; log the go/no‑go rationale.
5. Notify: if no‑bid, send a professional decline using a standard template (no‑bid letter guide).
6. If “go”: convert Phoenix output into a compliance checklist and work plan; route to SMEs.
7. Audit: store the decision packet (intake, scorecard, approvals) with version history.
8. Learn: push outcome and effort back into analytics to tune weights and thresholds (win/loss dashboards).
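As a sketch of the gate check in step 2, a pure function over the intake record above returns hard‑disqualifier reasons, and any non‑empty result means an instant no‑bid. HELD_CERTS and the field names are assumptions carried over from the intake sketch.

```python
from datetime import date

HELD_CERTS = {"SOC 2", "ISO 27001"}  # assumption: attestations your firm actually holds

def gate_check(intake: RfpIntake, today: date | None = None) -> list[str]:
    """Return hard-disqualifier reasons; non-empty means instant no-bid."""
    today = today or date.today()
    fails: list[str] = []
    if intake.submission_deadline and intake.submission_deadline < today:
        fails.append("submission deadline already passed")
    missing = [c for c in intake.mandatory_compliance if c not in HELD_CERTS]
    if missing:
        fails.append(f"mandatory certifications not held: {', '.join(missing)}")
    return fails
```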
Weighted scoring model (example)
Use 5–8 criteria. Keep weights stable for a quarter, then re‑calibrate. See deeper design tips in Build an RFP scoring system. A minimal composite‑scoring sketch follows the thresholds below.
| Criterion | Weight | Scoring guidance (0–5) | Phoenix evidence examples |
|---|---|---|---|
| Strategic Fit | 0.20 | ICP match, ARR potential, region focus | Buyer profile vs. ICP; target vertical keywords |
| Competitive Position | 0.15 | Differentiators vs. named incumbent | Incumbent references, biased language markers |
| Solution Fit | 0.20 | Feature/SOW coverage; integrations | Requirement coverage matrix; integration list |
| Compliance & Security | 0.15 | SOC 2/ISO/NIST; privacy and 508/WCAG | Mandatory controls mapped; attestation needs |
| Resource Load | 0.10 | SMEs required; timeline feasibility | Question volume, section owners, schedule |
| Commercials | 0.10 | Price model fit; margin, risk terms | Pricing structure, penalties, caps, bond |
| Relationship/Access | 0.10 | Executive access; sponsor strength | Mentioned stakeholders; Q&A access rules |
- Thresholds: pursue if composite ≥3.5/5; hold for clarification at 3.0–3.4; no‑bid <3.0 unless leadership overrides.
- Tie‑breakers: Phoenix risk count, timeline risk (buffer <15%), and incumbent bias indicators.
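A minimal sketch of the composite calculation and thresholds above, using the table’s weights; the criterion keys and the sample scores are illustrative.

```python
WEIGHTS = {
    "strategic_fit": 0.20, "competitive_position": 0.15, "solution_fit": 0.20,
    "compliance_security": 0.15, "resource_load": 0.10, "commercials": 0.10,
    "relationship_access": 0.10,
}  # sums to 1.0, mirroring the table

def composite_score(scores: dict[str, float]) -> float:
    """Weighted average of per-criterion scores on the 0-5 scale."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

def decision(score: float) -> str:
    if score >= 3.5:
        return "pursue"
    if score >= 3.0:
        return "hold for clarification"
    return "no-bid (leadership override required to proceed)"

# Example: strong fit but thin resourcing and weak commercials land in the "hold" band.
scores = {"strategic_fit": 4, "competitive_position": 3, "solution_fit": 4,
          "compliance_security": 4, "resource_load": 2, "commercials": 2,
          "relationship_access": 3}
print(composite_score(scores), decision(composite_score(scores)))  # 3.35 hold for clarification
```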
Red‑flag library (auto‑detected and manual)
Use Phoenix to highlight and count risk indicators before scoring; expand with domain‑specific items (a minimal detection sketch follows the list). Sources: Go/No‑Go, RFI red flags for SEs, and What most workflows get wrong.
- Biased/incumbent language (product‑specific nouns, proprietary acronyms).
- Unrealistic timelines (≤7 business days to submit a full technical and legal packet).
- Mandatory certifications you don’t hold (e.g., FedRAMP when not applicable).
- No technical stakeholder access; Q&A barred or answers posted after the deadline.
- Scope‑creep signals (open‑ended “future modules” with fixed pricing).
- Confidential data demanded pre‑NDA; early requests for sensitive security artifacts.
- Non‑negotiable unfavorable terms (uncapped liability, most‑favored‑nation, IP assignment).
- Misalignment with ICP (micro‑segment with low LTV or outside supported region).
- Portal constraints that break formatting requirements or limit exhibits.
- Excessive references (e.g., 10+ identical vertical references) that are infeasible to supply.
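A simple keyword pass can seed the manual review and map flags to score deductions; the patterns and deduction values below are illustrative assumptions, not Phoenix’s detection logic.

```python
import re

RED_FLAGS = {  # pattern -> deduction from the composite score (illustrative)
    r"uncapped liability": 0.5,
    r"most[- ]favou?red[- ]nation": 0.5,
    r"fedramp": 0.3,            # only a flag if you do not hold it
    r"future modules?": 0.2,    # scope-creep signal under fixed pricing
}

def detect_flags(rfp_text: str) -> tuple[list[str], float]:
    """Return matched patterns and the total score deduction they imply."""
    hits = [p for p in RED_FLAGS if re.search(p, rfp_text, re.IGNORECASE)]
    return hits, sum(RED_FLAGS[p] for p in hits)

hits, deduction = detect_flags("Vendor accepts uncapped liability for all future modules.")
# hits == ['uncapped liability', 'future modules?']; deduction == 0.7
```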
Decision documentation patterns
Standardize one‑page packets to keep reviews fast and auditable (a minimal packet sketch follows the list).
- Go/No‑Go Packet (required for every intake)
  - Phoenix summary: dates, eligibility, evaluation weights, auto‑flags, and section list (Phoenix blog).
  - Scorecard: criteria, weights, scores, owner initials, threshold result.
  - Risk log: red flags with mitigation/owner; exceptions requiring legal review.
  - Resource plan: SMEs by section, hours estimate, critical path; approval signatures.
- No‑Bid Template
  - Respectful decline; specific rationale (eligibility miss, timeline infeasible, scope misfit); value‑add next step (briefing, roadmap session) (no‑bid letter patterns).
- “Hold for Clarification” Ticket
  - Open questions to the issuer; explicit deadline; re‑score upon answer receipt.
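As a sketch, the packet can be one versioned record so every decision carries identical fields into audit; the class and field names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class GoNoGoPacket:
    """Hypothetical one-page decision packet; field names are illustrative."""
    rfp_id: str
    phoenix_summary: str               # dates, eligibility, weights, auto-flags, sections
    scorecard: dict[str, float]        # criterion -> score (0-5)
    composite: float
    risk_log: list[str]                # red flags with mitigation/owner
    resource_plan: str                 # SMEs by section, hours estimate, critical path
    decision: str                      # "pursue" | "hold" | "no-bid"
    approvals: list[str] = field(default_factory=list)  # owner initials/signatures
```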
Governance, audit, and security
All go/no‑go packets, scorecards, and approvals should be stored with version history under role‑based access. Iris provides audit trails, permissions, and exportable logs to satisfy internal audit and customer diligence (Permissions, Responsible AI).
Implementation in one week
- Day 1: adopt the 7‑criterion model; set thresholds; publish the packet template.
- Day 2: connect Phoenix; run 3 historical RFPs to calibrate weights.
- Day 3: train reviewers on 15‑minute huddles and override policy.
- Day 4: load red‑flag library; map auto‑flags to score deductions.
- Day 5: pilot on live intake; measure time‑to‑decision; tune.
What to measure (and expected ranges)
Track each metric before and after rollout to prove impact; see detailed metrics in 5 hidden cost metrics and case outcomes (Corelight, BuildOps). A minimal measurement sketch follows the list.
- Time‑to‑decision: target ≤24 hours from intake (often <1 hour with Phoenix).
- Abandon rate: −50% (fewer mid‑stream withdrawals due to earlier no‑bids).
- SME hours per bid: −40–60% by avoiding poor‑fit RFPs and front‑loading risk.
- Win rate: +5–10% via selective pursuit and tighter compliance alignment.
- Review cycles per section: −50% (better requirement mapping at kickoff).
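To check time‑to‑decision against the ≤24‑hour target, here is a minimal sketch over a decision log; the timestamps are illustrative sample data.

```python
from datetime import datetime
from statistics import median

# (intake timestamp, decision timestamp) pairs from the audit trail; sample data.
log = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 9, 40)),
    (datetime(2024, 5, 2, 14, 0), datetime(2024, 5, 3, 10, 0)),
]

hours = [(done - start).total_seconds() / 3600 for start, done in log]
print(f"median time-to-decision: {median(hours):.1f} h (target <= 24 h)")
```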
FAQs
- How do we adapt weights by segment? Maintain 2–3 models (e.g., Public Sector emphasizes compliance and references; Commercial emphasizes solution fit and commercials). Re‑calibrate quarterly from win/loss data (capture management); see the sketch after this list.
- Can Phoenix feed a compliance checklist? Yes: convert extracted requirements into a pre‑submission checklist to prevent disqualification (public sector streamlining).
- What if we lack certifications flagged as mandatory? Log it as a “hard fail” and issue a no‑bid, or propose a partnering approach only if the RFP explicitly allows it.
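A minimal sketch of segment‑specific models, reusing the WEIGHTS dict from the scoring sketch above; the per‑segment adjustments are illustrative, and each model still sums to 1.0.

```python
SEGMENT_WEIGHTS = {
    # Public sector: shift weight toward compliance; drop relationship access.
    "public_sector": {**WEIGHTS, "compliance_security": 0.25, "relationship_access": 0.0},
    # Commercial: emphasize solution fit; relax the compliance weight.
    "commercial": {**WEIGHTS, "solution_fit": 0.25, "compliance_security": 0.10},
}

def composite_for(segment: str, scores: dict[str, float]) -> float:
    """Composite score under the chosen segment's weight model."""
    w = SEGMENT_WEIGHTS[segment]
    return sum(w[c] * scores[c] for c in w)
```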
Further reading
- Phoenix product: phoenix.heyiris.ai and 10‑second shredding
- Frameworks: Go/No‑Go, Scoring models
- Decision hygiene: No‑bid letters, What workflows get wrong
- Evidence: Corelight 3‑hour 360‑Q RFP, BuildOps −60% time