Many organizations struggle with inconsistent risk scoring—different people rating similar risks differently, or scores that don't reflect actual risk levels. This makes prioritization unreliable and undermines trust in the entire risk register.
What You'll Achieve
By the end of this tutorial, you will have defined likelihood and impact scales tailored to your organization, calibrated them with real examples, and scored at least one risk using a consistent, defensible methodology.
Step 1: Define Your Scales
Start by establishing clear scales for likelihood and impact. Most organizations use a 5-point scale, though 3-point or 4-point scales also work. The key is consistency and clarity.
Likelihood Scale
Define how probable the risk is over a specific time period (typically one year):
| Rating | Label | Definition | Probability |
|---|---|---|---|
| 1 | Rare | May occur only in exceptional circumstances | <5% |
| 2 | Unlikely | Could occur but not expected | 5-20% |
| 3 | Possible | Might occur at some point | 20-50% |
| 4 | Likely | Will probably occur | 50-80% |
| 5 | Almost Certain | Expected to occur | >80% |
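As a quick sketch, the likelihood table above can be encoded as a lookup that converts an estimated annual probability into a 1-5 rating. This is an illustrative Python helper (the function name and thresholds are taken from the example table, not a prescribed implementation):

```python
def likelihood_rating(annual_probability: float) -> int:
    """Map an estimated annual probability to the 1-5 likelihood scale.

    Thresholds mirror the example table; adjust them to your own scale.
    """
    if not 0.0 <= annual_probability <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    if annual_probability < 0.05:
        return 1  # Rare: <5%
    if annual_probability < 0.20:
        return 2  # Unlikely: 5-20%
    if annual_probability < 0.50:
        return 3  # Possible: 20-50%
    if annual_probability <= 0.80:
        return 4  # Likely: 50-80%
    return 5  # Almost Certain: >80%
```

For example, an event you estimate at a 35% annual probability maps to `likelihood_rating(0.35)`, which returns 3 (Possible).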
Impact Scale
Define the consequence severity. Include multiple dimensions relevant to your organization:
| Rating | Label | Financial | Operational | Reputational |
|---|---|---|---|---|
| 1 | Minimal | <$10K | Minor disruption | No external notice |
| 2 | Minor | $10K-$100K | Limited disruption | Limited local coverage |
| 3 | Moderate | $100K-$1M | Significant disruption | Regional media coverage |
| 4 | Major | $1M-$10M | Major business impact | National media coverage |
| 5 | Severe | >$10M | Business continuity threat | Sustained negative coverage |
Customize for Your Context
These thresholds are examples. A $10M loss is catastrophic for a small company but minor for a large corporation. Adjust the financial ranges to your organization's materiality thresholds.
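The financial column of the impact table can be sketched the same way. This hypothetical Python helper uses the example thresholds above; when you customize your materiality thresholds, the boundary values here are what you would change:

```python
def financial_impact_rating(loss_usd: float) -> int:
    """Map an estimated financial loss (USD) to the 1-5 impact scale.

    Thresholds come from the example table; replace them with your
    organization's materiality thresholds.
    """
    if loss_usd < 10_000:
        return 1  # Minimal: <$10K
    if loss_usd < 100_000:
        return 2  # Minor: $10K-$100K
    if loss_usd < 1_000_000:
        return 3  # Moderate: $100K-$1M
    if loss_usd < 10_000_000:
        return 4  # Major: $1M-$10M
    return 5  # Severe: >$10M
```

On these thresholds, a $3M loss rates as Level 4 (Major).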
Step 2: Calibrate With Examples
Scales alone aren't enough—different people will interpret them differently. Calibration anchors each level with concrete examples everyone can reference.
Create Anchor Examples
For each rating level, document 2-3 examples from your organization or industry:
Likelihood Level 3 (Possible, 20-50%)
- "We've seen this happen twice in the past five years"
- "Industry peers experience this every 2-3 years"
- "Current trends suggest this could occur within the planning horizon"
Impact Level 4 (Major)
- "Similar to the 2022 system outage that cost us $3M"
- "Would require board notification"
- "Would trigger regulatory reporting requirements"
Run a Calibration Session
Gather your risk assessors and score 5-10 sample risks together. Discuss disagreements until you reach consensus. This builds shared understanding of how to apply the scales.
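One way to structure the disagreement discussion is to flag any sample risk where assessors' ratings spread by more than one point. This is a minimal sketch, assuming each assessor's ratings are collected into a dictionary keyed by risk name (the function name and threshold are illustrative):

```python
def flag_disagreements(ratings_by_risk: dict[str, list[int]],
                       max_spread: int = 1) -> list[str]:
    """Return risks whose assessor ratings spread wider than max_spread.

    ratings_by_risk maps a risk name to the rating each assessor gave it.
    """
    flagged = []
    for risk, ratings in ratings_by_risk.items():
        if max(ratings) - min(ratings) > max_spread:
            flagged.append(risk)
    return flagged

session = {
    "Supplier failure": [3, 3, 4],  # close enough; no discussion needed
    "Data breach": [2, 4, 5],       # wide spread; discuss until consensus
}
print(flag_disagreements(session))  # ['Data breach']
```

Risks that come back flagged are the ones worth spending calibration time on; tight clusters suggest the scale definitions are already working.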
Step 3: Assess Likelihood
For each risk, determine how probable it is that the risk event will occur within your assessment timeframe.
Sources of Evidence
- Historical data: Has this happened before? How often?
- Industry benchmarks: How common is this in your sector?
- Expert judgment: What do subject matter experts believe?
- Leading indicators: Are conditions that cause this risk increasing or decreasing?
Worked Example
Risk: Key supplier failure disrupts production
Evidence considered:
- The supplier has had financial difficulties twice in 5 years
- Industry peers have experienced supplier failures
- No current warning signs, but concentration risk exists
Assessment: Likelihood = 3 (Possible)
Rationale: Historical pattern and industry experience suggest this could occur, though no immediate triggers are present.
Step 4: Assess Impact
Determine the consequences if the risk event occurs. Consider multiple impact dimensions and use the highest applicable rating.
Consider Multiple Dimensions
- Financial: Direct costs, lost revenue, fines, remediation
- Operational: Service disruption, productivity loss, recovery time
- Reputational: Customer trust, media coverage, stakeholder confidence
- Regulatory: Compliance violations, sanctions, license implications
- Safety: Employee or customer harm
Worked Example
Risk: Key supplier failure disrupts production
Impact assessment by dimension:
- Financial: $2M in lost revenue and expedited shipping = Level 4
- Operational: 3-week production halt = Level 4
- Reputational: Customer complaints, no media = Level 2
- Regulatory: No compliance implications = Level 1
Overall Impact: 4 (Major) — using the highest dimension
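The highest-dimension rule is simple enough to express directly. This sketch (in Python, with illustrative names) reproduces the worked example above:

```python
def overall_impact(dimension_ratings: dict[str, int]) -> int:
    """Overall impact is the highest rating across assessed dimensions."""
    return max(dimension_ratings.values())

supplier_failure = {
    "financial": 4,    # $2M lost revenue and expedited shipping
    "operational": 4,  # 3-week production halt
    "reputational": 2, # customer complaints, no media coverage
    "regulatory": 1,   # no compliance implications
}
print(overall_impact(supplier_failure))  # 4 (Major)
```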
Step 5: Calculate Risk Score
Multiply likelihood by impact to get the risk score. This determines priority for treatment and monitoring.
Risk Score Formula
Risk Score = Likelihood × Impact
With 5-point scales, scores range from 1 (1 × 1) to 25 (5 × 5).
Risk Score Categories
| Score Range | Rating | Typical Response |
|---|---|---|
| 1-4 | Low | Accept and monitor periodically |
| 5-9 | Medium | Monitor with quarterly review |
| 10-16 | High | Active management required |
| 17-25 | Critical | Immediate executive attention |
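Putting the formula and the category bands together, here is a minimal Python sketch (function names are illustrative) that scores a risk and maps it to the response categories in the table above:

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Multiply 1-5 likelihood and impact ratings into a 1-25 score."""
    for value in (likelihood, impact):
        if value not in range(1, 6):
            raise ValueError("ratings must be integers from 1 to 5")
    return likelihood * impact

def risk_category(score: int) -> str:
    """Map a 1-25 score to the example category bands."""
    if score <= 4:
        return "Low"
    if score <= 9:
        return "Medium"
    if score <= 16:
        return "High"
    return "Critical"

score = risk_score(3, 4)  # supplier-failure worked example
print(score, risk_category(score))  # 12 High
```

Note that with integer ratings not every score in a band is reachable (there is no 3 × 4 = 11, for instance); the bands simply partition the possible products.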
Worked Example
Risk: Key supplier failure
Likelihood: 3 (Possible)
Impact: 4 (Major)
Risk Score: 3 × 4 = 12 (High)
Implication: This risk requires active management. The controls in place should be reviewed, and the gap between inherent and residual risk should be documented.
Step 6: Validate and Review
Before finalizing scores, validate them for consistency and reasonableness.
Validation Checks
- Peer review: Have another assessor review your ratings
- Comparative analysis: Are similar risks scored consistently?
- Reality check: Does the prioritization feel right to subject matter experts?
- Historical comparison: How have these risks actually materialized in the past?
Watch for Bias
Common biases include anchoring (being influenced by initial estimates), availability bias (overweighting recent events), and optimism bias (underestimating likelihood). Use data and calibrated scales to counteract these.
Common Mistakes to Avoid
1. Scoring Without Defined Scales
If assessors don't have clear scale definitions, scores become meaningless. Always document what each level means.
2. Conflating Likelihood and Impact
These are separate dimensions. A risk can be highly likely but low impact (annoying but manageable) or rare but catastrophic (needs strong controls despite low probability).
3. Ignoring Existing Controls
Decide whether you're scoring inherent risk (before controls) or residual risk (after controls). Most organizations score both to demonstrate control effectiveness.
4. Scoring in Isolation
Risk scoring should be collaborative. Individual assessors often miss perspectives that others would catch.
5. Never Updating Scores
Risk levels change as conditions change. Scores should be reviewed regularly, not just set once and forgotten.
Summary
- Define clear scales with specific criteria for each level
- Calibrate scales using real examples everyone can reference
- Use evidence (historical data, benchmarks, expert judgment) to assess likelihood
- Consider multiple impact dimensions and use the highest applicable rating
- Calculate risk score by multiplying likelihood × impact
- Validate scores through peer review and comparative analysis
Outcome Checklist
Before moving on, confirm you have:
- Defined likelihood scale with probability ranges
- Defined impact scale with financial and operational thresholds
- Created calibration examples for at least 3 scale levels
- Scored at least one risk using your methodology
- Documented the rationale for your ratings
- Established risk score categories and response thresholds