● Calculations run locally in your browser. Some site features, such as usage analytics or shared results, may use network requests.
Example — Alert Fatigue Calculator
Alert Fatigue Assessment: 150 Alerts per Week for a 4-Person On-Call Rotation
Calculate alert fatigue risk when a 4-person on-call team receives 150 alerts per week. Includes the actionability breakdown, the impact of the false positive rate, and recommended reduction targets.
Alert Fatigue Assessment
Total alerts/week: 150
False positives (40%): 60 alerts
Actionable alerts: 90 alerts
Alerts per engineer/week: 37.5
Investigation time:
Total: 150 alerts × 8 min each = 1,200 min/week (20 hours)
Per engineer: 5 hours/week of alert triage
Fatigue score: HIGH (7.8 / 10)
Benchmarks:
Google SRE target: < 5 actionable alerts/shift
Current: ~13 actionable alerts/day (90 ÷ 7) across the rotation
Recommendation:
Target ≤ 50 total alerts/week
Actions:
1. Eliminate alerts with >25% false positive rate
2. Group correlated alerts into single incidents
3. Convert low-urgency alerts to tickets (non-paging)
4. Review and raise alert thresholds
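The assessment above can be reproduced with a few lines of arithmetic. As a sketch: the 8 min/alert investigation time is the calculator's assumption, and the 7.8/10 fatigue score comes from the calculator's own model, which is not derived here.

```python
# Reproduces the alert-fatigue arithmetic shown above.
TOTAL_ALERTS = 150      # alerts/week
FP_RATE = 0.40          # false positive rate
TEAM_SIZE = 4           # engineers in the rotation
MIN_PER_ALERT = 8       # assumed minutes to triage one alert

false_positives = int(TOTAL_ALERTS * FP_RATE)     # 60 alerts
actionable = TOTAL_ALERTS - false_positives       # 90 alerts
per_engineer = TOTAL_ALERTS / TEAM_SIZE           # 37.5 alerts/week
total_hours = TOTAL_ALERTS * MIN_PER_ALERT / 60   # 20.0 h/week of triage
engineer_hours = total_hours / TEAM_SIZE          # 5.0 h/week per engineer
actionable_per_day = actionable / 7               # ~12.9/day vs. SRE target of <5/shift
```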
Alert fatigue occurs when on-call engineers receive more alerts than they can meaningfully investigate, leading to alert blindness, missed incidents, and burnout. At 150 alerts/week with 40% false positives, each engineer spends 5 hours/week on triage, roughly 2 of them on pure noise. Google SRE guidelines suggest fewer than 5 actionable alerts per on-call shift as a healthy baseline.
What to do next
Run a 2-week alert audit: log every alert with disposition (action taken / no action needed). Any alert with >25% no-action rate in the audit period is a candidate for elimination or conversion to a low-priority ticket.
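The audit step can be sketched in a few lines. Assume each audit entry records an alert name and whether any action was taken; the function name, log shape, and sample data below are illustrative.

```python
from collections import Counter

def no_action_rates(audit_log, threshold=0.25):
    """audit_log: iterable of (alert_name, action_taken) pairs from the
    2-week audit. Returns {alert_name: no_action_rate} for every alert
    whose no-action rate exceeds the threshold, i.e. the candidates for
    elimination or conversion to a low-priority ticket."""
    fired = Counter(name for name, _ in audit_log)
    no_action = Counter(name for name, acted in audit_log if not acted)
    return {name: no_action[name] / fired[name]
            for name in fired
            if no_action[name] / fired[name] > threshold}

# Illustrative audit log: cpu_spike needed no action in 2 of 3 firings.
log = [("disk_full", True),
       ("cpu_spike", False), ("cpu_spike", False), ("cpu_spike", True)]
print(no_action_rates(log))   # → {'cpu_spike': 0.666...}
```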
Use the Alert Fatigue Calculator to run this with your own numbers.
Suppression (silencing) is appropriate for planned maintenance windows, not chronic noise. If an alert is suppressed permanently it should be deleted or converted to a non-paging metric. Permanent suppressions accumulate technical debt and hide monitoring gaps.
What's the difference between grouping and deduplication?
Deduplication prevents the same alert from producing multiple notifications within a window. Grouping clusters related alerts (e.g., all alerts from the same failing service) into a single notification. Both reduce notification volume but serve different purposes. Prometheus Alertmanager, for example, clusters alerts via `group_by` and controls how often a still-firing alert re-notifies via `repeat_interval`; identical alerts (same label set) are deduplicated automatically.
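As a sketch, a grouping configuration in an Alertmanager route might look like the following; the `oncall` receiver name and the `service` label are illustrative, not part of any standard setup.

```yaml
route:
  receiver: oncall
  group_by: ['service']   # cluster alerts sharing a service label into one notification
  group_wait: 30s         # wait briefly for related alerts before the first notification
  group_interval: 5m      # minimum time between update notifications for a group
  repeat_interval: 4h     # re-notify for a still-firing group at most every 4 hours
```

Tuning `group_wait` trades notification latency against batching: a longer wait catches more correlated alerts in the first page, at the cost of slower initial notification.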