Antoine Duno
Founder of ZeriFlow · 10 years fullstack engineering
Key Takeaways
- A security score drop is a symptom — something changed in your application's configuration, a certificate is approaching expiry, or a new vulnerability was disclosed. This guide explains what causes score drops, how to configure alert channels, and how to build an escalation policy that gets the right information to the right person.
- Includes copy-paste code examples and step-by-step instructions.
- Free automated scan available to verify your implementation.
How to Get Alerted When Your Security Score Drops (Slack, Discord, Email)
Your security score is fine today. But it may not be fine tomorrow — and without monitoring, you will not know until a customer mentions it, a penetration tester finds it, or a security researcher publishes a disclosure.
Score drops are not random. They happen for predictable, specific reasons. Understanding those reasons helps you configure meaningful alerts — ones that give you enough context to act immediately, rather than notifications that tell you something changed but not why.
Why Security Scores Drop
Deployment-Induced Configuration Drift
This is the most common cause. An engineer changes an Nginx configuration, a CSP header gets accidentally removed, a new deployment platform replaces the old one with different defaults. The application works fine from a functional standpoint, but the security configuration has silently degraded.
Common deployment-induced drops:
- Proxy configuration changes remove security headers entirely
- A new CDN layer serves different headers than the origin
- An environment variable controlling security settings has the wrong value in production
- A blue/green deployment switches to a new server with a different baseline configuration
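A cheap guard against this class of drift is a post-deploy smoke check that fails the pipeline when key headers disappear. A minimal sketch, assuming curl is available; the host and header list are illustrative:

check_security_headers() {
  local host=$1
  local headers
  headers=$(curl -sI "https://$host")
  # Fail if any of these response headers went missing after the deploy
  for h in strict-transport-security content-security-policy x-content-type-options; do
    if ! printf '%s\n' "$headers" | grep -qi "^$h:"; then
      echo "Missing security header after deploy: $h" >&2
      return 1
    fi
  done
}

# Run as the last step of your deploy pipeline
check_security_headers "your-app.com" || exit 1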
New CVE Disclosures
Security scanning tools continuously update their detection rules as new vulnerabilities are disclosed. A dependency that was clean last week may now trigger a known-CVE finding because a researcher published a new exploit. Your code did not change, but your score changes because the scanner now knows about a vulnerability it did not know about before.
This is actually the system working correctly. The score drop is telling you to update a dependency.
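You can often catch these before the scanner does by auditing dependencies on a schedule. A sketch assuming a Node.js project (substitute pip-audit, cargo audit, or your ecosystem's equivalent), reusing the Slack webhook configured later in this guide:

# Run nightly via cron, e.g.: 0 6 * * * /opt/scripts/audit-deps.sh
# npm audit exits non-zero when advisories at or above the level are found
if ! npm audit --audit-level=high --prefix /srv/your-app > /tmp/npm-audit.log 2>&1; then
  curl -s -X POST "$SLACK_WEBHOOK_URL" \
    -H "Content-Type: application/json" \
    -d '{"text": "npm audit found new high/critical advisories; see /tmp/npm-audit.log"}'
fi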
Expiring Certificates
TLS certificate expiry has a predictable timeline. A certificate expiring in 29 days does not trigger the same alert as one expiring today — but both warrant attention. The score impact of an expiring certificate increases as the expiry date approaches.
If you use Let's Encrypt with certbot, the auto-renewal should handle this. If you use a commercial CA or manage certificates manually, expiry is a real operational risk. ZeriFlow's monitoring tracks days-to-expiry and will alert you before the problem becomes critical.
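If you want to spot-check expiry yourself, openssl can read days-to-expiry straight off the served certificate. A quick sketch (the date flags are GNU date; macOS needs date -j -f instead):

days_until_expiry() {
  local host=$1
  local end
  # Grab the notAfter date from the certificate served on :443
  end=$(echo | openssl s_client -servername "$host" -connect "$host:443" 2>/dev/null \
        | openssl x509 -noout -enddate | cut -d= -f2)
  echo $(( ($(date -d "$end" +%s) - $(date +%s)) / 86400 ))
}

days_until_expiry your-app.com   # prints e.g. 29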
Infrastructure Changes Below the Application Layer
Your application may not have changed, but the infrastructure serving it might have. A cloud provider updating their managed load balancer, a CDN changing default TLS settings, a WAF being updated with new rule sets — all of these can affect your security score without any action on your part.
Configuring Slack Alerts
Using the ZeriFlow Slack Integration (Recommended)
The simplest approach for teams already using Slack is ZeriFlow's native Slack integration:
1. Navigate to ZeriFlow Dashboard > Settings > Integrations > Slack
2. Click Add to Slack and authorize the ZeriFlow app for your workspace
3. Select the destination channel (#security-alerts or #ops-alerts)
4. Configure alert conditions:
- Alert when score drops below threshold
- Alert when critical finding is detected
- Alert when certificate expires within N days
Using Slack Incoming Webhooks
If you prefer not to install the Slack app (common in enterprise environments with OAuth restrictions), use incoming webhooks:
# 1. Create webhook: Slack App settings > Incoming Webhooks > Add New Webhook
# 2. Store the webhook URL as an environment variable or secret
SLACK_WEBHOOK_URL="https://hooks.slack.com/services/T.../B.../..."
# 3. Build and send the alert payload
send_security_alert() {
  local score=$1
  local previous_score=$2
  local url=$3
  local critical_count=$4
  local report_url=$5
  local delta=$((previous_score - score))

  # Color escalates with severity: danger for any critical finding or a large drop
  local color="warning"
  [ "$delta" -ge 10 ] && color="danger"
  [ "$critical_count" -gt 0 ] && color="danger"

  curl -s -X POST "$SLACK_WEBHOOK_URL" \
    -H "Content-Type: application/json" \
    -d "{
      \"attachments\": [{
        \"color\": \"$color\",
        \"title\": \"Security Score Drop — $url\",
        \"fields\": [
          {\"title\": \"Current Score\", \"value\": \"$score/100\", \"short\": true},
          {\"title\": \"Previous Score\", \"value\": \"$previous_score/100 (-$delta)\", \"short\": true},
          {\"title\": \"Critical Findings\", \"value\": \"$critical_count\", \"short\": true},
          {\"title\": \"Scan Time\", \"value\": \"$(date -u +%Y-%m-%dT%H:%M:%SZ)\", \"short\": true}
        ],
        \"actions\": [{
          \"type\": \"button\",
          \"text\": \"View Full Report\",
          \"url\": \"$report_url\"
        }],
        \"footer\": \"ZeriFlow Security Monitoring\"
      }]
    }"
}
# Usage
send_security_alert 68 84 "https://your-app.com" 2 "https://zeriflow.com/reports/abc123"

Structuring the Alert for Maximum Usefulness
A Slack alert that says "security score changed" is noise. A useful alert contains:
{
  "blocks": [
    {
      "type": "header",
      "text": {
        "type": "plain_text",
        "text": "Security Alert: Score Drop Detected"
      }
    },
    {
      "type": "section",
      "fields": [
        {"type": "mrkdwn", "text": "*Site:* api.your-app.com"},
        {"type": "mrkdwn", "text": "*Score:* 68/100 (was 84/100)"},
        {"type": "mrkdwn", "text": "*Delta:* -16 points"},
        {"type": "mrkdwn", "text": "*Critical Findings:* 2 NEW"}
      ]
    },
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": "*New findings since last scan:*\n• [CRITICAL] Missing Content-Security-Policy\n• [CRITICAL] TLS 1.1 still accepted\n• [HIGH] HSTS max-age below minimum"
      }
    },
    {
      "type": "actions",
      "elements": [
        {
          "type": "button",
          "text": {"type": "plain_text", "text": "View Full Report"},
          "url": "https://zeriflow.com/reports/abc123",
          "style": "danger"
        }
      ]
    }
  ]
}

Configuring Discord Alerts
Discord webhooks work the same way, with a slightly different payload structure. Discord is increasingly common in developer communities and smaller engineering teams.
Creating a Discord Webhook
1. Open your Discord server and navigate to the channel where you want alerts
2. Channel Settings > Integrations > Webhooks > New Webhook
3. Name it "ZeriFlow Security Bot"
4. Copy the webhook URL
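Before wiring the webhook into monitoring, a one-line sanity check confirms the channel is reachable (this assumes the URL is exported as DISCORD_WEBHOOK_URL, as in the next snippet):

curl -s -X POST "$DISCORD_WEBHOOK_URL" \
  -H "Content-Type: application/json" \
  -d '{"content": "ZeriFlow webhook test: if you can read this, the channel is wired up"}'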
Sending a Discord Alert
DISCORD_WEBHOOK_URL="https://discord.com/api/webhooks/YOUR_ID/YOUR_TOKEN"

send_discord_alert() {
  local score=$1
  local previous_score=$2
  local url=$3
  local report_url=$4
  local findings_summary=$5
  local delta=$((previous_score - score))

  # Discord embed color: red=16711680, orange=16744272, yellow=16776960
  local color=16744272                     # orange default
  [ "$delta" -gt 10 ] && color=16711680    # red for large drops

  curl -s -X POST "$DISCORD_WEBHOOK_URL" \
    -H "Content-Type: application/json" \
    -d "{
      \"username\": \"ZeriFlow Security\",
      \"avatar_url\": \"https://zeriflow.com/icon.png\",
      \"embeds\": [{
        \"title\": \"Security Score Drop: $url\",
        \"color\": $color,
        \"fields\": [
          {\"name\": \"Current Score\", \"value\": \"$score/100\", \"inline\": true},
          {\"name\": \"Previous Score\", \"value\": \"$previous_score/100\", \"inline\": true},
          {\"name\": \"Delta\", \"value\": \"-$delta points\", \"inline\": true},
          {\"name\": \"New Findings\", \"value\": \"$findings_summary\", \"inline\": false}
        ],
        \"url\": \"$report_url\",
        \"footer\": {\"text\": \"ZeriFlow Security Monitoring\"},
        \"timestamp\": \"$(date -u +%Y-%m-%dT%H:%M:%SZ)\"
      }]
    }"
}

Discord supports thread-based replies, which is useful for tracking the resolution of a specific alert. When an alert fires, post it in the channel. When the issue is resolved and the score recovers, reply in the same thread with "Resolved: score restored to 84/100 after CSP header fix."
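Note that webhooks cannot create threads in a regular channel; they can only post into an existing one via the thread_id query parameter. A sketch of the resolution follow-up, with a placeholder thread ID:

resolve_discord_alert() {
  local thread_id=$1   # ID of the thread started from the original alert
  local message=$2
  curl -s -X POST "$DISCORD_WEBHOOK_URL?thread_id=$thread_id" \
    -H "Content-Type: application/json" \
    -d "{\"content\": \"$message\"}"
}

# Usage
resolve_discord_alert "1234567890" "Resolved: score restored to 84/100 after CSP header fix"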
Email Alert Configuration
Email is the right channel for non-urgent alerts and weekly summaries. It is asynchronous and searchable, making it useful for audit trails and retrospectives. It is the wrong channel for critical findings that need immediate attention — no one is checking email at 2 AM.
In ZeriFlow dashboard > Settings > Notifications, configure:
- Immediate email: sent when score drops below threshold or critical finding detected
- Daily digest: morning summary of the previous night's scan
- Weekly report: trend data, score history, findings opened and resolved
For deliverability, use an address on your domain rather than a personal one: security@yourcompany.com or platform-alerts@yourcompany.com ensures alerts reach a shared inbox that multiple team members monitor.
Building an Escalation Policy
The right person needs to receive the right alert at the right time. An escalation policy routes alerts based on severity and time:
Critical finding OR score drop > 20 points:
→ Immediate: PagerDuty/Opsgenie page to on-call engineer
→ 15 min no-ack: Escalate to engineering manager
→ 30 min no-ack: Escalate to CTO
High finding OR score drop 10-20 points:
→ Slack #security-alerts (visible during business hours)
→ Email to platform-team@yourcompany.com
Medium finding OR score drop 5-10 points:
→ Daily digest email to platform-team@yourcompany.com
→ Weekly summary included in engineering standup
Low finding OR score drop < 5 points:
→ Weekly digest only
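That policy maps directly onto the helpers from earlier sections. A minimal routing sketch, where SITE_URL and REPORT_URL are hypothetical variables your scan wrapper would set, and trigger_pagerduty_incident is defined just below:

route_alert() {
  local score=$1
  local previous_score=$2
  local critical_count=$3
  local delta=$((previous_score - score))

  if [ "$critical_count" -gt 0 ] || [ "$delta" -gt 20 ]; then
    trigger_pagerduty_incident "Security score dropped to $score/100 on $SITE_URL" "critical"
  elif [ "$delta" -ge 10 ]; then
    send_security_alert "$score" "$previous_score" "$SITE_URL" "$critical_count" "$REPORT_URL"
  elif [ "$delta" -ge 5 ]; then
    # Medium drops accumulate in a log that feeds the daily digest (path is illustrative)
    echo "$(date -u +%FT%TZ) score=$score delta=-$delta" >> /var/log/zeriflow-digest.log
  fi
  # Drops under 5 points are left to the weekly digest
}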
Integrate ZeriFlow's webhook with PagerDuty using their Events API:

trigger_pagerduty_incident() {
  local summary=$1
  local severity=$2   # critical, error, warning, info

  # dedup_key collapses repeat triggers into one incident per day;
  # include the site in the key if you monitor several
  curl -s -X POST "https://events.pagerduty.com/v2/enqueue" \
    -H "Content-Type: application/json" \
    -d "{
      \"routing_key\": \"$PAGERDUTY_INTEGRATION_KEY\",
      \"event_action\": \"trigger\",
      \"dedup_key\": \"zeriflow-$(date +%Y%m%d)\",
      \"payload\": {
        \"summary\": \"$summary\",
        \"severity\": \"$severity\",
        \"source\": \"zeriflow-monitoring\",
        \"timestamp\": \"$(date -u +%Y-%m-%dT%H:%M:%SZ)\"
      },
      \"links\": [{
        \"href\": \"https://zeriflow.com/reports\",
        \"text\": \"View ZeriFlow Report\"
      }]
    }"
}

Managing Alert Fatigue
Too many alerts train your team to ignore alerts. Here is how to keep alert volume sustainable:
Set intelligent thresholds. An alert on every 1-point drop is noise. Alert when the score drops below a defined threshold or when the delta between consecutive scans exceeds 5 points.
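A sketch of that logic, carrying the previous score between scans in a state file (the path and thresholds are illustrative):

STATE_FILE="/var/lib/zeriflow/last_score"
THRESHOLD=80    # alert when the absolute score falls below this
MAX_DELTA=5     # or when a single scan drops more than this many points

current_score=$1
previous_score=$(cat "$STATE_FILE" 2>/dev/null || echo "$current_score")
delta=$((previous_score - current_score))

if [ "$current_score" -lt "$THRESHOLD" ] || [ "$delta" -gt "$MAX_DELTA" ]; then
  send_security_alert "$current_score" "$previous_score" "$SITE_URL" 0 "$REPORT_URL"
fi
echo "$current_score" > "$STATE_FILE"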
Use deduplication keys. If the same finding persists across multiple scans, send one alert — not one per scan. ZeriFlow's monitoring handles this: you get an alert when a finding is first detected, not on every subsequent scan.
Schedule maintenance windows. During planned maintenance or major deployments, suppress non-critical alerts for a defined window. This prevents alert floods when you know the score will temporarily drop.
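One simple implementation is a flag file that your deploy tooling touches before a planned window and removes afterwards. A sketch, with an assumed path:

MAINTENANCE_FLAG="/etc/zeriflow/maintenance"

should_suppress() {
  local severity=$1
  # Suppress everything except critical findings while the flag exists
  [ -f "$MAINTENANCE_FLAG" ] && [ "$severity" != "critical" ]
}

should_suppress "high" && exit 0   # skip sending this alert during the window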
Review alert configuration quarterly. As your security posture improves, the alerts that used to fire frequently should fire rarely. If you are still getting daily alerts after two months, either the threshold is wrong or you have unresolved technical debt.
Make resolution easy. An alert that links directly to the full report, with remediation guidance per finding, gets resolved faster than a vague "score dropped" notification that requires additional investigation. ZeriFlow's alerts include per-finding remediation steps.
Conclusion
Security score alerts are only valuable if they lead to action. An alert that goes to a channel nobody monitors, without context about what changed or why, becomes background noise within a week.
Spend 30 minutes on the setup: connect ZeriFlow to Slack, configure a sensible threshold (80 is a good starting point), and define what happens when an alert fires. That 30 minutes will save you from discovering a security regression the hard way.
Set up ZeriFlow monitoring on the Pro plan — includes Slack, Discord, and email integrations, plus scheduled scans with exact time and timezone control.