Antoine Duno
Founder of ZeriFlow · 10 years fullstack engineering
Key Takeaways
- Security is not a one-time audit — your application's attack surface changes every time you deploy, every time a CDN updates its TLS configuration, and every time a new CVE is published. This guide explains how to set up continuous website security monitoring that alerts you the moment something degrades.
- Includes copy-paste code examples and step-by-step instructions.
- Free automated scan available to verify your implementation.
How to Set Up Automated Website Security Monitoring (2026 Guide)
A security audit is a photograph. It tells you the state of your application at one moment in time. Automated security monitoring is a continuous video feed — it captures every change, every drift, every new exposure that appears between deployments or overnight.
The difference matters because your security posture is not static. A CDN provider quietly changes a TLS configuration. A new employee pushes a commit that removes a Content-Security-Policy header. A dependency update introduces a new exposed endpoint. Your TLS certificate starts approaching expiry and nobody notices. None of these show up in an audit you ran three months ago.
This guide explains what to monitor, how frequently, and how to get the right alert to the right person the moment something degrades.
What Needs to Be Monitored
Not everything is equally important to track. Focus your monitoring on signals that change frequently and have real security consequences.
Security Score (Overall)
The aggregate score — a single number out of 100 — is your highest-level signal. A score drop of more than 5 points between consecutive scans indicates something changed. Alert on score drops; investigate the delta between the current and previous scan reports to identify what changed.
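If you export each scan's finding IDs (one per line), the delta between two runs is a one-liner with `comm`. A minimal sketch — the export files `prev.txt` and `cur.txt` are hypothetical names, not a ZeriFlow artifact:

```shell
# New findings = IDs present in the current scan but absent from the previous
# one. $1 = previous scan export, $2 = current scan export (one ID per line).
new_findings() {
  comm -13 <(sort "$1") <(sort "$2")
}
```

`new_findings prev.txt cur.txt` prints only the findings introduced since the last run — the short list worth alerting on, rather than the full report.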
HTTP Security Headers
These are the most common source of score degradation because they are controlled by your application server configuration, your CDN, and your reverse proxy — all of which can be changed independently. Monitor for the presence and correct value of:
- `Strict-Transport-Security` (HSTS) — presence and `max-age` value
- `Content-Security-Policy` — presence and whether it contains `unsafe-inline` or `unsafe-eval`
- `X-Frame-Options` or the `frame-ancestors` CSP directive
- `X-Content-Type-Options: nosniff`
- `Referrer-Policy`
- `Permissions-Policy`
A deployment that changes your reverse proxy configuration can silently drop all of these in one push.
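A quick way to verify this after a deploy: save the response headers once with `curl -sI https://your-app.com -o headers.txt`, then grep the dump. A minimal sketch — the helper name is ours, not a ZeriFlow tool:

```shell
# Report which of the recommended security headers are present in a saved
# header dump (produced by: curl -sI <url> -o headers.txt).
check_headers() {
  local file="$1" h
  for h in strict-transport-security content-security-policy x-frame-options \
           x-content-type-options referrer-policy permissions-policy; do
    if grep -qi "^$h:" "$file"; then
      echo "ok       $h"
    else
      echo "MISSING  $h"
    fi
  done
}
```

Wired into a post-deploy hook, a single `MISSING` line is enough to fail the pipeline before the change reaches users unnoticed.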
TLS Certificate Health
TLS certificate expiry is a monitoring category unto itself. The consequences of a lapsed certificate are severe: browsers display a hard error, your application becomes inaccessible to normal users, and you will spend a stressful hour rotating certificates under pressure.
Monitor:
- Days until expiry (alert at 30 days, escalate at 7 days)
- Certificate chain validity
- TLS version — TLS 1.0 and 1.1 are deprecated; alert if they are still offered
- Cipher suite strength
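The expiry date is easy to pull with `openssl`, and the countdown is plain date arithmetic. A sketch assuming GNU `date` (Linux); macOS needs `date -j` parsing instead:

```shell
# Fetch the certificate's notAfter date (network; replace your-app.com):
#   EXPIRY=$(echo | openssl s_client -servername your-app.com -connect your-app.com:443 2>/dev/null \
#     | openssl x509 -noout -enddate | cut -d= -f2)

# Whole days between a reference date and the expiry date (GNU date syntax).
days_until() {
  echo $(( ( $(date -d "$1" +%s) - $(date -d "$2" +%s) ) / 86400 ))
}

# Applying the thresholds above:
# LEFT=$(days_until "$EXPIRY" now)
# [ "$LEFT" -lt 30 ] && echo "WARN: certificate expires in $LEFT days"
# [ "$LEFT" -lt 7 ]  && echo "ESCALATE: certificate expires in $LEFT days"
```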
Exposed Sensitive Files and Paths
Servers are occasionally misconfigured to expose files that should never be publicly accessible. Monitor for:
- `.env`, `.env.local`, `.env.production`
- `/.git/config` or `/.git/HEAD`
- `/wp-config.php` (even on non-WordPress stacks — attackers scan for these)
- `/phpinfo.php`
- `/server-status` (Apache mod_status)
- `/.DS_Store`
- `/backup.zip`, `/dump.sql`
A deployment that changes your web root or your Nginx configuration can accidentally expose these.
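A lightweight probe for the paths above: request each one and treat anything that is not a clean "not found" class response as worth a manual look. A sketch — `BASE` and the helper name are assumptions, and the classification is deliberately conservative:

```shell
# A 404/410 (or an explicit 403) on a sensitive path is the expected outcome;
# any other status deserves investigation.
classify_status() {
  case "$1" in
    404|410|403) echo "ok" ;;
    *)           echo "CHECK" ;;
  esac
}

# Probe loop (network; BASE is an assumption):
# BASE=https://your-app.com
# for p in /.env /.git/config /wp-config.php /phpinfo.php /server-status /backup.zip; do
#   code=$(curl -s -o /dev/null -w '%{http_code}' "$BASE$p")
#   printf '%-6s %s (HTTP %s)\n' "$(classify_status "$code")" "$p" "$code"
# done
```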
CORS Configuration
Overly permissive CORS — particularly Access-Control-Allow-Origin: * on authenticated endpoints — is a common misconfiguration that is easy to introduce and hard to notice without automated monitoring.
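You can probe this by sending a request with an attacker-style `Origin` header and inspecting the `Access-Control-Allow-Origin` that comes back: a wildcard, or your arbitrary origin reflected verbatim, is the red flag. A sketch (the helper name is ours):

```shell
# $1 = the Access-Control-Allow-Origin value the server returned,
# $2 = the Origin header we sent in the probe request.
cors_verdict() {
  if [ "$1" = "*" ] || [ "$1" = "$2" ]; then
    echo "RISKY"
  else
    echo "ok"
  fi
}

# Probe (network; URL is an assumption):
# curl -s -D - -o /dev/null -H 'Origin: https://evil.example' https://your-app.com/api/me \
#   | grep -i '^access-control-allow-origin:'
```

Note that browsers refuse to combine `*` with credentialed requests, so the reflected-origin variant (often paired with `Access-Control-Allow-Credentials: true`) is the more dangerous of the two.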
Monitoring Frequency
The right frequency depends on how frequently you deploy and how quickly you need to respond.
| Deployment cadence | Recommended monitoring frequency |
|---|---|
| Infrequent (< 1 deploy/week) | Daily scan, alert on any score change |
| Regular (1-5 deploys/week) | Daily scan + post-deploy hook |
| Continuous deployment (multiple/day) | Hourly scan, alert on score drop > 3 points |
| High-stakes (financial, healthcare) | Every 15 minutes, immediate escalation on critical findings |
For most SaaS applications on a typical deployment cadence, a daily scan at a predictable time (typically 1-2 AM in your primary timezone, after the last deployment of the day) plus a post-deployment verification scan is sufficient.
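The post-deploy hook can be as small as a gate in your deploy script: trigger a scan, compare the score to your floor, and fail loudly. The commented scan call reuses the `scan-quick` endpoint shown later in this guide; treat the endpoint and response shape as assumptions for your own setup:

```shell
# Fail a deploy pipeline when the post-deploy security score drops below a floor.
score_gate() {   # $1 = score, $2 = minimum acceptable score
  if [ "$1" -lt "$2" ]; then
    echo "FAIL: score $1 is below the floor of $2"
    return 1
  fi
  echo "PASS: score $1"
}

# Post-deploy step (network; endpoint and fields are assumptions):
# SCORE=$(curl -s -X POST https://api.zeriflow.com/scan-quick \
#   -H "X-API-Key: $ZERIFLOW_API_KEY" -H "Content-Type: application/json" \
#   -d '{"url": "https://your-app.com"}' | jq '.score')
# score_gate "$SCORE" 80 || exit 1
```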
Avoid monitoring too frequently without infrastructure for handling the alerts. A team that receives 48 security alert emails per day will start treating them as noise within a week.
Setting Up ZeriFlow Monitoring
ZeriFlow's monitoring feature is available on the Pro plan and above. Configuration takes about three minutes.
Configuring the Schedule
In the ZeriFlow dashboard, navigate to your project and select Monitoring. You will find options for:
- Frequency: daily or weekly
- Scan time: set the exact hour and minute
- Timezone: select your team's primary timezone — this matters when you want scans to run after the last deployment of the business day, not at a random UTC offset
- Score threshold: the score below which an alert is triggered
A practical configuration for a SaaS application:
```text
Frequency: Daily
Scan time: 02:00
Timezone: America/New_York
Alert threshold: 80
Alert channels: Slack (#security-alerts), Email (platform@yourcompany.com)
```
Connecting Alert Channels
Slack integration:
1. In the ZeriFlow dashboard: Settings > Integrations > Slack
2. Click "Add to Slack" — this uses OAuth to authorize the ZeriFlow Slack app
3. Select the channel (e.g., `#security-alerts` or `#ops-alerts`)
4. Configure: alert on score drop / alert on critical findings / alert on certificate expiry
For teams that prefer incoming webhooks over the Slack app (more control, no OAuth):
```shell
# Test your Slack webhook
curl -X POST https://hooks.slack.com/services/YOUR/WEBHOOK/URL \
  -H 'Content-type: application/json' \
  -d '{
    "text": "ZeriFlow security monitoring is configured",
    "username": "ZeriFlow Security",
    "icon_emoji": ":shield:"
  }'
```
Discord integration:
1. In your Discord server: Channel Settings > Integrations > Webhooks > New Webhook
2. Name it "ZeriFlow Security" and assign it to your `#security-alerts` channel
3. Copy the webhook URL
4. In the ZeriFlow dashboard: Settings > Integrations > Discord > paste the webhook URL
Discord webhooks use the same format as Slack incoming webhooks, so this works immediately without additional configuration.
Email integration:
Configure email recipients in ZeriFlow dashboard > Settings > Notifications. You can add multiple recipients — useful for routing alerts to a distribution list (security@yourcompany.com) that includes the on-call engineer and the platform team.
What a Good Alert Looks Like
An alert is only useful if it contains enough information to act on immediately. A good security monitoring alert includes:
- Current score vs. previous score — so you know the magnitude of the change
- New findings since last scan — the specific checks that changed
- Severity of new findings — critical/high/medium/low
- Direct link to the full report — no logging in and navigating to find the details
- Suggested remediation — at least for the most common findings
Here is an example of what the ZeriFlow Slack alert looks like when a score drops:
```text
ZeriFlow Security Alert — score drop detected

Project: api.yourcompany.com
Previous score: 87/100
Current score: 71/100
Delta: -16 points

New findings:
[CRITICAL] Missing Content-Security-Policy header
[HIGH] HSTS max-age below recommended minimum (current: 2592000, recommended: 31536000)
[MEDIUM] X-Content-Type-Options header absent

Scan time: 2026-05-02 02:00 EDT
Full report: https://zeriflow.com/reports/abc123

This alert was triggered because the score dropped below your threshold of 80.
```
This is actionable. An engineer receiving this at 2 AM can look at it in the morning and know exactly what changed and what to fix.
Handling Alerts Without Creating Noise
Alert fatigue is real. A team that receives too many alerts stops treating them as urgent. Here is how to structure your alerting to avoid this:
Severity-based routing
Route different severities to different channels with different urgency:
- Critical findings or score drop > 15 points: page the on-call engineer via PagerDuty/Opsgenie
- High findings or score drop of 5-15 points: post to the `#security-alerts` Slack channel, which is monitored during business hours
- Medium findings or score drop < 5 points: include in a weekly digest email to the platform team
```shell
# Example: using the ZeriFlow API to check findings and route accordingly
RESPONSE=$(curl -s \
  -X POST https://api.zeriflow.com/scan-quick \
  -H "X-API-Key: $ZERIFLOW_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://your-app.com"}')

CRITICAL_COUNT=$(echo "$RESPONSE" | jq '[.findings[] | select(.severity == "critical")] | length')
SCORE=$(echo "$RESPONSE" | jq '.score')

if [ "$CRITICAL_COUNT" -gt 0 ]; then
  # Page on-call
  curl -X POST https://events.pagerduty.com/v2/enqueue \
    -H "Content-Type: application/json" \
    -d "{
      \"routing_key\": \"$PAGERDUTY_KEY\",
      \"event_action\": \"trigger\",
      \"payload\": {
        \"summary\": \"$CRITICAL_COUNT critical security finding(s) on your-app.com\",
        \"severity\": \"critical\",
        \"source\": \"zeriflow\"
      }
    }"
elif (( $(echo "$SCORE < 75" | bc -l) )); then
  # Alert Slack
  curl -X POST "$SLACK_WEBHOOK_URL" \
    -H "Content-Type: application/json" \
    -d "{\"text\": \"Security score dropped to $SCORE/100 on your-app.com. Review at https://zeriflow.com\"}"
fi
```
Flap detection
A finding that appears and disappears between consecutive scans (due to intermittent server behavior, CDN caching, or A/B testing) will generate duplicate alerts. Build a simple flap-detection rule: only alert if the same finding appears in two consecutive scans.
ZeriFlow's monitoring handles this internally — an alert is only sent when a finding is consistently detected, not on single-scan anomalies.
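If you run your own scan pipeline instead, the same rule reduces to a set intersection over the last two finding lists. A sketch — the export files are hypothetical names:

```shell
# Flap-proof alerting: report only findings present in BOTH of the last two
# scans. $1 = previous scan export, $2 = latest scan export (one ID per line).
stable_findings() {
  comm -12 <(sort "$1") <(sort "$2")
}
```

Anything that appears in only one of the two files is treated as a single-scan anomaly and suppressed until it recurs.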
Weekly summary digest
In addition to immediate alerts, configure a weekly summary that shows:
- Score trend over the past 7 days (graph)
- Total findings by severity
- Findings resolved vs. findings opened this week
- Certificate expiry countdown
This weekly context prevents the situation where the team is heads-down on features and loses track of their gradual security posture degradation.
Monitoring Multiple Environments
Production is not the only environment worth monitoring. Staging is where you catch issues before they reach production. Your API subdomain may have different security headers than your main application. Monitor them separately.
In ZeriFlow, each URL is a separate project with its own monitoring schedule and alert configuration. A practical multi-environment setup:
| Environment | Frequency | Threshold | Alert channel |
|---|---|---|---|
| your-app.com (production) | Daily at 02:00 | 80 | Slack + PagerDuty on critical |
| api.your-app.com | Daily at 02:15 | 75 | Slack |
| staging.your-app.com | Weekly | 65 | Email only |
| admin.your-app.com | Daily at 02:30 | 85 | Slack + Email |
Admin subdomains in particular deserve higher thresholds because they handle privileged operations and authenticated sessions.
Conclusion
Automated security monitoring is the difference between discovering a misconfiguration in your morning Slack digest and discovering it from a security researcher's disclosure email. The setup is straightforward, the ongoing cost is minimal, and the coverage is continuous.
The practical starting point: set up ZeriFlow to scan your production URL daily, connect it to your team's Slack channel, and set a threshold of 75. Within the first week, you will have baseline data on your security posture. Within the first month, you will likely have caught at least one configuration change that degraded your score.
Start monitoring with ZeriFlow — the free tier includes 3 scans per day, no credit card required.