Vulnerability Disclosure Policy: How to Write One and Why You Need It
A vulnerability disclosure policy is a public statement that defines how your organization receives, processes, and responds to security vulnerability reports from external researchers. Without one, security researchers who discover vulnerabilities in your systems face an uncomfortable choice: report to you and risk legal retaliation, sell the vulnerability on the grey market, or stay silent. A well-written VDP removes that friction and makes your systems more secure.
<div class="zf-stat-callout" style="background:#0d1117;border:1px solid rgba(16,185,129,0.25);border-left:3px solid #10b981;border-radius:4px;padding:16px 20px;margin:24px 0"> <p style="margin:0 0 4px;font-size:10px;font-weight:700;text-transform:uppercase;letter-spacing:0.15em;color:#10b981;font-family:monospace">ZeriFlow Data — 12,400+ sites analyzed</p> <p style="margin:0;font-size:13px;color:#e2e8f0;line-height:1.6;font-family:monospace">In ZeriFlow's corpus of 12,400+ scanned sites, 72% score below 70/100 on security. Only 7% achieve a score above 85 — a threshold that corresponds to passing all OWASP-aligned header and configuration checks.</p> </div>
Governments, enterprises, and security communities have converged on VDPs as a baseline expectation. In the US, CISA's Binding Operational Directive 20-01 requires federal civilian agencies to maintain VDPs. The EU's NIS2 Directive expects coordinated vulnerability disclosure practices. And increasingly, enterprise procurement teams ask vendors whether they have one.
Check your compliance posture: Free ZeriFlow security scan →
Why Your Organization Needs a Vulnerability Disclosure Policy
The security researchers who find vulnerabilities in your systems are not your adversaries — they are performing a public service. Treating them as criminals discourages disclosure and drives vulnerability information to the wrong markets.
The business case for a VDP:
Legal protection for researchers — Without a VDP, a researcher who accesses your system to confirm a vulnerability could face liability under the Computer Fraud and Abuse Act (CFAA) or equivalent laws. Your VDP grants authorized access within defined scope, protecting researchers who act in good faith.
Legal protection for you — A VDP establishes your process for receiving and acting on vulnerability reports. In the event of a breach involving a vulnerability that was previously reported and not remediated, your VDP documentation (and your response timeline) matters for regulatory and legal purposes.
Better security outcomes — Organizations with VDPs receive more reports. More reports means more vulnerabilities found and fixed before attackers exploit them. The Google Security Team, Apple, and Microsoft have all found that public VDPs dramatically increased the quality and quantity of vulnerability reports they received.
Compliance alignment — NIST CSF, ISO 27001, SOC 2, and most formal security frameworks include vulnerability management as a required practice. A VDP is evidence of a mature vulnerability management process.
The Four Essential Components of a VDP
A functional VDP has four core sections:
1. Scope
The scope defines which systems and assets are included in your VDP. Be specific. A scope that says "all our systems" invites researchers to test things you are not prepared to handle (production databases, third-party integrations, employee systems).
A typical scope includes:
- Specific domains and subdomains (*.yourdomain.com, api.yourdomain.com)
- Mobile application identifiers
- Explicit out-of-scope items (third-party services, customer data, production databases with real user data)
Also define explicitly prohibited testing activities: denial-of-service, social engineering, physical security testing, automated bulk scanning.
2. Process
Explain exactly how to submit a report. Include:
- The submission channel (email address, web form, HackerOne/Bugcrowd/Intigriti)
- What information to include (steps to reproduce, proof of concept, screenshots, affected URLs)
- Whether to encrypt the submission (publish a PGP key if you want encrypted reports)
- Languages accepted
3. Safe Harbor
This is the legal protection clause. It states that the organization will not pursue legal action against researchers who discover and report vulnerabilities in good faith, within the defined scope, and without causing harm.
A good safe harbor clause:
- Grants authorization to access in-scope systems for the purposes of security testing
- Commits not to pursue CFAA, DMCA, or equivalent claims against good-faith researchers
- Commits not to refer good-faith researchers to law enforcement
- Acknowledges that the authorization is limited to the defined scope
4. Response Commitments
What can reporters expect from you? Define:
- Acknowledgment timeline (typically 1-3 business days)
- Status update frequency (weekly is standard)
- Resolution timeline expectations (varies by severity; critical vulnerabilities often targeted at 7-30 days)
- Whether you will credit reporters (public acknowledgment, Hall of Fame)
- Whether you will notify reporters before public disclosure
Responsible Disclosure Timelines
The industry has converged on coordinated vulnerability disclosure (CVD) as the standard approach: the researcher gives the organization time to fix the vulnerability before public disclosure. The timelines:
Standard: 90 days — Google Project Zero popularized the 90-day coordinated disclosure timeline. The researcher reports the vulnerability privately, the organization has 90 days to remediate, and after 90 days (with or without a fix), the details are disclosed publicly.
Accelerated: 7 days — For vulnerabilities being actively exploited in the wild, shorter timelines (7 days) are appropriate because attackers already know about the issue.
Extended: 120 days — For complex vulnerabilities that require significant remediation work, organizations can request timeline extensions. Researchers typically grant one extension; a second extension requires strong justification.
Your VDP should state your default response timeline and your process for timeline extension requests. Stating "we aim to remediate critical vulnerabilities within 30 days and high-severity within 90 days" is more credible than "we will fix it as soon as possible."
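These disclosure windows are simple date arithmetic. A minimal sketch (the report date is hypothetical; the 90/7/120-day figures are the ones discussed above):

```python
# Illustrative only: computing coordinated-disclosure deadlines
# from a hypothetical report date, using the timelines above.
from datetime import date, timedelta

report_date = date(2025, 1, 15)                          # hypothetical report date
standard_deadline = report_date + timedelta(days=90)     # default CVD window
accelerated_deadline = report_date + timedelta(days=7)   # actively exploited in the wild
extended_deadline = report_date + timedelta(days=120)    # one granted extension
```

Publishing the dates you committed to (rather than recomputing them informally) keeps both sides working from the same clock.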
The security.txt File: Making Your VDP Discoverable
RFC 9116 defines security.txt — a standardized file format for publishing security contact information. It is a machine-readable pointer to your VDP that both researchers and automated tools can discover.
The security.txt file should be placed at https://yourdomain.com/.well-known/security.txt. It supports the following fields:
```
Contact: mailto:security@yourdomain.com
Expires: 2025-12-31T23:59:59Z
Encryption: https://yourdomain.com/pgp-key.txt
Acknowledgments: https://yourdomain.com/security/hall-of-fame
Policy: https://yourdomain.com/security/vulnerability-disclosure-policy
Preferred-Languages: en
```

The Expires field is required — it prevents stale security.txt files from pointing to defunct email addresses. Update it annually.
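Before publishing, it is worth sanity-checking the file for the two required fields. The sketch below is illustrative (the function names and sample data are assumptions, not from any library); only the Contact and Expires field names come from RFC 9116:

```python
# Hedged sketch: a minimal validator for security.txt content (RFC 9116).
# Checks only that Contact exists and Expires exists, parses, and is in
# the future; a full validator would cover signatures and more fields.
from datetime import datetime, timezone


def parse_security_txt(text: str) -> dict:
    """Collect field lines into {name: [values]}, skipping comments."""
    fields = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if ":" in line:
            name, _, value = line.partition(":")
            fields.setdefault(name.strip(), []).append(value.strip())
    return fields


def validate(fields: dict) -> list:
    """Return a list of problems; empty list means the basics check out."""
    problems = []
    if "Contact" not in fields:
        problems.append("missing required Contact field")
    if "Expires" not in fields:
        problems.append("missing required Expires field")
    else:
        # fromisoformat on older Pythons rejects a trailing "Z", so normalize it.
        raw = fields["Expires"][0].replace("Z", "+00:00")
        try:
            if datetime.fromisoformat(raw) < datetime.now(timezone.utc):
                problems.append("security.txt has expired")
        except ValueError:
            problems.append("Expires is not a valid ISO 8601 timestamp")
    return problems
```

Running this against your deployed file (fetched from `/.well-known/security.txt`) once a quarter catches the most common failure mode: a forgotten, expired file.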
Many security researchers and bug bounty hunters check for security.txt before submitting reports. Bug bounty platforms also index security.txt files. Having one in place signals that your organization takes responsible disclosure seriously.
ZeriFlow checks whether your security.txt file exists, is properly formatted, and contains the required Expires field — surfacing its absence as a finding in the security posture report.
Real-World VDP Examples
The best VDPs are clear, specific, and fair. Three examples worth reading:
Dropbox — A clear, legally tight safe harbor clause and specific scope definition. Dropbox explicitly lists what types of testing are allowed and prohibited, including specific attack categories (rate limiting tests, account enumeration) that are in-scope versus out-of-scope.
Cloudflare — Cloudflare's VDP distinguishes between their main production systems (in-scope, but with careful safe harbor caveats) and customer data (out-of-scope). They also publish their remediation timelines and their bug severity classification methodology.
US CISA — CISA's BOD 20-01 required all US federal agencies to implement VDPs by March 2021. The resulting federal VDP template is publicly available and provides a model safe harbor clause that has been reviewed by government legal teams.
VDP vs. Bug Bounty Program: Which Do You Need?
A VDP and a bug bounty program are related but distinct:
| | VDP | Bug Bounty |
|---|---|---|
| Financial rewards | No | Yes |
| Researcher community | Self-selected good-faith reporters | Professional bounty hunters |
| Volume of reports | Low to moderate | High |
| Operational overhead | Low | High |
| Appropriate for | All organizations | Organizations with security team capacity |
A VDP is the right starting point for any organization that does not have the security team capacity to triage a high volume of reports. Bug bounties attract professional researchers who submit more reports (more work for your team) in exchange for financial rewards.
The typical maturity progression: VDP → Structured bug bounty with limited scope → Full bug bounty with public scope.
FAQ
Q: Do I need a lawyer to write a vulnerability disclosure policy?
A: Not necessarily, but legal review is valuable before you publish. The safe harbor clause is the legally sensitive component. Organizations that process regulated data (healthcare, financial services) or are concerned about CFAA-adjacent issues should have counsel review the safe harbor language. For most SMBs, adapting a publicly available template (CISA, disclose.io, or a platform like HackerOne's policy builder) and having it reviewed by a lawyer familiar with technology law is sufficient.
Q: What happens if a researcher reports a vulnerability and we cannot fix it within 90 days?
A: Request an extension and explain why. Most researchers will grant a 30-60 day extension if you explain the technical complexity and show that remediation is in progress. If you cannot remediate in time, consider whether a partial mitigation (WAF rule, access restriction) can reduce the risk while the full fix is developed. Researchers generally respond better to honest communication about timelines than to silence.
Q: Can a VDP expose us to more risk by attracting more researchers to probe our systems?
A: The evidence here is consistent: organizations with VDPs find more vulnerabilities, but a VDP does not make a breach more likely. Attackers do not need a VDP to probe your systems — they are already doing so. A VDP channels disclosure from ethical researchers who want to help you, not harm you.
Q: Should our VDP offer financial rewards?
A: A VDP, by definition, does not offer financial rewards — that is a bug bounty program. However, your VDP can offer non-financial recognition (public acknowledgment, Hall of Fame, swag) as a way to thank reporters. Many researchers report vulnerabilities without any expectation of payment; they are motivated by the recognition and the security impact.
Q: What if someone submits a vulnerability report that is out of scope?
A: Respond professionally. Thank them for the report, explain that the finding is outside your current VDP scope, and (if the vulnerability is real) consider whether to address it anyway. Out-of-scope reports are an opportunity to evaluate whether your scope definition is appropriate. Some organizations add commonly reported out-of-scope items to their scope over time as they develop the capacity to triage them.
Conclusion
A vulnerability disclosure policy is one of the highest-ROI security investments an organization can make. It is a document — not a product — but it fundamentally changes your relationship with the security research community, reduces your legal risk, and creates a mechanism for continuous improvement of your security posture.
The starting point is your security.txt file — a machine-readable pointer to your VDP that tells researchers and automated tools how to reach you.