Non-Lawyers' Summary
Bug bounty work is not automatically legal just because it improves security. The safest protection comes from written permission through a bug bounty or disclosure program, because DOJ policy alone does not stop civil suits, state-law claims, or arguments that a researcher went outside authorized scope.
The Cold Open
He found the vulnerability on a Tuesday afternoon. A logic flaw in a healthcare portal — authentication bypass, unauthenticated access to patient records. He didn't steal anything. He didn't sell anything. He sent an email to the company's security team explaining exactly what he'd found and how they could fix it.
Two weeks later, federal agents were at his door.
No bug bounty program. No vulnerability disclosure policy. No written authorization. Just a researcher who believed that finding a flaw and reporting it privately was self-evidently good — and a legal system that wasn't designed with that belief in mind.
This is the paradox at the center of security research law: the people most capable of finding critical vulnerabilities operate in a legal framework that was written to prosecute them. Understanding exactly where that framework draws its lines — and more importantly, where it doesn't — may be the most practically important legal question in modern cybersecurity.
Overview
No comprehensive federal statute protects security researchers from CFAA prosecution. The DOJ issued a policy in 2022 saying it won't charge good-faith research — but that policy is not a statute, does not bind courts, and does not protect against civil suits by private companies. The Van Buren decision in 2021 narrowed CFAA's reach, making terms-of-service violations less legally dangerous. Bug bounty programs and vulnerability disclosure policies create contractual authorization that removes the "without authorization" element entirely. For lawyers advising security researchers, companies receiving vulnerability reports, or organizations designing research programs, understanding exactly what protection exists — and where the gaps are — is the difference between a productive security research relationship and a legal dispute.
Key Concepts
The Core Legal Problem — A Law Designed to Prosecute What You're Trying to Do
Security research requires probing systems for weaknesses. Probing systems for weaknesses is exactly what the CFAA was designed to prohibit. The resulting tension: good-faith researchers who make organizations more secure may be committing federal and state computer crimes in the process.
The Computer Fraud and Abuse Act (18 U.S.C. § 1030) was enacted in 1986 — before the web existed, before bug bounties existed, before the security research profession existed. It was designed to criminalize hackers. It has been applied to security researchers. The distinction the law draws — "without authorization" — turns on facts that are not always clear, in legal proceedings that begin after the research is complete.
Three Distinct Sources of Protection — and the Terrifying Gaps Between Them
| Source | What It Provides | What It Doesn't Provide |
|---|---|---|
| Van Buren (2021) | ToS violations alone likely not CFAA criminal | Protection from civil CFAA suits; protection from state law |
| DOJ Charging Policy (2022) | Federal prosecutors unlikely to charge good-faith research | Binding effect on courts; civil suit protection; state prosecution protection |
| Bug bounty / VDP enrollment | Contractual authorization — eliminates "without authorization" element | Protection for out-of-scope activity; state law conflicts |
Read that table carefully. Each source protects against something specific. None of them protect against everything. The researcher who relies on DOJ policy alone while ignoring state law has made a dangerous assumption. The researcher who stays within bug bounty scope but crosses into a related system that wasn't listed has crossed from protection into prosecution risk — often without knowing it happened.
CFAA Post-Van Buren: Where the Law Draws the Line
The Case That Changed the Calculus
Van Buren v. United States (2021) — the Supreme Court case that security researchers watched with the intensity of a surveillance operation.
Nathan Van Buren was a police officer. He ran a license plate through a law enforcement database in exchange for money from an FBI informant. He had authorized access to the database. He used it for an unauthorized purpose. The government charged him under CFAA's "exceeds authorized access" provision.
Then the Supreme Court handed down the ruling that changed everything: "exceeds authorized access" covers accessing a prohibited area of a computer system — not mere misuse of permitted access. If you can legitimately access the data, using it for the wrong reason is not a CFAA crime.
For security researchers, the implication was immediate and profound. Violating a terms-of-service clause — "you may not use this site for security research" — while using legitimately issued credentials no longer triggers criminal CFAA liability under the federal statute. The gates had to be technically restricted, not just contractually restricted.
Likely NOT CFAA criminal post-Van Buren:
- Accessing publicly available systems with no authentication barrier
- Probing a system in violation of terms of service, where no authentication was bypassed
- Using an API with valid tokens against ToS restrictions
Still CFAA risk:
- Bypassing any technical access barrier, even a weak one — a CAPTCHA, rate limiting, a login form
- Accessing systems the researcher has no permission to access at all — no bug bounty, no VDP, no express authorization
- Exceeding explicitly defined technical scope in a bug bounty program
But that wasn't the whole story. Civil CFAA risk survives Van Buren. Private plaintiffs suing under CFAA § 1030(g) are not subject to the same prosecutorial restraint as the DOJ. A company that is unhappy about a researcher's findings — even findings that improve security — can file a civil CFAA suit even where the criminal exposure under Van Buren is limited.
DOJ Good-Faith Security Research Policy (2022) — What It Is, What It Isn't
In May 2022, DOJ updated its CFAA charging policy to state that good-faith security research should not be prosecuted. The policy defines good-faith research as accessing a computer solely for good-faith testing, investigation, or correction of a security vulnerability or flaw, carried out in a manner designed to avoid harm to individuals or the public, where the information derived is used primarily to promote the security or safety of the class of devices or services involved.
It sounded like a safe harbor. It was not a safe harbor.
What the policy IS:
- Prosecutorial guidance for federal criminal CFAA charges
- A meaningful signal that legitimate researchers are unlikely to face federal prosecution for well-scoped research
- Part of a broader shift toward treating security research as socially beneficial
What the policy IS NOT:
- A statute — it can be modified or revoked by the next administration
- Binding on courts — a court can convict even if DOJ says it won't charge
- Protection from civil CFAA suits by private companies
- Protection from state computer crime statutes
- Blanket immunity for any research labeled "good faith"
The word "primarily" in the policy definition does the most dangerous work. Primarily used to promote security. What about the researcher who reports findings on Twitter before the vendor patches? What about the one who discusses methodology at a conference while the vulnerability is live? The policy's protection is real — and the edges of that protection are genuinely unclear.
CISA Binding Operational Directive 20-01 (BOD 20-01) — The Federal Benchmark
Issued September 2, 2020. Applies to all federal civilian executive branch agencies.
Requirement: Every federal agency must develop and publish a Vulnerability Disclosure Policy (VDP) covering all internet-accessible information systems.
VDP must include:
- Scope definition (which systems are in scope for testing)
- Good-faith testing rules (what researchers may and may not do)
- A clear process for submitting reports
- A commitment that the agency will not pursue legal action against researchers who act in good faith within the defined scope
Status: CISA tracks compliance at cyber.dhs.gov/agencies. Federal agencies are required to comply — private sector companies are not, but many have adopted similar frameworks voluntarily.
Significance for lawyers: A federal agency VDP is a formal authorization document. A researcher operating within the scope of a published federal VDP has express authorization — which defeats the "without authorization" element of a CFAA claim. The federal government has acknowledged, in policy and in practice, that security researchers provide a public benefit that justifies formal authorization frameworks. The private sector has been slower to reach the same conclusion.
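The four required VDP elements listed above can be sketched as a small checklist record — a minimal illustration, not an official CISA schema, and the field names here are invented for this example:

```python
# Illustrative sketch: the BOD 20-01 required VDP elements modeled as a
# record, useful when auditing whether a published policy covers each item.
# Field names are hypothetical, invented for this example.
from dataclasses import dataclass


@dataclass
class VulnerabilityDisclosurePolicy:
    in_scope_systems: list[str]       # scope definition
    testing_rules: list[str]          # what researchers may and may not do
    report_channel: str               # e.g. a security@ address or web form
    no_legal_action_commitment: bool  # good-faith, in-scope research

    def missing_elements(self) -> list[str]:
        """Return the BOD 20-01 elements this policy fails to cover."""
        missing = []
        if not self.in_scope_systems:
            missing.append("scope definition")
        if not self.testing_rules:
            missing.append("testing rules")
        if not self.report_channel:
            missing.append("report channel")
        if not self.no_legal_action_commitment:
            missing.append("legal-action commitment")
        return missing
```

A policy missing any element fails the directive's minimum; `missing_elements()` names the gaps.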
HackerOne AI Research Safe Harbor (January 2026) — The New Frontier
In January 2026, HackerOne launched an AI Research Safe Harbor — the first systematic legal safe harbor specifically for security research on AI and machine learning systems.
Before this program, AI security research existed in a legal void. Probing a language model's outputs for harmful behavior. Testing for prompt injection. Assessing whether a model's safety training could be bypassed. All of this could theoretically implicate CFAA — and none of it had the clear contractual authorization that traditional bug bounty programs provide for software systems.
Structure: Participating AI companies — foundation model developers, AI product companies — grant researchers contractual permission to probe their AI systems for security and safety vulnerabilities under defined conditions.
What it covers:
- Testing AI models for security vulnerabilities (prompt injection, model extraction, data poisoning, adversarial inputs)
- Testing AI systems for safety issues (harmful output generation, bias, capability assessment)
- Research within defined scope parameters
What it does NOT cover:
- Social engineering of company employees
- Physical access to infrastructure
- Production infrastructure attacks outside defined scope
- Research that causes actual harm
Why this matters: The HackerOne safe harbor creates contractual authorization for this activity for participating companies. For the first time, a researcher probing a frontier AI model's behavior has something more than a DOJ policy and a favorable Supreme Court decision. They have a contract.
Bug Bounty Platform Legal Structure — How Authorization Actually Works
The Mechanism
- Organization publishes program on HackerOne, Bugcrowd, or Intigriti
- Program policy defines: in-scope domains, out-of-scope systems, excluded vulnerability types, disclosure timeline, payment ranges
- Researcher reviews policy and agrees to platform terms + program-specific rules
- This creates a contractual authorization — the researcher has express permission to test in-scope systems
- Express authorization = the CFAA "without authorization" element is resolved in the researcher's favor
Legal consequence: Testing within program scope is not "without authorization" under CFAA. Testing outside program scope — even on the same organization's systems, even on a domain that looks related, even on a subdomain that wasn't listed — is still unauthorized.
The scope document is not bureaucratic fine print. It is the legal boundary between protected research and criminal exposure. Researchers who understand this treat it accordingly.
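The scope discipline described above is mechanical enough to automate. A minimal sketch, assuming a hypothetical program that declares wildcard in-scope domains and explicit exclusions (the domains here are invented for illustration — real programs publish their own scope documents):

```python
# Hypothetical sketch: check a candidate host against a bug bounty
# program's declared scope BEFORE testing it. Explicit out-of-scope
# entries override in-scope wildcards, mirroring how program policies
# are typically written. All domains below are invented examples.
from fnmatch import fnmatch

IN_SCOPE = ["*.example.com", "api.example.net"]
OUT_OF_SCOPE = ["legacy.example.com"]  # exclusions win over wildcards


def in_scope(host: str) -> bool:
    """True only if host matches an in-scope pattern and no exclusion."""
    if any(fnmatch(host, pattern) for pattern in OUT_OF_SCOPE):
        return False
    return any(fnmatch(host, pattern) for pattern in IN_SCOPE)
```

Note that `app.example.com` matches `*.example.com`, but `legacy.example.com` is excluded even though the wildcard would otherwise cover it — the legal boundary follows the exclusion, not the wildcard.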
Typical Program Structure
- Disclosure timeline: 90-day standard from report submission to permitted public disclosure (if not patched, researcher may disclose after 90 days)
- Triage: Organization must acknowledge receipt, validate, and respond within program SLA (often 30-90 days)
- Payment: Bounty paid on validation of legitimate, in-scope, reproducible vulnerability — not on submission
- Platform commission: Bug bounty platforms (HackerOne, Bugcrowd) take a percentage of each bounty paid
State Computer Crime Law — The Risk That Doesn't Go Away
CFAA analysis is necessary but not sufficient. This is the lesson that catches researchers off guard.
California Penal Code § 502:
- Covers unauthorized access to "any computer, computer system, or computer network"
- Includes civil action for compensatory + punitive damages
- Post-Van Buren CFAA narrowing does not automatically apply to § 502 interpretation
- California courts have not uniformly adopted a Van Buren-equivalent limitation
New York Penal Law Article 156:
- Computer tampering, unauthorized use of a computer, computer trespass
- Criminal provisions; parallel civil remedies available
Key point for researchers: Avoiding federal criminal CFAA liability under Van Buren does not automatically avoid state criminal or civil computer crime liability. State law analysis must be done independently. A researcher operating in California who benefits from Van Buren's ToS protection is not automatically protected from California Penal Code § 502. These are separate statutes, applied by separate courts, under separate precedent.
Coordinated Vulnerability Disclosure — The Professional Standard
The coordinated disclosure process evolved because full disclosure — publishing a working exploit without vendor notification — benefits attackers as much as it benefits defenders. The current standard:
- Researcher discovers vulnerability
- Researcher reports privately to vendor/organization (without public disclosure)
- Vendor acknowledges receipt and begins remediation
- Standard timeline: 90 days to patch (the Google Project Zero standard; widely adopted)
- If vendor patches within 90 days: researcher discloses publicly after patch
- If vendor does NOT patch within 90 days: researcher may disclose (typically with warning)
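The disclosure clock above can be sketched in a few lines. The 90-day figure is the Project Zero convention described here, not a statutory deadline, and the dates are illustrative:

```python
# Sketch of the 90-day coordinated-disclosure clock. The 90-day window
# is the widely adopted Project Zero convention, not a legal deadline.
from datetime import date, timedelta

DISCLOSURE_WINDOW = timedelta(days=90)


def disclosure_deadline(reported: date) -> date:
    """Date after which unpatched findings are conventionally disclosable."""
    return reported + DISCLOSURE_WINDOW


def may_disclose(reported: date, today: date, patched: bool) -> bool:
    # Disclosure is conventionally permitted once the vendor has patched,
    # or once the 90-day window has lapsed without a patch.
    return patched or today >= disclosure_deadline(reported)
```

So a report submitted January 1 is conventionally disclosable April 1 if unpatched, or immediately upon a patch.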
Legal standard references:
- ISO/IEC 29147 (2018): international standard for vulnerability disclosure policies
- ISO/IEC 30111 (2019): international standard for vulnerability handling processes
- CISA coordinated vulnerability disclosure guidance
The 90-day standard was not invented by lawyers. It was invented by security researchers — specifically Google's Project Zero team — as a practical balance between vendor remediation time and the researcher's legitimate interest in public disclosure. It has become so widely adopted that it now functions as the de facto definition of "designed to avoid harm" in the DOJ's good-faith policy.
"Full disclosure" (immediate public release without vendor notification):
- Legally riskier — does not satisfy the "designed to avoid harm" element of DOJ's good-faith policy
- Ethically contested even where legally available
- May enable attackers to exploit the vulnerability before a patch is available
Reform Landscape — The Law That Hasn't Caught Up
No federal statutory safe harbor exists as of 2026.
Proposed legislation for a CFAA security research safe harbor has been introduced in Congress multiple times — the Security Research Act and similar bills. None have passed as of 2026. The researcher who finds a critical vulnerability today has the same statutory exposure they had in 1986, when the CFAA was written for a world that bore no resemblance to the one they operate in.
Reform proposals in circulation:
- Explicit CFAA exception for good-faith security research within defined parameters
- Safe harbor conditioned on following coordinated disclosure norms
- Authorization for testing in compliance with a published VDP
EU approach: Article 6 (misuse of devices) of the Budapest Convention — a Council of Europe treaty — includes a legitimate-purpose exception that some signatory states have implemented to protect security research. Implementation varies significantly across EU member states; no uniform EU-wide researcher safe harbor exists.
The moral calculus here is uncomfortable: security research makes systems safer. The legal framework discourages it. Every year the safe harbor legislation fails to pass, the gap between what the law allows and what security requires grows wider.
Practitioner Takeaways
1. For lawyers advising security researchers: written authorization is the only reliable protection. DOJ policy is not a statute. Van Buren helps with ToS-only restrictions. The only thing that removes the CFAA "without authorization" element is actual authorization — a bug bounty program enrollment, a signed engagement letter, or participation in a VDP program with defined scope. Advise clients to get this in writing before testing begins.
2. Check state law independently of CFAA. Van Buren's narrowing of CFAA does not automatically narrow California Penal Code § 502 or New York Penal Law Article 156. Researchers operating in multiple jurisdictions need state-level analysis for each state where their testing activity touches systems.
3. Scope discipline is the researcher's most important legal control. The bug bounty program scope defines the boundary of authorization. Testing out-of-scope systems — even on the same organization — is "without authorization." Researchers should document their scope review and adhere to it strictly. Counsel should advise clients to preserve all scope documentation and confirm scope in writing before beginning.
4. For lawyers advising organizations: publish a VDP. An organization without a VDP is implicitly communicating that all security research is unauthorized. Organizations that publish VDPs channel research constructively, build goodwill with the security community, and create a legal framework that benefits both sides. From a litigation risk perspective, organizations with VDPs are less likely to face aggressive security researchers who feel they have no legitimate option.
5. The HackerOne AI Safe Harbor is newly relevant for AI-adjacent clients. Any client developing or deploying AI/ML systems that wants security research conducted on those systems should evaluate participation in the HackerOne AI Research Safe Harbor. It is the only systematic contractual framework for AI security research as of early 2026.
Quiz
See: artifacts/quizzes/quiz-01j.md
Test your knowledge
Ready to check what stuck?
10 questions — cases, statutes, and the practical move for each. Takes 5 minutes.