Non-Lawyers Summary

Bug bounty programs and vulnerability disclosure policies tell you which systems to test, but they do not immunize you from criminal prosecution — that power belongs to prosecutors, not companies. The Supreme Court's 2021 Van Buren ruling narrowed what "unauthorized access" means under the CFAA, but left enormous grey zones that researchers routinely stumble into. This module maps exactly what creates legal exposure even when you think you have permission, why "in scope" is not the same as "authorized," and what you need to document before, during, and after any research engagement to minimize your risk.


What This Module Answers Fast

  • A bug bounty program listed the domain as in scope — does that mean I'm authorized? → Possibly. It depends on the specific language and what you accessed. See Section 1.
  • The DOJ said they won't prosecute good-faith researchers under CFAA — am I protected? → No. It's a discretionary policy, not a legal defense. See Section 2.
  • I only accessed publicly exposed data — how can that be a crime? → Ask Auernheimer. It happened. See Section 3.
  • A HackerOne safe harbor clause is in the program policy — does it protect me in court? → No US court has held that it creates a legal defense. See Section 4.
  • I'm outside the US — can they extradite me? → Depends on the country and what you accessed. See Section 8.
  • What can I actually do without significant legal risk? → The Safe/Grey/Red matrix in Section 10 maps it out.

Section 1 — The Gates You Cannot See: What "Authorized Access" Actually Means After Van Buren (2021)

The Calm Before the Click

A researcher sits at his keyboard. He has a valid API key. He has a company's bug bounty program open in another tab. The scope list says: "All production web properties on *.example.com." He's done this a hundred times. He thinks he knows where the line is.

He is wrong about where the line is. Most researchers are.

The Supreme Court's 2021 ruling in Van Buren v. United States rewrote the map — but the new map has grey zones just as treacherous as the old one. Understanding them is the difference between a report that pays out and a federal indictment.

The Gates/Code Test

In Van Buren v. United States, 593 U.S. 374 (2021), the Supreme Court (6-3) resolved a circuit split on what it means to "exceed authorized access" under 18 U.S.C. § 1030(a)(2). The Court adopted a "gates-up-or-down" approach: a user "exceeds authorized access" only when they access areas of a computer system that are "off limits" to them — i.e., technically gated sections they have no right to enter, regardless of purpose.

Van Buren was a police officer who ran a license plate search for personal reasons using credentials he legitimately held for official use. The Court held this was NOT a CFAA violation because he had technical authorization to access that database. He used it for a bad purpose, but the statute's "exceeds authorized access" language targets the permission to enter, not the purpose of entry.

What this means for researchers:

The Van Buren test has two components:

  1. Technical access — Were you technically permitted to reach the system or resource?
  2. Scope of permission — Did any explicit restriction limit you to a subset of that system?

If a system is fully open to authenticated users with no subsection gating, using your legitimately obtained credentials to look at things you "weren't supposed to" may not trigger CFAA. But if a system has role-based access control, technical gates, or explicit permission scoping, crossing that line can be criminal even if you have credentials to the parent system.

What Bug Bounty Scope Language Does and Does Not Authorize

Bug bounty program scope language creates a contractual permission, not a criminal-law authorization. The distinction matters enormously.

| Program Language | What It Authorizes | What It Does NOT Authorize |
|---|---|---|
| "*.example.com is in scope" | Testing production web properties on that domain | Accessing internal systems reached via pivoting from *.example.com |
| "Test accounts provided on request" | Actions taken within your provisioned test account | Accessing other users' accounts or data, even via IDOR |
| "Automated scanning permitted" | Running scanners against listed targets | Consuming resources that cause measurable service degradation |
| "No rate limiting on login endpoint" | Documenting the absence of rate limiting | Credential stuffing against real user accounts |
| "Excluding: payment processing systems" | Everything not in that exclusion | Accessing payment systems even if you find an IDOR path from in-scope endpoints |

Specific language analysis:

Language that IS likely sufficient to create authorization for basic testing:

  • "You are authorized to perform security testing on the following systems..." followed by specific hostnames
  • "We grant you a limited, non-exclusive license to test the systems listed below for security vulnerabilities"
  • "Safe harbor: We will not pursue civil or criminal legal action against you for good-faith testing within scope"

Language that is NOT sufficient on its own:

  • "We love bug reports" or similar encouragement without explicit authorization
  • Program policies that describe what to report but do not grant explicit authorization to test
  • Terms of Service that don't explicitly authorize security testing
  • Safe harbor clauses that cover only the company's own civil claims and criminal referrals, not claims by third parties or parallel prosecution by other entities

The critical gap: Even a well-drafted safe harbor clause binds only the company operating the program. It cannot bind the DOJ, a state AG, or a foreign government. If you find a vulnerability that implicates another company's systems — a third-party payment processor, a shared CDN, a cloud provider API — the safe harbor from the primary program does not extend to those third parties.
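The scope rules above reduce to a mechanical pre-flight check worth running before every request. A minimal sketch, assuming a hypothetical program whose policy reads like the examples in the table (the pattern lists and the `in_scope` helper are illustrative, not any platform's API):

```python
from fnmatch import fnmatch

# Hypothetical program scope, modeled on the language analyzed above.
IN_SCOPE = ["*.example.com"]
EXCLUDED = ["payments.example.com", "*.thirdparty-cdn.net"]

def in_scope(host: str) -> bool:
    """True only if host matches an in-scope pattern and no exclusion.

    Exclusions win over inclusions: "Excluding: payment processing
    systems" carves targets out of an otherwise broad wildcard.
    """
    if any(fnmatch(host, pat) for pat in EXCLUDED):
        return False
    return any(fnmatch(host, pat) for pat in IN_SCOPE)

print(in_scope("app.example.com"))       # True: matches *.example.com
print(in_scope("payments.example.com"))  # False: explicitly excluded
print(in_scope("internal.example.net"))  # False: never listed
```

A check like this catches the easy mistakes (a typo'd hostname, a forgotten exclusion); it cannot catch the third-party-infrastructure problem described above, which requires manual mapping.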


Section 2 — A Policy Memo Is Not a Shield: The DOJ 2022 CFAA Charging Policy

The Revelation That Wasn't

In May 2022, DOJ issued updated CFAA charging guidance. The news circulated instantly in hacker communities: "DOJ says it won't prosecute good-faith security research." Forums lit up with relief. Researchers felt protected.

But that wasn't the real story.

What the Policy Actually Says

The DOJ policy instructs prosecutors to weigh:

  • Whether the researcher acted in good faith to improve security
  • Whether access was limited to what was necessary for the research
  • Whether findings were disclosed to the vendor in a way consistent with remediation

What the Policy Does NOT Create — The Hard Limits

This is prosecutorial guidance — an internal policy memo — not a statute, regulation, or binding legal standard:

  1. Not a legal defense. You cannot cite it in court as a defense to CFAA charges. A prosecutor who charges you anyway violates no law; they are only departing from internal guidance, which DOJ controls and can revise at any time.
  2. Not retroactive immunity. Prior conduct charged before the policy, or conduct a prosecutor classifies as NOT good-faith research, is not covered.
  3. "Good faith" is defined by the DOJ, not by you. The policy lists factors that DO NOT demonstrate good faith, including: accessing data beyond what's needed to demonstrate the vulnerability, retaining data you accessed, and publishing vulnerability details without first attempting coordinated disclosure.
  4. State prosecution is unaffected. A federal declination to prosecute does not prevent a California DA from charging you under Penal Code § 502, which carries up to 3 years in state prison.
  5. Civil liability is unaffected. The DOJ policy says nothing about the company's right to sue you under CFAA's civil provision (18 U.S.C. § 1030(g)) or common law trespass to chattels.

Prosecutorial discretion ≠ immunity. The 2022 policy is a speed bump, not a wall.


Section 3 — The Open Gate That Was Never Open: The Auernheimer Problem

The Disruption

Andrew Auernheimer and Daniel Spitler found something remarkable: AT&T's website would return iPad owners' email addresses from a URL that followed a predictable pattern based on ICC-IDs. No authentication was required. Anyone who sent a properly formatted GET request received a name and email address in response.

They wrote a script. They iterated through hundreds of thousands of ICC-IDs. 114,000 email addresses harvested — senators, military officers, executives.

Nothing in the system had stopped them. The gate appeared to be open. But the gate was never open for them.

What Happened in Court

In United States v. Auernheimer, 748 F.3d 525 (3d Cir. 2014), Auernheimer was convicted of CFAA violations in the District of New Jersey. The Third Circuit vacated the conviction — but only on venue grounds (the conduct didn't occur in New Jersey where the case was tried). The court explicitly declined to reach the merits of whether accessing publicly exposed data violated the CFAA.

The conviction was vacated on venue. Not on the merits of the CFAA theory.

The government's theory — that accessing data through a publicly accessible endpoint without authorization violates the CFAA — was never rejected on the merits by any federal appeals court. The government charged it, a jury convicted, and the conviction fell only because of the venue error.

What This Means — The Myth That Gets Researchers Arrested

  • In hiQ Labs v. LinkedIn (9th Cir. 2022), the court held that scraping publicly available data does not violate CFAA. But this is a 9th Circuit decision in a civil context involving a competitor's data, not a criminal prosecution.
  • The Van Buren "gates up or down" test favors researchers testing truly open endpoints — but it doesn't immunize systematic enumeration or data collection that the system operator would clearly prohibit.

Practical rule: "No authentication required" does not mean "authorized." If a system was clearly not intended to provide the data you're collecting, a prosecutor can and will argue that your access was unauthorized even if nothing technically blocked you. The fact that a gate was left open does not mean you had permission to walk through it.


Section 4 — The False Armor: HackerOne / Bugcrowd Safe Harbor Language

What Hackers Believe vs. What the Law Says

The standard HackerOne safe harbor provision states, in substance, that good-faith security research conducted in accordance with the program policy is considered authorized conduct and that the organization will not pursue legal action for it.

Bugcrowd's equivalent language is substantively similar. Researchers read this language and feel protected. The feeling is partially justified and dangerously incomplete.

What This Actually Does

Civil protection: If the company adopts this language, it likely waives the company's right to sue you for the specific CFAA civil claim (18 U.S.C. § 1030(g)) and related civil trespass theories — as long as your conduct stays within what any reasonable reading of "good faith" would cover. This is real and meaningful protection against the company itself.

Criminal protection: The language has no binding effect on the DOJ, FBI, or any state prosecutor. The phrase "authorized conduct" in a private company's program policy may be relevant evidence of authorization for CFAA purposes, but:

  • No US federal court has held that a bug bounty safe harbor clause constitutes legal authorization defeating a CFAA charge.
  • The DOJ can and does charge researchers whose conduct it views as exceeding scope, even when a program's safe harbor language was in place.
  • Third parties harmed by your testing — even incidentally — can file complaints independently, and those complaints can trigger prosecution regardless of the program's safe harbor.

The gap no one talks about: Even in a program with perfect safe harbor language, the moment you touch a system that is NOT operated by the contracting company — a third-party auth provider, a shared infrastructure vendor, an embedded iframe from a different origin — you are outside the scope of the safe harbor entirely.


Section 5 — The Pattern That Gets Researchers Prosecuted

The Mystery — Why Smart People Get Charged

The Hutchins, Auernheimer, and Weigman cases, combined with dozens of smaller prosecutions, reveal a consistent pattern. The researchers charged weren't careless. They often believed, sincerely, that what they were doing was legitimate. What broke each case was a handful of behaviors — behaviors that cross the line from research into prosecution risk regardless of stated intent.

Behavior (a): Accessing data beyond what's needed to prove the vulnerability. Demonstrating an IDOR vulnerability requires accessing one record you don't own. Downloading 100,000 records to "prove scale" is evidence of data exfiltration, not security research. One record: research. Systematic enumeration: crime.

Behavior (b): Sharing, publishing, or retaining the data. The moment you save, copy, transmit, or share data you accessed during testing, you have separate exposure: potential violation of the Stored Communications Act (18 U.S.C. § 2701), state privacy laws, and GDPR if European personal data was involved. Weev's prosecution was specifically triggered by the sharing of the harvested email list with a journalist.

Behavior (c): Public disclosure before the vendor patches. Disclosing publicly before coordinated disclosure creates exposure under multiple theories: tortious interference if it causes demonstrable business harm, and it destroys your good-faith defense for purposes of the DOJ charging policy.

Behavior (d): Accessing systems outside the defined scope. This is the most common pathway to prosecution. Following a vulnerability chain that leads you through an in-scope endpoint into an out-of-scope backend system means you've left the program's protection — even if you didn't intend to. STOP when scope ends. Document where you stopped and why.


Section 6 — The Clock Is Running: CVD, Disclosure Timelines, and the DMCA Security Research Exemption

Coordinated Vulnerability Disclosure (CVD) Timelines

The legal risk during the disclosure period is asymmetric: the longer you wait to report after discovery, the weaker your claim that you acted in good faith; the earlier you publish before a patch is available, the more likely you'll be accused of coercive disclosure.

| Organization | Standard Disclosure Deadline | Notes |
|---|---|---|
| CERT/CC | 45 days (active exploit); 90 days (standard) | Carnegie Mellon; may extend by 7-14 days for a patch near completion |
| Google Project Zero | 90 days from vendor notification | Extended to 104 days if the vendor commits to patch by day 90 |
| CISA | Typically 45 days for critical infrastructure | May coordinate an extension for national security |
| ISO/IEC 29147 | 90 days recommended; not legally binding | International standard; courts may reference it as an industry norm |

Legal consequence of early disclosure: If you disclose before the vendor patches and before the deadline, you face potential claims of intentional harm under CFAA (damage prong), tortious interference, and loss of good-faith defense. If the vulnerability was used by others between your disclosure and the vendor's patch, you may face civil liability for the resulting harm.

Legal consequence of never disclosing: Holding a zero-day without disclosure is not itself illegal, but using it offensively, selling it to non-government parties, or using it to maintain persistent access creates criminal exposure.
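One low-effort way to keep the disclosure clock honest is to compute every milestone the moment you report. A minimal sketch, assuming the cadence used in this module (14-day follow-up, 30-day CERT/CC escalation, 90-day deadline); the function name and defaults are illustrative:

```python
from datetime import date, timedelta

def cvd_milestones(reported: date, deadline_days: int = 90) -> dict:
    """Key dates on a coordinated-disclosure clock, counted from the
    day the vendor was notified (assumed cadence: 14 / 30 / 90 days)."""
    return {
        "vendor_followup": reported + timedelta(days=14),      # nudge the vendor
        "certcc_escalation": reported + timedelta(days=30),    # consider CERT/CC
        "disclosure_deadline": reported + timedelta(days=deadline_days),
    }

m = cvd_milestones(date(2025, 1, 6))
print(m["vendor_followup"])       # 2025-01-20
print(m["disclosure_deadline"])   # 2025-04-06
```

Calendar dates written down on day zero are also documentary evidence: they show you planned a coordinated process rather than improvised one after the fact.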

DMCA § 1201 and the 2024 Security Research Exemption

17 U.S.C. § 1201 prohibits circumventing technological protection measures (TPMs). This creates exposure for security researchers who bypass authentication, break encryption, or defeat access controls — even in the course of legitimate research.

The Copyright Office grants triennial exemptions under 17 U.S.C. § 1201(a)(1)(C). The 2024 triennial rulemaking renewed and clarified the security research exemption (37 C.F.R. § 201.40(b)(12)). Key requirements to qualify:

  1. The research must be conducted on a device or system you own or have explicit authorization to test.
  2. The purpose must be good-faith security research — identifying vulnerabilities to advance the state of security.
  3. The information derived must be used primarily to promote security and not primarily for copyright infringement.
  4. The research and disclosure must be conducted in a manner designed to avoid harm.

What the exemption does NOT cover: Bypassing DRM on software you don't own for testing purposes, circumventing authentication on systems you haven't been authorized to test, and reverse engineering for purposes other than security research.


Section 7 — The Layer Below: State Computer Fraud Laws That Bug Bounty Safe Harbors Miss

The Escalation — What Prosecutors Know and Researchers Don't

Bug bounty safe harbors are drafted by corporate legal departments to address federal CFAA exposure. They typically make no mention of state computer fraud statutes — and those statutes apply regardless of what a company's program policy says. A researcher who walks out of a federal case clean may still face state charges.

California Penal Code § 502

California's Comprehensive Computer Data Access and Fraud Act is among the most aggressively written state computer crime statutes. Key provisions:

  • § 502(c)(1): Knowingly and without permission accessing any computer system or network is a misdemeanor (first offense) or felony (subsequent; or if damage exceeds $950).
  • § 502(c)(2): Knowingly and without permission taking, copying, or using data from a computer system is a felony regardless of damage value.
  • "Without permission" under § 502: California courts have read this broadly. In People v. Gentry (1991) and successor cases, courts have held that accessing systems in a manner the owner would prohibit — even absent explicit technical restriction — can satisfy the "without permission" element.

Critical gap: A HackerOne safe harbor from a company domiciled in Delaware does nothing to defeat a California DA's ability to charge you under § 502 if the company's servers are in California, you are in California, or the harm occurred in California.

New York Penal Law § 156

New York's computer crime statute covers unauthorized use of a computer (§ 156.05, Class A misdemeanor) and computer trespass (§ 156.10, Class E felony). The trespass provision applies when you intentionally access a computer without authorization and: (a) knowingly gain access to computer material, or (b) access data that you intend to use to commit another offense.

Texas Penal Code § 33.02

Texas's "Breach of Computer Security" statute is particularly aggressive in its knowledge requirement: a person commits an offense if they knowingly access a computer without the effective consent of the owner. "Effective consent" under Texas law is narrowly construed — implied consent from the existence of a login form does not constitute effective consent for security testing.

| Statute | Key Trigger | Max Penalty | Safe Harbor Applicability |
|---|---|---|---|
| 18 U.S.C. § 1030 (CFAA) | Unauthorized access / damage | 10-20 years (aggravated) | Bug bounty safe harbor is relevant evidence, not a legal defense |
| Cal. Penal Code § 502 | Access without permission; data taking | Up to 3 years state prison | NOT addressed by federal/corporate safe harbors |
| N.Y. Penal Law § 156.10 | Unauthorized access + data access or intent | Up to 4 years (Class E felony) | NOT addressed by federal/corporate safe harbors |
| Tex. Penal Code § 33.02 | Access without effective consent | Up to 2 years (state jail felony); up to 99 years if critical infrastructure | NOT addressed by federal/corporate safe harbors |

Section 8 — No Borders: International Researchers, GDPR, Budapest Convention, and Extradition Risk

The Collapse — The Myth of Geographic Safety

"I'm not in the U.S., so U.S. law doesn't apply to me." This belief has ended careers, freedoms, and in one case, a decade of a person's life in a London extradition fight. The modern cybercrime legal architecture reaches across borders in ways that most researchers don't map until it's too late.

GDPR Exposure During Security Testing

If you access a system containing personal data of EU residents during your testing — which is almost any modern web application — you are "processing" that personal data within the meaning of GDPR Art. 4(2). The mere act of viewing it, copying it to memory, or capturing it in a proxy tool constitutes processing.

GDPR Art. 5(1)(b) requires that personal data be collected for "specified, explicit and legitimate purposes." "I was security testing" is not a GDPR-recognized lawful purpose unless:

  • You had explicit written authorization from the controller (the company being tested), AND
  • You processed only the minimum personal data necessary, AND
  • You deleted all personal data upon completion of testing

Practical consequence: Capturing full HTTP responses with user PII in your Burp Suite history and retaining those logs after the engagement is potentially a GDPR violation, even if you're a researcher in the US testing a US company with EU users. GDPR has extraterritorial reach under Art. 3(2) when processing relates to EU data subjects.

Budapest Convention Jurisdiction

The Budapest Convention on Cybercrime (ETS No. 185, 2001) — ratified by 68 countries including the US, UK, EU members, Australia, Canada, and Japan — harmonizes criminal definitions for unauthorized computer access across signatories. Conduct that would be CFAA-equivalent in the U.S. is criminalized in most developed countries where you might test targets.

Multi-country jurisdiction: If you test a server in Germany from the US while targeting a company headquartered in the UK, you may be subject to US federal law, German law (§ 202a StGB — Ausspähen von Daten), and UK law (Computer Misuse Act 1990, as amended). All three jurisdictions may have grounds to charge you.

Extradition Risk Matrix

| Researcher Location | Target System Location | Extradition Treaty? | Risk Level |
|---|---|---|---|
| US | US | N/A | Full US federal + state exposure |
| EU | US | Yes (MLAT) | Extraditable; EU states have cooperated |
| UK | US | Yes (bilateral) | High — US-UK extradition has been used for hacking |
| Russia / China / Iran / North Korea | US | No treaty | De facto safe from extradition; travel risk to treaty countries |
| Canada | US | Yes (bilateral) | High — numerous extraditions for cybercrime |
| Brazil | US | Yes | Extradited defendants exist; slower process |

Travel risk: Even if you cannot be extradited from your home country, you become extraditable the moment you set foot in a treaty country. Researchers from non-extradition countries have been arrested at airports in third countries and extradited to the U.S.


Section 9 — The Checklist That Keeps You Out of Court: Pre/During/Post Engagement

Pre-Engagement (Document Everything)

  • [ ] Program scope reviewed: Print or save a dated copy of the exact program scope as it exists when you begin. Programs change scope without notice, and you need the version in effect when you tested.
  • [ ] Safe harbor language reviewed: Does the program contain explicit authorization language, or just encouragement to report? Is the safe harbor limited to civil claims or does it also address criminal referrals?
  • [ ] Third-party system identification: Map all third-party services (payment processors, CDNs, auth providers, analytics) that appear in scope targets. Note that safe harbor does not extend to these.
  • [ ] Scope ambiguity resolution: If a system is listed as in scope but you're uncertain whether a specific action is authorized, email the program with your specific question and get a written response. This creates documentary evidence of your good-faith interpretation.
  • [ ] Legal jurisdiction assessment: Where is the company? Where are the servers? What state/country laws apply?
  • [ ] State law exposure: If the company is in California and you are in California, Cal. PC § 502 applies independently of the program's safe harbor.
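The first checklist item, a dated copy of the scope in effect when you began, is easy to automate. A minimal sketch, assuming you have already fetched the scope text yourself (the `snapshot_scope` helper and filename scheme are illustrative, and fetching is left out to keep the sketch self-contained):

```python
import hashlib
import json
from datetime import datetime, timezone

def snapshot_scope(program: str, scope_text: str) -> dict:
    """Save a timestamped, hash-stamped record of the program scope.

    The SHA-256 lets you later prove the saved text is the exact
    version you relied on when testing began.
    """
    record = {
        "program": program,
        "captured_utc": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(scope_text.encode()).hexdigest(),
        "scope_text": scope_text,
    }
    fname = f"scope-{program}-{record['captured_utc'][:10]}.json"
    with open(fname, "w") as f:
        json.dump(record, f, indent=2)
    return record

rec = snapshot_scope("example", "*.example.com in scope; payments excluded")
print(rec["sha256"][:12])
```

Re-run the snapshot whenever you resume testing; if the hash changes, the program changed its scope and your earlier copy documents what you reasonably relied on.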

During Testing

  • [ ] Access minimum data necessary: Never collect, store, or transmit personal data from the target system unless absolutely required to prove the vulnerability. A screenshot of the error message, not a dump of the user table.
  • [ ] Log your methodology: Keep timestamped notes of every action. If you access something by accident that was out of scope, document that you accessed it accidentally and stopped immediately.
  • [ ] Do not pivot into out-of-scope systems: If following a vulnerability chain leads you through an in-scope endpoint toward an out-of-scope backend, STOP. Document where the chain goes and what you did NOT do.
  • [ ] Do not retain personal data: Any captured traffic containing PII should be sanitized as soon as you've documented the vulnerability.
  • [ ] Do not share findings prematurely: No Slack, no Discord, no Twitter threads, no conference talks before you've completed coordinated disclosure and received confirmation that a patch is released or a timeline is established.
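Sanitizing captured traffic before anything is retained can be scripted into your workflow. A minimal sketch with two illustrative regex patterns; real PII coverage (names, tokens, card numbers, addresses) is far broader than this:

```python
import re

# Illustrative patterns only -- a real engagement needs broader coverage.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(captured: str) -> str:
    """Sanitize captured HTTP traffic before retention: keep the
    structure that proves the bug, drop the personal data."""
    for label, pattern in PII_PATTERNS.items():
        captured = pattern.sub(f"[REDACTED-{label.upper()}]", captured)
    return captured

resp = 'HTTP/1.1 200 OK\n{"user": "jane@example.com", "ssn": "123-45-6789"}'
print(redact(resp))
```

The redacted response still demonstrates that the endpoint leaked a user record, which is all a report needs; the raw values stay out of your archives.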

Post-Testing / Disclosure

  • [ ] Report within 24-48 hours of discovery: Immediate reporting maximizes your good-faith posture. A 3-week delay before reporting undermines it.
  • [ ] Start the 90-day clock: Note when you reported. If the vendor hasn't responded within 14 days, send a follow-up. If no response in 30 days, consider notifying CERT/CC to facilitate coordination.
  • [ ] Vendor non-response protocol: If you reach 90 days with no patch and no engagement, contact CERT/CC or a similar coordination body before publishing. Document every attempt at contact.
  • [ ] Do not disclose during active exploitation: If you discover that the vulnerability is being actively exploited in the wild, notify CISA (report@cisa.gov) and the vendor simultaneously, and give them a compressed timeline.
  • [ ] Legal review before major disclosures: Any vulnerability that implicates critical infrastructure, government systems, or financial institutions warrants a quick consultation with a lawyer before public disclosure.

Section 10 — The Map of the Minefield: Safe / Grey / Red Activity Matrix

| Activity | Legal Risk Level | Why | Mitigation |
|---|---|---|---|
| Passive reconnaissance (DNS, WHOIS, Shodan) | Safe | No system access; public data only | None required |
| Port scanning in-scope targets | Safe-to-Grey | Authorized by most programs; can cause unintended load | Keep to moderate rates; document program authorization |
| Authenticated testing with provided test account | Safe | Explicit technical + contractual authorization | Stay within your account's data boundary |
| Unauthenticated fuzzing of in-scope login endpoint | Grey | Authorized in most programs but can trigger lockouts; rate concerns | Slow the rate; test during off-peak hours |
| IDOR testing: accessing one record of another test account | Grey | Most programs allow this; under Van Buren, access with no technical gate in the way weighs against CFAA liability | Use only program-provided test accounts, not real user accounts |
| IDOR testing: accessing records of real users | Red | This is accessing data you are not authorized to access; civil and criminal exposure | Never. Document the IDOR path with a test account only |
| Auth bypass that gains access to admin panel | Grey-to-Red | Depends on whether the admin panel is in scope; pivoting out of scope is Red | Stop at proof of bypass; do not proceed deeper |
| SQL injection demonstrating data extraction | Red | Even with a harmless payload, extracting real user data is CFAA + SCA + GDPR exposure | Blind injection to prove exploitability only; no data extraction |
| Automated scanning of out-of-scope targets | Red | Clearly unauthorized; no safe harbor; full CFAA exposure | Never. Verify every target against current scope before scanning |
| Network pivoting from in-scope to out-of-scope systems | Red | You've left authorized access entirely; the Van Buren gate is closed to you | Stop immediately; document where the pivot leads; do not follow |
| Creating a working PoC exploit | Grey | Legal in itself; risk comes from what you do with it | Keep offline; share only with vendor during CVD; never publish pre-patch |
| Publishing PoC before vendor patches | Red | Defeats good-faith defense; potential CFAA damage liability; tortious interference | Never before patch + coordinated disclosure window |
| Retaining user PII from testing | Red | SCA, GDPR, and state privacy laws apply independently; no safe harbor covers this | Delete immediately; document that you deleted it |
| Sharing findings with third parties before disclosure | Red | Eliminates good-faith defense; potential wire fraud if a finder's fee is involved | Disclose only to the program; never to third parties |
| Disclosing after 90-day window with no patch | Grey | Legally risky but defensible with documentation of vendor non-response | Follow CERT/CC protocol; document every contact attempt |
| Testing DMCA-protected authentication bypass | Grey | 2024 exemption applies if you own the device and the purpose is security research | Own the device; document research purpose; comply with exemption terms |
| Testing EU-user data systems from the US | Grey | GDPR processing rules apply extraterritorially | Minimize PII capture; delete captured PII immediately |
| Testing via Tor / anonymization tools | Red | Prosecutors cite anonymization as evidence of guilty knowledge; it undermines a good-faith defense | Never anonymize authorized research |

Key Holdings Summary

| Case | Year | Court | Key Holding |
|---|---|---|---|
| Van Buren v. United States | 2021 | SCOTUS | "Exceeds authorized access" requires crossing a technical gate to data you have no permission to access, not misusing access you legitimately have |
| United States v. Auernheimer ("Weev") | 2014 | 3d Cir. | Conviction vacated on venue grounds only; the jury convicted on the government's theory that systematically accessing an unauthenticated public endpoint violated the CFAA, and no appeals court ever rejected that theory on the merits |
| hiQ Labs v. LinkedIn | 2022 | 9th Cir. | Scraping publicly available data without authentication does not violate the CFAA; but this is a 9th Circuit civil case, not binding on criminal prosecutors |
| United States v. Hutchins | 2019 | E.D. Wis. | Prior malware authorship can coexist with extraordinary post-offense conduct; plea to two counts; extraordinary cooperation matters at sentencing |

The Lesson — Know Where the Gate Is

Bug bounty programs, VDPs, and the DOJ's 2022 charging policy are all meaningful — but none of them are legal armor. They are factors a prosecutor weighs when deciding whether to charge you, and factors a defense attorney will argue if you are charged. They are not defenses that will get a case dismissed.

The researchers who avoid prosecution share several traits: they access only what is necessary to prove the finding, they document everything, they never retain personal data, they report promptly, they stay strictly within scope, and they treat the program's safe harbor language as the floor of their conduct, not the ceiling of their permission.

The researchers who get prosecuted despite thinking they were doing legitimate work share the opposite traits: they collected more data than needed, they shared findings with people outside the program, they accessed systems that were technically reachable but clearly not in scope, and they treated "no auth required" as equivalent to "authorized."

Know where the gate is. Don't walk through it unless you've verified it's open for you specifically. Document that you verified it. Stop the moment you're not sure.


Next module: [01v — The Legal Architecture of Responsible Disclosure: Timelines, Notification Duties, and Post-Breach Liability]


Practitioner Template: Penetration Testing Authorization Agreement

Before starting any engagement, get a signed authorization agreement. LawZeee includes a full 27-section template:

Pentest Authorization Agreement v3.0

This template covers: parties and definitions, scope and IP ranges, timing and notification windows, rules of engagement, data handling, insurance requirements, indemnification, confidentiality, conflict of interest, post-engagement obligations, and governing law. Customize sections 1, 3, 5, and 10 for every engagement. Never start work without a countersigned copy.

Test your knowledge

Ready to check what stuck?

10 questions — cases, statutes, and the practical move for each. Takes 5 minutes.

Take the quiz now →