Non-Lawyers Summary

Security professionals who test computer systems face not just U.S. law but the criminal codes of every country whose systems, data, or infrastructure they touch — and sometimes just the country they travel to with a Kali Linux laptop. This module maps the specific statutes, penalties, and legal gaps in six major jurisdictions, explains why SIM swapping is federally prosecutable under three separate U.S. statutes, and gives you a multi-jurisdiction engagement checklist so you know what legal review to demand before your first packet hits a target with offices in the UK or Germany.


What This Module Answers Fast

  • Can I get arrested for having Metasploit on my laptop in the UK? → Possession alone is not an offence, but supplying it — or obtaining it with a view to supply — can be, under CMA § 3A. See Section 1.
  • Germany makes it a crime to possess hacking tools with intent — what does that mean? → § 202c StGB with "intent" is broader than you think. See Section 2.
  • Does GDPR apply to my pen test if I'm a U.S. tester testing an EU company? → Yes, if you process EU personal data. See Section 3.
  • Is there a Canadian equivalent of Van Buren's "authorization" defense? → No statutory analog. See Section 4.
  • Australia has no safe harbor for researchers — what's the actual exposure? → § 478.1 is the base access offence, with "lawful excuse" as the only defense. See Section 5.
  • SIM swapping is just social engineering, right? What crimes does it actually trigger? → At minimum three federal statutes, sometimes more. See Section 6.
  • If I test a company's UK office servers from the U.S., can I be extradited to the UK? → Yes, under the US-UK extradition treaty. See Section 7.

Section 1 — The Law That Emerged from a Prince's Mailbox: UK Computer Misuse Act 1990

Before the Law Existed — R v. Gold & Schifreen (1988)

In 1984, two men in England found a default maintenance password for British Telecom's Prestel network. Using nothing more sophisticated than that credential, they wandered through the system until they stumbled into something extraordinary: the private mailbox of Prince Philip. They read it. They left no damage. They told no one. When the Crown eventually caught them, it charged them with forgery under the Forgery and Counterfeiting Act 1981 — the only law that even came close to fitting. They were convicted at trial, but the House of Lords quashed both convictions in 1988: the statute simply did not reach what they had done. Parliament passed the Computer Misuse Act 1990 directly in response.

That single moment of judicial impotence — watching two hackers walk free because the code hadn't caught up — shaped everything that followed.

Section 1 — Unauthorized Access

CMA § 1(1): A person is guilty if they cause a computer to perform any function with intent to secure access to any program or data held in any computer, knowing that the access they intend to secure is unauthorized.

Key elements:

  • The access must be unauthorized — this is determined by whether the person responsible for the computer has consented to access in the manner sought
  • Intent is required; recklessness is insufficient for § 1
  • Covers access to any computer, including cloud infrastructure
  • Penalty: Summary conviction up to 12 months imprisonment or fine; on indictment, up to 2 years imprisonment

The UK "authorized" test differs meaningfully from Van Buren's gates-up-or-down approach. UK courts apply a factual consent standard: was this specific type of access consented to by someone with authority to grant it? There is no statutory safe harbor, and the CMA contains no equivalent of the CFAA's "authorization" definition. DPP v. Bignell (1998) held that police accessing the Police National Computer for unauthorized private purposes did NOT violate CMA § 1 because they had technical authorization — an early UK analog to Van Buren. However, the House of Lords in R v. Bow Street Magistrates' Court ex parte Government of the United States of America (Allison) [2000] narrowed Bignell, holding that authorization is assessed against the specific data and the authority actually granted, not merely technical access rights.

Section 2 — Unauthorized Access with Intent to Commit a Further Offence

CMA § 2: Unauthorized access (all elements of § 1) committed with intent to commit or facilitate commission of a further offence (any offence with a 5+ year sentence).

This is the aggravated form. A pen tester who accesses a system without authorization and intends to demonstrate they could subsequently access financial records or commit fraud could be charged under § 2 even if they never complete the further offence. Penalty: Up to 5 years imprisonment on indictment.

Section 3 — Unauthorized Acts Impairing Operation

CMA § 3(1): A person is guilty if they do any unauthorized act in relation to a computer, knowing it is unauthorized at the time, with intent to impair the operation of any computer, prevent or hinder access to programs or data, or impair operation or reliability of data.

Added by the Police and Justice Act 2006 to cover DoS attacks. The Serious Crime Act 2015 went further, inserting a new § 3ZA: unauthorized acts causing, or creating a significant risk of, serious damage to human welfare, the environment, the economy, or national security, punishable by up to 14 years imprisonment, or life where the damage threatens human life or national security.

For pen testers: any test that causes system degradation, crashes a service, or impairs data integrity — even accidentally — can satisfy § 3's actus reus if the underlying access was unauthorized. Accidental DoS during load testing against a system you didn't clearly have written authorization to stress-test is criminal exposure.

Section 3A — The Statute That Criminalizes Your Toolkit

CMA § 3A(1): A person is guilty if they make, adapt, supply, or offer to supply any article:

  • Intending it to be used to commit or assist commission of a § 1 or § 3 offence, OR
  • Believing it is likely to be used to commit or assist commission of a § 1 or § 3 offence

CMA § 3A(3): A person is guilty if they obtain any article with a view to its being supplied for use to commit or assist a § 1 or § 3 offence.

Penalty: Up to 2 years imprisonment on indictment.

The Metasploit Problem

Section 3A is the most dangerous UK statute for security professionals. The statute contains no express research exception. Whether possession of a dual-use tool violates § 3A depends on:

  1. Intent — if you intend the tool for authorized pen testing, your intent is lawful
  2. "Likely to be used" — if the court finds the tool is likely to be used for unauthorized access (even by someone else you supply it to), § 3A(1)(b) applies

The Crown Prosecution Service published guidance noting that § 3A was not intended to criminalize legitimate security research, and that prosecutors must consider whether supply or use occurred in the context of legitimate security testing. However, this is prosecutorial discretion, not a statutory defense: the guidance binds no court and confers no immunity.

Practical exposure scenarios:

  • Carrying Metasploit, Burp Suite Pro, or a Flipper Zero into the UK for a client engagement: the tools are dual-use. Possession alone is not an offence; the issue is "obtaining with a view to supply" or "supplying" to someone who then uses them for unauthorized access.
  • Demonstrating a zero-day exploit to a client: if that tool is later used without authorization, retrospective § 3A liability is a theoretical risk.
  • Teaching a hacking course in the UK using live exploitation tools: instructors have been cautioned by legal counsel in this context.

UK security researchers typically navigate § 3A by: (1) ensuring all testing is explicitly authorized in writing, (2) never distributing tools to unknown parties, (3) ensuring contracts specify the legitimate purpose of each tool, and (4) not maintaining pre-compiled exploit payloads absent a specific engagement justification.
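
One way to implement points (3) and (4) in practice is a per-engagement tool manifest that records each binary's cryptographic hash alongside its documented lawful purpose, so that the authorization record ties to the exact artifact carried on site. The sketch below is illustrative only — the function name and record schema are invented for this example, and no statute or court has endorsed any particular format:

```python
import hashlib
from datetime import date

def tool_manifest_entry(path: str, purpose: str, engagement_id: str) -> dict:
    """Record a tool's identity and documented lawful purpose for one engagement.

    Hashing the binary ties the written authorization to the exact artifact
    supplied, supporting a CMA s 3A / s 202c "lawful purpose" paper trail.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Stream in chunks so large exploitation frameworks hash cheaply
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return {
        "engagement": engagement_id,        # ties record to a specific contract
        "tool_path": path,
        "sha256": digest.hexdigest(),
        "documented_purpose": purpose,      # mirrors the contract's tool schedule
        "recorded_on": date.today().isoformat(),
    }
```

A manifest like this would typically be generated before travel and attached to the contract's tool authorization schedule, then regenerated on return to show nothing was added mid-engagement.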


Section 2 — Germany's Preparation Statute: §§ 202a–202c StGB

The Law That Criminalized Thinking About It

Germany didn't just criminalize hacking. It criminalized preparing to hack. Section 202c of the Strafgesetzbuch — a statute so sweeping that it triggered protests from the German security research community when it passed in 2007 — made possession of tools intended for unauthorized access a standalone criminal act.

§ 202a StGB — Ausspähen von Daten (Unauthorized Data Access)

§ 202a(1): Whoever gains unauthorized access to data not intended for them and specially protected against unauthorized access, by overcoming the access protection, faces up to three years imprisonment or a fine.

Key phrase: "specially protected against unauthorized access" — unlike the CFAA, Germany requires that the data have some form of technical access protection (encryption, password, access control). Accessing unprotected open data does not satisfy § 202a. This mirrors, in some respects, the Van Buren gates analysis.

"Overcoming" the protection need not be sophisticated — bypassing a simple login form with default credentials has been held sufficient.

§ 202b StGB — Abfangen von Daten (Data Interception)

§ 202b: Unauthorized interception of non-public data transmissions (including electromagnetic emissions) by technical means. Specifically covers packet sniffing and wireless interception. Penalty: up to two years or fine.

§ 202c StGB — Vorbereitung des Ausspähens und Abfangens von Daten (Preparation)

§ 202c(1): Whoever prepares a § 202a or § 202b offence by producing, acquiring, selling, disseminating, or making otherwise accessible a password or security code permitting access to data, or a computer program whose purpose is the commission of such an act, faces up to two years imprisonment or a fine.

This is the dangerous provision. Germany criminalizes preparation — specifically the production, acquisition, or distribution of tools designed for unauthorized data access. Unlike § 3A CMA which requires intent or likelihood, § 202c's text focuses on the tool's purpose.

German courts have grappled with dual-use tools extensively:

  • The Landgericht Mannheim in 2010 ruled that a security consultant who had written a port scanner had not violated § 202c because the tool's primary purpose was network administration, not unauthorized access.
  • The Bundesgerichtshof (Federal Court of Justice) has not issued a definitive ruling on what "purpose" means for broadly dual-use tools like Nmap or Metasploit.
  • German security researchers commonly operate under an informal understanding that tools created and used in professional security contexts, under contract, do not satisfy § 202c's "purpose" element — but this is untested at the highest level.

BSI Act (BSI-Gesetz): The Federal Office for Information Security (Bundesamt für Sicherheit in der Informationstechnik) issues guidelines for security testing but these create no statutory authorization defense. The BSI's "IT-Grundschutz" framework encourages penetration testing for covered entities but does not immunize testers from § 202c.

Practical advice for Germany engagements:

  • Obtain explicit written authorization from the data controller
  • Ensure the engagement contract specifies all tools to be used and their purpose
  • Do not distribute tools to third parties in Germany
  • German pen testers typically consult counsel before bringing novel exploitation frameworks into engagements

Section 3 — GDPR Follows Your Traffic: EU NIS2 and Data Protection During Security Testing

The Rule That Doesn't Stop at the Border

The tester was in Austin. The servers were in Frankfurt. He was working for a U.S.-based client. He thought European law wasn't his problem. The GDPR disagreed.

NIS2 Directive (2022/2555)

The EU's NIS2 Directive, effective October 2024 across member states, imposes cybersecurity requirements on "essential entities" (energy, transport, banking, health, water, digital infrastructure) and "important entities" (postal, waste management, chemicals, food, manufacturing, digital providers). Article 21 requires these entities to implement measures including penetration testing and security audits.

NIS2 also directs member states to encourage the use of European cybersecurity certification schemes (Art. 24). NIS2 does not create authorization for third-party researchers — it creates an obligation for covered entities to conduct testing. The authorization must still come from the entity itself.

NIS2 for non-EU testers: A U.S. pen tester engaged by an EU NIS2-covered entity must operate under a contract that meets the entity's NIS2 compliance obligations. Any unauthorized access — even accidental scope drift — that affects an essential entity potentially triggers NIS2 reporting obligations for the target company and criminal referrals under member state computer misuse laws.

GDPR Article 5 — Data Minimization During Testing

GDPR Article 5(1)(c) requires that personal data be "adequate, relevant and limited to what is necessary" (data minimization principle). Article 5(1)(b) requires that data collected for one purpose not be further processed in incompatible ways.

When a pen tester — even one operating from the United States — accesses a system belonging to an EU-established data controller or processor, and in doing so touches personal data of EU data subjects, GDPR applies. GDPR Article 3(2) establishes extra-territorial reach: GDPR applies to processing of EU data subjects' data by controllers/processors not established in the EU when offering goods/services to or monitoring EU data subjects.

Practical GDPR exposure during testing:

  1. Exfiltration of PII as proof-of-concept: Pulling real user records to demonstrate a vulnerability constitutes "processing" under GDPR Art. 4(2). Even if authorized by the target company for pen test purposes, the tester who handles that data is a "processor" under GDPR and must comply with Art. 28 (processor obligations) and Art. 32 (security of processing).
  2. Retaining test data: Any captured credentials, PII, or user data retained beyond what is necessary for the engagement violates Art. 5(1)(c). Data must be destroyed post-engagement per contract and Art. 5(1)(e) (storage limitation).
  3. Cross-border transfer: If a U.S. tester exfiltrates EU personal data to U.S. systems as part of testing, this is a restricted transfer under GDPR Chapter V. Without a valid transfer mechanism (Standard Contractual Clauses, adequacy decision, or binding corporate rules), the transfer is unlawful.
  4. Breach notification: If the pen test leads to discovery of a prior breach, GDPR Art. 33 requires the data controller to notify the supervisory authority within 72 hours. The pen tester has no direct notification obligation but the contract must address who holds the obligation and how the tester supports it.
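
Points 1 and 2 above suggest a practical pattern: when a proof-of-concept requires showing that records were reachable, retain evidence about the data rather than the data itself. A minimal sketch of that idea follows — the function name and evidence schema are hypothetical, not a GDPR-endorsed format:

```python
import hashlib
import secrets

def scrub_poc_evidence(rows: list[dict]) -> dict:
    """Reduce exfiltrated records to minimization-friendly evidence.

    Keeps proof that data was reachable (field names, row count, salted
    per-row fingerprints) while discarding the personal data itself, in
    the spirit of GDPR Art. 5(1)(c).
    """
    # Engagement-specific salt; destroyed post-engagement so fingerprints
    # cannot be brute-forced back to the underlying values.
    salt = secrets.token_hex(16)
    fingerprints = [
        hashlib.sha256(
            (salt + "|".join(str(v) for v in row.values())).encode()
        ).hexdigest()[:16]
        for row in rows
    ]
    return {
        "fields_observed": sorted(rows[0].keys()) if rows else [],
        "row_count": len(rows),
        "fingerprints": fingerprints,  # distinct records shown, content withheld
    }
```

The design choice is that the report demonstrates reach and scale (which fields, how many rows, that the rows were distinct) without the tester ever becoming a long-term holder of EU personal data.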

Enforcement reality: EU supervisory authorities have not targeted individual pen testers, but have fined companies whose pen test processes led to data exposure. The risk for testers is civil liability from the client company seeking indemnification.


Section 4 — Canada: The "Colour of Right" Defense That May Not Protect You

No American Safe Harbor Here

In Canada, there is no statutory analog to the DOJ's CFAA Charging Policy. There is no codified good-faith researcher protection. What exists instead is a common-law concept — "colour of right" — that sounds like a safety net but functions more like a tightrope.

§ 342.1 — Unauthorized Use of Computer

§ 342.1(1): Every person who, fraudulently and without colour of right,

  (a) obtains, directly or indirectly, any computer service;
  (b) intercepts or causes to be intercepted any function of a computer system;
  (c) uses or causes to be used a computer system with intent to commit an offence under (a) or (b), or mischief under § 430; or
  (d) uses, possesses, traffics in, or permits another person to have access to a computer password that would enable a person to commit an offence under (a), (b), or (c),

is guilty of an indictable offence punishable by up to 10 years imprisonment.

Key elements:

  • "Fraudulently" — Canadian courts have interpreted this to require a dishonest purpose, not merely unauthorized access. A good-faith security researcher acting under an honest belief they have authorization may lack the "fraudulent" element.
  • "Without colour of right" — This is a distinct element from CFAA's authorization test. "Colour of right" means a genuine belief in a legal right to act, whether or not that belief is legally correct. A researcher who genuinely (even if incorrectly) believes a bug bounty program authorizes their access may have a colour of right defense. Canadian courts have not definitively addressed this in the security research context.
  • No federal authorization statute: Canada has no analog to the DOJ's CFAA Charging Policy or any statutory safe harbor for good-faith security research. The "colour of right" defense is fact-specific and requires proof of genuine belief.

§ 430(1.1) — Mischief in Relation to Computer Data

§ 430(1.1): Everyone who wilfully destroys or alters computer data, renders computer data meaningless, useless, or ineffective, obstructs, interrupts, or interferes with lawful use of computer data, or denies access to computer data to a person entitled to access, is guilty of an indictable offence punishable by up to 10 years imprisonment (or 14 years for critical infrastructure, or life if it endangers life).

Security testing activities that modify data, even temporarily, can satisfy § 430(1.1). A pen tester who creates a test account, modifies configuration for a PoC, or crashes a service during testing could face mischief charges if the testing was not clearly authorized.

Canadian courts on authorized access: In R v. McLaughlin (2017, Ontario), the court held that authorization must be explicit and specific — a general "use the system as you like" instruction did not authorize security testing that went beyond normal use. Canadian courts apply a fact-specific inquiry into whether the specific actions taken were within the scope of consent given.


Section 5 — Australia: Where "Lawful Excuse" Is the Only Lifeline

The First ICS Criminal Case Came from Down Under

The case that defined what it looked like when SCADA law met criminal prosecution — R v. Boden (2002, Queensland) — didn't happen in the United States. It happened in Australia. A disgruntled contractor. A stolen wireless radio. Eight hundred thousand liters of raw sewage. And a conviction under strict unauthorized access principles with no safety research exception in sight.

Australia's Criminal Code Act 1995 offers no statutory research exemption. There is no Australian DOJ Charging Policy. What exists is liability that turns entirely on authorization — with one narrow escape.

§ 477.1 — Unauthorized Access, Modification, or Impairment with Intent to Commit a Serious Offence

§ 477.1(1): A person is guilty if they cause unauthorized access to, modification of, or impairment of, data held in a computer, and the person intends to commit or facilitate a serious Commonwealth, State, or Territory offence (one with 5+ year penalty).

Penalty: Up to the maximum penalty for the serious offence being facilitated, or 10 years imprisonment, whichever is lesser.

§ 477.2 — Unauthorized Modification of Data

§ 477.2: Unauthorized modification of computer data with intent to cause impairment. Penalty up to 10 years.

§ 477.3 — Unauthorized Impairment of Electronic Communication

§ 477.3: Unauthorized impairment of electronic communications. Covers DoS attacks. Penalty up to 10 years.

§ 478.1 — Unauthorized Access to, or Modification of, Restricted Data

§ 478.1: Access or modification of restricted data (data to which access is restricted by some form of access control) without authorization and without a lawful excuse. This is the base-level unauthorized access offence. Penalty up to 2 years.

No statutory authorization defense: The Australian Criminal Code contains no express safe harbor for security researchers or pen testers. There is no Australian analog to the DOJ CFAA Charging Policy. The word "unauthorized" in §§ 477–478 is assessed on the facts: was access actually authorized by someone with authority to authorize it?

Australian Signals Directorate (ASD) guidelines: The ASD (Australia's NSA equivalent) publishes the "Information Security Manual" (ISM), which includes guidelines for penetration testing of government systems and encourages organizations to conduct authorized testing. The ISM creates compliance obligations for Australian Government entities but no legal authorization for third-party researchers.

"Lawful excuse" under § 478.1: Courts have recognized that a genuine authorization from the computer owner can constitute a "lawful excuse." This is the closest Australian law comes to a safe harbor — but it requires actual authorization, not merely good-faith belief in authorization.

Key risk: Australian law is notable for broad geographic reach, and Boden's conviction (above) under the Queensland equivalent of the federal statute shows how prosecutions proceed: on unauthorized access alone, with no research exception available.


Section 6 — SIM Swapping: Three Federal Statutes, One Phone Call

How a Phone Number Became the Master Key to Everything

It started with a call to T-Mobile customer service. The attacker was pretending to be someone else. Within minutes, a phone number — and with it, every two-factor authentication code sent to that number — belonged to them. No malware. No zero-days. No server rooms. Just social engineering and a carrier employee who didn't ask enough questions. What followed triggered three federal statutes with a combined maximum sentence of 35 years.

18 U.S.C. § 1030 — CFAA

A SIM swap targeting a victim's online accounts is prosecuted under CFAA § 1030(a)(2) (unauthorized access to protected computer to obtain financial records or information) and § 1030(a)(4) (unauthorized access with intent to defraud). The "computer" is the victim's authentication system (email, crypto exchange, bank). The attacker's access, using an SMS code they caused to be rerouted, is unauthorized because the victim never consented.

Penalty: § 1030(c)(2)(A): up to 5 years (first offense), 10 years (subsequent); § 1030(c)(3)(A) (fraud causing $5,000+ loss): up to 5 years.

18 U.S.C. § 1343 — Wire Fraud

The fraudulent misrepresentation to the carrier (impersonating the victim, fabricating a "lost SIM" story, bribing a carrier employee) constitutes a scheme to defraud transmitted by wire. Wire fraud requires: (1) a scheme or artifice to defraud, (2) use of wire communication, (3) intent to defraud. The "wire" is the call or online chat to the carrier's customer service.

Penalty: Up to 20 years imprisonment (up to 30 if affecting a financial institution).

18 U.S.C. § 1029 — Access Device Fraud

§ 1029(a)(7): Knowingly and with intent to defraud effecting transactions with one or more unauthorized access devices. A SIM card is an "access device" under § 1029(e)(1): "any card, plate, code, account number, electronic serial number, mobile identification number, personal identification number, or other telecommunications service, equipment, or instrument identifier, or other means of account access that can be used to obtain money, goods, services, or any other thing of value."

The fraudulently obtained SIM — now carrying the victim's number — is an "unauthorized access device" used to obtain two-factor authentication codes (things of value enabling account access). Penalty: § 1029(c)(1)(A)(i): up to 10 years (first offense), 20 years (subsequent).

FCC Order FCC-23-111 (2023) — SIM Swap Protections

In November 2023, the FCC adopted rules under the Communications Act requiring carriers to:

  • Immediately notify customers of any SIM change or port-out request
  • Give customers at least 24 hours to respond before completing a SIM change initiated through customer service channels
  • Implement strict identity verification before processing SIM swaps

These are regulatory requirements, not criminal statutes. Carriers that fail to implement them face FCC enforcement but are not criminally liable for negligently facilitating a SIM swap (though civil negligence claims exist). A carrier employee who knowingly facilitates a SIM swap faces § 1029 liability as a principal or aider and abettor.

The Case That Defined the Charge — United States v. O'Connor and the Twitter Bitcoin Heist

United States v. O'Connor (E.D.N.Y., ongoing) is illustrative: Prosecutors charged a SIM swap ring under §§ 1029, 1343, and 1030 after the ring stole over $50 million in cryptocurrency by taking over victim accounts. The indictment identified 19 victims, alleged coordination with inside carrier employees (carrier employee bribery charged separately under 18 U.S.C. § 666, theft/bribery from programs receiving federal funds, or under 18 U.S.C. § 1952, the Travel Act), and sought forfeiture of all stolen assets under 18 U.S.C. § 982.

Carrier employee liability: A carrier employee who accepts bribes to execute SIM swaps faces § 1029 (access device fraud as aider/abettor), § 666 if the carrier receives federal funding, and potentially § 1343 (wire fraud) for the underlying scheme. Several carrier employees have received 5-10 year sentences in SIM swap prosecutions.

State telecom fraud statutes: California Penal Code § 530.5 (identity theft), § 538d (impersonating another person by telephone) have been used in state SIM swap prosecutions. New York Penal Law § 190.80 (aggravated identity theft) applies when SIM swap leads to financial account access. Florida Statute § 817.568 covers fraudulent use of personal identification.


Section 7 — The Long Arm: Extradition Exposure for Pen Testers

You Don't Have to Be There to Be Prosecuted There

Gary McKinnon spent 10 years fighting extradition to the United States from his home in England. He had accessed 97 U.S. military computers while looking for evidence of UFO cover-ups. He left notes for sysadmins. He called himself "Solo." The U.S. government wanted 70 years. He was ultimately blocked in 2012 on human rights grounds — Asperger's syndrome and credible suicide risk. That is an extraordinarily rare outcome. For almost everyone else, the treaty works exactly as designed.

The Double Criminality Requirement

All major extradition treaties require double criminality: the alleged conduct must be criminal in both the requesting and the requested country. For cybercrime this is almost always satisfied — the conduct underlying a CFAA, UK CMA, or German § 202a charge is a crime in every jurisdiction covered in this module.

The key question is nexus, not citizenship. You do not need to be physically present in a country to be extradited there. You need only have caused harm there. Accessing a UK company's servers from the U.S. creates UK jurisdiction under CMA § 1 if any element of the offence occurred in the UK (including the servers being located there).

US-UK Extradition Treaty (2003)

The US-UK Extradition Treaty, ratified 2007 after controversy, covers extraditable offences defined as any offence punishable by more than one year imprisonment in both countries. CMA §§ 1–3A offences qualify. The treaty eliminated the "prima facie evidence" requirement that the UK previously imposed — the U.S. no longer needs to present evidence sufficient to justify a trial, only a warrant and charging documents.

The treaty's operation is illustrated by McKinnon's case, described above: a decade of litigation ending only with the Home Secretary's 2012 human-rights bar, an outcome requiring extraordinary circumstances.

US-Germany Extradition Treaty (1993)

The US-Germany treaty covers offences punishable by more than one year. Germany § 202a/202c offences qualify. Germany also frequently uses European Arrest Warrants for intra-EU extradition.

US-Canada Extradition Treaty (1971, as updated)

Broad treaty covering conduct punishable by two or more years. Canada's § 342.1 (10 year maximum) qualifies easily.

US-Australia Extradition Treaty (1990)

Covers offences punishable by more than one year. Australian §§ 477–478 offences qualify.

What Creates Extradition Risk

You create extradition risk when:

  1. You access systems physically located in the foreign country without authorization
  2. You access systems belonging to entities established in that country, affecting their operations there
  3. You cause financial loss felt primarily in that country
  4. Victims or compromised accounts are residents of that country

You do NOT create extradition risk merely by: using tools that are legal where you are, testing systems of global companies where access was to U.S.-located infrastructure, or accessing systems where the only connection to the foreign country is corporate headquarters.

No researcher operating with contractual authorization has been extradited to an EU member state over their testing. The risk is real, but prosecution has historically targeted clearly malicious actors, not authorized researchers.


Section 8 — Multi-Jurisdiction Engagement Checklist

Before testing any company with offices or operations in UK, EU, Canada, or Australia, require the following:

  • [ ] Obtain written authorization from an officer with actual authority (C-suite or IT security leadership with delegated authority) — verbal authorization is insufficient in all jurisdictions
  • [ ] Confirm jurisdiction of target systems — identify which countries' laws apply based on server location and data residency, not just company HQ
  • [ ] Engage local counsel in each jurisdiction where systems are located if the engagement involves those systems directly
  • [ ] GDPR Data Processing Agreement — if testing any system holding EU personal data, execute a DPA under GDPR Art. 28 before testing begins
  • [ ] NIS2 classification — confirm whether the target is an "essential" or "important" entity under NIS2; if so, ensure testing protocol meets NIS2 Art. 21 standards
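
The checklist above can be operationalized as a simple scoping aid that maps declared server locations to the review items this module discusses. The country codes, item strings, and function name below are illustrative shorthand invented for this sketch, not legal categories:

```python
# Summary of this module's per-jurisdiction review items, keyed by
# (hypothetical) location codes for where target systems physically sit.
REVIEW_ITEMS = {
    "UK": ["CMA ss 1-3A review", "tool authorization schedule in contract"],
    "DE": ["ss 202a-202c StGB review", "tool list and stated purpose in contract"],
    "EU": ["GDPR Arts. 5/28/32 (DPA before testing)", "NIS2 entity classification"],
    "CA": ["Criminal Code ss 342.1 / 430(1.1) review", "explicit colour-of-right language"],
    "AU": ["Criminal Code ss 477-478 review", "lawful-excuse authorization language"],
}

def engagement_review(server_locations: set[str]) -> list[str]:
    """List the legal-review items triggered by the declared server footprint."""
    items = []
    for loc in sorted(server_locations):
        if loc in REVIEW_ITEMS:
            items.extend(f"[{loc}] {item}" for item in REVIEW_ITEMS[loc])
        else:
            # Unknown footprint is itself a finding: scope is not yet mapped.
            items.append(f"[{loc}] no mapping in this module - engage local counsel")
    return items
```

The point of the sketch is the workflow, not the strings: legal review is driven by where the systems and data are, so the scoping tool should take server locations as input, never just the client's headquarters country.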

Contract Language Requirements

  • [ ] UK engagements: Explicitly list all tools to be used and state their purpose (CMA § 3A mitigation); include a "tool authorization" schedule
  • [ ] Germany engagements: Include language confirming tools are deployed for authorized security testing purposes, not for commission of §§ 202a–202c offences
  • [ ] Australia engagements: Include explicit "lawful excuse" authorization language tracking § 478.1's requirement
  • [ ] Canada engagements: Include language establishing "colour of right" — client affirmatively states tester has legal right to access named systems
  • [ ] All international engagements: Include a "data handling" annex specifying what data may be captured, retention limits, and destruction timeline (GDPR compliance)

What Never to Do

  • UK: Do not supply security tools to third parties without explicit written authorization establishing the lawful purpose. Do not carry exploitation payloads to UK client sites without a tool authorization schedule in your contract.
  • Germany: Do not distribute tools in Germany to anyone other than authorized team members on the engagement. Do not create bespoke exploitation tools during a German engagement without legal review.
  • EU generally: Do not exfiltrate real personal data as PoC. Use synthetic data or scrubbed data to demonstrate injection vulnerabilities.
  • Australia: Do not assume silence equals authorization. Do not rely on an informal "you have our permission" without it being documented and signed by an authorized officer.
  • Canada: Do not rely on a bug bounty program's in-scope listing as authorization — Canadian courts require actual authorization, not program policy.

Section 9 — Safe/Grey/Red Matrix: Activity Risk Across Jurisdictions

Activity risk by jurisdiction (U.S. | UK | Germany | EU (GDPR) | Canada | Australia):

  • Authorized pen test with signed contract: Safe in all jurisdictions
  • Possessing Kali Linux / Metasploit for authorized testing: U.S. Safe | UK Grey (§ 3A — document tool purpose) | Germany Grey (§ 202c — document intent) | EU N/A | Canada Safe | Australia Safe
  • Distributing exploit tools to unknown third parties: U.S. Grey | UK RED (§ 3A) | Germany RED (§ 202c) | EU N/A | Canada Grey | Australia Grey
  • Testing without written authorization ("verbal OK"): RED in all jurisdictions
  • Capturing real PII as PoC: U.S. Grey | UK, Germany, Canada, Australia RED | EU RED (Art. 5)
  • Retaining captured PII post-engagement: RED in all jurisdictions (EU: Art. 5/17)
  • Accessing systems outside agreed scope: RED in all jurisdictions
  • DoS testing with written authorization: U.S. Grey (CFAA § 1030(a)(5)) | UK Grey (CMA § 3 — strict authorization required) | Germany RED (§ 303b — risk of disruption) | EU, Canada, Australia Grey
  • Password/credential reuse across scope boundary: RED in all jurisdictions
  • Social engineering carrier employees (SIM swap research): RED in all jurisdictions (U.S.: §§ 1029, 1030, 1343)
  • Bug bounty without explicit scope authorization: U.S. Grey | all others RED

Safe = activity is within established legal frameworks given proper authorization. Grey = legal risk exists; careful documentation and jurisdiction-specific counsel recommended. RED = likely criminal exposure; do not proceed without jurisdiction-specific legal advice.
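For teams that triage engagements programmatically, the matrix above can be encoded as a lookup table. The sketch below is a minimal illustration: the jurisdiction keys, the `row` helper, and `needs_counsel` are all invented names, and only a subset of the matrix rows is shown.

```python
# Hypothetical pre-engagement triage sketch: a subset of the Section 9
# matrix as a lookup, flagging jurisdictions that need counsel review.
# Ratings mirror the table; illustrative data, not a legal tool.

JURISDICTIONS = ["US", "UK", "DE", "EU", "CA", "AU"]

def row(default, **overrides):
    """Build one matrix row: a default rating with per-jurisdiction overrides."""
    r = {j: default for j in JURISDICTIONS}
    r.update(overrides)
    return r

MATRIX = {
    "authorized_pen_test":  row("Safe"),
    "possess_metasploit":   row("Safe", UK="Grey", DE="Grey", EU="N/A"),
    "test_on_verbal_ok":    row("RED"),
    "authorized_dos_test":  row("Grey", DE="RED"),
}

def needs_counsel(activity):
    """Return jurisdictions where the activity rates Grey or RED."""
    return [j for j, rating in MATRIX[activity].items() if rating in ("Grey", "RED")]
```

Note how the defaults make the table's pattern explicit: most rows are uniform across jurisdictions, and the UK/Germany tool statutes (§ 3A, § 202c) are the recurring exceptions.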


Section 10 — UK CMA § 3A and Security Tool Possession: Deep Analysis

The "Likely to Be Used" Test

Section 3A(2) criminalizes supplying or offering to supply any article believing it is "likely to be used" to commit, or to assist in the commission of, a § 1 or § 3 offence. This does not require the supplier to intend criminal use — only to believe that criminal use is likely.

This creates asymmetric risk for tool authors and distributors:

  • An open-source maintainer who ships Metasploit modules, knowing some percentage of downloaders will use them without authorization, could theoretically be within § 3A's reach
  • The CPS guidance (2016) indicates prosecutors should weigh the "legitimate" uses of a tool before charging, and charge only where criminal use is the primary purpose — but this guidance is not statute
  • No UK court has convicted a Metasploit contributor or Kali Linux developer under § 3A

What "Likely" Means

UK courts have not definitively interpreted "likely to be used" under § 3A. By analogy to other "likely" standards in UK criminal law, "likely" means more probable than not. A tool with only one plausible use (a payload designed to exploit a specific live vulnerability in a specific named target's production system) meets the standard easily. A tool with broad legitimate uses (Nmap, Wireshark, Burp Suite) almost certainly does not — but Metasploit and Flipper Zero occupy the grey zone.

Flipper Zero Under CMA § 3A

The Flipper Zero (a multi-purpose radio hardware tool capable of RFID cloning, Sub-GHz attacks, infrared emulation, and BadUSB payloads) is commercially sold and has significant legitimate uses (hardware security research, RF analysis, key fob analysis). Carrying one into the UK for an authorized hardware security engagement is not clearly criminal — but:

  • Using it to clone RFID badges without authorization is § 1 + § 3A
  • Demonstrating its use in a UK course where attendees are not clearly authorized pen testers creates § 3A supply risk
  • Physical possession during travel to a UK client engagement: legal, if your engagement authorization specifically covers RFID/RF testing and you document the tool

How UK Security Researchers Navigate § 3A in Practice

  1. "Authorized test only" usage policy — written into every engagement contract alongside a tool schedule
  2. No speculative tool distribution — only supply tools to team members on the specific engagement
  3. No public exploit release without coordinated disclosure and a stated defensive purpose
  4. Insurance — Cyber professional liability policies in the UK specifically cover § 3A defense costs; this is standard in UK consulting

Practitioner Takeaways

  1. The UK CMA § 3A is the most practically dangerous statute for U.S. pen testers traveling to the UK. Carry a tool authorization schedule alongside your statement of work.
  2. Germany § 202c does not require you to use a tool — creation or possession with criminal purpose is enough. The "purpose" of the tool is the key factual question; document lawful purpose for everything in your toolkit.
  3. GDPR applies to your testing even if you're in the US. Any time you touch EU personal data, you're processing it. Execute a DPA. Do not exfiltrate real PII.
  4. Canada's "colour of right" defense requires genuine belief in authorization — and bug bounty policies alone may not create that belief. Demand explicit written authorization from a named officer.
  5. SIM swapping is a three-statute federal crime — CFAA, wire fraud, and access device fraud — each carrying 5–20 year maximums. Carrier employee bribery adds federal bribery and RICO exposure.
  6. Extradition for cybercrime is real and increasingly routine. The US-UK treaty's removal of prima facie evidence requirements makes it operationally easy for the UK to request extradition. One unauthorized access to a UK server, prosecuted in the UK, can reach you in the U.S.
  7. There is no global "good faith researcher" safe harbor. The DOJ's CFAA Charging Policy is U.S.-only and discretionary. In the UK, Germany, Canada, and Australia, your defense is the facts of your authorization, not a legal privilege.

This module is part of the LawZeee cybercrime law curriculum. Cross-reference: Module 1A (CFAA Federal Statutes), Module 1J (Bug Bounty Legal Framework), Module 1U (Safe Harbors and VDP Law).

Test your knowledge

Ready to check what stuck?

10 questions — cases, statutes, and the practical move for each. Takes 5 minutes.

Take the quiz now →