Non-Lawyers' Summary
This post covers the parts of cyber law that are changing fastest: zero-day markets, ransomware sanctions, bug bounty safe harbors, cross-border evidence, and encryption fights. The main takeaway is that organizations need policies that can adapt, because the legal rules are still moving.
The Frontier Has No Map
In 2017, a piece of code called EternalBlue escaped from the NSA and burned down the internet.
The vulnerability had been sitting in a government arsenal, weaponized and ready, judged too valuable to disclose to Microsoft and too dangerous to release into the wild. The calculus made sense — in a classified briefing room. Then Shadow Brokers leaked the NSA's toolbox, and within weeks, EternalBlue was powering WannaCry, which encrypted hospitals in the UK, factories in Europe, and infrastructure across 150 countries. NotPetya followed. The total damage reached tens of billions of dollars.
No one went to prison for it. No statute governed it. The entire catastrophe unfolded in a legal vacuum that still has not been filled.
This module covers the areas where the law has not caught up — where the decisions being made right now, in boardrooms and courtrooms and classified briefings, will define what cybersecurity law looks like for the next generation. Five issues above all others are in active motion: zero-day markets, ransomware payments, security research safe harbors, cross-border evidence, and the encryption wars. Practitioners who understand the direction of travel will be the ones clients call when the crisis hits.
Issue 1: Zero-Day Markets and Vulnerability Governance
What Is a Zero-Day?
A zero-day is a software vulnerability unknown to the vendor — so called because the vendor has had "zero days" to prepare a patch by the time the flaw is first exploited or disclosed. Zero-days are among the most valuable commodities in cybersecurity: governments pay millions for exclusive use of unpatched vulnerabilities against adversary systems; criminal actors use them to breach targets before defenders can protect themselves.
The same weapon, in different hands, is either a national security asset or a catastrophic risk to civilian infrastructure. Nothing in the law draws that line clearly.
The Commercial Market
A functioning commercial market for zero-day vulnerabilities exists, with:
- Government buyers: NSA, CIA, GCHQ, and other intelligence agencies purchase zero-days for offensive operations and intelligence collection
- Private brokers: Companies like Zerodium publicly advertise bounties for specific vulnerability classes (historically up to $2.5 million for a full-chain, zero-click mobile exploit)
- Criminal markets: Darknet markets offer zero-days to the highest bidder
The Vulnerabilities Equities Process (VEP) — Policy in Place of Law
The U.S. government's response to the tension between offensive capability and defensive disclosure is the Vulnerabilities Equities Process — an interagency process that determines whether a discovered vulnerability should be:
- Disclosed to the vendor (enabling a patch, benefiting defenders globally)
- Retained for offensive use (enabling intelligence collection or cyber operations)
It sounds like a reasonable bureaucratic solution. Then EternalBlue happened.
Legal and policy significance:
- The VEP is a policy process, not a statutory requirement — it has no legislative basis and limited external oversight
- Advocacy groups and technologists have called for greater transparency and mandatory reporting of VEP decisions
- The EternalBlue vulnerability (used in WannaCry and NotPetya) was allegedly NSA-developed and retained before it was stolen and weaponized — illustrating the systemic risk when retained vulnerabilities leak
- Congressional calls for reform have pushed for statutory VEP requirements and annual reporting, but no legislation has passed as of 2026
Practitioner significance: Companies advising government clients on cyber operations, or security researchers selling vulnerabilities, operate in a space with sparse legal frameworks. The VEP creates policy expectations but not enforceable rights.
The very tools the NSA built to protect American interests became the weapons used against its allies. And nothing in U.S. law prevented it.
Issue 2: Ransomware Payments, Sanctions Exposure, and Disruption Operations
The Sanctions Problem
In 2021, a ransomware crew encrypted Colonial Pipeline's systems. The company paid approximately $4.4 million in Bitcoin (75 BTC) to recover its files. Within a month, the FBI had traced the payment through the blockchain and seized 63.7 BTC. But buried inside the congratulatory press releases was a detail that went largely unreported: paying that ransom could have triggered OFAC sanctions liability. Not for the hackers. For Colonial Pipeline.
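The tracing step is conceptually tractable because Bitcoin's ledger is public: investigators follow the outputs of the ransom transaction forward until the funds reach a point a court order can touch, such as an exchange deposit. A toy sketch over a made-up transaction graph (every identifier here is hypothetical, and real chain analysis is vastly more involved):

```python
from collections import deque

# Hypothetical ledger: txid -> successor txids that spent its outputs.
LEDGER = {
    "ransom_tx": ["split_a", "split_b"],
    "split_a": ["mixer_hop"],
    "split_b": ["exchange_deposit"],  # a custodial endpoint a seizure warrant can reach
    "mixer_hop": [],
}

def trace(start: str) -> list[str]:
    """Breadth-first walk of spends from the ransom payment, the core of chain tracing."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        tx = queue.popleft()
        order.append(tx)
        for nxt in LEDGER.get(tx, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order

print(trace("ransom_tx"))
# ['ransom_tx', 'split_a', 'split_b', 'mixer_hop', 'exchange_deposit']
```

The legal leverage comes at the endpoints: the walk itself requires no subpoena, but converting an identified deposit into seized funds does.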
Ransomware has turned the U.S. sanctions framework into an operational consideration for every organization that experiences an attack.
- Major ransomware groups (LockBit, REvil, Hive, BlackCat/ALPHV, Conti) are operated by individuals and entities that may be OFAC-designated Specially Designated Nationals (SDNs)
- Paying a ransom to a designated person or entity is a civil violation of OFAC sanctions — regardless of whether the payer knew the group was designated
- Strict liability applies: OFAC civil penalties do not require intent — the payment itself is the violation
OFAC's advisory position: OFAC has repeatedly warned that ransomware payments present sanctions risk and that it will consider factors including self-disclosure and cooperation in determining penalties. But the warning also creates a chilling effect — victims may delay payment decisions, potentially increasing harm.
Current program architecture: OFAC's cyber-related sanctions materials now package the ransomware-payment analysis into a more formal operating stack: the updated ransomware advisory, virtual-currency compliance guidance, cyber FAQs, OFAC license procedures, and the cyber sanctions regulations in 31 CFR Part 578. The cyber program page also reflects the amended executive-order framework (E.O. 13694, 13757, 14144, and 14306). Practically, this means ransomware-payment analysis is not an isolated one-off memo; it is a recurring sanctions-compliance workflow that must account for SDN screening, possible licensing, and the role of intermediaries such as exchanges, OTC desks, and incident-response vendors.
FinCEN's position: Financial institutions involved in ransomware-related transactions face BSA/AML obligations. SAR filings are required when a financial institution has reason to suspect a transaction involves ransomware proceeds. This implicates banks processing wire payments, cryptocurrency exchanges facilitating ransom payments, and cyber insurance companies that fund ransomware payments.
Current numbers matter: FinCEN's December 2025 ransomware trend analysis reported more than $2.1 billion in ransomware payments reflected in BSA data from 2022 through 2024, with 2023 alone reaching 1,512 incidents and $1.1 billion in reported payments. That data matters because it helps explain why Treasury continues to treat ransomware payments as a national-security and illicit-finance issue, not just a private victim-response problem.
The Pay/No-Pay Legal Uncertainty
No U.S. federal statute currently prohibits ransomware payments in most civilian contexts. But payments may be:
- Illegal if made to a designated SDN (OFAC violation)
- Subject to SAR reporting obligations (FinCEN)
- Potentially problematic under computer fraud theories if the payment facilitates ongoing criminal enterprise
Reform proposals in circulation:
- Payment ban: Prohibiting ransomware payments entirely (particularly for public entities) to eliminate the economic incentive for attackers
- Mandatory reporting: Requiring all ransomware payments to be reported to federal authorities within 24-72 hours (CIRCIA will require this for critical infrastructure)
- Safe harbor: Providing immunity from OFAC civil penalties for victims who promptly report and cooperate with law enforcement before paying
Disruption operations: The government has increasingly used infrastructure seizure and multinational law enforcement operations to disrupt ransomware ecosystems — the LockBit takedown in 2024, the Hive takedown in 2023. These operations, which seize domains, decryption keys, and payment portals, provide alternatives to payment and change the economics of ransomware. Practitioners advising clients during active ransomware incidents should contact law enforcement early: seized decryption keys may already exist.
Recent sanctions example: Treasury's August 2025 redesignation of Garantex shows how OFAC increasingly targets the financial plumbing around ransomware and other malicious cyber activity, not only the operators who launched the intrusion. Treasury said the exchange had facilitated ransomware actors and other cybercriminals and processed more than $100 million in illicit-linked transactions since 2019. For counsel, that broadens the due-diligence question from "who encrypted the network?" to "which exchanges, brokers, wallets, or payment channels could make the transaction itself sanctionable?"
Issue 3: Bug Bounties, VDPs, and the Good-Faith Research Framework
The Security Researcher Who Became a Criminal
In 2011, federal prosecutors charged Aaron Swartz — a programmer and activist who had helped build RSS and Creative Commons — under the Computer Fraud and Abuse Act for mass-downloading academic articles from JSTOR through MIT's network. By the time a superseding indictment expanded the case to 13 felony counts in 2012, he faced a theoretical maximum of up to 50 years in prison. He was 26 years old.
He never went to trial. He died before the case concluded.
His prosecution became a symbol of the CFAA's most dangerous feature: its potential to criminalize conduct that reasonable people would call legitimate — or at least benign. Security researchers who probe systems for vulnerabilities, who stress-test defenses, who find the holes before the criminals do — they live in the same legal gray zone Aaron Swartz inhabited.
The Legal Problem for Security Researchers
Security researchers test systems for vulnerabilities — sometimes without explicit permission. The CFAA, as written, could theoretically apply to unauthorized testing even where the researcher's intent is to help. For years, the "chilling effect" of CFAA on legitimate security research was a recognized policy problem.
CISA Binding Operational Directive 20-01 — The Government Blinks First
In 2020, CISA issued BOD 20-01, requiring all federal civilian executive branch agencies to develop and publish a vulnerability disclosure policy (VDP). The directive:
- Mandates a VDP covering all internet-accessible systems
- Requires agencies to publish security contact information, with CISA's implementation guidance pointing to the security.txt convention for making VDP information discoverable
- Establishes a process for receiving and triaging vulnerability reports from researchers
- Commits agencies to not pursuing legal action against researchers acting in good faith within VDP scope
This was a watershed moment — the U.S. government formally embraced the principle that organized vulnerability disclosure is a security benefit, not a threat.
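For context, the machine-readable piece of a VDP is conventionally a security.txt file (RFC 9116), served at the path /.well-known/security.txt. A minimal illustrative example, with all values hypothetical:

```text
Contact: mailto:security@agency.example.gov
Expires: 2027-01-01T00:00:00Z
Policy: https://agency.example.gov/vulnerability-disclosure-policy
Preferred-Languages: en
Acknowledgments: https://agency.example.gov/security-acknowledgments
```

Contact and Expires are the only fields RFC 9116 requires; the Policy field is what points a researcher to the VDP's scope and authorization terms.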
DOJ's Good-Faith Security Research Policy — A Shield With Holes
In 2022, DOJ revised its CFAA charging policy to state that "good-faith security research" should not be charged as a CFAA violation. The policy defines good-faith research as:
- Accessing a computer solely for purposes of good-faith testing, investigation, and/or correction of a security flaw or vulnerability
- Where such activity is carried out in a manner designed to avoid any harm to individuals or the public
- Where the information derived is used primarily to promote the security or safety of the class of devices, machines, or online services to which the accessed computer belongs
But the policy has a critical limit: DOJ policy is not statute. It binds prosecutors, not courts. A private plaintiff suing under the CFAA's civil provision is not bound by DOJ's charging policy. Nor does the policy create a clear standard for researchers who access systems where no VDP or bug bounty program is in place.
The security researcher who found the bug could still be the one facing a lawsuit.
Bug Bounty Programs and Safe Harbors
Commercial bug bounty programs (HackerOne, Bugcrowd, Intigriti) allow organizations to create explicit authorized-access relationships with security researchers. Participation in a scoped bug bounty program provides:
- Clear authorization (resolves the CFAA "without authorization" element)
- Defined scope (limits liability to in-scope activity)
- A contractual framework (including non-disclosure, disclosure timelines, payment terms)
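The "defined scope" point is also the researcher's first operational check: confirm a target is in scope before touching it. A minimal sketch, with the pattern lists purely hypothetical (real program briefs often add protocol, port, and testing-method restrictions):

```python
from fnmatch import fnmatch

def in_scope(host: str, in_patterns: list[str], out_patterns: list[str]) -> bool:
    """Return True only if host matches an in-scope pattern and no out-of-scope one.

    Out-of-scope exclusions win, mirroring how most bug bounty briefs are read.
    """
    host = host.lower().rstrip(".")
    if any(fnmatch(host, p) for p in out_patterns):
        return False
    return any(fnmatch(host, p) for p in in_patterns)

# Hypothetical brief: all of example.com is fair game except production payments.
IN_SCOPE = ["example.com", "*.example.com"]
OUT_OF_SCOPE = ["payments.example.com"]

print(in_scope("api.example.com", IN_SCOPE, OUT_OF_SCOPE))       # True
print(in_scope("payments.example.com", IN_SCOPE, OUT_OF_SCOPE))  # False
```

Note the bare "example.com" entry: the wildcard "*.example.com" matches only subdomains, so the apex domain must be listed explicitly.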
Reform proposals: Legislation to create a statutory safe harbor for good-faith security research under defined conditions has been proposed in Congress multiple times. No comprehensive statute has passed as of 2026 — the CFAA reform debate continues.
Issue 4: Extradition, E-Evidence, and Cross-Border Latency
The MLAT Latency Crisis
The server logs existed. They had the attacker's IP addresses, timestamps, connection records. Everything needed to identify who did what and when. They were stored on a server in Romania.
By the time the MLAT request was processed and the evidence transmitted, the logs were gone.
MLAT requests — the formal mechanism for obtaining evidence from foreign countries — routinely take from several months to over a year to fulfill. In cybercrime investigations, where evidence is volatile — logs auto-delete, servers get wiped, suspects destroy devices — this latency can be case-dispositive.
The CLOUD Act Framework — A Faster Lane
The U.S. CLOUD Act (2018) addressed one dimension of the latency problem: it resolved the conflict between U.S. legal process (demanding data from U.S. providers regardless of storage location) and foreign data protection laws (prohibiting disclosure). The Act:
- Allows U.S. providers to challenge orders producing foreign-stored data when they believe it violates a foreign country's laws
- Authorizes the U.S. to enter "executive agreements" with foreign governments allowing direct law enforcement access to data held by each other's providers — bypassing MLAT for covered requests
CLOUD Act executive agreements have been signed with the UK and Australia, and negotiations with additional countries are ongoing. These agreements dramatically reduce latency for covered request types.
Budapest Convention Second Additional Protocol
Signed by the U.S. in 2022, the Second Additional Protocol creates:
- Voluntary direct cooperation with service providers in other Party countries for subscriber information and traffic data
- Emergency disclosure provisions for imminent threats to life
- Video conferencing tools for witness testimony
- Enhanced joint investigation team frameworks
The civil liberties tension: Both the CLOUD Act and the Second Additional Protocol are criticized by privacy advocates for enabling cross-border surveillance with inadequate judicial oversight and due process protections. The tension between investigative effectiveness and rule-of-law safeguards remains unresolved.
Issue 5: Encryption and the Lawful Access Debate
The Lock No Government Can Open
Strong encryption protects billions of users from surveillance, theft, and authoritarian governments. It also makes certain categories of evidence inaccessible to law enforcement even with a valid court order — the "going dark" problem.
The FBI has argued for years that "warrant-proof" encryption impedes lawful evidence collection. The security community has answered with equal consistency: cryptographic backdoors are technically incompatible with strong security. A backdoor usable by law enforcement is also a vulnerability exploitable by adversaries. There is no technical way to create a backdoor that only "good actors" can use.
Both sides are right. And that is precisely what makes this the most dangerous unresolved debate in cybersecurity law.
The Apple-UK Confrontation (2025-2026) — The Debate Goes Live
Without warning, the theoretical became concrete. The UK government issued a Technical Capability Notice (TCN) under the Investigatory Powers Act 2016, reportedly ordering Apple to create a backdoor into iCloud's end-to-end encrypted Advanced Data Protection feature for UK law enforcement access.
Apple's response: Apple reportedly challenged the order, which was itself classified under the IPA's secrecy provisions — raising the question of whether such a demand can be contested transparently, and what a backdoor built for one government's access would mean for users globally. Apple also reportedly withdrew the Advanced Data Protection feature for new UK users rather than build the requested capability.
Significance: The Apple-UK dispute illustrates:
- Governments are actively using existing legal authorities to compel backdoor-equivalent access from technology companies
- The secrecy provisions around these demands prevent public and judicial scrutiny
- A backdoor created for one government's access would potentially compromise the security of users globally — raising human rights, cybersecurity, and international trade issues
- The "lawful access" debate is no longer theoretical — it has moved to live legal orders against major technology companies
No one knew it yet, but the Apple-UK confrontation would mark the moment the encryption war stopped being a policy debate and became an operational conflict between governments and the companies that hold the world's data.
Reform landscape: No legislative resolution to the encryption policy debate has emerged in the U.S. as of 2026. The FBI continues to advocate for "lawful access"; Congress remains divided; courts have not definitively resolved the constitutional dimensions.
Comparative Reform Proposals Table
| Issue | Current State | Leading Reform Proposals |
|---|---|---|
| Zero-day governance | VEP is policy-only; no statutory basis; limited oversight | Statutory VEP requirement; mandatory annual disclosure reporting; independent oversight board |
| Ransomware payments | No blanket prohibition; OFAC civil liability risk; FinCEN reporting for institutions | Payment ban for public entities; mandatory reporting for all sectors; OFAC safe harbor for cooperating victims |
| Good-faith security research | DOJ policy (non-binding on civil plaintiffs); no statutory safe harbor | CFAA amendment creating safe harbor for defined good-faith research; mandatory VDP for all critical infrastructure |
| Cross-border e-evidence | CLOUD Act + Budapest SAP + MLAT (all operational); latency reduced but not eliminated | Expansion of CLOUD Act executive agreements; enhanced Budapest Protocol capacity; EU e-evidence regulation |
| Encryption/lawful access | No U.S. legislation; live disputes in UK; FBI advocacy ongoing | Range from voluntary industry cooperation to statutory "technical capability" mandates to constitutional challenge to all government-mandated access |
Practitioner Takeaways
1. OFAC screening is a pre-payment obligation, not an afterthought. When advising on ransomware, the OFAC question must be answered before any payment. The checklist: (a) identify the ransomware group; (b) cross-reference against OFAC SDN list and relevant OFAC advisories; (c) if designated or high risk, engage OFAC counsel before payment; (d) document the screening and analysis regardless of outcome.
2. Coordinate with law enforcement before paying ransomware — decryption keys may already exist. Following major ransomware infrastructure seizures, law enforcement has obtained and distributed free decryption keys to victims. Before negotiating or paying, engage FBI (IC3 report or direct contact) to determine whether decryption assistance is available.
2A. Screen the payment rails, not only the named actor. If the proposed path runs through a virtual-asset exchange, OTC broker, or other intermediary associated with ransomware or sanctions-evasion activity, the OFAC issue may arise even when attribution to the intrusion crew is incomplete. Treasury's 2025 Garantex action is the current example to keep in mind.
3. For security researchers: get authorization in writing before testing. DOJ policy provides some comfort, but it does not protect against civil CFAA claims. Before conducting security testing, obtain: (a) explicit written authorization from the target organization, OR (b) documented enrollment in a publicly accessible bug bounty or VDP program. "Good faith" is a defense only if you were actually in good faith — and courts, not DOJ, decide that question in civil suits.
4. The encryption debate will affect client infrastructure decisions. Advise clients in regulated industries that the government's push for "lawful access" creates a real risk that end-to-end encryption implementations may face legal challenges or technical capability demands in some jurisdictions. Infrastructure decisions made today about encryption may need to account for legal requirements that do not yet exist but are clearly being pursued.
5. Track CIRCIA rulemaking. CIRCIA's implementing rules (being finalized by CISA) will create mandatory incident reporting obligations for critical infrastructure sectors. The reporting timelines, covered entities, and safe harbors are still being defined. Counsel advising critical infrastructure clients should monitor the rulemaking closely — these obligations will be as significant as GDPR was for data protection.
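Steps (a) and (b) of the OFAC checklist in takeaway 1 can be mechanized as a first-pass screen. A minimal sketch, using a placeholder name list rather than the real SDN data, which counsel would pull from OFAC's published list files:

```python
import unicodedata

# Placeholder snapshot of designated names and aliases -- illustrative only.
# A real screen loads OFAC's published SDN list and checks aliases at scale.
SDN_SNAPSHOT = {
    "EXAMPLE RANSOM GROUP": ["ERG", "EXAMPLE CRYPTO COLLECTIVE"],
}

def normalize(name: str) -> str:
    # Strip accents, punctuation, and case so "Éxample-Ransom  Group" still matches.
    name = unicodedata.normalize("NFKD", name)
    name = "".join(c for c in name if not unicodedata.combining(c))
    return " ".join("".join(c if c.isalnum() else " " for c in name).upper().split())

def screen(actor: str) -> dict:
    """First-pass SDN screen; the returned record documents step (d) of the checklist."""
    target = normalize(actor)
    for primary, aliases in SDN_SNAPSHOT.items():
        if any(normalize(n) == target for n in [primary] + aliases):
            return {"actor": actor, "hit": True, "matched": primary}
    return {"actor": actor, "hit": False, "matched": None}

print(screen("Éxample-Ransom Group"))  # flags a match
print(screen("Unknown Crew"))          # no match
```

A hit here is the start of the analysis, not the end: exact-match screening misses rebrands and near-miss spellings, which is why step (c) routes designated or high-risk matches to OFAC counsel.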
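CIRCIA's statutory clocks (takeaway 5) are concrete even while the rule is pending: 72 hours for a covered cyber incident, 24 hours for a ransom payment. A minimal deadline helper, treating those durations as the statutory floor (the final CISA rule controls covered entities and exact triggers):

```python
from datetime import datetime, timedelta, timezone

# Statutory reporting windows under CIRCIA (6 U.S.C. 681b); the final CISA rule
# will define covered entities and may refine the trigger events.
WINDOWS = {
    "covered_incident": timedelta(hours=72),  # from reasonable belief an incident occurred
    "ransom_payment": timedelta(hours=24),    # from the payment being made
}

def report_deadline(event_type: str, trigger_time: datetime) -> datetime:
    """Return the latest time a CIRCIA report for this event may be filed."""
    return trigger_time + WINDOWS[event_type]

trigger = datetime(2026, 3, 1, 9, 0, tzinfo=timezone.utc)
print(report_deadline("covered_incident", trigger).isoformat())
# 2026-03-04T09:00:00+00:00
```

The practical point for counsel: the ransom-payment clock is the tighter one, and it can start running in the middle of an active incident response.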
Quiz
See: artifacts/quizzes/quiz-01g.md
Test your knowledge
Ready to check what stuck?
10 questions — cases, statutes, and the practical move for each. Takes 5 minutes.