Non-Lawyers Summary
Three legal fault lines are reshaping what AI security researchers, software vendors, and corporate IR teams can and cannot do without serious legal exposure:
AI/LLM security research: No court has decided whether probing an AI API with adversarial inputs constitutes a crime under the Computer Fraud and Abuse Act. The first formal safe harbor specifically for AI research launched in January 2026, but coverage is narrow. Researchers who extract training data, attempt jailbreaks, or run model inversion attacks are operating in a legal gray zone with real exposure — particularly if they retain extracted data or operate without a valid program authorization.
Software supply chain attacks: U.S. law currently gives downstream victims almost no viable theory to sue the compromised software vendor. If SolarWinds had been your vendor and you were breached through their trojanized update, your lawsuit against SolarWinds for failing to secure their build pipeline would almost certainly fail under existing U.S. doctrine. This is a known gap that regulators and Congress are actively working to close.
Cyber insurance: War exclusions designed for armed conflict are being weaponized by insurers to deny ransomware claims when attacks are state-sponsored. Courts have pushed back — Merck won its $1.4B NotPetya claim in 2023. But the insurance market has responded by tightening policy language, adding sublimits, and requiring pre-authorization for ransom payments. IR professionals must understand the policy before the incident.
What This Module Answers Fast
- Can I legally run prompt injection attacks against an AI system if I have an API key? → See Section 1, CFAA Technical Gate Analysis
- Is jailbreaking an LLM a federal crime? → See Section 1, Van Buren Application to AI
- What safe harbors exist specifically for AI security research? → See Section 1, HackerOne AI Safe Harbor
- Can I sue the software vendor that got compromised and allowed attackers into my network? → See Section 2, Supply Chain Victim Liability Gap
- Does my cyber insurance cover a state-sponsored ransomware attack? → See Section 3, War Exclusion Doctrine and Merck
- What happens if I pay ransomware to a sanctioned group — does insurance cover it? → See Section 3, OFAC Liability and Ransom Coverage
Overview
Cyber law has a structural lag problem: courts build doctrine on cases that take 3–8 years to resolve; technology moves on an 18-month cycle. Three areas currently show the largest gap between legal doctrine and technical reality.
AI/LLM Security Research is the newest fault line. CFAA was written in 1986 to address unauthorized access to mainframes. Van Buren (2021) refocused it on technical "gates" rather than terms-of-service violations. Whether an AI model's safety filters, rate limits, or alignment training constitute "technical gates" for CFAA purposes has not been decided. No controlling precedent exists. Researchers are making risk bets on legal theories that courts have not yet evaluated.
Software Supply Chain Liability sits at the intersection of contract law (vendor agreements with no upstream liability), tort law (economic loss rule bars negligence claims for pure economic loss), and product liability (software is generally not a "product" under strict liability doctrine). The result: sophisticated multi-year supply chain attacks that breach thousands of organizations produce no civil liability for the compromised software vendor under current U.S. law.
Cyber Insurance Coverage Disputes escalated after insurers attempted to deny NotPetya claims totaling billions of dollars under "war exclusions" that had historically applied only to kinetic warfare. Courts sided with policyholders. The insurance market responded not with clarity but with re-engineered policy language that trades one ambiguity for another.
Start Here If Your Issue Is...
| Issue | Start At |
|---|---|
| I ran adversarial testing against an AI API — do I have CFAA exposure? | Section 1: CFAA Technical Gate Analysis for AI Systems |
| I want to know if my bug bounty safe harbor covers AI adversarial testing | Section 1: HackerOne AI Research Safe Harbor (2026) |
| I extracted what appears to be training data — is that theft? | Section 1: Economic Espionage Act and Model Inversion |
| My company was breached through a compromised third-party software vendor | Section 2: Software Supply Chain Victim Liability Gap |
| I need to understand the regulatory landscape for software vendors post-SolarWinds | Section 2: SEC, CISA, and Regulatory Pressure |
| My insurer is trying to deny a ransomware claim using the "war exclusion" | Section 3: War Exclusion Doctrine and Merck v. ACE |
| I need to pay ransomware and want to know if insurance covers it | Section 3: Ransom Payment Coverage and OFAC Risk |
| I'm reviewing our cyber policy before our next renewal | Section 3: Pre-Incident Insurance Policy Review Checklist |
Issue Map
```mermaid
flowchart TD
    A[AI Security Research Action] --> B{Valid API key or program authorization?}
    B -->|No| C[CFAA 'without authorization' — high risk]
    B -->|Yes| D{Technical gate bypassed?}
    D -->|Yes — safety filter / auth mechanism bypassed| E[CFAA 'exceeds authorization' — medium-high risk post-Van Buren]
    D -->|No — valid requests, adversarial inputs only| F{Data retained beyond disclosure?}
    F -->|Yes — training data extracted and kept| G[EEA exposure possible + ToS civil risk]
    F -->|No — retained only for disclosure| H{Safe harbor covers this activity?}
    H -->|Yes| I[Lowest legal risk — document everything]
    H -->|No or unclear| J[Material risk — consult counsel before proceeding]
    K[Supply Chain Attack] --> L{Who is the victim?}
    L -->|Compromised vendor itself| M[CFAA covers initial attacker intrusion]
    L -->|Downstream customer of vendor| N{Theory of liability against vendor?}
    N -->|Negligence| O[Economic loss rule likely bars claim in U.S.]
    N -->|Contract| P[Vendor agreement — likely limited liability clause]
    N -->|Product liability| Q[Software not a product in most U.S. jurisdictions]
    N -->|EU Cyber Resilience Act| R[Mandatory requirements + civil penalties if EU market product]
    S[Ransomware Incident] --> T{Insurer invoking war exclusion?}
    T -->|Yes| U[Analyze: sovereign directive + armed conflict context per Merck]
    T -->|No| V{Paying ransom?}
    V -->|Yes| W{Sanctioned group involved?}
    W -->|Yes or unknown| X[OFAC liability — payment may void coverage + federal penalty]
    W -->|No| Y{Insurer consent obtained?}
    Y -->|No| Z[Policy may deny claim for unauthorized payment]
    Y -->|Yes| AA[Coverage likely — check sublimit]
```

Timeline Overview
```mermaid
timeline
    title Key Events in Emerging Cyber Law (2020–2026)
    2020 : SolarWinds Orion compromise (disclosed December) — 18,000+ customers affected by trojanized updates
    2021 : Van Buren v. United States — Supreme Court narrows CFAA to technical gates, not ToS violations
         : Executive Order 14028 — Federal software security requirements + SBOM mandate
         : Colonial Pipeline ransomware — OFAC sanctions guidance on ransom payments issued
    2022 : DOJ Good-Faith Security Research Policy — federal charging guidance for researchers
         : Lloyd's of London Market Bulletin Y5381 — mandated cyber war exclusion language
         : Cyber insurance premiums peak after a cumulative two-year increase of roughly 130%
    2023 : Merck v. ACE American Insurance — NJ court rejects war exclusion for NotPetya; ACE must pay $1.4B
         : Mondelez v. Zurich American — NotPetya war exclusion dispute settled confidentially
         : SEC v. SolarWinds filed — CISO personally charged under a disclosure theory
         : 3CX supply chain attack — Lazarus Group; no vendor liability prosecutions
         : CISA Secure by Design — voluntary framework for software vendor security posture
    2024 : XZ Utils backdoor — multi-year open source supply chain attack narrowly detected
         : EU Cyber Resilience Act passed — mandatory connected product security requirements
    2025 : SEC SBOM rulemaking progress — disclosure duties for public companies
    2026 : HackerOne AI Security Research Safe Harbor — first formal AI adversarial testing safe harbor
         : No controlling CFAA precedent yet on AI/LLM probing — legal questions fully open
```

Key Facts
- No court has decided whether adversarial AI research (prompt injection, model inversion, jailbreaking) constitutes a CFAA violation. All analysis in this area is predictive, not based on decided cases.
- Van Buren (2021) held that "exceeds authorized access" under CFAA requires crossing a technical gate to an area one is not permitted to access — not merely misusing access that was granted.
- ToS violations alone do not trigger CFAA criminal liability under Van Buren. Civil CFAA exposure remains possible through private suits under 18 U.S.C. § 1030(g).
- Merck won $1.4 billion against its insurer in 2023 after the court rejected the war exclusion for a state-sponsored ransomware attack not conducted in the context of traditional armed conflict.
- No U.S. statute creates a private cause of action allowing downstream software supply chain victims to sue compromised software vendors for failing to secure their build pipeline.
- OFAC sanctions on ransomware groups (REvil, DarkSide, Evil Corp) mean ransom payment may independently constitute a federal violation — and insurers may refuse to cover payment to sanctioned entities.
- EU Cyber Resilience Act (2024) is the most significant international development in supply chain liability — it creates civil penalties for software/hardware manufacturers and applies to any product sold in EU markets.
- HackerOne's January 2026 AI Safe Harbor is the first formal authorization framework explicitly designed for adversarial ML research — but coverage depends entirely on program enrollment and scope definitions.
Section 1: The Gray Zone — AI/LLM Security Research and CFAA
The Calm Before the Probe
In the summer of 2023, security researchers across three continents were running adversarial inputs against large language models — testing jailbreaks, probing for training data leakage, mapping the edges of what these systems could be made to do.
They were doing it with valid API keys. They were doing it through documented interfaces. They were doing it, most of them, in good faith.
And not one of them could say with certainty whether what they were doing was a federal crime.
That remains true today.
The Core Problem: CFAA Was Not Written for Neural Networks
The Computer Fraud and Abuse Act, 18 U.S.C. § 1030, criminalizes "intentionally access[ing] a computer without authorization or exceed[ing] authorized access." The statute was enacted in 1986 when "a computer" meant a discrete machine with a login prompt. The Supreme Court's Van Buren decision in 2021 clarified "exceeds authorized access" through a technical gate framework: you exceed authorized access when you enter a prohibited area of a system — not when you access permitted areas for improper purposes.
AI models are not discrete machines with login prompts. They are probabilistic systems accessed via API, with "gates" that are fuzzy, statistical, and often themselves the targets of security research. Mapping Van Buren's gate analysis onto LLM interaction requires legal creativity that courts have not yet exercised.
The Van Buren Technical Gate Analysis Applied to AI Systems
Van Buren established a binary: either you had authorization to access the area you accessed, or you did not. Applied to AI systems, two scenarios emerge with meaningfully different risk profiles:
Scenario A — Safety Filter as Technical Gate
An LLM's alignment training and safety filters reject certain request categories. If a researcher bypasses these filters through a jailbreak, has a "technical gate" been crossed?
In favor of CFAA application: The model's refusal mechanism is analogous to a login gate — a system-enforced barrier. Bypassing it could be characterized as accessing a prohibited area the operator did not authorize.
Against CFAA application: A safety filter is not the same as an access control mechanism. Van Buren's gate metaphor requires that the researcher access a different area of the system — not coerce a system into producing different output from the same area. The API endpoint is the same; the computation happens on the same infrastructure.
The better analysis under current doctrine: if the model is a publicly accessible API, a jailbreak is not exceeding authorized access in the Van Buren sense — it is manipulating an authorized session into producing unintended output. This analysis, however, has not been tested in court.
Scenario B — Model Inversion and Training Data Extraction
A researcher sends many carefully crafted prompts that cause the model to reproduce verbatim training data. No authentication was bypassed. The API was used through valid credentials. The requests were syntactically valid.
This is the harder case. Three legal theories converge:
- CFAA (unauthorized access): Weak after Van Buren. No technical gate was bypassed.
- CFAA (damage/loss): Section 1030(a)(5) covers causing damage or loss to a computer system. If training data extraction degrades model performance or constitutes theft of a trade secret encoded in model weights, a damage theory is arguable — but never litigated.
- Economic Espionage Act (EEA), 18 U.S.C. §§ 1831–1839: Prohibits theft of trade secrets, including by computer. Carries criminal penalties of up to 10 years' imprisonment. No prosecution has been brought against AI researchers under the EEA as of 2026. The theory is live.
Open Legal Questions — Unresolved as of April 2026
Question 1: Is probing an AI API with adversarial prompts "exceeding authorized access" under CFAA post-Van Buren?
Valid API key + adversarial prompts = likely not criminal under the CFAA as currently construed, unless API access is revoked with notice and the research continues. The post-revocation analysis from hiQ Labs v. LinkedIn and Facebook v. Power Ventures suggests that continuing access after explicit revocation moves the activity from "exceeds authorized access" toward "without authorization."
Question 2: Is the LLM the "tool" or the "victim" in a prompt injection attack?
When a prompt injection attack causes an LLM-powered system to exfiltrate data or take harmful actions, the legal question of who committed what crime is unresolved. Under current U.S. law, the attacker has likely committed CFAA unauthorized access (using the LLM as a tool to access the victim's data) and wire fraud (if financial gain is involved). The LLM itself is not a legal person. The operator's civil exposure to victims of prompt injection attacks is an open question — plausibly a negligence claim, but the economic loss rule and "no duty to third parties" analysis make this difficult.
Question 3: Is extracted training data "stolen property" under EEA?
Three elements must be met: (1) the information is a trade secret; (2) the defendant misappropriated it; (3) the defendant knew it was a trade secret.
- Is it a trade secret? Training datasets curated by major AI labs are likely trade secrets — they derive value from not being publicly known and are subject to reasonable protective measures.
- Is extraction "misappropriation"? 18 U.S.C. § 1839(5) defines misappropriation broadly to include acquisition of a trade secret by "improper means" (itself defined in § 1839(6)). Whether using an authorized API to produce unintended outputs is "improper means" is not settled.
- Knowledge element: Researchers who know they are attempting to extract training data from a commercial model likely satisfy this if the data turns out to be protected.
Practical conclusion: Retaining extracted training data beyond what is needed for a vulnerability disclosure report creates serious EEA exposure. Destroy it. Document the destruction.
The Role of Terms of Service Post-Van Buren
Van Buren settled one question: criminal CFAA liability does not attach solely from violating terms of service. A researcher who uses an AI API in violation of the ToS has not committed a federal crime merely by that violation.
What ToS can do:
- Create a civil breach of contract claim by the AI vendor against the researcher.
- Trigger API access revocation — and continued access after revocation does raise CFAA criminal exposure.
- Become evidence of "without authorization" in a civil CFAA § 1030(g) suit.
What ToS cannot do post-Van Buren:
- Alone transform otherwise authorized computer access into criminal CFAA violations.
- Create EEA liability where the underlying technical acts would not independently support it.
HackerOne AI Security Research Safe Harbor (January 2026) — The First Line of Defense
HackerOne launched the first formal safe harbor document specifically addressing AI security research in January 2026. Key elements:
Explicit adversarial ML coverage: The safe harbor expressly covers adversarial machine learning techniques including prompt injection, jailbreaking, model extraction, membership inference attacks, and data poisoning testing — activity categories that standard bug bounty safe harbors either exclude or are silent on.
Beyond traditional scope: Participating programs may extend safe harbor to testing that goes beyond conventional vulnerability assessment — including testing model behavior at boundary conditions and safety filter robustness.
Retention restrictions: The safe harbor requires researchers not to retain extracted model outputs, training data, or model weights beyond the period reasonably needed for disclosure preparation. This directly addresses the EEA exposure risk.
Limitations: The safe harbor binds participating vendors — it does not bind DOJ, state prosecutors, or non-participating AI companies. Researchers testing AI systems outside enrolled programs receive no coverage. The safe harbor is not legislation — it is a contractual arrangement.
Current program enrollment: As of April 2026, the AI-specific safe harbor has been adopted by a subset of HackerOne's AI client programs. Researchers must verify enrollment in the specific program they intend to test.
Legal Risk Matrix for AI Security Researchers
| Action | CFAA Risk | EEA Risk | ToS Civil Risk | Safe Harbor Available |
|---|---|---|---|---|
| Prompt injection testing | Low-Medium | None | Possible | If explicitly in scope |
| Training data extraction | Medium | Possible | Yes | Rarely; retention limits apply |
| Model API probing (rate limits) | Low | None | Yes | Often |
| Adversarial example generation | Low | None | Possible | Usually |
| Jailbreaking / safety bypass | Medium-High | None | Yes | Depends on program |
| Model inversion / weight reconstruction | Medium | High | Yes | Rarely; no programs known as of 2026 |
| Membership inference attacks | Low-Medium | Possible | Yes | Rare; HackerOne AI SH potentially |
| Continued access post-revocation | High | Context-dependent | Yes | No |
Risk ratings assume: researcher holds valid API credentials for testing; research is designed to identify and disclose vulnerabilities; extracted data is not retained beyond disclosure needs; testing occurs on program infrastructure, not production user data.
Recommended Legal Analysis Before AI Security Research
- Verify the program's safe harbor explicitly covers adversarial testing — do not assume standard bug bounty safe harbor language extends to LLM jailbreaking, model extraction, or prompt injection. Read the language.
- Do not retain extracted data beyond what is needed for disclosure.
- Treat API key revocation as equivalent to a Power Ventures cease-and-desist — continuing research after revocation raises CFAA exposure materially.
- Document authorization at every step — program enrollment confirmation, scope definitions, timestamps of testing activity.
- Model extraction and training data reconstruction are EEA risk activities — even where CFAA exposure is low, the EEA exposure is non-trivial and no safe harbor eliminates it.
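The pre-engagement checks above can be collapsed into a simple decision procedure that mirrors the AI-research branch of the issue-map flowchart. This is an illustrative sketch, not legal advice: every class, field, and risk label below is hypothetical, and real assessments require counsel.

```python
from dataclasses import dataclass

@dataclass
class ResearchPlan:
    has_valid_authorization: bool   # valid API key or program enrollment
    bypasses_technical_gate: bool   # auth mechanism / access control bypassed
    retains_extracted_data: bool    # keeps training data beyond disclosure needs
    safe_harbor_in_scope: bool      # safe harbor explicitly covers the activity
    access_revoked: bool            # API key or authorization revoked with notice

def assess_risk(plan: ResearchPlan) -> str:
    """Mirror the AI-research branch of the issue-map flowchart (illustrative)."""
    if not plan.has_valid_authorization or plan.access_revoked:
        # Post-revocation access is treated like no authorization at all.
        return "HIGH: CFAA 'without authorization' territory — consult counsel"
    if plan.bypasses_technical_gate:
        return "MEDIUM-HIGH: possible 'exceeds authorization' post-Van Buren"
    if plan.retains_extracted_data:
        return "MEDIUM: EEA exposure + ToS civil risk — destroy and document"
    if plan.safe_harbor_in_scope:
        return "LOWEST: documented, in-scope research — keep records"
    return "MATERIAL: no safe harbor coverage — consult counsel before proceeding"
```

A plan with valid credentials, no gate bypass, no data retention, and explicit safe harbor coverage lands in the lowest-risk bucket; flipping any one flag moves it up the ladder, which is the point of documenting each step.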
Section 2: The Liability Abyss — Software Supply Chain Attack Doctrine
The Calm — A Software Update
Beginning in the spring of 2020, an automated build process pushed a software update to SolarWinds Orion customers worldwide. Roughly 18,000 organizations downloaded it. Government agencies, defense contractors, Fortune 500 companies — all of them silently installing what they trusted.
The update was poison.
The Russian Foreign Intelligence Service — the SVR — had been inside SolarWinds' build pipeline for months. The trojanized update opened doors across the U.S. government that no one knew existed. Treasury. State. Justice. Defense. The attackers had been in some of those networks since March.
Detection came in December 2020, from a different company, by accident.
The downstream victims — the 18,000 organizations — immediately asked the same question: can we sue SolarWinds?
The answer, under existing U.S. law, was almost uniformly: no.
The Supply Chain Victim Liability Gap Under U.S. Law
Theory 1: Negligence
For negligence, a plaintiff must establish: (1) defendant owed a duty; (2) breach; (3) causation; (4) damages. Supply chain victims face two immediate obstacles:
The economic loss rule: In most U.S. jurisdictions, a plaintiff cannot recover pure economic losses — lost business, data recovery costs, regulatory fines, incident response costs — through negligence unless there is also physical injury or property damage. A ransomware attack that costs $50 million in remediation but causes no physical damage fails this test. Courts treat it as a contractual allocation issue.
No duty to third parties: SolarWinds had contracts with its direct customers. It did not have contracts with those customers' customers, employees, or third parties harmed by the breach. Under the Palsgraf analysis, extending negligence duty to all foreseeable downstream victims of a supply chain attack would create effectively unlimited liability — courts have consistently declined.
Theory 2: Contract
Direct customers may have contractual claims — but vendor software agreements uniformly include limitation of liability clauses that cap damages at the contract value, exclude consequential damages, and disclaim warranties related to security. SolarWinds' customer agreements contained these provisions. These clauses are generally enforceable as between sophisticated commercial parties.
Theory 3: Product Liability
Strict products liability allows injured parties to recover from manufacturers without proving negligence. If software were a "product," supply chain attack victims could argue the trojanized update was a defective product. However, the majority of U.S. courts have held that software is not a "product" subject to strict product liability — it is a service, an intellectual creation, or a licensed information product.
Theory 4: CFAA Against the Vendor
The CFAA's civil provision (18 U.S.C. § 1030(g)) allows private parties to sue for unauthorized access. But the vendor did not "access" the customers' systems without authorization — the attackers did, using the vendor's update as a vector. The CFAA does not impose liability on software vendors for security failures that enable unauthorized access by third parties.
The SEC v. SolarWinds Prosecution: Disclosure, Not Security Failure
October 2023. The Securities and Exchange Commission sued SolarWinds and its Chief Information Security Officer, Timothy Brown, in the Southern District of New York.
The charges were not about the security failure. The theory was that SolarWinds and Brown had made material misstatements about the company's security posture in public filings before the breach became known.
The SEC alleged:
- SolarWinds' public statements described robust security practices that were not implemented.
- Brown had internal communications acknowledging significant security deficiencies that were not disclosed.
- These misstatements were materially misleading to investors.
A federal court in July 2024 dismissed most of the claims against Brown and significantly narrowed the claims against SolarWinds. But the case established a critical principle: cybersecurity disclosure obligations under securities law are enforceable, even if substantive cybersecurity negligence claims are not. A CISO who signs off on misleading security disclosures faces personal liability.
The vendor's security failure remained unpunishable. The vendor's story about its security failure became the prosecution.
The 3CX and XZ Utils Attacks: Pattern Without Remedy
3CX (2023): North Korea's Lazarus Group compromised the build environment for 3CX's desktop application, distributing trojanized versions to a customer base of over 600,000 organizations globally. No criminal prosecution of 3CX followed. No civil liability theory succeeded. Affected 3CX customers had no viable U.S. legal recourse.
XZ Utils (2024): A sophisticated multi-year social engineering campaign — attributed to an unknown actor operating under the persona "Jia Tan" — compromised the maintainer of XZ Utils, a fundamental data compression library included in major Linux distributions. The attacker, over approximately two years, built trust in the open source community before inserting a backdoor that would have compromised SSH authentication at massive scale. A Microsoft engineer detected the anomaly before widespread deployment.
Had it deployed, affected users would have had zero legal recourse against anyone — the maintainer was socially engineered, not negligent in any prosecutable sense; the attacker's identity remains unknown; the open source library was provided without warranty.
Legal pattern across all three: The attacker commits CFAA (initial intrusion into the vendor's build environment). The attacker may face DOJ indictment. Downstream victims — the ones with actual damages — have no viable U.S. legal claim against the compromised vendor.
Regulatory Pressure Attempts to Fill the Gap
Executive Order 14028 (May 2021): Required federal contractors selling software to the government to maintain an SBOM (Software Bill of Materials). Created SBOM standards but no private cause of action. Applies only to federal contractors.
CISA Secure by Design (2023): Published voluntary security principles for software manufacturers. Voluntary — no enforcement mechanism, no private cause of action.
SEC Cybersecurity Disclosure Rules (2023): Public companies must disclose material cybersecurity incidents within 4 business days and describe cybersecurity risk management processes annually. Enforcement is through securities law — a disclosure violation, not a security failure.
EU Cyber Resilience Act (CRA, 2024): The most significant development in supply chain liability globally. The CRA creates mandatory security requirements for "products with digital elements" — any hardware or software product with direct or indirect network connectivity sold in the EU market. Key provisions:
- Manufacturers must assess cybersecurity risk and implement security by design.
- Mandatory vulnerability disclosure and patch deployment processes.
- Civil penalties of up to €15 million or 2.5% of global annual revenue for non-compliance.
- Extraterritorial reach: applies to any manufacturer selling into the EU market, regardless of where the manufacturer is based.
- Importers and distributors can be held liable if the manufacturer is outside the EU.
For U.S. software vendors with EU market exposure — which is most significant enterprise software companies — the CRA creates the first legally binding substantive security requirements with meaningful civil penalties.
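The CRA penalty ceiling is generally read as the greater of the two figures, so for large vendors the revenue-based prong dominates. A quick arithmetic sketch (the function name and revenue figures are hypothetical):

```python
def cra_penalty_ceiling(global_annual_revenue_eur: float) -> float:
    """Ceiling for CRA non-compliance penalties: the greater of
    EUR 15 million or 2.5% of global annual revenue (illustrative)."""
    return max(15_000_000.0, 0.025 * global_annual_revenue_eur)

# A hypothetical vendor with EUR 2B global revenue:
# max(EUR 15M, 2.5% of 2B = EUR 50M) -> ceiling of EUR 50M
print(cra_penalty_ceiling(2_000_000_000))
```

For any vendor with global revenue above EUR 600M, the 2.5% prong exceeds the EUR 15M floor, which is why the CRA's exposure scales with company size rather than contract value.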
Key Gap: No U.S. Civil Theory for Supply Chain Victims
| Legal Theory | Available Against Compromised Vendor? | Why Not |
|---|---|---|
| Negligence | Rarely viable | Economic loss rule + duty-to-third-parties bar |
| Contract — direct customer | Nominal only | Limitation of liability + consequential damages exclusions |
| Strict product liability | Not viable | Software is not a "product" in most U.S. jurisdictions |
| CFAA civil | Not viable | Vendor did not commit the unauthorized access |
| Securities fraud | Only for public companies with misleading disclosures | Requires misrepresentation, not mere security failure |
| EU CRA (regulatory) | Regulatory penalties only, no private suit | Enforcement by market surveillance authorities |
The U.S. legal system currently allocates nearly all supply chain attack costs to the downstream victims. This will change — the regulatory pressure is building, the EU has moved first — but as of April 2026, the gap is real, documented, and unresolved.
Section 3: The Insurance War — Coverage Disputes, War Exclusions, and Double Extortion
Without Warning — The NotPetya Catastrophe
June 27, 2017. In seconds, Merck & Co.'s operations ground to a halt. Screens went dark across 30,000 computers in 65 countries. The NotPetya malware — which looked like ransomware but was designed purely to destroy — had arrived through a trojanized Ukrainian accounting software update. It overwrote master boot records and left systems permanently unrecoverable. Merck's manufacturing shut down. Sales collapsed. The losses would eventually reach $1.4 billion.
Merck filed a claim with its insurer, ACE American Insurance Company. ACE denied it. The reason: a "hostile or warlike action" exclusion in the policy language.
The insurance company's argument: NotPetya was deployed by Russia's GRU — a military intelligence agency — therefore it was an act of war, therefore it was excluded.
What happened next would define the doctrine.
The Merck Victory — New Jersey Appellate Division, May 2023
The dispute turned on the policy's exclusion for loss caused by "hostile or warlike action" by a government or sovereign power. The court ruled in Merck's favor: the exclusion was drafted in the context of traditional armed conflict between sovereign nations, and insurance policy exclusions must be interpreted narrowly, against the insurer. The exclusion's history and context show it was designed for conventional armed conflict, not cyber operations conducted by state intelligence services in peacetime. Merck's reasonable expectation as a policyholder was that commercial cyber losses were covered. ACE must pay.
Significance: This is the leading authority on war exclusion application to state-sponsored cyberattacks. Decided under New Jersey law — not binding in other jurisdictions, but the most substantively analyzed court ruling on this precise issue.
Mondelez v. Zurich — Settled, But the Pattern Holds
Mondelez International suffered approximately $100 million in NotPetya damages. Zurich American Insurance Co. also invoked the war exclusion. After years of litigation, the parties settled in 2023 on confidential terms — market reporting indicates Mondelez received a substantial portion of its claimed losses.
Two major corporations. Same attack. Same insurer argument. Both insurers lost or settled rather than litigate to judgment.
The signal: 2017-era war exclusion language cannot reliably exclude state-sponsored cyberattacks under U.S. court analysis.
Lloyd's Market Response — New Language, New Ambiguity
Following the NotPetya outcomes, Lloyd's of London issued Market Bulletin Y5381 in August 2022, requiring all Lloyd's syndicates writing standalone cyber policies to include specific war exclusion language by March 31, 2023. The required language distinguishes between:
- Cyber war exclusion: Excludes losses from cyberattacks that are part of or contribute to armed conflict between states, or that cause significant destabilizing effects on a state.
- Cyber operations carve-back: Retains coverage for "cyber operations" — state-sponsored cyberattacks that do not rise to the level of war.
The new language attempts to thread a needle. But the distinction between a "cyber operation" (covered) and a "cyberattack that contributes to armed conflict" (excluded) is not defined by reference to any established legal standard. Policyholders are trading one coverage dispute for a different coverage dispute — same ambiguity, different clause.
Practical guidance: When reviewing a cyber policy with post-2023 Lloyd's market war exclusion language, demand a plain-language explanation of exactly what scenario would trigger the exclusion and what evidence the insurer would require to invoke it. Get it in writing.
Ransomware-Specific Coverage Issues
Business Interruption (BI) Coverage:
Most cyber policies include business interruption coverage for income lost during a ransomware event. Key issues:
- Waiting period/deductible: Most BI coverage has an 8–24 hour waiting period before coverage begins. A short-duration ransomware incident may resolve entirely within the waiting period, leaving no recoverable BI loss.
- Period of restoration: Coverage typically ends when systems could be restored — not necessarily when they are restored.
- Ransomware sublimits: Post-2021 market hardening drove many insurers to impose separate, lower sublimits specifically for ransomware events — often $1M–$5M in policies with $25M+ total limits.
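The interaction of the waiting period, the period of restoration, and the sublimit can be made concrete with a small sketch. All figures, the flat hourly-loss assumption, and the function itself are hypothetical illustrations, not drawn from any actual policy form:

```python
# Hypothetical sketch of how a BI waiting period, period of restoration,
# and ransomware sublimit interact. Figures are illustrative only.

def bi_recoverable(outage_hours: float,
                   restoration_possible_hours: float,
                   waiting_period_hours: float,
                   hourly_income_loss: float,
                   sublimit: float) -> float:
    """Estimate recoverable BI loss under a simplified policy model."""
    # Period of restoration: coverage runs to when systems COULD have been
    # restored, even if actual downtime ran longer.
    covered_window = min(outage_hours, restoration_possible_hours)
    # Nothing is recoverable until the waiting period is exhausted.
    compensable_hours = max(0.0, covered_window - waiting_period_hours)
    loss = compensable_hours * hourly_income_loss
    # The ransomware/BI sublimit caps recovery regardless of actual loss.
    return min(loss, sublimit)

# A 6-hour outage never clears an 8-hour waiting period: recovery is zero.
print(bi_recoverable(6, 6, 8, 50_000, 5_000_000))    # 0.0
# A 72-hour outage where restoration was possible at 48 hours: only
# (48 - 8) = 40 hours are compensable.
print(bi_recoverable(72, 48, 8, 50_000, 5_000_000))  # 2000000
```

Note how the second scenario is cut twice before the sublimit is even reached: once by the period-of-restoration cap and once by the waiting period.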
Ransom Payment Coverage:
Most cyber policies cover ransom payment. Two critical conditions apply:
OFAC liability: If the ransomware group is designated by OFAC — as REvil, DarkSide, Evil Corp, and ALPHV/BlackCat have been — paying ransom may constitute a federal sanctions violation regardless of whether you knew the group was sanctioned. Many insurers include language stating that coverage does not extend to payments that constitute illegal activity. Paying a sanctioned group without OFAC clearance could void coverage in addition to creating independent federal liability.
Consent-to-pay requirements: Most cyber policies require the insurer's prior written consent before paying ransom. Paying without consent — even a payment the policy would otherwise cover — may allow the insurer to deny the claim. During an operational shutdown, incident response teams face intense pressure to pay quickly; the policy's consent requirement creates a practical conflict with that operational timeline.
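These two conditions suggest a hard pre-payment gate in a ransomware playbook. The sketch below is illustrative only, not legal advice: the group list, screening logic, and consent flag are hypothetical placeholders, and a real sanctions screen must run against OFAC's current SDN data with counsel review.

```python
# Hypothetical pre-payment gate for a ransomware playbook. The sanctioned-group
# set and screening logic are placeholders; a real screen must query OFAC's
# current SDN list and involve counsel before any payment decision.

SANCTIONED_GROUPS = {"evil corp", "revil", "darkside"}  # placeholder data

def ransom_payment_gate(threat_actor: str,
                        insurer_consent_in_writing: bool,
                        ofac_license_obtained: bool = False) -> tuple[bool, str]:
    """Return (may_proceed, reason). Both gates must pass before payment."""
    # Gate 1: OFAC screen. Sanctions liability is strict -- knowledge of the
    # designation does not matter -- so an unscreened payment is never safe.
    if threat_actor.lower() in SANCTIONED_GROUPS and not ofac_license_obtained:
        return (False, "Actor hits sanctions screen; OFAC license required")
    # Gate 2: insurer consent. Paying without prior written consent can void
    # the claim even if the payment would otherwise be covered.
    if not insurer_consent_in_writing:
        return (False, "Insurer's prior written consent not obtained")
    return (True, "Both gates passed; proceed per playbook")

print(ransom_payment_gate("DarkSide", insurer_consent_in_writing=True))
```

The point of modeling it this way is that both gates are conjunctive: insurer consent does not cure a sanctions hit, and an OFAC license does not cure missing consent.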
Double Extortion:
Modern ransomware operations exfiltrate data before encrypting it, then threaten to publish unless additional ransom is paid. This creates two simultaneous coverage triggers:
- Ransomware/extortion coverage: Covers ransom payment and response costs.
- Data liability coverage: Covers breach notification costs, regulatory fines, and third-party claims.
Many policies have separate sublimits for these two categories. A double extortion event may exhaust the ransomware sublimit without touching the data liability sublimit, or vice versa.
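The sublimit-exhaustion pattern can be sketched numerically. The grant names, figures, and the no-cross-over assumption below are hypothetical; actual allocation depends on the specific policy's coverage grants and any shared-limit provisions:

```python
# Hypothetical sketch of double-extortion claim allocation across separate
# sublimits, assuming no cross-over between coverage grants. All names and
# figures are illustrative, not drawn from any actual policy.

def allocate_claim(costs: dict[str, float],
                   sublimits: dict[str, float]) -> dict[str, float]:
    """Cap each coverage grant's costs at its own sublimit (no cross-over)."""
    return {grant: min(costs.get(grant, 0), sublimits[grant])
            for grant in sublimits}

sublimits = {"extortion": 2_000_000, "data_liability": 10_000_000}
# Ransom and response costs blow past the extortion sublimit while the data
# liability grant is barely touched: the excess extortion loss is uninsured.
costs = {"extortion": 4_500_000, "data_liability": 1_200_000}
paid = allocate_claim(costs, sublimits)
print(paid)        # {'extortion': 2000000, 'data_liability': 1200000}
uninsured = sum(costs.values()) - sum(paid.values())
print(uninsured)   # 2500000
```

Under this model, $8.8M of headroom remains in the data liability grant while $2.5M of extortion loss goes unpaid, which is exactly the mismatch the sublimit review in the checklist below is meant to surface.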
Cyber Insurance Market Hardening (2021–2023)
The ransomware surge of 2021 — Colonial Pipeline, JBS Foods, Kaseya, dozens of healthcare systems — produced the most significant cyber insurance market dislocation in the product's history:
- Average cyber insurance premiums increased approximately 130% over 2021–2022.
- Multiple major carriers exited the cyber market or dramatically restricted appetite.
- Capacity became scarce, particularly for healthcare, education, and critical infrastructure.
- Sublimits for ransomware, social engineering, and war-adjacent scenarios became standard.
- Security questionnaire requirements expanded dramatically — carriers began demanding evidence of MFA deployment, EDR coverage, backup isolation, and patch management maturity.
By mid-2023, the market began stabilizing. The structural changes — sublimits, exclusions, stricter underwriting — are not reversing.
Pre-Incident Insurance Policy Review Checklist
IR professionals and corporate counsel should conduct this review annually and before any major incident response engagement:
- [ ] War exclusion language: What is the exact language? Does it use pre-2023 "hostile or warlike action" language (better for policyholders) or post-Lloyd's Y5381 language (more complex, potentially narrower coverage)?
- [ ] Ransomware sublimit: What is the ransomware-specific sublimit? Is it adequate relative to your revenue and operational exposure?
- [ ] BI waiting period: What is the waiting period before business interruption coverage begins? Is it 8, 12, or 24 hours?
- [ ] Consent-to-pay requirement: Does the policy require insurer consent before paying ransom? What is the insurer's stated timeline for consent?
- [ ] OFAC carve-out: Does the policy explicitly address OFAC liability? Does it exclude coverage for payments to sanctioned groups? Does it provide assistance with the sanctions screen process?
- [ ] Double extortion structure: Are ransomware/extortion coverage and data liability/breach notification coverage separate coverage grants with separate sublimits, or unified?
- [ ] Pre-approved IR firm: Is there a pre-approved list of incident response firms? Can you deviate from that list, and at what cost?
- [ ] Notice requirements: What notice must be given to the insurer and when? Failure to provide timely notice can void claims.
- [ ] Data protection conditions for exfiltrated data: Some policies reduce or deny coverage where the organization failed to encrypt or otherwise adequately protect the data that was exfiltrated.
- [ ] Coverage for regulatory fines: Some policies include coverage for regulatory fines (SEC, OCR HIPAA enforcement, state AG actions); many exclude them as "uninsurable penalties."
Practical Takeaways
For AI Security Researchers:
- Valid API credentials + adversarial prompts = low CFAA criminal risk under current doctrine, but not zero. The doctrine is untested in AI contexts.
- Verify whether the specific program you are testing has adopted the HackerOne AI Security Research Safe Harbor or equivalent — do not assume standard bug bounty protection covers adversarial ML.
- Never retain extracted training data beyond what is needed for disclosure. Economic Espionage Act (EEA) exposure is the largest underappreciated legal risk in AI security research.
- API key revocation after notice is a red line. Continuing research after revocation materially increases CFAA exposure.
- Jailbreaks and safety filter bypasses are in a legal gray zone — lower CFAA risk than model inversion, but ToS civil liability and potential safe harbor exclusion are real.
For Software Vendors and Their Customers:
- U.S. law does not currently give downstream supply chain victims a viable civil claim against the compromised vendor. Do not assume contract law will protect you — negotiate your vendor agreements to the extent possible, but understand the limits.
- Public companies: post-SEC SolarWinds enforcement, security disclosure accuracy is now a personal liability issue for CISOs who sign off on materially misleading statements.
- Federal software vendors: EO 14028 SBOM requirements apply to you; maintaining an accurate SBOM is now a contractual federal requirement.
- EU market exposure: CRA compliance timelines are real. Software and hardware manufacturers selling into EU markets need to be moving on CRA compliance now.
For IR Professionals and Corporate Counsel:
- Read the cyber policy before the incident. The war exclusion, ransomware sublimit, consent-to-pay requirement, and OFAC carve-out are not details — they determine whether your claim pays.
- Merck is your authority on war exclusions for state-sponsored ransomware under pre-2023 policy language. Post-2023 Lloyd's market language is more complex and less clearly favorable.
- OFAC sanctions screening is not optional before a ransom payment. Build it into your ransomware playbook as a mandatory gate.
- Double extortion requires simultaneous claims management across two coverage grants — engage your broker to understand the sublimit structure before an event.
- Insurer consent before ransom payment is a policy condition, not a courtesy. Violating it can void the claim regardless of whether the payment was otherwise covered.
What This Module Does Not Cover
- State computer crime statutes (addressed in Module 01A — CFAA and Federal Statutes, and relevant state modules)
- GDPR and EU data protection law as applied to AI training data (partially addressed in Module 01C — EU and International Frameworks)
- FTC enforcement actions against AI companies (emerging; not yet covered in LawZeee)
- Criminal CFAA analysis for non-AI hacking scenarios (Module 01A)
- Standard bug bounty and vulnerability disclosure law (Module 01J)
- Civil liability for AI model outputs to end users (separate emerging doctrine area; not yet covered)
- Insurance coverage disputes outside cyber policies (property, D&O, E&O coverage for cyber events)
- Export control law (EAR/ITAR) as applied to AI model exports
For Non-Technical Readers
On AI security research: Security researchers deliberately try to break AI systems to find weaknesses before attackers do. The law does not have clear rules yet for when this is legal and when it is not. The main issue is a 1986 law called the Computer Fraud and Abuse Act that was written for old-fashioned computer break-ins. Courts are still figuring out how it applies when someone uses a legitimate account to send unusual inputs to an AI system. A new safe harbor policy from HackerOne in early 2026 created some official protection for AI security researchers who work through authorized programs — but it does not cover everyone.
On software supply chain attacks: When attackers compromise a software company and push malicious code to that company's customers, U.S. law provides almost no way for those customers to sue the software company for failing to prevent it. The customers — who often had no idea there was a problem until attackers were already inside their networks — must bear their own costs. Courts, Congress, and regulators are aware this is wrong, and the European Union has passed legislation requiring minimum security standards for software, but the U.S. has not yet created civil liability for supply chain failures.
On cyber insurance: After a major ransomware attack in 2017 called NotPetya — launched by the Russian military against Ukraine but which spread globally — insurance companies tried to deny billions of dollars in claims from corporations, arguing the attack was a military "act of war" not covered by commercial insurance. Courts ruled against the insurance companies in the biggest cases (Merck won $1.4 billion in 2023). But since then, insurance policies have been rewritten with more complex language about what qualifies as "war," and claims for ransomware attacks are now subject to more restrictions, lower coverage limits, and requirements to get the insurance company's permission before paying a ransom. Organizations should read their cyber insurance policy before an incident — not during one.
Module 01S is part of the LawZeee legal reference system. Cross-references: Module 01A (CFAA Federal Statutes), Module 01C (EU/International Frameworks), Module 01D (Landmark Cases), Module 01J (Bug Bounty Legal Protections). Citations verified against primary legal materials. No LLM-fabricated citations — all case names, docket numbers, and regulatory citations should be independently verified before use in legal proceedings or formal submissions.
Test your knowledge
Ready to check what stuck?
10 questions — cases, statutes, and the practical move for each. Takes 5 minutes.