Summary for Non-Lawyers
Hacking a power grid controller or hospital infusion pump is a different legal universe than hacking a web application. The CFAA sentencing ceiling doubles — from 10 years to 20 — when the target qualifies as critical infrastructure, and prosecutors have used that enhancement against ransomware operators who hit hospitals and pipeline operators. This module maps every major physical-world computing target (SCADA, IoT botnets, vehicle ECUs, drones, smart meters, medical devices) to the exact statutes, cases, and sentencing ranges that apply, then gives researchers a clear safe/grey/red matrix for each activity class.
What This Module Answers Fast
- I scanned Shodan for exposed Modbus ports to report to CISA — am I covered? → Grey zone. Passive Shodan queries are not CFAA access; active probing of the discovered system crosses the line without authorization. CISA's own VDP does not grant authorization to test third-party industrial systems.
- My security firm was hired to pen test a water utility's SCADA network — what authorization do I need? → A written scope letter from the utility's executive (not IT contractor), specifying exact IP ranges, OT network segments, and time windows. Verbal authorization is no defense under CFAA.
- I found a vulnerability in my car's OBD-II port — can I test it? → Testing your own vehicle's OBD-II port in your own driveway, with the vehicle stationary, is generally low-risk under Van Buren (you own the vehicle). Broadcasting the exploit wirelessly to other vehicles is CFAA § 1030(a)(5) damage.
- I want to jam a drone flying over my property — is that legal? → No. RF jamming is a federal crime under 47 U.S.C. § 333 regardless of who owns the drone. Only federal agencies have counter-UAS jamming authority under the 2018 FAA Reauthorization Act. Shooting it down is also a federal crime (aircraft sabotage) under 18 U.S.C. § 32.
- A hospital ransomware attack killed a patient — can the attackers be charged with murder? → No charges have ever been filed on that theory in the U.S. Prosecutors have discussed involuntary manslaughter but the causal chain (ransomware → delayed treatment → death) has not been litigated. Germany came closest in 2020 (Düsseldorf University Hospital), but the case was dropped.
In seconds, everything changed. A cursor moved across a monitor at the Oldsmar, Florida water treatment plant — but nobody in the room had touched the mouse. The operator watched it glide to the sodium hydroxide control and drag the slider from 111 parts per million to 11,100. A hundred times the safe level. Enough lye to dissolve tissue, corrode pipes, poison a town. The operator grabbed his own mouse and reversed it. The whole intrusion lasted less than five minutes.
The attacker was never identified. No charges were ever filed. But the legal framework that would have applied — had attribution succeeded — was not ambiguous. CFAA § 1030(a)(5)(A) for intentional damage. CFAA § 1030(c)(4)(B) for the critical infrastructure enhancement that doubles the sentencing ceiling from 10 years to 20. And a trail of federal statutes governing attacks on critical infrastructure that most security researchers have never read, because most security researchers have never thought seriously about what it means to touch a system that controls something physical.
This module exists because web application law and critical infrastructure law are not the same. A CVSS 9.8 in a web app is a bad day. A CVSS 9.8 in a water treatment SCADA system is a potential mass casualty event — and the law treats it accordingly.
1. SCADA/ICS/OT Legal Framework
1.1 — The Computers That Run the Physical World
Supervisory Control and Data Acquisition (SCADA) systems and Industrial Control Systems (ICS) are the computing infrastructure that operates physical processes — pipelines, water treatment plants, electrical grids, manufacturing lines, and nuclear facilities. Operational Technology (OT) is the broader term encompassing any computing that controls physical equipment. For legal purposes, all of these are "computers" under CFAA § 1030(e)(1), and "protected computers" under § 1030(e)(2)(B), which reaches any device used in or affecting interstate or foreign commerce or communication. A Modbus-connected Programmable Logic Controller (PLC) running a water pump qualifies.
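The "function, not form" point is easiest to see at the wire level: a Modbus/TCP PLC answers structured network requests like any other server. A minimal sketch (pure Python standard library; the transaction, unit, and register values are made up, and the code only constructs bytes — it never touches a network, which is the only place outside an authorized lab this material belongs):

```python
import struct

def modbus_read_holding_registers(transaction_id: int, unit_id: int,
                                  start_register: int, count: int) -> bytes:
    """Build a Modbus/TCP 'Read Holding Registers' (function code 0x03) request.

    Frame = MBAP header (7 bytes) + PDU (5 bytes), all big-endian:
    transaction id | protocol id (0) | length | unit id | function | start | count
    """
    pdu = struct.pack(">BHH", 0x03, start_register, count)
    # MBAP length field counts everything after itself: unit id + PDU.
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

# Hypothetical request: read 2 registers from unit 0x11 starting at offset 0.
frame = modbus_read_holding_registers(transaction_id=1, unit_id=0x11,
                                      start_register=0, count=2)
assert frame.hex() == "000100000006110300000002"
```

Twelve bytes with no authentication field anywhere — which is the practical reason the law, not the protocol, is the only barrier between an exposed PLC and anyone who can reach it.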
1.2 — CFAA § 1030(c)(4)(B): The Sentencing Enhancement That Doubles the Stakes
The standard CFAA damage sentencing ceiling is 10 years for a first offense involving damage (§ 1030(c)(4)(A)). When the damage affects critical infrastructure, § 1030(c)(4)(B) raises the ceiling to 20 years. The statute does not specify a mandatory minimum — unlike some drug statutes — but the Sentencing Guidelines produce advisory ranges well above the base level when infrastructure damage is established.
What counts as critical infrastructure: Presidential Policy Directive 21 (PPD-21, 2013) identifies 16 sectors:
- Chemical
- Commercial Facilities
- Communications
- Critical Manufacturing
- Dams
- Defense Industrial Base
- Emergency Services
- Energy
- Financial Services
- Food and Agriculture
- Government Facilities
- Healthcare and Public Health
- Information Technology
- Nuclear Reactors, Materials, and Waste
- Transportation Systems
- Water and Wastewater Systems
For § 1030(c)(4)(B) enhancement purposes, prosecutors must allege and prove that the targeted system falls within one of these sectors. Courts have not required that the system be officially designated — they look at function, not government certification. A rural water utility that is not on any CISA list is still a water system for enhancement purposes.
1.3 — The "Substantial Disruption" Standard
The enhancement also applies when the offense causes "substantial disruption of or interference with" the operations of the federal government, justice administration, or national defense. Courts have interpreted "substantial disruption" broadly. In the Colonial Pipeline context, prosecutors framed the disruption (6 days of pipeline shutdown affecting fuel supply to the southeastern United States) as both an infrastructure attack and a national defense/transportation disruption — two independent enhancement hooks.
1.4 — CISA ICS-CERT Reporting
CISA operates the Industrial Control Systems Cyber Emergency Response Team (ICS-CERT), now integrated into CISA's broader operations. Researchers who discover ICS vulnerabilities are encouraged — but not legally required (absent a federal contract) — to report through the CISA coordinated vulnerability disclosure process at cisa.gov/coordinated-vulnerability-disclosure-process. ICS-CERT will coordinate disclosure with the affected vendor and, where appropriate, issue advisories. Reporting to ICS-CERT does not provide legal immunity. It is a factor prosecutors may weigh under the DOJ 2022 good-faith guidance, but it is not a statutory safe harbor.
1.5 — Colonial Pipeline: The Ransom They Paid and the Bitcoin They Couldn't Keep
On May 7, 2021, DarkSide ransomware operators shut down the Colonial Pipeline — and what happened next exposed the myth of anonymous cryptocurrency ransoms.
The attack halted 5,500 miles of pipeline carrying 45% of the U.S. East Coast's fuel supply for six days. The DOJ's Ransomware and Digital Extortion Task Force recovered approximately $2.3 million of the $4.4 million ransom payment by seizing the DarkSide Bitcoin wallet (DOJ press release, June 7, 2021). No criminal charges have been filed because the operators remain in Russia. The legal significance: the DOJ confirmed that cryptocurrency wallets holding ransomware proceeds are seizable under 18 U.S.C. § 981 civil forfeiture, and the recovery demonstrated that "untraceable" Bitcoin ransom payments are in fact traceable on-chain. OFAC's designation of DarkSide also means any future ransom payment to a DarkSide-affiliated entity triggers sanctions liability for the payer under 31 C.F.R. part 501 — a direct consequence for victim companies.
1.6 — Oldsmar Water Treatment Plant: The Attack That Nearly Poisoned a Town (Florida, 2021)
In February 2021, an attacker accessed the Oldsmar, Florida water treatment plant's SCADA system via TeamViewer remote access software (left enabled from a previous IT engagement) and raised the sodium hydroxide (lye) level from 111 parts per million to 11,100 ppm — 100 times safe drinking levels. An operator noticed the cursor moving and reversed the change within minutes. No harm resulted.
What charges could have applied: The FBI, Secret Service, and Pinellas County Sheriff investigated. The attacker was never identified. If identified, applicable charges would include:
- CFAA § 1030(a)(5)(A) — knowingly causing damage to a protected computer (the SCADA workstation)
- CFAA § 1030(c)(4)(B) — critical infrastructure enhancement (water system = PPD-21 sector)
- 18 U.S.C. § 229 (chemical weapons) — the lye increase, if completed, could theoretically implicate chemical-sabotage liability, though the statute is a strained fit for an attack on water chemistry controls
- 18 U.S.C. § 1992 — terrorist attacks against railroad carriers and mass transportation systems (the successor to the repealed § 1993); courts have not extended § 1992 to water systems, so it remains an untested theory
- Florida Statute § 815.06 — unauthorized access to computer systems; Florida's state CFAA analog would apply independently
The case is legally notable because it represents the exact scenario where § 1030(c)(4)(B) was designed to apply, yet no charges were ever filed due to failed attribution.
2. Criminal Enhancement Analysis — Proving Infrastructure Nexus
2.1 — How Prosecutors Build the Infrastructure Case
The § 1030(c)(4)(B) enhancement requires the government to prove beyond a reasonable doubt that: (1) the defendant accessed a protected computer without authorization (or exceeded authorized access), (2) caused damage, and (3) the damage affected a critical infrastructure sector. Prosecutors use three categories of evidence:
Functional evidence: What did the system actually do? SCADA HMI screenshots, historian logs, PLC ladder logic exports, and process control documentation establish that the targeted system operated physical processes within a covered sector.
Connectivity evidence: Was the system connected to operational processes? An isolated IT workstation at a utility does not trigger the enhancement. Prosecutors must show the compromised system had command authority over OT components — even one-hop access to a PLC or RTU is sufficient.
Impact evidence: Did the attack cause or attempt to cause disruption? Unlike the base damage offense, which requires only $5,000 in aggregate loss across victims (§ 1030(c)(4)(A)(i)(I)), the enhancement requires disruption of the infrastructure function, not just monetary loss. Ransomware that encrypts a hospital's EHR system but leaves medical devices operational occupies a different enhancement tier than ransomware that disables infusion-pump management software.
2.2 — Healthcare: When the System That Went Down Was All That Stood Between a Patient and Disaster
Change Healthcare (2024): ALPHV/BlackCat ransomware attack on UnitedHealth Group subsidiary Change Healthcare disrupted pharmacy claims processing nationwide for weeks. Over 100 million patient records were stolen. The ALPHV affiliate who conducted the attack received a $22 million ransom payment. Criminal charges have not been publicly filed as of mid-2026 due to the Russia nexus of the operators. The legal significance: HHS issued emergency guidance, and Congress introduced the Health Care Cybersecurity Improvement Act — the first sector-specific cybersecurity legislation targeting healthcare processors rather than just covered entities under HIPAA.
Universal Health Services (2020): Ryuk ransomware hit all 400 UHS U.S. facilities simultaneously in September 2020, forcing hospitals to divert ambulances and revert to paper records for weeks. No defendants have been charged in the U.S. The UHS attack is routinely cited by prosecutors in healthcare-sector cases to establish the "substantial disruption" baseline for sentencing arguments.
2.3 — Involuntary Manslaughter: The Charge That Never Came
It is the most unsettling open question in cybercrime law: if ransomware delays care and a patient dies, is that homicide?
Academic and prosecutorial commentary has discussed charging ransomware operators with involuntary manslaughter or negligent homicide when hospital attacks delay care and a patient dies. The legal obstacles are:
Causation: Proving beyond a reasonable doubt that the ransomware — not the underlying condition, physician decisions, or ambulance routing — caused the death is a formidable hurdle. The intervening acts of medical professionals break the causal chain under standard tort and criminal causation doctrine.
Intent: Involuntary manslaughter requires gross negligence or recklessness as to human risk. Ransomware operators could argue they targeted billing systems with no knowledge of direct medical device impact.
Germany's near-attempt: In 2020, a Düsseldorf University Hospital patient died after being diverted due to a ransomware attack (later determined to be misdirected DoppelPaymer ransomware intended for the affiliated university, not the hospital). German prosecutors opened a negligent homicide investigation but dropped it after concluding the patient would have died regardless.
No U.S. prosecution on this theory has occurred. It remains a live discussion in DOJ policy circles.
3. IoT Device Hacking
3.1 — The Botnet Built from Your Neighbor's Security Camera
The attack that took down Twitter, Netflix, and Reddit in October 2016 didn't come from sophisticated nation-state infrastructure. It came from cameras. And DVRs. And routers. Six hundred thousand of them, enslaved by malware that found millions of IoT devices still running on factory default passwords.
IoT devices — routers, cameras, smart thermostats, industrial sensors — are protected computers under CFAA § 1030(e)(2)(B) when they affect interstate commerce, which virtually all networked devices do. The key CFAA provisions in the IoT context:
- § 1030(a)(2): Unauthorized access to obtain information — applies to credential harvesting from cameras, reading sensor data without authorization
- § 1030(a)(5)(A): Knowingly causing damage — applies to deploying malware that co-opts IoT devices into a botnet, even if individual device owners suffer no obvious harm (the device's computational resources are damaged in legal terms)
- § 1030(e)(8) damage definition: Damage includes "impairment to the integrity or availability" of data or systems — running a cryptomining payload on a thermostat impairs its availability
3.2 — United States v. Jha — Mirai Botnet (2018)
Paras Jha, Josiah White, and Dalton Norman created the Mirai botnet, which infected approximately 600,000 IoT devices (primarily Dahua cameras and Xiongmai DVRs) by brute-forcing default Telnet credentials. The botnet was used to launch record-breaking DDoS attacks, including the September/October 2016 attacks that peaked at 620 Gbps (Krebs on Security) and the Dyn DNS attack that disrupted Twitter, Netflix, Reddit, and GitHub. The defendants pled guilty in 2017 in the District of Alaska. Sentencing (2018) imposed no prison time — instead: 2,500 hours of community service, $127,000 in restitution, continued cooperation with the FBI, and six months of home confinement.
Legal significance for researchers:
- § 1030(a)(5)(A) covers botnet recruitment. The act of installing Mirai's malware on each camera — without taking any data, without disrupting the camera's ostensible function — constituted damage under the statute. The impairment-of-integrity theory does not require the device owner to notice any harm.
- The DDoS attacks were charged separately under § 1030(a)(5)(A) (damage to the DDoS targets), not just as damage to the botnet devices. This double-damage structure means botnet operators face exposure at both layers.
- Light sentencing. The government traded prison time for cooperation — the defendants helped FBI investigations into other cybercrime. This is unusual and not a reliable precedent.
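Mirai's propagation step was trivially simple: try a hardcoded list of factory-default Telnet credentials against every reachable device. The same idea, inverted, makes a useful defensive audit of equipment you own. A sketch (the inventory format is illustrative, and the credential list is a small subset of the defaults published in Mirai's leaked source; a real audit should only ever run against your own or explicitly authorized devices):

```python
# A few of the factory defaults from Mirai's published credential list;
# real default-credential dictionaries are far longer.
KNOWN_DEFAULTS = {
    ("root", "xc3511"), ("root", "vizxv"), ("admin", "admin"),
    ("root", "default"), ("admin", "password"), ("root", "12345"),
}

def audit_inventory(devices: list[dict]) -> list[str]:
    """Return hostnames of owned devices still using a known default credential."""
    return [
        d["host"]
        for d in devices
        if (d["username"], d["password"]) in KNOWN_DEFAULTS
    ]

# Hypothetical inventory of devices you own:
inventory = [
    {"host": "cam-lobby", "username": "root", "password": "xc3511"},
    {"host": "dvr-01", "username": "admin", "password": "Y7#kq9!unique"},
]
assert audit_inventory(inventory) == ["cam-lobby"]
```

The legal line is the same one drawn throughout this module: running this against your own inventory is hygiene; running the forward version of it against other people's devices is § 1030(a)(5)(A).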
3.3 — FTC Act § 5 Against Insecure IoT Manufacturers
The Federal Trade Commission can bring unfair and deceptive trade practices actions under § 5 of the FTC Act against IoT manufacturers who ship devices with insecure defaults, fail to patch known vulnerabilities, or misrepresent their security posture. The FTC's 2013 action against TRENDnet (camera manufacturer) and its 2019 settlement with D-Link (over router security misrepresentations) establish that security failures are "unfair" practices when they cause reasonably foreseeable consumer harm. The FTC Act does not create criminal liability; civil penalties only.
3.4 — NIST IoT Cybersecurity Guidance (NISTIR 8259)
NISTIR 8259 (2020) establishes core device cybersecurity capabilities that IoT manufacturers should build in: unique device identification, device configuration, data protection, logical access to interfaces, software update capability, and cybersecurity state awareness. NISTIR 8259 is guidance, not law — it does not create legal duties. However, it is used by plaintiffs' attorneys and regulators as the benchmark for "reasonable security" in civil litigation and FTC proceedings.
3.5 — California IoT Security Law (Civil Code § 1798.91.04)
California's IoT security law (effective January 1, 2020, Civ. Code § 1798.91.04) requires manufacturers of connected devices sold in California to equip each device with a "reasonable security feature" appropriate to the device and the information it collects. Specific requirements: if a device uses a default password, the password must be unique per device, or the user must be prompted to generate a new password on first use. No private right of action — enforcement is by the California Attorney General. Violations are subject to civil penalty of up to $2,500 per device per violation, plus actual damages.
Other state IoT laws: Oregon (HB 2395, 2019) has similar default-password prohibition language. No other state has enacted comprehensive IoT security legislation as of early 2026, though several states have pending bills modeled on California and Oregon.
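The California statute's unique-per-device default-password requirement is straightforward to satisfy at provisioning time. A minimal sketch (the function name and password length are hypothetical, not statutory; the one design rule in the comment is the part that matters):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def provision_default_password(length: int = 16) -> str:
    """Generate a cryptographically random, per-device default password.

    Crucially, the password is NOT derived from the serial number or MAC
    address — derivable defaults (a common manufacturing shortcut) are
    guessable in bulk and arguably fail the statute's "reasonable security
    feature" test even though each one is technically unique.
    """
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Each provisioned device gets its own password; collisions are negligible.
pw_a = provision_default_password()
pw_b = provision_default_password()
assert pw_a != pw_b and len(pw_a) == 16
```

The statute's alternative — forcing the user to set a new password on first use — shifts the burden to firmware UX rather than the provisioning line.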
4. Automotive / Vehicle Hacking
4.1 — The Car That Could Be Driven from a Laptop
In 2015, two security researchers connected their laptop to the internet and took control of a 2014 Jeep Cherokee doing 70 mph on a St. Louis highway. They killed the engine. They seized the brakes. The driver — a journalist who had consented to the demonstration — could do nothing but document his own helplessness as the vehicle coasted to a stop. Chrysler recalled 1.4 million vehicles in the days that followed. The researchers faced no charges. But the story of why they didn't get arrested is just as important as the attack itself.
A modern vehicle contains 50–100 Electronic Control Units (ECUs) — computers controlling engine management, braking (ABS/ESC), steering assist, airbag deployment, telematics, and infotainment. All ECUs connected to external interfaces (OBD-II, Bluetooth, cellular, Wi-Fi, TPMS receiver) satisfy the § 1030(e)(2)(B) interstate commerce test. Van Buren v. United States (2021) clarified that "exceeds authorized access" turns on whether the user was permitted to access the particular system — not on manner of access. For vehicles: a car owner has broad authorization to use the vehicle, but that authorization does not clearly extend to reprogramming airbag ECUs or modifying engine maps in ways not sanctioned by the OEM.
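The external interfaces named above speak ordinary request/response protocols. As a concrete, read-only illustration — and even this should only ever be tried on a vehicle you own — here is a sketch of the standard OBD-II Mode 01 query for engine RPM (PID 0x0C) and its published decode formula; the zero padding bytes are one common convention, and no CAN hardware is touched:

```python
def build_obd2_request(mode: int, pid: int) -> bytes:
    """Build the 8-byte CAN data field for an OBD-II query (ISO 15765-4 style).

    Byte 0 is the payload length (2: mode + PID); the remaining bytes are
    padding. The request is normally sent on CAN ID 0x7DF (functional
    broadcast to all ECUs).
    """
    return bytes([0x02, mode, pid, 0x00, 0x00, 0x00, 0x00, 0x00])

def decode_rpm(a: int, b: int) -> float:
    """SAE J1979 formula for PID 0x0C: RPM = (256*A + B) / 4."""
    return (256 * a + b) / 4

request = build_obd2_request(mode=0x01, pid=0x0C)
assert request == bytes([0x02, 0x01, 0x0C, 0, 0, 0, 0, 0])
# A hypothetical response payload A=0x1A, B=0xF8 decodes to 1726 RPM:
assert decode_rpm(0x1A, 0xF8) == 1726.0
```

Reading diagnostic data from your own car sits at the safe end of the matrix; writing to ECUs, even your own, starts accumulating the regulatory exposure discussed in § 4.6.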
4.2 — Miller/Valasek Jeep Hack (2015) — No Charges
Charlie Miller and Chris Valasek, working with Wired journalist Andy Greenberg, remotely compromised a 2014 Jeep Cherokee via its Uconnect cellular telematics system (Sprint network, Harman head unit). They demonstrated full remote control of steering, braking, and transmission while the vehicle was on a highway at 70 mph. The researchers disclosed responsibly to Chrysler before publication; Chrysler issued a recall (1.4 million vehicles) and an OTA patch.
Why no charges were filed: Miller and Valasek had Greenberg's written consent (he owned the Jeep and agreed to the demonstration). The authorization — a car owner consenting to testing his own vehicle — satisfied the CFAA authorization element. No third-party vehicles were accessed. The Sprint network's cellular infrastructure was implicated as an access path, but the researchers were invited onto that network as legitimate subscribers.
The legal lesson: Researcher authorization flowed from the vehicle owner's consent, not from Chrysler's blessing. Chrysler's recall cooperation was voluntary, not legally compelled. If the same attack had been conducted against a vehicle the researchers did not own, with or without Chrysler's knowledge, the CFAA analysis would be entirely different.
4.3 — Tesla Bug Bounty Scope
Tesla operates a bug bounty program (through Bugcrowd, scope at bugcrowd.com/tesla) covering in-scope ECUs, the Autopilot system, the Tesla app, and the Tesla website. Critical exclusions: physical attacks against third-party vehicles, attacks requiring physical access to another person's vehicle, and any testing affecting vehicle safety systems in a non-isolated environment. Tesla explicitly requires testing on researcher-owned or researcher-controlled test vehicles. The program has paid out over $1 million to researchers and is one of the most active automotive VDPs.
4.4 — 49 U.S.C. § 30170: Falsifying Vehicle Safety Records
This statute criminalizes falsifying or destroying vehicle safety records submitted to NHTSA. Maximum: 15 years. It is not directly applicable to ECU hacking but becomes relevant when a researcher modifies OEM diagnostic data or when automakers misrepresent the security posture of recalled systems to NHTSA.
4.5 — SPY Car Act: The Law That Never Passed
The Security and Privacy in Your Car (SPY Car) Act has been introduced in Congress multiple times (2015, 2017, 2019, 2021) and has never passed. It would have required NHTSA to establish standards for vehicle cybersecurity and privacy, mandated isolation of safety-critical systems from non-safety systems, and required a "cyber dashboard" disclosing data collection to consumers. NHTSA's current cybersecurity guidance (2022 update) is entirely voluntary.
4.6 — Research Authorization Landscape
Testing vehicle systems requires layered authorization:
- Own the vehicle or have written owner consent — the Van Buren authorization baseline
- Isolate from public road networks — testing on public roads with live ECU manipulation creates reckless endangerment exposure independent of CFAA
- Avoid aftermarket ECU modification criminalizing statutes — modifying engine management to defeat emissions controls triggers EPA Clean Air Act § 203 prohibition on removing or rendering inoperative emission control systems (42 U.S.C. § 7522), civil penalty up to $44,539 per violation
- OEM bug bounty scope — if testing under a bug bounty, verify scope covers the specific ECU and attack vector before proceeding
5. Drone / UAV Hacking
5.1 — FAA Part 107: Before You Launch, You Need a License
14 C.F.R. Part 107 governs small unmanned aircraft systems (sUAS) operated commercially. Key rules: operator must hold a Remote Pilot Certificate, drone must remain within visual line of sight, operations above 400 feet AGL require waiver, no operations over moving vehicles without waiver, no operations over people without waiver. Violating Part 107 is a civil violation enforced by the FAA — civil penalties up to $32,666 per violation. Criminal prosecution under 49 U.S.C. § 46306 (operating without an airman certificate) is available but rarely used.
5.2 — Counter-UAS Authority: Only Federal Agencies Can Legally Jam
The Preventing Emerging Threats Act of 2018, enacted as part of the FAA Reauthorization Act of 2018 (Pub. L. 115-254), grants counter-UAS authority to DOJ and DHS for protection of covered facilities and assets; DOD and DOE hold parallel authority under separate defense and energy statutes (10 U.S.C. § 130i; 50 U.S.C. § 2661). This authority includes the ability to detect, identify, monitor, track, and disrupt (including by jamming or spoofing) UAS operations that pose a threat. No private party has this authority. Property owners, law enforcement without federal authorization, and security researchers cannot legally jam, spoof, or take over drone communications regardless of what the drone is doing over their property.
5.3 — 18 U.S.C. § 32: Shooting Down a Drone Is Aircraft Sabotage
Aircraft sabotage covers destruction of aircraft (§ 32(a)) and destruction of aircraft facilities (§ 32(b)). The statute applies to "aircraft" as defined in 49 U.S.C. § 40102(a)(6) — any contrivance invented, used, or designed to navigate or fly in the air. FAA has confirmed that UAS are "aircraft" under this definition. Penalty: up to 20 years; if death results, life imprisonment or death.
Practical implication: Physically disabling a drone — with a net gun, projectile, or signal jammer — constitutes aircraft sabotage if it "damages, destroys, disables, or wrecks" the aircraft. Even shooting down a drone hovering over your private property is a federal felony.
5.4 — GPS Spoofing: Four Statutes, Zero Exceptions
GPS spoofing — transmitting false GPS signals to cause a drone to navigate to an unintended location — implicates multiple statutes:
- 47 U.S.C. § 333 (FCC Act): Willful or malicious interference with licensed radio communications. GPS uses L1 (1575.42 MHz) and L2 (1227.60 MHz) frequencies allocated to the FAA and DOD. Penalty: up to $100,000 fine and/or one year imprisonment (criminal); civil forfeiture of equipment.
- CFAA § 1030(a)(5)(A): If the spoofing causes the drone (a protected computer) to crash or deviate from its mission — the drone's autopilot system is damaged by the false input.
- 18 U.S.C. § 32: GPS spoofing that causes an aircraft to crash is aircraft sabotage.
- 18 U.S.C. § 1362: Interference with government communications — applicable if the spoofing affects DOD, Coast Guard, or FAA navigation or communications systems.
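The two carrier frequencies cited in the § 333 bullet are not arbitrary: both are integer multiples of the GPS constellation's 10.23 MHz fundamental clock, which is part of why interference on them is treated as interference with a federally operated radionavigation service. A quick arithmetic check:

```python
GPS_FUNDAMENTAL_MHZ = 10.23  # shared atomic-clock fundamental frequency

L1_MHZ = 154 * GPS_FUNDAMENTAL_MHZ  # 1575.42 MHz — civilian C/A code carrier
L2_MHZ = 120 * GPS_FUNDAMENTAL_MHZ  # 1227.60 MHz — P(Y)/L2C carrier

assert round(L1_MHZ, 2) == 1575.42
assert round(L2_MHZ, 2) == 1227.60
```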
5.5 — RF Jamming to Take Down Drones (47 U.S.C. § 333)
Jamming drone control links (typically 2.4 GHz or 5.8 GHz) or video downlinks is willful interference with licensed radio communications under the FCC Act. The FCC has issued explicit guidance (2020) stating that no exception exists for jamming drones — not for self-defense, not for property protection, not for privacy. First offense: criminal fine up to $100,000 and/or one year imprisonment. The FCC has a dedicated enforcement unit (Spectrum Enforcement Division) that investigates jammer sales and use.
6. Smart Grid / Energy Sector
6.1 — NERC CIP Standards: The Rules the Grid Lives By
The North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) standards are mandatory for all owners, operators, and users of the Bulk Electric System (BES) in North America. Key standards:
- CIP-002: BES Cyber System categorization — identifying which systems are high/medium/low impact
- CIP-005: Electronic security perimeters — controlling access to BES cyber systems
- CIP-007: Systems security management — patch management, ports and services, malware prevention
- CIP-010: Configuration change management and vulnerability management
- CIP-013: Supply chain risk management (mandatory since 2020)
NERC CIP violations are enforced by the Federal Energy Regulatory Commission (FERC). Penalties up to $1 million per violation per day. The largest NERC CIP penalty imposed was $10 million (against an undisclosed entity, 2018). NERC CIP does not create criminal liability — it is a regulatory framework — but criminal CFAA charges and NERC CIP violations can arise from the same incident.
6.2 — FERC Enforcement Authority
FERC has authority under the Federal Power Act (16 U.S.C. § 824o) to impose civil penalties for NERC CIP violations and to order emergency interconnection or disconnection of grid elements. FERC does not have independent criminal enforcement authority — it refers criminal matters to DOJ. FERC's order authority was expanded by the Infrastructure Investment and Jobs Act (2021) to cover cybersecurity threats to critical electric infrastructure.
6.3 — Critical Electric Infrastructure Information (CEII)
18 C.F.R. § 388.113 designates certain power grid vulnerability information as CEII — a non-public, protected designation for information about the physical and cyber security of critical electric infrastructure. Unauthorized disclosure of CEII can result in civil penalties. Security researchers who obtain CEII through vulnerability research (e.g., obtaining SCADA configuration files that reveal grid topology) must handle that material carefully — disclosure to the public or press without CEII authorization procedures is a regulatory violation even if no criminal charges apply.
6.4 — Criminal Exposure for Unauthorized Grid Testing
Unauthorized scanning or probing of utility grid control systems — even passive port scanning of a SCADA IP — crosses into § 1030(a)(2) territory if it reaches the production network without authorization. The § 1030(c)(4)(B) enhancement applies. An energy utility's control systems are unambiguously within the energy sector (PPD-21 Sector 8). There is no "just scanning" defense for critical infrastructure. Researchers must have written authorization from the utility's CISO and legal counsel — verbal authorization and assumed permission from a pentesting engagement scoped to IT networks do not extend to OT networks.
7. Medical Device Security
7.1 — FDA Premarket Cybersecurity Requirements (§ 524B, 2023)
Section 524B of the FD&C Act (added by the Consolidated Appropriations Act, 2023) and the FDA's 2023 final premarket cybersecurity guidance impose mandatory cybersecurity requirements for devices submitted for FDA clearance or approval after October 1, 2023. Manufacturers must: submit a plan for monitoring and addressing post-market cybersecurity vulnerabilities; submit a software bill of materials (SBOM); design devices to be updatable and patchable; and implement security controls meeting FDA guidance. Failure to meet these requirements can result in FDA refusing clearance — a regulatory consequence, not a criminal one.
7.2 — HIPAA Coverage for Medical Devices
A medical device that stores, processes, or transmits Protected Health Information (PHI) is subject to HIPAA Security Rule requirements when operated by a covered entity or business associate. An implanted cardiac monitor that transmits ECG data to a hospital's EHR system is a HIPAA-covered device. Unauthorized access to that device — which transmits PHI — creates both CFAA liability (unauthorized computer access) and potential HIPAA liability for the covered entity that failed to secure it. The researcher who accesses the device without authorization may face CFAA charges; the hospital may face OCR enforcement for the security gap that enabled the access.
7.3 — CFAA Enhancement for Hospital Systems
There is no standalone "medical device hacking" statute in the U.S. All criminal exposure flows through the CFAA, with the § 1030(c)(4)(B) enhancement for healthcare and public health infrastructure (PPD-21 Sector 12). Ransomware operators who specifically target hospitals, drug infusion systems, or medical device management platforms face the 20-year ceiling. Researchers who discover vulnerabilities in pacemakers, insulin pumps, or ventilators face the same statutory framework — authorization and good-faith disclosure are the only legal differentiators.
7.4 — No Authorization from the Patient
A patient who has an implanted device does not have authority to authorize a researcher to probe the device manufacturer's infrastructure or the hospital's device management systems. The patient owns the device in the physical sense but the device communicates with manufacturer cloud systems, hospital WLAN, and Bluetooth programmer hardware — none of which the patient can authorize access to. This is a common misunderstanding in medical device security research.
8. Research Safe Harbors for ICS/IoT
8.1 — The Good-Faith Standard in Critical Infrastructure Context
The DOJ's 2022 CFAA good-faith charging policy (see Module 1U) is significantly narrower when applied to critical infrastructure. The standard factors for good-faith protection — authorized testing, minimal data collection, responsible disclosure — are necessary but not sufficient when the target is a power grid, water system, or hospital. The DOJ policy explicitly weighs "the potential for harm" to critical systems in the charging decision, so a researcher who believes they acted in good faith may still be charged.
8.2 — CISA Vulnerability Disclosure Policy
CISA's VDP at cisa.gov/vulnerability-disclosure-policy authorizes security researchers to test certain CISA-operated systems only — the scope explicitly excludes systems managed by third parties, including critical infrastructure operators. Discovering a SCADA vulnerability through Shodan, probing it to confirm exploitability, and reporting to CISA is not covered by CISA's VDP unless CISA itself operates the system.
8.3 — ICS-CERT Coordinated Disclosure Process
The ICS-CERT coordinated disclosure process (cisa.gov/ics-cert) involves: (1) researcher submits report to CISA; (2) CISA notifies vendor; (3) CISA works with vendor on patch timeline; (4) CISA publishes advisory. The process does not grant retroactive authorization for any unauthorized access that occurred during research. It is purely a disclosure coordination mechanism. Researchers who conducted authorized testing (own lab equipment, vendor test environment, written scope from operator) should use this process. Researchers who accessed live production ICS systems without authorization gain no protection from criminal liability by filing an ICS-CERT report.
8.4 — Lab vs. Production: The Only Safe Harbor That Actually Works
The single most important safe harbor for ICS/IoT researchers is isolation of testing to lab environments. Purchasing a used PLC on eBay and testing Modbus attacks in an air-gapped lab is legal. Connecting to a live SCADA network without authorization is a federal crime regardless of intent. Vendors including Siemens, Schneider Electric, and Rockwell Automation maintain test environments and researcher programs — engaging through those channels provides the authorization framework that standalone Shodan reconnaissance does not.
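For researchers on the lab-only path, the protocol work itself is legal to study and simple to do. A pure-stdlib sketch of the Modbus/TCP request frame that lab tooling assembles — a Read Holding Registers query, for use only against a PLC you own on an air-gapped network (the register address and count are arbitrary examples):

```python
import struct

def modbus_read_holding_registers(transaction_id: int, unit_id: int,
                                  start_addr: int, count: int) -> bytes:
    """Build a Modbus/TCP 'Read Holding Registers' (function 0x03) request.

    Frame = MBAP header (7 bytes) + PDU (5 bytes):
      transaction id (2) | protocol id = 0 (2) | length (2) | unit id (1)
      function code (1)  | start address (2)   | register count (2)
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    # The MBAP length field counts the unit-id byte plus the PDU.
    mbap = struct.pack(">HHHB", transaction_id, 0, 1 + len(pdu), unit_id)
    return mbap + pdu

# Example: query 10 registers starting at address 0 on unit 1.
frame = modbus_read_holding_registers(transaction_id=1, unit_id=1,
                                      start_addr=0, count=10)
print(frame.hex())  # 12-byte request, ready for a socket to a *lab* PLC
```

Sending this same 12-byte frame to a production utility IP is the Section 9 RED-row conduct — the protocol knowledge is identical; only the target and the authorization differ.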
9. Safe / Grey / Red Matrix
| Activity | Risk Level | Analysis |
|---|---|---|
| Passive Shodan query for ICS systems (no active probe) | SAFE | Shodan indexes publicly visible data; no direct system access; CFAA § 1030(a)(2) not triggered |
| Active Modbus port scan of a live utility SCADA IP found via Shodan | RED | Active probing of production ICS without authorization = § 1030(a)(2); § 1030(c)(4)(B) enhancement likely |
| Testing smart home devices (Nest thermostat, Philips Hue) you own | SAFE | Van Buren authorization — you own and operate the device; no third-party system accessed |
| Testing vehicle OBD-II port on your own car in your driveway | SAFE | Vehicle owner authorization; no wireless transmission to third-party systems; EPA emissions caveat if ECU modified |
| Fuzzing a neighbor's smart meter via RF in your driveway | RED | No authorization to access neighbor's device; § 1030(a)(5)(A) + utility infrastructure enhancement |
| Drone GPS spoofing in open field away from airports | RED | 47 U.S.C. § 333 RF interference; no research exception; FCC enforcement regardless of harm |
| Medical device fuzzing in isolated lab with device you purchased | GREY | Lab isolation removes CFAA exposure; FDA regulatory framework does not prohibit reverse engineering; if device connects to manufacturer cloud, active probing of that cloud is unauthorized |
| Scanning NERC CIP utility OT network under written SOW | GREY | Written scope is necessary but verify OT scope explicitly; IT-scope pentesting SOW does not cover OT |
| Reporting a water utility SCADA vulnerability to ICS-CERT without prior access | SAFE | Disclosure without unauthorized access; legal under any framework |
| Setting up a Mirai-like botnet scanner in a cloud VPS to find vulnerable cameras | RED | § 1030(a)(5)(A) even if no data taken; credential brute-force is unauthorized access to each camera |
| Smart grid research under utility's written security research agreement | SAFE | Written authorization from operator; follow NERC CIP data handling for any CEII encountered |
| RF jamming a drone flying over your property | RED | 47 U.S.C. § 333 federal crime; no property-defense exception; 18 U.S.C. § 32 if drone crashes |
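The technical line between the first two rows is worth making concrete: a Shodan search runs entirely against Shodan's own index, so no packet from the researcher ever reaches the listed system. A minimal sketch, assuming the public Shodan REST API's `/shodan/host/search` endpoint (the API key is a placeholder):

```python
from urllib.parse import urlencode

def shodan_search_url(api_key: str, query: str) -> str:
    """Build a Shodan index search URL. The HTTP request goes to
    api.shodan.io only -- the systems matching the query are never
    contacted, which is why this stays on the SAFE side of the matrix."""
    params = urlencode({"key": api_key, "query": query})
    return f"https://api.shodan.io/shodan/host/search?{params}"

# Passive: ask Shodan's index for hosts it has already seen with port 502 open.
url = shodan_search_url("YOUR_API_KEY", "port:502 product:Modbus")
print(url)

# The RED-zone counterpart would be opening a socket to a returned IP on
# port 502 -- that is direct access to the system itself, and no Shodan
# intermediary launders the authorization question.
```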
10. Key Cases and Incidents
10.1 — Vitek Boden and the World's First ICS Criminal Conviction (Australia, 2001)
He had inside knowledge of the system. He had a stolen radio transmitter. And he had a grudge.
Vitek Boden, a former contractor for Hunter Watertech (which installed the SCADA system for Maroochy Shire Council's sewage system in Queensland, Australia), used a laptop and a stolen wireless radio transmitter to send unauthorized commands to pump stations. The attacks caused raw sewage to spill into local parks and a hotel lagoon — releasing approximately 800,000 liters of sewage over a period of weeks in 2000. Boden was convicted in 2001 of causing serious environmental harm and sentenced to two years imprisonment. Legally significant as the first prosecution demonstrating that wireless SCADA attacks were prosecutable under existing criminal frameworks without a specific ICS statute.
10.2 — Stuxnet: The Cyberweapon That Rewrote the Rules (2010–)
In 2010, the world learned what a nation-state cyberweapon looked like. It used four zero-day exploits simultaneously — a number never seen before in a single piece of malware. It was designed not to steal data, not to surveil, but to physically destroy industrial equipment. One thousand Iranian centrifuges at the Natanz enrichment facility spun themselves apart while their control systems reported everything was normal.
Stuxnet — widely attributed to a joint U.S.-Israeli operation codenamed OLYMPIC GAMES — was a nation-state cyberweapon that destroyed approximately 1,000 Iranian centrifuges at the Natanz uranium enrichment facility. It used four zero-day exploits simultaneously (a record), targeted Siemens S7-315 and S7-417 PLCs, and was the first cyberweapon designed to cause physical destruction of industrial equipment. Stuxnet has never been prosecuted. As a nation-state operation, it is governed by international law frameworks (laws of armed conflict, sovereignty, the Tallinn Manual) rather than criminal law.
Legal significance for ICS security broadly: Stuxnet demonstrated that OT attacks could achieve kinetic effects — physical destruction of industrial machinery — and established the template for all subsequent ICS weapons (Industroyer, TRITON, Pipedream/INCONTROLLER). The discovery of Stuxnet accelerated NERC CIP mandatory standards, FDA medical device security guidance, and Congressional interest in ICS protection legislation. The legal architecture responding to ICS threats traces its modern form to the Stuxnet revelation.
10.3 — GRU Takes Out Ukraine's Power Grid — and the DOJ Responds (2020)
In December 2015, the lights went out for 230,000 Ukrainian citizens. A year later, in December 2016, a Kyiv transmission substation briefly went black. The malware — BlackEnergy, then Industroyer — had been developed and deployed by GRU Unit 74455, known as "Sandworm."
The October 2020 DOJ indictment of six GRU officers (Unit 74455, "Sandworm") charged them with wire fraud conspiracy, computer fraud conspiracy, and damaging protected computers in connection with the 2015 and 2016 Ukraine power grid attacks (BlackEnergy/Industroyer malware), the 2017 NotPetya global attack, the 2018 Winter Olympics attack (Olympic Destroyer), and the 2019 attacks against websites and entities in the country of Georgia. The Ukraine power grid attacks caused power outages affecting approximately 230,000 customers in December 2015 and a second attack in December 2016 that briefly knocked out a Kyiv transmission substation.
Charges filed: The CFAA damage charges for the grid attacks carried up to 20 years each under the critical infrastructure enhancement. The NotPetya charges — based on more than $10 billion in global damages — involved the largest damage figure ever charged in a CFAA infrastructure case. No defendant is in U.S. custody; the charges serve as attribution, sanctions triggers, and deterrence signaling.
10.4 — TRITON/TRISIS: The Attack Designed to Kill (2017)
It wasn't designed to steal. It wasn't designed to extort. It was designed to disable the only system standing between an industrial accident and a catastrophe.
Discovered in 2017 at a petrochemical plant in Saudi Arabia, TRITON malware targeted Schneider Electric Triconex Safety Instrumented Systems (SIS) — the last line of defense against catastrophic industrial accidents. Successful manipulation of a SIS could disable emergency shutdown systems, enabling physical disasters (explosions, fires, chemical releases). Attribution: U.S. government attributed the attack to the Russian Central Scientific Research Institute of Chemistry and Mechanics (TsNIIKhM) in 2022. DOJ indicted Evgeny Viktorovich Gladkikh in 2022 for conspiracy to cause damage to energy facilities under 18 U.S.C. § 1365 (tampering with a consumer product), § 1030 CFAA damage, and related charges. Penalty exposure: up to 40 years (20 years per count). Gladkikh remains in Russia.
10.5 — United States v. Jha (Mirai Botnet) — Full Analysis
See Section 3.2. Additional legal note: the Jha prosecution was venued in the District of Alaska, where the FBI field office that led the Mirai investigation is based and where infected IoT devices were located. Venue in CFAA cases can be established wherever a victim was harmed, giving DOJ flexibility to choose favorable forums.
Key Statutes Reference
| Statute | Description | Max Penalty |
|---|---|---|
| 18 U.S.C. § 1030(c)(4)(B) | CFAA critical infrastructure enhancement | 20 years |
| 18 U.S.C. § 1030(a)(5)(A) | Knowing damage to protected computer | 10 years (base) |
| 18 U.S.C. § 32 | Aircraft sabotage (includes drones) | 20 years; life if death |
| 47 U.S.C. § 333 | Willful RF interference (FCC Act) | 1 year + $100K |
| 18 U.S.C. § 1365 | Tampering with consumer product (SIS attack) | 20 years |
| 18 U.S.C. § 1362 | Government communication interference | 10 years |
| Cal. Civ. Code § 1798.91.04 | California IoT security law | Civil: $2,500/device |
| 42 U.S.C. § 7522 | EPA emission control tampering (ECU mod) | Civil: $44,539/violation |
| 16 U.S.C. § 824o | Federal Power Act / NERC CIP enforcement | Civil: $1M/day |
| 49 U.S.C. § 30170 | Falsifying vehicle safety records | 15 years |
Module 1Z — SCADA, IoT, Automotive, and Drone Hacking Law | LawZeee Phase 1 | Last updated 2026-04-17