Non-Lawyers Summary

The Federal Trade Commission is the main federal cop for corporate cybersecurity failures in the United States. It does not prosecute hackers — it sues the companies that got hacked when those companies had terrible security and made promises to customers they could not keep. If a company tells you "we take your privacy seriously" and then stores passwords in plaintext and gets breached, the FTC calls that a deceptive practice and can put that company under a consent decree whose monitoring obligations can run for twenty years. The Wyndham hotel case established that the FTC can do this under a broad statutory hook — "unfair or deceptive acts or practices" — without Congress passing a specific cybersecurity law.

For security researchers and pen testers, this matters more than it might appear at first glance. FTC enforcement is the background pressure that pushes companies to adopt vulnerability disclosure programs, maintain incident response plans, and actually patch known vulnerabilities. When you find a critical bug, the company's legal team is thinking about FTC exposure if they ignore you. LabMD learned what happens when a company stonewalls a researcher: the FTC brought an administrative enforcement action against it. The Drizly case went further — it named the CEO personally, which is now every CISO's nightmare and every researcher's leverage point.

The FTC also runs specialized regimes under its umbrella: the Safeguards Rule for financial institutions (encryption, MFA, 30-day breach notification), the Health Breach Notification Rule for health apps that fall outside HIPAA, and COPPA for children's data. These rules define the technical controls that companies are legally required to implement — which means when you probe those controls and find them missing, you are documenting a potential regulatory violation, not just a vulnerability.


Overview

FTC Act Section 5 — codified at 15 U.S.C. § 45 — is the FTC's foundational enforcement authority. Its prohibition on "unfair or deceptive acts or practices in or affecting commerce" was written in 1914 to target fraudulent business practices. Courts have since interpreted it to cover cybersecurity failures because: (1) inadequate security can cause substantial harm to consumers with no countervailing benefit, meeting the unfairness standard; and (2) companies that make false representations about their security practices are engaging in deception.

This module covers the Section 5 cybersecurity framework from the ground up — the legal standards, the landmark enforcement actions, the specialized rules, the researcher-facing implications, and the state analogs that extend FTC-style enforcement across jurisdictions.


FTC Act § 5 Authority: The Statutory Foundation

The 1914 Law That Now Governs Your Company's Password Policy

Congress wrote the FTC Act in 1914. They were thinking about price-fixing, fraudulent advertising, and railroad monopolies. They were not thinking about cloud databases, bcrypt hashing, or ransomware.

But the law they wrote — 15 U.S.C. § 45 — was broad enough to reach all of it.

Text: "Unfair methods of competition in or affecting commerce, and unfair or deceptive acts or practices in or affecting commerce, are hereby declared unlawful."

The FTC Act creates two distinct theories of liability that have both been applied to cybersecurity:

1. Deception theory: A company makes a representation (express or implied) that is material, false or misleading, and likely to deceive reasonable consumers. A privacy policy promising "industry-standard security" when the company stores passwords in MD5 without salting is a deceptive representation.

2. Unfairness theory: An act or practice causes or is likely to cause substantial injury to consumers that is not reasonably avoidable and not outweighed by countervailing benefits. Inadequate security practices that cause data breaches meet this standard — consumers cannot easily evaluate or avoid a company's backend security choices, and there is no legitimate benefit to cutting corners on encryption.

No Private Right of Action — The FTC's Monopoly on Enforcement

Section 5 does not create a private lawsuit right. Only the FTC can enforce it. Consumers cannot sue a company directly under Section 5 for bad security. This is a significant limitation — it means the FTC must prioritize which cases to bring, and companies that fall below the "FTC enforcement threshold" (too small, wrong sector, no high-profile breach) often face no federal regulatory consequence at all.

The FTC is the gatekeeper. The question is whether they choose to walk through the door.

FTC cybersecurity enforcement typically follows this path:

  1. Investigation — The FTC opens an investigation, usually triggered by a breach or a complaint. The company receives a Civil Investigative Demand (CID) — essentially a government subpoena for documents, data, and testimony.
  2. Complaint and proposed consent order — The FTC files a complaint and simultaneously proposes a consent order (consent decree). The case is typically settled at this stage; very few FTC cybersecurity cases go to trial.
  3. Commission vote and public comment — The proposed order goes through a public comment period and is voted on by the FTC commissioners.
  4. Monitoring period — Most cybersecurity consent decrees run for 20 years and require biennial third-party security assessments submitted to the FTC.
  5. Modification or termination — The company can petition for modification or termination after demonstrating compliance; the FTC can modify the order if circumstances change.

Section 5 itself carries no per-violation civil penalty. However, once a consent decree is in place, violations of the decree expose the company to civil penalties under 15 U.S.C. § 45(m): up to $51,744 per day per violation (adjusted periodically for inflation).

This is how the FTC gets teeth — the initial enforcement action may result in no money damages, but violating the resulting order is extremely expensive.

Twitter's 2022 settlement included a $150 million civil penalty under the pre-existing 2011 consent decree framework — a direct application of this mechanism.
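The per-violation, per-day mechanics described above compound quickly. A sketch of the worst-case arithmetic, using the statutory maximum from the text; the violation counts and durations below are hypothetical:

```python
# Hedged sketch: how 15 U.S.C. § 45(m) exposure compounds when each day a
# violation continues is counted as a separate violation. The cap matches
# the inflation-adjusted figure cited in the text; scenarios are hypothetical.
PENALTY_CAP = 51_744  # current max per violation, adjusted periodically

def max_exposure(violations: int, days: int) -> int:
    """Worst-case exposure: each violation continuing for `days` days."""
    return PENALTY_CAP * violations * days

# A single ongoing order violation left unfixed for 90 days:
print(f"${max_exposure(1, 90):,}")   # $4,656,960

# Three distinct violations continuing for a full year:
print(f"${max_exposure(3, 365):,}")  # $56,659,680
```

Actual penalties are negotiated and rarely hit the statutory ceiling, but the ceiling is what shapes settlement leverage.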


Unfairness Theory in Cybersecurity: The Wyndham Standard

Three Breaches. One Lawsuit. A New Era of Cybersecurity Law.

Citation: FTC v. Wyndham Worldwide Corp., 799 F.3d 236 (3d Cir. 2015)

Just before the first breach, the Wyndham hotel chain's cybersecurity program had a feature that would define it in the history of U.S. cybersecurity law: the administrator account password was "password."

Between 2008 and 2010, Wyndham suffered three separate data breaches. The same vulnerabilities. The same failures. Three times. The result: more than $10.6 million in fraudulent charges on approximately 619,000 payment card accounts.

The FTC sued. Wyndham fought back with everything they had, arguing that Section 5 did not authorize cybersecurity regulation, that the FTC had never given them fair notice of what the standard required, and that the FTC was attempting to regulate areas Congress had left to other agencies.

The Third Circuit was unmoved. It upheld the FTC's authority to bring cybersecurity unfairness cases under Section 5 and rejected all three arguments:

  • The "unfairness" standard under Section 5 applies to security practices.
  • The FTC's prior enforcement actions, consent decrees, and guidance publications (including its business guidebook "Protecting Personal Information: A Guide for Business") gave companies sufficient notice of what reasonable security requires.
  • The existence of other cybersecurity statutes (HIPAA, GLBA) did not foreclose FTC jurisdiction.

The Three-Part Unfairness Test — The Standard Everything Is Measured Against

To prove a cybersecurity practice is "unfair," the FTC must show:

  1. Substantial harm — The practice causes or is likely to cause significant injury to consumers (financial loss, identity theft, payment card fraud).
  2. Not reasonably avoidable — Consumers cannot meaningfully protect themselves from a company's backend security failures; they cannot audit the company's server configurations before handing over their payment card.
  3. Not outweighed by countervailing benefits — There is no legitimate benefit to failing to encrypt cardholder data, using default passwords, or skipping network segmentation.

The foreseeability element adds another layer: the FTC's theory requires that the harm was foreseeable from the company's security choices. When the same vulnerabilities lead to three successive breaches, foreseeability is not contested.

A critical implicit element: the harm would likely not have occurred if the company had implemented reasonable security measures. In Wyndham's case, basic encryption and non-default credentials would have prevented or substantially limited the breaches.

Why this matters for researchers: Wyndham established that companies cannot claim ignorance of what reasonable security looks like. When you document a company's use of default credentials, cleartext credential storage, or absent network segmentation, you are documenting precisely the kind of failure the Wyndham court said Section 5 covers.

Your report is evidence of their foreseeable harm.


Deception Theory: Privacy Promises vs. Security Reality

The Gap Between What They Say and What They Do

The deception theory is simpler to understand but equally powerful. A company makes a representation about its security practices, and that representation is materially false.

Privacy policy language: Statements like "we use industry-standard security measures," "your data is protected by SSL," or "we take reasonable precautions to protect your information" become deceptive representations if the company's actual security practices fall short.

"We do not share your data with third parties": Deceptive if the company is actually sharing data through tracking pixels, third-party analytics integrations, or advertising partnerships.

Cookie and tracking disclosures: If a company's cookie disclosure says it collects only "session data" for functionality but actually builds persistent cross-site behavioral profiles for advertising targeting, that is a deceptive practice under FTC theory.

"Reasonable security" representations: The FTC has taken the position that making any security representation — even a vague one — establishes a baseline that must be met.

Twitter/X — When 2FA Phone Numbers Became Ad Targeting Data

The 2022 Twitter consent decree arose in part from Twitter's representation that phone numbers collected for two-factor authentication would be used only for security purposes.

Twitter then used those phone numbers for advertising targeting.

A concrete, documented mismatch between representation and reality. A $150 million lesson in the difference between what a privacy policy says and what an engineering team does.


Major FTC Enforcement Actions

1. Wyndham Hotels — The Benchmark Case (3d Cir. 2015)

Citation: FTC v. Wyndham Worldwide Corp., 799 F.3d 236 (3d Cir. 2015); settled 2015

Outcome: Consent order — no civil penalty (initial enforcement), but the Third Circuit ruling validated the FTC's cybersecurity authority. Wyndham agreed to a comprehensive information security program, biennial third-party audits for 20 years, and enhanced franchise oversight.

Security failures: Default passwords, unencrypted payment card data, inadequate firewalls, no oversight of franchisee networks.

Significance: Established the three-part unfairness test for cybersecurity and the FTC's authority to bring these cases.

2. LabMD — The Researcher Who Became a Weapon, and the Company That Was Destroyed

Citation: LabMD, Inc. v. FTC, 894 F.3d 1221 (11th Cir. 2018)

Just before the FTC came for LabMD, a cybersecurity firm called Tiversa had approached them with a warning: LabMD's patient data had been found on a peer-to-peer file-sharing network. LimeWire. Sensitive patient data — insurance information, Social Security numbers — floating in the open.

Tiversa offered to help. For a fee.

LabMD declined. And then Tiversa reported them to the FTC.

The FTC issued an administrative complaint alleging LabMD's security practices were unfair. The FTC administrative law judge initially dismissed the case, finding insufficient evidence of "substantial harm." The full FTC Commission reversed. LabMD appealed.

The Eleventh Circuit vacated the FTC's cease-and-desist order — but not because the FTC lacked authority. The order was too vague to comply with. It required LabMD to establish a "reasonably designed" security program without specifying what that program must do. A cease-and-desist order must be specific enough for a company to know what it must do to comply.

But that wasn't the real story.

A former Tiversa employee later testified before Congress that Tiversa had fabricated or exaggerated breach data in multiple FTC referrals. The company that triggered LabMD's destruction may have been running a protection racket dressed as security research.

LabMD shut down in 2014 during the FTC proceedings. The litigation lasted years. The CEO wrote a book about it: "The Devil Inside the Beltway."

What LabMD did NOT decide: LabMD did not hold that the FTC lacks authority to regulate cybersecurity practices under the unfairness standard. That holding came from Wyndham. LabMD is about order specificity, not underlying authority.

Researcher implication: If you discover a breach and disclose it to a company as leverage for a commercial contract, and then report to the FTC when they decline, you may be in Tiversa's position. The FTC is not a general-purpose enforcement tool for security firms with commercial grievances.

3. Drizly — The Day a CEO Became Personally Liable

Citation: FTC v. Drizly, LLC (Docket No. C-4762, 2023)

In 2020, Drizly — an alcohol delivery app — suffered a data breach that exposed 2.5 million user records.

The FTC's investigation found something more damaging than the breach itself: security problems at Drizly had been known internally since at least 2018 — two years before the breach. The risks were documented. The decision was made not to fix them.

The FTC's consent order was not just against Drizly as a company. It named James Rellas, Drizly's CEO, personally, and required him to implement a security program at any future company he leads that collects data on 25,000 or more consumers, for a period of ten years.

Not because Rellas personally hacked anyone. Because he knew about security deficiencies and failed to remediate them.

The implications rippled through every CISO office in America. The person at the top who receives the security researcher's report — who sees the vulnerability, who makes the decision whether to act — now faces personal regulatory liability if that decision is to wait.

Researcher angle: This is your leverage point when filing an FTC complaint. If you have documented evidence that a company knew about a vulnerability (e.g., they acknowledged your report, failed to fix it, and then got breached), the FTC can now reach the individual executive who made the decision to ignore the risk.

4. Twitter/X — $150 Million for a Broken Promise

Citation: United States v. Twitter, Inc. (N.D. Cal. 2022) (filed by DOJ on behalf of FTC)

In 2011, Twitter entered a consent order with the FTC related to earlier privacy and security failures. The 2022 action was a violation of that order.

Twitter had told users that phone numbers and email addresses collected for two-factor authentication would be used only for security. Twitter then used this data for advertising targeting, reaching users who had only provided their contact information for 2FA security.

Outcome: $150 million civil penalty — at the time, one of the largest FTC privacy penalties. Additional requirements: enhanced privacy controls, user notifications, compliance reporting, deletion of data collected under false pretenses.

The civil penalties were available because Twitter was violating an existing consent order — activating the per-day penalty mechanism of 15 U.S.C. § 45(m). This is the mechanism that turns an initial zero-dollar FTC enforcement action into hundreds of millions in future exposure.

5. Meta/Facebook — $5 Billion and a Seat at the Board

Citation: United States v. Facebook, Inc. (D.D.C. 2019) (filed by DOJ on behalf of FTC)

In early 2018, the scale of the Cambridge Analytica scandal became clear: up to 87 million users' data had been shared with a political data firm without consent — in direct violation of Facebook's 2012 FTC consent order, which had required Facebook to obtain affirmative consent before sharing user data beyond users' privacy settings.

The response:

  • $5 billion civil penalty — the largest FTC privacy penalty in history.
  • A 20-year compliance program.
  • An independent privacy committee on Facebook's board.
  • Personal certifications of compliance by CEO Mark Zuckerberg.
  • Prohibition on monetizing data of users aged 13–17 without consent.
  • Enhanced oversight of third-party apps.

The fine reached $5 billion because multiple violations of the 2012 consent order, the massive scale (87 million affected users), and the prolonged duration of the violations compounded under the per-violation, per-day penalty structure.


FTC Safeguards Rule (16 C.F.R. Part 314)

The Financial Institution That Doesn't Know It's a Financial Institution

The Gramm-Leach-Bliley Act (GLBA) of 1999 directed the FTC and other federal financial regulators to issue standards protecting nonpublic personal financial information. The FTC's implementing rule — the Safeguards Rule (16 C.F.R. Part 314) — governs non-bank financial institutions.

"Financial institution" under the Safeguards Rule is defined far more broadly than the common understanding. It covers any company that is "significantly engaged" in providing financial products or services to consumers, including:

  • Mortgage brokers and lenders
  • Payday lenders
  • Auto dealers that arrange financing
  • Tax preparation services (H&R Block, Jackson Hewitt, independent preparers)
  • Accounting firms handling consumer financial data
  • Investment advisors not regulated by the SEC
  • Collection agencies
  • Higher education institutions with student loan operations

It does not cover banks, credit unions, broker-dealers, or investment companies regulated under other GLBA titles.

The rule was substantially revised in 2021 (effective for most requirements in June 2023), adding prescriptive technical requirements that previously existed only as general "reasonable security" standards.

The Ten Controls Now Written Into Law

Under the revised Safeguards Rule, covered financial institutions must implement:

1. Qualified Individual — A designated individual responsible for the information security program (a CISO-equivalent), who must report to the board of directors or senior officer at least annually.

2. Risk Assessment — A written risk assessment identifying foreseeable internal and external security risks to covered information. Must be updated periodically.

3. Access Controls — Limiting access to customer information to authorized users only; implementing the principle of least privilege; locking out users who fail to authenticate.

4. Encryption — Customer information must be encrypted in transit (over external networks) and at rest. This is now a specific prescriptive requirement, not a vague "reasonable security" standard.

5. Multi-Factor Authentication (MFA) — Required for any individual accessing customer information systems, unless a qualified individual approves alternative equivalent controls. This is a mandatory specific control.

6. Secure Development Practices — Applications used to transmit, access, or store customer information must be developed in accordance with secure development guidelines and tested periodically.

7. Penetration Testing and Vulnerability Assessment — Annual penetration testing and vulnerability assessments at least every six months. Security researchers take note: this is a legal mandate for pen testing at covered entities.

8. Audit Logging — Monitor and maintain audit logs for systems containing customer information, including access, security events, and modifications.

9. Incident Response Plan — Written incident response plan addressing detection, classification, containment, eradication, recovery, and post-incident review.

10. 30-Day Breach Notification to FTC — For institutions with 500 or more customers affected by a breach of unencrypted customer information, notification to the FTC within 30 days of discovery. (Effective May 2024.)

Exemption: Financial institutions with fewer than 5,000 customers are exempt from some of the more prescriptive requirements but must still maintain a general information security program.
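The ten controls above lend themselves to a gap analysis. A minimal sketch, treating the revised Safeguards Rule's prescriptive controls as a checklist; the control names paraphrase 16 C.F.R. Part 314, and the assessment data is hypothetical:

```python
# Hedged sketch: Safeguards Rule gap analysis. Control names are
# paraphrases of the rule's requirements, not official regulatory text.
SAFEGUARDS_CONTROLS = [
    "qualified individual designated",
    "written risk assessment",
    "least-privilege access controls",
    "encryption in transit and at rest",
    "multi-factor authentication",
    "secure development practices",
    "annual pen test + semiannual vuln assessment",
    "audit logging",
    "written incident response plan",
    "30-day FTC breach notification procedure",
]

def gap_analysis(implemented: set[str]) -> list[str]:
    """Return the controls a covered institution has not implemented."""
    return [c for c in SAFEGUARDS_CONTROLS if c not in implemented]

# Hypothetical engagement: the client has only three controls in place.
findings = gap_analysis({
    "written risk assessment",
    "audit logging",
    "written incident response plan",
})
for gap in findings:
    print("MISSING:", gap)
```

Each item in `findings` is both a security gap and, for a covered institution, a potential regulatory violation — which is the framing the text above suggests for reporting.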

Penalties for Safeguards Rule Violations

Violations of the Safeguards Rule can be prosecuted as FTC Act violations with civil penalties of up to $51,744 per violation per day under 15 U.S.C. § 45(m)(1)(A). The FTC has actively enforced the Safeguards Rule — most notably against tax preparers and auto dealers.


Health Breach Notification Rule (16 C.F.R. Part 318)

The Gap HIPAA Left — And the FTC Filled

HIPAA's privacy and security rules cover "covered entities" (hospitals, clinics, insurers) and their "business associates." They do not cover the enormous ecosystem of consumer health apps, fitness trackers, period-tracking apps, and direct-to-consumer genetic testing services that collect sensitive health data without being part of the healthcare system.

The FTC's Health Breach Notification Rule (16 C.F.R. Part 318) fills this gap. It covers "personal health record vendors" — companies that collect or handle individually identifiable health information but are not HIPAA covered entities or business associates. The FTC updated its policy statement on this rule in 2021 and again in 2023 to clarify that health apps (including mental health apps and period trackers) fall within the rule's scope.

Key Requirements

  • Who must notify: PHR vendors and PHR-related entities must notify affected individuals, the FTC, and (for breaches involving 500+ individuals in a state) prominent media outlets in that state.
  • Timing: Notification "without unreasonable delay and in no case later than 60 days" after discovery of the breach (mirroring HIPAA's 60-day standard).
  • What triggers notification: "Breach of security" is broadly defined to include "unauthorized acquisition of unsecured PHR identifiable health information" — including unauthorized disclosures that are not traditional database breaches (such as sharing data with third-party apps without proper authorization).
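The 60-day outer limit above is simple date arithmetic, but it is the number compliance teams track from the moment of discovery. A sketch, with a hypothetical discovery date:

```python
from datetime import date, timedelta

# Hedged sketch of the Health Breach Notification Rule's outer deadline:
# "without unreasonable delay and in no case later than 60 days" after
# discovery. The discovery date below is hypothetical, and "without
# unreasonable delay" can require notification well before day 60.
HBNR_OUTER_LIMIT = timedelta(days=60)

def notification_deadline(discovered: date) -> date:
    return discovered + HBNR_OUTER_LIMIT

print(notification_deadline(date(2024, 3, 1)))  # 2024-04-30
```

Note the deadline runs from discovery of the breach, not from the breach itself.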

Premom — The Ovulation Tracker That Sent Data to China

FTC Action Against Premom (2023): The FTC took action against Easy Healthcare Corporation, maker of the Premom ovulation tracking app, for sharing users' sensitive reproductive health data with third-party analytics firms, including Google, AppsFlyer, and China-based mobile analytics providers, without adequate disclosure, in violation of the Health Breach Notification Rule and Section 5.

The FTC's complaint alleged that Premom:

  • Shared precise health data with third-party analytics providers without user consent
  • Made deceptive representations about data sharing in its privacy policy
  • Failed to notify users of the data disclosures

Outcome: Prohibited Premom from sharing health data for advertising, required deletion of data collected in violation, civil penalties, and enhanced disclosure requirements.

Women using an app to track their fertility. Their most intimate health data, routed to servers they never consented to. The FTC drew the line.

FTC vs. HIPAA Overlap: A company dealing with reproductive health data may face both FTC enforcement (under the Health Breach Notification Rule for its app) and state-level requirements (several states enacted post-Dobbs health data protection statutes). HIPAA only applies if there is a covered entity or business associate relationship, which Premom lacked.


Children's Online Privacy Protection Act (COPPA)

The Law That Was Written to Protect Children — And Cost One Company $275 Million

15 U.S.C. §§ 6501–6506 (COPPA) and 16 C.F.R. Part 312 (COPPA Rule)

COPPA applies to operators of websites and online services that are directed to children under 13, or that have actual knowledge they are collecting personal information from children under 13. The rule requires:

1. Parental notice and verifiable parental consent: Before collecting, using, or disclosing personal information from children under 13, the operator must provide direct notice to parents and obtain verifiable parental consent. "Verifiable" means more than a checkbox — it requires a credit card transaction, government ID verification, or similar method.

2. Data minimization: Operators may collect only the information reasonably necessary for the activity for which parental consent was obtained.

3. Right to review and delete: Parents have the right to review personal information collected from their children and request deletion.

4. No conditioning: Operators may not condition participation in activities on disclosure of more personal information than is reasonably necessary.

5. Retention limitations: Personal information collected from children must be retained only as long as necessary for the purpose for which it was collected.

Age Gate Requirements — The Checkbox That Doesn't Work

The FTC has published guidance making clear that a bare birth-date entry screen is insufficient when it is designed to fail: for example, when the prompt signals which ages will be accepted, or when a child who is blocked can simply retry with a different birth date, and the operator takes no further steps. Effective age-gating for COPPA compliance typically involves:

  • Neutral age screen that does not indicate which ages will be accepted
  • Blocking children who self-identify as under 13 without allowing retry
  • Parental consent collection for identified minors
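The bullets above describe straightforward control-flow logic. A minimal sketch of a COPPA-style neutral age screen under those assumptions (the age threshold comes from the statute; the state handling and return values are hypothetical design choices):

```python
from datetime import date

# Hedged sketch of a neutral age screen per the guidance summarized above:
# the prompt reveals nothing about which ages pass, under-13 users are
# routed to parental consent, and a failed screen persists for the session
# so the child cannot immediately retry with a different birth date.
COPPA_AGE = 13

def age_on(birth: date, today: date) -> int:
    """Completed years between birth date and today."""
    return today.year - birth.year - ((today.month, today.day) < (birth.month, birth.day))

def screen(birth: date, today: date, already_blocked: bool) -> str:
    if already_blocked:
        return "blocked"  # no retry after a failed screen
    if age_on(birth, today) < COPPA_AGE:
        return "blocked-pending-parental-consent"
    return "allowed"

today = date(2024, 6, 1)
print(screen(date(2015, 1, 1), today, already_blocked=False))  # blocked-pending-parental-consent
print(screen(date(2000, 1, 1), today, already_blocked=False))  # allowed
```

A production implementation would also need verifiable parental consent collection for the blocked path, which this sketch only gestures at.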

YouTube/Google — $170 Million for Knowing Who Was Watching

FTC and New York AG v. Google LLC and YouTube, LLC (2019): $170 million civil penalty — at the time the largest COPPA penalty ever — for YouTube's failure to obtain parental consent before collecting personal data from children watching child-directed content on YouTube. The FTC found that YouTube knew it was serving child-directed channels and collecting behavioral advertising data from those viewers.

YouTube knew. The algorithm knew. The advertising system knew. And it kept running.

Subsequent COPPA actions (2022–2023): Epic Games (Fortnite) paid a $275 million civil penalty for COPPA violations on the Fortnite platform; a separate $245 million FTC order addressed dark patterns in Fortnite's in-game purchase flows. The $275 million penalty surpassed the YouTube/Google settlement as the largest COPPA penalty.

Security Research Intersection With COPPA

If you are testing an application that serves children (gaming platforms, educational apps, social networks with child-directed features), COPPA compliance is part of the security posture. A missing age gate, an insecure consent mechanism, or a data pipeline that sends children's data to a third-party analytics provider without parental consent are all FTC enforcement targets — not just security vulnerabilities.


Researcher and Bug Bounty Angle: FTC as Background Pressure

The Commission You're Never Directly Fighting — But Always Indirectly Using

You are not the FTC's enforcement target. The FTC goes after companies, not researchers. But FTC enforcement shapes the environment you operate in, in three concrete ways:

1. The policies you test under are FTC-driven. When a company publishes a vulnerability disclosure policy, maintains an incident response plan, and requires annual penetration testing — those requirements often trace back to FTC consent decree obligations or FTC Safeguards Rule mandates. You are operating in a compliance infrastructure that the FTC built.

2. FTC complaints are a disclosure escalation tool. If a company ignores your responsible disclosure, you can file an FTC complaint. This is not a lawsuit — it's a regulatory referral. The FTC receives millions of consumer complaints per year and investigates selectively, but documented evidence of a company knowingly ignoring serious security vulnerabilities (especially post-Drizly) significantly raises the probability of investigation.

3. FTC enforcement histories affect scope decisions. Companies in FTC consent decrees have legally binding obligations to maintain security programs and submit to third-party audits. When you are retained to test a company in a consent decree, you may be testing against a legally mandated baseline — and your findings may be part of the company's required FTC reporting.

LabMD — The Cautionary Tale That Runs in Two Directions

From the organizational side, LabMD demonstrates what happens when a company responds to security research disclosure with hostility and delay. The litigation lasted years, cost the company millions, and ultimately destroyed it. LabMD shut down in 2014 during the FTC proceedings.

The lesson for organizations: ignoring a security researcher's disclosure does not make the FTC go away. The FTC can independently discover breaches through consumer complaints, journalist investigations, and referrals from other agencies. A company that received a responsible disclosure report and failed to act on it has now created a documentary record that the harm was foreseeable — directly satisfying the Wyndham unfairness standard's foreseeability element.

The VDP Adoption Pressure — Compliance as Background Gravity

CISA's Binding Operational Directive 20-01 (BOD 20-01) required all federal civilian agencies to publish vulnerability disclosure policies. The FTC's Safeguards Rule requires penetration testing at covered financial institutions, and joint CISA guidance promotes VDPs across critical infrastructure. The cumulative effect: companies that have not published a VDP are increasingly the outliers, and FTC enforcement history is part of the story explaining why VDPs have proliferated.


State FTC Analogs

The Fifty Attorneys General Who Don't Need Permission From the FTC

Federal Section 5 enforcement reaches any company whose conduct is in or affects interstate commerce (virtually everything), but each case requires FTC resources and prioritization. State analogs fill the gap with parallel enforcement authority that state attorneys general can exercise independently.

California UCL — Business & Professions Code § 17200

California's Unfair Competition Law prohibits "any unlawful, unfair or fraudulent business act or practice." It is broader than the FTC Act in important ways:

  • Private right of action: Unlike Section 5, the UCL allows private lawsuits by any person "who has suffered injury in fact and has lost money or property as a result" of the unfair practice. This makes the UCL the California vehicle for class action litigation against companies with poor cybersecurity.
  • "Unlawful" prong: Borrowing liability — if a company's security practice violates any other law (California's data breach statute, HIPAA, COPPA), the UCL makes that violation independently actionable as an unfair business practice.
  • "Unfair" prong: Mirrors the FTC's unfairness standard. California courts have applied it to cybersecurity failures that injure consumers.
  • Injunctive relief and restitution: UCL provides injunctive relief and restitution of money obtained through unfair practices — class restitution in a cybersecurity context can be substantial.

New York GBL § 349 — Deceptive Acts and Practices

New York's General Business Law § 349 prohibits "deceptive acts or practices in the conduct of any business, trade or commerce." Like the FTC's deception theory, § 349 requires:

  1. A deceptive act or practice
  2. Directed at consumers
  3. That caused harm

Private right of action: GBL § 349 includes a private right of action with statutory damages of $50 per violation (or actual damages if greater), plus attorney's fees; for willful or knowing violations, courts may award treble damages up to $1,000.

AG enforcement: The New York Attorney General also enforces § 349. The NY AG has been one of the most active state enforcers of data privacy and security, including coordinating with the FTC on multi-state investigations.

Texas DTPA — The State That Triples Your Damages

Texas's Deceptive Trade Practices-Consumer Protection Act (DTPA), Tex. Bus. & Com. Code §§ 17.41–17.63, prohibits false, misleading, or deceptive acts in consumer transactions. Privacy policy misrepresentations and deceptive security claims fall within the DTPA's broad reach.

Enhanced damages: The DTPA allows for damages up to three times actual damages for knowing or intentional violations, plus attorney's fees.


Safe/Grey/Red Matrix

| Scenario | Risk Classification | Analysis |
| --- | --- | --- |
| Disclosing a vulnerability to a company's security team | SAFE | Core responsible disclosure — no FTC implications for the researcher. The company's response (or failure to respond) may create FTC exposure for the company, not the researcher. |
| Responsible disclosure following a written VDP | SAFE | Operating within the consent framework that Section 5 enforcement pressure helped create. The FTC is not relevant to your conduct. |
| Filing an FTC complaint after disclosure is ignored | SAFE (with caveats) | Legitimate regulatory escalation path. Caveats: the complaint must be truthful and not designed to extract commercial advantage (the Tiversa warning). FTC complaints are public records accessible via FOIA. |
| Testing a company's systems without a VDP or authorization | RED | No FTC relevance to the researcher's authorization status — the CFAA and state computer crime laws govern that. The FTC's Safeguards Rule mandating pen testing applies to the company, not to you. |
| VDP-scoped testing that discovers FTC Safeguards Rule violations | GREY | You have discovered a regulatory violation (missing MFA, unencrypted data). Your scope authorization covers the technical testing. Reporting findings to the company is appropriate. Separately reporting to the FTC is legally permissible but a discretionary escalation. |
| Receiving leaked data that reveals a company's FTC compliance failures | RED | Receiving stolen data creates CFAA and state law exposure independent of the FTC. The FTC-relevant misconduct is the company's, but your possession of the data is your problem. |
| Publishing a PoC demonstrating a COPPA violation in a children's app | GREY | If the PoC uses real children's data, significant risk under COPPA itself (§ 6502) and state privacy laws. If the PoC is synthetic/anonymous and demonstrates the vulnerability mechanism without real PII, lower risk — but legal review is strongly advised before publication. |
| Bug bounty — company operating under FTC consent decree | GREY | Your authorization is the bug bounty agreement. The company's FTC obligations run in the background and may affect how it classifies and reports your findings. Ask whether findings must be disclosed to the FTC monitor; the company's counsel will know. |
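The COPPA PoC row turns on whether the demonstration touches real children's data. A minimal sketch of the safer approach — generating clearly labeled synthetic records so the PoC demonstrates the vulnerability mechanism without exposing any actual PII. Every field name and value here is invented for illustration; nothing is drawn from a real app's schema:

```python
import random
import string

def synthetic_child_record(seed: int) -> dict:
    """Generate an obviously fake child profile for use in a PoC payload.

    Every field is synthetic: no real names, birthdates, or device IDs
    are ever used, so publishing the PoC exposes no actual PII.
    """
    rng = random.Random(seed)  # seeded so reviewers can reproduce the PoC
    fake_id = "".join(rng.choices(string.hexdigits.lower(), k=12))
    return {
        "display_name": f"TEST-CHILD-{seed:04d}",  # visibly flagged as test data
        "birth_year": rng.randint(2015, 2020),     # plausible but invented
        "device_id": f"poc-{fake_id}",             # namespaced, non-real identifier
        "note": "SYNTHETIC RECORD - vulnerability PoC, not a real user",
    }

records = [synthetic_child_record(i) for i in range(3)]
```

Deterministic seeding matters: the company (or its counsel) can regenerate the exact payloads you used and confirm no real user data was involved.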

Key Statutes Quick Reference

| Statute / Rule | Citation | What It Does | Enforcement |
| --- | --- | --- | --- |
| FTC Act § 5 | 15 U.S.C. § 45 | Prohibits unfair or deceptive acts or practices in commerce; primary FTC cybersecurity authority | FTC (civil); no private right of action |
| Civil penalty authority | 15 U.S.C. § 45(m) | Up to $51,744/day/violation for consent decree violations | FTC via DOJ |
| Safeguards Rule | 16 C.F.R. Part 314 | Written ISP, risk assessment, encryption, MFA, pen testing, 30-day FTC breach notification for financial institutions | FTC |
| Health Breach Notification Rule | 16 C.F.R. Part 318 | Breach notification for PHR vendors outside HIPAA; 60-day notification | FTC |
| COPPA | 15 U.S.C. §§ 6501–6506 | Parental consent for children's data; age gates; data minimization | FTC; no private right of action |
| COPPA Rule | 16 C.F.R. Part 312 | Implementing regulations for COPPA | FTC |
| GLBA / Financial Privacy Rule | 15 U.S.C. §§ 6801–6809 | Financial privacy framework; authorizes Safeguards Rule | FTC + other bank regulators |
| California UCL | Cal. Bus. & Prof. Code § 17200 | Unfair, unlawful, or fraudulent business practices; cybersecurity failures | CA AG + private plaintiffs |
| New York GBL § 349 | N.Y. Gen. Bus. Law § 349 | Deceptive acts in business; privacy/security misrepresentations | NY AG + private plaintiffs ($50 min damages) |
| Texas DTPA | Tex. Bus. & Com. Code §§ 17.41–17.63 | Deceptive trade practices; security misrepresentations | TX AG + private plaintiffs (up to 3x damages) |
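The § 45(m) figure accrues per day and per violation, which is why consent decree exposure escalates so quickly. A back-of-the-envelope sketch — the rate is the inflation-adjusted maximum from the table above; the 90-day window and violation count are hypothetical:

```python
# Maximum civil penalty per violation under 15 U.S.C. § 45(m),
# inflation-adjusted figure cited in the table above.
PENALTY_PER_VIOLATION = 51_744

def max_exposure(days: int, violations_per_day: int) -> int:
    """Worst case, treating each day of noncompliance as a separate violation."""
    return PENALTY_PER_VIOLATION * days * violations_per_day

# Hypothetical: one ongoing violation left unremediated for 90 days.
print(max_exposure(days=90, violations_per_day=1))  # prints 4656960
```

Roughly $4.7M for a single lingering violation over one quarter — before any multiplier for the number of affected consumers, which is how headline penalties reach nine and ten figures.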

Key Cases Quick Reference

| Case | Citation | Holding |
| --- | --- | --- |
| FTC v. Wyndham Worldwide Corp. | 799 F.3d 236 (3d Cir. 2015) | Section 5 unfairness covers cybersecurity failures; three-part test: substantial harm, not reasonably avoidable, no countervailing benefit |
| LabMD, Inc. v. FTC | 894 F.3d 1221 (11th Cir. 2018) | FTC consent orders must be specific enough to comply with; does not disturb the FTC's cybersecurity authority |
| FTC v. Drizly, LLC | Docket No. C-4762 (2023) | Personal liability imposed on the CEO for cybersecurity failures at the company he led |
| United States v. Twitter, Inc. | N.D. Cal. (2022) | $150M penalty for violation of the 2011 consent order; 2FA data used for advertising |
| United States v. Facebook, Inc. | D.D.C. (2019) | $5B penalty for violation of the 2012 consent order; Cambridge Analytica / data sharing |
| FTC v. Easy Healthcare (Premom) | FTC Docket (2023) | Health Breach Notification Rule applied to a period-tracking app sharing data with third parties |
| FTC v. Google/YouTube | FTC/NY AG action (2019) | $170M COPPA penalty; child-directed content on YouTube |

Practitioner Takeaways

For security researchers:

  • The FTC's enforcement actions against companies for inadequate security are the reason VDPs exist. Every time a company publishes a security acknowledgment policy, it is partly managing FTC exposure.
  • Filing an FTC complaint is a legitimate post-disclosure escalation tool for unresponsive companies. Be factual, be specific, and do not use it as commercial leverage (Tiversa warning).
  • The Drizly case means executives who ignore your disclosures now face personal liability risk. You can communicate that fact to a company's legal department when pursuing responsible disclosure, as long as you do not frame it as a threat.
  • If you discover that a financial institution is missing the Safeguards Rule's mandatory controls (no MFA, unencrypted data at rest), you have documented a regulatory violation that the FTC is actively enforcing — not just a security finding.
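The last bullet can be made concrete in a write-up: map each observed gap to the specific control the Safeguards Rule mandates, so the finding reads as a regulatory violation rather than a generic weakness. A hypothetical sketch — the control descriptions paraphrase 16 C.F.R. § 314.4, and the shorthand keys and example findings are invented for illustration:

```python
# Shorthand map of technical controls mandated by the FTC Safeguards Rule
# (paraphrasing 16 C.F.R. § 314.4; keys are informal, not official rule language).
SAFEGUARDS_CONTROLS = {
    "encryption_at_rest": "Encryption of customer information at rest",
    "encryption_in_transit": "Encryption of customer information in transit",
    "mfa": "Multi-factor authentication for systems holding customer data",
    "annual_pentest": "Annual penetration testing",
    "vuln_assessments": "Vulnerability assessments at least every six months",
}

def findings_report(observed_gaps: set[str]) -> list[str]:
    """Frame each observed gap as the specific mandated control it violates."""
    return [
        f"MISSING: {SAFEGUARDS_CONTROLS[gap]} (16 C.F.R. Part 314)"
        for gap in sorted(observed_gaps)
        if gap in SAFEGUARDS_CONTROLS
    ]

# Hypothetical engagement: no MFA, customer database unencrypted at rest.
report = findings_report({"mfa", "encryption_at_rest"})
```

Phrasing findings this way hands the company's counsel the exact regulatory hook, which tends to move remediation faster than a severity score alone.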

For compliance and corporate counsel:

  • A company cannot rely on Section 5's lack of specific guidance to avoid responsibility. Wyndham established that prior FTC guidance publications, consent decrees, and industry norms constitute sufficient notice of what reasonable security requires.
  • Three successive breaches at Wyndham were the factual death knell. If your organization has experienced prior breaches and has not materially improved its security program, the foreseeability element of the unfairness analysis is essentially established by your own history.
  • The Drizly personal liability precedent means CISOs and CEOs who receive security researcher reports have documented notice of a foreseeable risk. Failure to act on that notice creates personal exposure.
  • Under the Safeguards Rule, annual penetration testing and vulnerability assessments at least every six months are legally required — not aspirational — for covered financial institutions. Budget accordingly.

Quiz

See: artifacts/quizzes/quiz-02d.md

10 questions — cases, statutes, and the practical move for each. Takes about 5 minutes.