
Prefer to listen instead? We’ve made an audio version of this newsletter so you can catch the highlights on the go.

Letter from the Editor

Landscape Photo of The Grand Canyon

Dear ICYMI Readers, 

As I sit down to write this letter over my morning coffee, I am struck by how quickly 2025 has flown by; it is almost over. Time has a way of moving faster than we realize, especially when we are immersed in the rapidly evolving world of AI regulation, privacy law, and consumer protection.

This October has been particularly special. I stepped away from the office to travel and connect with many of you at conferences and events. Along the way, I took a detour to check something off my bucket list: the Grand Canyon. I have wanted to visit since I was a kid, watching it featured on old television shows. Seeing it in person was nothing short of amazing and life-changing.  

Standing at the edge of the Grand Canyon, witnessing layers of history stretching back millions of years, reminded me of the importance of perspective and perseverance. If you are creating compliance plans for 2026, you may feel constrained by what your organization will or will not prioritize. Remember: each small change builds on the next to create something significant. I have my fingers crossed that rolling out your next compliance initiative won’t take millions of years. If it takes longer than expected, I hope you can step back and appreciate the hard work of pushing these initiatives forward.  

This visit also reminded me why I started my legal practice: to make attorneys’ days easier to navigate with the innovative tools, resources, and counsel I longed for when working with business partners. I am blown away watching this community continue to grow. This newsletter exists because of your interest in staying informed about these complex issues.

My door is always open. I welcome your feedback about what you would like to see in future editions and the topics you would like me to explore in more depth. Please reach out; I read every message and am grateful for your insights.

As we close out October, I hope you have a wonderful Halloween filled with treats and maybe a few tricks up your sleeve to keep things interesting.  

Thanks for being part of this community. 

Until next month, 

Susan Duarte

Artificial Intelligence (AI)

October brought significant developments in AI regulation and litigation, with bipartisan federal legislation advancing product liability frameworks for AI systems while California continued its leadership role by enacting multiple AI safety laws. Meanwhile, new lawsuits are testing the boundaries of AI training data practices, particularly around the use of protected health information and user-generated content without authorization.

Key Takeaways

  • Federal Legislation Focuses on AI Accountability. The AI LEAD Act would establish the first federal product liability framework, while the AI for Main Street Act aims to help small businesses access AI guidance.
  • California Leads State-Level AI Regulation. New California laws require safety testing for AI models, protect minors from harmful AI chatbots, implement age verification requirements, and mandate content authenticity tracking.
  • First Major Legal Challenge to Healthcare AI Training Practices Is Filed. A lawsuit against Meta over AI training on patient medical records collected through tracking pixels is the first major legal challenge to healthcare AI training practices, potentially exposing tech companies and healthcare providers to liability for unauthorized use of protected health information in AI development.

Federal AI Developments

AI LEAD Act (AI Product Liability). U.S. Senators Dick Durbin (D-IL) and Josh Hawley (R-MO) introduced the Aligning Incentives for Leadership, Excellence, and Advancement in Development Act (AI LEAD Act), bipartisan legislation aimed at holding artificial intelligence companies accountable for harm caused by their systems. The bill would establish a federal cause of action for products liability claims and classify AI systems as products subject to product liability laws. It allows lawsuits from the U.S. Attorney General, state attorneys general, individuals, and classes of individuals based on defective design, failure to warn, breach of express warranty, and unreasonably dangerous or defective product theories. It also creates liability for AI deployers who substantially modify systems or intentionally misuse them, voids contract language that waives rights or unreasonably limits liability, and includes a four-year statute of limitations that begins when harm is discovered.

AI for Main Street Act Introduced to Support Small Business AI Adoption. Representatives Alford (R-Mo.) and Scholten (D-Mich.) introduced the AI for Main Street Act, bipartisan legislation that would amend the Small Business Act to require Small Business Development Centers (SBDCs) nationwide to provide comprehensive AI guidance and training to small businesses covering cybersecurity, data protection, intellectual property, regulatory compliance, and customer trust, while mandating proactive outreach to understand small businesses’ concerns about AI adoption. The legislation aims to address the growing gap between large corporations that have the resources to invest in AI technology and small businesses that risk being left behind.

State AI Developments

California AI Legislative Developments. At the end of September and in early October 2025, California Governor Gavin Newsom signed four major AI-related laws while vetoing two others.

  • The Transparency in Frontier Artificial Intelligence Act (SB 53) was signed on September 29, 2025, and requires developers of the most powerful AI models to test and plan for potentially catastrophic risks that could kill more than 50 people or result in more than $1 billion in damage, and includes whistleblower protections.
  • Newsom signed three other laws on October 13, 2025. The first, SB 243, regulates AI companion chatbots for minors by requiring companies to implement safety measures to detect and address suicidal ideation, and creates a private right of action for violations. He vetoed a similar bill, AB 1064, the same day, expressing concerns that it was overly broad.
  • AB 1043 imposes digital age verification requirements by requiring app developers to review age information from devices when apps are downloaded, preventing companies from evading child protection laws.
  • AB 853 aims to combat AI-generated content by requiring large online platforms to make origin data for uploaded content accessible starting in 2027 and requiring device makers to let users embed authenticity information in their captured content beginning in 2028.
  • Newsom vetoed Senate Bill 7, known as the “No Robo Bosses Act,” which would have regulated employers’ use of artificial intelligence and automated decision-making technologies in employment decisions. The vetoed bill would have required employers to provide prior written notice to employees and job applicants when AI would be used for employment-related decisions, prohibited employers from relying solely on automated decision systems to discipline or terminate employees, prevented AI use that interferes with compliance with labor, workplace safety, or civil rights laws, and applied to both employees and independent contractors.
  • California’s Civil Rights Council amendment to Title 2 of the California Code was effective October 1, 2025. The law clarifies how the Fair Employment and Housing Act (FEHA) applies to artificial intelligence and automated decision systems (ADS) in employment. Key requirements include prohibiting ADS use that discriminates based on FEHA-protected characteristics, mandating four-year retention of all ADS-related data, holding employers liable for discriminatory outcomes from third-party AI tools, and broadly defining agents to include vendors and recruitment firms. The regulations do not ban specific AI tools or prohibit legitimate uses in hiring and workforce management, but they eliminate the ability for employers to shift responsibility to vendors, require proactive anti-bias testing and monitoring, and clarify that ADS assessments eliciting disability information may constitute unlawful medical inquiries. Enforcement falls under the California Civil Rights Department, and employers must now audit all automated tools used in recruitment, screening, hiring, promotion, compensation, and performance decisions to ensure compliance with existing anti-discrimination laws.

Ohio. On September 23, 2025, Ohio State Representative Thaddeus Claggett introduced House Bill 469, which designates AI systems as non-sentient and prohibits them from obtaining legal personhood, serving in corporate roles, holding property, making medical or financial decisions, or entering marriage, while ensuring human owners or developers remain liable for any harm caused by AI systems. The bill, referred to the House Committee on Technology and Innovation on October 1, 2025, represents Ohio’s effort to establish clear legal boundaries for AI systems before they become more deeply integrated into business and legal structures. By preemptively addressing personhood questions, it seeks to prevent the legal complications that could arise if AI systems were granted rights or responsibilities traditionally reserved for natural persons or human-controlled entities. Ohio has several other AI bills pending, including House Bill 524, which would impose penalties if AI encourages self-harm, and Senator Jon Husted’s CHAT Act, which would require age verification for chatbots.

Wisconsin. Wisconsin Governor Tony Evers signed 2025 Wisconsin Act 34, expanding existing nonconsensual image sharing laws to prohibit the creation and distribution of AI-generated explicit images and deepfakes. The bipartisan legislation introduces new definitions and enhanced penalties for AI-manipulated content, with Representative Jacobson noting it helps Wisconsin law catch up with rapidly evolving AI and image generation technology that can be weaponized for harassment and exploitation, particularly against minors.

AI Litigation

Meta AI Training on Patient Records. On October 11, 2025, a class action lawsuit was filed against Meta and two health systems, alleging improper use of patient medical records to train AI systems without authorization or consent. The complaint claims Meta accessed patient records through the Meta Pixel tracking tool embedded on hospital websites and patient portals and used this protected health information to train its Llama AI models. The lawsuit alleges violations of the Health Insurance Portability and Accountability Act (HIPAA), state medical privacy laws, and consumer protection statutes, marking the first major legal challenge to AI training data practices in the healthcare context. The plaintiffs argue that even though the data may have been “de-identified” according to HIPAA standards, the combination of data points collected through Meta Pixel, including appointment details, medication searches, and health conditions viewed, creates identifiable profiles that violate patient privacy expectations. The case could have significant implications for how AI companies source training data from regulated industries and whether existing privacy frameworks adequately address AI training practices. Healthcare providers that use tracking pixels and similar technologies on their platforms face potential liability for downstream uses of patient data by technology vendors, even when those uses were not contemplated in original vendor agreements.
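To see why the plaintiffs argue de-identification falls short, it helps to know that a tracking pixel typically transmits the full URL of the page it is embedded on with every request. The sketch below is purely illustrative (the endpoint, pixel ID, and parameter names are hypothetical, not Meta’s actual implementation); it shows how a patient portal URL can carry health context even when no name or record number is sent:

```python
from urllib.parse import urlencode

def build_pixel_request(page_url: str, referrer: str, pixel_id: str) -> str:
    """Illustrative beacon URL of the kind a tracking pixel fires.

    The tracker endpoint and parameter names here are hypothetical.
    The point is that the full page URL rides along with every
    request, so a portal page such as /appointments?provider=oncology
    leaks health context even when no name or medical record number
    is transmitted.
    """
    params = {
        "id": pixel_id,   # site's tracking identifier
        "dl": page_url,   # document location: the page being viewed
        "rl": referrer,   # referrer: the page the user came from
    }
    return "https://tracker.example.com/tr?" + urlencode(params)
```

Aggregated across page views (appointment scheduling, medication searches, condition pages), these URLs are the kind of “data points” the complaint alleges can be combined into identifiable patient profiles.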


Reddit Data Scraping Lawsuit Tests Boundaries of AI Training Practices. Reddit filed a lawsuit in the U.S. District Court for the Southern District of New York on October 22, 2025, against Perplexity AI and three data-scraping companies (SerpApi, Oxylabs UAB, and AWMProxy), alleging they circumvented technological protections to harvest Reddit’s user-generated content on an industrial scale for use in AI systems. The complaint alleges that during two weeks in July 2025, the three scraping defendants accessed nearly three billion Google search engine results pages containing Reddit content by masking their identities and bypassing security restrictions, then reselling this data to AI companies, including Perplexity, which purchased scraped data rather than entering into a licensing agreement with Reddit directly. The lawsuit seeks injunctive relief to halt unauthorized scraping and monetary damages, representing a critical test case for whether platforms can use anti-hacking laws to prevent extraction of publicly accessible content for AI training, particularly as Reddit has already monetized its data through licensing agreements with OpenAI and Google reportedly totaling hundreds of millions of dollars that now account for approximately 10% of the company’s revenue.

Challenge to NY Algorithmic Pricing Disclosure Law Dismissed. U.S. District Judge Jed Rakoff dismissed a lawsuit by the National Retail Federation challenging New York’s Algorithmic Pricing Disclosure Act, the first-in-the-nation law requiring retailers to disclose when prices are set using algorithms that analyze personal customer data—a practice known as “surveillance pricing.” The judge ruled that the disclosure requirement does not violate First Amendment free speech protections, finding that it reasonably serves the state’s interest in consumer transparency and prevents confusion about how prices are determined. Signed by Governor Kathy Hochul in May 2025 and effective from July 8, 2025, the law mandates that businesses display conspicuous disclosures in capital letters when algorithmic pricing based on personal data is used, with civil fines up to $1,000 per violation for non-compliance. The NRF argued the law compelled misleading speech and unfairly portrayed algorithms as dangerous despite their use for legitimate purposes like loyalty discounts, but the court found these arguments unpersuasive, clearing the way for New York Attorney General Letitia James to enforce the transparency requirements.

Snapchat AI Chatbot Vulnerability Highlights Risks. Cybernews researchers discovered they could trick Snapchat’s My AI chatbot—used by over 900 million people—into sharing instructions for making improvised explosive devices by framing requests as storytelling exercises. While Snapchat’s safeguards block direct weapon queries, the chatbot provided dangerous information when prompted to tell a historical story about the Winter War, demonstrating that relatively simple prompt manipulation techniques could bypass safety controls. This follows similar vulnerabilities found in Meta, Lenovo, and Anysphere’s AI systems, where chatbots were manipulated into exposing sensitive data or providing harmful information.

Privacy and Data Protection

October brought enforcement activity and new privacy protections at both the federal and state levels. The key takeaways are:

  • Enforcement Penalties Continue to Increase. California’s CPPA imposed its largest-ever penalty of $1.35 million, and New York’s Attorney General secured $14.2 million from eight car insurers for data breaches.
  • The List of Sensitive Data Continues to Grow. The bipartisan MIND Act adds neural data as uniquely sensitive information.
  • Washington State’s CEMA Law Triggers Litigation. A Washington Supreme Court interpretation of the state’s email marketing law has triggered class action litigation, exposing retailers to billions in potential liability for routine promotional practices.

Federal Privacy Developments

MIND Act (Neural Data Protection Bill). U.S. Senators introduced the Management of Individuals’ Neural Data Act (MIND Act) on September 24, 2025, to protect brainwave and neural data as consumer neurotech devices enter the market. The bill directs the FTC to develop standards for collecting, processing, and sharing neural data to prevent exploitation by tech companies and data brokers. The legislation recognizes neural data as uniquely sensitive information that reveals thoughts, emotions, mental health conditions, and cognitive functioning, requiring special protections beyond those applied to other types of personal information.

Sendit App FTC Enforcement (September 2025). The Federal Trade Commission took action against Sendit Labs Inc. and its CEO, alleging the anonymous messaging app unlawfully collected personal data from children under 13 and deceived users about fake messages. The FTC complaint alleges that Sendit violated the Children’s Online Privacy Protection Act (COPPA) by failing to obtain verifiable parental consent before collecting personal information from millions of children, despite knowing that a significant portion of its user base consisted of minors. The company allegedly used deceptive tactics to boost engagement and drive subscriptions, including sending users fabricated anonymous messages designed to appear as if they came from real people, when the app itself actually generated them. The FTC alleges Sendit manipulated users into paying for premium subscriptions to reveal the supposed identities of these fake message senders. The settlement requires Sendit to pay a civil penalty, implement a comprehensive privacy program, and obtain independent privacy assessments. This enforcement action demonstrates the FTC’s continued focus on protecting children’s privacy online and cracking down on deceptive practices that exploit users’ desire for social connection.

State Privacy Developments

Multistate Consortium Continues to Grow. Nine states have formed a collaborative consortium to write and enforce comprehensive data privacy laws across state lines, with Minnesota and New Hampshire recently joining the bipartisan group of regulators. The consortium’s shared goal is to strengthen consumer protections across jurisdictions and harmonize enforcement efforts as state privacy laws continue to expand. Michael Macko, head of enforcement at the California Privacy Protection Agency (CPPA), stated, “We’re entering a new era of enforcement as state privacy laws continue to harmonize and expand,” signaling a coordinated approach to addressing the challenges of enforcing privacy protections in an increasingly interconnected digital economy.

California

California Finalizes CCPA Regulations on Automated Decision-Making and Risk Assessments. On September 23, 2025, California’s Office of Administrative Law approved significant new regulations under the California Consumer Privacy Act (CCPA) covering automated decision-making technology (ADMT), cybersecurity audits, and risk assessments. These regulations, which will begin to take effect on January 1, 2026, represent a significant expansion of California’s privacy framework. The rules establish new requirements for businesses using ADMT to make “significant decisions” about consumers, mandating pre-use notices, opt-out rights, and access request procedures. The rules include phased implementation dates, with ADMT requirements starting January 1, 2027, and risk assessment submissions due by April 1, 2028.

CCPA Amendment Expands Opt-Out Requirements. California’s AB 1194 amended the CCPA effective October 1, 2025, to require businesses to honor opt-out signals for limiting the use and disclosure of sensitive personal information. The amendment closed a loophole that allowed companies to ignore universal opt-out mechanisms for sensitive data while honoring them for sale/sharing opt-outs, requiring businesses to update their privacy practices to comply with this expanded opt-out requirement. Previously, companies were only required to honor opt-out preference signals for sales and sharing of personal information, but could continue using sensitive personal information for other purposes even when consumers had expressed a clear preference against such use.

California Browser Privacy Opt-Out Mandate. California has become the first state to mandate that web browsers, such as Google Chrome, must contain data-blocking features that allow users to opt out of having their data shared by businesses. Governor Gavin Newsom signed the legislation, which builds on California’s existing state privacy law that gives residents the right to opt out of data sharing, typically by clicking buttons or links on websites. This new mandate shifts the burden from individual website opt-outs to browser-level controls, making privacy protection more accessible and automatic for California residents.
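Browser-level opt-outs of this kind are built on the Global Privacy Control (GPC) specification, under which a participating browser attaches a `Sec-GPC: 1` header to every request. A minimal sketch of how a website backend might honor that signal follows; the function and flag names are illustrative, not drawn from the statute or regulations:

```python
def honor_gpc(request_headers: dict) -> dict:
    """Return illustrative privacy flags for a single request.

    A Global Privacy Control signal arrives as the request header
    `Sec-GPC: 1`. When present, it must be treated as a valid opt-out
    of sale/sharing of personal information; the flag names below are
    hypothetical placeholders for a site's own consent model.
    """
    opted_out = request_headers.get("Sec-GPC", "").strip() == "1"
    return {
        "sell_or_share_personal_info": not opted_out,
        "use_sensitive_personal_info": not opted_out,
    }
```

Both flags flip together in this sketch to reflect AB 1194’s amendment, under which the same signal also limits the use and disclosure of sensitive personal information, not just sales and sharing.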

California Settles with Tractor Supply Company for $1.35 Million. On September 26, 2025, the California Privacy Protection Agency (“CPPA”) issued a decision requiring Tractor Supply Company, the nation’s largest rural lifestyle retailer with more than 2,500 stores in 49 states, to change its business practices. In addition, the CPPA required Tractor Supply to pay $1.35 million, the largest monetary penalty in the CPPA’s history, to resolve multiple violations of the California Consumer Privacy Act (CCPA). The enforcement action and decision are the first to address the importance of CCPA privacy notices and the privacy rights of job applicants. The decision also highlights critical compliance failures related to consumer opt-out rights, third-party data sharing, failure to honor opt-out preference signals sent from consumers’ browsers, and inadequate privacy disclosures. This enforcement action signals the CPPA’s continued focus on ensuring businesses properly implement consumer privacy choices and maintain transparent data practices.

Maryland Bans Targeted Ads and Data Sales for Minors. Maryland’s expansive privacy laws, the Maryland Online Data Privacy Act (MODPA) and the Maryland Kids Code, took effect on October 1, 2025. They impose some of the nation’s strictest protections for minors’ online data, prohibiting targeted advertising to users under 18, banning the sale of minors’ personal data without verified parental consent, and requiring opt-in consent for collecting, selling, or sharing personal data of users under 18.

New York State Data Broker Law Takes Effect. New York’s amended data broker registration law took effect on October 15, 2025, requiring data brokers to register annually with the Attorney General and pay a $300 fee. The law defines data brokers as entities collecting and selling personal information about consumers with whom they have no direct relationship, and violations carry civil penalties up to $500 per day. Data brokers must provide consumers with mechanisms to request information about collected data, opt out of sales, and correct inaccuracies. The law represents New York’s effort to increase transparency in the largely unregulated data broker industry, which collects and sells detailed personal information about millions of consumers. By requiring registration and disclosure, New York aims to give consumers visibility into which companies are trading their personal information and create accountability mechanisms through the Attorney General’s enforcement authority. The registration requirement also provides the state with a comprehensive database of data brokers operating in New York, enabling more effective oversight and enforcement of privacy violations.

NY Attorney General Secures $14.2 Million from Car Insurance Companies. New York Attorney General Letitia James has secured $14.2 million from eight car insurance companies for data breaches that exposed the personal information of over 825,000 New Yorkers. The breaches stemmed from inadequate security in online quoting tools that allowed hackers to exploit a pre-fill function and access sensitive information, including driver’s license numbers. Some of this stolen data was later used to file fraudulent unemployment claims during the COVID-19 pandemic.

Virginia Expands Consumer Data Protection Act. Virginia Governor Glenn Youngkin signed Senate Bill 1485 on October 7, 2025, lowering thresholds to capture more businesses under the Virginia Consumer Data Protection Act (VCDPA). The privacy law now applies to companies that control or process data of at least 25,000 Virginia consumers and have $25 million or more in revenue. The law takes effect on July 1, 2026, and includes new requirements for data minimization, purpose limitation, and risk assessments for processing activities that present a heightened risk of harm.

TCPA Litigation: Curbside pickup notices are not solicitations. The Ninth Circuit ruled in favor of Walmart in Barton v. Walmart, finding transactional curbside pickup notifications are not “solicitations.”

Retailers Challenge Washington State CEMA. Retailers challenged Washington’s Commercial Electronic Mail Act (CEMA) following the April 2025 Brown v. Old Navy decision, where the Washington Supreme Court ruled 5-4 that CEMA prohibits any false or misleading information in commercial email subject lines, not just information that conceals the email’s advertising purpose as retailers had argued. The case arose after plaintiffs sued Old Navy over email subject lines like “No joke! $12.50 JEANS (today only)” when sales actually continued beyond the stated timeframe. The financial stakes are enormous: with $500 statutory penalties per violation (potentially per email, per recipient), companies face exposure reaching into the billions. Since the ruling, over twenty new class actions have been filed against retailers, with the Retail Litigation Center and Washington Retail Association arguing that this broad interpretation is unnecessary given existing consumer protection laws and could expose companies to massive liability for routine marketing practices.

Marketing and Consumer Protection

October saw aggressive federal and state enforcement targeting deceptive business practices, with the FTC securing its third-largest monetary judgment ever against Amazon for Prime subscription dark patterns and filing an antitrust complaint against Zillow’s alleged anticompetitive rental advertising agreement with Redfin. The key takeaways include:

  • A Continued Focus on Subscription Practices. Amazon agreed to a $2.5 billion settlement for its Prime subscription practices, Dun & Bradstreet paid $5.7 million for violating a prior FTC order on subscription renewals, and multiple states secured a $4.8 million settlement from TFG Holdings for subscription membership practices.
  • Companies Face False Advertising Lawsuits. Wells Fargo settled for $33 million over allegations it knowingly facilitated fraudulent free trial schemes, and California sued manufacturers for false recyclability claims.
  • “Made in USA” Labeling Creates Federal-State Compliance Conflicts. Two federal courts held that California’s “Made in USA” safe harbors do not preempt stricter FTC standards requiring “all or virtually all” domestic content, forcing companies to comply with the more stringent federal requirements.

Amazon Agrees to Settle Prime Subscription Matter for $2.5 Billion. The Federal Trade Commission secured a landmark $2.5 billion settlement with Amazon.com, Inc., along with two senior executives, resolving allegations of deceptive Prime subscription enrollment and cancellation practices. This represents the third-largest monetary judgment in FTC history and includes a $1 billion civil penalty, which stands as the largest ever imposed for an FTC rule violation, alongside $1.5 billion in consumer refunds representing the second-highest restitution award in FTC history. The agreement provides relief for approximately 35 million affected consumers. The settlement requires Amazon to provide clear and conspicuous disclosure of all material Prime terms during enrollment, including information about costs, billing frequency, auto-renewal policies, and cancellation procedures, while prohibiting misleading decline buttons. Amazon must ensure Prime cancellation uses the same method as enrollment, cannot be difficult, costly, or time-consuming, and must implement significant interface modifications, including clear decline buttons and transparent enrollment flows with unambiguous consent mechanisms to eliminate the “dark patterns” designed to manipulate consumer decision-making. An independent third-party monitor will oversee the consumer refund process, ensuring that the $1.5 billion in restitution reaches affected consumers.

California “Made in USA” Safe Harbors Don’t Guarantee FTC Compliance. Two California federal courts recently held that federal and state Made in USA standards can both apply to products, creating a complex situation for companies. The FTC requires products labeled Made in USA to have all or virtually all components made domestically. At the same time, California law provides exceptions allowing some foreign content—up to 5% of a product’s wholesale value, or up to 10% if those materials cannot be obtained in the United States. This creates problems when a product contains a small amount of foreign content at a high cost, but that content is essential to the product’s function. In such cases, California might allow a Made in USA claim while the FTC would not. In two recent cases involving McCormick mustard and It’s a New 10 hair care products, the courts determined that both the federal and California standards apply simultaneously and do not conflict with each other because they share the same goal of protecting consumers. As a result, companies marketing products as Made in USA must comply with whichever standard is stricter, meaning they should follow the FTC’s “all or virtually all” requirement, even if California’s safe harbor provisions might technically permit their labeling claims.

FTC Enforces Violations of Settlement Order for Subscription Services. Business credit reporting service provider Dun & Bradstreet agreed to pay $5.7 million to settle allegations that it violated a 2022 FTC order by deceiving small business customers about subscription renewals and credit score improvement claims. The FTC alleged that D&B failed to accurately inform customers of product list prices before automatically renewing subscriptions, allowed employees to misrepresent that purchasing fee-based products would improve customers’ business credit scores, and failed to retain required voice recordings of telemarketing calls as mandated by the previous order. The settlement includes $3.7 million for consumer refunds and over $2 million in civil penalties. Under the modified order, D&B must maintain a third-party quality assurance provider to monitor telemarketing practices, implement a comprehensive compliance program, obtain annual leadership certifications of compliance, and notify the Commission within 60 days of any compliance failures.

Citizens Disability Settles FTC Allegations that it Made Illegal Robocalls. Massachusetts-based Citizens Disability and its subsidiary CD Media agreed to pay $1 million to settle FTC allegations that they made tens of millions of illegal robocalls and calls to numbers on the Do Not Call Registry to market Social Security Disability Insurance (SSDI) application assistance services. The FTC alleged that between January 2019 and July 2022, the companies caused more than 109 million outbound telemarketing calls to be made, including over 25.7 million calls to numbers listed on the DNC Registry, violating the Telemarketing Sales Rule and the FTC Act. The complaint alleges the companies contracted with lead generators who deceptively obtained consumer contact information through websites offering prizes, coupons, and services without disclosing that the data would be used for telemarketing calls. Additionally, Citizens Disability allegedly made robocalls falsely claiming they were responding to consumers’ inquiries about SSDI benefits eligibility, particularly targeting lower-income and disabled individuals. The consent order prohibits the defendants from using pre-recorded robocalls for telemarketing, calling numbers on the DNC Registry, making misrepresentations about why they are calling, and requires them to conduct due diligence on lead generators. The order includes a $2 million civil penalty, partially suspended to $1 million based on financial condition representations.  

FTC Sues Zillow and Redfin. The Federal Trade Commission sued Zillow and Redfin for entering into an illegal agreement that eliminates competition in the market for rental property advertising on internet listing services. In February 2025, Zillow allegedly paid Redfin $100 million in exchange for Redfin ending its contracts with advertising customers, helping Zillow take over that business, agreeing not to compete in the multifamily rental advertising market for up to nine years, and serving merely as an exclusive syndicator of Zillow listings. The FTC alleges this arrangement, disguised as a partnership, allows Zillow to avoid head-to-head competition with Redfin in an already concentrated market that millions of Americans rely on to find rental housing. Following the agreement, Redfin fired hundreds of employees and helped Zillow hire its pick of the terminated workers. The FTC contends the deal will likely lead to higher advertising prices and worse terms for property managers, reduce incentives for innovation and user experience improvements, and ultimately harm renters searching for affordable housing. The complaint seeks to stop the unlawful agreement and contemplates potential divestiture of assets or business reconstruction to restore competition in this critical market.

FTC Bans Two Individuals from Debt Relief Industry in $45M+ Judgment. The Federal Trade Commission won court orders permanently banning two individuals from the debt relief industry after they ran a student loan scam. The FTC sued Superior Servicing LLC in November 2024 for deceiving consumers by pretending to be affiliated with the Department of Education or official loan servicers. The scammers charged illegal upfront fees as high as $899 while making false promises about loan consolidation, lower payments, and loan forgiveness. The court banned defendant Eric Caldwell from debt relief and telemarketing, ordered him to pay $1.55 million, and imposed a total judgment of nearly $46 million against all defendants. The case demonstrates the FTC’s commitment to stopping scams that prey on people struggling with student loan debt.

Multistate AG Subscription Settlement with TFG Holdings. TFG Holdings, which operates JustFab, ShoeDazzle, and FabKids, has agreed to a $4.8 million multistate settlement for allegedly deceiving consumers by automatically enrolling them in a VIP Membership Program with recurring $49.95 monthly charges without their knowledge and making cancellation difficult. Approximately $3.8 million will go to automatic refunds for customers who enrolled before May 31, 2016, made only an initial purchase, and did not log in to skip monthly payments. Consumers who believe they have a claim can email TFGHoldingResolutions@jfbrands.com by January 30, 2026, or contact their state attorney general’s office; more information is available on state attorney general websites, such as those of the Pennsylvania and New Jersey AG offices.

California AG Sues Plastic Bag Manufacturers for False Environmental Claims. California Attorney General Rob Bonta filed suit last week against Novolex Holdings, Inteplast Group Corp., and Mettler Packaging for violating the state’s Environmental Marketing Claims Act, False Advertising Law, and Unfair Competition Law by falsely labeling plastic bags as recyclable despite not meeting California’s standards. The lawsuit stems from violations of Senate Bill 270, which was passed in 2014 and upheld by voters in 2016, prohibiting the distribution of single-use plastic grocery bags unless the bags meet strict recycling and reusability standards. Bonta stated that these bags are not recyclable at any meaningful scale in California and typically end up in landfills or incinerators, even when consumers return them to designated recycling bins. The legal action follows settlements with four other companies, which agreed to halt plastic bag sales in California and collectively pay over $1.7 million in fines. This enforcement effort is part of California’s broader campaign against misleading environmental claims, which has also targeted companies like ExxonMobil over recycling assertions.

California Passes Social Media Warning Laws. California AB 56, the Social Media Warning Law, signed into law on October 13, 2025, requires social media platforms to display black box warnings about mental health risks when users first access the platform each day, after 3 hours of cumulative use, and then hourly thereafter, with each warning displayed for at least 90 seconds without bypass options. AB 1043, the Digital Age Assurance Act, also signed on October 13, 2025 and effective January 1, 2027, establishes a device-based age assurance system where operating system providers like Apple and Google transmit secure, non-identifying age bracket signals (such as 13-15 or 16-17) to app developers at device setup, enabling apps to adjust content appropriately for minors. AB 1043 violations can result in civil penalties of up to $2,500 per affected child for negligent violations or $7,500 per affected child for intentional violations, enforceable by the California Attorney General. Both laws are part of California’s comprehensive youth online safety legislative package and complement the state’s existing Age-Appropriate Design Code Act.

Wells Fargo Settles Free Trial Lawsuit. In October 2025, Wells Fargo agreed to pay $33 million to resolve allegations that it played a supporting role in two free trial marketing schemes operated by Triangle Media Corp. and Apex Capital Group LLC that extracted approximately $200 million from consumers, schemes that were the subject of Federal Trade Commission cases brought in 2018. The underlying fraud involved offering consumers supposedly “risk-free” trials of supplements and personal care products for just the cost of shipping, then charging the full product price (around $90) and enrolling them in unauthorized monthly continuity programs. Wells Fargo allegedly knowingly opened more than 150 bank accounts for shell companies between 2009 and 2018 and allowed millions in unlawfully obtained funds to be deposited and transferred, including to accounts outside the United States.

OF NOTE

Regulators Intensify Focus on Child Safety Online

While October is a time for trick-or-treating, children’s safety is a year-round priority for parents, and increasingly for regulators. Throughout 2024 and 2025, regulators at the federal and state levels have remained focused on children. Enforcement actions and lawsuits against tech platforms have intensified dramatically, with federal regulators and state attorneys general launching an unprecedented campaign to hold social media companies accountable for failing to protect children online.

The FTC has brought significant enforcement actions under the Children’s Online Privacy Protection Act (COPPA):

  • July 2024: NGL Labs settled for $5 million and was banned from offering anonymous messaging apps to anyone under 18
  • August 2024: Sued TikTok and ByteDance for collecting data from millions of children under 13
  • January 2025: Genshin Impact maker paid $20 million for collecting children’s data without parental consent

COPPA violations can result in civil penalties up to $53,088 per violation. In January 2025, the FTC finalized significant updates to the COPPA Rule requiring separate parental consent for data sharing with third parties for targeted advertising.

Likewise, state attorneys general (AGs) have filed numerous lawsuits arguing that platforms deliberately created addictive features harming children’s mental health:

  • Meta litigation: 42 states sued Meta; as of April 2025, over 1,700 cases in multidistrict litigation
  • October 2024: Texas sued TikTok; Arkansas sued Google/YouTube
  • April 2025: New Jersey sued Discord for misleading child safety claims

The Texas Attorney General has warned that technology companies are “on notice” about vigorous enforcement. Federal legislation like the Kids Online Safety Act remains stalled in Congress, but bipartisan support for child protection continues. Courts have increasingly narrowed interpretations of Section 230 of the Communications Decency Act, allowing more claims to proceed.

States continue to pass laws in this space, with eight robust laws enacted in the past year, including:

  • New York (June 2024): SAFE for Kids Act requires parental consent for addictive feeds
  • Louisiana (July 2024): Age verification and parental consent for users under 16
  • California (September 2024): Protecting Our Kids from Social Media Addiction Act (temporarily blocked)
  • Connecticut (October 2024): Bans features designed to increase minor usage, like endless scrolling
  • Maryland (October 2024): Kids Code requires default privacy settings and data protection assessments

2025 Laws:

At least 19 states have passed age verification laws as of January 2025.

Getting compliance wrong has real financial consequences: legal settlements already range from $5 million to $20 million. To remain compliant, companies should consider establishing cross-functional child safety programs that integrate legal, product, engineering, and privacy teams to:

  1. Track state compliance requirements
  2. Implement parental control tools and monitoring capabilities
  3. Conduct data protection impact assessments where required
  4. Review and redesign potentially addictive features (infinite scroll, autoplay, like counts)
  5. Document internal safety research and responsive actions
  6. Ensure accurate content designation for child-directed material
  7. Create clear parent communication and account deletion processes