Letter from the Editor
Dear Readers,
I hope you enjoyed a wonderful Thanksgiving yesterday with family and friends. As we ease into this Black Friday and reflect on the holiday, I want to express my sincere gratitude for your continued engagement with our ICYMI newsletter. Your trust in us to keep you informed on the rapidly evolving landscape of technology law, privacy, and consumer protection means the world to our team.
November brought significant developments that will shape how businesses navigate compliance in 2026 and beyond. Our latest ICYMI covers four critical areas:
AI Regulation reached a crossroads, with federal preemption efforts threatening state laws while bipartisan legislation targets AI companions and minors. Major litigation against Photobucket and Google demonstrates the growing risks around AI training practices and default settings that could fundamentally reshape how companies deploy AI technologies.
Privacy enforcement hit record levels, including Texas’s historic $1.375 billion settlement with Google and California’s $1.4 million action against Jam City. New requirements for cybersecurity audits, algorithmic pricing disclosures, and health app privacy protections demand immediate operational changes as we head into 2026.
Consumer protection enforcement intensified, particularly around greenwashing claims (with both JBS and Tyson settling over unsubstantiated climate commitments), while Washington’s CEMA spawned systematic litigation waves targeting major retailers for promotional email practices carrying $500-per-email statutory damages.
A major circuit split emerged on whether text messages constitute “telephone calls” under the TCPA’s Do-Not-Call provisions, with federal courts now divided 4-3 following the Supreme Court’s McLaughlin decision. This split creates enormous compliance challenges and litigation exposure, potentially reaching hundreds of millions of dollars.
As the holiday shopping season begins in earnest today, I hope you took time yesterday to rest and recharge. The legal landscape may be complex and constantly shifting, but together we’ll continue navigating these challenges with clarity, strategic foresight, and practical solutions.
Thank you again for being part of our community. Here’s to finishing 2025 strong.

ARTIFICIAL INTELLIGENCE REGULATION & LITIGATION
Recent developments reveal intensifying conflicts between federal and state AI regulatory authorities, new protections targeting AI companions and minors, and major privacy litigation challenging how companies collect and use data for AI systems.
- Federal preemption threatens state AI laws. The Trump administration’s draft executive order and proposed NDAA language aim to override state AI regulations, drawing opposition from 36 state attorneys general and over 200 state lawmakers.
- AI companion regulation advances at the federal and state levels. The bipartisan GUARD Act would ban minors from AI companions and require government ID-based age verification, while New York’s new law mandates crisis intervention protocols and three-hour disclosure reminders.
- Litigation targets AI training and deployment practices. Photobucket faces a class action over allegedly training AI on 13 billion user photos without permission. At the same time, Google is being sued for allegedly enabling Gemini AI by default to access users’ private Gmail, Chat, and Meet communications.
Federal Legislation & Regulation
The GUARD Act (“Guidelines for User Age-verification and Responsible Dialogue Act”) is bipartisan legislation introduced in the U.S. Senate in late October 2025 to regulate AI chatbots’ interactions with minors. The bill was sponsored by Senators Josh Hawley (R-MO), Richard Blumenthal (D-CT), Katie Britt (R-AL), Mark Warner (D-VA), and Chris Murphy (D-CT). The legislation would require all AI chatbot providers to implement mandatory age verification using government IDs or similar methods (not self-attestation), create user accounts for all users, and ban minors entirely from accessing “AI companions,” chatbots designed to simulate emotional relationships or companionship. The bill mandates that all AI chatbots disclose every 30 minutes that they are not human and not licensed professionals, establishes new federal crimes with $100,000 penalties for chatbots that solicit sexual content from minors or encourage self-harm or violence, and imposes civil penalties of up to $100,000 per violation for non-compliance with the age verification and disclosure requirements. The bill is broadly written, and unless its definitions are revised it could inadvertently capture customer service bots and other tools beyond its intended scope. Enforcement would be handled by the U.S. Attorney General and state attorneys general, with the law taking effect 180 days after enactment.
Potential Executive Order to Block State AI Regulation. The Trump administration circulated a six-page draft executive order titled “Eliminating State Law Obstruction of National AI Policy” that characterizes the over 1,000 state AI laws, particularly in California and Colorado, as “fear-based regulatory capture” threatening American AI dominance. The draft Executive Order directs multiple federal agencies to implement a coordinated strategy to override those laws. The draft order calls on the Department of Justice to create an “AI Litigation Task Force” to challenge state laws on constitutional grounds, directs the Department of Commerce to develop a list of problematic state laws and tie Broadband Equity Access and Deployment (“BEAD”) funding eligibility to states’ AI regulatory landscape, instructs the Federal Communications Commission to initiate proceedings for federal AI standards that would preempt state laws, requires the Federal Trade Commission to issue a policy statement explaining how the FTC Act preempts state laws requiring alterations to AI outputs (consistent with Trump’s “Preventing Woke AI” Executive Order), and tasks Special Advisor for AI and Crypto David Sacks with developing legislative recommendations for a uniform federal framework.
Proposed AI Preemption Language in the National Defense Authorization Act. House Republican leaders signaled they may include sweeping AI preemption language in the National Defense Authorization Act (“NDAA”) to block states from enacting their own AI protections; a similar attempt earlier in the year to override state AI laws was rejected. Senators Markey and Warren urged colleagues to reject the proposal, and thirty-six state AGs and over 200 state lawmakers have sent letters urging Congress to reject this provision. In addition to joining the combined AG letter to Congress, California Attorney General Rob Bonta sent his own letter to Congressional leaders opposing efforts to introduce language in the NDAA that would undermine state authority to regulate artificial intelligence. Bonta argues that preempting states’ regulatory powers in this rapidly evolving area would seriously undermine the federalist system that has historically allowed states to respond swiftly to emerging technologies to protect residents’ health, safety, and welfare. Bonta emphasized California’s unique position as “the birthplace of AI and the fourth largest economy in the world,” noting that the state is home to 32 of the top 50 AI companies worldwide, and stated that common-sense state regulations can coexist with innovation, economic growth, and global leadership.
State AI
New York AI Companion Safety Law. New York’s AI Companion Safety Law (General Business Law Article 47) went into effect on November 5, 2025, and is the first state-level regulation requiring AI companion operators to implement specific safety protocols for users. The law applies to AI systems that simulate sustained human relationships by retaining information from prior interactions, asking unprompted emotion-based questions, and maintaining ongoing dialogue about personal issues. Companies must implement crisis intervention protocols to detect and respond to expressions of self-harm or suicidal thoughts, interrupt sustained engagement periods by reminding individuals every three hours that they are communicating with an AI system rather than a person, and face potential civil penalties of up to $15,000 per day for non-compliance. Enforcement authority rests with the New York Attorney General.
Ohio AI Bill Prohibits AI Systems from Marrying. Ohio House Bill 469, introduced by Republican Representative Thaddeus Claggett in September 2025, would officially declare AI systems “nonsentient” entities and prohibit them from obtaining any form of legal personhood under state law. The bill prevents AI systems from marrying (whether to humans or other AI), serving as corporate officers or directors, or owning property, including real estate and intellectual property. Under the legislation, any direct or indirect harm caused by an AI system’s operation is the responsibility of the owner or user who directed it, while developers or manufacturers can be held liable, following product liability principles, if a defect in design, construction, or instructions causes harm. Claggett stated his primary motivation was to prevent AI from being blamed for human crimes that involve AI systems and to prepare Ohio’s court system for disputes involving AI technology. The bill is currently being revised in the House Technology and Innovation Committee.
Rockland County Enacts New York Deepfake Criminal Law. Rockland County unanimously passed the Damaging Deepfake Act on November 6, 2025, making it a crime to knowingly create or share digitally deceptive media that falsely depicts an identifiable person without consent, with those weaponizing AI for harassment, fraud, or impersonation facing fines or jail time. The legislation, sponsored by County Legislator Dana Stilley, represents one of the strongest local responses to AI-generated deception.
Bi-Partisan AI Task Force. North Carolina Attorney General Jeff Jackson and Utah Attorney General Derek Brown announced on November 13, 2025, the formation of a bipartisan AI Task Force in partnership with leading AI developers, including OpenAI and Microsoft, to identify emerging AI issues and develop safeguards to protect the public as the technology rapidly evolves. The task force, facilitated in partnership with the Attorney General Alliance, will focus on three key efforts: working with law enforcement, experts, and stakeholders to identify emerging AI issues so attorneys general are equipped to protect the public; developing basic safeguards that AI developers should follow to protect the public and reduce the risk of harm, especially to children; and creating a standing forum to track developments in AI and coordinate timely responses as new challenges emerge.
Major AI Litigation
Photobucket Asks Colo. Court To Dismiss AI Training Suit. Image hosting website Photobucket has asked a Colorado federal judge to throw out a proposed class action alleging the company unlawfully used billions of user-uploaded photographs for biometric data and for training image generators. The lawsuit, filed in December 2024, claims Photobucket violated privacy and intellectual property laws by training AI systems, including biometric facial recognition tools and deepfake generators, on more than 13 billion user photos without permission, after the company’s CEO announced plans to license content to “multiple tech companies” for training text-to-image models. Plaintiffs allege Photobucket sent “coercive emails” urging users to log in and accept new terms, treating silence from inactive accounts as implied consent, which they claim violates Illinois, New York, and Virginia biometric privacy statutes as well as the Colorado Consumer Protection Act and the Digital Millennium Copyright Act. This case illustrates the importance of giving consumers transparent notice of changes to business terms and conditions, avoiding coercive prompts to accept new terms, and relying only on documented affirmative consent rather than implied consent.
Google Spying On Users With Newly Default AI Tool, Suit Says. Google is being sued for allegedly illegally tracking its email, chat, and videoconferencing users’ private communications through its Gemini AI assistant, which the tech giant secretly turned on by default for all users without their knowledge or consent in October 2025. According to the lawsuit, on or about October 10, 2025, Google quietly began utilizing “Smart Features,” which included Gemini, in its Gmail, Chat, and Meet products. The lawsuit, filed in federal court in San Jose, California, alleges violations of California’s Invasion of Privacy Act, which bans recording or eavesdropping on confidential conversations without all parties’ agreement. The complaint also alleges violations of the Stored Communications Act and other California privacy laws. While Google allows users to turn off Gemini, they must dig through privacy settings to deactivate the AI tool. This case reinforces why it is important to provide consumers with clear disclosures and instructions to opt out before implementing changes of this nature.
PRIVACY & DATA PROTECTION
The privacy and data protection landscape continued its aggressive enforcement trajectory in late 2025, marked by record-breaking settlements, expansive new regulatory requirements, and evolving judicial interpretations of legacy privacy statutes applied to modern technologies. State attorneys general secured over $3 billion in combined settlements from tech giants for deceptive data practices. Federal action also intensified, with the FCC holding companies accountable for third-party vendor failures and Congress introducing comprehensive health data privacy legislation. Meanwhile, courts grappled with how decades-old wiretapping laws apply to contemporary web tracking technologies, producing divergent outcomes that underscore the urgent need for businesses to carefully assess their data collection practices, vendor relationships, and compliance obligations across multiple jurisdictions.
- Record-Breaking Enforcement Demonstrates Regulators’ Willingness to Impose Substantial Penalties. Texas’s $1.375 billion settlement with Google for geolocation deception and the $1.4 million California settlement with Jam City for CCPA violations signal that regulators will pursue aggressive monetary penalties against companies of all sizes that misrepresent data practices or fail to provide compliant opt-out mechanisms, particularly when children’s data is involved.
- New Compliance Requirements Demand Immediate Operational Changes. Businesses face a wave of new obligations, including New York’s algorithmic pricing disclosures (effective November 2025), California’s annual cybersecurity audits and ADMT risk assessments (effective January 2026), and potential federal HIPAA-equivalent protections for health apps and wearables, requiring cross-functional coordination between legal, IT, and product teams to implement compliant systems before deadlines.
- Third-Party Vendor Risk and Tracking Technologies Remain High-Exposure Areas. The $5.1 million Illuminate Education settlement for failing to terminate a former employee’s access credentials and the FCC’s $1.5 million fine against Comcast for a vendor breach underscore that companies remain liable for third-party security failures, while split court rulings on website tracking pixels demonstrate that geographic jurisdiction and technical implementation details can determine liability outcomes.
Federal Privacy
Federal Health Privacy Bill for Apps & Smartwatches. Senator Bill Cassidy introduced the Health Information Privacy Reform Act (S.3097) on November 4, 2025, which extends privacy protections similar to HIPAA to health-related data handled by non-HIPAA-covered entities, including health apps, smartwatches, wearable devices, and wellness platforms. The bill requires HHS, in consultation with the FTC, to promulgate privacy, security, and breach notification standards that provide protections at least commensurate with existing HIPAA protections. The legislation also includes provisions for AI guidance, alignment of Part 2 substance use disorder records with HIPAA, and a study on compensating patients for sharing health data for research, establishing a national floor that would preempt less protective state laws.
State Privacy
California AG Secures $1.4M Settlement with Jam City for CCPA Violations. California Attorney General Rob Bonta announced a $1.4 million settlement with mobile app gaming company Jam City, Inc., for violating the California Consumer Privacy Act. The company, which creates games based on popular franchises including Harry Potter, Frozen, and Family Guy, failed to provide consumers with methods to opt out of the sale or sharing of their personal information across its 21 mobile apps. Despite collecting and sharing consumer personal information almost exclusively through mobile games, Jam City did not offer CCPA-compliant opt-outs in any of its apps, and the only reference to CCPA opt-out rights in the privacy policy was under a “Cookies and Interest-Based Advertising” section, where consumers could email the company. In addition, some games shared or sold data of minors ages 13-16 without the required affirmative parental consent. Under the settlement, Jam City must provide in-app opt-out methods, cannot sell or share personal information of consumers ages 13-16 without opt-in consent, must implement a three-year compliance program, and provide annual reports to the AG. This marks the sixth CCPA enforcement action by Attorney General Bonta and demonstrates increasing enforcement focus on mobile gaming companies and children’s data protection.
California AG Enters into $530,000 Settlement with Sling TV. On October 30, 2025, California Attorney General Rob Bonta secured a $530,000 settlement with Sling TV and Dish Media Sales LLC, resolving allegations that the streaming service violated the California Consumer Privacy Act (CCPA) by failing to provide an easy-to-use method for consumers to stop the sale of their personal information and by failing to provide sufficient privacy protections for children. The AG alleged that Sling TV posted a “Your Privacy Choices” link, which directed consumers to cookie preferences but did not provide an easy-to-execute method to exercise the opt-out right. When consumers located the link, they faced a burdensome process requiring multiple steps to log in with redundant information requests, while consumers using the Sling TV app had to use an additional device separate from the app to effectuate the opt-out. Under the settlement, which arose from the first enforcement action from the California Department of Justice’s January 2024 investigative sweep of streaming services, Sling TV agreed to implement changes including providing an opt-out mechanism within the app on various devices, allowing parents to designate “kid’s profile” settings that by default opt out of the sale and sharing of personal information and of targeted advertising, and maintaining a compliance program for at least three years.
California Targets Data Brokers With Strike Force. The California Privacy Protection Agency (CalPrivacy) announced on November 19, 2025, the creation of a Data Broker Enforcement Strike Force within its Enforcement Division to investigate privacy violations and review compliance with the Delete Act’s registration requirements and the California Consumer Privacy Act. The Strike Force builds on CalPrivacy’s 2024 investigative sweep that led to a record-setting number of enforcement actions, including a $1.35 million fine against Tractor Supply Company and a $345,178 fine against Todd Snyder, Inc. CalPrivacy’s head of enforcement stated the agency intends to bring the same level of intensity as U.S. Attorney offices and state Attorney General offices to investigations into the data broker industry, which poses “unique risks to Californians through the industrial-scale collection and sale of our personal information.” In January 2026, CalPrivacy will launch its Delete Request and Opt-Out Platform (DROP), allowing consumers to direct all registered data brokers to delete their personal information in a single request.
New York Algorithmic Pricing Disclosure Act. New York’s Algorithmic Pricing Disclosure Act took effect on November 10, 2025, requiring businesses that use algorithmic pricing, defined as the use of automated computational processes to dynamically adjust prices based on consumers’ personal data, to display a clear and conspicuous disclosure stating: “THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA.” The law, codified at N.Y. Gen. Bus. Law § 349-a, was enacted as part of New York’s omnibus budget bill and applies to entities domiciled or doing business in New York that set prices using personalized algorithmic pricing. Attorney General Letitia James issued a consumer alert warning New Yorkers that algorithmic pricing, also known as surveillance pricing, allows companies to automatically adjust prices based on individuals’ personal data, charging some consumers more than others depending on factors like location, income, and previous shopping habits. Examples include customers being charged more for hotel rooms when booking from a high-income ZIP code and Target shoppers seeing prices increase when they browse online inside a Target store. Companies that fail to comply face civil penalties of up to $1,000 for each violation, with no maximum total penalty and no proof of individual consumer harm or damages required. Enforcement is handled by the NY AG, who can send cease-and-desist letters whenever there is reason to believe a violation has occurred, including based on consumer complaints.
Google and Texas $1.375 Billion Settlement for Privacy Law Violations Related to Geolocation Data. On October 31, 2025, Texas Attorney General Ken Paxton announced the execution of a $1.375 billion settlement agreement with Google regarding privacy claims originating from two lawsuits filed by Texas against Google in 2022 concerning Google’s handling of data derived from geolocation, incognito browsing activities, and biometric identifiers. In the first lawsuit, Texas alleged violations of the Texas Deceptive Trade Practices Act, asserting that Google “systematically misled, deceived, and withheld material facts” about how it tracked, used, and monetized geolocation data, and that Google deceptively captured information while users were in “Incognito” mode, continuing to track, collect, and use data contrary to its public representations. In the second lawsuit, Attorney General Paxton sued Google for unlawfully tracking and collecting users’ private data regarding geolocation, incognito searches, and biometric data. Attorney General Paxton stated that this historic $1.375 billion settlement for Google’s misconduct is designed to send a clear warning to all of Big Tech that he will take aggressive action against any company misusing data and violating Texans’ privacy.
Meta Will Pay $190M, Change Policies To End $8B Privacy Suit. Mark Zuckerberg and other Meta directors agreed to a $190 million settlement resolving shareholder claims that they failed to rectify repeated violations of Facebook users’ privacy and improperly engineered an accord to shield Zuckerberg from personal liability for privacy missteps related to the Cambridge Analytica scandal. The derivative lawsuit, which ended a July trial in Delaware Chancery Court, alleged board members mishandled the data privacy scandal and improperly agreed to a $5 billion FTC settlement to personally protect Zuckerberg, with shareholders initially seeking at least $7 billion in damages. The settlement ranks as Delaware’s second largest for claims related to board oversight failures, and Meta’s board agreed to policy changes governing directors’ conduct, insider trading, and whistleblower protections. Because it is a derivative case, the $190 million goes back to the company rather than to individual investors, pending court approval.
Expansion of Illinois’ Right to Privacy in the Workplace Act. On October 30, 2025, the Illinois General Assembly passed SB 2339, an expansion to the Right to Privacy in the Workplace Act designed as a response to increased federal immigration enforcement. The bill was sent to Governor JB Pritzker on November 25, 2025, and will take effect immediately upon signing. The legislation prohibits employers from imposing tougher employment authorization or re-verification requirements than the federal E-Verify system itself demands—in other words, employers must stick to what the federal program requires, no more, no less. The law significantly enhances enforcement mechanisms by allowing labor unions and not-for-profit groups to bring their own civil actions against employers as “interested parties,” in addition to existing enforcement powers held by the Illinois AG and the Department of Labor. Individual employees, applicants, or their representatives can also sue directly in Illinois courts. Penalties under the expanded law include: $100 to $1,000 for each violation; mandatory reinstatement and back pay; up to $10,000 penalties for lost jobs; coverage for attorneys’ fees and damages; and, for repeated violations, fines rising to $1,000 to $5,000 per infraction. The law provides safe harbors for employers who acted in good faith after seeking guidance from the Illinois Department of Labor or the Department of Homeland Security. Coverage under the amended law is broad, applying to both public and private employers statewide.
Data Breach Litigation & Enforcement
FCC Fines Comcast $1.5M Over Vendor Data Breach. The Federal Communications Commission announced that Comcast will pay a $1.5 million fine following a data breach at Financial Business and Consumer Solutions (FBCS), a debt collector that Comcast used until 2022, which exposed personal information of 237,000 current and former customers. The ransomware attack occurred in February 2024, during which an unauthorized party accessed FBCS systems between February 14 and 26, downloading data including names, addresses, Social Security numbers, dates of birth, and Comcast account numbers, and encrypting systems. FBCS initially told Comcast in March 2024 that no customer information was affected, but reversed course in July 2024 to disclose that Comcast customer data had been stolen, and subsequently filed for bankruptcy before the breach was publicly disclosed in August 2024. As part of the FCC settlement, Comcast agreed to adopt a compliance plan that includes new vendor oversight practices related to customer privacy and information protection. Comcast stated it was not responsible for and has not conceded any wrongdoing, noting that no Comcast systems were compromised and that FBCS was required to comply with its vendor security requirements. This enforcement action highlights the FCC’s focus on holding companies accountable for third-party vendor security failures.
California, Connecticut, and New York $5.1M Data Breach Settlement Deal With Illuminate Education AGs. Education technology company Illuminate Education Inc. reached a $5.1 million settlement with California ($3.25 million), New York ($1.7 million), and Connecticut ($150,000) over a 2021-2022 data breach that exposed personal information of millions of students when a hacker used a former high-level employee’s login credentials—which should have been terminated upon departure—to access the company’s network and download data including student names, birth dates, ID numbers, medical conditions, racial data, and special education status. Attorneys general found Illuminate failed to implement basic safeguards despite its privacy policy claiming security measures “meet or exceed” legal requirements, marking the first enforcement action under both California’s K-12 Pupil Online Personal Information Protection Act (KOPIPA) and Connecticut’s Student Data Privacy Law of 2016. Required remedial measures include auditing employee credentials, real-time system monitoring, and separating backup databases from active networks. Renaissance Learning Inc. has since acquired Illuminate.
California’s Cybersecurity and ADMT Regulations. Effective January 1, 2026, the California Consumer Privacy Act introduces major new requirements for certain businesses to conduct risk assessments and complete annual cybersecurity audits, as well as implement consumers’ rights to access and opt out of businesses’ use of automated decision-making technology (ADMT). Businesses whose processing of California consumers’ personal information presents “significant risk” to consumers’ security must complete annual cybersecurity audits if, in the preceding calendar year, they either earned fifty percent or more of their gross global revenue from selling or sharing personal information, or had $26.625 million in gross worldwide revenue and processed either the personal information of 250,000 or more consumers/households or the sensitive personal information of 50,000 or more consumers. The audits must be performed by qualified, objective, independent professionals (either internal or external) whose findings must be based on specific evidence rather than management assertions. Businesses must submit annual certifications of completion to the CPPA signed by a member of executive management under penalty of perjury, with audit records retained for at least five years. The regulations also require businesses to conduct risk assessments for new processing activities that present a “significant risk” to consumer privacy, with a senior executive required to submit an annual certified report to the Agency outlining the number and types of risk assessments conducted and the categories of personal information involved. Under the CCPA, ADMT means “any technology that processes personal information and uses computation to replace human decision-making or substantially replace human decision-making,” and businesses using ADMT to make significant decisions about financial services, housing, education, employment, or healthcare must provide notice and honor opt-out requests.
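For compliance teams sizing up whether the annual cybersecurity audit obligation applies, the two revenue and processing-volume prongs above reduce to a simple decision rule. The sketch below is purely illustrative (the function name and inputs are our own framing, not from the regulation) and assumes the threshold question of "significant risk" processing has already been answered in the affirmative:

```python
def cybersecurity_audit_required(
    revenue_share_from_selling_pi: float,    # fraction of gross global revenue from selling/sharing PI
    gross_worldwide_revenue: float,          # USD, preceding calendar year
    consumers_pi_processed: int,             # consumers/households whose personal info was processed
    consumers_sensitive_pi_processed: int,   # consumers whose sensitive personal info was processed
) -> bool:
    """Illustrative check of the CCPA annual cybersecurity audit
    thresholds effective January 1, 2026. Assumes the business's
    processing already presents "significant risk" to consumers'
    security; legal review of the actual regulations is still needed."""
    # Prong 1: 50% or more of gross global revenue from selling or sharing PI
    if revenue_share_from_selling_pi >= 0.50:
        return True
    # Prong 2: $26.625M+ gross worldwide revenue AND large-scale processing
    if gross_worldwide_revenue >= 26_625_000 and (
        consumers_pi_processed >= 250_000
        or consumers_sensitive_pi_processed >= 50_000
    ):
        return True
    return False
```

A business with $30 million in revenue that processed sensitive personal information of 60,000 consumers would be captured under the second prong even if data sales were a negligible share of revenue.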
Amazon Alexa Users Win Certification of 1.2M-Member BIPA Class. An Illinois federal judge certified a class of roughly 1.2 million users of Amazon’s Alexa in litigation accusing the e-commerce giant of unlawfully collecting their biometric voice data, allowing two people to serve as representatives for Illinois residents for whom Amazon allegedly created voiceprints. The court limited the class to users in Illinois for whom Amazon created a voiceprint on or after June 27, 2014, rejecting Amazon’s argument that it does not share or sell customers’ Voice ID information as a merits question not relevant at the class certification stage. The lawsuit alleges Amazon recorded portions of billions of private conversations, including interactions involving children, and stored the data indefinitely, using recordings to train algorithms and AI systems while failing to delete them even when customers requested complete removal. This certification represents one of the most significant biometric privacy class actions and could expose Amazon to substantial liability under Illinois’ Biometric Information Privacy Act.
Rothman Orthopaedics Faces Class Action Over Website Tracking Pixels. Rothman Orthopaedics has been hit with a proposed class action in Pennsylvania alleging the company violated state wiretapping laws by intercepting private healthcare information on its website using third-party tracking pixels. The lawsuit was filed under Pennsylvania’s Wiretapping and Electronic Surveillance Control Act (WESCA) and is part of a growing trend of litigation against healthcare providers over the use of web tracking technologies, including pixel systems and session replay tools, which collect and analyze user activity. Plaintiffs in these cases argue that collecting and sharing user data with third parties—including mental health conditions, treatment searches, provider preferences, and appointment details—is nonconsensual and violates privacy protections. Healthcare-related web tracking cases have been particularly successful under state wiretapping laws, with courts increasingly presuming that online communications regarding health issues are confidential. This lawsuit follows numerous other settlements and enforcement actions against healthcare organizations for similar tracking pixel practices, including cases involving Aspen Dental Management, Mammoth Hospital, and consolidated litigation against Meta over its Pixel tool used by healthcare providers. The trend reflects heightened scrutiny of how healthcare entities deploy tracking technologies that may transmit sensitive patient information to third-party platforms.
Third Circuit Finds Quest Did Not Eavesdrop In Data Privacy Suit. The Third Circuit upheld a win for Quest Diagnostics, which beat a class action alleging it inappropriately shared patient data with Meta Platforms through ad tracking software on its website, with the court reasoning that information was not unlawfully collected because it wasn’t obtained through eavesdropping. The court held that when a user’s browser sends a separate message to Facebook’s servers (triggered by visiting a Quest page), Facebook is a recipient and participant in the communication rather than a third-party wiretapper, meaning California’s Invasion of Privacy Act does not apply. The ruling also rejected claims under California’s Confidentiality of Medical Information Act, finding that plaintiffs alleged only that Quest disclosed that a patient accessed test results—not what kind of medical test was done or what the results were—which does not constitute “substantive” medical information. This precedent significantly benefits companies defending adtech class actions involving pixel tracking, where the third-party receives information directly from users’ browsers.
CONSUMER PROTECTION, AG ENFORCEMENT & MARKETING
Recent enforcement actions and litigation demonstrate heightened scrutiny of consumer-facing marketing claims across multiple sectors, with state attorneys general leading aggressive enforcement, while Washington’s expanded CEMA interpretation has triggered systematic litigation targeting major retailers for promotional email practices.
- Greenwashing enforcement intensifies: The two largest U.S. beef producers (JBS and Tyson), controlling approximately half of beef consumption, have now both settled claims over unsubstantiated net-zero commitments, signaling that companies must frame environmental goals carefully and back them with specific supporting actions or face regulatory action.
- Washington CEMA creates new litigation wave: Following the April 2025 Washington Supreme Court ruling, plaintiff firms continue to systematically target major retailers for email marketing practices, including false urgency claims and inflated pricing, with statutory damages of $500 per email creating potential multi-million dollar exposure.
- Product labeling claims face continued scrutiny: Cases challenging “Made in USA,” “no preservatives,” and certification terminology demonstrate ongoing vulnerability for companies whose marketing implies qualities (American origin, certifications, ingredient claims) that don’t match the actual product composition or qualifications.
Enforcement Actions
NY AG Reaches Greenwashing Settlement with JBS. New York AG Letitia James announced a $1.1 million settlement with JBS USA Food Co., an American subsidiary of Brazilian meat producer JBS, resolving allegations regarding the company’s “net zero by 2040” commitment. The February 2024 lawsuit alleged that JBS’s net-zero goal wasn’t feasible because the company had no viable plan or factual basis to reach it and was actually making plans to increase production, thereby increasing its carbon footprint. JBS agreed to remove or revise its Net Zero by 2040 consumer-facing statements on US websites, presenting its emission plan as a “goal” instead of a “pledge” or “commitment,” and must include specific steps or actions if the company states it is taking steps toward this goal. JBS must pay $1.1 million to Cornell’s College of Agriculture and Life Sciences’ New York Soil Health and Resiliency Program to support “climate-smart agriculture.” The settlement suggests that positioning environmental plans as “goals” instead of “pledges” or “commitments” may present less risk, and providing details of specific steps being taken may be beneficial.
Texas Sues Bristol-Myers For Alleged Drug Misrepresentations. Texas Attorney General Ken Paxton filed a lawsuit against pharmaceutical companies Bristol-Myers Squibb and Sanofi for failing to disclose that their blood thinner Plavix (clopidogrel bisulfate), intended to prevent heart attacks, strokes, and blood clots, had diminished or no effect for many patients—particularly Black, East Asian, and Pacific Islander patients—due to genetic factors affecting drug metabolism. The lawsuit alleges the defendants violated the Texas Deceptive Trade Practices-Consumer Protection Act and the Texas Healthcare Program Fraud Prevention Act, resulting in minority patients being prescribed a medication that was substantially inadequate or inappropriate even as the companies made billions from sales. Attorney Mark Lanier stated the companies “knew Plavix didn’t work for many patients, especially minority patients who already face disproportionate risks of heart disease and stroke, but defendants hid that information to protect their bottom line.” The case highlights growing focus on drug efficacy across diverse populations and transparency regarding clinical outcomes.
Vermont AG v. Angi – $100,000 Deceptive Certification Settlement. On October 13, 2025, Vermont Attorney General Charity R. Clark announced a settlement with Angi, Inc., resolving allegations related to misleading marketing practices involving the company’s use of “Angi’s Certified Pro” terminology. Vermont requires residential contractors to register with the Office of Professional Regulation, but Vermont does not establish any professional qualifications or certification process for contractors. Additionally, Angi does not have a certification process or the ability to credential contractors using the platform. AG Clark alleged that Angi’s use of the “Certified Pro” terminology misled consumers. The settlement requires Angi to stop using the term “Angi Certified Pro,” along with any other term implying governmental credentialing, direct consumers to Vermont resources where they can verify a Vermont contractor’s registration, and notify Vermont residential contractors of registration requirements. The settlement underscores the need for companies to carefully evaluate how they advertise credentialing or certification claims and to ensure their marketing cannot be challenged as misleading or untruthful.
Litigation
Hanes False ‘Last Day’ Email Ads Violate Washington CEMA. A Washington state consumer filed a proposed class action against Hanesbrands Inc., alleging the company’s marketing emails violated Washington’s Commercial Electronic Mail Act by creating false urgency, including emails in October 2024 claiming a free-shipping deal was on its “LAST DAY!” followed by promotion of the same deal three days later, and March 2025 emails stating “Free Shipping Ends Today” followed by another free-shipping offer four days later. The lawsuit identifies 14 specific allegedly misleading emails sent between July 2022 and May 2025, seeking to certify a class of all Washington residents who received these emails with damages of $500 per email. Hanes removed the case to federal court, noting damages likely exceed $5 million. The same plaintiff law firms—Strauss Borrelli PLLC, CohenMalad LLP, and Stranch Jennings & Garvey PLLC—are systematically targeting major retailers following the April 2025 Washington Supreme Court CEMA ruling.
Nordstrom Customers Sue Over Allegedly Misleading Spam. Two plaintiffs filed a proposed class action against Nordstrom Inc. in Washington federal court alleging the company’s Nordstrom Rack marketing emails violated Washington’s Commercial Electronic Mail Act (CEMA) by advertising discounts calculated from inflated “list prices” at which products were never actually sold and creating false urgency by implying sales would end when the discounts were actually “perpetual and never-ending.” Plaintiffs monitored approximately 80,000 products since June 2025 and reviewed archived web pages dating to September 2021 to support their claims. Under CEMA, plaintiffs can collect $500 per offending email, and the suit seeks to certify a nationwide class with damages likely exceeding $5 million.
Tyson Foods Drops Carbon Claims Under Greenwashing Deal. Tyson Foods settled a greenwashing lawsuit filed by the Environmental Working Group, agreeing to stop making “net-zero by 2050” and “climate-smart beef” claims that plaintiffs alleged were not supported by sufficient action, with Tyson unable to make new related climate claims for five years unless an expert concludes they are sufficiently funded. Tyson’s beef production accounts for 85% of the company’s emissions, and the settlement disclosed that Tyson had invested only $65 million to reduce beef-related emissions—roughly 0.1% of its $53 billion in annual revenue. This settlement, combined with the recent JBS USA settlement with New York’s Attorney General, means the two largest U.S. beef producers (together controlling about 50% of beef consumption) have now agreed to stop making unsubstantiated climate claims. Tyson denied wrongdoing and stated the decision to settle was made solely to avoid litigation costs.
Made-in-the-USA Suit Filed Against Black Rifle Coffee. Two consumers filed a proposed class action in California federal court against Black Rifle Coffee Company, the military-themed retailer, alleging its “America’s Coffee” slogan and prominent American flag imagery misleadingly imply U.S. origin when all coffee beans are sourced internationally, with only roasting and bagging occurring domestically. The plaintiffs from California and New York claim they relied on the patriotic labeling when purchasing products like “Wakin’ the Neighbors,” “Spirit of ‘76,” and “Tactisquatch” from retailers including Walmart and Safeway, paying premium prices based on implied American origin. The suit alleges violations of California Business and Professions Code Section 17533.7 (which specifically governs “Made in USA” claims), California consumer protection laws, New York false advertising statutes, and FTC regulations. The complaint notes that while coffee can be grown in Hawaii, Puerto Rico, and parts of California, the U.S. produces less than 1% of the coffee it consumes, and Black Rifle uses no Hawaiian-grown coffee.
Sara Lee Falsely Claims ‘No Preservatives,’ Suit Says. A proposed class of consumers is suing the company behind Sara Lee in New York federal court, alleging its bread products contain citric acid even though the labels indicate they are made without “artificial colors, flavors & preservatives.” The lawsuit contends that while citric acid is found naturally in citrus fruits, the “food-grade” citric acid in Sara Lee products is a commercially manufactured food additive that the FDA classifies as a preservative, creating a misleading impression for consumers seeking preservative-free products. This case follows prior Sara Lee litigation, including a $1 million settlement over “All Butter Pound Cake” labeling, where products also contained soybean oil, resulting in a permanent injunction requiring the company to rebrand products as “Classic Pound Cake.” Food labeling class actions over “natural,” “preservative-free,” and origin claims continue to proliferate as plaintiff attorneys systematically target consumer product companies.
OF NOTE
MAJOR CIRCUIT SPLIT: ARE TEXT MESSAGES “TELEPHONE CALLS” UNDER TCPA’S DO-NOT-CALL PROVISIONS?
Following the Supreme Court’s June 20, 2025, decision in McLaughlin Chiropractic Associates, Inc. v. McKesson Corp., FCC interpretations of the TCPA no longer bind federal district courts, which must instead independently determine the statute’s meaning using ordinary principles of statutory interpretation while affording “appropriate respect” to agency interpretations. The decision built on Loper Bright Enterprises v. Raimondo (2024), which overruled long-standing deference to administrative agencies.
The McLaughlin ruling fundamentally changed the TCPA litigation landscape because, for decades, seven federal circuits had held that the Hobbs Act rendered FCC orders binding on district courts, effectively requiring courts to defer to the FCC’s 2003 order treating text messages as “calls” for TCPA purposes. With that deference eliminated, courts nationwide are now free to conduct their own textual analysis of whether SMS messages constitute “telephone calls” under Section 227(c)(5) of the TCPA, which provides a private right of action for individuals who “received more than one telephone call” in violation of Do-Not-Call regulations.
The stakes are enormous: the TCPA’s combination of near-strict liability, private right of action, and statutory damages of $500 to $1,500 per violation creates massive exposure for high-volume text message campaigns, potentially reaching tens or hundreds of millions of dollars in class action litigation.
Courts Finding DNC Provisions Do NOT Cover SMS Messages
On July 21, 2025, the Central District of Illinois became the first post-McLaughlin court to apply the new framework to DNC claims, ruling in Jones v. Blackstone Medical Services (C.D. Ill. July 21, 2025) that the statute’s DNC protections do not extend to text messages. The court took a strict textual approach, holding that: (1) Section 227(c)(5) provides a private right of action only for “telephone calls” in violation of DNC regulations; (2) neither Section 227(c) nor its implementing rules mention “texts,” “SMS,” or “messages”; and (3) FCC orders expanding “calls” to include texts were issued under Section 227(b), not Section 227(c), and thus don’t apply to the DNC Registry. Most critically, the court reasoned: “In today’s American parlance, ‘telephone call’ means something entirely different from ‘text message.’ Thus, under a plain reading, Section 227(c)(5) of the TCPA does not regulate text messages.” The court emphasized that text messaging did not exist in 1991 when the TCPA was enacted, so the term “telephone call” could not have included text or SMS messages at the time of enactment.
Less than six weeks after Jones, on August 26, 2025, Chief Judge Allen Winsor of the Northern District of Florida reached the same conclusion in Davis v. CVS Pharmacy, Inc. (N.D. Fla. Aug. 26, 2025), with remarkably succinct reasoning: “no ordinary person would think of a text message as a ‘telephone call.’ This conclusion, supported by the ordinary public meaning at the time of the provision’s enactment, is enough to end this case.” The court rejected the plaintiff’s argument that other TCPA provisions defining “telephone solicitations” to include “a telephone call or message” should inform the interpretation of Section 227(c)(5)’s reference to “telephone call,” finding that Congress’s use of different language in neighboring provisions actually undermines the argument that the terms are equivalent. The court also rejected the plaintiff’s contention that the FCC’s 2003 Order treating text messages as calls for TCPA purposes was entitled to “appropriate deference” under McLaughlin, noting that courts can provide appropriate respect without adopting a statutory interpretation that conflicts with the ordinary public meaning of clear statutory text.
On October 24, 2025, the Middle District of Florida became the third court to rule that text messages are not “telephone calls” under the TCPA’s DNC provisions in El Sayed v. Naturopathica Holistic Health, Inc. (M.D. Fla. Oct. 24, 2025). Senior District Judge Steven D. Merryday explained that “it is only through the rulemaking authority of the FCC that the voice call provisions of the TCPA have been extended to text messages. … However, a District Court is not bound by the FCC’s interpretation of the TCPA.” The court adopted Judge Winsor’s opinion from Davis, reasoning that “the statutory text here is clear. A text message is not a ‘telephone call,’” and further concluded that Congress knows the difference between the words it uses—the TCPA’s separate usage of “a call made using a voice service” as distinguished from “a text message sent using a text messaging service” confirms that the terms carry different meanings.
Florida has emerged as the primary battleground for this issue, with all three of the state’s federal district courts weighing in with conflicting rulings during 2025. While the Northern and Middle Districts ruled that texts are not calls, the Southern District of Florida reached the opposite conclusion in Bosley v. A Bradley Hospital LLC, citing pre-McLaughlin caselaw deferring to FCC rules rather than engaging in textualist inquiry.
While the parties reportedly resolved both the Northern District and Southern District cases without appeal, the split of authority raises the likelihood that the Eleventh Circuit will eventually be asked to weigh in on this critical issue.
Courts Finding DNC Provisions DO Cover SMS Messages
In Mujahid v. Newity (N.D. Ill. Nov. 10, 2025), the Northern District of Illinois denied the defendant’s motion to dismiss, concluding that SMS messages are “calls” under the TCPA for five distinct reasons: (1) Webster’s Dictionary defines “call” in this context as “to communicate with or try to get into communication with a person by a telephone,” and a text message falls within that definition; (2) interpreting Section 227(c) to include text messages is consistent with the text of Section 227 as a whole; (3) that interpretation is confirmed by the purpose and context of the TCPA, given Congress’s privacy-focused purpose, which does not depend on whether a message is oral or written; (4) the interpretation is consistent with guidance from the FCC; and (5) it aligns with the weight of authority, including the Supreme Court’s recognition that text messages are calls for purposes of Section 227(b) and the Seventh Circuit’s similar holdings in other TCPA contexts. The Mujahid decision effectively created a 4-3 split among federal district courts, with four courts now holding that the TCPA’s DNC rules cover SMS messages and three holding that they do not.
Critical Implications for Businesses
The current uncertainty creates significant compliance challenges and may encourage forum shopping, with plaintiffs’ attorneys strategically filing cases in jurisdictions with favorable precedent while emphasizing state law claims unaffected by federal interpretations. Businesses cannot assume uniform protection across districts—messaging programs reaching Florida consumers may face different interpretations depending on where a lawsuit is filed. Even in jurisdictions finding that texts are not covered by DNC provisions, the decisions address only DNC registry obligations and leave intact other TCPA requirements, including prior express written consent for marketing texts using automated systems. Companies must still maintain detailed consent records, honor opt-out requests immediately, and avoid texting between 9 PM and 8 AM recipient time. State-level regulations, many of which impose stricter requirements than federal law, also remain enforceable regardless of federal court interpretations.
The litigation risk remains severe—a company sending 100,000 texts to DNC-registered numbers could face $50 million to $150 million in potential statutory damages under the TCPA, plus attorney fees, making this one of the most significant exposure areas in consumer protection litigation. The decisions face certain appeal, with the Eleventh Circuit likely to review within 12-18 months, potentially reversing or modifying the district court analyses. Until circuit courts or Congress provide definitive guidance, businesses should reassess compliance programs, maintain robust opt-in and opt-out procedures, and stay alert to legal and regulatory developments, with compliance strategies remaining conservative until appellate clarity emerges.
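For readers modeling exposure internally, the arithmetic behind the figures above can be sketched in a few lines. This is a minimal illustration only, not legal advice: the $500 base and $1,500 willful-violation ceiling come from the TCPA’s statutory damages provisions discussed above, while the message count and the `tcpa_exposure` helper are hypothetical inputs for the example.

```python
# Illustrative sketch of TCPA statutory damage exposure.
# Assumptions (hypothetical, for illustration): every message is a
# violation, $500 per violation, trebled to $1,500 for willful violations.

def tcpa_exposure(num_violations: int,
                  per_violation: int = 500,
                  treble_multiplier: int = 3) -> tuple[int, int]:
    """Return (base, trebled) statutory exposure in dollars."""
    base = num_violations * per_violation
    return base, base * treble_multiplier

low, high = tcpa_exposure(100_000)
print(f"${low:,} to ${high:,}")  # $50,000,000 to $150,000,000
```

The same per-message multiplication drives the CEMA figures earlier in this issue: at $500 per offending email, a class need only span 10,000 emails to clear the $5 million federal removal threshold.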