NetChoice's Minnesota lawsuit, in five claims

10 min read
May 3, 2026

On April 29, NetChoice, the trade group for Meta, Google, YouTube, Amazon, Snap, and TikTok, filed a lawsuit against Minnesota in federal court. It is challenging Article 19, Section 13 of House File 2 (HF 2), a law signed by Governor Tim Walz on June 14, 2025, that requires social media platforms to show a clear mental-health warning to users in the state. The law will take effect on July 1, 2026. NetChoice claims that forcing platforms to display this warning violates the First Amendment by compelling speech. This is the second time NetChoice has taken Minnesota to court. It has also sued Texas, Florida, California, Utah, Ohio, Mississippi, New York, and Colorado over similar laws about child safety, content moderation, and digital age verification. Suing states over these issues is what the group does.

NetChoice submitted a 48-page complaint in the U.S. District Court for the District of Minnesota to try to stop the state’s new social media warning label law. The case is called NetChoice v. Ellison. The tech industry is represented by Aaron Van Oort from Faegre Drinker Biddle & Reath. Van Oort served as Minnesota’s Solicitor General and is now leading the case against the state he once represented. Instead of hiring a typical D.C. law firm, NetChoice chose someone who knows the local federal courts well. The complaint is detailed, carefully written, and uses all the main legal arguments against compelled-speech rules.

What the Minnesota law actually requires

The press release describes the law as a “censorship label” requirement, but the actual statute is less strict.

Article 19, Section 13 of Minnesota House File 2 (2025 First Special Session, Chapter 3) created a new section of the Minnesota Statutes — § 325M.335, titled “Mental Health Warning Label.” Effective July 1, 2026, a covered “social media platform” must display a “conspicuous mental health warning label” each time a user accesses the platform. The label disappears when the user either exits the platform or “acknowledges the potential for harm and chooses to proceed to the social media platform despite the risk.”

The label must do two things. It must warn users about the “potential negative mental health impacts” of using the platform. And it must include access to mental health resources, specifically the federally administered 988 Suicide and Crisis Lifeline. The Commissioner of Health, in consultation with the Commissioner of Commerce, was charged with developing the actual content of the label by March 1, 2026, and the guidelines must be “based on current evidence regarding the negative mental health impacts of social media platforms.”

That covers the full requirement. There is no ban, age check, or content moderation rule. Users can say what they want, and platforms can publish what they choose. The law only asks for a one-click acknowledgment, similar to the “I understand” buttons people already see on cookie banners, age gates, terms of service screens, HIPAA notices, and the alcohol-cancer warning the Surgeon General called for in January 2025.

The five claims, examined

NetChoice v. Ellison · The argument at a glance

The case in five lines:

  1. NetChoice claims: Forced speech, a direct First Amendment violation.
     Counter: The warning is factual, not opinion. The same legal rule allows tobacco warnings.
  2. NetChoice claims: The warning broadcasts the government's controversial views.
     Counter: The Surgeon General, APA, and AAP have all confirmed the harm. Not controversial.
  3. NetChoice claims: Minnesota targets free-speech platforms and exempts others.
     Counter: The exclusions turn on how products work, not viewpoint. ISPs, search, and email are also excluded.
  4. NetChoice claims: Users cannot access content until they affirm the warning.
     Counter: One click and the warning is gone. Same pattern as cookie banners and HIPAA notices.
  5. NetChoice claims: Public health is a backdoor for the government to control speech.
     Counter: A warning is information, not control. The cases NetChoice has won were about access bans, not warnings.

This is a 30-second version of the argument; each claim is rebutted in full below.

Claim 1: “This is compelled speech, a direct violation of the First Amendment.”

NetChoice frames any required disclosure as an automatic violation of the First Amendment. That overstates the doctrine in a specific, exploitable way.

Under Zauderer v. Office of Disciplinary Counsel, 471 U.S. 626 (1985), the Supreme Court held that the government may compel a commercial actor to disclose “purely factual and uncontroversial information about the terms under which his services will be available,” subject only to rational-basis-style review that the disclosure be reasonably related to a legitimate state interest and not “unduly burdensome.” This is the doctrine that has upheld tobacco warnings, alcohol warnings, prescription drug labeling, the Schumer Box on credit cards, FDA nutrition labels, and California’s Proposition 65 chemical warnings (National Association of Wheat Growers v. Bonta, 85 F.4th 1263 (9th Cir. 2023)). Compelled commercial disclosure of factual public health information is one of the oldest and best-established forms of First Amendment-compliant regulation in American law.

The Ninth Circuit, in the very NetChoice victory they keep citing, expressly preserved this rule. From NetChoice, LLC v. Bonta, 113 F.4th 1101, 1117 (9th Cir. 2024): “As for laws that compel the disclosure of ‘purely factual and uncontroversial’ commercial speech, such laws are subject to a form of rational basis review.” The court struck the California Age-Appropriate Design Code’s data protection impact assessment requirement only because it forced businesses to render a subjective opinion about whether their products harm children. That is affirmative subjective speech, not factual disclosure.

Minnesota’s law does the opposite. The platform displays a state-authored, evidence-based factual notice and a link to the 988 hotline. It is closer to the cigarette package than to the AADC’s compelled opinion-writing. The legal pivot is whether Zauderer applies. If it does, Minnesota wins. If a court is persuaded to apply strict scrutiny instead, Minnesota almost certainly loses. Everything else in this lawsuit is plumbing around that single doctrinal question.

Claim 2: The warning forces websites to broadcast “the government’s controversial views.”

The law requires the label to be “based on current evidence regarding the negative mental health impacts of social media platforms” (Minn. Stat. § 325M.335, subd. 2(a)). That evidence base is federal.

In May 2023, the U.S. Surgeon General released a national advisory titled “Social Media and Youth Mental Health.” The advisory concluded, “we cannot conclude that social media is sufficiently safe for children and adolescents.” It found that teens who spend more than three hours a day on social media have about twice the risk of depression and anxiety symptoms. On June 17, 2024, Surgeon General Vivek Murthy called on Congress to require a warning label on social media platforms, a policy Minnesota later adopted. The American Psychological Association’s 2023 Health Advisory on Social Media Use in Adolescence and its follow-up report, as well as the American Academy of Pediatrics, reached similar conclusions.

A factual statement supported by a federal Surgeon General Advisory, the APA, and the AAP is not “controversial” under the Zauderer standard. Federal courts have used the term to mean only value judgments that are politically or ideologically disputed, not scientific questions where public health authorities agree. See CTIA v. City of Berkeley, 928 F.3d 832 (9th Cir. 2019), which upheld a cell phone disclosure tracking the FCC’s own RF exposure guidelines.

NetChoice’s most strategic move here is to use the Surgeon General’s own honesty against him. The 2023 advisory admits there are “known evidence gaps” in the research. NetChoice argues this means the Surgeon General admits the science is unsettled. That is not the case. Admitting research gaps does not mean there is controversy. The Surgeon General made a public health decision after reviewing all the evidence, including its limits. This is how regulatory science works.

Claim 3: “Minnesota carved out TV networks and gaming platforms, yet targets places like YouTube and X because that is where free speech thrives.”

This is the easiest claim to rebut by looking at the statute itself, and it is worth showing readers directly. The press release suggests that Minnesota’s law targets certain platforms because of their political content, but the actual text shows the opposite.

Minn. Stat. § 325M.31(j) defines a “social media platform” as a service that “allows an account holder to create, share, and view user-generated content for a substantial purpose of social interaction, sharing user-generated content, or personal networking.” It then explicitly excludes:

  • Internet search providers
  • Internet service providers
  • Email services
  • Streaming services, online video games, e-commerce, and other websites where the content is not user-generated and interactive functions are incidental.
  • Employer communication services for business activities
  • Advertising networks
  • Telecommunications carriers
  • Broadband services
  • Single-purpose community groups for education or public safety
  • Teleconferencing or video-conferencing services
  • Cloud computing services
  • Technical support platforms
  • Platforms designed primarily for creative professional users (portfolio platforms, creative networking)

None of these exclusions are based on political viewpoint. They are based on how the products work. The Minnesota legislature drew the line between services designed mainly for social interaction, which the Surgeon General and APA identified as increasing mental health risks, and services that do not work that way. ISPs are excluded not for political reasons but because no one has argued that their product design causes adolescent depression. The same logic applies to cloud storage, telecoms, and portfolio sites for creative professionals.

The Supreme Court’s decision in Williams-Yulee v. Florida Bar, 575 U.S. 433 (2015), and the Ninth Circuit’s opinion in Wheat Growers both confirm that a regulation does not have to be perfectly comprehensive to survive First Amendment scrutiny. Underinclusiveness is only a problem if it shows the stated interest is not genuine. In this case, the law draws the line exactly where the public health evidence does. That is not a pretext; it is regulatory specificity.

Claim 4: Minnesotans “will not be able to access digital content until they ‘affirm’ they understand the allegations the government is requiring them to see.”

This misrepresents the statute. Subdivision 1(a)(2) gives users two ways for the label to disappear: they can exit the platform, or they can “acknowledge the potential for harm and choose to proceed to the social media platform despite the risk.” This is a one-click acknowledgment, just like the “I understand” buttons users already see on cookie banners, age gates, terms-of-service screens, and HIPAA notices.
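To make concrete how light this mechanic is, here is a minimal sketch of the dismissal logic in plain JavaScript. It is an illustration only: the statute specifies behavior, not implementation, and every name here (`createWarningGate`, `onAccess`, `acknowledge`, `exit`) is hypothetical rather than drawn from the law or any platform's code.

```javascript
// Hypothetical sketch of the flow in Minn. Stat. § 325M.335, subd. 1(a):
// the label appears each time a user accesses the platform, and it is
// dismissed either by a single acknowledgment or by exiting.
function createWarningGate() {
  let acknowledged = false; // resets on each new access

  return {
    // User opens the platform: the label must be shown again.
    onAccess() {
      acknowledged = false;
      return { labelVisible: true };
    },
    // One click: "acknowledge the potential for harm and choose to proceed."
    acknowledge() {
      acknowledged = true;
      return { labelVisible: false, accessGranted: true };
    },
    // The other statutory path: the user simply exits.
    exit() {
      return { labelVisible: false, accessGranted: false };
    },
    isAcknowledged() {
      return acknowledged;
    },
  };
}
```

Note that the only state involved is a single per-access flag: there is no identity check, no age estimate, and no content decision anywhere in the flow.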

The law’s repeated-display requirement is a real design choice, and some courts may scrutinize the repetition. But repetition of a one-click acknowledgment is different from an access ban. It is not a paywall, age gate, or restriction on what users can read or post. For comparison, California’s Prop 65 carcinogen warnings, OSHA workplace hazard placards, alcohol pregnancy warnings under 27 U.S.C. § 215, and the alcohol-cancer warning the Surgeon General called for in January 2025 are not treated as access bans on the products they cover. Even NetChoice’s own argument concedes that a brief acknowledgment does not “prevent” speech; it merely accompanies it.

Claim 5: Politicians are using “public health” as a “backdoor means to control online speech,” and adults can decide for themselves and for their children.

A warning is the opposite of speech control. It adds information and lets users make their own choices, which is precisely the libertarian principle NetChoice says it champions. The Supreme Court has recognized for decades that disclosure requirements enhance, rather than limit, the marketplace of ideas because they provide consumers with information instead of suppressing speech (Zauderer, 471 U.S. at 650; Milavetz v. United States, 559 U.S. 229 (2010)).

Minnesota’s statute does not restrict any user’s content. It does not ban any platform. It does not prohibit any post. It does not condition access on identity verification. Those are the features that doomed other state laws NetChoice has won against. NetChoice v. Bonta (9th Cir. 2024) struck down California Age-Appropriate Design Code provisions that required platforms to opine on harm in DPIA reports and to enforce age-estimation, mitigation, and content-moderation duties — affirmative restrictions on operations and forced subjective speech. NetChoice v. Yost (S.D. Ohio 2024) and NetChoice v. Griffin (W.D. Ark. 2024) enjoined parental-consent and age-verification mandates that conditioned minors’ access to lawful speech. NetChoice v. Reyes (D. Utah 2024) struck a curfew/age-verification regime.

None of these laws were narrow factual-disclosure regimes. All imposed access restrictions or compelled subjective speech. NetChoice cites them throughout the Minnesota complaint as if they are interchangeable precedents. They are not. A federal court applying Zauderer faithfully should evaluate Minnesota’s law on its own much narrower terms, against the framework that has upheld tobacco, alcohol, drug, and chemical warnings for decades.

NetChoice's cited cases · 2024–2025

The cases NetChoice cites, and why none of them match Minnesota's law:

NetChoice v. Bonta (9th Cir. 2024) · California Age-Appropriate Design Code · Different from Minnesota's law
  • What the law required: Platforms had to write reports about whether their products harm children, plus run age checks and follow content rules.
  • Why the court struck it: It forced platforms to share their own opinion about harm. That is opinion, not fact.
  • Minnesota's law: Shows users a warning written by the state. No platform opinion required. The same court said in Bonta that factual warnings like this are allowed.

NetChoice v. Yost (S.D. Ohio 2024) · Parental consent law · Different from Minnesota's law
  • What the law required: Parents had to give consent before minors could create accounts. Platforms had to verify each user's age.
  • Why the court struck it: It blocked access to legal speech unless users proved their identity first.
  • Minnesota's law: No ID required. No parental consent. Users tap once and continue. Nothing in the law blocks access based on age or identity.

NetChoice v. Griffin (W.D. Ark. 2024–2025) · Social Media Safety Act · Different from Minnesota's law
  • What the law required: Age checks and parental consent before minors could use covered platforms.
  • Why the court struck it: It blocked minors from speaking online based on their age. The law was also unclear about which platforms it covered.
  • Minnesota's law: No age check. No ID. The law applies based on how a platform works and its size, not who its users are.

NetChoice v. Reyes (D. Utah 2024) · Social Media Regulation Act · Different from Minnesota's law
  • What the law required: Curfew hours when minors could not use platforms, plus age checks and parental controls.
  • Why the court struck it: It restricted speech by time of day and required ID to enforce the curfew.
  • Minnesota's law: No curfew. No time-of-day rules. No age checks. Adults and minors see the same one-click warning.

NetChoice v. Weiser (D. Colo. 2025, on appeal) · Mental health warning law · Most similar, but still different
  • What the law required: A mental health warning on social media, applied only to users under 18.
  • Why the court paused it: The court treated the warning as forced speech. The case is on appeal.
  • Minnesota's law: Minnesota argues its warning is a factual public health notice, not forced opinion. The Colorado ruling is in a different federal circuit, so it does not bind the Minnesota court.

Sources: NetChoice v. Ellison complaint citations; published opinions from each case.

The argument NetChoice doesn’t lead with

The strongest legal claim in NetChoice’s filing isn’t in their press release. Spokesperson Paul Taske said Minnesota’s law is content-discriminatory because it targets some platforms and exempts others. As shown above, the statutory text rebuts that on its face. The exclusions are about product architecture, not viewpoint.

But there is a more serious legal argument NetChoice does make in the complaint itself, and Minnesota will have to defend against it: the law applies to adults as well as minors. NetChoice will likely point to Free Speech Coalition, Inc. v. Paxton, 606 U.S. 461 (2025), where the Supreme Court considered Texas’s age-verification law for adult content. The Court applied intermediate scrutiny — not strict — and ultimately upheld the Texas statute. NetChoice’s challenge will be to argue that Minnesota’s warning is more burdensome on adult speech than Texas’s age-verification requirement, which is a hard sell because Minnesota’s law requires no identity verification at all.

This is the argument Minnesota’s lawyers will need to win. The defensive frame is straightforward: the warning informs, it does not restrict. Acknowledgment is one click. The 988 hotline link is a public health resource, not a barrier. Adults retain full access to the platform with a single click, the same model used for OSHA workplace warnings and pharmaceutical packaging that adults have lived with for 50 years. The warning’s burden on adult speech is dramatically lighter than the Texas law that already cleared intermediate scrutiny. But it is a real argument, and pretending it isn’t would be naive.

What this lawsuit is telling us

Industries don’t typically spend this much on litigation unless they expect the regulation to bite. NetChoice hired a former Minnesota Solicitor General to handle a 48-page complaint over a one-click acknowledgment with a 988 link. That is a significant use of legal resources from a trade group whose members have almost unlimited legal capacity. They are not doing this because the warning is just symbolic. They are doing it because the warning could change user behavior at the most important moment: when people return to the platform.

That is the main purpose of a warning label. The Surgeon General called for one because the current system of industry self-regulation and platform-controlled disclosures led to the worsening outcomes he is now warning about. NetChoice’s complaint lists, in paragraphs 29 through 52, the many voluntary self-regulatory steps platforms have taken, including parental controls, screen-time tools, sensitive content filters, family pairing features, and default settings for teen accounts. All of these are real, but none have been enough. That is why the Surgeon General issued the advisory and why a Los Angeles County jury awarded $6 million in March 2026, finding Meta and YouTube failed to adequately warn users of the risks.

Minnesota bet that other states would follow. California already has. The lawsuit is the price of being first. It may also be the cost of being right.

References

  1. NetChoice, "Free Speech Online in Minnesota Doesn't Need Government Censorship Labels" (press release, April 29, 2026). https://netchoice.org/free-speech-online-in-minnesota-doesnt-need-government-censorship-labels/
  2. NetChoice v. Ellison, Complaint for Declaratory and Injunctive Relief, U.S. District Court for the District of Minnesota (filed April 29, 2026). https://netchoice.org/wp-content/uploads/2026/04/NetChoice-v-Ellison-2026-04-29-01-Complaint-for-Declaratory-and-Injunctive-Relief.pdf
  3. NetChoice, "NetChoice v. Ellison – Minnesota Censorship Labels" (case page). https://netchoice.org/netchoice-v-ellison-minnesota-censorship-labels/
  4. NetChoice, "About: Our Members." https://netchoice.org/about/
  5. Minnesota House File 2, 2025 First Special Session, Chapter 3, Article 19, Section 13, signed June 14, 2025; codified at Minn. Stat. § 325M.335 ("Mental Health Warning Label"); effective July 1, 2026. https://www.revisor.mn.gov/bills/bill.php?b=House&f=HF2&ssn=1&y=2025
  6. Minn. Stat. § 325M.31 (definitions and exclusions). https://www.revisor.mn.gov/statutes/cite/325M.31
  7. Zauderer v. Office of Disciplinary Counsel, 471 U.S. 626 (1985). https://www.law.cornell.edu/supremecourt/text/471/626
  8. National Association of Wheat Growers v. Bonta, 85 F.4th 1263 (9th Cir. 2023).
    https://cdn.ca9.uscourts.gov/datastore/opinions/2023/11/07/20-16758.pdf
  9. NetChoice, LLC v. Bonta, 113 F.4th 1101 (9th Cir. 2024).
    https://cdn.ca9.uscourts.gov/datastore/opinions/2025/09/09/25-146.pdf
  10. CTIA v. City of Berkeley, 928 F.3d 832 (9th Cir. 2019).
  11. Williams-Yulee v. Florida Bar, 575 U.S. 433 (2015). https://www.law.cornell.edu/supremecourt/text/13-1499
  12. Milavetz, Gallop & Milavetz, P.A. v. United States, 559 U.S. 229 (2010). https://www.law.cornell.edu/supremecourt/text/08-1119
  13. Free Speech Coalition, Inc. v. Paxton, 606 U.S. 461 (2025).
  14. NetChoice v. Yost, S.D. Ohio (2024).
    https://netchoice.org/wp-content/uploads/2024/01/2024.01.05-NetChoice-v-Yost-Complaint-for-Declaratory-and-Injunctive-Relief-FILED.pdf
  15. NetChoice v. Griffin, W.D. Ark. (2024).
    https://netchoice.org/wp-content/uploads/2026/04/NetChoice-v.-Griffin-Arkansas-Act-900-Enjoined_Apr-20-2026.pdf
  16. NetChoice v. Reyes, D. Utah (2024).
    https://netchoice.org/wp-content/uploads/2023/12/NetChoice-v-Reyes_Official-Complaint_FINAL.pdf
  17. U.S. Surgeon General, Social Media and Youth Mental Health: The U.S. Surgeon General's Advisory (May 2023). https://www.hhs.gov/sites/default/files/sg-youth-mental-health-social-media-advisory.pdf
  18. Vivek H. Murthy, "Surgeon General: Why I'm Calling for a Warning Label on Social Media Platforms," The New York Times (June 17, 2024). https://www.nytimes.com/2024/06/17/opinion/social-media-health-warning.html
  19. American Psychological Association, Health Advisory on Social Media Use in Adolescence (May 2023). https://www.apa.org/topics/social-media-internet/health-advisory-adolescent-social-media-use
  20. American Academy of Pediatrics, Center of Excellence on Social Media and Youth Mental Health. https://www.aap.org/en/patient-care/media-and-children/center-of-excellence-on-social-media-and-youth-mental-health/
  21. U.S. Surgeon General, Alcohol and Cancer Risk: 2025 Surgeon General's Advisory (January 2025). https://www.hhs.gov/sites/default/files/oash-alcohol-cancer-risk.pdf
  22. 27 U.S.C. § 215 (alcoholic beverage labeling). https://www.law.cornell.edu/uscode/text/27/215
  23. Dani Anguiano, "Meta and YouTube designed addictive products that harmed young people, jury finds," The Guardian (March 2026) https://www.theguardian.com/media/2026/mar/25/jury-verdict-us-first-social-media-addiction-trial-meta-youtube
  24. California Assembly Bill 56 (2025–2026 Regular Session, chaptered) – social media warning label legislation. https://legiscan.com/CA/text/AB56/id/3273339

Further Reading

  1. Reuters Connect, "EXPLAINER: Why are social media companies under pressure over 'addictive' platform design?" — Useful international context on litigation and regulatory trends targeting platform design.
  2. United Kingdom Government, Online Safety Act 2023 — A simple guide. https://www.gov.uk/government/publications/a-guide-to-the-online-safety-act — A comparative example of how another democracy has addressed platform-design harms to minors.
  3. European Commission, The Digital Services Act. https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/digital-services-act_en — The EU's framework for platform transparency, risk assessment, and minor protection.
  4. Australian Government, "Social Media Ban in Australia: A Simple Guide." — Background on Australia's under-16 social media restrictions, often invoked in the U.S. policy debate.
  5. World Health Organization, WHO Report on the Global Tobacco Epidemic 2019: Offer Help to Quit Tobacco Use. https://www.who.int/publications/i/item/9789241516204 — Cited frequently in disclosure-law debates as the gold-standard public health warning model.
  6. Noar et al., "Understanding Why Pictorial Cigarette Pack Warnings Increase Quit Attempts," PubMed Central. https://www.ncbi.nlm.nih.gov/pmc/ — Empirical evidence that point-of-use warnings change behavior, the underlying premise of Minnesota's law.
  7. U.S. Food and Drug Administration, "Prescription Stimulant Medications" consumer information. https://www.fda.gov/ — Example of federally compelled disclosure for products with documented harm to adolescents.
  8. Federal Trade Commission, Alcohol Marketing and Advertising: A Report to Congress. https://www.ftc.gov/ — Federal precedent for industry-targeted disclosure rules grounded in public health concerns.
  9. Federal Trade Commission, How to Make Effective Disclosures in Digital Advertising (.com Disclosures). https://www.ftc.gov/business-guidance/resources/com-disclosures-how-make-effective-disclosures-digital-advertising — The federal standard for what makes a digital disclosure "clear and conspicuous," the same statutory phrase Minnesota used.
  10. Federal Trade Commission, Competition and Consumer Protection Guidance Documents. https://www.ftc.gov/business-guidance — General federal framework for consumer-protection disclosures, useful background for readers new to compelled-speech regulation.

About the Author

Lucy Bichakhchyan is a technology commercialization specialist based in Minneapolis. She works at a startup, holds an MS in Management of Technology from the University of Minnesota, and writes about tech policy in Minnesota. The views expressed are her own and do not represent those of any employer or organization. This article is commentary on a pending lawsuit and is not legal advice.
