There's nothing neutral about Big Tech
1 April 2026
Opinion: Social media platforms have long argued that mental health is the responsibility of users, not the designers and engineers who made them. Alexandra Andhov considers the meaning of the jury ruling that found otherwise.
On March 25, 2026, a Los Angeles jury delivered a verdict that many law and technology scholars had hoped for but considered unthinkable. For the first time, a US jury was asked to decide whether platforms could be held responsible for harm to their users, not because of the content users posted on them, but because of how the platforms themselves were designed.
The plaintiff was a 20-year-old California woman who began using YouTube when she was around six and created an Instagram account at nine. Her lawsuit alleged that the platforms’ design features – likes, algorithmic recommendation engines, infinite scroll, autoplay, and others – addicted her at a young age and contributed to her depression, anxiety, body dysmorphia, and suicidal thoughts.
Many New Zealanders will recognise this story intimately.
After weeks of testimony and more than 40 hours of deliberation, the jury found both Meta and Alphabet (which owns Google) negligent in their design and operation of Instagram and YouTube, concluding that the platforms’ design was a substantial factor in harming the plaintiff’s mental health. The jury recommended punitive damages on top of compensatory ones, a signal that the case is not merely about compensation, but about corporate conduct that deserves punishment.
The verdict is already being called a “Big Tobacco moment” for tech. Whether or not that analogy holds legally, it holds morally. And it matters far beyond the courtroom.
For years, we have been told a reassuring story about technology. That platforms are neutral pipes. That they merely connect people who otherwise would be lonely. That what happens on these platforms is the responsibility of users, not engineers. That if a child spirals into depression after years of algorithmically curated content designed to maximise the time they spend on a screen, that is a matter of parenting, or psychology, or individual fragility, not of product design. This verdict says otherwise.
The core claim is that the harm arises not from third-party content, but from the platforms’ own engineering and design decisions: the algorithms and system design that make up the “informational architecture” shaping users’ experience. In other words, the feed is not neutral. The scroll is not neutral. The notification is not neutral. Each is the result of a deliberate engineering choice, made by a company in pursuit of engagement, in pursuit of profit.
Internal communications disclosed in the proceedings included exchanges among Meta employees comparing the platform’s effects to pushing drugs and gambling. This is not a company that didn’t know what it was doing. The plaintiff alleged that infinite scroll, autoplay, variable notification patterns, and algorithmic targeting were designed to maximise dopamine triggers, especially in adolescents, and that these features were core to the product strategy. The companies knew. The management knew and potentially pushed for more. And for decades, they were protected, legally, politically, and rhetorically, by the fiction that they were merely hosting other people’s words.
That fiction has been the foundation of the tech industry’s extraordinary immunity from accountability. Section 230 of the Communications Decency Act, US legislation written in 1996, when the internet was a novelty, became the legal shield behind which trillion-dollar companies sheltered themselves from the consequences of their own design decisions. Since that law passed, the internet has evolved dramatically, and litigants are now looking to courts to step in where legislation has failed to keep pace.
This is the lesson that regulators, researchers, and the public need to absorb. Regulation is necessary, but the way we conceive of it must change fundamentally, on at least two fronts: what we regulate, and how.

The instinct, when a technology causes harm, is to ban it. Australia’s law prohibiting social media for users under 16 took effect in December 2025, and many nations, including New Zealand, are looking to it as a potential template.
We know what happens when a prohibition is introduced; we have seen it over and over again. And if prohibition is hard to enforce in the analogue world, it is even harder in the digital one.
A determined teenager with a virtual private network and a borrowed email address is not meaningfully protected by a ban; they can find their way around such restrictions. What bans do, however, is signal that a society has made a value judgment, that it considers the harm real enough to act on. That signalling matters. But a ban will never be a silver bullet.
We need to rethink regulation in the digital space. We must rethink accountability and shift the burden of proof. We must rethink the duties of tech companies towards their “users”.

The Los Angeles verdict reasserts a principle we should never have abandoned: that those who build things are responsible for what those things do. Technology is not neutral. It is made by people, for purposes, with consequences. A platform that was engineered to be addictive is an addictive product. A company that profits from that addiction while concealing what it knows is not a passive intermediary, but an active perpetrator, and should be treated as one.

The era of unregulated, unaccountable design may not be over, but we need to start building a new regulatory toolbox capable of matching it. Bans alone will never be enough. What we need is something more ambitious: regulation that is as sophisticated as the technology it governs, and accountability that runs all the way down to the design decision and up to management.
Professor Alexandra Andhov is the organiser of the inaugural Law, Technology and Government Conference, which brings together experts in law, policy, business, and technology to explore the role of government in navigating technological change. The conference is hosted by the University of Auckland’s faculties of Law and Business and Economics, April 15-17.
Professor Alexandra Andhov is the director of the Centre for Advancing Law and Technology Responsibly at the University of Auckland’s Faculty of Law and Faculty of Business and Economics.
This article reflects the opinion of the author and not necessarily the views of Waipapa Taumata Rau University of Auckland.
This article was first published on Newsroom on 1 April 2026.
Media contact
Margo White | Research communications editor
Mob 021 926 408
Email margo.white@auckland.ac.nz