On 25 March 2026, a Los Angeles jury reached a verdict that will ripple through boardrooms, courtrooms, and constitutional debates for years. It found that Meta and Google had deliberately designed Instagram and YouTube to be addictive, and that the design itself had caused serious psychological harm to a young woman who began using both apps as a child. The jury awarded $6 million in damages, assigning seventy percent of responsibility to Meta and thirty percent to Google. A separate New Mexico jury, returning its verdict in the same week, ordered Meta to pay $375 million for failing to protect young users from child predators on its platform.
The dollar amounts are almost beside the point. What the Los Angeles jury established is something more fundamental: that the architecture of a digital platform is a product, and that a product which causes foreseeable harm to human beings carries liability for the choices its designers made.
This argument has not always found traction in American courts. For decades, technology companies sheltered beneath Section 230 of the Communications Decency Act, which broadly exempts platforms from liability for user-generated content. The platforms interpreted that immunity expansively: if harm occurred on their networks, the reasoning went, it was the consequence of users' choices, not corporate engineering.
The Los Angeles jury did not accept this framing. The case was argued not around what users saw but around what the systems were built to do: what was designed, how it was designed, what the engineers who built them understood about the psychological effects, and whether users, particularly children, had any meaningful understanding of what they were entering into when they downloaded an application and created a profile.
Design is an act of intent. When engineers build systems that identify emotional states and respond to them by serving content calibrated to maximise engagement, they are not passively reflecting human behaviour. They are shaping it. Internal documentation cited in the proceedings showed that Meta was acutely aware of the strategic value of acquiring users at the youngest possible age, treating youth not as a protection concern but as a commercial opportunity. The jury found that the design translated that awareness into a product that caused harm.
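To make the phrase "design is an act of intent" concrete, consider a deliberately simplified sketch of an engagement-maximising feedback loop, written here in Python. Nothing below is any company's actual code: the class name, the selection strategy, and the engagement signal are all illustrative assumptions. The point is only that the quantity being maximised is written down, explicitly, by a designer.

```python
import random

class EngagementRanker:
    """Hypothetical sketch: pick content by predicted engagement,
    then learn from what users actually do. Not any platform's real code."""

    def __init__(self, items):
        # Running average of observed engagement (e.g. seconds watched) per item.
        self.estimates = {item: 0.0 for item in items}
        self.counts = {item: 0 for item in items}

    def select(self, explore_rate=0.1):
        # Occasionally try something new; otherwise serve whatever is
        # predicted to maximise engagement. Note what the objective is
        # and is not: time on platform, not wellbeing or consent.
        if random.random() < explore_rate:
            return random.choice(list(self.estimates))
        return max(self.estimates, key=self.estimates.get)

    def record(self, item, engagement_seconds):
        # Feed the observed reaction back into the model, closing the loop.
        self.counts[item] += 1
        n = self.counts[item]
        self.estimates[item] += (engagement_seconds - self.estimates[item]) / n

# Illustrative use: the system serves, measures, and adapts.
ranker = EngagementRanker(["calm_video", "outrage_video", "anxiety_video"])
choice = ranker.select()
ranker.record(choice, engagement_seconds=42.0)
```

Every element of that sketch is a choice: which signal to measure, which quantity to maximise, how aggressively to explore. A loop of this kind is not neutral intermediation, and that is precisely the distinction the jury drew.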
The constitutional argument that follows is not difficult to state. Meaningful consent to the terms of any arrangement requires genuine understanding of what one is consenting to. Most users, and certainly most children, do not understand when they begin using a social media application that their emotional responses are being monitored, classified, and fed back into a system calibrated to maximise time on the platform. The transaction is asymmetric by design. The platforms have had strong commercial incentives to ensure that understanding on the user side remained minimal, and that the architecture producing the engagement remained opaque.
The plaintiffs' legal team drew deliberate comparisons to the decades-long campaign against the tobacco industry. The turning point in that campaign came when courts accepted that personal choice was not a sufficient defence for manufacturers who had engineered dependence into their products with full awareness of the physiological consequences. The question courts are now asking of social media platforms is structurally identical: did you know what your design would do to people, did you build it anyway, and did you target the youngest possible users to ensure that it worked?
With approximately two thousand similar lawsuits pending, and with both Meta and Google announcing their intention to appeal, the legal landscape will remain contested for years. The Supreme Court will almost certainly be called upon to rule on the product liability theory at the heart of these cases. But the constitutional argument has now been accepted by a jury and entered into the legal record. That is not a small thing.
The deeper question these verdicts open extends well beyond any single courtroom. For most of the past two decades, the prevailing assumption has been that the platform is simply an intermediary, and that users alone bear responsibility for what engagement does to them. This assumption always required a further assumption to sustain it: that the engagement being generated was not itself the product of deliberate engineering. Courts have now begun to dismantle that second assumption. The first cannot long survive without it.
We believe the honest answer is accountability from the point of design. Before a system engineered to exploit human attention and emotional responses is deployed at scale to hundreds of millions of people, the question of whether that engineering is safe must have a clear, verified, public answer. We do not yet have a legal or regulatory framework that makes that demand. The question worth watching, in the appeals courts and the legislatures that follow, is how long that absence is permitted to continue.