On 25 March, a jury in Los Angeles delivered a damning verdict on two of the world’s most popular social media platforms. It ruled that Instagram and YouTube were deliberately designed to be addictive and that, consequently, their parent companies had been negligent in failing to safeguard child users. Meta and Google, owners of Instagram and YouTube respectively, must now pay $6m (£4.5m) in damages to ‘Kaley’, the young woman who was the plaintiff (claimant) in the case. Her lawyers argued that the platforms’ design caused her to become addicted to them, damaging her mental health during childhood and leaving her with body dysmorphia, depression and suicidal thoughts.

Both companies vigorously defended the claim and intend to appeal the judgment. Meta maintains that a single platform cannot be solely responsible for a user’s mental health crisis. Google, meanwhile, argues that YouTube is not a social network.
English law
Could such a claim succeed in this country? The tort of negligence offers the best hope for claimants who allege harm from social media use, provided the elements of the tort (duty of care, breach, causation and foreseeability) are satisfied. There is growing recognition in UK law that online platforms may owe a duty of care to their users, particularly children, and the harms of social media overuse are well documented. Causation, however, is likely to be the most difficult hurdle. To succeed, a claimant must prove that a platform’s design caused, or materially contributed to, the harm they suffered. Psychological harm rarely has a single identifiable cause, and social media companies are likely to argue that their platforms are only one of many factors affecting an individual’s mental health, alongside family environment, school experiences, pre-existing vulnerabilities and offline relationships, to name a few.
Could social media platforms be treated as ‘defective products’ under the Consumer Protection Act 1987 (CPA), which imposes strict liability for harm? Products under the CPA are traditionally understood as tangible goods, not the likes of YouTube and Instagram. It is arguable, though, that social media platforms are not mere intermediaries but ‘manufacturers’ of digital environments, and so liable for defects in their algorithms or addictive design. The Law Commission is currently reviewing the CPA to determine whether it is fit for the digital age, with a focus on artificial intelligence, software and online platforms. The review, which began in September 2025, may lead to expanded liability for online platforms and software providers.
Despite the difficulties around causation, an action in negligence is probably the best option for aggrieved social media users in the UK. However, the lack of legal aid and the UK courts’ restrictive approach to class actions mean a test case would require significant upfront funding. Insurers, emboldened by the US judgment, may now be more willing to cover the costs of such a case.
Regulating social media
Unlike the US, the UK has moved towards statutory regulation rather than litigation as the primary means of controlling social media harms. Since the Online Safety Act 2023 (OSA) was passed, social media companies and search engines have had a duty to ensure their services are not used for illegal activity or to promote illegal content, with particular protections for children. The communications regulator, Ofcom, is tasked with implementing the OSA and can fine infringing companies up to £18m or 10% of their global revenue, whichever is greater. Last month, it published guidance on how platforms must protect children. Furthermore, because platforms process users’ personal data, they must comply with the UK GDPR. The Data (Use and Access) Act 2025, which mainly came into force in February, explicitly requires providers of online services likely to be used by children to take children’s needs into account when deciding how to use their personal data.
The government is presently consulting on whether additional measures are required to keep children safe online. These include: setting a minimum age for children to access social media; restricting risky functionalities and design features that encourage excessive use, such as infinite scrolling and autoplay; raising the digital age of consent; putting the guidance on the use of mobile phones in schools on a statutory footing; and providing better support for parents, including clearer guidance and simpler parental controls. The consultation ends on 26 May and the government will respond before the end of July. Alongside the consultation, the government is running a pilot scheme in which 300 teenagers will have their social media apps disabled entirely, blocked overnight or capped at one hour’s use, while others will see no changes at all, so that their experiences can be compared. Children and parents involved in the pilot will be interviewed before and after to assess its impact.
On 27 March, the government published national guidance urging parents to strictly limit screen exposure in the early years. It recommends no screen exposure for children under two, except for shared activities, and a cap of one hour a day for those aged two to five, with additional advice to avoid screens at mealtimes and before bed.
Parliament is also debating the use of social media platforms by children. In March, during a debate on the Children’s Wellbeing and Schools Bill, the Lords supported a proposal to ban under-16s from social media platforms. This is the second time peers have defeated the government over the proposal. There is now a standoff between the Commons and the Lords. Whatever happens, the California court verdict may herald more aggressive regulation.
Ibrahim Hasan is a lawyer and director of Act Now Training