
Can product liability hold social media accountable?

On Behalf of Thomas DeLattre and Glen D. Wieland | Feb 15, 2024 | Product Liability

Social media platforms are now ingrained in our daily lives, connecting us with friends, family and even strangers around the world. Yet privacy breaches, cyberbullying, misinformation and manipulation are common on these platforms, and they can cause physical or emotional harm, sometimes even death. But can product liability law finally hold social media platforms accountable?

How does product liability apply to social media?

One perspective holds that social media platforms are essentially software products whose design features shape user behavior and outcomes. The algorithms these platforms deploy filter, rank, recommend or amplify specific content, potentially exposing users to harmful material such as hate speech, fake news or extremist propaganda. Under this theory, a harmful algorithm may amount to a design defect.

Another viewpoint treats social media platforms as distributors of third-party content that can harm users or others. Some platforms permit users to post content that is defamatory, fraudulent or threatening, or that incites violence. A platform's failure to moderate, remove or flag such content in a timely manner, or to warn users of the potential risks, can be framed as a failure to warn.

What are the challenges?

Product liability claims against social media companies face significant legal and practical hurdles. The biggest obstacle is Section 230 of the Communications Decency Act, which generally shields online platforms from liability for content posted by third parties.

Another formidable challenge is establishing causation and damages. Proving that a defect in a social media platform substantially contributed to an injury can be difficult. For instance, how does a plaintiff show that a platform’s algorithm caused their depression, anxiety or addiction? Linking a platform’s failure to warn to a plaintiff’s involvement in a violent riot, a terrorist attack or a suicide attempt is harder still.

What about in Florida?

In 2021, a federal judge in Florida dismissed a product liability lawsuit against Twitter filed by the family of a journalist kidnapped and killed by ISIS in Syria. The plaintiffs claimed Twitter provided material support to ISIS by allowing the group to use its platform for recruitment, fundraising and propaganda. The judge ruled that Section 230 barred the claims and that the plaintiffs had failed to show that Twitter’s conduct was the proximate cause of the journalist’s death.

By contrast, in 2021 the Ninth Circuit Court of Appeals allowed a product liability lawsuit against Snap, the maker of Snapchat, to proceed, reversing a California federal judge’s earlier dismissal. The plaintiffs, parents of two teenagers who died in a car crash while using Snapchat’s speed filter, alleged that Snap knew or should have known that the filter encouraged reckless driving and failed to warn users of its dangers. The court held that Section 230 did not bar the claims because they targeted Snapchat’s own product design rather than third-party content, and that the plaintiffs had adequately alleged a design defect and a failure to warn.
