U.S. Supreme Court Leaves Intact Section 230’s Liability Shield… for Now

In two related rulings issued on May 18, the Supreme Court left intact the broad liability shield under Section 230 of the Communications Decency Act, which protects websites and online platforms that host user-generated content. But the narrow legal scope of these rulings leaves open the door for future judicial review that could either enshrine or undermine the immunity that websites and online platforms have relied on for over 25 years.

In light of this uncertainty, websites and online platforms that offer users the ability to post content (such as product reviews and user forums) should implement screening processes (or review their existing ones) to monitor, filter, and remove unlawful content. Particularly for smaller platforms and websites that may not yet have implemented sophisticated content-screening tools, deploying new and enhanced AI tools may help identify risky posts more efficiently and accurately while reducing the number of false positives.

Background

When the Supreme Court granted certiorari in the cases of Twitter v. Taamneh and Gonzalez v. Google, many commentators wondered whether Section 230 of the Communications Decency Act (CDA) – and its broad immunity protections for websites and online platforms that host user-generated content – would be curtailed by the judiciary. But despite a pair of Supreme Court wins for tech companies, the scope of Section 230’s immunity remains open to future legal challenge.

In a unanimous Taamneh decision handed down on May 18, the Supreme Court evaded the question of Section 230 almost entirely, instead centering its analysis on (a) the high bar of culpability required for platforms to meet the elements of “aiding and abetting” terrorists under the Anti-Terrorism Act (ATA) and the Justice Against Sponsors of Terrorism Act (JASTA), and (b) the inability to hold platforms to account absent tortious wrongdoing. And in an unsigned opinion issued the same day, the justices vacated the Ninth Circuit’s ruling in Gonzalez v. Google and remanded the case to the lower court, where it seems likely to be dismissed in light of the Taamneh decision.

Although the rulings are a win for Facebook (now Meta), Google, Twitter, and other platforms that rely on the liability protections afforded by Section 230 of the CDA, the Supreme Court’s sidestepping of the statute’s scope leaves technology platforms vulnerable to potential future curtailment of that immunity.

The Twitter v. Taamneh Case

In 2017, ISIS attacked the Reina nightclub in Istanbul, Turkey, killing 39 people, including a U.S. citizen whose family sought to hold online platforms liable for hosting ISIS content prior to the attack. The plaintiffs alleged that Twitter, Facebook, and Google’s recommendation algorithms promoted terror-related content, thereby enabling terrorist recruitment, financing, and propaganda in violation of JASTA’s prohibition on aiding and abetting acts of international terrorism. Plaintiffs hoped for a landmark precedent holding that when recommendation algorithms further terror activities, JASTA would pierce Section 230’s immunity shield. In pursuit of this goal, they sued Facebook, Inc., Twitter, Inc., and Google LLC under 18 U.S.C. § 2333 (the ATA, as amended by JASTA). Section 2333(a) provides a civil cause of action to U.S. nationals injured by acts of international terrorism, while § 2333(d)(2) imposes secondary civil liability on anyone “who aids and abets, by knowingly providing substantial assistance, or who conspires with the person who committed such an act of international terrorism.”

The District Court initially dismissed the complaint for failure to state a claim. On appeal, the Ninth Circuit affirmed the dismissal of the § 2333(a) (direct liability) claim because plaintiffs failed to plausibly allege that the three social media platforms proximately caused the attack by radicalizing the ISIS member responsible. As to the indirect liability claim under § 2333(d)(2), however, the Ninth Circuit reversed, holding that plaintiffs had adequately alleged that the platforms aided and abetted an act of international terrorism – the ruling that brought Twitter before the Supreme Court as petitioner.

The crux of the analysis for the indirect liability claims lies in applying the D.C. Circuit’s 1983 ruling in Halberstam v. Welch, whose “aiding and abetting” framework requires that:

(1) the party whom the defendant aids performs a wrongful act that causes injury,

(2) the defendant is generally aware of its role when providing the assistance, and

(3) the defendant knowingly and substantially assists the principal violation.

Six factors are weighed in determining whether assistance is “substantial”: (1) “the nature of the act assisted,” (2) the “amount of assistance” provided, (3) whether the defendant was “present at the time” of the principal tort, (4) the defendant’s “relation to the tortious actor,” (5) the “defendant’s state of mind,” and (6) the “duration of the assistance.” Weighing these factors, the Ninth Circuit concluded that the Taamneh plaintiffs had plausibly alleged that the platforms aided and abetted ISIS, teeing up the Supreme Court’s review.

At the Supreme Court, Justice Thomas’ opinion emphasized the importance of recognizing a “conceptual core” in Halberstam’s aiding-and-abetting framework. Rather than reviewing and weighing isolated factors, the Court treated the substantial-assistance and scienter elements as twin requirements: the nature and degree of culpable intent works in tandem with the amount of assistance provided. Further, aiding-and-abetting liability under JASTA is “inherently…for specific wrongful acts,” rendering substantial assistance to a general enterprise insufficient. Rather than run through the individual JASTA elements, the opinion concluded that the “conceptual core” of aiding and abetting requires a finding of “pervasive, systemic, and culpable” assistance by platforms for specific wrongful acts.

The Supreme Court’s language in the Taamneh ruling implies that the justices viewed Section 230’s shield as the backdrop for the platforms’ alleged inaction; the opinion concludes that the platforms “at most allegedly stood back and watched.” Absent a tort duty to remove terrorist content, the defendant platforms’ conduct amounted to passive nonfeasance. The Court acknowledged that “there may be situations where some such duty exists, and we need not resolve the issue today,” and it contrasted the platforms’ lack of accountability for user content with the instance in which Congress imposed an affirmative commercial duty to screen robocalls.

Deeming the relationship between the platforms and the Reina nightclub attack highly attenuated, the Court explained that ISIS’s ability to benefit from the platforms was “merely incidental” to the platforms’ creation and maintenance of content-agnostic algorithms as part of their infrastructure. Because the plaintiffs failed to allege a sufficient nexus between the platforms’ conduct and the Reina attack, they could not show that the platforms “consciously participate[d]” in it.

The Gonzalez v. Google Case 

Gonzalez v. Google, the companion social media platform case seeking to pierce Section 230 immunity, similarly arose out of an ISIS attack abroad: the 2015 Paris attacks, which killed Nohemi Gonzalez, a U.S. citizen. Her family sued Google, challenging Section 230’s protection of the recommendation algorithms and revenue-sharing practices of the company’s video-sharing subsidiary, YouTube.

In 2021, the Ninth Circuit found that although Section 230 barred the Gonzalez family’s recommendation algorithm claims, questions of culpability remained around the platform’s revenue-sharing system. On appeal to the Supreme Court, however, plaintiffs’ questions centered on the application of Section 230 rather than on review of the revenue-sharing claims. The related decision in Taamneh briefly addressed revenue sharing, explaining that without concrete figures as to the timing and amounts of revenue-sharing support, the Taamneh plaintiffs failed to plausibly allege “substantial” assistance. Never reaching the question of Section 230 immunity, the justices concluded that the claims brought in Gonzalez v. Google were materially identical to those in Taamneh, allowing the Court to rely on Taamneh rather than delve into the scope of Section 230. In an unsigned per curiam opinion, the Supreme Court vacated the Ninth Circuit’s ruling and remanded the case for reconsideration.

Door Remains Open for Future Judicial Review of Section 230 

In her concurrence, Justice Ketanji Brown Jackson emphasized the narrow implications of the holding, limiting it to the facts presented on appeal about the nature of the platforms, their algorithms, and their relationships with terrorists. Justice Jackson underscored that “both cases came to this court at the motion-to-dismiss stage, with no factual record…other cases presenting different allegations and different records may lead to different conclusions.” The door has been firmly shut on assigning liability to platforms for merely building infrastructure that terrorists choose to exploit, even if the company is aware of the enterprise’s existence and its use of the platform. At the same time, the door remains ajar for future findings of culpable aiding-and-abetting conduct where a defendant’s actions amount to intentional, culpable, and substantial support for a specific terrorist act. Justice Jackson’s concurrence, together with Justice Thomas’ comparison to the robocall statute noted above, signals that the Supreme Court is passing the baton to Congress rather than judicially narrowing the scope of Section 230.

What Companies Can Do Now with Section 230 in Flux

In light of these holdings, companies that host user-generated content on their sites should implement screening processes (or review their existing ones) to monitor, filter, and remove offensive or illegal content. Particularly for smaller platforms that may not yet have implemented sophisticated content-screening tools, deploying new and enhanced AI tools may help identify risky posts more efficiently and accurately while reducing the number of false positives.
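
For illustration only, here is a minimal sketch of what such a screening pipeline might look like. It assumes a two-threshold triage design, and the scoring function and term list (score_content, FLAGGED_TERMS) are hypothetical stand-ins for a trained moderation model, not any particular vendor’s API:

    # Minimal content-screening sketch (Python). The scoring function is a
    # stand-in heuristic; a real deployment would call a trained moderation
    # model or a vendor's classification service.
    from dataclasses import dataclass
    from enum import Enum

    class Action(Enum):
        APPROVE = "approve"        # publish the post
        HUMAN_REVIEW = "review"    # borderline: queue for a moderator
        REMOVE = "remove"          # high confidence: block the post

    @dataclass
    class Decision:
        action: Action
        score: float  # estimated probability the post is unlawful

    # Hypothetical term list for the stand-in heuristic.
    FLAGGED_TERMS = {"terror recruitment", "attack plans"}

    def score_content(text: str) -> float:
        """Stand-in risk score in [0, 1]; replace with a real classifier."""
        lowered = text.lower()
        hits = sum(term in lowered for term in FLAGGED_TERMS)
        return min(1.0, 0.5 * hits)

    def screen_post(text: str,
                    remove_threshold: float = 0.9,
                    review_threshold: float = 0.5) -> Decision:
        """Two-threshold triage: auto-remove only at high confidence and
        route the uncertain middle band to human moderators."""
        score = score_content(text)
        if score >= remove_threshold:
            return Decision(Action.REMOVE, score)
        if score >= review_threshold:
            return Decision(Action.HUMAN_REVIEW, score)
        return Decision(Action.APPROVE, score)

    if __name__ == "__main__":
        for post in ["Great product, five stars!",
                     "Join us: attack plans and terror recruitment inside"]:
            decision = screen_post(post)
            print(f"{decision.action.value:>7} (score={decision.score:.2f}): {post}")

The two-threshold design reflects the balance described above: raising remove_threshold lowers the risk of wrongly removing lawful posts (false positives) at the cost of a larger human-review queue, and the right trade-off will vary with a platform’s size, resources, and risk tolerance.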