Supreme Court poised to rewrite how social media confronts disinformation
The Facebook post showed a gloved hand holding a silver needle alongside a message in bold letters: “Bill Gates explains that the COVID-19 vaccine will use experimental technology and permanently alter your DNA!”
Though it violated Facebook rules intended to prevent the spread of lies about COVID, the post slipped past the platform’s filters in June 2021. It was one of many false claims that made it online, fueling national skepticism about the vaccine and giving ammunition to social media critics who say tech giants such as Facebook’s parent company Meta, Alphabet and Twitter aren’t doing enough to block disinformation.
The backlash has only grown more intense. But depending on how the Supreme Court decides a controversial case challenging a Florida law that bars platforms from removing similarly deceptive posts, the internet could soon be awash in disinformation on a scale never seen before.
The Florida law and a similar Texas law would prevent social media companies from removing certain types of content even if false. Both laws are the result of a conservative effort to force tech giants to host a variety of political views no matter how extreme. The court is expected to hear the Florida case this term — an appeals court ruled against state officials in May — though that is not yet confirmed.
Two other cases related to content moderation are already on the docket. On Monday, the court announced it will review two cases challenging the broad legal immunity that Section 230 of the 1996 Communications Decency Act provides to websites.
Taken together, the outcome of the four cases could fundamentally alter the flow of information online and dramatically exacerbate the global disinformation problem.
“After 20 years of not telling us much of anything, the Supreme Court will finally decide the future of the internet,” said Alan Rozenshtein, a senior editor at Lawfare and a professor at the University of Minnesota School of Law. “These fundamental questions have not been addressed.”
Here’s a look at the four cases and what they could mean for the future of the internet.
NetChoice, CCIA v. Paxton
A federal appeals court in New Orleans upheld the controversial Texas law in a September decision that has variously been described as “legally bonkers,” “angrily incoherent” and “sweeping and nonsensical.” Tech trade groups NetChoice and the Computer and Communications Industry Association are suing Texas Attorney General Ken Paxton to block the law. When the Fifth Circuit panel ruled in favor of Texas, Paxton tweeted “BigTech CANNOT censor the political voices of ANY Texan!”
The Supreme Court has already signaled that it plans to examine the First Amendment issues in the law. When the court blocked Texas’s law from taking effect last spring while a legal challenge played out, Justice Samuel Alito wrote that the debate stirred up by the law and others like it “will plainly merit this court’s review.”
Under the Texas law, officials in the states covered by the Fifth Circuit (Louisiana, Mississippi and Texas) could enforce “viewpoint neutrality” and “mandate that news organizations must cover certain politicians or certain other content.” And because the law allows private citizens to sue platforms when they believe their content was deleted for viewpoint reasons, lawyers say an onslaught of frivolous lawsuits is likely to result.
“Viewpoint neutrality” is such dangerously broad language, experts say, that it would prevent platforms from removing disinformation about the Holocaust, mass shootings and COVID-19. It could also force social media companies to let American screens be flooded with sexual images, hate speech and violent videos.
“If I think vaccines work and you think they inject microchips into your veins, the platforms can’t discriminate,” Daphne Keller, who directs the Program on Platform Regulation at Stanford’s Cyber Policy Center, said via email. “Our posts both stay up or both come down.”
Meta officials removed more than 7 million posts containing COVID falsehoods in the first three months of the pandemic alone. Even critics acknowledge that social media companies block far more disinformation than consumers realize.
Keller has speculated that if the court backs Texas, platforms will disable content moderation by default and then give people the ability to opt back into the content-moderated version.
Even if the Supreme Court strikes down the Florida and Texas laws, the court’s engagement on the issue could still create a framework for future litigation and lead to radical new content moderation regulations. More than 100 bills related to content moderation or censorship on social media are now under consideration around the country, according to the National Conference of State Legislatures.
Moody v. NetChoice, CCIA
Republican lawmakers in Florida argued that social media companies have become too powerful and their content moderation decisions often “distort the marketplace of ideas” to a degree requiring regulation.
The case involving the Florida law, Moody v. NetChoice, CCIA, is the result of two tech advocacy groups suing the state after Gov. Ron DeSantis signed the social media legislation in May. The advocacy groups assert that by “compelling [social media platforms] to host — and punishing them for taking virtually any action to remove or make less prominent — even highly objectionable or illegal content, no matter how much that content may conflict with their terms or policies,” the law violates their First Amendment rights.
While legislators are concerned with perceived political bias, the new policies carry “grave implications” for companies’ efforts to fight not just disinformation, propaganda and extremism, but also everyday computer viruses, malware and fraud, says Matt Schruers, head of CCIA, which represents Meta, TikTok, Twitter and Google.
“We just had the FDA warning against this NyQuil chicken nonsense,” he said, citing an example of content that platforms might be unable to remove if the Supreme Court rules against them.
Some disinformation experts have questioned the laws’ likely impact. Rose Jackson of the Atlantic Council’s Digital Forensic Research Lab called the Florida and Texas laws “highly partisan messaging bills” and “largely unenforceable.” Still, the tech advocacy groups say they are alarmed by the laws’ ambitions and are actively working with engineers and lawyers to prepare for the possibility that the laws will go into effect.
When Twitter, Meta, YouTube and TikTok executives were hauled before the Senate Homeland Security Committee earlier this month and asked to account for the disinformation their platforms host, Committee Chairman Gary Peters, D-Mich., told the companies that their algorithms prioritize user engagement over safety, leading them to allow hateful, extremist and patently false content that has dramatically exacerbated social divisions and even spurred violent massacres.
YouTube, TikTok and Twitter did not return emails seeking comment for this story. Meta shared content moderation guidelines and examples of content it removes but otherwise declined to comment.
But social media company executives told senators they cull vast amounts of harmful content. A YouTube executive said during the hearing that the service removed close to 8.4 million videos in the first half of this year.
Gonzalez v. Google
The family of a woman who died in the 2015 Islamic State attacks in Paris, which killed 130 people, sued Google on the grounds that its YouTube service recommended ISIS propaganda. Nohemi Gonzalez’s family members argue that YouTube’s role in recommending militant videos overcomes the Section 230 liability shield, which gives platforms legal protection from lawsuits over user-posted content.
The Gonzalez case landed at the Supreme Court after a divided panel of the Ninth Circuit, the federal appeals court based in California, ruled that under existing precedent Section 230 protects social media companies’ recommendations. Even so, the majority opinion acknowledged the law “shelters more activity than Congress envisioned it would.”
According to the Gonzalez family’s complaint, YouTube provides “a unique and powerful tool of communication that enables ISIS to achieve [its] goals.” They assert that two of the ISIS jihadists who carried out the attacks used online social media platforms to post links to ISIS recruitment YouTube videos, including one featuring an attacker who participated in the shooting at the cafe where Gonzalez died.
Because YouTube uses algorithms to match and suggest content to users based on their viewing history, the Gonzalez family argues, its recommendations of ISIS videos helped users locate other videos related to the terrorist organization, making YouTube “useful in facilitating social networking among jihadists.”
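In broad terms, the kind of history-based recommendation at issue works by scoring candidate videos against a profile built from what a user has already watched. The sketch below is a deliberately simplified illustration of that general technique, not YouTube’s actual system; the tags, video IDs and scoring rule are invented for the example.

```python
# Toy sketch of history-based recommendation (illustrative only; not
# YouTube's actual system). Candidate videos are scored by how much
# their tags overlap with the tags of videos the user already watched.
from collections import Counter

def recommend(watch_history: list[set[str]], candidates: dict[str, set[str]],
              top_n: int = 3) -> list[str]:
    """Rank candidate videos by tag overlap with the user's watch history."""
    # Build a profile: how often each tag appears in the user's history.
    profile = Counter(tag for tags in watch_history for tag in tags)
    # Score each candidate by summing the weights of its matching tags.
    scores = {video: sum(profile[tag] for tag in tags)
              for video, tags in candidates.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Hypothetical data: a user who watches political content gets more of it.
history = [{"news", "politics"}, {"politics", "extremism"}]
catalog = {"vid_a": {"cooking"},
           "vid_b": {"politics", "extremism"},
           "vid_c": {"news"}}
print(recommend(history, catalog))  # ['vid_b', 'vid_c', 'vid_a']
```

The legal question in Gonzalez is, roughly, whether ranking choices like the scoring step above amount to the platform “recommending” content, and thus fall outside Section 230, or are simply part of publishing third-party posts.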
Section 230 has become controversial across the political spectrum in recent years. Many on the right believe platforms are censoring content that should be left online, while liberals assert that the companies allow dangerous falsehoods to spread.
But tech companies say changing Section 230 would limit their ability to suppress problematic content, because algorithms also inform the work of the platforms’ human trust and safety teams that moderate content. “The same mechanisms that push up relevant content also seek to push down dangerous content,” said Schruers, the CCIA president.
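Schruers’s point can be pictured as a single ranking function doing double duty. The snippet below is a hypothetical sketch, not any platform’s real pipeline: the relevance and policy-risk inputs are assumed (in practice they would come from engagement models and safety classifiers), and the demotion weight is invented for illustration.

```python
# Hypothetical sketch: one ranking function both promotes relevant posts
# and demotes posts a safety classifier has flagged as likely violations.

def rank_score(relevance: float, policy_risk: float,
               demotion_weight: float = 5.0) -> float:
    """Combine a relevance score with a penalty for likely policy violations."""
    return relevance - demotion_weight * policy_risk

# (relevance, policy_risk) pairs; both scores are made-up inputs.
posts = {"post_1": (0.90, 0.0),   # relevant and benign
         "post_2": (0.95, 0.8)}   # slightly more relevant, likely violating
ranked = sorted(posts, key=lambda p: rank_score(*posts[p]), reverse=True)
print(ranked)  # ['post_1', 'post_2']: the risky post is pushed down
```

Weakening legal protection for the “push up” half of such a function, the companies argue, would expose the “push down” half to the same litigation risk.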
The content recommendations YouTube offers are similar to recommendation systems used by a variety of platforms, including TikTok, Facebook and Twitter. Internet experts and legal observers say that as a result the case has huge stakes for both the spread of disinformation and social media companies’ autonomy.
Many legal observers believe the right-leaning court will at least partially rule against the tech companies.
Twitter Inc. v. Taamneh
On Monday, the high court also announced it will consider a petition for review filed by Twitter, which is defending itself against a lawsuit filed by the family of Nawras Alassaf, a Jordanian citizen killed in an ISIS attack on an Istanbul nightclub.
The same appeals court that considered the Gonzalez case assessed the Taamneh case in the same opinion. Though the two cases rest on similar facts and arguments, the Ninth Circuit panel declined to consider Section 230 at all in Taamneh, instead holding that Twitter, Google and Facebook could be liable for aiding and abetting an act of international terrorism because they “provided generic, widely available services to billions of users who allegedly included some supporters of ISIS.”
Alassaf’s family argued that ISIS could not have become one of the globe’s most feared terrorist organizations without what the appeals court judges called “effective communications platforms provided by defendants free of charge.”
The complaint also alleges that Twitter at times defended ISIS’s use of its platform, saying some controversial and troubling content was nonetheless protected under its policies.
The appeals court ruled in favor of the Alassaf family, even as it acknowledged “the need for caution in imputing aiding-and-abetting liability in the context of an arm’s-length transactional relationship of the sort defendants have with users of their platforms.”
Twitter’s lawyers countered that the platform offers those same “generic, widely available services” to billions of users, and that it has consistently enforced policies preventing terrorists from using it.
Schruers, the president of CCIA, said it is important that the high court not carve up Section 230, asserting that platforms need as much content moderation flexibility as possible or disinformation will surge. “What people aren’t seeing is all the content that companies have taken action against,” he said. “Opening those floodgates could have a profound impact on how Americans experience the internet.”