FCC Conveniently Seeks to 'Clarify' Section 230 Just When Republicans Are Mad at Facebook and Twitter

FCC Chairman Ajit Pai wrote that he would “clarify” the meaning of Section 230, which could have huge ramifications on social media platforms.
Image: Alex Wong (Getty Images)

Federal Communications Commission Chairman Ajit Pai has announced that he plans to “move forward with a rulemaking” to “clarify” Section 230 of the Communications Decency Act, which, among other protections, shields social media platforms from liability over moderating certain types of content. Interesting timing, because Republicans have spent the last 24 hours threatening to annihilate Section 230 on the very platforms they accuse of censorship.

Republicans have been on this warpath before over an ongoing perception of social networks’ “conservative bias” (which typically amounts to fact-checking disinformation and limiting its spread). The latest slight is Facebook and Twitter’s decision to restrict the spread of the New York Post’s questionably sourced, disinformation-ridden “bombshell” report on Joe Biden’s son, Hunter. In letters to Mark Zuckerberg and Jack Dorsey, Sen. Josh Hawley (R-Mo.) called on the CEOs to testify before the Senate Judiciary Crime and Terrorism Subcommittee over a supposed violation of FEC rules: that by restricting the story, the companies contributed something “of value” to a presidential campaign. This assumes that providing Donald Trump a platform to run campaign ads that would otherwise violate their own terms of service isn’t considered valuable.

Historically, Republicans have called for Section 230 to be repealed based on the misguided assumption that it protects platforms only because they are not publishers. The go-to portion, Section 230(c)(1), reads:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Time and again, they wrongly interpret this to mean that if a platform decides to check falsehoods and limit propaganda, it has lost its Section 230 privileges because it’s now in the business of editing, which makes it a publisher. It is not. Facebook is a business, and businesses can refuse service for all kinds of reasons, especially to people who are harmful, just as brick-and-mortar shops can turn away a customer who refuses to wear a mask during a pandemic. This is why Facebook and Twitter have terms of service, even ones they’ve bent considerably for the president.

Pai, too, invoked the idea that social media companies should follow the same rules as “other media outlets.”

“Social media companies have a First Amendment right to free speech,” Pai concluded in his statement. “But they do not have a First Amendment right to a special immunity denied to other media outlets, such as newspapers and broadcasters.”

But this is where Republicans typically discard the publisher comparison. Actually treating social media companies as publishers, with the right to select whatever content they choose to run and legal liability for libelous claims, is the last thing they want. (It is, however, closer to what Joe Biden would like to see: an amended Section 230 that would force Facebook to remove Trump’s falsehoods about his son.)

This has been reflected in recent attacks intended to limit another portion of Section 230’s exemptions. Section 230(c)(2) protects platforms from civil liability for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

Those who claim censorship on the part of Twitter and Facebook have argued that Section 230's immunity does not apply to content stricken from a site by its owner if it doesn’t fall into one of these categories: material that is overly gory, threatening, or pornographic. Others draw attention to the very end of Section 230(c)(2), the reference to “otherwise objectionable” material, hoping to portray this as a catch-all. But traditionally that’s not how it works.

When a law includes a list of specific things like “obscene, lewd, lascivious” content, it’s understood that adding a vague term at the end doesn’t mean “and anything else under the sun.” Under the interpretive canon known as ejusdem generis, a general term such as “or otherwise objectionable” applies only to the same class of things previously mentioned. (If a law reads “apples, oranges, pears, and other things,” you can’t interpret “other things” to mean “elephants.”)

A bill introduced last month by Sen. Lindsey Graham (R-S.C.), Sen. Roger Wicker (R-Miss.), and Sen. Marsha Blackburn (R-Tenn.) takes the same tack, proposing to narrow the phrase “otherwise objectionable” down to “promoting self-harm, promoting terrorism, or unlawful.” It’s pretty clear that self-harm, terrorism, and illegal content already qualify as “objectionable”; rather than adding stipulations, the bill removes the leeway necessary to cover the unforeseeable breadth of harmful content that arrives with each fresh news cycle, like conspiracy theories and health misinformation.

We can guess that Pai’s rulemaking will similarly limit moderation powers, since his statement focuses tightly on concerns that Section 230 has been interpreted too broadly. Specifically, he paraphrases Supreme Court Justice Clarence Thomas, who wrote in a statement accompanying a denial of certiorari this week that lower courts have “long emphasized nontextual arguments when interpreting [Section 230], leaving questionable precedent in their wake.” In other words, Thomas believes the lower courts have strayed too far from the statute’s literal meaning, “reading extra immunity into statutes where it does not belong,” as he puts it.

Thomas first takes issue with a 1997 Fourth Circuit case in which the appellate court concluded that Section 230 “confers immunity even when a company distributes content that it knows is illegal.” The petition denied by the court this week involved a company that sought immunity under Section 230 after it was accused of intentionally reconfiguring its software to make it harder for consumers to access a second company’s product; Thomas wrote that he agreed with the ruling of the Ninth Circuit, which found the immunity “unavailable” against allegations of anticompetitive conduct.

Section 230 was written to shield website operators from liability for defamatory statements made by their users; however, Thomas argues that the definition of user-generated content (or, as the statute describes it, content “provided by another information content provider”) has been misconstrued by courts to include content website owners have had a hand in creating. He also makes clear that he believes Facebook and other websites can, and should, be held liable for any user-generated content they selectively promote (and he appears not to differentiate between a Facebook employee intentionally boosting a post and an algorithm that does so automatically).

Based on Pai’s statement chiding others for advancing “an overly broad interpretation” that, he claims, often wrongly shields social media companies from liability in particular, it’s likely that whatever rule he attempts to pass will focus mostly on emphasizing, like Thomas, a need to adhere more to the literal meaning of Section 230's text, rather than the so-called “spirit of the law.”

Section 230 was passed in 1996, when gore, porn, and harassment were just about the only types of content that needed taking down; it did not anticipate the deluge of disinformation now plaguing social media sites, which did not yet exist. Regardless, even if it’s determined that Section 230 does not grant sites like Facebook immunity for certain moderation decisions, that doesn’t mean they’re automatically liable, either.
