I’ve Got 99 Problems, But §230 Ain’t One
Section 230 – The only thing both political parties get right is when they say the other party has no fucking clue what Section 230 actually means.
Originally Published On Substack June 27th, 2021
47 U.S. Code § 230 – Protection for private blocking and screening of offensive material
(c) Protection for “Good Samaritan” blocking and screening of offensive material
(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) Any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) Any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
Let me ask you a few questions:
- Have you heard of Section 230?
- Do you feel you have a reasonable understanding of what this provision of the law sets out, its meaning and purpose?
- Do you have an opinion about whether or not we should change or repeal Section 230?
If you answered yes to at least 2 of those 3 questions and you are anyone except Senator Ron Wyden, you are probably wrong.
The meaning of Section 230 has come under greater assault recently, because each side views this law exclusively through the same political lens that colors all its understanding, to a degree that should be troubling to just about anyone paying attention.
That sort of political framing is happening on the left and right. For example:
Now that Trump is out of office, his torch of ignorant indignation over Section 230 has been passed to Josh Hawley, the junior Senator from Missouri and quintessential conservative Karen:
“With Section 230, tech companies get a sweetheart deal that no other industry enjoys: complete exemption from traditional publisher liability in exchange for providing a forum free of political censorship,”
Hawley said when he introduced the bill last week. Since then, the notion that Section 230 has always included an implicit “deal” requiring platforms to take a neutral political stance has become a talking point in some parts of the political right.
And progressives think Section 230 is shielding abusive content that affects marginalized groups and proliferates fake news. One notable example is Joe “Are Your Parents Home?” Biden.
As Biden recently put it:
[The Times] can’t write something you know to be false and be exempt from being sued. But he [Mark Zuckerberg] can. The idea that it’s a tech company is that Section 230 should be revoked, immediately should be revoked, number one. For Zuckerberg and other platforms…And it should be revoked. It should be revoked because it is not merely an internet company. It is propagating falsehoods they know to be false, and we should be setting standards not unlike the Europeans are doing relative to privacy. You guys still have editors. I’m sitting with them. Not a joke. There is no editorial impact at all on Facebook. None. None whatsoever. It’s irresponsible. It’s totally irresponsible.
…Clearly, President Biden hasn’t read the New York Times recently. Which is understandable, given how useless their reporting is; I’m always more surprised to find someone who DOES read the New York Times.
But the problem with these two quotes from notable politicians is that both are making claims that are objectively, verifiably false, and yet these statements are the standard left and right talking points, immediately resorted to when discussing the issue.
I think it’s important to consider the two arguments side by side, because the talking points of two parties demanding the same repeal rest on opposite assumptions about what this law does. Republicans say it allows censorship, and that repealing it would make social media a politically neutral, censorship-free experience. Democrats think the law doesn’t censor enough speech, leaving room for fake news and hate speech to run amok, and that it should be repealed so Democrats can begin demanding these sites censor whatever content the government wants them to.
And as you will find out in this article, if you are not yet familiar with the subject, this isn’t an issue of one side being right and one side being wrong. They are both horribly mischaracterizing this law in ways that may be politically expedient, but legally unintelligible.
It also exemplifies an interesting concept legal scholar Josh Blackman has invoked, originally coined by economist Bruce Yandle: “Bootleggers and Baptists.” Often, different groups with different motivations favor the same regulation.
Today I want to step back from these political squabbles and break down the clauses that constitute Section 230. Once we establish that solid legal ground, we can apply a textualist construction to the politically motivated arguments and find which claims hold water and which do not.
Section 230 is part of the Communications Decency Act of 1996, codified at 47 U.S.C. § 230. The statute is important enough to walk through in order; I hope people find this helpful.
Section 230 makes Internet platforms and other Internet speakers immune from liability for material that’s posted by others (with some exceptions). That means, for instance, that:
- I’m immune from liability for what is said in our comments.
- A newspaper is immune from liability for comments posted on its website.
- Yelp and similar sites are immune from liability for business reviews that users post.
- Twitter, Facebook, and YouTube (which is owned by Google) are immune from liability for what their users post.
- Google is generally immune from liability for its search engine results.
And that’s true whether or not the Internet platform or speaker chooses to block or remove certain third-party materials. I don’t lose my immunity just because I occasionally delete some comments (e.g., ones that contain vulgar personal insults); Yelp doesn’t lose its immunity because it sometimes deletes reviews that appear to have come from non-customers; the other entities are likewise allowed to engage in such selection and still retain immunity.

Section 230 has recently become controversial, and I want to step back a bit from the current debates to explain where it fits within the traditions of American law (and especially American libel law).
Historically, American law has divided operators of communications systems into three categories.
- Publishers, such as newspapers, magazines, and broadcast stations, which themselves print or broadcast material submitted by others (or by their own employees).
- Distributors, such as bookstores, newsstands, and libraries, which distribute copies that have been printed by others.
- Platforms, such as telephone companies, cities on whose sidewalks people might demonstrate, or broadcasters running candidate ads that they are required to carry.
And each category had its own liability rules:
Publishers were basically liable for material they republished the same way they were liable for their own speech. A newspaper could be sued for libel in a letter to the editor, for instance. In practice, there was some difference between liability for third parties’ speech and for the company’s own, especially after the Supreme Court required a showing of negligence for many libel cases (and knowledge of falsehood for some); a newspaper would be more likely to have the culpable mental state for the words of its own employees. But, still, publishers were pretty broadly liable, and had to be careful in choosing what to publish. (See Restatement (Second) of Torts § 578.)
Distributors were liable on what we might today call a “notice-and-takedown” model. A bookstore, for instance, wasn’t expected to have vetted every book on its shelves, the way that a newspaper was expected to vet the letters it published. But once it learned that a specific book included some specific likely libelous material, it could be liable if it didn’t remove the book from the shelves.
Platforms weren’t liable at all. For instance, even if a phone company learned that a subscriber’s answering machine had a libelous outgoing message and did nothing to cancel the owner’s phone service, it couldn’t be sued for libel. (See Anderson v. N.Y. Telephone Co. (N.Y. 1974); Restatement (Second) of Torts § 612.) Likewise, a city couldn’t be liable for defamatory material on signs that someone carried on city sidewalks (even though a bar could be liable once it learned of libelous material on its walls), and a broadcaster couldn’t be liable for defamatory material in a candidate ad.
Categorical immunity for platforms was thus well-known to American law; and indeed New York’s high court adopted it in 1999 for e-mail systems, even apart from § 230. (See Lunney v. Prodigy Servs. (N.Y. 1999).)
But the general pre-§ 230 tradition was that platforms were entities that didn’t screen the material posted on them, and indeed were generally (except in Lunney) legally forbidden from screening such materials. Phone companies are common carriers. Cities are generally barred by the First Amendment from controlling what demonstrators said. Federal law requires broadcasters to carry candidate ads unedited. (47 U.S. Code § 315) (https://www.law.cornell.edu/uscode/text/47/315)
Publishers were free to choose what third-party work to include in their publications, and were fully liable for that work. Distributors were free to choose what third-party work to put on their shelves (or to remove from their shelves), and were immune until they were notified that such work was libelous. Platforms were not free to choose, and therefore were immune, period.
Enter the Internet, in the early 1990s. Users started speaking on online bulletin boards, such as America Online, Compuserve, Prodigy, and the like, and of course started libeling each other. This led to two early decisions: Cubby v. Compuserve, Inc. (S.D.N.Y. 1991) and Stratton Oakmont, Inc. v. Prodigy Services Co. (N.Y. trial ct. 1995).
Cubby held that Internet Service Providers (such as Compuserve) were entitled to be treated as distributors, not publishers.
Stratton Oakmont held that only service providers that exercised no editorial control over publicly posted materials (such as Compuserve) would get distributor treatment, and that service providers that exercised some editorial control (such as Prodigy), for instance by removing vulgarities, would be treated as publishers.
Neither considered the possibility that an ISP could actually be neither a publisher nor a distributor but a categorically immune platform, perhaps because at the time only entities that had a legal obligation not to edit were treated as platforms. And Stratton Oakmont‘s conclusion that Prodigy was a publisher because it “actively utiliz[ed] technology and manpower to delete notes from its computer bulletin boards on the basis of offensiveness and ‘bad taste,'” is inconsistent with the fact that distributors (such as bookstores and libraries) have always had the power to select what to distribute (and what to stop distributing), without losing the limited protection that distributor liability offered.
But whether or not those two decisions were sound under existing legal principles, they gave service providers strong incentive not to restrict speech in their chat rooms and other public-facing portions of their service. If they were to try to block or remove vulgarity, pornography, or even material that they were persuaded was libelous or threatening, they would lose their protection as distributors, and would become potentially strictly liable for material their users posted. At the time, that looked like it would be ruinous for many service providers (perhaps for all but the unimaginably wealthy, will-surely-dominate-forever America Online).
This was also a time when many people were worried about the Internet, chiefly because of porn and its accessibility to children. That led Congress to enact the Communications Decency Act of 1996, which tried to limit online porn; but the Court struck that down in Reno v. ACLU (1997). Part of the Act, though, remained: 47 U.S.C. § 230 (https://www.law.cornell.edu/uscode/text/47/230), which basically immunized all Internet service and content providers from liability for their users’ speech—whether or not they blocked or removed certain kinds of speech.
Congress, then, deliberately provided platform immunity to entities that (unlike traditional platforms) could and did select what user content to keep up. It did so precisely to encourage platforms to block or remove certain speech (without requiring them to do so), by removing a disincentive (loss of immunity) that would have otherwise come with such selectivity. And it gave them this flexibility regardless of how the platforms exercised this function.
And Congress deliberately chose platform treatment (categorical immunity) rather than distributor treatment (notice-and-takedown liability). For copyright claims, it retained distributor liability (I oversimplify here), soon codified in 17 U.S.C. § 512 (https://www.law.cornell.edu/uscode/text/17/512), part of the Digital Millennium Copyright Act of 1998: if you notify Google, for instance, that some video posted on YouTube infringes copyright, Google will generally take it down—and if it doesn’t, then you can sue Google for copyright infringement. Not so for libel.
So what do we make of this? A few observations:
Under current law, Twitter, Facebook, and the like are immune as platforms, regardless of whether they edit (including in a politicized way). Like it or not, this was a deliberate decision by Congress. You might prefer an “if you restrict your users’ speech, you become liable for the speech you allow” model. Indeed, that was the model accepted by the court in Stratton Oakmont. But Congress rejected this model, and that rejection stands so long as § 230 remains in its current form. (I’ll have more technical statutory details on this in a later post.)
Section 230 does indeed change traditional legal principles in some measure, but not that much. True, Twitter is immune from liability for its users’ posts, and a print newspaper is not immune from liability for letters to the editor. But the closest analogy to Twitter isn’t the newspaper (which prints only a few hundred third-party letters-to-the-editor words a day); it’s the phone company, bookstore, or library, which carries millions of third-party words that it can’t be expected to screen at the outset.
Twitter is like the bookstore or library in that it runs third-party material without a human reading it carefully, and reserves the right to remove some material (just as a bookstore can refuse to sell a particular book, whether because it’s vulgar or unreliable or politically offensive or anything else). Twitter is like the phone company or e-mail service in that it handles a vast range of material, much more than even a typical bookstore or library, and generally keeps up virtually all of it (though it isn’t legally obligated to do so, the way a phone company would be). Section 230 is thus a broadening of the platform category, to include entities that might otherwise have been distributors.
Now of course § 230 could be amended, whether to impose publisher liability (in which case many sites, including ours, would have to regretfully close their comment sections) or distributor notice-and-takedown liability (which would impose a lesser burden, but still create pressure to over-remove material, especially when takedown demands come from wealthy, litigious people or institutions). And it could be amended to impose distributor liability for sites that restrict user speech in some situations and retain platform liability for sites that don’t restrict it at all. I hope to blog some more about these options in the coming days. I also hope to blog some more in the coming days with more details about the specific wording of § 230. But for now, I hope this gives a good general perspective on the traditional common-law rules, and the way § 230 has amended those rules.
Politics as logical fallacy
If you want to learn more about the misapplication of Section 230, I suggest a great interview with Senator Ron Wyden that I found in Reason Magazine. Wyden was a primary author of Section 230, and after some digging I have found (unsurprisingly) that his explanation of what this law means has been remarkably consistent, from today going all the way back to when he first wrote it. If you know anyone who remains less than convinced of its meaning by the kind of dense, formalistic textualist interpretation in this article, Wyden puts it in plain English very well. He’s also one of a very rare breed: an undeniable political progressive with a true libertarian streak running through his beliefs. Wyden has always been a civil libertarian when it comes to online privacy, and if nothing else, you should love him for being the guy whose concise questioning of James Clapper about the degree to which the NSA is spying on us left Clapper uncomfortably squirming in his chair before committing outright perjury live on TV and for all time.
But I would like to quickly address a few of the most common and most vexing lies politicians currently tell about Section 230:
- That big tech currently enjoys protections as a platform instead of a publisher, that if they do anything to moderate content that automatically makes them a publisher and they lose their protections as a platform
- Not only is this conclusion totally foreign to an explicit reading of the law; I can’t even imagine some creative extrapolation of any aspect of the law that even hints at it.
- That social media, as it exists now, has made itself a public utility or a common carrier.
- Common Carrier, 47 U.S.C. § 153(11): The term “common carrier” or “carrier” means any person engaged as a common carrier for hire, in interstate or foreign communication by wire or radio or interstate or foreign radio transmission of energy, except where reference is made to common carriers not subject to this chapter; but a person engaged in radio broadcasting shall not, insofar as such person is so engaged, be deemed a common carrier.
- Its very definition precludes application to social media, since a social media site would be the equivalent of a radio broadcaster, which this law specifically rejects as a common carrier.
- Public Utility 18 CFR § 46.2(a): A public utility is an entity that provides goods or services to the general public. Public utilities may include common carriers as well as corporations that provide electric, gas, water, heat, and television cable systems. In some contexts, the term “public utility” may be defined to include only private entities that provide such goods or services. For example, when defining the regulatory purview of the Federal Energy Regulatory Commission (FERC), Congress “exclude[d] governmental entities such as cities, counties, local irrigation districts, and state and federal agencies.” Instead, FERC has primary authority over “principally privately owned businesses, commonly referred to as ‘investor-owned utilities’ or ‘regulated utilities.’”
- The closest thing to an Internet version of a public utility would be an ISP. Twitter does not become an ISP because it has a website, any more than it becomes a telephone company by getting a company telephone number. Social media sites are a destination, not the service required to access those destinations. It doesn’t matter if they become an especially popular destination; that doesn’t fundamentally change the nature of what they do. Twitter will always be a destination, never the service you use to reach a destination. Social media is a website, not the service you require to connect to all websites.
- That 230 protections require you to act as a neutral party to be considered a platform.
- This political neutrality standard is entirely made up. The common carrier argument at least refers to a real legal concept, which they merely misapply. The claim that Section 230 has anything to do with political neutrality isn’t even a falsehood you could reach through that same kind of misunderstanding. We will return to this point at the end of the article.
- That Clarence Thomas says social media are common carriers and wants a court case for him and the Court to reach this pre-decided conclusion.
- Actually, Clarence Thomas has merely said that, given how the disagreement over whether social media are common carriers is splitting groups of people, he believes this is a problem that should be brought to the Court’s attention, and that the federal judiciary is the right place to settle the dispute.
- That Section 230 is some kind of blanket immunity gifted to big tech companies like a cushy giveaway.
- Actually, Josh Hawley is just a big-government Republican who believes free speech, free association, and private property rights aren’t rights but government-granted privileges. This is the only explanation of how Josh Hawley could have been on the right side of Masterpiece Cakeshop v. Colorado Civil Rights Commission, 584 U.S. ___ (2018). He is a religious bigot who thinks free speech, free association, and private property rights are privileges that he, as the government, can dole out to people whose views align with his own and revoke from everyone who doesn’t buy into his soft tyranny.
This is why, in an odd sense, I can appreciate Joe Biden: he believes he knows how to run your life better than you do, and he is not afraid to say as much. He doesn’t get mealy-mouthed about his commitment to use State violence against anyone he pleases if that’s what it takes to gain our compliance with his standards. People like Josh Hawley and Ted Cruz use the language of liberty while believing in the same right to wield state violence against anyone not living the way they have decided we should live.
It takes a uniquely immoral character (or lack thereof) for someone to decide they don’t want the government to dictate what they can or cannot say, but to be okay with using the State’s monopoly on force to crack down on companies using their private property in a way conservatives don’t approve of. After all, what does free speech even mean, if not a right to speak free of violence? That nobody can hurt you or destroy your property because they didn’t like what you had to say? Yet that’s the very thing big-government conservatives like Josh Hawley believe they have a right to do.
I do not like the path social media companies are taking with moderating user content, but you either believe in property rights or you don’t. You cannot say you believe in individual liberty and private property rights unless you believe in others having the right to use their property in ways you don’t approve of. Of course the right to speak is part of living in a free society, a very important part. But boycotts, cancel culture, social stigmatization, property rights, free association, and free markets are also part of living in a free society. The sad thing is that those who claim to be fighting for free speech, like Josh Hawley, have made the topic a cliché. They carry on, claiming to invoke the principles of liberty, when what they are in fact doing is rather insidiously rejecting them.