If you’ve been following the news in the United States for the last year, you’ve probably heard the major hubbub about something called “Section 230.” Although everyone seems to have an opinion on it, there’s very little discussion of the context in which the law came about or what it actually does.
For the most part, the debate has revolved around whether social media companies like Twitter and Facebook are abiding by the law and whether the law should be updated to reflect the power these companies now have to direct the discussions of their users.
To fully understand why Section 230 of the Communications Decency Act of 1996 is such a big deal, it’s important to explore what it is, what it discusses, and why it came into being in the first place.
Going Back to 1934
Franklin D. Roosevelt had been president for just over a year when he sought to untangle the bureaucracy that regulated radio communication and streamline everything into one single commission. Not long after this initiative was introduced in Congress, he signed the Communications Act of 1934, eliminating the old bureaucracies and establishing the Federal Communications Commission.
The purpose of all of this, according to the act, was to regulate “interstate and foreign commerce in communication by wire and radio” with rules that are clear and easy to understand, coming from one single governing body.
Since that moment, the FCC has been the go-to enforcer and regulator for radio, television, and even the Internet.
That last one, however, doesn’t rely on the broadcast model we associate with the other two. This became a problem as early as the 1990s, when the Internet was still in its infancy. Given how differently the Internet operates – allowing almost anyone to have their own soapbox and democratizing the flow of information – one couldn’t expect the FCC’s operating principles to be compatible with it, or even flexible enough to allow it to thrive.
A change was needed, and it came during the Clinton administration in the form of the Telecommunications Act of 1996.
The Birth of Internet Regulation
Although several attempts have been made to regulate the Internet in the U.S., none came as close to succeeding as the Telecommunications Act of 1996. Contained within the law was a section known as Title V, which some may know as the Communications Decency Act.
When it first passed, the CDA was the first major attempt by Congress to limit “obscenity, indecency, or nudity” on all broadcasting methods, including the Internet. A year later, in Reno v. ACLU (1997), the Supreme Court struck down the law’s anti-indecency provisions, and that particular portion was removed – but the rest of the law survived.
Still remaining in the law, however, is an interesting provision known today as the “safe harbor,” or Section 230(c)(2). Under this provision, providers of content on the Internet are allowed to perform “any action […] in good faith to restrict access to or availability of material that the provider […] considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable,” regardless of whether that material is constitutionally protected speech.
Where Social Media Comes In
In the form it took when it passed in 1996, the law affirmed the right of “interactive computer services” to moderate their content, removing user-published material that is arguably vile in nature or otherwise “harmful to minors” (as stipulated further on, in section (d)). But does this also allow social media platforms to heavily curate the messages posted by their users?
This is the great question being posed by debates that started in 2020, but you may be surprised to find out that it isn’t a new question. In fact, Section 230 was drafted specifically to make a distinction between publishers that curate their content and content distributors (platforms).
In 1997, only a year after the CDA was signed into law, the Fourth Circuit Court of Appeals ruled in favor of AOL in Zeran v. America Online, a case in which someone attempted to hold the company liable for posts made by one of its users.
This came as a result of the paragraph in Section 230 immediately preceding the one quoted above – Section 230(c)(1) – which states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
In plain English, this means: “If you are a platform and one of your users decides to say something outrageous or (as in the AOL case) post libelous information via your service, you’re not legally liable for what that user did.”
Services like Telegram, WhatsApp, Facebook, Twitter, and many others would be in serious trouble if this weren’t the case. Leaked and libelous information posted by individuals acting of their own accord goes through those services all the time. The story isn’t the same for the websites of The New York Times, The Miami Herald, and other newspapers, because they’re publishers and are therefore expected to curate their content.
The Debate
Here’s where things get pretty messy. We’ve already established that Section 230 was intended to make a distinction between publishers and platforms, but what happens when Twitter decides to heavily punish people who express ideas that the majority of its user base finds objectionable?
Social media sites, messengers, whatever Reddit is now, and even Steam’s forums enjoy the freedoms of being platforms, making them nearly immune to litigation when their users engage in civil misbehavior. Their only real obligation is to remove content that’s illegal (such as messages by users advertising the sale of recreational drugs). But when they voluntarily take on the task of a publisher by removing other ideas, possibly offensive parodies, humor, and ironic or unironic lies, are they still acting like a platform?
On one hand, the answer is “yes.” The unfortunate truth of the matter is that Section 230 is pretty vague on what platforms are allowed to remove. With words as broad as “filthy” and “objectionable” in the text, a service can justify removing almost anything that isn’t someone talking about the weather on Sunday while still enjoying the safe harbor privileges.
On the other hand, consistent attempts to curate content beyond the social limit of what would be considered “removing vile content in good faith” make some of these companies behave somewhat like publishers.
In the end, the real question we currently have no clear answer to is, “Do social media companies that curate political speech have the ability under Section 230 to continue to call themselves neutral platforms for their users?”
And if they lose the safe harbor protection, how do we make it so that this legal precedent doesn’t stifle the growth of upstarts that could potentially compete with these larger and more established sites?
What do you think of this? Is this debate worth having? Does CDA Section 230 go far enough to make a proper distinction between publisher and platform? Tell us your thoughts below! Meanwhile, do also check out the GDPR regulation and how it affects you.