Decency, Free Speech, and Tomorrow’s Internet.

The internet has come a long way since the Communications Decency Act of 1996. Is it time to reform internet liability? 



The Court of Justice of the European Union ruled in early October 2019 that Facebook and other platforms can be required to remove content worldwide when it is deemed illegal in the EU. The ruling has major global implications in an interconnected world. While it makes sense in Europe, for Americans who live under a broader free speech framework the court's decision seems overzealous. If every politician in the U.S. filed a lawsuit every time someone called them a fascist, we would soon be swimming in litigation. Clearly, things are done differently in Europe. That said, the American side of the pond is also (re)evaluating the state of content moderation.

What is Section 230?

Section 230 of the Communications Decency Act of 1996 is the law that governs liability for internet content in the United States. The provision shields providers of internet services from liability for information that third parties place on their platforms. In other words, when you, the user, put something online, the platform hosting that speech is not legally liable for the content you post.

Lawmakers in the U.S. have been debating revisions to Section 230 for several years. Possible revisions include making platforms more responsible for policing content and, by extension, more liable for it. But what content should be policed? How fast can that policing happen in a digital world where millions of people can see a piece of content in mere minutes? These questions have thrust Section 230 into the limelight and forced Americans to decide what platform responsibility should look like today.

The good, the bad, and the ugly (photos)

While Americans have a right to free speech, that right only protects against censorship by the government. Thus, if a private citizen, business, or organization wants to censor your speech, they are legally able to do so. While Americans have the right to say most things online (though not always without consequences), online platforms have the right to take down what you say on their apps, websites, social media networks, and more. Put another way, no one is obligated to host your speech for you. Enter the whys and hows of content moderation.

Certain illegal activities (such as illicit drug sales or the transmission of child pornography) that occur over the internet can be prosecuted. U.S. law generally holds that those acts, along with others such as terrorist broadcasting and recruitment or incitement of violence, are unacceptable and should be moderated by the platforms where they are posted.

On the other hand, there is content everyone can generally agree is good and approved for all audiences. This includes tutorials, art, and all those cute animal videos. What about the stuff in between? The ‘alternative facts,’ the extremist political content, conspiracy theories, anti-vax or climate-change propaganda, the exaggerated Yelp reviews that hurt your business, and even that really bad photo you asked your friend to remove (but they won’t, because it’s a great one of them!). None of this is illegal to have online, but not everyone wants it there. Content moderation is highly subjective.

Facebook announced it would not remove or fact-check speech by political figures, on the grounds that it is in the public interest for individuals to make these editorial judgments for themselves. Some members of Congress take great issue with this: Rep. Waters called it a “massive voter suppression effort,” and Rep. Ocasio-Cortez asked whether she could place advertisements saying Republicans supported the Green New Deal (something they have not done, and very likely never would). Enforcement and revision of Section 230 is not a one-sided political issue. Sen. Cruz has long crusaded for reform, regularly pointing to reports of a “consistent pattern of political bias and censorship on the part of big tech.”

How do we fix it?

Politicians on both sides aren’t wrong here. Platforms make judgment calls about what is, and what is not, acceptable content, and those calls often carry the unconscious biases of whoever makes them. Section 230 allows platforms to make these calls in good faith and act as Good Samaritans, lest the internet spiral into chaos. Inaccurate or misleading content is distracting and causes confusion, especially in the age of deepfakes, “coordinated inauthentic behavior,” and the many wannabe imitators of The Onion.

Sometimes a gray area is good. I want the unsavory Yelp review that warns me away from the sketchy burger joint before I get food poisoning, or the Reddit thread where I can anonymously air my grievances and questions about a variety of personal topics, or even the social media thread that helps solve a murder mystery.

Changing Section 230 to make platforms legally liable for posted content doesn’t make content moderation problems go away; it makes the content itself go away. If they want to stay in the game, platforms will still have to invest in technology to find and detect objectionable content, and in human reviewers to ensure that policing is accurate. More importantly, liability will force them to over-filter content to avoid lawsuits. Eliminating Section 230 protections will simply demand that technology companies build better filtering technology on an arbitrary government timeline or face fines and other legal consequences, and it will open the door to litigation in the gray areas. This pressure will disproportionately impact smaller technology companies that lack the advanced monitoring and filtering systems of the larger players, or the legal teams to fight the inevitable battles that will follow.

Conclusion

Removing Section 230 protections from platforms will lead to one of two results. Either it will force platforms to over-filter content so that nothing can be seen as objectionable, dramatically reducing the amount of content available today, or it will cause them to go hands-off entirely and filter nothing. We’d be left with a wild-west American internet of unfettered content, with platforms claiming no responsibility and all liability for speech falling on the original poster rather than the platform. That outcome would, in turn, encourage the EU and others to further exert jurisdiction beyond their borders to stop the wash of illegal or unwanted speech crossing into their digital jurisdictions.

To avoid these scenarios, platforms should take it upon themselves to make clear what their policies are for various types of speech, and they should continue to invest in content policing and filtering technology. These moves would encourage a safer internet that allows for a wide range of free speech while largely avoiding harmful content, much like the real world itself.


By Sarah Richard

Developers Alliance Policy Counsel & Head of US Policy


©2019 Developers Alliance All Rights Reserved.