Familiar with Section 230? It’s the law that allows content moderation while shielding platforms from being sued for what their users post. In the last few weeks, it has been at the heart of both calls to action and efforts to fan the flames of racial violence in America.
Both Donald Trump and Joe Biden have called for the end of Section 230 protections. Does this mean they agree? Not quite. Does the bipartisan call for its end mean that the law will be repealed immediately? Also unlikely. The fight over Section 230 is nothing new.
On May 28th, President Trump released an executive order requesting a review of online censorship in response to Twitter labeling one of the president’s tweets with a disclaimer indicating that he was promoting misinformation regarding the US election process (in violation of Twitter’s published content policies). A few days later, Twitter flagged another of Trump’s tweets, this time indicating that it was “glorifying violence.” In the tweet, Trump promised that “when the looting starts, the shooting starts,” in response to the protests and riots taking place across the country after Minnesota resident George Floyd’s death at the hands of the local police department. In the days after that, Twitter also flagged a post by Rep. Matt Gaetz (R-FL) for inciting violence. The executive order has been found to carry little legal weight and is already facing lawsuits from the internet community. That notwithstanding, let’s take a look below at what removing Section 230 would mean and whether removing it would solve our problems. (TLDR? It won’t.)
What Does Removing Section 230 Look Like?
Your First Amendment Rights
Allow me to briefly put on my legal hat. The First Amendment grants you the right to free speech. This includes expressing your political and religious views, the right not to say the pledge of allegiance, the right to burn a flag, and the right to scream in the streets about how much you hate cilantro.
First Amendment rights, however, are not absolute. The government may restrict the time, place, or manner of speech, which is why protest permits and noise curfews are legal. Additionally, the government may restrict speech “directed to inciting or producing imminent lawless action and is likely to incite or produce such action.” Most importantly, First Amendment rights only apply to government restrictions of speech.
Facebook, Twitter, and the like are not the government, and the internet is not run by the government. As private entities with terms of service agreements, platforms have their own First Amendment rights, which allow them to restrict (or not restrict) their content and users as they see fit. The government may not infringe upon a platform’s First Amendment rights unless one of the noted exceptions applies.
The Communications Decency Act
In order to contain what would otherwise be a free-for-all, Congress passed the Communications Decency Act of 1996 (CDA) to regulate material on the internet. In it, Congress acknowledged that Americans have a fundamental right to free speech but that some standards may need to be implemented, because the internet, like all communication technologies, can be used for illegal interstate commerce (guns, drugs, CSAM), the promotion of terrorism, and more.
Section 230 of the CDA provides immunity from liability for providers of internet services who allow third parties to place information on their platforms. When a user posts or uploads some form of content online, the platform hosting that speech is not legally liable for what you, the consumer, have posted. This is why you can post a bad Yelp or Airbnb review without being sued by the company or host for defamation. It is also why the company or host who wronged you can’t sue the platform for hosting your review.
Lawmakers in the U.S. have debated revising Section 230 in various ways for years. The largest push of late, however, is to remove that portion of the law entirely.
Why do Republicans want to remove it?
Republicans want Section 230 removed because they feel that conservative voices are being unfairly targeted and censored on platforms. President Trump specifically takes issue with platforms putting labels on his posts that he believes editorialize his views. Some Republicans have stated that if a platform wants to “exercise an editorial role like a publisher then they should no longer be shielded from liability & treated as publishers under the law,” highlighting the different liability standards publishers and platforms face.
Why do Democrats want to remove it?
Democrats want Section 230 removed because they believe that the promotion of misinformation and disinformation on platforms is leading to great public harm. These harms include eroding trust in both government and one another, as well as spreading inaccurate information about vital services and functions such as public health and voting rights. Democrats believe platforms have a responsibility to police user-created content and prohibit the spread of hateful or misleading material.
What would happen if Section 230 was removed?
How do you make platforms responsible for user content? Remove their legal protections. What then happens when you remove their legal protections? They moderate their platforms even more aggressively so that they do not get sued. Demanding that companies act responsibly means the companies will… act *too* responsibly.
Removing Section 230 would mean two things: that platforms could be sued for what their users say online, and that companies and developers would be liable for code that is discriminatory in either intent or execution. While nobody wants discriminatory programs, companies also do not want to be sued before they can even realize there may be a problem, or worse, to bear the massive costs of litigation when they are in fact innocent.
Contrary to what Republicans intend, this would mean that platforms could be sued over the algorithms that promote popular accounts or surface content a particular user is likely to find interesting. Platforms have across the board denied censoring content based on political views, contrary to conservative claims. In the current environment, if certain content is less prominent in news feeds, it is because of a lack of engagement on those posts, or because the user has not expressed interest in that type of content through their behavior on the platform. Removing algorithms that promote posts based on engagement would give us more fringe news sources and pyramid-scheme lures from old classmates, while simultaneously leaving us with fewer dog videos and less actual newsworthy material. No thanks.
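To make the mechanism concrete, here is a minimal sketch in Python of the kind of engagement-based ranking being described. The signal names (`likes`, `shares`, `comments`) and the `affinity` weight are invented for illustration; this is not any platform’s actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    likes: int      # hypothetical engagement signals
    shares: int
    comments: int

def engagement_score(post: Post, user_affinity: dict[str, float]) -> float:
    """Toy ranking score: raw engagement weighted by how often this
    user has interacted with the post's author in the past."""
    engagement = post.likes + 2 * post.shares + 3 * post.comments
    affinity = user_affinity.get(post.author, 0.1)  # default: weak interest
    return engagement * affinity

def rank_feed(posts: list[Post], user_affinity: dict[str, float]) -> list[Post]:
    """Order the feed by descending score. Nothing is deleted --
    unpopular posts simply appear lower."""
    return sorted(posts, key=lambda p: engagement_score(p, user_affinity), reverse=True)
```

Notice that nothing in this sketch removes a post: low-engagement content just ranks lower, which is the distinction platforms draw when they deny accusations of political censorship.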
Further, platforms that have long struggled with how to handle the Trump camp’s inflammatory rhetoric are going to censor more of his commentary rather than less. This could even lead to accounts like Trump’s being banned from platforms entirely. The fewer protections like Section 230 that platforms have, the less likely they are to open themselves to litigation from every “snowflake” who is “triggered” by a post. No doubt that is not what Trump or other conservatives want.
In the case of the Democratic argument, removing Section 230 would mean their political foes would be fact-checked. However, it would also hold their own camp to the same heightened standards. This sounds great when it’s your opponent, but the policy would also apply to individuals like four-Pinocchio Michael Bloomberg and Sen. Elizabeth Warren (D-MA). They too would not be able to add spin where it suited them, and their followers would be censored from adding any colorful commentary on political movements or certain individuals presently occupying the White House. The liberal argument in this case inherently destroys the op-ed concept, for their own side as well as everyone else.
Can Section 230 be fixed?
Do we actually want to “fix” it?
The two camps are at odds with each other: one wants to strip platforms of their protections because they have done too much, the other because they have done too little. In either case, removing Section 230 helps neither party achieve its goals, all while harming the platforms. That is why coming to a consensus is near impossible.
We can’t stop people from thinking and doing things we do not like on or off the internet, regardless of whether platforms allow that content to exist. The Section 230 debate is really about platforms’ amplification of unsavory content rather than about the content itself. Revisions to the law should focus on moderation and amplification standards we all find acceptable rather than on platform liability.
As a consumer of content, let me ask you: if everyone were allowed to post only non-confrontational things that everyone accepted, would you still use Facebook and Twitter? Voyeurism is one of the main reasons platforms are so successful. We (usually secretly) love the antagonistic political debates and seeing the latest from our high school’s former resident mean girl who now peddles essential oils. We also love the happy birthday wishes from that pleasant ex-neighbor who managed to stay on our friends list over a decade later. Getting rid of Section 230 would mean getting rid of all of the above and more.
Any time people exercise free speech, there will inevitably be some speech that we don’t agree with, don’t like, or don’t even think should be legal. The President of the United States saying we should be shooting protesters is newsworthy regardless of whether it is said on social media (or is palatable to anyone). Herein lies Twitter’s argument for why it kept the post up, disclaimer aside.
Will it be fixed?
As previously stated, there is far from a consensus on Capitol Hill on how to handle further legislation that would modify the CDA. Members of both parties recognize the complexity of the legislation and, as such, cannot reach agreement even within their respective parties. As Rep. Cathy McMorris Rodgers (R-WA) said at a hearing on the issue last year, “misguided and hasty attempts to amend or even repeal Section 230 for bias or other reasons could have unintended consequences for free speech and the ability for small businesses to provide new and innovative services.” It is unlikely that the law will be reformed despite pleas from President Trump and Democrats.
How do we as developers do better?
Room for improvement to Section 230 lies in both the technology and the policy. Developers, as creators of platforms, have a responsibility to be aware of the policies of their respective companies. They must also establish methods that best achieve those end goals without suppressing information that is important to the public interest. There have been numerous reports that foreign adversaries have amplified misinformation and will continue to use it as a divisive weapon. Developers must work on methods to detect fraudulent posts and inauthentic content.
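As a purely illustrative example of what the simplest version of such detection might look like, here is a hedged Python sketch of a rule-based heuristic. The thresholds, the fields such as `age_days` and `posts_per_day`, and the duplicate-post check are all assumptions made for illustration; real systems rely on far richer signals and machine-learned models.

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int
    followers: int
    posts_per_day: float

def inauthenticity_signals(account: Account, duplicate_post_count: int) -> list[str]:
    """Return human-readable reasons an account looks suspicious.
    All thresholds here are invented for illustration only."""
    reasons = []
    if account.age_days < 30 and account.posts_per_day > 50:
        reasons.append("new account with an unusually high posting rate")
    if duplicate_post_count > 10:
        reasons.append("identical text posted many times (possible coordination)")
    if account.followers < 5 and account.posts_per_day > 100:
        reasons.append("high volume with almost no audience")
    return reasons

def flag_for_review(account: Account, duplicate_post_count: int) -> bool:
    """Flag for human review rather than automatic removal -- keeping a
    person in the loop helps avoid suppressing legitimate speech."""
    return len(inauthenticity_signals(account, duplicate_post_count)) >= 2
```

The design choice worth noting is that the sketch flags accounts for review instead of removing them automatically, which is one way to pursue inauthentic content without suppressing information that matters to the public interest.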
Above all, online platforms — including the comment sections in games, apps, stores and message boards — should be as transparent as possible about their content moderation and posting policies. The biggest driver of angst in this space is the feeling that the rules are arbitrary and open to influence. Following posted rules and enforcing terms of service agreements go a long way towards minimizing the appearance of misuse and nepotism.
If you don’t like a platform’s policies, lobby them for a change in them.
If the platform is resistant to change, leave. Create your own.
If you don’t like what a politician is saying, vote them out of power.
If you don’t like the recipe your Great Aunt Delores posted on her news feed, you should probably have a discussion offline at your own risk.