Developers, Content Filtering, And Censorship Enforcement: An Appeal To Policymakers




We’re living in troubled times. The coronavirus crisis is challenging our society in many ways. One of its largest and most lasting effects, however, may not have become fully apparent… yet. Are we risking harm to freedom of speech through the policies designed to combat COVID-19? Combating disinformation efficiently has always been imperative, but now more than ever. Online platforms took firm stands, with protective measures in place from the start of the crisis. Enhanced or full automation was, and is, the best tool at hand, but it runs the risk of over-filtering and accidental censorship. Developers and online platforms are now caught in a monumental dilemma. What’s the best approach? Ultimately, the answer will lie with policymakers.

I personally noticed this risk in mid-March. I wanted to share a well-written article about COVID-19’s impact on the future of work with my friends on social media. Instead, I received an automatic message: “Your post goes against our Community Standards on spam.” I immediately understood what had happened. I’m convinced that the keyword that triggered the automatic filter to ban my post was “Coronavirus,” since everything else in the article was inoffensive. I used the redress option offered by the platform. As a citizen and end-user, I found the platform’s decision to deploy such an aggressive algorithm frustrating: it prevented me from exercising my right to free speech. After a couple of days, I received the good news that my post was back, along with the platform’s apology for getting it wrong. I wasn’t the only one in that situation. The social media platform in question revised its algorithm after plenty of similar, justified complaints. The root cause was limited access to human moderators during the lockdowns, which left automated filtering as the only available way to catch dangerous content related to the crisis.

Automatic filters can be extremely efficient, but only in a mechanical way. The machine, as advanced as it is, is incapable of sensing nuance; artificial intelligence cannot (yet) match humans in this regard. So actual human eyes are needed to moderate content. This is why it is infrastructurally difficult, and often financially impossible, for small platforms to moderate even amounts of content that are modest by internet standards. In a time of pandemic and crisis, however, even the big platforms are struggling to scale up the necessary measures.

This exceptional time requires fast and diligent action to protect platform users from misinformation and harmful content. There are clear criteria for selecting authoritative sources of information, as well as for detecting disinformation and harmful content. Online platforms are doing their best to rely on these criteria and to eliminate content that can cause serious harm. Their most effective tools are those that use machine learning. In other words, yes, automatic filters do this job and do it exceptionally well, but there are many challenges and risks. A clear example: automatic filters cannot perceive when a politician’s statement merely references disinformation, or draws attention to controversial content circulating online. The filters cannot tell such legitimate discourse from a malign post that intentionally promotes illegal or harmful content. In my personal example above, it was a single but highly charged word in the headline that likely triggered the filter; the rest of the post didn’t count. It is as simple as that, unfortunately. We humans can discern these subtle differences. The machine, as advanced as it may be, still cannot.
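To make that failure mode concrete, here is a minimal, purely hypothetical sketch of a keyword-based filter in Python. The blocklist, function name, and sample post are all invented for illustration; real platform filters are far more sophisticated, but they can share the weakness described above: a single trigger word outweighs all surrounding context.

```python
# Purely illustrative sketch of a naive keyword-based spam filter.
# The blocklist, names, and sample post are hypothetical; real platform
# filters are far more complex, but they can share this failure mode:
# one trigger word outweighs everything else in the post.

BLOCKLIST = {"coronavirus", "covid-19"}  # hypothetical trigger keywords

def is_blocked(post_text: str) -> bool:
    """Flag a post if any trigger keyword appears, ignoring context entirely."""
    words = (w.strip(".,!?:;\"'") for w in post_text.lower().split())
    return any(word in BLOCKLIST for word in words)

# A benign share of a news article is blocked on one headline word alone:
post = "Worth reading: how the coronavirus crisis will reshape remote work"
print(is_blocked(post))  # prints True; the filter never weighs the rest
```

Nothing in this sketch looks at who is posting, what the article actually says, or whether the content is authoritative. That is exactly the gap that human moderators, when available, are supposed to fill.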

In the offline world, the city council controls the town square. It would find it impossible to censor every citizen walking by; people are free to speak their minds. When other citizens or the police observe that limits have been crossed, law enforcement intervenes and removes the offending speaker from the public space if their speech harms others. However, a permanent, automatic, ex-ante intervention in the town square, allowing only some people to speak, no matter the criteria used… that leads to censorship, authoritarianism, and the erasure of democracy. Why should we accept that the Western world, built on democratic principles and humanistic values, should risk sliding into an authoritarian regime? It doesn’t matter whether it happens offline or online.

Developers have different opinions, and their personal values influence their work, as in any other profession. They are concerned by these issues because they are in the best position to observe the risk of censorship by automatic means. They are the first to raise awareness that the way we use technology, especially machine learning, deserves critical attention. While machine learning has the potential to solve many of our societal problems, it may also create serious problems of its own, like an over-reliance on automation that leads to censorship.

It is the role of policymakers to direct the use of technology in such a way as to preserve the equilibrium of freedoms in society. In times of crisis, there is a tension between freedoms, and governments make conscientious choices favoring, for example, health and security for a period of time. But what is justified in such exceptional cases should not be allowed to continue afterward. The risk of putting democracy in danger is too high. 

In lawmaking, finding the right balance is an evergreen challenge. Criminal law is an imperfect system, but that doesn’t stop our pursuit of justice. We accept an error rate and acknowledge that we can never draw the dividing line perfectly. If the system is unbalanced in one direction, we risk convicting innocent people; if it is too lax in the other, some offenders go free. As a society, we’ve decided that it is better to accept the risk of missing some offenders than to convict even one innocent person. That is the better moral outcome for us. The same challenge exists here. Filters cannot be perfect, any more than people can be, so we need to decide whether we want to block permissible content to ensure nothing impermissible gets posted, or whether we’ll accept some undesirable content staying up for a short while to ensure that free speech is protected and nothing legal gets censored. Right now, the EU and the US put the balancing point in slightly different places.
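The same trade-off can be sketched in code. The snippet below is a toy illustration with invented posts and scores, assuming a hypothetical classifier that assigns each post a “harmfulness” score; the threshold at which a post gets blocked is the policy lever that shifts errors from one side of the line to the other.

```python
# Toy illustration of the filtering trade-off, with invented posts and
# scores. "Score" is a hypothetical classifier's estimate that a post is
# harmful; the blocking threshold is the policy lever discussed above.

POSTS = [
    # (description, score, actually_harmful)
    ("legitimate news share", 0.35, False),
    ("heated health-policy debate", 0.55, False),
    ("spam promoting fake cures", 0.70, True),
    ("coordinated disinformation", 0.90, True),
]

def evaluate(threshold: float) -> tuple[int, int]:
    """Return (legal posts wrongly blocked, harmful posts missed)."""
    false_positives = sum(1 for _, s, bad in POSTS if s >= threshold and not bad)
    false_negatives = sum(1 for _, s, bad in POSTS if s < threshold and bad)
    return false_positives, false_negatives

for threshold in (0.5, 0.8):
    fp, fn = evaluate(threshold)
    print(f"threshold={threshold}: {fp} legal post(s) censored, "
          f"{fn} harmful post(s) missed")

# A strict threshold (0.5) censors legal speech; a lax one (0.8) lets some
# harmful content slip through. Where to draw the line is a policy choice,
# not a purely technical one.
```

No threshold in this toy example eliminates both kinds of error at once, which is precisely why the choice of balancing point belongs with policymakers rather than with the machine.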

The developer community and online platforms need proper direction from policymakers. The rules that governments put in place should help focus automatic tools in ways that preserve fundamental freedoms, not the other way around. If the rules leave a developer no choice but to deploy an automated content filter, then regulators have failed to strike a balance in their intervention. Please don’t blame the technology, or the developers, for the results.

After the crisis, this issue will remain front and center during the EU debate on the Digital Services Act. We should learn from this crisis and pay attention to the regulatory solutions that will define the future of the EU, our future. There are better options that permit proactive, voluntary actions from online platforms but keep appropriate checks and balances in place. A good basis already exists in the Code of Practice on Disinformation. 

Policymakers: better solutions exist. It’s up to you to decide wisely and to ensure that citizens’ freedoms and rights are preserved. It’s a noble and critical responsibility in troubled times. Technology will follow political decisions, for good or for ill, so we are counting on you to protect and preserve freedom as well as online safety.


By Karina Nimară

Director of EU Policy and Head of Brussels Office - Karina previously served as Legal Advisor and Internal Market attaché at the Permanent Representation of Romania to the EU. Prior to her work with the Romanian diplomatic mission, Karina spent ten years in European Union affairs within the Romanian Government. While there she coordinated, inter alia, the process for transposition and implementation of EU legislation. Karina holds a law degree and specializes in EU law and policies. Based in the Alliance’s Brussels office, she's a tech enthusiast, enjoying the dawn of the Age of Artificial Intelligence. Other than robots, she's fascinated with cats and owls.


