On July 18th, New America’s Open Technology Institute (@OTI on Twitter) hosted a panel on “The Future of Free Expression Online in America.”
The event drew a packed house, with speakers ranging from digital rights activists at Next Century Cities and First Amendment advocates from the First Amendment Coalition to policy experts from Facebook and New America’s Open Technology Institute itself.
First Amendment protections apply to online speech as much as offline speech. In the words of former Secretary of State John Kerry, “In America you have a right to be stupid if you want to be.” Further, U.S. law (notably Section 230 of the Communications Decency Act) shields intermediaries (the Facebooks and Googles of the world, but tens of thousands of smaller players as well) from liability for unsavory content, since the platforms merely facilitate speech rather than act as the speakers themselves. Tech companies’ own interests are also at stake when it comes to content moderation. (Would YOU want to use Instagram if scammers selling cannabis oils and tech support were constantly blowing up your DMs?) Thus, moderating content is necessary, at least to some extent.
Thursday’s event discussed how big tech moderates content and how social media platforms, in particular, should be held accountable for what appears on their services. One piece of the larger discussion is how governments could provide useful, uniform guidelines for this process. While the classic example of ‘nobody wants to promote terrorism’ is a common argument for why some regulation in this space is necessary, there was also consensus that platforms should be transparent with the public about what they moderate and why. Panelists argued that regulation is needed to prevent hate speech, harassment, and bullying; what that regulation should look like, of course, is an evolving question, and a subjective one, as Thursday’s lively audience made clear. Panelists acknowledged that while AI has been helpful in this regard, it is not a perfect science (yet). That matters because content moderation is full of nuance and often requires human judgment, and even two human moderators will not always agree on where to draw the line between criticism and hate speech.
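To make the AI-plus-human-judgment point concrete, here is a minimal sketch of human-in-the-loop moderation. Everything in it is hypothetical: the classifier is a stand-in and the thresholds are invented, not any platform’s actual system. The idea is simply that clear-cut cases are handled automatically, while the ambiguous middle ground is routed to human reviewers.

```python
# A minimal, hypothetical sketch of human-in-the-loop moderation: automated
# scoring handles clear-cut cases, while ambiguous content goes to humans.
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    ALLOW = "allow"
    REMOVE = "remove"
    HUMAN_REVIEW = "human_review"


@dataclass
class ModerationResult:
    decision: Decision
    score: float  # model's estimated probability the post violates policy


def score_post(text: str) -> float:
    """Stand-in for an ML model; a real system would call a trained classifier."""
    flagged_terms = {"scam", "spam"}
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, hits * 0.5)


def moderate(text: str, remove_above: float = 0.9, allow_below: float = 0.2) -> ModerationResult:
    score = score_post(text)
    if score >= remove_above:
        return ModerationResult(Decision.REMOVE, score)
    if score <= allow_below:
        return ModerationResult(Decision.ALLOW, score)
    # The gray zone between the thresholds is exactly where, as the panel
    # noted, human judgment is still required.
    return ModerationResult(Decision.HUMAN_REVIEW, score)


print(moderate("Check out this spam scam offer"))  # clear-cut: REMOVE
print(moderate("Is this offer a scam?"))           # ambiguous: HUMAN_REVIEW
```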
Panelists also highlighted the importance of pushing for data portability, and stressed that any portability scheme must be meaningful for end users, especially in light of the First Amendment concerns discussed above. If a hosting company deleted an individual’s content under its moderation standards, portability would let that person keep a copy of their posts (and anything else they would lose if their account were closed) and move it from Platform A to a Platform B that allows them to express their views.
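As an illustration, here is a minimal sketch of what a portable export might look like: a user’s posts serialized into a self-describing, platform-neutral JSON document that another service could import. The schema is invented for this example; real efforts such as the Data Transfer Project define richer, standardized formats.

```python
# A hypothetical, minimal export format for data portability: serialize a
# user's posts to JSON so another platform could import them.
import json
from datetime import datetime, timezone


def export_posts(user_id: str, posts: list[dict]) -> str:
    """Serialize a user's posts into a platform-neutral JSON document."""
    archive = {
        "format_version": "1.0",
        "exported_at": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "posts": [
            {
                "id": p["id"],
                "created_at": p["created_at"],
                "text": p["text"],
                "attachments": p.get("attachments", []),
            }
            for p in posts
        ],
    }
    return json.dumps(archive, indent=2)


print(export_posts("user-123", [
    {"id": "1", "created_at": "2019-07-18T14:00:00Z", "text": "Hello, world"},
]))
```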
One prominent question: how can we resolve (some of) the tension between First Amendment rights and harmful content? The unanimous answer: diversity in the backgrounds and life experiences of both developers and content moderators. Those experiences translate into better AI, which in turn means more effective First Amendment protections and less harmful content. OTI’s panelist Sharon Bradford Franklin pointed out that “So much of moderation, by all platforms, is… grounded in English as its first language…” and argued that developers must keep incorporating different regional viewpoints and languages to build a more comprehensive approach to moderation. This was echoed by Facebook’s representative, who noted that the company has committed to bringing civil rights expertise onto its team because “you could be a really smart engineer… but perhaps you haven’t had the life experience to understand what it means to deal with civil rights.”
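Franklin’s English-first observation suggests one small, concrete engineering response, sketched below: route flagged content to reviewers fluent in its language rather than defaulting everything to English-language review. The queues and the language detector here are hypothetical stand-ins; a production system might use a library such as langdetect or fastText for language identification.

```python
# A hypothetical sketch of language-aware review routing: send flagged posts
# to reviewer pools fluent in the post's language.
REVIEWER_POOLS = {
    "en": "english-review-queue",
    "es": "spanish-review-queue",
    "hi": "hindi-review-queue",
}


def detect_language(text: str) -> str:
    """Stand-in for a real language-identification model."""
    if any("\u0900" <= ch <= "\u097f" for ch in text):  # Devanagari block
        return "hi"
    return "en"


def route_for_review(text: str) -> str:
    lang = detect_language(text)
    # Fall back to a general queue when no fluent pool exists, rather than
    # silently defaulting everything to English-language review.
    return REVIEWER_POOLS.get(lang, "general-review-queue")


print(route_for_review("नमस्ते दुनिया"))  # -> hindi-review-queue
```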
Protecting individual rights, moderating content effectively, and finding ways for developers to help address these issues will be an ongoing conversation as tech companies continue building software to remove offensive content.