Jan 24, 2021

Let me begin with a simple statement: I am not a parler.com fan.

But free speech is not about the freedom to publish things we all like. It is about the freedom to publish things we hate. Things that we find disgusting, revolting, reprehensible.

Of course, there has to be a line drawn somewhere. It is one thing to publish a racist rant about the inferiority of some human beings. It is another thing to call for genocide. Somewhere between these two is (or should be) a clear bright line: criminal “hate speech” is speech that calls for violence, speech that instructs readers to commit a crime.

Arguably, parler.com crossed this line multiple times when it failed to remove posts calling for violence against lawfully elected or appointed public officials, posts that called for a violent uprising against the lawful government of the United States.

So what is my problem, then? Simple: I am alarmed by the idea that we are outsourcing the (legitimate, necessary) policing of the boundaries between free speech and criminal speech to corporations. Not just social media corporations like Facebook and Twitter, but also to corporations that provide fundamental Internet infrastructure, such as Amazon’s AWS.

As private corporations, these companies are of course well within their rights to deny their platforms to anyone, for whatever reason. But is this the world in which we wish to live? Where private corporations manage our fundamental communication infrastructure and decide who can or cannot communicate with the public?

This does not bode well for the future.

When the commercial Internet emerged, Internet Service Providers asked to be viewed by the law much like telephone companies: common carriers, that is, who are responsible for providing the infrastructure but not for the content it carries. (The telephone company does not become an accomplice by providing the service through which criminals arrange a crime.) But social media blurs this line, since these companies become the curators of user-supplied content, which they prioritize, filter, and use for advertising. And companies like AWS must be mindful of the content supplied through their infrastructure, since there are repercussions: letting an AWS VM spit out spam, for instance, can cause other service providers to block an entire range of AWS IP addresses, affecting a large number of well-behaved AWS users.
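
To make the collateral damage concrete, here is a minimal sketch of how range-based blocklisting works; this is an illustration, not any actual provider's policy, and the addresses are RFC 5737 documentation addresses chosen purely for the example:

```python
# A minimal sketch of range-based blocklisting: one spamming VM gets an
# entire CIDR block rejected, taking well-behaved neighbors down with it.
# (Hypothetical blocklist; addresses are RFC 5737 documentation ranges.)
import ipaddress

# A receiving mail server has blocked this /24 after observing spam from
# a single address inside it.
blocked_ranges = [ipaddress.ip_network("203.0.113.0/24")]

def is_blocked(sender_ip: str) -> bool:
    """Return True if the sender's address falls inside any blocked range."""
    addr = ipaddress.ip_address(sender_ip)
    return any(addr in net for net in blocked_ranges)

print(is_blocked("203.0.113.77"))  # True: the actual spammer
print(is_blocked("203.0.113.12"))  # True: an innocent tenant in the same range
print(is_blocked("198.51.100.5"))  # False: a sender outside the range
```

The innocent tenant at 203.0.113.12 never sent a single piece of spam, yet its mail is rejected all the same; this is precisely why infrastructure providers cannot be entirely indifferent to the content flowing through their networks.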

But now, we have service providers that police political content. Even attaching labels like “the factual accuracy of this item is disputed” is questionable: Who disputes it? How can I trust the objectivity of the fact checkers who attach such labels? But things get much worse when Facebook or Twitter altogether ban someone like Trump from their respective platforms, or when AWS kicks out parler.com.

I am not questioning the judgment behind these individual cases. I am not questioning the necessity behind the decisions. Rather, I am questioning the haphazard, arbitrary, opaque process that led to these actions. How can the same process that, say, led to Trump's lifetime ban on Twitter still permit religious extremists, dictators, and worse to spread hate or promote acts far more criminal than anything Trump has done?

There has to be a better way.

And I think there is a better way. Now is the time, I think, for this industry to create a nonprofit council that establishes and manages standards, adjusted if necessary to take into account applicable law in different jurisdictions. The institution should operate at arm's length, with secure funding, so that its decisions would not be swayed by undue influence or funding concerns. The process should be entirely transparent. And companies, especially social media and cloud computing infrastructure companies, should abide by the standards set by this council.

The alternative is just unacceptable. I don't care how well-intentioned Facebook or Twitter or Amazon are; I do not wish our freedom of expression in our digital future to be opaquely managed by for-profit corporations.

Posted at 1:47 pm