Ever since last week's riots at the Capitol, the global debate on freedom of speech online has been dragged back out of its closet for another going-over.
As a TL;DR to provide context for this thread: as of this week, President Trump has been banned or blocked from a wide range of the largest social media platforms. For the most part, the bans and suspensions cite terms-of-use breaches, in response to the claim that he used his pulpit to prompt the Capitol riots. As an aside, social media platforms with more liberal free speech policies, which have been seen as alternative pathways for the President to communicate, have been dropped by app hosts and infrastructure providers such as Google, Apple and Amazon.
For me this conversation leads to a level of dissonance that I struggle to reconcile.
Freedom of speech
I am an advocate of the concept of freedom of speech in its classic, internationally recognised sense: everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers. On the other hand, I also feel that freedom of speech shouldn't shield people from the consequences of what they say where it infringes on the rights of other individuals. There is a delicate balance to be found where the exercise of one individual's right can potentially infringe on the right of another.
Scope
Two other factors confuse the discussion further. The first is that our communications are now global: participants in a conversation sit under different legal interpretations of what freedom of speech and accountability are. The second is that much of our communication online occurs in a confused collection of environments that function as a public realm but are administered as privately and corporately owned space. One could easily envisage a situation where an opinion is stated by a Russian person on a platform that is hosted in multiple locations including India and Turkey, moderated in Bangladesh, owned by a company incorporated in Ireland, and whose development staff are mostly based in the United States. Would US legal or cultural ideas on free speech apply when an American reader comes across the original Russian opinion?
Question one: How do we manage freedom of speech when it crosses borders? That is, what obligations do the private corporations that own our communication spaces have to respect freedom of speech across borders? And what obligations do these corporations have when they are headquartered in a different legal jurisdiction from their primary user base?
Social media - a facilitated community
Corporations that facilitate conversation within their privately owned and moderated communities should, in essence, be able to moderate speech however they see fit within their legal framework. We can choose to join and leave their communities, and we accept their terms upon joining. Much like I can kick ol' racist uncle Bob out of my house when he starts screaming about "PC gone mad", a facilitated community should be able to define who can join a conversation and what the limits of that conversation are. But the nature and scope of the way these facilitated communities function as a semi-public space does lead to a grey area: whether they are providing a public utility, or, for that matter, whether they are broadcast media. Either of those situations would imply different, and perhaps conflicting, sets of standards and obligations regarding the free expression of their users.
Question two: Are social media companies providing public space, and as such, should the use of that space be governed by a nation's legal framework around freedom of speech?
Systemic bias
Because of the size and scale of the interactions in these facilitated communities, algorithms are used to manage what users within the community see. The role of the algorithm is twofold. First, it serves to improve the user experience: it filters what a user sees by judging what is most relevant to that user's interactive engagement. Second, it serves to monetise the facilitated community: again it filters what a user sees, in a way that maximises future potential interactions with services that have a paid component. The algorithm's goals of increasing engagement and monetising that engagement don't include any human judgement on what type of content is engaged with, only that engagement increases: it is topic-neutral, but pro-engagement.
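As a toy sketch of the twofold role described above (the field names and weights here are invented for illustration, not any platform's real ranking formula), the algorithm can be thought of as a scoring function with an engagement term and a monetisation term, and no term at all for the nature of the content:

```python
from dataclasses import dataclass

@dataclass
class Post:
    predicted_engagement: float  # modelled likelihood the user interacts (0-1)
    revenue_potential: float     # modelled monetisation value (0-1)

def feed_score(post: Post,
               engagement_weight: float = 0.7,
               revenue_weight: float = 0.3) -> float:
    """Toy ranking: engagement plus monetisation. Note there is no
    variable for what the post actually says - topic-neutral, but
    pro-engagement."""
    return (engagement_weight * post.predicted_engagement
            + revenue_weight * post.revenue_potential)

# A provocative post that drives interaction outranks a mundane one,
# regardless of the content of either.
hot_take = Post(predicted_engagement=0.9, revenue_potential=0.5)
holiday_photo = Post(predicted_engagement=0.2, revenue_potential=0.5)
ranked = sorted([holiday_photo, hot_take], key=feed_score, reverse=True)
```

The point of the sketch is what is absent: nothing in the scoring function distinguishes between types of content, only between levels of predicted interaction.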
The "basket of deplorables"
Thanks to the two stated roles of a facilitated community's algorithm - interaction and monetisation at all costs - these communities have witnessed a 'race to the bottom' whereby communication that is more likely to be interacted with is more likely to be propagated. In effect, algorithms punish mundane communication in favour of opinionated discourse. This leads to a hollowing-out of the middle ground in debates, as the loudest voices are rewarded with the most engagement. Make no mistake: this situation isn't exclusive to one political party or narrative, no matter how much you feel it is. Every political perspective becomes amplified at once, each to a specific audience, in order to maximise the algorithm's goals of engagement and monetisation. Community feelings of anger, victimhood, success, struggle, victory and so on are each amplified to the users most likely to engage with them.
Arbitrary solutions to the most obscene of voices
In a situation where freedom of speech has been sidelined in favour of algorithmically targeted and monetisable communications, and where mundane conversations are sidelined in favour of louder discourse, we find ourselves facing up to the results of the increased volume of opinionated content. The major facilitated community providers have been forced, through public pressure, to tackle the net effect of their algorithms: punishing people who do exactly what those algorithms prefer (create loud, opinionated conversations). Because of the internal conflict within these organisations - between their desire to increase and monetise opinionated conversation, and the public relations problems created by the amplified opinions - the owners of facilitated communities have operated a minimum-possible-interference model. They are slow to act even in response to those who fail to behave in accordance with their own terms of use. They are reactive rather than proactive, and appear arbitrary in action. For example, Donald Trump has likely been banned from his facilitated communities because of the results of his speech, not because of the speech itself or any speech he has given previously: he was banned only when his speech led to action.
On the other hand, facilitated communities that take a lighter touch on moderation tend to end up as a "basket of deplorables", where, under the guise of free speech, users utilise the open mic they are provided to advertise extreme perspectives that are too distasteful for the majority of facilitated communities. Whether they like it or not, these services become active facilitators for extremism - and again, in effect, they suppress free speech by algorithmically disincentivising mundane opinion.
Question three: How can social media companies moderate their communities when their financial survival depends in part on the results of unmoderated engagement?
Question four: Should we be depending on private corporations to moderate free speech, and if not, would this necessitate further guidance from governments on exactly what free speech is in the age of amplified opinion?
Question five: Can free speech even exist at all when our entire discourse is being channelled by algorithm?
That should be enough to get some opinions going...