Deplatforming between democratic hygiene and cancel culture

In the end, it was not the U.S. Senate that pulled the plug on Donald Trump, but social media platforms, notably Twitter and Facebook. Since it is well known that the greatest weapon of mass destruction is the masses themselves, social networks have increasingly scrutinized those who seek to seduce the masses with populism, demagogy, or plain lies. The fact that Donald Trump, now the former president of the United States, has been ousted from the most important social media platforms is unlawful censorship to some and an overdue correction of an obvious aberration to others. But one step at a time.

Deplatforming, the withdrawal of access to the digital public sphere of social networks, is not a new phenomenon but a well-established moderation technique that has been used in online forums for years, for example against spam accounts. Nor is Trump the first politician to have this access revoked. In 2018, millions of users were banned from Twitter for their proximity to the Islamic State. Also in 2018, Facebook stripped Myanmar’s military leaders of their official accounts after the platform had been used to demonize the Muslim Rohingya, hundreds of thousands of whom were subsequently forced to flee ethnic cleansing to Bangladesh. The removal of the right-wing conservative social media service Parler by Amazon Web Services, Google, and Apple likewise has precedent: WikiLeaks was banned from Amazon back in 2010 after publishing secret documents about potential war crimes. So while it was by no means the first time a politician lost his speaking platform on the internet, it was the case of former President Donald Trump that got the discussion about deplatforming going.

How did Trump’s deplatforming come about?

Long before Trump was even close to running for president, he was using his social media accounts to spread lies and conspiracy theories, such as the claim that then-President Obama was not born in the United States. The far-reaching effects of these constant lies on large sections of the population accelerated and intensified the discussion about how social media should handle the problem in practice. As recently as 2017, Twitter let Trump get away with anything under the pretext of special news value, even when he threatened North Korea’s dictator Kim Jong Un with annihilation in a dispute over nuclear weapons tests. From Trump’s presidential candidacy onward, the two major social media services went to great lengths to avoid having to rein in their biggest crowd-puller. It was not until three years and countless lies and hate messages later that Twitter felt compelled to correct its line: under its civic integrity policy, created in 2018 and tightened in 2020, Twitter classified a tweet from Trump as “misleading information” for the first time on May 26, 2020, and put a warning label on it.

On Jan. 7, 2021, a day after the Trump-inspired riots at the Capitol in Washington that left 5 people dead and 138 injured, Twitter suspended Trump’s account for 12 hours. The short messaging service tied the temporary nature of the suspension to the requirement that Trump delete three tweets and warned that the suspension would become indefinite on the next offense. Shortly before, Facebook and Instagram had also suspended the president’s accounts. Finally, one day and two tweets later, Twitter took the step to permanent suspension. In addition to Facebook and Instagram, other services such as Snapchat, Twitch, Spotify, and Shopify also blocked Trump’s user accounts.

Deplatforming in Germany

Private companies in the U.S. are allowed to deny politicians their services even if they provide elementary communication channels with the public. In Germany, however, this case is somewhat different. According to a decision by the Federal Constitutional Court, intermediaries are “bound by fundamental rights” as soon as they reach a decisive size that is relevant to public communication. In this context, the Federal Constitutional Court has confirmed that “private spaces” are no longer private if public communication is severely restricted without them.

Accordingly, a politician of Trump’s caliber could not have been deprived of access to the digital public sphere quite so easily in Germany, because judicial protection of political speech takes higher priority here. According to the Federal Constitutional Court, private companies are not directly bound by fundamental rights such as freedom of expression, but fundamental rights “radiate” into other areas of law, including the T&Cs of social networks. In practice, this meant that Facebook had to reverse the deletion of a statement by an AfD politician because this exercise of freedom of expression did not violate “the rights of another person,” as the T&Cs required.

At the same time, government politicians in Germany are under stricter obligations to tell the truth than their American counterparts. The law governing official public statements demands objectivity and accuracy from public officials more rigorously than in the United States. In November 2015, for example, then-Federal Research Minister Johanna Wanka had to delete a “red card” she had shown the AfD on her ministry’s website for “incitement of the people,” following an injunction from the Federal Constitutional Court. So, legally, a German Trump could have been reined in much earlier.

Even if the legal situation in Germany makes a course of events similar to that in the U.S. seem unlikely, this does not answer the question of how we will deal in the future with politicians who divide our societies and incite them against each other, and whether blocking important digital communication channels should be part of the answer. What is clear and undisputed is that social media platforms have too much power. What to conclude from this interim finding is less clear, because two sides stand diametrically opposed in the discussion about what social networks should and should not be allowed to do.

One perspective goes like this: deplatforming should be allowed, because real censorship can only come from the state, not from private companies. The right to freedom of expression is not restricted by the mere deletion of accounts on social networks; Donald Trump can continue to exercise this right, the reasoning goes, just not on Twitter and Facebook. Moreover, the state cannot force companies to give someone like him a platform, especially not if that person has agreed to the terms of use and then violated them with his statements.

The opposing side, represented by Chancellor Merkel among others, likewise argues that freedom of expression, as a fundamental right of elementary importance, can only be restricted by lawmakers, not at the whim of influential corporate leaders. The conclusion drawn here, however, is a different one: deplatforming should be rejected, at least insofar as it is carried out by the social media companies themselves. After all, freedom of expression on social networks has also led to very desirable developments such as the Arab Spring and should therefore not be touched.

Alternatives to company-driven deplatforming

First of all, scientific evidence suggests that deplatforming really does work. A 2016 study showed that the mass deletion of accounts belonging to supporters of the Islamist terrorist organization ISIS led to a significant loss of digital influence. Another analysis found that, one week after Trump’s deplatforming, disinformation about election fraud in the U.S. had declined by 73%. And with a view to Germany, a further study suggested that deplatforming significantly limits the mobilization power of the far right.

In the search for alternatives to company-driven deplatforming, some good suggestions have been made. Many of them, however, address not so much the root of the problem (the creation and popularization of hateful content) as the mere alleviation of symptoms. These suggestions include the Santa Clara Principles on content moderation. Some items from this list, such as the right to object to unlawful deletions, have already been adopted by EU and German legislators. In addition to YouTube and Twitter, the principles are also endorsed by Facebook, yet none of the major platforms in the U.S. adhere to them except Reddit. So while social media companies in the U.S. are largely free to delete whomever they want, with no formal way to object to the decision, in Germany they are held accountable by the updated version of the Network Enforcement Act.

External platform councils, staffed by figures of great legitimacy such as Nobel Peace Prize winners, are also a good start in this regard, albeit one with room for improvement. Examples include the deletion advisory board that Google assembled to define its rules on the “right to be forgotten,” or the Facebook Oversight Board, which will decide whether to permanently suspend Donald Trump from the social network. The platforms have realized that the rules they set are enormously influential and that they need to seek legitimacy from outside because they do not have it themselves. However, these boards should not be staffed by the social media companies themselves. Moreover, in the case of Facebook’s Oversight Board, more than 25% of the members are U.S. citizens, a composition that is hardly representative for a globally operating company.

We need to talk

…because even if those approaches are good first steps, they only treat the symptoms, not the problem itself. The problem is the algorithms that give social networks their character as fear mongers. The corporate secrets of Twitter and Facebook that threaten democracy, namely the algorithms that curate individual social media feeds and, for business reasons, primarily promote fear- and anger-laden messages, have so far been untouchable. Admittedly, the EU Commission’s Digital Services Act promises a better understanding through a transparency obligation for these algorithms. Still, a major hurdle to effectively regulating social networks remains the lack of knowledge about their internal decision-making and rule-making processes. At the same time, according to the lawyer and scholar Matthias Kettemann, intermediaries are so complex that legislators still lack the ability to regulate them adequately: because they fulfill many different functions, they fall through the cracks between legal categories such as privacy law, competition law, communications law, and, where they produce their own content, media law.

However, mere transparency is not enough. More important would be a genuine “democracy compatibility check” of social media recommendation algorithms. In addition, users should be able to escape filter bubbles via a new “real world mode” that shows their home feed without the automated recommendation function. Last but not least, users should also be able to pay for social networking services with money instead of data.
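To make the contrast concrete, here is a toy sketch of the two feed orderings discussed above. It is purely illustrative: the reaction signals and their weights are invented assumptions, not any platform’s actual ranking logic.

```python
# Toy illustration of engagement-weighted ranking vs. a chronological
# "real world mode". All signals and weights below are invented for
# illustration; real platform ranking systems are far more complex.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Post:
    text: str
    posted_at: datetime
    likes: int
    angry_reactions: int  # hypothetical outrage signal


def engagement_ranked(posts):
    # Invented weighting: angry reactions count three times as much as
    # likes, mimicking how interaction-optimized curation can let
    # outrage-driven content dominate a feed.
    return sorted(posts, key=lambda p: p.likes + 3 * p.angry_reactions,
                  reverse=True)


def real_world_mode(posts):
    # Chronological feed: newest first, no recommendation logic at all.
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)
```

In this sketch, a post with many angry reactions can outrank a newer, calmer post under engagement ranking, while the chronological mode simply shows the most recent post first.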

Ultimately, the social media platforms have created their own monster in Trump. Deplatforming is only the ultima ratio for correcting an undesirable development that has been destabilizing societies for years. It would therefore be more important to work on the causes: the algorithms, which are calibrated for interaction and amplify anger and fear more strongly than moderate and deliberative views.
