About fortune cookies and freedom of speech

14 January 2021 | IT-law, Social Media

Chaos all around these days in Washington and all hands on deck on social media. Donald Trump is entering his final days in the White House and everyone will have noticed. Or won’t they? Will Twitter put a stop to it? And is that censorship? We outline the legal issues below.

Make America great again

We have got used to it by now: the President of the United States of America thinking out loud via tweets. For the past four years, Donald Trump regarded Twitter as the ideal medium for addressing the American people in his capacity as president. Social media platforms have evolved considerably in recent years and have become stricter about harmful content. Trump received occasional warnings and his tweets were often labelled untrue by Twitter, but his permanent ban last weekend came as something of a surprise.

Two days after the storming of the Capitol, Trump tweeted the following:

“The 75,000,000 great American Patriots who voted for me, AMERICA FIRST, and MAKE AMERICA GREAT AGAIN, will have a GIANT VOICE long into the future. They will not be disrespected or treated unfairly in any way, shape or form!”

Shortly afterwards, he was also kind enough to answer the million-dollar question:

“To all of those who have asked, I will not be going to the Inauguration on January 20th.”

With the events in Washington still fresh in mind, Twitter decided to suspend Trump’s account, first temporarily, then permanently. All his messages were taken offline. An official Twitter announcement stated that this was done because the tweets were highly likely to encourage and inspire people to repeat the criminal acts committed in the US Capitol on 6 January 2021. The decision was echoed by other major social networks, such as Facebook.

Legal framework

Twitter is a private company. In the US, private companies are in principle allowed to decide for themselves what they will and will not allow on their platform. Based on its own terms of service, Twitter therefore decided to delete the account. In other words, Twitter can interpret “incitement to violence or hatred” as it sees fit. Is that an act of censorship, and thus a violation of the fundamental right to freedom of expression?

In Europe, Article 10 ECHR protects freedom of speech. If we turn to the new Audiovisual Media Services Directive, we see that when moderating content (e.g. removing videos that violate the terms of service, such as porn), freedom of expression must always be respected, as must the pluralism of the media. Unfortunately, it is too early to see how the Court of Justice will draw this delicate line. The room for manoeuvre left to video-sharing platforms under this new directive is, incidentally, much the same: legal obligations apply to the providers, but the providers can still interpret those rules very strictly through their own terms of service.

One could rightly argue that this means content can be removed too quickly. Providers of online platforms are understandably wary of potential liability for content that incites violence or hatred. Logically, they prefer to intervene early.


A comparison can be made with Facebook’s decision to remove pictures and videos of Black Pete from its platform on the grounds that they are discriminatory. Facebook is perfectly entitled to do this, based on its own general terms and conditions. The criticism is that global players – which is what they have become in the meantime – such as Facebook or Twitter start to pass judgment on what is morally right and wrong. Again and again, the balance between adequate protection of platform users on the one hand, and guaranteeing freedom of expression on the other, turns out to be anything but self-evident.

You may ask yourself whether it is wise to give those providers so much leeway in interpreting the rules on their own platforms. But it is not as simple as that. Just think of the live stream on Facebook of the terrorist attack in Christchurch (New Zealand). National regulators set framework rules; the platforms elaborate on these and intervene themselves. Platform providers are best placed to decide effectively and at an early stage, without intervention from above, on content that appears online. The artificial intelligence that detects harmful content is managed by these companies, as are the teams of moderators who intervene when the computers fail (human-in-the-loop).


It is difficult to agree on all these issues. There will at least be consensus that freedom of speech and social media make for an uneasy combination. Apple and Google also decided to remove ‘Parler’, another favourite platform of Trump supporters, from their app stores. Does Big Tech have too much power? Decisions like these transcend the debate on freedom of expression, and even put competition law on the table.

Of course, it is easier to act against Trump now that he is being pushed to the exit. The timing of Twitter’s decision therefore raises some eyebrows; then again, perhaps it should not. Were those tweets really dangerous? Was there a significant risk of another assault? Judge for yourself. It may feel awkward that two billionaires in Silicon Valley take a ‘democratic decision’ on their own. However, the right to freedom of speech is not absolute. Twitter did not crack open a fortune cookie; it removed a political message in the interest of public safety. A message that could have been read by 88 million people.

Are you also confronted with dubious content, and do you doubt how to deal with it? Contact us at hallo@dejuristen.be !

Written by Emiel Koonen, Legal Adviser theJurists, and Kris Seyen, Partner theJurists
