In the wake of the recent attacks in Manchester and London, British Prime Minister Theresa May has called on social media companies to eliminate “safe spaces” online for extremist ideology. Despite her party losing its parliamentary majority in the recent election, she is moving forward with plans to regulate online communications, including in cooperation with newly elected French President Emmanuel Macron.
May’s statement is just one of several initiatives aimed at “cleaning up” the internet. Others include Germany’s proposal to fine social media companies that fail to remove illegal content and the Australian attorney general’s call for laws requiring internet companies to decrypt communications upon request.
It is understandable to want to do something – anything – to help restore a lost sense of security. But as a human rights lawyer who has studied the intersection of human rights and technology for the last 10 years, I think May’s proposal and others like it are extremely concerning. They wrongly assume that eliminating online hate and extremism would reduce real-world violence. At the same time, these efforts would endanger rather than protect the public by curtailing civil liberties online for everyone. What’s more, they could involve handing key government functions over to private companies.
Weakening security for all
Some politicians have suggested that tech companies build “back doors” into encrypted communications to allow police access. But determined attackers will simply switch to apps without back doors.
And back doors would inevitably reduce everyone’s online safety. Undermining encryption would leave us all more vulnerable to hacking, identity theft and fraud. As technology activist Cory Doctorow has explained: “There’s no back door that only lets good guys go through it.”
The harms of speech?
May’s statement also reflects a broader desire to prevent so-called “online radicalization,” in which individuals are believed to connect online with ideas that cause them to develop extreme views and then, ultimately, take action.
The concept is misleading. We are only beginning to understand the conditions under which speech in general, and online speech in particular, can incite violence. But the evidence we have indicates that online speech plays a limited role. People are radicalized through face-to-face encounters and relationships. Social media might be used to identify individuals open to persuasion, or to reinforce people’s preexisting beliefs. But viewing propaganda does not turn us into terrorists.
If it isn’t clear that removing extreme or hateful speech from the internet will help combat offline violence, why are so many governments around the world pushing for it? In large part, it is because we are more aware of this content than ever before. It appears on the same platforms we use to exchange pictures of our children and our cats, which pressures politicians and policymakers to look like they are “doing something” against terrorism.
Overbroad censorship
Even if online propaganda plays only a minimal role in inciting violence, there is an argument that governments should take every measure possible to keep us safe. Here again, it is important to consider the costs. Any effort to remove only “extremist” content is destined to affect a lot of protected speech as well. This is in part because what some view as extremism could be viewed by others as legitimate political dissent.
Further, the exact same material might mean different things in different contexts – footage used to provoke hate could also be used to discuss the effects of those hateful messages. This is also why we are not likely to have a technological solution to this problem any time soon. Although work is underway to try to develop algorithms that will help social media companies identify dangerous speech, these efforts are in early stages, and it is not clear that a filter could make these distinctions.
The risks of private censorship
Trying to eliminate extremist content online may also involve broad delegation of public authority to private companies. If companies face legal consequences for failing to remove offending content, they’re likely to err on the side of censorship. That runs counter to the public interest in keeping censorship of speech to a minimum.
Further, giving private companies the power to regulate public discourse reduces our ability to hold censors accountable for their decisions – or even to know that these choices are being made and why. Protecting national security is a state responsibility – not a task for private companies.
If governments want to order companies to take down content, that’s a public policy decision. But May’s idea of delegating this work to Facebook or Google means shifting responsibility for the regulation of speech to entities that are not accountable to the people they are attempting to protect. This is a risk to the rule of law that should worry us all.
The way forward
There is, of course, online material that causes real-world problems. Workers tasked with reviewing flagged content risk harm to their mental health from viewing violent, obscene and otherwise disturbing content every day. And hate crimes online can have extraordinary impacts on people’s real-world lives. We need to develop better responses to these threats, but we must do so thoughtfully and carefully, to preserve freedom of expression and other human rights.
One thing is certain – a new international treaty is not the answer. In her June 4 statement, May also called on countries to create a new treaty on countering the spread of extremism online. That would simply invite censorship of online speech, beyond even what some nations already practice. Nations need neither additional incentives nor international support to crack down on dissidents.
Human rights treaties – such as the International Covenant on Civil and Political Rights – already provide a strong foundation for balancing freedom of expression, privacy and the regulation of harmful content online. These treaties acknowledge legitimate state interests in protecting individuals from harmful speech, as long as those efforts are lawful and proportionate.
Rather than focusing on the straw man of “online radicalization,” we need an honest discussion about the harms of online speech, the limits of state censorship and the role of private companies. Simply shifting the responsibility to internet companies to figure this out would be the worst of all possible worlds.
This article was originally published on The Conversation.