We share the challenges that we face in tracking online hate and call for more effective content regulation policies.
By Jia Vern Tham & Khairil Ahmad | 27 September 2021
In the recently published Freedom on the Net 2021 report, researchers note yet another global decline in internet freedom, citing among other factors a “high-stakes battle between states and technology companies” over the past year.
At the heart of this battle is the growing pressure to address harmful content such as hate speech and extremism. There have been interesting developments in this area over the past few weeks. Abroad, the international community has witnessed the takeover of Afghanistan by a social-media-savvy Taliban. Meanwhile, at home and on the opposite end of the spectrum, activist Heidy Quah is suing the Malaysian government over a law that criminalises online content deemed “offensive” and intended to “annoy”.
These contrasting cases raise an important question: what really constitutes harmful online content? Social media platforms and policymakers have long struggled to define and characterise it, a fact noted by many researchers. The Counter Extremism Project (CEP), for instance, found that tech giants continue to fail to recognise the severity of online extremism, to draft and enforce their own guidelines, and to implement safeguards against the problem. These pitfalls are not unique to extremist content – other forms of harmful content, such as incitement to hatred, marginalisation, and violence, remain rife on the internet.
Purportedly banned content has managed to stay online via a range of evasion tactics, exposing a lack of comprehensiveness and agility in social media policies as well as in monitoring. Research has found that content from extremist groups was flagged and removed from social media platforms only about 38% of the time, contrary to the high removal rates claimed by companies such as Facebook and Twitter.
The Taliban, long designated as a violent group by major companies and banned from key social media platforms, is an interesting example. Since their takeover of Afghanistan, pro-Taliban accounts have reportedly grown in number across various platforms. The accounts’ content is typically written in local languages, and the writing and tone are subtle enough that the posts do not explicitly violate online content policies. The Taliban has also reportedly been hopping from one platform to another to evade bans. It is a case study in the growing sophistication with which harmful online content stays online — in some cases, long enough to incite offline violence.
On the other hand, certain speech policies, laws or controls are prone to misuse. In Malaysia, overly broad descriptions of harmful content have contributed to an overly punitive approach towards content and to the suppression of legitimate criticism, as exemplified most recently by the Heidy Quah case. After being charged over a Facebook post spotlighting alleged mistreatment of refugees at immigration detention centres, the activist is challenging the law criminalising “offensive” content as unconstitutional. The tagging of her Facebook post as “offensive” is debatable in itself, but equating “offensive” with “criminally harmful” is, in our view, overzealous and disproportionate.
For extremist or hateful content, current measures lack clarity and comprehensiveness. These are issues that we have encountered in our own ongoing work to develop The Centre’s proof-of-concept hate tracker, #TrackerBenci. Fairly serious posts containing localised insults (e.g. ‘Bangsa DAPig’, an anti-Chinese slur), rewritten slurs (e.g. Malay as ‘meleis’, an anti-Malay/Muslim slur), and vague threats (e.g. using the gun emoji) targeting specific groups have managed to stay online. Some of these include explicit threats of violence, such as wishing for the death of someone from a particular ethnic group. These posts have yet to be detected and flagged as harmful content by social media platforms’ monitoring algorithms.
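To illustrate why such posts slip past automated filters, here is a minimal sketch of the kind of normalisation-and-pattern-matching step a tracker might apply. Everything in it — the function names, the regex patterns, the emoji list — is an illustrative assumption for this post, not #TrackerBenci’s actual rules or any platform’s algorithm; a real system would need far richer lexicons and human review.

```python
import re
import unicodedata

# Hypothetical mini-lexicon of obfuscated terms, loosely modelled on the
# examples above ('DAPig', 'meleis'); illustrative only.
SLUR_PATTERNS = [
    re.compile(r"\bdapig\b"),     # localised insult fused into a party name
    re.compile(r"\bmele+i+s\b"),  # rewritten slur, including stretched respellings
]
THREAT_EMOJI = {"\U0001F52B"}     # gun emoji used as a vague threat

def normalize(text: str) -> str:
    """Lowercase and strip combining marks that can disguise a slur."""
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    return text.lower()

def flag_post(text: str) -> list[str]:
    """Return the reasons (if any) a post would be queued for human review."""
    reasons = []
    norm = normalize(text)
    if any(p.search(norm) for p in SLUR_PATTERNS):
        reasons.append("obfuscated slur")
    if any(e in text for e in THREAT_EMOJI):
        reasons.append("threat emoji")
    return reasons
```

The point of the sketch is the arms race it implies: each new respelling or emoji substitution requires a lexicon update, which is why purely rule-based filters lag behind the content they are meant to catch.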
Machines are imperfect, and so are the humans behind them. Containing harmful content will take continuous improvement of policies, in both enforcement and definitions. More resources and continuous learning are required to detect and curb cleverly written online posts that could endanger individuals or groups. Our initiative recognises the importance of this: as part of the process of developing #TrackerBenci, a diverse (though small) panel of locally informed researchers has sifted through thousands of tweets containing ever-changing terminology and misspelled phrases used to target hatred at specific groups in Malaysia.
It’s a different story, however, for so-called “offensive content”. Again, we take the example of tweets analysed for #TrackerBenci. Numerous tweets would likely be deemed “offensive” or “insulting” by the individuals or groups targeted, but are not necessarily dangerous, inciting, or deserving of criminal charges. Current understanding and policies on harmful online content therefore require more clarity and nuance, especially in distinguishing dangerous content from content that is merely “offensive”, or that constitutes legitimate criticism and dissent.
The question of what is too lax and what is too punitive in content regulation will likely remain with us for the foreseeable future. But this is the nature of dealing with harmful online content: constant discourse about what is harmful needs to happen in order to effect policy and legislative improvements. In the meantime, we continue the eye-opening work of perusing Malaysian tweets for our machine-learning hate tracker project – stay tuned.