How the far right is weaponising irony to spread anti-Muslim hatred

An unchecked racist internet subculture is growing online through caricature and satire

Messages written in chalk on a pavement in Christchurch on March 18, 2019, after 50 worshippers were killed in two mosque attacks. AFP / David Moir

The Christchurch terror attack prompted New Zealand’s cabinet to reform gun legislation in 72 hours, banning and recalling all military-style weapons, assault rifles and high-capacity magazines.

"On 15 March our history changed forever. Now our laws will too,” said Prime Minister Jacinda Ardern.

But guns are only the tip of the iceberg. A legal framework to safeguard against online radicalisation faces much more complex challenges.

The online history of Brenton Tarrant, the Christchurch shooter, shines a spotlight on the calculated efforts by the far right to use internet subcultures to stoke hatred and promote violence.

Before opening fire, the attacker said it was “... time to make a real-life effort”. This referred to a premeditated campaign of flooding the internet with content designed to create confusion and sow discord, including purportedly satirical pieces of media (“memes”) that act as a gateway to the more extreme elements of far-right ideology.

"All of these images entrench existing prejudices that people might already have about marginalised groups," Cindy Ma, who researches the use of irony by the far right at the Oxford Internet Institute in the UK, told The National.

Islamophobic memes include depictions of refugees as ill-intentioned scammers or predators, and images of veiled Muslim women presented as criticisms of Islam. One of their goals is to push the boundaries of what is acceptable to mainstream audiences – a concept known as the “Overton window”, or the “window of discourse”.

New online memes are produced every day on message boards such as 4chan and 8chan, which offer various forms of participation, a shared jargon and a sense of belonging. Some of these memes are directed at a mass audience and can be widely understood, while others contain implicit symbols that can only be decoded by members of the subculture.

This is the case with the hand gesture made by the shooter at the Christchurch District Court in New Zealand (an "OK" sign associated with white supremacy) and the song Remove Kebab (played by the attacker during the live broadcast and described as a "memetic euphemism for ethnic cleansing").

According to the Oxford researcher, the internet subculture the attacker belonged to lies dangerously at the intersection of two phenomena: a long-brewing resentment against globalisation and multiculturalism, and the emergence of a digitally native young cohort that wants to rebel against “polite society”.

“I think there is real danger to the mental health of people who participate in these forums,” Ms Ma said. “We have seen the way in which the shooter was engaging in this meme-dense rhetoric, the sense that he was doing it for the laugh, to engage in this kind of transgressive rule-breaking.”

A study on disinformation and radicalisation by Data & Society, a New York think tank, found that the dislike of “political correctness” is being exploited by the far right to radicalise the average young man – or, in jargon, to “redpill the normies”.

David Toube, director of policy at the anti-extremism organisation Quilliam in the UK, added that hiding this ideology behind satirical images provides its proponents with deniability.

"The rationale is to produce material which could be understood by those in the know to be racist, but in a way which makes those fussing about it appear ridiculous," Mr Toube told The National.

“The effect is that these ideas become commonplace cultural references within meme culture.”

Mainstream social media platforms such as YouTube, Reddit, Facebook and Twitter have so far erred on the side of freedom of speech and failed to tackle white supremacist content as aggressively as Islamist content.

Dan Hett, a creative technologist and digital artist whose brother Martyn was killed in the 2017 Manchester Arena attack at the hands of a man radicalised in part by online content, argued in an op-ed this week that these tech companies must be held fully accountable.

“I’m often asked: what can the platforms do? There’s so much content, and moderating is so difficult, right? Wrong. I’m not having this for a second. I will say this with confidence: the platforms can do a whole lot, but aren’t doing it,” he wrote. “I know this because I’ve spent the past decade sitting in audiences at conferences listening to these people show off their impressive tech.”

Censored content is likely to mushroom again on other platforms, but “the average at-risk young person isn’t logging on to the dark web or frequenting relatively obscure, impenetrable forums – they’re trawling YouTube,” Mr Hett wrote.

Tech companies have argued that they cannot take an editorial stance and, indeed, distinguishing genuinely satirical content from that peddled by far-right groups may not be an easy task.

But despite the challenges, experts believe more should be done. “They [the companies] and we as a public need to grapple with the idea that algorithms are a form of editorialising, because they decide what videos to push to the front page,” Ms Ma said.

The dangers of irony-drenched messages, which offer a way for racism and Islamophobia to hide in plain sight, must not be underestimated, according to Mr Toube.

“It is easy to write off this sort of [message] as unimportant,” he said. “But when they're used by somebody who commits a massacre, it isn't proper to do so.”