Church stabbing, Bondi Junction highlight X’s failure to moderate

Extreme conservative influencer Andrew Tate is among those using footage of the Wakeley stabbing to push hateful rants and conspiracies on X, days after a swell of misinformation following the Bondi Junction attack led to the vicious harassment of an innocent 20-year-old student. The episodes have prompted calls for the social media network to do more to moderate harmful posts.

The stabbing attacks have each been seized upon by certain communities on X to promote racist and antisemitic conspiracies and rants. In the Bondi Junction case, an infamous troll misidentified the suspect and made claims about his Jewish identity in a post viewed more than 400,000 times. In the Wakeley case, footage of a bishop being stabbed has been repurposed, republished, remixed and circulated endlessly.

Andrew Tate published a graphic, edited version of the Wakeley attack footage, as well as a related rant on “male rage” against atheists and Zionists.

Prime Minister Anthony Albanese on Tuesday morning acknowledged the damage that online platforms can cause in the wake of these crises.

“We remain concerned about the role of social media, including the publication of videos that can be very harmful, particularly for younger people who have access,” Albanese said. “Anyone with a phone essentially can do that.”


On Tuesday afternoon, eSafety Commissioner Julie Inman Grant said she had issued X and Meta (parent of Facebook and Instagram) with notices to remove footage of the Wakeley stabbing or face fines.

“While the majority of mainstream social media platforms have engaged with us, I am not satisfied enough is being done to protect Australians from this most extreme and gratuitous violent material circulating online,” she said.

“I will not hesitate to use further graduated powers at my disposal if there is noncompliance.”

X, formerly Twitter, has been widely criticised for its approach to harmful material since it was acquired and renamed by billionaire Elon Musk. In January, X provided information to eSafety showing that under Musk’s leadership it had sacked 80 per cent of its safety engineers and reinstated many accounts previously banned for hateful conduct, which Inman Grant described as a perfect storm for abuse. The platform has previously been fined by the regulator for failing to respond to its requests, and more recently said it would challenge eSafety in court over an order to remove abusive material.

X was contacted for comment and replied with an automated message saying “busy now, please check back later”.

Misinformation and the Bondi attack

Soon after the Bondi attack, in which six people were killed and several others injured, prominent accounts began sharing misinformation originally posted by Simeon Boikov – known on X as “Aussie Cossack” – that identified the perpetrator as a person named Ben Cohen. Some accounts shared photos of a 20-year-old student by that name, pulled from social media, while others spread wild antisemitic conspiracy theories. Cohen’s name was trending on X for at least 12 hours and the student’s family was inundated with messages. The misinformation was even briefly picked up and reported as fact by some mainstream media.

Dr Belinda Barnet, a senior lecturer at Swinburne University, said Twitter was never completely immune from inaccurate reports, but that its ability to amplify verified information declined drastically after Musk acquired it and intentionally tore down the processes that made it useful for disseminating news.

Elon Musk’s acquisition of Twitter is seen as a turning point in its attitude towards moderation.Credit: Bloomberg

“Almost every decision he’s made has made it more difficult to distinguish fact from fiction,” she said.

“He immediately dismantled the verification system. That was important because a lot of the media and journalists and people who fact-check information had blue ticks. It definitely wasn’t a perfect system but it was a useful identifier of trustworthiness.”

Under the current system, blue ticks are given to paying X subscribers or gifted by Musk to accounts of his choosing, with the platform’s underlying algorithm pushing posts from those users above all others.
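
As a rough illustration of how that kind of structural boost works, the sketch below is a toy in Python, not X’s actual ranking code: the multiplier and field names are invented for the example. It applies a flat score multiplier to posts from paying subscribers before the feed is sorted.

```python
# Toy illustration of subscriber boosting in a feed-ranking pass.
# Not X's actual algorithm: the boost factor and fields are invented.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    engagement_score: float      # whatever the base relevance signal is
    author_is_subscriber: bool   # holds a paid blue tick

SUBSCRIBER_BOOST = 4.0  # hypothetical flat multiplier

def rank_feed(posts: list[Post]) -> list[Post]:
    """Sort posts by score, with subscriber posts multiplied up so they
    tend to land above organically higher-scoring posts."""
    def score(p: Post) -> float:
        boost = SUBSCRIBER_BOOST if p.author_is_subscriber else 1.0
        return p.engagement_score * boost
    return sorted(posts, key=score, reverse=True)
```

The effect Barnet describes follows directly: once a paid badge, rather than verified identity, drives the boost, prominence in the feed stops being a signal of trustworthiness.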

Barnet said the changes to verification, combined with Musk’s moves to reduce moderation, reinstate accounts with histories of bad behaviour and suppress posts with links to news organisations, had a profound effect on Twitter’s usefulness during a crisis, making it much harder to quickly assess the credibility of reports.

“It’s made it significantly less reliable as a source of information and, in fact, more dangerous in terms of its potential to spread misinformation,” she said, pointing to the fact Cohen’s name was trending all night on Saturday and into Sunday.


“[Before Musk] Twitter would have stepped in to kill the trend, or moderate, or take the original tweet or piece of misinformation out. They had some stopgaps in place that are not in place now.”

Australia has a voluntary code on misinformation, designed to prevent the spread of harmful posts, to which Meta, TikTok, Apple, Google and Microsoft are signatories. Twitter was previously a signatory, but its participation also collapsed following Musk’s acquisition.

As part of the dismantling of safety features on the platform, X removed the ability for users to report misinformation, breaching the code. An independent committee investigating the breach reported that X promised to provide documents in its defence and to have an executive join a video-call discussion, but neither eventuated.

The Australian Communications and Media Authority (ACMA), which administers the voluntary code, is set to get expanded powers this year that would allow it to enforce a mandatory code or standard.

Live-streamed video and the church attack

When an armed assailant in Christchurch live-streamed himself opening fire in a mosque five years ago, it had an immediate impact on social media networks, which were just beginning to reckon with the moderation requirements of real-time video.

Facebook, Twitter and others founded a non-government organisation called the Global Internet Forum to Counter Terrorism (GIFCT), which monitors for violent extremist content and shares information to help platforms take action according to their own policies. This includes a hash-sharing database that allows all versions of a particular image or video to be blocked. Yet in some cases harmful images still proliferate.
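
In broad terms, the matching mechanism is simple to sketch. The example below is a toy Python illustration of perceptual hash matching, not GIFCT’s actual code: it uses a basic “average hash”, whereas the production database relies on more robust algorithms (Meta’s open-sourced PDQ and TMK+PDQF are among the hash types GIFCT supports), and the distance threshold here is an invented placeholder.

```python
# Toy perceptual-hash matching, the general idea behind hash-sharing.
# Assumes Pillow is installed; real systems use sturdier hash algorithms.
from PIL import Image

def average_hash(img: Image.Image, size: int = 8) -> int:
    """Shrink to a size x size grayscale thumbnail, then set one bit per
    pixel brighter than the mean. Re-encodes, watermarks and small edits
    tend to leave most bits unchanged."""
    small = img.convert("L").resize((size, size))
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_flagged(upload: Image.Image, known_hashes: set[int],
               threshold: int = 5) -> bool:
    """Platforms store only hashes of known violent footage; an upload is
    flagged when its hash falls within a small distance of a known one."""
    h = average_hash(upload)
    return any(hamming(h, k) <= threshold for k in known_hashes)
```

Because matching is approximate rather than exact, a re-encoded or lightly watermarked copy can still be caught, but heavily edited remixes of the kind circulating on X can drift far enough from the stored hashes to slip through.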

Bishop Mar Mari Emmanuel was live-streaming a sermon when he was approached and stabbed several times, meaning thousands of viewers potentially saw the event in real time. But within hours X was filled with many versions of the violent and confronting video, which could be viewed on demand. Some users discussed the actions of the attacker and parishioners in detail, some used the footage as evidence for speculative theories, and others edited it or remixed it with other clips of Bishop Emmanuel.

At the time of writing, not only is graphic video of the stabbing still easy to find on X, but many terms in the “trending” sidebar (which differs from user to user) lead directly to the videos, to hate speech, or to speculation about the attacker’s name, ethnicity, religion and motivation. Inman Grant has given X 24 hours to remove the footage.

On Facebook and Instagram, copies of the video are also floating around, though most are covered by a graphic content warning. A key difference between these and X is that searching on Meta’s platforms primarily brings up links to news sources, whereas X primarily brings up comments and videos posted directly to the platform. At the time of writing, video of the incident was difficult to find on TikTok.


Unlike the Christchurch live-stream, the stabbing video might not be considered “abhorrent violent material” under Australian law, since the footage was not taken by the perpetrator as part of their act of terror. That means there might not have been a legal obligation to remove it before Inman Grant’s order. But Zoe Hawkins, head of policy design at the Australian National University’s Tech Policy Design Centre, said X’s lack of action constituted a failure to protect its users.

“X should absolutely be doing more to moderate the broadcasting of graphic violence on their platform in order to comply with their obligations under Australia’s online safety codes, but also, importantly, to meet the community expectations of its Australian users,” she said.

“Unfortunately, X has already shown its willingness to disregard Australia’s existing online safety regulation. Australia needs to think creatively about how we recast the business incentives for social media companies, for example, by encouraging advertisers to vote with their feet or by building coalitions with other countries’ digital regulation agendas to create increasingly large market pressure for change.”
