Australia gives Twitter legal notice to clean up online hate content

Image: Twitter logo on a phone (Constanza Hevia/Contributor/Getty Images)

Twitter now has 28 days to respond to the legal notice from Australia and detail what it is doing to deal with online hate posted on its platform, which has drawn more complaints to the country's online safety regulator than any other service over the past year. Continuing violations could lead to daily fines of up to AU$700,000 ($474,670). 

The social media platform is the source of one in three complaints sent to Australia's online safety regulator, eSafety. 

Also: Micro-social media: What is it and which tools should you try?

Reports of online abuse on Twitter have also been climbing since Elon Musk took control of the company last October, according to eSafety Commissioner Julie Inman Grant. She noted that the spike in complaints coincided with Twitter's move to cut its global workforce from 8,000 to 1,500, which included its trust and safety teams. The company also removed its public policy presence in Australia. 

In addition, Musk announced a "general amnesty" in November, under which 62,000 banned or suspended users reportedly were reinstated to the platform, including 75 accounts with more than 1 million followers each, said Inman Grant. She also pointed to the reinstatement of previously banned accounts that "emboldened extreme polarizers, peddlers of outrage and hate," including neo-Nazis in Australia and overseas. 

While Twitter's current terms of use and policies prohibit hateful conduct on the site, the increase in complaints to eSafety and reports of hate content that remained on the platform indicate that Twitter is unlikely to be enforcing its rules, she noted. 

Also: US Surgeon General releases social media health advisory for American teens and tweens

Citing eSafety's own research, she said almost one in five Australians had experienced some form of online hate. 

"Twitter appears to have dropped the ball on tackling hate," Inman Grant said. "We need accountability from these platforms and action to protect their users. You cannot have accountability without transparency and that's what legal notices like this one [issued by eSafety] are designed to achieve." 

eSafety in February also served legal notices to several social media platforms, including Twitter, Google, TikTok, Twitch, and Discord, seeking answers on the steps each was taking to address child sexual exploitation and abuse, sexual extortion, and the promotion of harmful content by its algorithms.

Also: Do you use Snapchat's AI chatbot? Here's the data it's pulling from you 

The regulator said it currently was assessing the responses to these notices and would release more details in due course. Its regulatory powers fall under Australia's Online Safety Act, encompassing serious adult online abuse, cyberbullying of children, and image-based abuse. 

Before joining Australia's public sector, Inman Grant served corporate stints at Microsoft and Twitter; at the latter, she helped set up the company's policy and safety initiatives in Australia, New Zealand, and Southeast Asia. 

In a note published Wednesday on Tech Policy Press, she wrote: "Due to the spontaneous, open, and viral nature of Twitter, I once believed that no other platform held such promise of delivering true equality of thought and free expression... I was so convinced of the company's potential for positive social change that I went to work there in 2014. 

"Today, I oversee the independent statutory agency... [which] serves as a safety net, protecting Australians from a range of online harms when the platforms fail to act," she said. "And Twitter does appear to be failing: Failing to confront the dark reality the platform is increasingly used as a vehicle for disseminating online hate and abuse."

Also: How to get a Twitter blue checkmark (and other minor features)

Inman Grant said the legal notice sought to find out how Twitter was enforcing its online hate policies and how many accounts previously banned for hateful conduct had been allowed back on the platform.

"Without transparency about how Twitter's own rules are set and enforced, or how their algorithms or Twitter Blue subscriptions are further enabling the proliferation of online hate, there is a real risk bad actors will be allowed to run rampant on the platform," she said. 
