Australia's eSafety Commissioner is set to receive sweeping new powers, such as the ability to order the removal of material that seriously harms adults, with the passage of the Online Safety Act looming.
Tech firms, as well as experts and civil liberties groups, have taken issue with the Act, citing among other things its rushed passage, the harm it could cause to the adult industry, and the overbearing powers it affords eSafety. Current eSafety Commissioner Julie Inman Grant has even previously admitted that details of how the measures legislated in the Online Safety Bill 2021 would be overseen are still being worked out.
The Bill contains six priority areas, including an adult cyber abuse scheme to remove material that seriously harms adults; an image-based abuse scheme to remove intimate images that have been shared without consent; Basic Online Safety Expectations (BOSE) for the eSafety Commissioner to hold services accountable; and an online content scheme for the removal of "harmful" material through take-down powers.
Appearing before the Parliamentary Joint Committee on Intelligence and Security as part of its inquiry into extremist movements and radicalism in Australia, Inman Grant said that while the threshold for the new take-down powers is quite high, they will give her agency a fair amount of leeway to look at intersectional factors, such as the intent behind a post.
"I think that the language is deliberately -- it's constrained in a way to give us some latitude ... we have to look at the messenger, we have to look at the message, and we have to look at the target," she said on Thursday.
The Act also will not apply to groups of people, only to individuals. The commissioner surmised this was to strike a balance with freedom of expression.
"To give us a broader set of powers to target a group or target in mass, I think would probably raise a lot more questions about human rights," she said.
She said it's a case of "writing the playbook" as it unfolds, given there's no similar law internationally to help guide the Act. Inman Grant said she has tried to set expectations that she isn't about to conduct "large scale rapid fire".
"Because every single removal notice or remedial action that we take is going to have to stand up in a court of law, it's going to have to withstand scrutiny from the AAT, from the Ombudsman, and others," she said. "So the threshold is high, it's really probably going to target the worst of the worst in terms of targeted online abuse."
Of concern to the commissioner is that social media platforms have vast access to all sorts of signals on their platforms, yet they often step in only when it's too late.
"I think what we saw with the Capitol Hill siege is it wasn't really until the 11th hour that they consistently enforced their own policies," she said. "So I think we've seen a real selective application of enforcement of some of these policies and we need to see more consistency."
AVOIDING WHACK-A-MOLE
She believes the BOSE will go some way to fixing that. Without setting these expectations, Inman Grant said she would be trying to energise her team to "play a big game of whack-a-mole".
On finding the same perpetrators using the same modus operandi to target others, Inman Grant said it's a prime example of where safety by design is so important.
"You're building the digital roads, where are your guard rails, where are your embedded seatbelts, and what are you doing to pick up the signals?," she said.
"I don't care what it is, whether you're using natural language processing to look at common language that might be used or IP addresses, there are a range of signals that they can -- they should be treating this like an arms race, they should be playing the game of whack-a-mole, rather than victims and the regulators."
The safety by design initiative kicked off in 2018 with the major platforms. Currently, eSafety is engaged with about 180 different technology companies and activists through the initiative.
Inman Grant called it a "cultural change issue", that is, tweaking the industry-wide ethos that moving fast and breaking things gets results.
"How do we stop breaking us all?," she questioned. "Because you're so quick to get out the next feature, the next product, that you're not assessing risk upfront and building safety protections at the front end.
"I mean, how many times do we have to see a tech wreck moment when companies -- even a startup company -- should know better."
The solution, she said, isn't the government prescribing technology fixes, but rather reinforcing a duty of care when companies aren't doing the right thing, through initiatives like safety by design. Inman Grant said the BOSE will, to a certain degree, force a level of transparency.
"We're holding them to account for abuse that's happening on their platforms, we're serving as a safety net, when things fall through the cracks, and we're telling them to take it down," she said. "Platforms are the intermediaries … the platforms [are] allowing this to happen, but we are fundamentally talking about human behaviour, human malfeasance, criminal acts online targeting people."
Inman Grant said eSafety is currently working with the venture capital and investor community, "because they're often the adults in the room", to develop an interactive safety by design assessment tool -- one for startups and one for medium-sized and large companies -- that should be made public within the next three weeks.
LIKE THE REAL WORLD, JUST DIGITAL
"It's only been 50 years since seatbelts have been required in cars and there was a lot of pushback for that. It's now guided by international standards. We're talking about standard product liability -- you're not allowed to produce goods that injure people, with food safety standards you're not allowed to poison people or make them sick -- these should not be standards or requirements that technology companies should be shunning," the commissioner said.
"The internet has become an essential utility … they need to live under these rules as well. And if they're not going to do it voluntarily, then they're going to have a patchwork of laws and regulations because governments are going to regulate them in varying ways."
Inman Grant said eSafety is engaging with the social media platforms every day, and has garnered an 85% success rate in the removal of non-consensually shared intimate images and videos.
"It tends to be what we would call the 'rogue porn sites' that are resistant to take down," Inman Grant said. "And of course, we see a lot of similarities in terms of the hosting services and the kinds of sites that host paedophile networks or pro terrorist or gore content."
She said eSafety saw a spike in all forms of online abuse over the COVID-19 period, but it wasn't for the reason many would think.
"We often talk about seeing a lot of child sexual abuse on the dark web, but we saw a lot more on the open web and out in the open on places like Twitter, Instagram, and Facebook -- up to 650% in some cases from the from the year prior," she said.
"It wasn't just that simplistic explanation that more kids were online unsupervised [and there were more] predators targeting them, that certainly did happen, but really what was happening is a lot of the companies have outsourced their content moderation services to third parties, and many of these are in the Philippines and Romania, in developing countries where these workers were sent home and couldn't look at the content."
She said the content moderation workforce being unable to view the content, combined with more people being online, created a "perfect storm".
"You saw some of the companies using more AI and analytic tools, but they're still really very imperfect. And almost all of the platforms that do use AI tools always use a portion of human moderation because it's just not up to par."