
Report: Social media is increasingly exposing Australians to the risk of eating disorders

Lax content moderation and recommendation systems, alongside poor ad approval systems, are creating “safety risks for end users”.

David Hollingworth
Tue, 26 Mar 2024

Research by the Australian arm of research and policy organisation Reset.Tech has exposed how poorly social media and internet platforms moderate dangerous content promoting eating disorders.

According to the Not Just Algorithms report, there are four key areas where platform systems put users at risk of developing eating disorders.

Rys Farthing, Reset.Tech’s director of children’s policy, says that despite the current level of exposure, the fixes are simple.


“Our investigation shows that platforms’ systems can create risks, but also that they can easily be redesigned in ways that make them safe,” Farthing said in a statement.

“We need strong, comprehensive regulations that make sure that platforms build their systems in safe ways in the first instance. It isn’t enough for a platform to have a policy against something, they’ve got to build their products and all of their systems to be safe in the first instance.”

Reset.Tech performed several experiments: setting up fake and “primed” accounts for a 16-year-old girl, reporting “explicitly pro-eating disorder content”, creating fake ads, and investigating how ad systems can help target users who interact with such content.

Each experiment was performed on TikTok, Instagram, Facebook, Google, and X – and while the overall results are poor, which platform fails hardest varies from test to test.

When it comes to content recommendation, TikTok came out commendably on top – none of its recommended content was classified by Reset.Tech as pro-eating disorder content. Twenty-three per cent of content recommended by Instagram was considered pro-eating disorder content, while X – formerly Twitter – was by far the worst.

Sixty-seven per cent of content that X recommended was pro-eating disorder content, and alarmingly another 13 per cent displayed imagery that Reset.Tech counted as self-harm-related.

The stats were similarly mixed for content moderation – TikTok was the best at dealing with reported content, though even it removed only 15.5 per cent of reported posts. X beat Instagram to the last spot, but only just: the two platforms removed 6 and 6.3 per cent of reported pro-eating disorder posts respectively.

Ad approval systems were tested on TikTok, Google, and Facebook, with TikTok ending up the worst culprit. The ByteDance-owned social media app approved every ad promoting “dangerous weight loss techniques and behaviours” that was thrown at it. The rest weren’t much better, however, with 83 per cent of ads getting approved on Google, and 83 per cent on Facebook.

Finally, Reset.Tech found that nearly every platform tested allowed some form of targeting of users who watch or search for pro-eating disorder content. In each case, an advertiser could use this capability to target a vulnerable cohort of children.

Reset.Tech believes the answer is stronger regulation: the Basic Online Safety Expectations under the Online Safety Act should be amended to require all such platforms to protect their users from harmful content, and a “duty of care” obligation should be introduced for social media and internet platforms.

“In light of these findings, it’s clear that stronger regulation is needed to ensure that platforms build safeguards in their systems without exception,” Farthing said.

“They cannot be allowed to pick and choose which safeguards they use, or which systems they protect, as this inevitably leads to patchy protection. We need stronger accountability and enforcement mechanisms including enhanced civil penalties and the ability to ‘turn off’ services which demonstrate persistent failures.”

You can read the full report here.

David Hollingworth

David Hollingworth has been writing about technology for over 20 years, and has worked for a range of print and online titles in his career. He is enjoying getting to grips with cyber security, especially when it lets him talk about Lego.
