AI-generated child abuse content must be blocked, says IJM Australia CEO

The rise of generative AI has sparked urgent calls for stronger protections in the fight against child sexual exploitation online.

Speaking in response to legislation introduced by independent MP Kate Chaney, International Justice Mission (IJM) Australia CEO David Braga stressed the need to block AI-generated child sexual abuse material before it can spread.

Mr Braga warned that much of this material uses the faces of real children or is created from known abuse images and videos. He said such content normalises abuse and can lead offenders closer to real-world harm. “Because the AI-generated child sexual abuse images and videos are now almost indistinguishable from abuse images and videos created through sexual abuse of children, it is a short step from AI-generated content to real-world content,” he said.

The government has committed to a “digital duty of care” for technology platforms, which would require companies to take reasonable steps to prevent foreseeable harms, including child abuse material, but the legislation is yet to be introduced. Mr Braga argued that AI-generated content must fall within its scope. He also called for the Online Safety Act to be strengthened so that all parts of the technology ecosystem, from operating system providers to device manufacturers, are required to detect and disrupt such material.

IJM, a global organisation working to protect people in poverty from violence, has seen firsthand the damage caused by online sexual exploitation of children. Its research in the Philippines reveals the country as a global epicentre for financially motivated child sexual exploitation material (CSEM) production, often facilitated via livestreaming. Alarmingly, offenders in Australia are among those paying to direct abuse in real time.

The scale is confronting. IJM’s prevalence study, conducted in partnership with the University of Nottingham Rights Lab, estimates that in 2022 some 471,416 Filipino children, around one in every 100, were trafficked to produce new CSEM, and that around 232,444 adults were involved in this abuse. These figures highlight the industrial scale of the crime, much of it hidden on encrypted platforms.

Communications strategies play a critical role in addressing this crisis. Clear public messaging, awareness campaigns, and targeted education can reduce demand, support survivors, and encourage reporting. In the age of social media, platforms have both the reach and responsibility to ensure harmful content is detected and removed swiftly. Public trust depends on transparent action and visible commitment to safety.

Ultimately, combating AI-generated child abuse material is not just a technical challenge but a communications imperative. By making the dangers understood, rallying public support, and holding technology providers accountable, it becomes possible to disrupt the cycle of exploitation and protect children from harm—both online and offline.
