Silverer: New Data-Poisoning Tool to Combat AI-Generated Child Abuse Material and Deepfakes
November 9, 2025
AiLECS Lab, a collaboration between the Australian Federal Police and Monash University, is developing a data-poisoning disruption tool called Silverer to counter malicious AI-generated content, including child abuse material and deepfakes.
The tool works by subtly adding patterns to images before they are uploaded, causing AI models trained on them to learn the pattern rather than the original content, so that any outputs criminals generate are low-quality or unrecognizable.
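Silverer's actual algorithm has not been published; as a rough illustration of the general idea, the toy sketch below (function names and parameters are my own, not from the project) overlays a faint, fixed pseudo-random pattern on an image, the kind of perturbation that a model trained on many such images can latch onto instead of the real content:

```python
import numpy as np

def poison_image(image: np.ndarray, strength: float = 4.0, seed: int = 0) -> np.ndarray:
    """Toy data-poisoning sketch: overlay a faint, fixed pseudo-random
    pattern on an 8-bit RGB image. A model trained on many images
    carrying the same hidden pattern tends to learn the pattern
    rather than the underlying content. Illustrative only; this is
    not Silverer's algorithm, which has not been made public.
    """
    rng = np.random.default_rng(seed)          # fixed seed -> same pattern every time
    pattern = rng.uniform(-strength, strength, size=image.shape)
    poisoned = image.astype(np.float64) + pattern
    return np.clip(poisoned, 0, 255).astype(np.uint8)

# Example: a dummy 64x64 grey image stays visually near-identical
# after poisoning (per-pixel change is bounded by `strength`).
img = np.full((64, 64, 3), 128, dtype=np.uint8)
out = poison_image(img)
```

In practice, tools in this space keep the perturbation small enough to be imperceptible to people while still steering models that scrape the image; the low `strength` value here stands in for that trade-off.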
Officials say this approach can slow the spread of fake content and reduce the volume of material investigators must review, while acknowledging that multiple methods will be needed to counter misuse of AI.
AiLECS co-director Associate Professor Campbell Wilson notes that easy access to AI tools lowers barriers to creating harmful imagery, highlighting the need for proactive defenses.
The approach is aimed at curbing the creation of harmful material such as child abuse imagery, extremist propaganda, and deepfakes, reducing harm online and aiding law enforcement.
Silverer’s development reflects broader efforts to counter AI-enabled crime through collaboration among researchers, law enforcement, and community safety voices.
Officials envision a user-friendly tool for ordinary Australians, encouraging people to poison images at risk of manipulation to deter criminals from distorting reality with AI.
The AFP reports a global rise in AI-generated child abuse material and cites arrests in Australia and overseas for possessing or producing such content, underscoring the need for protective tools.
Officials and digital forensics specialists stress that while data poisoning is not a complete solution, deployed at scale it could slow the growth of malicious AI content, restore some of the barriers that easily accessible AI tools have removed, and let investigators spend less time sifting fake material and more time focusing on real victims.
The prototype, developed over twelve months by a team led by PhD candidate Elizabeth Perry, takes its name from mirror-making: just as a silverer slips silver behind glass, the tool slips a disruptive layer over real images to obscure them and mislead AI systems.
The long-term goal is to give the public an accessible way to safeguard images posted to social media from AI-driven manipulation.
AFP Commander Rob Nelson says data-poisoning technologies are still in their early stages but show promise for reducing the volume of fake material and aiding investigators by creating hurdles for misuse. He compares the approach to putting speed bumps on an illegal drag racing strip: no single method stops all misuse, but multiple hurdles can deter criminals.
Additional media materials and demonstrations are available via Monash’s media kit linked to the project.
In Australia, several individuals were charged in 2025 for possessing or producing AI-generated child abuse material, highlighting the urgency of Silverer.
The prototype is still in practical testing, with internal AFP deployment expected first before a wider rollout of easy-to-use protections for ordinary Australians on social media.
Wilson adds that the rapid growth of open-source AI has lowered the barrier for criminals to produce hyper-realistic deepfakes, and that tools like Silverer could help police focus on real victims.
Summary based on 7 sources
Sources

Monash University • Nov 10, 2025
Poisoned pixels: New AI tool to fight malicious deepfake images
Mirage News • Nov 9, 2025
AI Tool Battles Malicious Deepfake Images
Mirage News • Nov 9, 2025
AFP, Monash Uni Poison Data To Fight AI Crime
Cyber Daily • Nov 10, 2025
AFP, Monash University join forces to fight AI-generative crime