HMGUARD: New AI Framework Revolutionizes Harmful Meme Detection with 0.92 Accuracy

November 6, 2025
  • A new framework named HMGUARD uses adaptive prompting and chain-of-thought reasoning within multimodal large language models to detect harmful memes (a rough sketch of this kind of pipeline follows the list below).

  • The research was presented at NDSS 2025; online resources list the authors and presenter details and include a blog post summarizing the work.

  • The study analyzes the visual-art styles and propaganda techniques found in memes to explain why current detection methods fail and to guide the design of HMGUARD.

  • The article highlights how harmful memes spread on social media and emphasizes why detection is difficult: varied forms of expression, complex image-text compositions, propaganda strategies, and culture-specific context.

  • HMGUARD reportedly delivers strong performance, achieving 0.92 accuracy on a public harmful-meme dataset and outperforming baselines by 15% to 79.17%; in real-world use it reaches 0.88 accuracy, surpassing existing moderation tools.
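
The article describes HMGUARD's pipeline only at a high level, so the following is a rough illustration of how adaptive prompting combined with chain-of-thought reasoning over a multimodal LLM could be structured. Everything here is assumed for illustration: `query_mllm`, the triage and chain-of-thought prompt templates, and the risk categories are hypothetical placeholders, not HMGUARD's actual prompts or code.

```python
# Hypothetical sketch of adaptive prompting + chain-of-thought meme screening.
# `query_mllm` is a stand-in for any multimodal-LLM call; it is NOT HMGUARD's real API.

from dataclasses import dataclass


@dataclass
class Verdict:
    harmful: bool
    rationale: str


def query_mllm(image_path: str, prompt: str) -> str:
    """Placeholder for a multimodal-LLM call that takes an image plus a text prompt."""
    raise NotImplementedError("Wire this to an actual multimodal LLM client.")


# Step 1: a cheap triage probe that decides which specialized prompt to use
# (the "adaptive" part of adaptive prompting).
TRIAGE_PROMPT = (
    "Describe this meme's image content and overlaid text in one sentence, "
    "then name the single most relevant risk category: hate, misinformation, or none."
)

# Step 2: category-specific chain-of-thought prompts that walk the model
# through explicit reasoning steps before it gives a verdict.
COT_PROMPTS = {
    "hate": (
        "Reason step by step: (1) who is depicted or referenced, "
        "(2) does the text/image combination demean or target a group, "
        "(3) conclude HARMFUL or BENIGN with a one-sentence justification."
    ),
    "misinformation": (
        "Reason step by step: (1) what factual claim the meme makes, "
        "(2) whether the visual framing misleads, "
        "(3) conclude HARMFUL or BENIGN with a one-sentence justification."
    ),
    "none": (
        "Reason step by step about any implicit harm in the image/text pairing, "
        "then conclude HARMFUL or BENIGN with a one-sentence justification."
    ),
}


def classify_meme(image_path: str) -> Verdict:
    """Two-stage screen: triage the meme, then apply a category-specific CoT prompt."""
    triage = query_mllm(image_path, TRIAGE_PROMPT).lower()
    category = next((c for c in COT_PROMPTS if c != "none" and c in triage), "none")
    answer = query_mllm(image_path, COT_PROMPTS[category])
    # Naive verdict parsing for the sketch; a real system would parse more robustly.
    return Verdict(harmful="HARMFUL" in answer.upper(), rationale=answer)
```

The adaptive routing step is the point of the sketch: a single generic prompt tends to miss category-specific cues, so selecting a targeted chain-of-thought prompt per meme is one plausible way a framework of this kind could improve over fixed-prompt baselines.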



