AI Chatbots Embed Trackers: Privacy Risks and Legal Concerns Exposed by New Study

May 9, 2026
  • A study by IMDEA Networks Institute finds that popular AI chatbots—ChatGPT, Claude, Grok, and Perplexity—embed trackers from Meta, Google, TikTok, and others, exposing user conversations and activity.

  • The investigation identifies 13+ third-party trackers across the four chatbots, with none of the trackers clearly disclosed to users in plain language.

  • Legal implications are framed under GDPR, highlighting unclear legal bases for data exchange and insufficient user information, and urging stronger transparency, access controls, and data protection in generative AI.

  • Users lack transparency about data practices; privacy tools do not clearly reveal protection levels or data flows within AI systems.

  • Researchers warn that current privacy controls may mislead users about the protections they actually provide, and call for closer scrutiny by Data Protection Authorities; the study will expand to Meta AI, Microsoft Copilot, and Google Gemini, which were initially excluded because their parent companies act as both AI providers and advertising businesses.

  • The business model section describes mechanisms—cookies, hashed emails, and server-side tracking—that could link AI activity to real identities and build persistent user profiles for re-identification.
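To make the hashed-email mechanism concrete, here is a minimal sketch of how ad platforms commonly match users across sites: the email is normalized and hashed, producing a stable identifier that two parties can compare without exchanging raw addresses. The study does not specify the exact scheme these trackers use; SHA-256 over a lowercased, trimmed address is the widespread industry pattern and is shown here as an illustration.

```python
import hashlib

def hashed_email_id(email: str) -> str:
    """Normalize an email address and return its SHA-256 hex digest,
    the common scheme ad platforms use to match the same user across
    different sites without sharing the raw address."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# The same address always yields the same identifier, so any two
# services holding the hash can link their records about that user.
print(hashed_email_id(" Alice@Example.com ") == hashed_email_id("alice@example.com"))
```

Because the hash is deterministic, it works as a persistent cross-site identifier: deleting cookies does not break the link as long as the same email is used.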

  • The main threats include exposure of permanent conversation links to trackers, potential linking of interactions to identities, and privacy policies that don’t reflect actual data flows.

  • Practical steps for users: on Grok, limit conversation visibility and revoke shared links; on Claude, disable non-essential cookies to reduce Meta Pixel activity; on Perplexity, keep conversations private; on ChatGPT, reject cookies where possible, noting that Google Analytics may still run for free-tier, logged-in users.

  • Claude and ChatGPT show stronger access controls but continue transmitting conversation URLs and identifying data to Meta and Google; Claude relays tracking data server-side through Anthropic's own infrastructure, making browser-based ad blockers ineffective.

  • Perplexity has removed its Meta tracker, but other trackers remain; there’s no clear evidence trackers read conversations yet, though the infrastructure and permalinks could expose content.

  • Examples include Grok and Perplexity transmitting conversation URLs with weak access control to trackers like Meta Pixel, and Grok exposing message text in Open Graph data collected by TikTok.

  • Grok (Elon Musk’s xAI) is the most exposed, with guest conversations defaulting to public and readable content without login, while TikTok’s tracker captures verbatim conversation content via Open Graph metadata.
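The Open Graph exposure above can be illustrated with a short sketch: when a shared-conversation page embeds message text in its `og:description` tag, any service that fetches the page, whether a tracker script or a link-preview bot, reads the content verbatim and without authentication. The HTML below is hypothetical and only stands in for the structure described in the study, not Grok's actual markup.

```python
from html.parser import HTMLParser

# Hypothetical share-page HTML: og:description carries the user's
# message verbatim, readable by anyone who fetches the page.
SHARE_PAGE = """
<html><head>
<meta property="og:title" content="Shared conversation" />
<meta property="og:description" content="User: my confidential question..." />
</head><body>...</body></html>
"""

class OGExtractor(HTMLParser):
    """Collect all og:* meta tags, as a crawler or tracker would."""
    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            prop = a.get("property", "")
            if prop.startswith("og:"):
                self.og[prop] = a.get("content", "")

parser = OGExtractor()
parser.feed(SHARE_PAGE)
print(parser.og["og:description"])  # conversation text, no login needed
```

The point of the sketch is that no special access is required: the content sits in public page metadata, so any third party loading the URL captures it.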

Summary based on 2 sources

