Italy's Antitrust Regulator Enforces AI Transparency, Targets Generative AI Hallucination Risks

April 30, 2026
  • Italy’s antitrust regulator is tightening transparency and consumer-protection requirements for AI services, policing unfair practices related to generative AI.

  • The AGCM’s action underscores its role in safeguarding consumers from misleading AI claims and in promoting responsible AI deployment with clear user awareness of AI limitations.

  • Firms must display permanent in-chat disclaimers about hallucination risks on their websites and apps, so users are clearly informed wherever they interact with the technology.

  • The companies involved are DeepSeek (China), Mistral AI SAS (France), and Scaleup Yazilim Hizmetleri Anonim Şirketi (Turkey).

  • Devdiscourse reported the developments on April 30, 2026, with input from authorities.

  • Investigations focused on potentially unfair practices tied to generative AI and the risk of hallucinations, which produce inaccurate or misleading content.

  • Italy’s AGCM closed the probes into three AI firms after they accepted binding commitments to address hallucination risks and improve real-time disclosures.

  • The regulator frames in-use warnings as a core consumer protection obligation to prevent harm from overreliance in high-stakes domains.

  • The closure emphasized transparency and consumer information as central to resolving the probes without penalties.

  • DeepSeek pledged to invest in technology to reduce hallucinations, while acknowledging current limits in preventing them.

  • Scaleup’s NOVA AI service will disclose its role as an aggregator of multiple models and explicitly state it does not process or aggregate user responses.

Summary based on 7 sources
