India Tightens IT Rules: Mandatory AI Content Labeling and Stricter Data Protection Measures Announced
December 3, 2025
Public consultation drafts amending the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 propose mandatory labeling and watermarking of AI-generated or manipulated content, traceability requirements, and stronger due diligence for platforms enabling synthetic content.
The IT Ministry has also issued advisories to platforms under the existing IT Rules. The amendments aim to help users identify manipulated material by requiring labeling, watermarking, and traceability for AI-generated content.
The Data Protection Board of India, comprising a chairperson and four members appointed through a Search-cum-Selection Committee, will handle complaints, monitor compliance, and impose penalties for violations of the DPDP Act.
Enforcement of crimes involving AI content remains a state responsibility: state governments retain charge of public order and crime investigation, and police and other law-enforcement agencies are empowered to investigate and prosecute individuals who misuse social media or create harmful synthetic content.
Data protection expands under the Digital Personal Data Protection (DPDP) Act, 2023, which covers all forms of digital personal data, including personal photos, biometric details, and other information shared with AI platforms. The Act's Rules were notified on November 13, 2025, with the Data Protection Board overseeing processing and compliance.
The framework gives individuals greater control over how AI platforms handle their data and addresses concerns about deepfakes, manipulated images, and misinformation.
As part of the IndiaAI Mission, three projects, Saakshya, AI Vishleshak, and a Real-Time Voice Deepfake Detection System, have been selected to advance deepfake detection.
IndiaAI Mission, launched in 2024, funds deepfake detection and governance initiatives across IITs and state partners to promote safe, responsible AI use.
The selected projects involve IIT Jodhpur, IIT Madras, IIT Mandi, IIT Kharagpur, and the Himachal Pradesh Directorate of Forensic Services.
Safe & Trusted AI initiatives aim to strengthen detection of AI-generated manipulation and protect citizens from reputational and financial harm.
The DPDP rollout includes strict data-breach reporting and parental-consent norms as part of an 18-month implementation timeline.
Government advisories to social platforms on deepfakes and manipulated images were issued in late 2023, March 2024, and November 2025 to improve detection and removal under IT Rules.
The DPDP framework grants individuals enforceable rights over their digital personal data and imposes obligations on data fiduciaries that collect or process it.
The IT Rules amendments push platforms to act on unlawful content promptly: under the IT Rules (Amendment) 2025, platforms must remove or block access to such content within 36 hours of receiving government orders or court directions.
Clarifications address concerns about deepfakes, morphed images, and synthetic media that could harm individuals or mislead the public.
Summary based on 3 sources
Sources

Storyboard18 • Dec 3, 2025
Digital Personal Data Protection Act covers AI images, Centre reaffirms as Deepfake rules tighten
APAC Digital News Network • Dec 3, 2025
Govt Details How Current Laws Protect Personal Images and Data Shared with AI Apps