India has introduced sweeping new rules requiring social media platforms to quickly remove illegal AI-generated content and clearly label synthetic media, placing global pressure on the technology industry to improve deepfake detection and transparency.
The measures, issued under amendments to the Information Technology Rules, will take effect on February 20, 2026, giving companies only days to comply.
Mandatory labelling and faster takedowns
Under the new requirements, platforms must deploy “reasonable and appropriate technical measures” to prevent users from creating or sharing unlawful synthetic audio, video or images.
If such content is not blocked, companies must ensure it carries permanent metadata or other technical provenance markers that identify it as AI-generated. Platforms must also:
- Require users to disclose AI-generated or edited content
- Verify those disclosures using automated tools
- Clearly label synthetic media so it is immediately recognizable
- Add audible disclosures to AI-generated audio where applicable
In addition, companies must remove illegal content—including harmful deepfakes—within three hours of detection or reporting, a sharp reduction from the previous 36-hour deadline.
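To make the provenance requirement above concrete, the sketch below shows one minimal way a platform could stamp a machine-readable disclosure into an image's metadata. It is only an illustration: it uses Pillow and the standard EXIF ImageDescription tag as a stand-in, and the disclosure string is hypothetical. The rules do not prescribe a format, and production systems would more likely rely on a signed standard such as C2PA, discussed later in this article.

```python
# Minimal sketch: embed a machine-readable "AI-generated" marker in EXIF.
# The EXIF ImageDescription tag is a stand-in here; real provenance
# systems (e.g. C2PA) use cryptographically signed manifests rather
# than plain, unsigned tags.
from PIL import Image

IMAGE_DESCRIPTION = 0x010E  # standard EXIF/TIFF tag ID for ImageDescription

def label_as_synthetic(src: str, dst: str) -> None:
    """Re-save an image with a provenance marker in its EXIF data."""
    img = Image.open(src)
    exif = img.getexif()
    exif[IMAGE_DESCRIPTION] = "AI-generated"  # hypothetical disclosure string
    img.save(dst, exif=exif)

def read_marker(path: str) -> str | None:
    """Return the embedded disclosure marker, if any."""
    return Image.open(path).getexif().get(IMAGE_DESCRIPTION)
```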
A major global test case
India’s scale makes the regulation particularly significant. The country has roughly one billion internet users and more than 500 million social media users, making it one of the most important growth markets for major platforms such as Google, Meta, and X.
Because of that scale, industry observers say the rules could influence deepfake moderation standards worldwide—either accelerating improvements in detection technology or exposing its current limitations.
Technology gaps remain
Much of the industry’s current labelling approach relies on the C2PA standard, developed by the Coalition for Content Provenance and Authenticity, which attaches cryptographically signed provenance metadata to content at the time of creation or editing. Several platforms already use the standard to apply AI labels.
However, the system faces major challenges:
- Metadata can be easily stripped during editing or uploading
- Many AI tools, including open-source models, do not include provenance data
- Labels applied by platforms are often subtle or difficult to notice
- Interoperability between services remains limited
India’s rules also require that metadata and labels be impossible to remove or hide, a technical hurdle that companies must clear quickly.
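That fragility is easy to demonstrate. The sketch below, a continuation of the earlier Pillow example, re-encodes a labelled image the way a naive upload pipeline might and shows that the marker does not survive. C2PA manifests, which are stored in their own file segments, are stripped by the same kind of re-encode unless a pipeline deliberately preserves them.

```python
# Sketch: a naive re-encode silently drops the embedded marker from the
# earlier example, illustrating why "permanent" metadata is a hard
# technical requirement.
from PIL import Image

def naive_reencode(src: str, dst: str) -> None:
    """Re-save an image without carrying metadata over, as many
    upload or thumbnailing pipelines do by default."""
    img = Image.open(src)
    img.save(dst, quality=85)  # no exif= argument: metadata is dropped

# Usage (with the labelling helpers from the earlier sketch):
#   label_as_synthetic("gen.jpg", "labelled.jpg")
#   read_marker("labelled.jpg")   # -> "AI-generated"
#   naive_reencode("labelled.jpg", "uploaded.jpg")
#   read_marker("uploaded.jpg")   # -> None: the disclosure is gone
```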
Digital rights groups have warned that the strict timelines could lead to excessive automated censorship. The Internet Freedom Foundation said the three-hour deadline leaves little room for human review and may push platforms toward “rapid-fire” removals to avoid liability.
Industry under pressure
Officials have acknowledged that provenance systems need only be implemented to the extent they are “technically feasible,” an implicit concession that current detection and labelling tools are still maturing.
Nevertheless, the timeline gives platforms, particularly those without established labelling systems, little time to put compliance measures in place.
With the new rules about to take effect, India is poised to become the largest real-world test of whether today’s deepfake detection and labelling technologies can operate at a national scale.
