India has moved decisively to regulate AI-driven content online. Under newly amended IT rules, social media platforms will be required to clearly label AI-generated or AI-altered content and to comply with sharply reduced takedown timelines.
The changes, notified through the Information Technology Amendment Rules, 2026, will take effect on February 20, 2026. The government’s focus is on transparency and faster response to harmful digital content, particularly deepfakes and synthetic media that can mislead users or damage reputations.
AI tools have advanced quickly, and so has their misuse. From manipulated political videos to fabricated celebrity clips, the line between authentic and artificial content is increasingly blurred. The new rules attempt to draw that line clearly.
Mandatory Labelling of AI Content
The most significant change is compulsory labelling. Any content that has been generated or substantially modified using artificial intelligence must carry a visible disclosure before it is published.
This applies to deepfake videos, altered images, cloned voices, and other synthetic formats. Platforms are expected to deploy technical measures to identify such content and ensure that appropriate labels are attached. These disclosures must remain intact and cannot be removed or tampered with once applied.
The objective is simple: users should know when they are viewing AI-created material rather than assuming it is real.
Three-Hour Compliance Window
Equally impactful is the reduction in takedown timelines. Previously, intermediaries were generally given up to 36 hours to act on lawful removal orders. Under the revised framework, that window has been reduced to three hours in many cases.
In matters involving highly sensitive content such as non-consensual intimate imagery or identity-based deepfakes, the expectation is for even faster action once flagged.
Grievance redressal mechanisms have also been tightened: platforms must now resolve user complaints within seven days, down from the earlier 15-day timeline.
Pressure on Platforms
The new compliance burden falls on major digital intermediaries, including global platforms operating in India. They will need to invest in content detection systems, moderation teams, and infrastructure capable of meeting accelerated response times.
For companies, this means higher operational costs and stricter oversight. For users, it could mean faster removal of harmful or misleading content and greater clarity around AI-generated media.
A Broader Signal
India is one of the world’s largest digital markets, and these amendments signal a more assertive regulatory approach to emerging technologies. The government appears determined to address AI-related risks before they escalate further.
Whether the rules strike the right balance between innovation and accountability will depend on how effectively they are implemented. What is clear is that the era of unlabelled AI content circulating freely is coming to an end.