India’s Strict Decision: AI Videos Must Now Show ‘Fake’ Mark on 10% Surface Area

India proposes world’s first measurable deepfake identification rule; draft IT amendments seek permanent AI labels on content, feedback open till Nov 6.


Crackdown on Deepfakes After Bollywood Case and Election Manipulation, Feedback Sought Till November 6

New Delhi: To curb the growing misuse of deepfake videos, the Indian government on Wednesday proposed strict IT rules. The biggest feature of these rules is that for the first time globally, a government has set a measurable standard for identifying AI-generated content—a label stating ‘Made with AI’ must be visible on at least 10% of the surface area of a video or image.

This step comes at a time when the country has recently experienced the havoc of deepfakes during elections and major Bollywood stars have filed crore-rupee lawsuits against Google-YouTube.

Historic ’10 Percent’ Rule

The amendment proposed by the Ministry of Electronics and Information Technology clarifies that the AI label must be visible across 10% of the screen in any video or image. For audio clips, the warning must be audible during the initial 10% of the clip's duration.

“This is among the first explicit attempts globally to prescribe a quantifiable visibility standard,” said Dhruv Garg, founding partner at public policy research firm Indian Governance and Policy Project.

He explained that if these rules are implemented, AI platforms operating in India will have to build automated labeling systems that mark content at the point of creation.
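To make the 10% threshold concrete, here is a minimal arithmetic sketch of the two checks an automated labeling system would need. This is hypothetical illustration code, not anything prescribed in the draft rules:

```python
def label_meets_area_threshold(image_w: int, image_h: int,
                               label_w: int, label_h: int,
                               threshold: float = 0.10) -> bool:
    """Check whether a rectangular label covers at least `threshold`
    (draft rule: 10%) of the image's surface area."""
    return label_w * label_h >= threshold * (image_w * image_h)

def audio_warning_window(duration_s: float, threshold: float = 0.10) -> float:
    """Length in seconds of the initial segment in which the
    AI-disclosure warning must be audible (draft rule: first 10%)."""
    return threshold * duration_s

# On a 1920x1080 frame, a 650x320 banner (208,000 px) clears the
# 10% bar (207,360 px); a 90-second clip must carry the audible
# warning within its first 9 seconds.
print(label_meets_area_threshold(1920, 1080, 650, 320))  # True
print(audio_warning_window(90.0))  # 9.0
```

The simplicity of the arithmetic is the point of the proposal: unlike vaguer "prominent disclosure" mandates, a fixed percentage gives platforms and regulators an objectively testable bar.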

Bollywood’s ₹4 Crore Lawsuit

This proposal comes at a time when Bollywood power couple Abhishek Bachchan and Aishwarya Rai Bachchan filed a ₹4 crore lawsuit in Delhi High Court against Google and YouTube in September.

Their petition states that a YouTube channel named “AI Bollywood Ishq” has posted more than 259 AI-generated videos, which have received over 16.5 million views. Many of the videos are “obscene” and “fictitious”, depicting Aishwarya with Salman Khan and showing Abhishek abruptly kissing an actress.

The Bachchan couple’s biggest concern is that under YouTube’s data-sharing policy, creators can give their videos to third-party AI platforms like OpenAI, Meta, and xAI for training. Their petition states, “If AI platforms are trained on such false content that portrays them negatively, AI models ‘will learn all this false information,’ leading to its further spread.”

Delhi High Court asked Google’s lawyer for a written response in October. The next hearing is scheduled for January 15, 2026.

Deepfake Rampage in Elections

Unprecedented use of AI-generated content occurred during the 2024 Lok Sabha elections. Political parties spent an estimated ₹1,30,000 crore, with extensive use of AI.

Fake videos of Bollywood stars Aamir Khan and Ranveer Singh went viral showing them endorsing political parties. Both denied the videos and filed police complaints.

The Dravida Munnetra Kazhagam (DMK) party created a deepfake video of former leader M. Karunanidhi, who passed away in 2018, showing him praising his son M.K. Stalin.

A manipulated video of Home Minister Amit Shah also went viral showing him talking about ending SC/ST reservations. BJP claimed it was made by Congress, though it later turned out to be merely edited, not fully AI-generated.

What Will Be Companies’ Responsibilities?

Under the proposed rules, major companies such as Meta, Google, OpenAI, and X (formerly Twitter) will have to:

1. Embed Permanent Metadata: Every piece of AI-generated content must carry a permanent and unique metadata tag or identifier that cannot be altered, hidden, or removed.

2. Obtain User Declaration: Social media platforms must obtain declarations from users about whether the content they’re uploading is synthetically generated.

3. Technical Verification Measures: Platforms must deploy “reasonable technical measures” to verify whether uploaded content is AI-generated.

4. Visible/Audible Labels: Labels or identifiers must be prominently displayed on at least 10% of the surface area for visual content, and during the initial 10% of the duration for audio.

5. Label Removal Prohibited: No intermediary will be allowed to modify, suppress, or remove these labels or identifiers.
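The “permanent and unique metadata” requirement in point 1 resembles existing content-provenance schemes such as C2PA. Purely as an illustration (a toy structure, not any format the draft mandates), an identifier record could be keyed to a cryptographic hash of the content so it is unique to the file:

```python
import hashlib

def make_provenance_record(content: bytes, generator: str) -> dict:
    """Toy provenance record for synthetic media, keyed by the
    content's SHA-256 hash so the identifier is unique to the file.
    Real deployments would use a signed, tamper-evident standard
    (e.g. C2PA manifests), not this simplified dictionary."""
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "synthetic": True,
    }

record = make_provenance_record(b"example video bytes", "demo-model")
```

A plain record like this can be stripped out; making it genuinely “permanent”, as the draft demands, requires cryptographic signing and robust watermarking, which is where much of the engineering cost for platforms will lie.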

Give Feedback Till November 6

The IT ministry has invited feedback/comments from the public and industry on these draft amendments till November 6, 2025.

The ministry stated, “Recent incidents where deepfake audio, videos and synthetic media went viral on social platforms have demonstrated generative AI’s potential to create convincing falsehoods depicting individuals in acts or statements they never made.”

How Big is the Threat?

The numbers are concerning. According to cybersecurity firm McAfee’s 2024 report, AI-powered voice and video scams saw a 900% increase globally between 2023 and 2024. India was identified as a major hotspot for this activity.

According to India’s National Crime Records Bureau (NCRB) 2024 data, online defamation and cyberstalking cases involving morphed images and videos increased 45% from the previous year, with women being the most affected victims.

India has approximately one billion internet users, making it the world’s second-largest online market. In the country’s diverse ethnic and religious landscape, fake news can incite violence, raising the stakes further.

India: Second Largest AI Market

OpenAI CEO Sam Altman said in February that India is their second-largest market and the number of users has tripled in the past year. India ranks second globally in ChatGPT usage.

This underscores both the pace of AI adoption in India and the accompanying risk of misuse.

Global Context: Following EU and China’s Footsteps

India’s move follows the European Union and China, where laws controlling AI-generated content have already been enacted.

However, experts say India’s “10% visibility standard” is the first quantifiable benchmark of its kind, making it distinct from other countries.

Challenge for Small Creators

While these rules are necessary, developing automated labeling systems may be expensive for small content creators and startups.

A tech entrepreneur said on condition of anonymity, “Big companies like Google and Meta have the resources. But for small YouTubers and independent creators, this compliance can become a burden. The government should consider this too.”

Tech Companies Haven’t Responded Yet

So far, OpenAI, Google, and Meta have not issued any official response to these proposed rules. It’s expected that these companies will provide their feedback by the November 6 deadline.

YouTube has clarified in its data-sharing policy that creators can share their videos with third-party AI platforms for training, but also stated: “We can’t control what the third-party company does if users do this.”

This is exactly the issue the Bachchan couple has objected to.

Experts’ Opinion: Step in Right Direction, But Enforcement Will Be Challenging

The Deepfakes Analysis Unit (DAU) of the Misinformation Combat Alliance reviewed hundreds of suspicious audio and video files during the 2024 elections.

DAU’s analysis revealed that complete deepfakes were fewer, but videos manipulated with synthetic audio tracks were more common. “These cheapfakes are a pervasive misinformation strain that needs to be combated,” DAU said in its report.

Cyber law expert Pavan Duggal said, “This law is a step in the right direction. But the real challenge will be in enforcement. AI tools are evolving every day. Detection systems will need constant updates.”

Other Celebrities Also Sought Protection

The Bachchan couple are not alone. In recent months, several other celebrities have filed petitions in Delhi High Court for protection of their personality rights, including Anil Kapoor, Hrithik Roshan, Akshay Kumar, and Kumar Sanu; even the Tata Group has sought protection for the late Ratan Tata’s legacy.

The Road Ahead

These amendments are to be made in the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The ministry has said these aim to “promote user awareness, enhance traceability and ensure accountability while maintaining an enabling environment for innovation in AI-driven technologies.”

In the coming weeks, responses will come from tech companies, civil society organizations, and the public. After that, the government will prepare final rules.

One thing is certain: the era of AI regulation has begun in India. This isn’t just the responsibility of technology companies, but an important issue for every citizen of digital India.
