In late 2025, the Supreme Court of India confronted a serious new problem: fake content created using artificial intelligence. This includes deepfake videos, morphed images, fake audio recordings, and even fabricated legal documents. Such content looks real but is not, making it difficult for courts to know which evidence can be trusted.
Two public interest cases brought this issue before the Court. One of them, filed by lawyer Aarati Sah, asked the government to create clear national rules to control the misuse of AI tools that can create fake photos, videos, and voices. The petition argued that these deepfakes are damaging people’s reputation, privacy, and dignity, and that this goes against the right to equality and the right to life under the Constitution.
The petition also pointed out that some High Courts have already stepped in to protect people who were harmed by deepfakes. It asked the Supreme Court to set up a team of experts, including technology specialists, legal scholars, and members of civil society, to help decide how AI should be used responsibly and ethically in India.
In a separate but related public interest case, lawyer Kartikeya Rawal asked the Court to create rules for how generative AI should be used within the judiciary. He warned that AI tools can sometimes make things up: they may invent facts, cite cases that do not exist, or confuse legal reasoning. This, he said, can be dangerous for the rule of law.
During the hearing, Chief Justice of India B.R. Gavai openly admitted that this is a real problem. He remarked, “We’ve seen our morphed pictures too,” pointing out that even judges have been targeted using fake or altered images created with technology.
The exchange highlighted a growing concern: if even courts and judges are affected by AI misuse, clear safeguards are urgently needed.
By early December 2025, the Supreme Court changed its approach. A Bench led by Chief Justice Surya Kant and Justice Joymalya Bagchi refused to pass binding rules on the use of AI through public interest petitions. The Court said that worries about misuse of AI in courts should be handled by government departments and court administrations, not by judges issuing strict orders.
The judges pointed out that the Union Government had already released draft AI rules and invited public feedback. These rules are meant to deal with problems like deepfakes and the responsibility of online platforms. Because of this, the Court allowed the petitioners to share their concerns and suggestions through these existing administrative processes.
Chief Justice Surya Kant also made it clear that AI is not being used freely or carelessly by judges. He said judges are cautious, and that lawyers must double-check any AI-generated cases or references before using them in court.
Overall, the Court took a balanced position. AI can help with research or administrative work, but it should not replace human thinking or influence how judges decide cases.
Supreme Court White Paper: Risks, Ethics, and Human Oversight
Behind these court decisions is serious work being done by the Supreme Court’s own research team. In late 2025, it released a White Paper on Artificial Intelligence and the Judiciary. The paper talks honestly about both the benefits and the risks of using AI in courts.
It warns that AI tools, especially generative AI, can make up false information, leak private data, or reflect social bias. It also points out that fake audio or video created using AI could threaten the reliability of evidence if courts are not careful.
Because of these risks, the paper strongly says that humans must stay in control. Judges and court staff should always check and verify anything produced by AI. AI should only be used to assist, for example by helping with legal research, summarising documents, or managing court work, not to decide cases or replace judges.
The paper also suggests practical safeguards: courts should have ethics committees to oversee AI use, rely on secure in-house AI systems, keep clear records of when and how AI is used, and strictly protect the confidentiality of case information.
Regulatory Context: Draft AI Rules and Intermediaries
Alongside what courts are saying about AI, the government has also stepped in. The Ministry of Electronics and Information Technology (MeitY) has proposed changes to India’s digital rules to deal with fake or misleading content created using AI.
These draft rules focus on what the government calls “synthetically generated information”, essentially AI-made content that looks real even when it is not. This includes deepfake videos, AI-generated images, and fake audio clips that can mislead people.
Under the proposed rules, online platforms would have to clearly label AI-generated content, add identifying information like metadata, and take responsibility for removing harmful or misleading AI content within a fixed time. Platforms would also be expected to show greater care in monitoring and managing such content.
These ideas are not entirely new. Similar rules already exist or are being planned in places like the European Union and China, where AI-generated content must be labelled so users know what they are seeing is not fully real.
If these rules are finalised in India, they could significantly change how social media platforms, websites, and apps deal with misinformation and deepfakes, pushing them to act faster and be more transparent when AI is involved.
Supreme Court, Evidence and Digital Forensics
Deepfake technology is creating serious problems for courts, especially when it comes to evidence. Earlier, courts checked digital evidence like videos or recordings by verifying where it came from and whether it had been tampered with. But now, with AI tools that can create fake videos, clone voices, or edit images realistically, it has become much harder to tell what is real and what is fake.
Because of this, courts may end up accepting false or manipulated material as evidence if proper checks are not done. The White Paper also warns about this danger and clearly says that courts must rely on careful human verification and strong forensic checks before trusting AI-generated content.
This issue is already appearing in real cases. The Delhi High Court, for example, has told online platforms that they must quickly remove deepfake content when complaints are made, so that victims do not have to run to court every time. The Court has also ordered the removal of fake AI-generated content that harms public figures, showing how urgent and serious the problem has become.
Legal and Policy Impact
These court cases and judicial responses will affect many areas:
Digital Evidence:
Courts will have to find better ways to tell what is real and what is fake online. This may include using special tools to detect AI-made images, videos, and documents so fake content is not treated as real evidence.
Duties of Lawyers:
Lawyers will need to be more careful. If they use AI tools for research, they must properly check facts and cases, because using fake or wrong AI-generated information can lead to ethical problems.
Responsibility of Platforms:
Social media and online platforms may be legally required to clearly label AI-made content and remove harmful fake content quickly, instead of letting it spread.
Government AI Policy:
The government will likely make stronger AI laws that balance innovation with public safety. This could include strict rules and punishments for creating and spreading deepfakes and harmful AI content.
Background on Judicial and Digital Content Regulation
India’s courts have not ignored the problem of deepfakes. Some High Courts, like those in Delhi and Bombay, have stepped in when fake AI-generated videos or images harmed public figures. They have ordered such content to be taken down. However, these court decisions are case-by-case fixes, not part of a single clear national law. That is why Public Interest Litigations (PILs) have asked the Supreme Court to create clearer, uniform rules for dealing with deepfakes across the country.
At the same time, past legal cases on digital content, such as Kunal Kamra v. Union of India, show that regulating online speech is not simple. These cases reveal a constant tension between protecting free speech and controlling harmful online content. Any future law on deepfakes or artificial intelligence will have to carefully balance these two concerns.
Conclusion
The Supreme Court’s approach to AI, deepfakes, and misinformation is careful but practical. It recognises that new technology can cause real harm, but it also understands that judges alone cannot control it. Instead of giving strict court orders, the Court prefers stronger rules, better administration, and responsible use of technology, while keeping humans in charge of important decisions.
As AI becomes part of daily life and legal work, India’s courts, lawmakers, and government are gradually working together. Their goal is a balanced system in which technology is regulated, democracy is protected, and innovation is not stifled, while justice and fairness always come first.