Artificial intelligence (AI) models and generative AI models that are in any phase of testing or are unreliable in any way will need "explicit permission of the government of India" before they are deployed in India, India's Ministry of Electronics and Information Technology (MeitY) said in an advisory, as per reports. The advisory comes just days after some users found that Google's Gemini AI chatbot was responding with inaccurate and misleading information about the Prime Minister of the country.
According to a report by The Economic Times, the advisory was issued on March 1 and companies were asked to comply with it going forward. Firms that have already deployed an AI platform in the country were asked to ensure that "their computer resources do not permit any bias or discrimination or threaten the integrity of the electoral process." MeitY has also reportedly asked AI platforms to add metadata to generated content that could be used to spread misinformation or create deepfakes.
Companies were also asked to add explicit disclaimers if a platform can behave unreliably and generate inaccurate information. Platforms will also have to warn users not to use AI to create deepfakes or any content that could affect elections, as per the report. While the advisory is not legally binding at present, it signals the direction of future AI regulation in India.
The issue of unreliability first arose when some users posted screenshots of Google Gemini giving inaccurate information about PM Narendra Modi. On February 23, the Union Minister of State for Electronics and Information Technology Rajeev Chandrasekhar responded on X (formerly known as Twitter), saying, "These are direct violations of Rule 3(1)(b) of Intermediary Rules (IT rules) of the IT act and violations of several provisions of the Criminal code."
The issuance of the advisory has drawn mixed reactions from entrepreneurs and others in the tech space. While some have welcomed the move, calling it necessary to curb misinformation, others have cautioned that regulation could hamper the growth of the emerging sector. Perplexity AI's Co-founder and CEO Aravind Srinivas called it a "Bad move by India" in a post.
In the same vein, Pratik Desai, founder of KissanAI, said, "I was such a fool thinking I will work bringing GenAI to Indian Agriculture from SF. We were training multimodal low cost pest and disease model, and so excited about it. This is terrible and demotivating after working 4yrs full time brining AI to this domain in India."
Responding to the criticism in a series of posts, Chandrasekhar highlighted that the advisory was issued in light of the nation's existing laws, which prohibit platforms from either enabling or generating unlawful content. "[..]platforms have clear existing obligations under IT and criminal law. So best way to protect yourself is to use labelling and explicit consent and if you're a major platform take permission from govt before you deploy error prone platforms," he added.
The Union Minister also explained that the advisory is aimed at "significant platforms" and that only "large platforms" will have to seek permission from MeitY; it does not apply to startups. He added that following the advisory is in the companies' best interest, as it offers them protection from users who could otherwise file lawsuits against the platform. "Safety & Trust of Indias Internet is a shared and common goal for Govt, users and Platforms," he said.