India has issued formal governance guidelines for AI for the first time. Under these rules, companies and developers must follow the ‘do no harm’ principle, meaning AI systems must not cause harm to anyone. The government’s focus is on ensuring that AI is used in a transparent, safe and human-centric way. India will also host a major global summit on the subject in February 2026.
On this occasion, the Principal Scientific Advisor to the Government of India, Ajay Kumar Sood, said that India will work on the principle of “Do No Harm”, meaning AI should not have any adverse impact on any person, community or the environment.
The government turns its attention to AI
IT Ministry Secretary S. Krishnan said that India’s AI framework will be human-centric: the purpose of AI is to assist humans, not to replace or harm them. The government wants AI technology to make people’s lives easier and to be used in a transparent and trustworthy manner. The new rules lay down 7 core ethical principles and 6 governance pillars for AI developers and companies, covering issues such as data privacy, prevention of bias, accountability and security.
Who prepared this framework
To prepare these guidelines, the government formed a special committee headed by Professor Balaraman Ravindran. The team included experts from institutions such as NITI Aayog, Microsoft Research India, IIT Madras and the iSPIRT Foundation.
India-AI Impact Summit 2026 also announced
The government also announced that the India-AI Impact Summit 2026 will be held in Delhi in February 2026, bringing together AI experts, policymakers and industry leaders from around the world. The summit will discuss how AI can be used responsibly and safely for the benefit of society.
At the same event, the winners of an AI hackathon organized by the IndiaAI Mission and the Geological Survey of India (GSI) were also honored. The winning teams presented new AI solutions for mineral exploration and resource mapping.