The Indian government is considering establishing a national AI safety institute, MeitY secretary S Krishnan said at Microsoft’s Building AI Companions for India event in Bangalore. Speaking alongside Microsoft AI CEO Mustafa Suleyman, Krishnan emphasized the importance of an AI Safety Institute for India, noting that such institutes are a global trend and that India is in the process of setting one up to better understand the technology. Last month, MeitY held a meeting to discuss the institute’s objectives, budget, and framework.

Krishnan also stressed the need for balanced and proactive regulation, arguing that waiting for things to go wrong is not a viable approach. He noted that existing legislation has been effective in addressing issues such as misrepresentation and deepfakes, but with AI still in its early stages, there is much to be discovered in the coming years.

Suleyman, for his part, urged policymakers to prioritize the future trajectory of AI, particularly recursive self-improvement. He noted that predicting AI advancements is difficult and may eventually require a more interventionist regulatory approach.

The UK was the first country to launch an AI Safety Institute, and similar institutes have since been established around the world. Their goal is to advance the testing and evaluation of frontier AI systems for safety risks. Companies including OpenAI, Meta, Google DeepMind, and Microsoft have signed voluntary agreements with the UK AISI, giving the institute early access to their models. In the US, OpenAI and Anthropic have signed similar MOUs with the US AI Safety Institute. Ian Hogarth, chair of the UK government’s AI Foundation Model Taskforce, has described such agreements as a step towards cooperative global AI safety measures.

The focus on AI safety is crucial because it promotes trust, which in turn drives adoption and innovation.