India’s Minister of State for Electronics and Information Technology, Rajeev Chandrasekhar, unveiled the country’s strategy for artificial intelligence (AI) datasets at the Global Partnership on Artificial Intelligence (GPAI) Summit, held on Wednesday in Delhi.
Emphasizing the need for trust, Chandrasekhar outlined India’s plan to make its AI datasets available only to models deemed trustworthy.
The fireside chat, featuring key figures such as Hiroshi Yoshida of Japan’s Ministry of Internal Affairs and Communications and Viscount Camrose, the United Kingdom’s Minister for Artificial Intelligence and Intellectual Property, delved into crucial topics including AI safeguards, global agreements on AI principles, and the scarcity of computing capacity.
Necessity for an international dialogue
Minister Chandrasekhar articulated the need for a global dialogue on building models based on trusted datasets. He stressed the reciprocal nature of this approach, where trusted datasets are exclusively made available to trusted AI models.
This initiative aligns with India’s broader vision to leverage AI in sectors like healthcare, agriculture, governance, and language translation, as revealed in previous conferences.
Moreover, Chandrasekhar disclosed plans to build substantial AI computing capacity through a collaboration between the public and private sectors, a strategy aimed at overcoming the shortage of AI computing power.
In a parallel discussion, the Union Minister of State for Electronics and Information Technology urged swift international action on regulating AI.
India’s stance on AI regulation
Chandrasekhar, while meeting counterparts from the UK and Japan at the GPAI Summit, emphasized the urgency of reaching a consensus on governing AI within the next 6-9 months.
Chandrasekhar’s multifaceted agenda includes engaging with social media platforms to combat deepfakes, supporting Indian startups in navigating the AI landscape, establishing local computing facilities for AI model training, and formulating regulations to distinguish between beneficial and harmful AI.
In an exclusive conversation with Moneycontrol, Chandrasekhar shed light on the framework for judging trusted platforms seeking access to Indian datasets.
He highlighted the significance of the Digital Personal Data Protection (DPDP) Act, 2023, the India Datasets platform, and the Digital India Act (DIA) in ensuring responsible data usage.
According to the minister, personal data must be used only for the specified purpose, and AI models trained on Indian datasets should adhere to stringent trust criteria.
Chandrasekhar also addressed concerns about AI models trained on publicly available data like tweets and blog posts, emphasizing the need for stringent regulations to prevent untrusted models from accessing and using such information.
Furthermore, the minister discussed the government’s approach to regulating AI, indicating that a global agreement is essential. He stressed the need for a comprehensive definition of what constitutes safe, trusted, and harmful AI and emphasized the importance of expeditious decision-making, considering the current inflection point in AI advancements.
In response to the growing threat of deepfakes, Chandrasekhar outlined a two-pronged approach involving advisories and potential amendments to the IT Act.
He emphasized the government’s commitment to ensuring compliance with rules and protecting citizens from the misuse of AI technologies.
Lastly, Chandrasekhar addressed GPU compute capacity for AI, hinting at the creation of substantial capacity through a public-private partnership (PPP) model and the involvement of the Centre for Development of Advanced Computing (C-DAC).
While refraining from citing specific numbers, the minister assured that the GPU shortage is a short-term challenge that India’s approach is designed to overcome.
As India positions itself at the forefront of responsible AI usage, Minister Rajeev Chandrasekhar’s stance reflects the country’s commitment to harnessing the potential of AI for societal benefit.