India’s Responsible Approach to the AI Revolution


The AI Revolution

The 21st-century world is driven by and centred on technology, laying the groundwork for an even more profound technological revolution. Artificial Intelligence (AI) has emerged as the new oil of international affairs, and its advancement is a key route to global superpower status.

India is fully aware of the importance and potential of developing and utilising AI. The technology seems to hold the key to transforming economies and societies, from operating vehicles and handling household tasks to bringing about revolutionary advances in engineering, medicine, and even diplomacy.

The computer scientist Amir Husain has gone so far as to call it “the enabler of everything we do”.

In a world where AI technologies will dominate, India’s national strategy for leveraging AI’s advantages will be essential to the country’s rise to power. With an impressive degree of AI adoption by Indian companies, the country has begun its journey towards becoming a global hub for AI and has been at the forefront of AI-related research and innovation.

AI adoption and development are now top priorities in India thanks to the government’s Digital India initiative, which aims to digitally transform the nation.

The problem, however, is that if artificial intelligence is not used responsibly, it can erode ethical norms through, among other things, mass surveillance, data leaks, and the monitoring and replication of social biases.

Some of the horrifying potentialities of blindly advancing artificial intelligence are the development of lethal weapons powered by AI, the loss of human influence and prowess in the real world, and eventually the much-feared domination of Artificial Superintelligence over humans.

To achieve maximum utility without endangering citizens’ constitutional rights, AI must be used responsibly. This is especially true for citizens’ rights to data protection and informational privacy, which are essential components of the fundamental right to privacy.

It is essential to eliminate the risks and hazards associated with the advancement of AI in order to limit its use to improving citizen convenience and efficiency while promoting social inclusion and empowerment.

Path to AI Superpower Status

Developing a thriving and secure Indian AI ecosystem requires addressing issues with ethics, privacy, security, and algorithmic biases in line with the agenda itemised above regarding risk mitigation.

The government think tank NITI Aayog developed a national AI strategy, “AI for All”, emphasising that AI could significantly transform and advance five broad sectors: healthcare, agriculture, education, smart cities, and smart mobility.

The plan supports the application of AI for societal advancement, inclusive growth, and economic prosperity. It also emphasises the necessity of a coalition of ethical councils in AI research and development centres to handle security and ethical issues—a fundamental first step in resolving the matter at hand.

In addition, a 2018 report from the Union Ministry of Commerce recommended the establishment of a nodal agency to oversee, plan, and ultimately support the development of AI systems in India. The study placed a great deal more emphasis on the development of AI than it did on addressing and mitigating the almost inevitable risks of the technology, which accurately reflected India’s and, to some extent, the global community’s attitude towards AI.

Nevertheless, the Indian government introduced the draft Digital Personal Data Protection Bill in 2022.

It gives citizens legal authority over their personal data and the right to access basic information about how it is processed. A Data Protection Board supports the bill by overseeing the implementation and enforcement of its provisions.

However, the bill opens the door to potential surveillance and discriminatory threats by permitting the government to access personal data in broadly defined exceptional cases. Additionally, in 2020, NITI Aayog unveiled the Data Empowerment and Protection Architecture (DEPA), which aims to safeguard how data is shared and used across the nation.

DEPA gives people a way to manage their personal data and gain digital empowerment. The threats of the future remain unclear, though, and further technological developments in AI will only heighten the risks.

Thus, exercising caution is crucial. In line with this perspective, the Ministry of Electronics and Information Technology (MeitY) formed several committees in 2019 to investigate the primary methods of utilising AI in India, while also emphasising cybersecurity, legal, and ethical considerations. Thus far, India has benefited greatly from the application of AI, from bridging its linguistic diversity through machine translation to improving administrative efficiency through facial recognition technologies.

But even well-intentioned machine learning systems can have unfavourable effects: unauthorised access, data breaches, and software bugs could quickly turn these success stories around.

Furthermore, if data-driven decision-making via machine learning is not carefully designed and regulated, it may become biased and exacerbate existing social inequalities. AI bias and discrimination mirror the unfairness of the society we live in, because the data fed into these systems carries that unfairness with it.

Had AI been developed and deployed in a utopian society, it might not have needed regulation at all. In our far-from-perfect society, however, these issues can spiral into nightmares.
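To make the bias point concrete, the following is a minimal, hypothetical sketch of how a model trained on skewed historical data reproduces that skew. The loan-approval framing, the “group” attribute, and the numbers are illustrative assumptions, not drawn from any real Indian system.

```python
# Hypothetical illustration: a classifier trained on historically biased
# approval decisions learns and repeats the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)        # 0 or 1: a protected attribute (assumed)
score = rng.normal(600, 50, n)       # an otherwise neutral credit-style score

# Historical approvals were skewed: 40% of qualified group-1 applicants
# were denied regardless of their score.
approved = ((score > 580) & ~((group == 1) & (rng.random(n) < 0.4))).astype(int)

X = np.column_stack([group, score])
model = LogisticRegression(max_iter=1000).fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"predicted approval rate for group {g}: {pred[group == g].mean():.2f}")
# The model assigns group 1 a lower approval rate even at comparable scores,
# showing how unregulated data-driven decisions can entrench inequality.
```

The example is deliberately simple, but the mechanism is the same one the paragraph above describes: the model has no notion of fairness, only of the patterns present in its training data.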

Comprehensive Legislation

Artificial intelligence ought to be regulated by strong laws that recognise and address the full range of risks arising from AI systems. In this situation, a policy that regulates AI systems through trial and error is probably the worst course of action.

India must set up flexible regulatory agencies to carry out comprehensive policies and exercise frequent oversight of AI developments. Lawmakers must establish guidelines and policies that compel private tech firms to answer for any losses brought about by their AI systems.

While creating regulatory policies, it is important to involve tech experts and major tech companies, because AI-related policy must be informed by organisations that understand the technologies. Granting them a subordinate advisory role would give private tech companies a voice in policy decisions while also binding them to the regulatory policies that are drafted.

In addition, government-backed task forces should be established to carry out risk assessments before frontier AI technologies are made available to the general public.

With AI developing at such a rapid rate, policymakers need to implement these changes as soon as possible; that urgency should shape the safeguards written into law. It is impossible to settle on fixed, specific policies for AI, because the technology encompasses a wide range of systems, each with its own function and risks that will continue to evolve quickly over time.

Policies must adapt to the changing role and pace of development of artificially intelligent systems. Like its benefits, the risks of artificial intelligence cross national boundaries, and India must work with other nations to harness the former and contain the latter.

The Global Partnership on Artificial Intelligence (GPAI) provides India with a perfect international forum to express its responsible AI objectives and collaborate with other like-minded nations to strengthen the nation’s democratic integrity by combining data protection laws with other aspects of AI governance.

India must view the AI revolution as a slow, methodical process rather than a global race to obtain cutting-edge technology. Right now, the threats posed by AI are more real than ever; this is no science fiction film. Before it is too late, India needs to lead by example by using AI technologies responsibly and preventing further risks from spreading. We need to build a society run by humans with AI in a supporting role, not the other way around.

I am a student pursuing a Master’s in Diplomacy, Law and Business at OP Jindal University. I have a keen interest in geopolitics, risk analysis and data visualization.
