By Dr. Anup K Tiwari
Published on: November 08, 2023 at 11:10 IST
India is racing ahead to lead the charge in 6G technology, fueled by Prime Minister Narendra Modi’s vision for a tech-savvy India. At the recent India Mobile Congress, the Prime Minister highlighted India’s pivotal role in the 6G world, stressing the importance of telecommunications, technology, and connectivity.
The Prime Minister emphasized key elements like AI, cybersecurity, semiconductors, drones, and space exploration. However, in this enthusiastic pursuit, a critical need emerges: building a strong and reliable cybersecurity and AI framework. The rapid advancements in 6G call for a sophisticated framework that not only ensures connectivity but also safeguards data and privacy and guarantees the ethical use of AI. As India charges forward, the big question looms: how crucial is AI regulation in India’s journey towards 6G dominance?
The rapid advancement of artificial intelligence (AI) has undoubtedly revolutionized our world, bringing about unprecedented innovation and progress. However, as with any transformative technology, AI carries inherent risks that demand careful consideration and proactive measures to mitigate potential harm. In the absence of effective regulation, AI can be misused to manipulate, deceive, and even cause significant societal damage.
As part of India’s digital strategy, the Modi Government is actively considering the regulation of artificial intelligence (AI) to ensure the sound development and responsible use of this transformative technology. AI stands to offer numerous advantages, such as advancements in healthcare, enhanced transportation safety and sustainability, optimized manufacturing processes, and more affordable and eco-friendly energy solutions.
In this journey towards technological prowess, India has taken notable strides by mandating that social media platforms uphold stringent measures. These platforms are required to maintain a grievance redressal mechanism and adhere to a swift 36-hour timeframe for the removal of objectionable content. Failure to comply may lead to legal repercussions for the platform concerned. Despite these initiatives, the pressing need for a more comprehensive regulatory framework for artificial intelligence in India remains a critical and pivotal concern.
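The 36-hour window is, in effect, a hard compliance deadline running from the moment a platform is notified. As a minimal sketch of the check a platform’s compliance tooling might perform (the function names and timestamps here are illustrative, not drawn from any official rulebook):

```python
from datetime import datetime, timedelta, timezone

# 36-hour removal window prescribed for notified content (illustrative constant).
TAKEDOWN_WINDOW = timedelta(hours=36)

def takedown_deadline(notified_at: datetime) -> datetime:
    """Latest time by which flagged content must be removed."""
    return notified_at + TAKEDOWN_WINDOW

def is_compliant(notified_at: datetime, removed_at: datetime) -> bool:
    """True if removal happened within the 36-hour window."""
    return removed_at <= takedown_deadline(notified_at)

# Example: content flagged at noon UTC on 8 Nov must be gone by
# midnight UTC at the start of 10 Nov (36 hours later).
flagged = datetime(2023, 11, 8, 12, 0, tzinfo=timezone.utc)
print(takedown_deadline(flagged))  # 2023-11-10 00:00:00+00:00
```

Timezone-aware timestamps matter here: a deadline computed in naive local time could differ from the regulator’s reading by several hours.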
A recent incident involving Bollywood megastar Amitabh Bachchan highlights the growing threat of AI-powered deepfake technology. Bachchan flagged a morphed video of actress Rashmika Mandanna and voiced his concern, sparking outrage and raising alarm about the potential for AI to be weaponized for malicious purposes. This incident serves as a stark reminder of the need for robust AI regulation, particularly in India, where the technology is rapidly gaining traction.
The proliferation of pornographic content on social media platforms further underscores the urgency of AI regulation. AI algorithms can be exploited to generate, distribute, and even personalize explicit content, posing a serious threat to vulnerable individuals, particularly children. The lack of adequate safeguards can lead to the exploitation of individuals, the erosion of societal values, and the normalization of harmful content.
Global Approach on AI
Learning from the European Union’s proactive approach to AI regulation, India must take decisive steps to establish a robust regulatory framework that addresses the ethical, social, and legal implications of this powerful technology. The European Union’s Artificial Intelligence Act (AIA), set to come into force in 2024, represents a significant step towards ensuring the responsible development and deployment of AI. India should follow suit by enacting comprehensive AI legislation that aligns with international best practices.
Major Aspects of AI Regulation in India
To that end, India should establish a dedicated AI regulatory body tasked with developing and enforcing AI regulations tailored to the Indian context. These regulations should encompass various aspects of AI, including:
Data privacy and protection: Ensuring that citizens’ personal data is protected and used responsibly.
Algorithmic transparency and accountability: Requiring AI developers to disclose the workings of their algorithms and provide mechanisms for redress in case of algorithmic bias or discrimination.
Content moderation and takedown procedures: Establishing effective mechanisms to identify and remove objectionable content, particularly pornographic content, from social media platforms.
Public Awareness Programs: Raising awareness among the public about the potential risks and benefits of AI, promoting responsible AI use, and empowering individuals to protect themselves from AI-related harms.
India’s proactive stance in considering AI regulation aligns with global best practices, particularly mirroring the European Union’s approach. The EU’s AI Act serves as a benchmark for comprehensive AI governance: it outlines prohibited AI practices, categorizes AI systems by risk level, and delineates stringent regulations for high-risk AI systems.
The EU’s stance on prohibited practices encompasses a wide array of AI systems that exploit vulnerable groups, deploy manipulative techniques, or engage in socially intrusive practices. It seeks to prohibit biometric identification systems, predictive policing algorithms, emotion recognition systems, and AI systems utilizing indiscriminate data scraping for facial recognition databases. This stringent approach prioritizes the protection of individuals’ rights and safety.
Additionally, the regulation outlines strict guidelines for high-risk AI systems deployed in critical areas such as biometric identification, law enforcement, education, and border control management. By imposing regulations on these sectors, the EU aims to ensure that AI systems are developed and deployed with meticulous oversight and adherence to ethical standards.
India, similarly, should prioritize the establishment of a dedicated AI regulatory body responsible for formulating and enforcing regulations tailored to the Indian context. The framework should encompass crucial facets, including data privacy protection, algorithmic transparency, content moderation, and public awareness campaigns. Emulating the EU’s approach would ensure that AI in India operates under stringent ethical guidelines, prioritizing societal well-being.
The urgency for AI regulation in India cannot be overstated. With the proliferation of AI-based threats and potential misuse, the government must act swiftly to safeguard individuals’ privacy, rights, and overall societal integrity. By implementing comprehensive AI legislation, India can harness the transformative power of AI while mitigating its adverse effects, setting a global precedent for responsible AI governance.
The effective regulation of AI will not only foster innovation but also ensure that technology serves humanity without compromising individual rights or societal values. India’s strides toward AI regulation are crucial steps in safeguarding the nation’s digital future.
The European Union’s AI Act delineates crucial points for the regulation of artificial intelligence. It substantially modifies the list of prohibited AI systems, advocating for a ban on various systems in the EU. The EU Parliament proposes banning biometric identification systems for both real-time and retrospective use, except in cases of severe crime, and prohibits using sensitive characteristics for categorization. Predictive policing systems, emotion recognition systems in specific settings, and AI systems creating facial recognition databases via indiscriminate scraping of biometric data are also on the banned list.
Under the EU’s proposed AI Act, Title II (Article 5) targets “unacceptable risk” AI practices that pose a clear threat to people’s safety, livelihoods, and rights. This includes banning AI systems using harmful manipulative techniques, targeting specific vulnerable groups, engaging in social scoring by public authorities, and using ‘real-time’ remote biometric identification in public spaces for law enforcement purposes, except in limited instances.
Moreover, the regulation, under Title III (Article 6), focuses on ‘high-risk’ AI systems that impact people’s safety or fundamental rights. It categorizes these systems into two groups: those serving as safety components or falling under health and safety harmonization laws, such as in toys, aviation, medical devices, and those deployed in specific areas, including biometric identification, critical infrastructure management, education, employment, law enforcement, migration and border control, and democratic processes. The EU Commission holds the authority to update and identify additional high-risk AI areas as necessary.
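The tiered structure described above can be thought of as a lookup from an AI system’s use case to its risk tier. A simplified, illustrative sketch follows; the tier labels and the area lists are paraphrased from the Act’s Titles II and III as summarized here, not official identifiers, and a real assessment would of course turn on legal analysis rather than string matching:

```python
# Title II (Article 5): "unacceptable risk" practices, banned outright.
PROHIBITED = {
    "social scoring by public authorities",
    "harmful manipulative techniques",
    "real-time remote biometric identification for law enforcement",
}

# Title III (Article 6): "high-risk" deployment areas, allowed under
# strict oversight; the Commission can extend this list.
HIGH_RISK = {
    "biometric identification",
    "critical infrastructure management",
    "education",
    "employment",
    "law enforcement",
    "migration and border control",
    "democratic processes",
}

def risk_tier(use_case: str) -> str:
    """Classify a use case into the Act's broad tiers (simplified)."""
    if use_case in PROHIBITED:
        return "prohibited"
    if use_case in HIGH_RISK:
        return "high-risk"
    return "limited-or-minimal-risk"

print(risk_tier("education"))  # high-risk
```

An Indian regulator adopting a similar design could start from these tiers and swap in area lists suited to the domestic context, which is essentially what the emulation argued for above entails.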
The Indian government must recognize the urgency of AI regulation and act swiftly to establish a comprehensive framework that balances innovation with safeguards. By proactively addressing the potential risks of AI, India can ensure that this transformative technology is harnessed for the benefit of society, not its detriment.