The Prohibitions and Impact of the European Union's New Artificial Intelligence Regulations
By Krishna Garg and Vrinda Talwar (Research and Editorial Team)
The European Commission proposed a new EU regulatory framework on Artificial Intelligence in April 2021. The draft AI regulations were the first of their kind, attempting a horizontal regulation of AI. The proposed legal framework focuses on specific uses of AI systems and their associated risks. The Commission seeks to establish a technologically neutral definition of AI systems in European law, while at the same time laying down a classification of AI systems. The framework's distinguishing feature is that this classification follows a risk-based approach: systems posing an 'unacceptable' risk are prohibited; a range of 'high-risk' AI systems are authorized subject to restrictions and obligations prior to EU market access; and systems posing only 'limited risk' are subject to light transparency obligations. In this article, we endeavor to set out the impact of, and the challenges that may accompany, these new-age regulations, such as the need for strong enforcement and redressal mechanisms and democratic oversight of the design and implementation of AI regulation in Europe.
The Artificial Intelligence (AI) regulations of the European Union will play an instrumental role in spreading the economic and social benefits of this new-age technology across all sectors and segments. High-functioning sectors such as finance and manufacturing are especially likely to deploy AI in their business operations, with significant impact on climate change, mobility and agriculture. However, all is not rosy with AI. The technology has limitations in how it can be used, and the draft regulations accordingly set out prohibited uses of AI as well. Article 114 of the Treaty on the Functioning of the European Union (TFEU) provides for measures to ensure the establishment and functioning of a balanced European market. The primary objective is to ensure that internal markets develop harmoniously and that rules are in place to govern a balanced use of AI devices and technology.
The EU regulations classify the risks posed by AI into a pyramid of four layers.
- The Top Layer – These are the risks posed by AI that are deemed unacceptable and are completely prohibited. Title II, Article 5(1) of the Artificial Intelligence Regulations explicitly bans harmful AI practices that are considered a direct threat to people's safety, livelihoods and rights. AI systems that deploy harmful manipulative subliminal techniques, exploit specific vulnerable groups, are used by public authorities for social scoring, or perform real-time biometric identification in publicly accessible spaces for law enforcement purposes are completely barred from the EU market.
- The Second Layer – Title III, Article 6 of the EU AI Regulations seeks to regulate high-risk AI systems, whose use may adversely affect people's safety and fundamental rights. High-risk AI systems include those used as a safety component of a product subject to health and safety rules, such as toys, aviation, medical devices and cars. They further include AI systems deployed in eight specific areas: biometric identification and categorization of natural persons; management and operation of critical infrastructure; education and vocational training; employment and worker management; access to essential private and public services; law enforcement; migration and border control; and administration of justice and democratic processes.
- The Third Layer – The third layer comprises limited-risk systems, which come with transparency obligations. These are AI systems that interact with humans, emotion recognition systems, biometric categorization systems, and AI systems that generate or manipulate image, audio and video content. Under Title IV, these systems are subject to a limited set of transparency obligations.
- The Fourth Layer – All AI systems not covered in the lists above, which pose only low or minimal risk, may be developed and used in the EU without conforming to any additional legal obligations. Title IX of the proposed enactment does, however, envisage the creation of codes of conduct encouraging providers of non-high-risk AI systems to apply the requirements voluntarily.
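The four-layer pyramid above can be summarized as a simple mapping. The sketch below is purely illustrative: the tier names, example systems and their assignments are assumptions drawn from the categories described in this article, not a legal classification tool.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative rendering of the draft regulation's four risk tiers."""
    UNACCEPTABLE = "prohibited outright (Title II, Article 5)"
    HIGH = "allowed subject to obligations before EU market access (Title III, Article 6)"
    LIMITED = "allowed with transparency obligations (Title IV)"
    MINIMAL = "no additional obligations; voluntary codes of conduct (Title IX)"

# Hypothetical example systems, matched to tiers per the article's description.
EXAMPLES = {
    "public-authority social scoring system": RiskTier.UNACCEPTABLE,
    "AI safety component in a medical device": RiskTier.HIGH,
    "chatbot that interacts with humans": RiskTier.LIMITED,
    "AI-based spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```

The point of the pyramid structure is that regulatory burden scales with tier: everything not explicitly placed in the top three layers defaults to the minimal-risk tier with no new legal obligations.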
The European Union’s decision to carve out a bold regulatory direction on Artificial Intelligence inevitably raises questions about the impact these regulations will have on trans-Atlantic agreements. Whether the United States of America will formally adopt the same risk-based approach to governance remains to be seen. Nevertheless, the new AI regulations are politically attractive and will play an instrumental role in political cooperation between different states.
The AI regulations will in all probability have extraterritorial reach: any AI system providing output within the European Union would be subject to them, regardless of where the provider or user is located. Individuals or companies located within the European Union, placing an AI system on the EU market, or using an AI system within the European Union would also be subject to the regulations. Enforcement measures include fines of up to €30 million or 6 percent of global revenue, making penalties even heftier than those for violations of the GDPR. The use of prohibited systems and violation of the data-governance provisions when using high-risk systems incur the largest potential fines. All other violations are subject to a lower maximum of €20 million or 4 percent of global revenue, and providing incorrect or misleading information to authorities carries a maximum penalty of €10 million or 2 percent of global revenue. Although enforcement rests with the member states, as with the GDPR, we can expect the penalties to be phased in, with initial enforcement efforts concentrating on those who are not attempting to comply with the regulation.
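The three penalty tiers described above can be illustrated with a short calculation. This sketch assumes, as under the GDPR's fining model, that the applicable cap is the higher of the fixed amount and the revenue percentage; the function name and tier labels are our own, not terms from the regulation.

```python
def fine_cap(tier: str, global_revenue_eur: float) -> float:
    """Maximum potential fine for a given violation tier, taken as the
    higher of a fixed amount or a percentage of global annual revenue
    (an assumption modeled on the GDPR's approach).

    Tiers follow the article's description:
      'prohibited'  - prohibited systems / data-governance violations
      'other'       - all other violations
      'misleading'  - incorrect or misleading information to authorities
    """
    caps = {
        "prohibited": (30_000_000, 0.06),
        "other":      (20_000_000, 0.04),
        "misleading": (10_000_000, 0.02),
    }
    fixed_amount, revenue_pct = caps[tier]
    return max(fixed_amount, revenue_pct * global_revenue_eur)

# For a firm with €1 billion in global revenue, the revenue-based cap
# dominates the fixed amount in every tier:
print(fine_cap("prohibited", 1_000_000_000))  # 60,000,000 (6% > €30M)
print(fine_cap("misleading", 1_000_000_000))  # 20,000,000 (2% > €10M)
```

For smaller firms the fixed amounts dominate, which is why the fixed caps matter even though large multinationals face far larger revenue-based exposure.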