The EU’s New AI Regulations – Direct challenge to Silicon Valley Standards
By Shreya Rai and Rishabh Moon (Research and Editorial Team)
The European Union, on 21 April 2021, issued a proposal for harmonized rules on Artificial Intelligence, laying down stringent rules for certain areas of AI and banning certain other uses outright. However, the regulations are riddled with loopholes and omissions that can prove detrimental to industries and users alike, and they do not regulate Big Tech at all, a major source of concern since Big Tech employs the most advanced AI technology.
Furthermore, the proposed regulations are in direct conflict with Silicon Valley's ethos of independent technology and minimal government intervention. This article analyses the loopholes and gaps in the regulations and their likely impact on related industries, especially those of Silicon Valley.
Europe has long been a leading player in proactively laying down regulatory frameworks. For instance, the General Data Protection Regulation, which came into force in 2018, was a foundational data protection law and a major piece of legislation protecting the rights and data privacy of European citizens; companies in international markets organically adopted it as a global standard. The draft AI Regulations have been formulated in the same spirit. However, they contain many loopholes and impose compliance obligations on providers rather than users. The regulations also contain no provision to inform people subjected to algorithmic assessments.
However, the biggest impact of these regulations is bound to fall on Silicon Valley industries. The rise of AI has kickstarted a new era in semiconductor innovation: in the bid to make microprocessors and CPUs more efficient and capable of handling heavier workloads, technologists are aiming to exercise the full potential of AI. This will prove monumental not only for the future of AI research and development but for the entirety of Silicon Valley. With their narrow ambit and strict requirements, however, the regulations are bound to have a debilitating effect on both sectors.
Necessity and Scope of Regulations
Even though AI is bound to form the backbone of the next technological revolution, and its role in the development of tech and related industries is indisputable, its risks to individual privacy and livelihoods cannot be ignored. According to Margrethe Vestager, executive vice president of the European Commission, “Artificial intelligence is one of the world’s most promising technologies, but it presents many dangers because it requires trusting complex algorithms to make decisions based on vast amounts of data. There must be privacy protections, rules to prevent the technology from causing discrimination, and requirements that ensure companies using the systems can explain how they work.” Consequently, a framework for the regulation of AI and its uses becomes essential.
The draft AI Regulations cover providers of AI systems and apply an “effects” test: irrespective of the provider’s location, if the “output produced by the system is in the Union”, the provider is subject to the regulations. The regulations also contemplate various facets of AI, such as machine learning and traditional statistical techniques, and divide AI systems into four risk-based categories:
- “Unacceptable Risk” under Article 5 – Certain specified types of biometric surveillance and social scoring are classified as “unacceptable risks” to the right to privacy and related human rights. Scoring people’s trustworthiness in one facet of their lives to justify damaging treatment in other, unrelated facets is thus completely banned.
- “High-risk” under Articles 6 and 7 – Annex III, read with Article 6, lists the AI systems deemed “high-risk”; deploying such systems requires extra safeguards because they can adversely impact people’s fundamental rights or safety.
- “Limited Risk” under Article 52 – Systems such as emotion recognition, biometric categorisation and deep fakes carry a clear risk of manipulation but do not require the extra safeguards imposed on high-risk AI systems; they are instead subject to transparency obligations.
- “Minimal-risk” – AI systems in this category can operate in the EU subject to existing law and need no additional safeguards.
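The four tiers above can be sketched as a simple lookup. This is an illustrative sketch only, not an implementation of the legal test: the tier names and article numbers follow the draft, while the example uses and their mapping to tiers are assumptions made here for illustration.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers in the draft EU AI Regulations."""
    UNACCEPTABLE = "banned outright (Article 5)"
    HIGH = "allowed subject to conformity assessment (Articles 6-7)"
    LIMITED = "allowed subject to transparency obligations (Article 52)"
    MINIMAL = "allowed under existing law, no extra safeguards"

# Hypothetical mapping of example uses to tiers, for illustration only;
# the actual classification turns on the Articles and Annex III, not labels.
EXAMPLE_USES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "deep-fake generation": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def obligations(use: str) -> str:
    """Return a one-line summary of the tier and its consequence."""
    tier = EXAMPLE_USES[use]
    return f"{use}: {tier.name} -> {tier.value}"
```

For instance, `obligations("spam filtering")` reports the minimal tier, while social scoring falls under the outright ban of Article 5.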
Rules for High-Risk AI systems
The draft regulations are built on the principle of proportionality. The logic behind the strict framework for high-risk AI systems is that high-quality data, documentation and traceability, transparency, human supervision, accuracy, and robustness are essential to alleviate the threats to rights and safety posed by AI that existing legal frameworks do not address. High-risk AI systems are allowed on the European market subject to compliance with mandatory requirements and a conformity assessment. Chapter 1 of Title III divides high-risk AI systems into two categories: AI systems intended to be used as safety components of products that are subject to ex-ante conformity assessment by a third party, and other stand-alone AI systems with primarily fundamental-rights implications that are explicitly listed in Annex III.
High-risk AI systems must comply with requirements such as ensuring the quality of the data sets used to train them, applying a degree of human oversight, keeping records for compliance checks, ensuring transparency, and making relevant information available to users. Further obligations apply to importers, distributors, users and other third parties, effectively covering and affecting the complete AI supply chain.
Enforcement and Governance
As discussed above, the regulations are not confined to the territory of the EU and instead adopt an effects doctrine. Beyond providers of AI systems located in the EU, the regulations have extraterritorial application: regardless of the provider’s location, any AI system that produces output in the EU is subject to them. The regulations contemplate their highest fines for violations involving high-risk systems: up to €30 million or 6% of global revenue, whichever is greater, stricter than the penalties established under the GDPR. The regulations also propose setting up a European Artificial Intelligence Board to assist member states in developing standards and enforcing the regulations.
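The fine ceiling described above is a simple maximum of two quantities. A minimal arithmetic sketch, assuming the figures stated in the draft (€30 million or 6% of global annual revenue, whichever is greater); the function name and sample revenue figures are illustrative:

```python
def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Upper bound of the fine for the most serious violations under the
    draft AI Regulations: EUR 30 million or 6% of global annual revenue,
    whichever is greater."""
    return max(30_000_000.0, 0.06 * global_annual_revenue_eur)

# A company with EUR 1 billion in global revenue faces up to EUR 60 million,
# since 6% of its revenue exceeds the EUR 30 million floor.
```

For smaller companies, 6% of revenue falls below €30 million, so the flat figure governs; the percentage prong only bites once global revenue exceeds €500 million.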
Impact on Silicon Valley Companies
The draft AI Regulations come in the wake of the GDPR, which forced many IT companies to conform to its norms to retain business in the EU; those norms gradually became the globally accepted standard for data privacy. The AI Regulations, however, contain gaps alongside extremely stringent penalties for non-compliance. A major loophole is that they do not adequately cover Big Tech companies, a source of widespread concern since Big Tech uses the most revolutionary AI systems. Furthermore, while the regulations justify the risk-based classification by reference to risks to fundamental and human rights, algorithms used in social media, online retailing, mobile applications and the like, which are suspected of serious violations of the right to privacy, are not treated as high-risk AI systems.
This disparity in the regulation of different classes of risks and systems, together with the absolute prohibition of certain AI systems, can prove detrimental to Silicon Valley companies targeting the EU market: these companies may need to overhaul their use of AI to ensure compliance if they are to continue providing their AI systems in the EU. Either way, given the role of AI in Silicon Valley, these regulations directly and severely affect it.
Although EU officials have stated that the regulations take a two-pronged approach, enabling the technology while ensuring it does not harm EU citizens, other parts of the world have taken an innovation- and research-friendly route, avoiding cumbersome legislation and policies in order to encourage Silicon Valley companies and take them to the next level.
In contrast to the EU’s regime of tight regulation even at the cost of innovation, the United States government, instead of focusing on drafting a framework for the regulation of AI, has emphasised allocating funds for AI research, with the White House budget proposal earmarking around $1.1 billion for the purpose.
There is no doubt that the regulations will prove a hurdle for Silicon Valley, and companies need to prepare for compliance with these norms. The disparate situations and approaches of international players in the field of AI may also prove a source of tension.