In May, EU member states adopted the Artificial Intelligence Act (AI Act), a comprehensive regulation governing the development and use of artificial intelligence (AI) within Europe. The AI Act aims to minimize the risks associated with AI technologies while creating a secure environment for research and development, thereby fostering innovation and maintaining the EU's competitiveness in this rapidly evolving field.
Specifically, the AI Act classifies AI applications according to their risk level. High-risk applications, such as those used in critical infrastructure or in sensitive areas like healthcare and law enforcement, are subject to strict requirements regarding transparency, data quality, and human oversight. The goal is to prevent misuse and protect the fundamental rights of EU citizens. Not all AI applications face these stringent rules, however: low- and minimal-risk applications may be used more flexibly, leaving room for creative and innovative developments.
Although the AI Act aims to promote innovation, it could also be perceived as a hindrance. The additional regulations may pose a challenge for startups and smaller companies, as compliance requires extra resources. On the other hand, a clear legal structure could also attract investment by providing security for companies and consumers. Ultimately, the impact of the AI Act will depend on how precisely the regulations are implemented and whether they can balance the protection of society with the promotion of technological advancements.
Europe-wide Uniform Framework
The AI Act aims to establish a uniform and binding legal framework across Europe. The regulation, for example, prohibits AI applications that pose unacceptable risks, such as social scoring or real-time biometric identification of individuals. Systems in high-risk areas, such as education, human resources, and law enforcement, are subject to strict safety requirements and cannot be placed on the EU market without complying with them. These requirements include risk management, human oversight, and the quality of training data.
Transparency is also a key point of the regulation. People should always be aware when they are interacting with AI, and AI-generated images and texts must in future be clearly labeled as such. Additionally, the regulation governs the handling of general-purpose AI models, such as the model underlying ChatGPT, which form the foundation for many generative AI applications. Such systems are not initially designed for a specific purpose but could later be integrated into a high-risk system. Requirements of varying strictness therefore apply to these models depending on the computing power used to train them, covering transparency, cybersecurity, and energy efficiency.
“The AI Regulation is pioneering: It represents the world's first attempt to ensure the safety of AI systems ex ante”, says Ruth Janal, Professor of Law at the University of Bayreuth and member of the IT Security, Privacy, Law, and Ethics Working Group of the Platform Learning Systems. She further explains that classifying a system as AI can be complex. By definition, these are systems with varying degrees of autonomy that do not operate solely on human-created rules. According to the recitals, knowledge- or rule-based applications should still fall under the regulation. “The specification and application to specific borderline cases are left to the judiciary”, Janal states.
Criticism of the AI Act
Some critics have taken issue with the uncertainties in the definition of AI. The same applies to the law's application: divergent testing mechanisms for determining whether an AI system complies with the law, for example, could distort competition. Others worry that the AI Act could stifle AI innovation because of the high cost of compliance, especially for small and medium-sized companies.
The AI Act enters into force 20 days after its publication in the EU Official Journal. The bans take effect after six months, while the regulation as a whole will not become applicable for two years. Member states are currently working on harmonized European standards specifying how the regulation is to be implemented in the various fields; these will apply uniformly in all EU countries. To implement the regulation at the national level, each member state will also designate at least one authorized testing body and one market surveillance authority.