EU AI Act: Why do we need rules on AI?
Published: October 28, 2024
Topic: Insights
What is real and what is not? This seemingly philosophical question has become a pressing concern, even a potential threat. Could artificial intelligence, a technology rapidly integrating into our lives, be one of humanity's most significant challenges today?
Artificial intelligence has brought unprecedented opportunities and challenges across various sectors. AI's potential to change and revolutionize industries, from healthcare to transportation, is impressive. In a short period, AI has intertwined with our everyday lives, finding a place in households, workplaces, classrooms, and even artists' studios. For many people it has become a blessing, fitting easily into daily routines by saving time, handling repetitive tasks, and sifting through large volumes of data. Artificial intelligence is one of the most significant advancements of the modern day and a major achievement of human technology. On the other hand, AI also poses plenty of threats: privacy invasion, bias, discrimination, manipulation, and misinformation are only a few of them, and security looms large among these concerns. With AI spreading globally, someone needed to step up and take regulation in hand. The EU AI Act was presented as a solution to this regulatory gap. It entered into force on August 1st, 2024, so this week's blog is dedicated to discovering precisely what the Act brings.
The EU is not just in a race for technological leadership but also in a race to set the proper standards for AI. Without proactive regulation, there is a real risk that non-democratic actors will set AI standards, potentially leading to abuse. Authoritarian regimes could use AI for mass surveillance and control, while dominant tech platforms might exploit AI to gather extensive personal data, posing threats to democratic systems. In response, the EU has taken a proactive stance, positioning itself as a global standard-setter in AI and working to ensure that the technology is used responsibly and ethically. In April 2021, the European Commission proposed the first EU regulatory framework for AI, which laid the groundwork for the EU AI Act. With its global implications, the Act is both a step towards regulating AI and a component of the EU's broader digital strategy, alongside the Green Deal and the COVID-19 recovery plan. By setting standards in AI regulation, the EU has once again proven its importance in the technological realm, moving ahead of the US and China in this regard.
Key features of the EU AI Act
The EU AI Act embraces a risk-oriented approach to regulate AI, grouping AI systems into four risk levels:
- unacceptable,
- high,
- limited,
- minimal.
The level of regulation scales with the risk posed by each AI system.
- Unacceptable risk: AI systems that threaten safety, livelihoods, and rights are banned outright. This includes AI used for social scoring by governments and certain types of biometric monitoring, practices already seen in China.
- High risk: AI systems in critical sectors such as healthcare, transportation, and law enforcement face strict requirements, including rigorous testing, documentation, and human oversight. Giving AI unchecked autonomous control over such delicate and essential parts of daily life is considered exceptionally unsafe.
- Limited risk: AI systems with limited risk must comply with transparency obligations, such as informing users that they are interacting with AI. This category mainly covers chatbots and deepfakes.
- Minimal risk: AI systems with minimal risk are mainly exempt from regulation but must still adhere to principles of fairness and transparency, like video games that use AI to improve player experience.
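For readers who think in code, the four-tier scheme above can be summarized as a simple lookup table. This is a minimal illustrative sketch: the tier names come from the Act, but the one-line obligation summaries are simplifications, not legal definitions.

```python
# Illustrative mapping of the AI Act's four risk tiers to their
# headline regulatory consequences (simplified, non-exhaustive).
RISK_TIERS = {
    "unacceptable": "banned outright (e.g. government social scoring)",
    "high": "strict requirements: testing, documentation, human oversight",
    "limited": "transparency obligations (e.g. disclose chatbot interactions)",
    "minimal": "largely exempt; general fairness and transparency principles",
}

def obligation(risk_level: str) -> str:
    """Look up the headline obligation for a given risk tier."""
    return RISK_TIERS[risk_level.lower()]

print(obligation("high"))
```

The point of the structure is the same as the Act's: classification comes first, and everything else about an AI system's legal treatment follows from the tier it lands in.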
Ensuring safety, transparency, and accountability
The EU AI Act ensures AI systems' safety, transparency, and accountability. For high-risk AI systems, the Act mandates:
- Risk management: Organizations must implement risk management systems to identify and mitigate potential risks associated with AI.
- Data governance: High-risk AI systems must be trained on high-quality datasets to minimize biases and ensure accuracy.
- Transparency and explainability: AI systems must be transparent and explainable, allowing users to understand decision-making processes.
- Human oversight: High-risk AI systems must incorporate human oversight to prevent harmful outcomes.
- Robustness and accuracy: AI systems must be robust and accurate, with measures to address potential errors and vulnerabilities.
Focus on fundamental rights and social responsibility
The EU AI Act prioritizes the protection of fundamental rights and social responsibility. It seeks to prevent bias and discrimination, foster social and environmental responsibility, and ensure respect for basic rights. The European Parliament's priorities include:
- Ensuring AI systems in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly.
- Oversight by humans to prevent harmful outcomes.
- Establishing a technology-neutral, uniform definition for AI applicable to future AI systems.
- Emphasizing intellectual property rights, including patents and new creative processes for further AI development.
- Strengthening digital infrastructure and ensuring everyone can access services, including developing broadband, fiber, and 5G.
Regulatory sandboxes and global cooperation
The EU AI Act introduces regulatory sandboxes to foster innovation while ensuring compliance. These controlled environments allow organizations to test AI systems under regulatory supervision, providing a safe space for experimentation and helping regulators understand emerging technologies.
International cooperation is also a priority. The EU aims to work with like-minded partners to safeguard fundamental rights and minimize technological threats. This collaboration is crucial for setting global AI standards and preventing misuse by authoritarian regimes. The AI Act mirrors the GDPR in several respects and, like the GDPR, aims to establish an international standard, in this case for AI regulation. For companies doing business in cybersecurity, information governance, and eDiscovery, the AI Act aligns AI systems with strict privacy and data protection standards. For example, in eDiscovery, the Act requires that AI tools used for legal investigations adhere to transparency and ethical standards. The co-legislators agreed to “prohibit biometric categorization systems that use sensitive characteristics (such as political, religious, philosophical beliefs, sexual orientation, race); untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases; emotion recognition in the workplace and educational institutions; social scoring based on social behavior or personal characteristics; AI systems that manipulate human behavior to circumvent their free will; AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation)”.
Penalties for non-compliance
The EU AI Act includes strict penalties for non-compliance. Companies breaching the Act face fines based on their global annual turnover from the previous financial year or a predetermined amount, whichever is higher. Fines range from €7.5 million or 1.5% of global turnover for lesser infringements up to €35 million or 7% for the most serious ones. These substantial penalties highlight the importance of adhering to the regulations and of building a culture of trust and transparency in AI development.
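The "whichever is higher" rule can be made concrete with a short sketch. The caps and percentages below are the Act's published top tier; the function name and the example turnover figures are purely illustrative.

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Applicable maximum fine: the fixed amount or the percentage of
    global annual turnover, whichever is higher."""
    return max(fixed_cap_eur, turnover_eur * pct)

# Most serious infringements: EUR 35M or 7% of global turnover.
# A company with EUR 1 billion turnover: 7% = EUR 70M, exceeding the EUR 35M floor.
print(max_fine(1_000_000_000, 35_000_000, 0.07))   # 70000000.0

# A smaller company with EUR 100M turnover: 7% = EUR 7M, so the EUR 35M amount applies.
print(max_fine(100_000_000, 35_000_000, 0.07))     # 35000000.0
```

In other words, the fixed amount acts as a floor for large players and a ceiling-busting deterrent for small ones: no company can dilute its exposure by being too big, or escape it by being too small.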
The role of the European Parliament
The European Parliament plays a crucial role in shaping the AI regulatory landscape. It set up a special committee to analyze the impact of AI on the EU economy and proposed guidelines for military and non-military uses of AI. On January 20th, 2021, the Parliament presented guidelines for AI use in the military, law, and health. On October 6th, 2021, Members of the European Parliament demanded strong safeguards for AI tools used by the police, calling for a permanent ban on automated recognition of people in public spaces and for transparency of algorithms to counter discrimination.
With the adoption of this legislation, the EU has become the first major jurisdiction to implement comprehensive AI regulation. Still, a question arises: does this legislation recognize all the threats that certain aspects of AI pose to citizens’ rights?
For instance, a great deal of existing data is created by humans and should be accessible and protected as such. One task the Act still needs to pin down is the conditions for collecting personal data from publicly available sources for the mandatory training, validation, and testing of high-risk AI systems. There are cases, such as terrorism threats or child exploitation, in which law enforcement could use AI biometrics and facial recognition to find missing people or criminals faster. However, governments haven’t entirely banned the public use of facial recognition, which leaves room for significant security breaches and violations of human rights. Also, while the EU AI Act is meant to protect all citizens’ rights, some groups, such as migrants, remain underprotected.
Addressing AI’s environmental footprint
When it comes to ecology and climate change, AI applications can benefit the environment, but AI also poses a significant problem: the massive quantities of electricity and water needed to train large-scale AI models. One forecast suggests that by 2027, AI models might consume as much energy in a year as a country the size of Argentina or the Netherlands. The EU AI Act must recognize and act upon that environmental threat. The AI industry must progress in an environmentally sustainable way, with a stricter approach to managing its systems’ ecological footprint.
Investing in AI
The EU has invested a lot of time and effort in AI legislation, but economically it still needs to catch up. The EU overlooks a critical ingredient of success in the AI field: public investment. While the US, China, the UAE, and Australia are investing large sums in domestic AI development, the EU has yet to recognize its importance. This suggests the EU hasn’t learned much from past mistakes, which could cost it its geostrategic position on the global political and economic map. The EU became dependent on other countries such as Russia, the UAE, China, and the US for energy, oil, and essential trade, and it lost a great deal of geopolitical power in the last decade. Taking hold of AI legislation and investing in the field could help the EU regain some of that power and return it to a leading position in geostrategic terms; failing to do so could cost the EU the position it craves. The AI Act should therefore be accompanied by significant investments in AI development and research. Infrastructure, talent retention, and production need EU funding if the bloc is to assert dominance in the field. Legislation is an intelligent step towards that goal, but globally it might not be enough. Only by investing in AI can Europe secure strategic independence in a critical technology of the 21st century and prevent one more geostrategic dependency like the one that pushed Europe to the margins of the gas and oil supply market.
The path forward: building a trustworthy AI hub
The EU AI Act aims to turn the EU into a global hub for trustworthy AI. By protecting Europeans and providing businesses with the legal certainty necessary to inspire innovation, the Act attempts to balance technological advancement with ethical considerations. The higher the risk of harm, the stricter the regulation: AI systems with an unacceptable level of risk are prohibited, and high-risk systems are subject to strict obligations.
The EU AI Act is a pioneering initiative that reflects the EU's commitment to responsible AI governance. By addressing the risks and mobilizing the benefits of AI, the Act sets a global benchmark for AI regulation. As AI technology continues to evolve, the EU AI Act can serve as a model for other regions, promoting responsible innovation and protecting the rights of citizens worldwide. The law is meant to serve citizens, but it has a few gaps. Some aspects need improvement, such as stronger public protections and coverage of all citizens. The ecological impact also remains a major problem for regulators, as AI consumes significant amounts of energy to function, learn, and train new models. And while the EU has taken hold of AI legislation partly to regain power on the global geopolitical scene, significant investments are necessary to succeed. In conclusion, AI legislation is much needed to protect citizens’ privacy and to set boundaries so the technology is used for good causes only, preventing discrimination and crime. While the idea behind it is solid, in practice there is much more to do to make the EU AI Act the blueprint for AI usage worldwide.