
Understanding the EU’s Artificial Intelligence Act: The First AI Law Proposed by a Major Global Regulator

June 22, 2023

The AI Act is a groundbreaking proposal for a European Artificial Intelligence (AI) law, marking the first AI bill introduced by a major global regulator. Designed to regulate AI systems, the Act classifies the level of risk posed by these systems and imposes obligations and penalties proportionate to the identified degree of risk. Like the EU General Data Protection Regulation (GDPR) of 2018, the AI Act could become a global standard, determining the extent to which AI positively impacts people’s lives. The anticipated international impact of the EU AI Regulation has prompted other governments to strengthen their own measures on the use of artificial intelligence. For example, in September 2021, Brazil’s Congress passed a bill creating a legal framework for AI, which now awaits adoption by the country’s Senate. Additionally, Chinese regulators have released draft rules aimed at managing how companies develop generative AI products, such as ChatGPT.

Originally proposed by the European Commission in April 2021, the AI Act has reached significant milestones in its journey toward becoming the first law regulating artificial intelligence. At the end of 2022, the European Council adopted a ‘general approach’ position on the legislation. Then, in early May 2023, the Committee on Internal Market and the Committee on Civil Liberties adopted a draft negotiating mandate for the first artificial intelligence regulations by 84 votes to 7, with 12 abstentions.

What Is the AI Act, and Why Is It Important?

The AI Act represents the first proposal for a European law on artificial intelligence, applying to the development, deployment, and use of AI in the EU. The Act classifies AI applications into three levels of risk:

1) Applications and systems that create unacceptable risks are prohibited. Examples include social scoring systems such as those employed by the Chinese government, systems based on subliminal, manipulative, or exploitative techniques, and biometric identification systems.

2) High-risk applications are subject to specific legal requirements. This category encompasses CV scanning tools for job applicants, systems assessing customer creditworthiness, systems endangering public health, and any system used in the administration of justice.

3) Limited risk applications that are not explicitly prohibited or listed as high risk are largely left unregulated. This category includes chatbots, games with AI components, spam filters, inventory management systems, and market and customer segmentation systems; in practice, it covers most AI systems.
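The three-tier scheme above can be sketched as a small lookup table, purely as an illustration. The tier names, the example mapping, and the default-to-limited rule below paraphrase this article's summary, not the Act's legal definitions:

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative labels for the Act's three risk tiers (article's summary)."""
    UNACCEPTABLE = "prohibited"
    HIGH = "subject to specific legal requirements"
    LIMITED = "largely unregulated"


# Hypothetical mapping of the example applications named above to tiers;
# the Act itself defines tiers in legal language, not as a lookup table.
EXAMPLE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "subliminal manipulation": RiskTier.UNACCEPTABLE,
    "cv scanning for hiring": RiskTier.HIGH,
    "creditworthiness assessment": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.LIMITED,
}


def risk_tier(application: str) -> RiskTier:
    """Return the tier for a known example; default to LIMITED otherwise,
    mirroring the Act's treatment of systems that are neither explicitly
    prohibited nor listed as high risk."""
    return EXAMPLE_TIERS.get(application.lower(), RiskTier.LIMITED)
```

The default-to-limited behavior reflects the article's point that anything not prohibited or listed as high risk is largely left unregulated.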

The AI Act has taken another important step towards becoming the pioneering law in regulating the field. The bill was approved by two parliamentary committees with strong support. The latest version of the bill presents a stricter set of rules, prohibiting unacceptably risky systems, such as the use of facial recognition software in public, predictive AI systems used by police, and emotion recognition systems.

The recent amendments also introduce specific requirements for foundation models and transparency measures for generative AI applications. Members of the European Parliament agreed on an amendment obliging providers of foundation models to conduct safety checks, implement data governance measures, and mitigate risks before placing their models on the market. They must also take into account potential risks to health, safety, fundamental rights, the environment, democracy, and the rule of law.

Foundation model providers are also required to reduce energy and resource consumption and to register their systems in an EU database. Furthermore, the Act aims to establish a uniform, technology-neutral definition of AI, applicable to both current and future AI systems.

The Main Provisions of the AI Act

The AI Act strictly prohibits AI systems that pose an unacceptable level of risk to human safety. These include systems that:

  • use subliminal or intentionally manipulative techniques
  • exploit people’s vulnerabilities
  • calculate social scores to classify people based on their social behavior, socioeconomic status, and personal characteristics

The law introduces a ban on the intrusive and discriminatory use of AI systems such as:

  • “real-time” remote biometric identification systems in publicly accessible spaces
  • “post-factum” remote biometric identification systems
  • biometric classification systems using sensitive characteristics (e.g., gender, race, ethnicity, citizenship status, religion, political orientation)
  • predictive surveillance systems (based on profiling, location, or past criminal behavior)
  • emotion recognition systems in law enforcement, border management, workplace, and education institutions
  • systems for the indiscriminate extraction of biometric data from social media or CCTV footage

In addition, the new version of the Act expands the list of high-risk AI applications to include:

  • AI systems that may harm human health, safety, fundamental rights, or the environment
  • AI systems that can influence voters in election campaigns
  • recommendation systems used by social media platforms

Transparency measures targeting general-purpose AI are also included, imposing obligations on providers of foundation models, such as large language models and generative AI (like ChatGPT). These providers should ensure robust protection of fundamental rights, human health and safety, the environment, democracy, and the rule of law.

Before releasing their models, developers will be required to conduct safety checks, apply data governance measures, vet data sources to mitigate potential discrimination or bias, and assess and mitigate any risks to fundamental rights, human health and safety, the environment, democracy, and the rule of law. Compliance with copyright law with respect to training data is also mandatory.

What Will Happen Next?

Before negotiations can begin with the European Council on the final bill, the draft negotiating mandate needs to be endorsed by the full Parliament during the 12-15 June session. Final approval is expected before spring 2024. Once the Act is approved, companies will have a grace period of approximately two years to comply. The Artificial Intelligence Act imposes stiff penalties for non-compliance, with fines reaching up to €30 million or 6% of global revenue. For tech giants like Meta or Google, this could mean billions of dollars.
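As a rough illustration of how the penalty ceiling scales with company size (assuming the Commission proposal's "whichever is higher" rule, which may change in the final text), the €30 million floor and the 6% revenue cap can be combined as follows:

```python
def max_fine_eur(global_revenue_eur: float) -> float:
    """Penalty ceiling under the draft Act: EUR 30 million or 6% of total
    worldwide annual revenue, whichever is higher (assumed from the
    Commission proposal; the figures may change in the final text)."""
    return max(30_000_000.0, 0.06 * global_revenue_eur)


# For a hypothetical company with EUR 200 billion in global revenue,
# the 6% cap dominates, giving a ceiling of EUR 12 billion.
print(max_fine_eur(200e9))
```

The revenue figure above is purely illustrative, but it is consistent with the article's point that a fine could run to billions for the largest tech companies.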