On August 1, 2024, the European Union’s new Artificial Intelligence (AI) Act came into effect, setting a precedent for AI regulation worldwide. The EU AI Act is the world’s first comprehensive regulatory framework addressing the safe, transparent, and non-discriminatory use of AI, establishing Europe as a leader in the field. With a range of legal obligations for AI systems based on their potential risk, the European Parliament aims to “protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation.”

In sum, the EU AI Act places safeguards on general purpose AI (GPAI), creates clear data quality and risk management obligations for systems categorized as high risk, bans certain AI systems outright, and grants EU citizens a right of action to submit complaints against AI systems. The majority of obligations fall on developers of high-risk AI systems that intend to use or sell their systems in the EU.

While the AI Act directly applies to EU member states, its implications extend far beyond EU borders. Developers and deployers of AI systems in the United States will still be subject to the act’s provisions if their AI system is or will be marketed within the EU and can potentially affect EU consumers. Failure to comply with the act’s provisions can result in severe penalties of up to €35 million or 7% of global annual turnover, whichever is higher, posing a significant financial risk to American companies whose products reach the EU market.


Risk Categories and Requirements

The EU AI Act categorizes AI systems into four tiers based on their potential risk, and each category has specific regulations and requirements (sketched in code after the list):

  • Unacceptable Risk: AI systems that fall under the unacceptable risk category are banned completely. This includes biometric categorization systems that infer protected characteristics, emotion recognition in the workplace and schools, social scoring, predictive policing, and AI used to manipulate or exploit individual vulnerabilities.
  • High Risk: Providers of high-risk systems under the AI Act must establish a risk management system, conduct data governance, provide technical documentation demonstrating compliance, and implement human oversight, among other quality and accuracy requirements. An AI system may be categorized as high risk if it is used in critical infrastructure, education, employment, or law enforcement and poses a significant risk to people’s health, safety, or fundamental rights.
  • Limited Risk: Limited-risk AI tools and GPAI systems, such as chatbots, must meet certain transparency requirements, including compliance with EU copyright law and publishing detailed summaries of the content used for training. More powerful GPAI models that pose systemic risks may face additional requirements. This category also requires artificially generated or manipulated images and videos, known as “deepfakes,” to be clearly labeled as such.
  • Minimal Risk: There are no specific requirements for minimal risk AI systems, which refers to applications like AI-powered video games or email spam filters. Most AI systems will likely fall into this category.
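
For software teams mapping the Act’s tiers onto internal compliance checklists, the structure above can be sketched as a simple lookup table. The sketch below is purely illustrative: the tier names and example obligations are paraphrased from this article’s summary, and the identifiers (RiskTier, EXAMPLE_OBLIGATIONS, obligations_for) are invented for the example, not drawn from the Act or any official tooling.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative sketch of the EU AI Act's four risk tiers (not legal advice)."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict compliance obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific requirements

# Example obligations per tier, paraphrased from the article's summary above.
EXAMPLE_OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: [
        "establish a risk management system",
        "conduct data governance",
        "provide technical documentation",
        "implement human oversight",
    ],
    RiskTier.LIMITED: [
        "meet transparency requirements (e.g., label AI-generated content)",
    ],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative example obligations for a given risk tier."""
    return EXAMPLE_OBLIGATIONS[tier]

if __name__ == "__main__":
    for tier in RiskTier:
        summary = ", ".join(obligations_for(tier)) or "no specific requirements"
        print(f"{tier.value}: {summary}")
```

Note that actual classification under the Act turns on a legal analysis of a system’s intended use and context, not a static lookup; this sketch only captures the tiered shape of the obligations.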

Implications for US-Based AI Systems

The EU AI Act’s focus on high-risk AI systems that could potentially result in unfair treatment or discrimination makes it crucial for AI developers to consider how their patent applications describe AI systems and functionalities. Although simply disclosing these features in a patent application does not automatically categorize the AI system as high risk, US companies should be aware that these disclosures might attract greater scrutiny to ensure compliance with the EU AI Act.

Overall, the EU AI Act necessitates that US companies developing or investing in AI technology consider its regulatory requirements, as the legislation is likely to influence AI regulations in other countries.

AI Governance in the US is Growing

The United States does not have a comprehensive federal law specifically regulating AI, but there is a growing trend toward implementing various safeguards on AI technology. For example, the Federal Trade Commission (FTC) has started cracking down on companies with unfair or deceptive practices involving AI technology, and the United States Patent and Trademark Office (USPTO) is considering changes to patentability requirements in light of AI advancements.

At the state level, the Colorado Artificial Intelligence Act (CAIA) was enacted earlier this year, based in part on the EU AI Act. Similar to the EU AI Act, the CAIA targets high-risk AI systems in areas such as education, employment, financial services, and health care, among others. Additionally, lawmakers in California have been drafting legislation to address AI safety concerns.

As the legal landscape for AI continues to evolve in the United States and elsewhere, companies developing or deploying AI technologies should be mindful of transparency, risk mitigation, and anti-bias measures.

AI Analysis and Evaluation by Software Experts at Quandary Peak

Our AI and machine learning experts have extensive knowledge of software that employs artificial intelligence and machine learning techniques. Contact us to speak with an ML/AI expert who understands the unique challenges of evaluating AI software and source code for litigation and due diligence matters.