
What Are High-Risk AI Systems Within the Meaning of the EU’s AI Act, and What Requirements Apply to Them?

In the evolving landscape of artificial intelligence (AI) regulation, the European Union’s AI Act introduces a comprehensive framework that classifies AI systems into four risk levels: unacceptable, high, limited, and minimal. Each category carries distinct requirements, with high-risk AI systems subject to the most extensive obligations. Understanding these classifications and their specific requirements is crucial for businesses aiming to innovate responsibly and comply with EU regulations.

Identifying High-Risk AI Systems

Article 6 of the AI Act delineates the criteria for categorizing AI systems as high-risk:

  1. Article 6(1): An AI system is deemed high-risk if it meets two cumulative conditions:

    • Safety Component: The AI system is intended to be used as a safety component of a product (or is itself a product) covered by the EU harmonization legislation listed in Annex I of the AI Act. Annex I comprises more than 30 directives and regulations covering toys, vehicles, civil aviation, lifts, radio equipment, and medical devices, among others.

    • Third-Party Conformity Assessment: The harmonization legislation mandates a third-party conformity assessment before the product (incorporating the AI system as a safety component) or the AI system itself is placed on the EU market or put into service.

  2. Article 6(2): An AI system is also high-risk if it falls within one of the use cases listed in Annex III, such as biometrics, critical infrastructure, education and vocational training, and employment, worker management, and access to self-employment. The European Commission (Commission) may amend this list. Under Article 6(3), an Annex III system is not high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights and it meets at least one of the following conditions:

    • Perform narrow procedural tasks,

    • Improve the result of a previously completed human activity,

    • Detect decision-making patterns, or deviations from them, without replacing or influencing the completed human assessment, or

    • Perform a preparatory task to an assessment relevant to an Annex III use case.

    However, the exemption never applies if the AI system performs profiling of natural persons as defined in the GDPR. This two-track classification logic is illustrated in the sketch below.
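
To make the test concrete, the following is a minimal Python sketch of the Article 6 classification logic described above. The data model (the AISystem flags) is our own illustrative simplification, not terminology from the Act; classifying a real system requires legal analysis of the underlying product legislation and the annexes.

    from dataclasses import dataclass

    @dataclass
    class AISystem:
        # Hypothetical flags summarizing the legal analysis of one system
        is_safety_component: bool           # Article 6(1): safety component of an Annex I product (or is such a product)
        needs_third_party_assessment: bool  # Article 6(1): Annex I legislation mandates third-party conformity assessment
        annex_iii_use_case: bool            # Article 6(2): listed in Annex III (biometrics, employment, etc.)
        performs_profiling: bool            # profiling of natural persons as defined in the GDPR
        exemption_condition_met: bool       # Article 6(3): at least one of the four conditions above applies

    def is_high_risk(system: AISystem) -> bool:
        # Article 6(1): the two conditions are cumulative -- both must hold
        if system.is_safety_component and system.needs_third_party_assessment:
            return True
        # Article 6(2)/(3): Annex III systems are high-risk unless exempt;
        # profiling of natural persons always defeats the exemption
        if system.annex_iii_use_case:
            return system.performs_profiling or not system.exemption_condition_met
        return False

    # Example: an Annex III recruitment-screening tool that profiles candidates
    print(is_high_risk(AISystem(False, False, True, True, False)))  # True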

Requirements for High-Risk AI Systems

High-risk AI systems must comply with extensive requirements, taking into account their intended purpose, the state of the art, and the risk management system implemented. Key requirements include:

  1. Risk Management System:

    • A continuous risk management system throughout the AI system's lifecycle.

    • Identification and mitigation of foreseeable risks to health, safety, and fundamental rights.

    • Evaluation of risks from intended use and foreseeable misuse, supported by post-market monitoring.

  2. Data and Data Governance:

    • Governance and management practices for training, validation, and testing data.

    • Training, validation, and testing data must be relevant, sufficiently representative and, to the best extent possible, free of errors and complete, with statistical properties appropriate to the system’s intended purpose.

    • Specific geographical, contextual, behavioral, or functional settings must be considered.

  3. Technical Documentation:

    • Detailed technical documentation before market placement or service initiation.

    • Includes system description, monitoring, control, performance metrics, risk management, standards applied, and conformity declarations.

    • SMEs can provide simplified documentation.

  4. Recordkeeping:

    • Automatic recording of events to ensure traceability and facilitate post-market monitoring.

    • Logging capabilities must make it possible to identify situations that may result in risk or in a substantial modification of the system (see the sketch after this list).

  5. Transparency and Provision of Information to Deployers:

    • Sufficiently transparent information for deployers to interpret and use the system appropriately.

    • Instructions for use must include details on the provider, system characteristics, performance, human oversight measures, and required computational resources.

  6. Human Oversight:

    • Designed to allow effective human oversight to minimize risks.

    • Oversight measures must guard against automation bias, that is, over-reliance on system outputs, and enable human intervention when necessary.

  7. Accuracy, Robustness, and Cybersecurity:

    • AI systems must achieve appropriate accuracy, robustness, and cybersecurity throughout their lifecycle.

    • Accuracy levels and metrics must be declared in the instructions for use.

    • Systems must be resilient to errors, faults, and unauthorized alterations.
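
For requirement 4 (recordkeeping), the sketch below shows one way automatic event recording might look in practice. It is a minimal Python example under our own assumptions: the field names, JSON-lines format, and file location are illustrative choices, not prescribed by the Act.

    import json
    import logging
    from datetime import datetime, timezone

    # Append one JSON record per line to an audit file (illustrative storage choice)
    logger = logging.getLogger("ai_system_audit")
    logging.basicConfig(filename="audit_log.jsonl", level=logging.INFO, format="%(message)s")

    def record_event(event_type: str, model_version: str, input_ref: str, output_ref: str) -> None:
        """Record one traceability event (e.g., each inference or model update)."""
        logger.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event_type": event_type,        # e.g., "inference", "model_update"
            "model_version": model_version,  # helps detect substantial modifications over time
            "input_ref": input_ref,          # reference to inputs, not raw personal data
            "output_ref": output_ref,
        }))

    record_event("inference", "v1.4.2", "request-8f3a", "decision-8f3a")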

Enforcement and Penalties

The AI Act’s obligations phase in after its expected entry into force in mid-2024: the prohibitions on unacceptable-risk practices apply after six months, while most obligations for high-risk AI systems apply after 24 to 36 months. Penalties for noncompliance are severe. The most serious infringements, the prohibited AI practices, can draw fines of up to €35 million or 7% of total worldwide annual turnover, whichever is higher; noncompliance with the high-risk requirements described above can draw fines of up to €15 million or 3%. National market surveillance authorities will enforce these provisions and report annually to the Commission.
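
To illustrate the “whichever is higher” mechanics, here is a short worked example; the turnover figures are hypothetical.

    def fine_ceiling(turnover_eur: float, fixed_cap: float = 35_000_000, pct: float = 0.07) -> float:
        """Maximum fine: the higher of the fixed cap or a percentage of worldwide annual turnover."""
        return max(fixed_cap, pct * turnover_eur)

    # Top tier (prohibited practices): EUR 35M or 7%, whichever is higher
    print(fine_ceiling(200_000_000))    # 35,000,000 -- the fixed cap exceeds 7% of turnover (EUR 14M)
    print(fine_ceiling(1_000_000_000))  # ~70,000,000 -- 7% of turnover exceeds the cap
    # High-risk requirement tier: EUR 15M or 3%
    print(fine_ceiling(1_000_000_000, fixed_cap=15_000_000, pct=0.03))  # ~30,000,000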

Moving Forward

As the AI regulatory landscape continues to evolve, businesses must stay informed and compliant with the AI Act to avoid substantial fines and ensure ethical AI practices. Thorough documentation, transparency, and a robust risk management system are critical components of compliance.

For detailed guidance on navigating the AI Act and other AI-related matters, please contact our firm. We are here to help you understand and comply with these complex regulations.

Gayatri Gupta