Summary

This article outlines the provisions of the EU Cyber Resilience Act (CRA) that apply to high-risk AI systems. It emphasizes that manufacturers need risk assessment methodologies tailored to AI/ML products in order to meet the Act's security and resilience requirements.

Relevant CRA Provisions

Article 12 of Regulation (EU) 2024/2847 (high-risk AI systems), read together with the essential cybersecurity requirements in Annex I.

Detailed Explanation

The CRA mandates that products with digital elements classified as high-risk AI systems comply with the essential cybersecurity requirements set out in the Regulation. Products that fulfil these requirements are deemed compliant with the cybersecurity requirements of Regulation (EU) 2024/1689 (the AI Act). The cybersecurity risk assessment for such products must consider attempts by unauthorized third parties to alter their use, behavior, or performance, including AI-specific attack vectors such as data poisoning and adversarial attacks. Where a high-risk AI system is also an important or critical product with digital elements, the conformity assessment procedures provided for in the CRA apply with respect to the essential cybersecurity requirements. Manufacturers of these products may participate in AI regulatory sandboxes to facilitate compliance.

Obligations for Stakeholders

Manufacturers: Must ensure that high-risk AI systems comply with the essential cybersecurity requirements, demonstrate compliance through an EU declaration of conformity, and may use AI regulatory sandboxes for testing.

Distributors and Importers: Must verify that high-risk AI systems placed on the market comply with the CRA's requirements and must maintain the necessary documentation.

Open Source Software Stewards: Where a product qualifies as free and open-source software, conformity may be assessed under the internal control procedure, and the technical documentation must be made publicly available.