BLOG


Update AI Act - the ten most important questions for users of AI systems

After the political agreement on the AI Act was effectively announced in the media in December 2023, the now provisionally final version was adopted on 13 March 2024. The AI Act was approved by the European Parliament with an overwhelming majority of 523 votes to 46. All that now remains is for the legal and linguistic experts to review it and for the Council to formally adopt the Regulation. This is expected to happen before the end of the current legislature (or by July 2024).

There is no doubt that manufacturers of AI systems will have to comply with the provisions of the AI Act and will therefore certainly be keeping a close eye on this European Regulation. However, companies that "only" use AI should do the same. Below, we have compiled ten practical questions that companies should ask themselves if they are using or planning to use AI.

1. Which companies must comply with the provisions of the AI Act?

Most of the provisions of the AI Act deal with prohibited and high-risk AI systems and the resulting obligations for providers, as well as for importers and distributors of such AI systems. However, this does not mean that users of an AI system can now sit back and relax. On the contrary: users (referred to as "deployers" in the AI Act) of AI systems are also covered by the AI Act and must comply with extensive obligations.

The AI Act applies not only to companies based in the EU, but also to providers and deployers based outside the EU, provided that the output generated by the AI systems is used in the EU.

2. Excluded areas of application

First, the AI Act excludes from its scope the use of AI by natural persons for purely personal, non-professional purposes. AI systems that are developed and used exclusively in the field of scientific research and development are also excluded from the scope of application.

The provision that the AI Act does not apply to certain AI released under free and open source licenses may become significant in the future. The AI Act provides for another significant exception for the use of AI systems in the military, defense and national security sectors. In addition, Member States have the option to provide for further exceptions in specific areas. For example, they may introduce additional legal and administrative provisions on the use of AI systems by employers in order to give employees greater protection.

3. Prohibited AI systems

AI systems that pose an unacceptable risk are completely prohibited under Art. 5 AI Act. This includes AI systems in the following eight areas:

  • Subliminal techniques beyond a person's consciousness that materially distort a person's behaviour and cause, or are likely to cause, significant harm
  • Targeted exploitation of the vulnerabilities of certain groups of people due to their age or disability
  • Social scoring
  • The use of profiling systems to assess or predict the risk of an individual committing a criminal offense
  • Non-targeted collection (scraping) of facial images from the Internet or from video surveillance systems to create facial recognition databases
  • The use of emotion recognition systems in the workplace and in educational institutions
  • The use of biometric categorisation systems to infer sensitive characteristics such as race, political opinions, religious beliefs or sexual orientation
  • The use of 'real-time' remote biometric identification systems in public spaces for purposes of law enforcement (although this is declared permissible within narrow limits).

4. Which systems are high-risk AI systems?

The core of the AI Act is the set of rules on so-called high-risk AI systems. In principle, AI systems are considered high-risk AI systems if they pose a significant threat to fundamental rights.

A high-risk AI system exists if an AI system is used as a safety component for a product that falls under the EU harmonisation legislation listed in Annex I or is itself such a product. This includes, for example, machinery, toys and medical devices.

An AI system is also considered to be a high-risk AI system if it falls into one of the following areas of Annex III:

  • Remote biometric identification systems and AI systems for biometric categorisation and emotion recognition
  • Critical infrastructure: This includes AI systems that are intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic or the supply of water, gas, heating and electricity.
  • Education and vocational training: AI systems are covered if they are used to make decisions on the access of natural persons to educational and vocational training institutions.
  • Employment, workers' management and access to self-employment: AI systems used for analyzing, filtering and evaluating applicants are covered.
  • Certain essential private and essential public services and benefits: This includes, for example, AI systems that are used to evaluate the creditworthiness and credit score of natural persons.
  • Law enforcement
  • Migration, asylum and border control management
  • Administration of justice and democratic processes

However, the AI Act provides for an important exception: AI systems from the aforementioned Annex III categories can be exempted from classification as high-risk AI systems under certain conditions. The prerequisite is that there is no significant risk of harm to the health, safety or fundamental rights of natural persons. Examples include AI systems that are intended to perform a narrow procedural task. The same applies if the AI system is merely intended to improve the result of an activity previously carried out by humans. The assessment of whether such an exemption applies must be carried out by the company itself as part of a risk evaluation and documented accordingly.

5. What regulations apply to deployers of high-risk AI systems?

Companies that use high-risk AI systems as deployers must fulfill a comprehensive catalog of obligations. These include, for example:

  • They take appropriate technical and organizational measures to ensure that the high-risk AI systems are used in accordance with the instructions for use.
  • They assign human oversight to natural persons who have the necessary competence, training and authority.
  • They ensure that input data is relevant and sufficiently representative with regard to the purpose of the AI system.
  • They monitor the operation of the high-risk AI system on the basis of the instructions for use and, if necessary, inform the provider or, in the event of serious incidents, the importer, distributor and the relevant authorities.
  • They keep automatically generated logs for at least six months.
  • If they are also employers, they must inform the employees concerned and the employee representatives about the use of a high-risk AI system in the workplace.
  • They are subject to an obligation to cooperate with the authorities.

The fundamental rights impact assessment for high-risk AI systems, which was originally required of all deployers, is now foreseen in the current text of the Regulation only for state institutions and private companies performing public services, as well as for deployers of high-risk AI systems used for credit assessments or for risk-based pricing of life and health insurance (Art. 27 AI Act).

Under certain conditions, deployers may themselves become providers of a high-risk AI system and then be subject to the stricter provider obligations, such as establishing a risk management system, carrying out a conformity assessment procedure and registering in an EU database. Such a shift in responsibility occurs if the deployer places a high-risk AI system on the market or puts it into service under its own name or trademark, or if it makes a substantial modification to a high-risk AI system.

6. What obligations apply to deployers of AI systems that are not high-risk AI systems?

While the comprehensive list of obligations outlined above applies to deployers of high-risk AI systems, deployers of low-risk AI systems are generally only subject to certain transparency obligations (Art. 50 AI Act). For example, they must disclose if content such as images, videos or audio content constituting a deep fake has been artificially generated or modified by an AI. The same obligation applies when an AI generates or manipulates text that is published with the purpose of informing the public on matters of public interest.

7. What applies to SMEs?

The declared aim of the AI Act is to create an innovation-friendly regulatory framework. Accordingly, the legislator has introduced regulatory relief for micro, small and medium-sized enterprises (SMEs) - including start-ups - based in the EU. For example, SMEs can benefit from non-material and financial support. In addition, under certain conditions, SMEs are to be given priority and free access to so-called regulatory sandboxes. Finally, fines for SMEs are capped at lower levels.

8. When does the AI Act apply?

Exact dates cannot yet be given, as the final text of the AI Act still needs to be published in the Official Journal of the EU; it enters into force 20 days after publication. The ban on prohibited AI systems will take effect six months after the Regulation comes into force. The majority of the provisions of the AI Act will apply 24 months after entry into force. However, the obligations for high-risk AI systems that fall under the EU harmonisation legislation listed in Annex I will only apply after 36 months.
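The staggered timeline can be pictured with a short calculation. The following is a minimal Python sketch, assuming a purely hypothetical entry-into-force date (the real date depends on publication in the Official Journal):

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Return the same calendar day `months` later, clamped to the month's length."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

# Placeholder assumption: the actual date depends on publication in the Official Journal.
entry_into_force = date(2024, 8, 1)

milestones = [
    ("Prohibitions on banned AI practices", 6),
    ("Most provisions, incl. Annex III high-risk obligations", 24),
    ("High-risk obligations for Annex I products", 36),
]

for label, months in milestones:
    print(f"{label}: applies from {add_months(entry_into_force, months)}")
```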

9. How are violations of the AI Act sanctioned?

Non-compliance with the requirements of the AI Act can result in exorbitant fines. These vary depending on the violation and the size of the company. While violations of the rules on prohibited AI systems can result in fines of up to EUR 35 million or 7% of global annual turnover (whichever is higher), other violations of obligations under the AI Act can result in fines of up to EUR 15 million or 3% of global annual turnover. Fines of up to EUR 7.5 million or 1% of turnover may be imposed for providing false information.
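To put these ceilings in perspective, here is a minimal sketch that computes the maximum possible fine per tier for a given company. The turnover figure is made up, and the "whichever is higher" rule mentioned above is assumed to apply (for SMEs, the lower of the two amounts applies instead):

```python
def fine_ceiling(annual_turnover_eur: float, fixed_cap_eur: float, pct_of_turnover: float) -> float:
    """Maximum fine for one tier: the higher of the fixed amount and the turnover share."""
    return max(fixed_cap_eur, annual_turnover_eur * pct_of_turnover)

turnover = 2_000_000_000  # hypothetical global annual turnover in EUR

tiers = {
    "Prohibited AI practices (Art. 5)": (35_000_000, 0.07),
    "Other obligations under the AI Act": (15_000_000, 0.03),
    "Supply of false information": (7_500_000, 0.01),
}

for name, (cap, pct) in tiers.items():
    print(f"{name}: up to EUR {fine_ceiling(turnover, cap, pct):,.0f}")
```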

Several national and EU-wide authorities are involved in enforcement, resulting in a complex structure of responsibilities and coordination procedures. In Germany, it is not yet clear which authority will ensure compliance with the requirements of the AI Act; the Federal Network Agency and the Federal Office for Information Security are being discussed as candidates.

10. ToDos for companies

First of all, each company should determine the risk class to which the AI systems it uses belong. The requirements for their proper use are then derived from this categorisation. Especially for future projects, it is important to involve the departments responsible for AI in the company at an early stage in order to ensure sufficient testing and compliance with the regulations. This is highly recommended, especially in view of the high fines.
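As a starting point for such an inventory, a rough triage routine along the lines of the sketch below can help structure the first pass over the AI systems in use. The categories and attributes are simplified illustrations, not a legal assessment:

```python
from dataclasses import dataclass
from enum import Enum

class RiskClass(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    TRANSPARENCY = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk"

@dataclass
class AISystemRecord:
    name: str
    prohibited_practice: bool       # e.g. social scoring, emotion recognition at work
    annex_i_safety_component: bool  # safety component of a product under Annex I
    annex_iii_area: bool            # e.g. recruitment, credit scoring
    generates_synthetic_content: bool

def classify(system: AISystemRecord) -> RiskClass:
    """Rough triage only -- the legal assessment under the AI Act must follow."""
    if system.prohibited_practice:
        return RiskClass.PROHIBITED
    if system.annex_i_safety_component or system.annex_iii_area:
        return RiskClass.HIGH_RISK
    if system.generates_synthetic_content:
        return RiskClass.TRANSPARENCY
    return RiskClass.MINIMAL

# Example: an HR screening tool falls into an Annex III area
print(classify(AISystemRecord("CV screening tool", False, False, True, False)).value)
```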

Dr Peggy Müller

Another article on this topic can be found at this link.

TAGS

Artificial Intelligence, High-Risk AI Systems

Contact us

Dr Peggy Müller T   +49 69 756095-582 E   Peggy.Mueller@advant-beiten.com