
Overview of the AI Regulation Part 2

B) Risk-based approach of the AI Regulation

As already described in the Overview of the AI Regulation Part 1, the AI Regulation follows a risk-based approach. This means that the degree of regulation depends on the severity of the risks posed by the AI application. To assess which requirements must be met, an affected organisation must first determine what type of AI system is involved. AI systems are generally divided into four categories:

– According to Art. 5 I AI Regulation, AI systems used for prohibited practices

– According to Art. 6 AI Regulation, high-risk AI systems

– According to Art. 50 AI Regulation, AI systems with limited risk

– According to Art. 95 AI Regulation, AI systems with minimal risk

Additional requirements apply to AI systems with a general purpose; more on this later.
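The four-step check described above can be sketched as a simple triage function. This is purely illustrative: the tier names follow the articles cited in the list, but the input flags are simplified assumptions of my own, and an actual classification always requires a legal analysis of Art. 5–7, 50 and 95 of the AI Regulation.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk categories of the AI Regulation."""
    PROHIBITED = "Art. 5 - prohibited practice"
    HIGH = "Art. 6 - high-risk system"
    LIMITED = "Art. 50 - limited risk"
    MINIMAL = "Art. 95 - minimal risk"


def triage(is_prohibited_practice: bool,
           is_safety_component_or_regulated_product: bool,
           interacts_with_natural_persons: bool) -> RiskTier:
    """Hypothetical first-pass triage of an AI system.

    The three boolean inputs are illustrative simplifications,
    not terms defined in the Regulation. The order matters:
    the strictest applicable tier wins.
    """
    if is_prohibited_practice:
        return RiskTier.PROHIBITED
    if is_safety_component_or_regulated_product:
        return RiskTier.HIGH
    if interacts_with_natural_persons:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

For example, a CV-filtering tool that is neither prohibited nor transparent-only would fall out of this sketch as `RiskTier.HIGH`, which matches the recruitment example discussed under b) below.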

a) Prohibited practices

Prohibited practices are listed in Art. 5 of the AI Regulation. It is assumed that the risk these systems pose to those affected is too serious to permit their use. For example, the placing on the market, putting into service and use of AI for subliminal manipulation beyond a person's awareness is prohibited if this manipulation is significant and is intended to cause physical or psychological harm to that person or another person.

b) High-risk systems

The regulation of high-risk AI systems makes up a large part of the AI Regulation. The European legislator has not included a precise definition of high-risk systems in the law. Instead, it aims to remain as adaptable as possible and not to set excessively narrow limits. Points of reference are therefore distributed across Art. 6 and Art. 7 of the AI Regulation. According to Art. 6 I of the AI Regulation, a high-risk system exists if it is used as a safety component of a product, or is itself a product, that is subject to certain EU regulations. Art. 7 I of the AI Regulation authorises the EU Commission to draw up a catalogue of life situations or applications that fall under this definition, and further use cases can be added by the Commission in the future. For example, AI systems intended for the recruitment or selection of natural persons, in particular for placing targeted job advertisements, analysing and filtering applications and evaluating applicants, have been classified as high-risk systems.

Requirements for high-risk systems

Art. 8 et seq. AI Regulation define the compliance requirements for high-risk AI systems. The central provision here is likely to be Art. 9 of the AI Regulation, which requires the establishment of a risk management system covering the entire life cycle of the AI. The risk analysis must take into account the risks to health, safety and fundamental rights that the AI system poses when used as intended.

c) AI systems with limited risk

Art. 50 of the AI Regulation sets out information obligations for both operators and providers of AI systems with limited risk. Users must be informed that they are interacting with an AI so that they can adjust their behaviour accordingly. According to Art. 50 I of the AI Regulation, AI systems must be designed in such a way that a reasonably well-informed person clearly recognises that they are interacting with an AI.

d) AI systems with minimal risk

For AI systems that neither fall under Art. 50 of the AI Regulation nor constitute a high-risk system, a code of conduct can be followed voluntarily in accordance with Art. 95 of the AI Regulation. According to the legislator, this is intended to strengthen social trust in AI applications.

e) Special provisions for general-purpose AI systems

For general-purpose AI systems, additional obligations under Art. 51 et seq. AI Regulation apply on top of the requirements of the respective risk level.

It should be noted that these additional obligations apply exclusively to providers of so-called GPAI (General Purpose Artificial Intelligence) models; operators of such systems are not affected by them. A GPAI model is an AI model that, including where it is trained with large amounts of data using self-supervision at scale, displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of how it is placed on the market. It can be integrated into a variety of downstream systems or applications, with the exception of AI models used for research, development or prototyping purposes before being placed on the market.

A well-known example of a GPAI model is currently ChatGPT. Companies that intend to use, or are already using, AI systems must therefore consider a number of aspects. It is strongly recommended that they prepare accordingly by establishing AI compliance.
