AI Act compact (eBook)
329 pages
Fachmedien Recht und Wirtschaft (publisher)
978-3-8005-9773-4 (ISBN)
Peter Hense, lawyer and partner at Spirit Legal, specializes in technology, data, research and development, and privacy engineering. For over a decade, he has worked with leading R&D companies on machine learning, in particular artificial neural networks and knowledge graphs. His focus is on compliance and data governance in the context of automated decision-making systems (Accountable AI).

Tea Mustac, Mag. iur., is an expert in European and international technology and IP law at Spirit Legal. She advises and publishes at the interface of law and technology, with a particular focus on artificial intelligence. Together with Peter Hense, she hosts the English-language podcast "RegInt: Decoding AI Regulation".
1. The Scope of the AI Act
a. On AI Systems
(1) Introduction
After multiple conceptual changes to the definition, the AI Act was finally based on the OECD definition of an AI system. This is problematic for several reasons, starting with the fact that adopting this definition, intentionally or not, broadens the scope of an already very broad definition. Furthermore, the OECD definition was never intended to serve as a legal definition but rather as a programmatic statement typical of public policymaking, which makes it per se too vague for that purpose. Nonetheless, Article 3(1) of the AI Act now defines an “AI system” as:
– A machine-based system
– Designed to operate with varying levels of autonomy
– That may exhibit adaptiveness after deployment,
– That infers from its inputs how to generate outputs such as predictions, content, recommendations, or decisions, and
– That can influence physical or virtual environments.
Recital 12 attempts to provide some clarity on the matter by stating, firstly, that autonomy is to be understood as some degree of independence of the AI system from the human operator. However, this clarification fails to consider that most systems today possess at least the minimum degree of independence associated with process automation. Just think of your spam filter. Yes, of course, we can go check the spam filter and see what the algorithm has sorted out as “spam”. We can also choose to override the algorithmic label. The main point, however, is that the algorithm independently sorted an incoming email into the spam folder, which may also mean you saw the email five days later than you otherwise would have. Not that anyone is complaining, as this situation is still preferable to receiving all the spam emails in our regular folder. Still, if any degree of autonomy is sufficient to satisfy this criterion, then it may very well be the case that even very simple programs we have been using for years, or even decades, fulfil it.
Secondly, in terms of adaptiveness, Recital 12 clarifies that it refers to self-learning capabilities, which allow the system to change while in use. Here, one might be tempted to sigh in relief, as many systems do not have such capabilities. However, this is where the AI Act definition crucially deviates from the OECD one, making the material scope of the AI Act virtually unlimited. While the OECD definition demands that AI systems be adaptive, the AI Act merely states that these systems “may exhibit adaptiveness”, meaning that they do not necessarily have to. To continue with our previous example, this implies that even our old-school spam filters, which do not improve over time but do sort our emails automatically, still fall within the definition, as adaptiveness is apparently not a decisive factor.

The third criterion is also clarified in the Recital. Inference should be interpreted in light of development techniques that enable it, which include “machine learning approaches that learn from data how to achieve certain objectives, and logic- and knowledge-based approaches that infer from encoded knowledge or symbolic representation of the task to be solved.” Furthermore, inference is not a specific feature of artificial intelligence but a general process used in many fields of science, philosophy, and daily life, such as statistical calculation or medical diagnosis. Inferences, according to the international conception of the term,1 are used to draw conclusions from data and models, for example, in predicting outcomes or classifying data. In machine learning, the inference phase is when the trained model is used to make new predictions or decisions. While this specific application is technical, the underlying process of reasoning exists in many other scientific and practical disciplines. Unfortunately, this again fails to serve as a distinctive criterion between many traditional systems used since the nineties and an “AI system”.
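The training/inference distinction described above can be sketched in a few lines of code. The following minimal word-frequency classifier is purely illustrative (the training data and scoring rule are invented for this example): a “training” step derives parameters from labelled examples, and the “inference” step then uses those learned parameters, rather than human-written rules, to classify new inputs.

```python
from collections import Counter

# Illustrative, invented training data: labelled example emails.
train_data = [
    ("win free money now", "spam"),
    ("claim your free prize", "spam"),
    ("lunch meeting tomorrow", "ham"),
    ("project status update", "ham"),
]

# Training phase: learn from data which words indicate spam or ham.
spam_words, ham_words = Counter(), Counter()
for text, label in train_data:
    (spam_words if label == "spam" else ham_words).update(text.split())

# Inference phase: the learned word counts, not rules written by a
# human, determine how a previously unseen input is classified.
def classify(text: str) -> str:
    spam_score = sum(spam_words[w] for w in text.split())
    ham_score = sum(ham_words[w] for w in text.split())
    return "spam" if spam_score > ham_score else "ham"

print(classify("free money prize"))       # → spam
print(classify("status of the meeting"))  # → ham
```

The point of the sketch is only to make the two phases visible: once the counts are learned, the classification logic is no longer a set of rules “defined solely by natural persons”.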
The fourth component, which involves influencing the AI system’s environment, is not further clarified. However, it is fairly safe to say that integrating any kind of system into anything will necessarily influence that system’s environment. The environment is to be understood as “the contexts in which the AI systems operate, whereas outputs generated by the AI system reflect different functions performed by AI systems”. This can encompass practically anything, because as soon as we implement a system into existing processes, we do so precisely to influence those processes by making them more efficient, simpler, faster, more user-friendly, etc. Furthermore, when humans use an AI system for any given purpose, the system will at the very least steer the human thought process, thus also exerting influence over it. Since no other threshold is attached to this condition, such as the influence being decisive or even just major, this criterion can be considered fulfilled by any system automating any part of any process, as it will always influence at least that one part of the process.
One straw to grasp at here is the part of Recital 12 stating that AI systems are, and should be, viewed separately from simpler traditional software systems or programming approaches. This means that the definition should not cover systems based on rules defined solely by natural persons to automatically execute operations. At least here one can argue that our spam filter is not AI so long as someone hardcoded all the “trigger words” that result in an email being designated as “spam”. Needless to say, no one actually does that anymore.
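For contrast, the carve-out for “rules defined solely by natural persons” can be illustrated with a deliberately old-fashioned filter. In the sketch below (trigger words and threshold invented for illustration), every classification rule is hardcoded by a human and nothing is learned from data, which is arguably exactly the kind of traditional program Recital 12 seeks to exclude.

```python
# A purely rule-based spam filter: every rule below was written by a
# human, and nothing is inferred from data. Under Recital 12, such
# systems arguably fall outside the AI system definition.
# The trigger words and threshold are invented for illustration.
TRIGGER_WORDS = {"viagra", "lottery", "prince", "winner"}

def is_spam(email_text: str, threshold: int = 2) -> bool:
    """Flag an email as spam if it contains at least `threshold`
    hardcoded trigger words."""
    words = email_text.lower().split()
    hits = sum(1 for w in words if w in TRIGGER_WORDS)
    return hits >= threshold

print(is_spam("Dear winner you won the lottery"))  # → True
print(is_spam("Meeting moved to 3pm"))             # → False
```

The behaviour of this program is fully determined by its author; replacing `TRIGGER_WORDS` with parameters learned from labelled emails is precisely the step that moves it back toward the AI system definition.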
Finally, there has been a lot of theoretical discussion about this definition and its delimitation. Some authors present novel and somewhat creative criteria for differentiating between traditional systems and AI systems, while others reinterpret parts of the previously analysed definition to allow for a reasonable delimitation from a business perspective, or even include use-case examples and their thorough examination.2 While all these contributions are useful, we propose a different and much more efficient approach:
Whenever you have to rack your brain over whether you are dealing with an AI system within the meaning of the AI Act, you most likely are.
Many may be unhappy with or surprised by our simple interpretation. However, we consider it useful for several reasons:
– Any guidelines from the Commission based on Article 96 of the AI Act, or from the authorities, on what is or is not an AI system will take a while to become available.
– Not all AI systems are subject to all the obligations under the AI Act, so wasting too much time on this question might be counterproductive.
– Designating your system as an AI system will also allow you to market it as an “AI system”, which tends to sell better than a plain old traditional software system.
– Furthermore, there are already other, very concise and standardized definitions of what an “AI system” is from an engineering perspective that support our view (more on this below).
Finally, there are also AI systems that, although they fall under the definition of an AI system, are not regulated under the AI Act. These exceptions are mentioned in Article 2 and include:
– AI systems that are neither placed on the market nor put into service in the Union, if their outputs in the Union are used exclusively for military, defense, or national security purposes (Article 2(3)),
– AI systems used by authorities in third countries or international organizations as part of international cooperation or international agreements in the field of law enforcement and judicial cooperation with the Union or with one or more Member States (Article 2(4)),
– AI systems or AI models, including their outputs, developed and deployed solely for the purpose of conducting scientific research (Article 2(6)).
(2) A Deeper Dive: Qualitative and Quantitative Aspects of AI Systems
After this brief introduction to the topic, we would like to contribute some necessary and helpful input to the current debate, derived from legal systematics, technology, and business process modelling. The limitations that Recital 12 supposedly places on the unclear wording of the term “AI system” must be viewed critically. Not only from a temporal perspective, where they read like an appeal to the legislator to change what has already been done, but also in light of the clear case law of the CJEU. Most recently, in case C-307/22 (Judgment of 26 October 2023 – DW v. FT, para. 44), the court held that “the preamble to an act of EU law has no binding legal force and cannot be relied on either as a ground for derogating from the actual provisions of the act in question or for interpreting those provisions in a manner that is clearly contrary to their...
Published (per publisher) | 25.11.2024 |
---|---|
Series | InTeR-Schriftenreihe |
Place of publication | Frankfurt am Main |
Language | English |
Subject area | Law / Commercial law |
ISBN-10 | 3-8005-9773-X |
ISBN-13 | 978-3-8005-9773-4 |