Tiago Sérgio Cabral*
* PhD Candidate at the University of Minho | Researcher at JusGov | Project Expert for the Portuguese team within the “European Network on Digitalization and E-governance” (ENDE). The author’s opinions are his own.
Image credit: Salino01, via Wikimedia Commons
Introduction to AI literacy under the AI Act
Under Article 4 of the AI Act (headed “AI literacy”), providers and deployers are required to take “measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used”. The concept of AI literacy is defined in Article 3(56) of the AI Act, as we will see below.
Article 4 is part of a wider effort by the AI Act to promote AI literacy, which is also reflected in other provisions such as those addressing human oversight, the requirement to draw up technical documentation or the right to an explanation of individual decision-making. Article 4 focuses on staff and persons involved in the operation and use of AI systems. As such, the main consequence of this provision is that providers and deployers are required to provide training for staff and persons involved in the operation and use of AI systems, allowing them to obtain a reasonable level of understanding of the AI systems used within the organization, as well as general knowledge about the benefits and dangers of AI.
It is important to distinguish between the rules on AI literacy and the requirements on human oversight, in particular Article 26(2) of the AI Act. Under this provision, deployers will be required to assign human oversight of high-risk AI systems to natural persons who have the necessary competence, training and authority, as well as the necessary support. The level of knowledge and understanding of the AI system required of the human overseer will be deeper and more specialised than what is required of all staff in the context of AI literacy. The human overseer must have specialised knowledge about the system that he/she is overseeing. The persons subject to AI literacy obligations require more general knowledge about the AI systems used in the organization, particularly those with which the staff is engaging, including an understanding of the benefits and dangers of AI. The level of AI literacy required in organizations that develop or deploy high-risk systems will, naturally, be higher than in organizations that deploy, for example, systems subject to specific transparency requirements. In any case, it will still likely be lower than what is required of a human overseer (although, unlike the rules on human oversight, AI literacy is not limited to high-risk systems).
The scope of Article 4 of the
AI Act
AI literacy is a sui generis obligation under the AI Act. It is systematically placed within “Chapter I – General Provisions”, and thus disconnected from the risk categorization for AI systems. This can result in significant challenges in interpreting the scope of the obligations arising from Article 4 of the AI Act.
In fact, an isolated reading of this provision could lead to the conclusion that AI literacy obligations apply to all systems that meet the definition of an AI system under Article 3(1) of the AI Act. Since the definition in Article 3(1) of the AI Act is extremely broad in nature, this would result in a significant expansion of the scope of the AI Act, far beyond the traditional pyramid composed of (i) prohibited AI (Article 5 of the AI Act), (ii) high-risk AI systems (Article 6 of the AI Act); and (iii) AI systems subject to specific transparency requirements (Article 50 of the AI Act). The risk categorization also includes general-purpose AI models and general-purpose AI models with systemic risk, but the AI literacy obligation under Article 4 appears to apply directly only to systems. Indirectly, providers of AI models will be required to provide providers of AI systems that integrate their models with sufficient information to allow the latter to fulfil their literacy obligations (see, inter alia, Article 53(1)(b) of the AI Act).
The abovementioned interpretation does not hold, however, if we opt for a reading of Article 4 of the AI Act that adequately considers the definition of AI literacy under Article 3(56) of the AI Act. Article 3(56) of the AI Act lays down that AI literacy means the “skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of [the AI Act], to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause”.
Providers and deployers of AI systems that are not part of the abovementioned categorization are not, strictly speaking, subject to any obligations related to those AI systems – at least none arising from the AI Act. Likewise, the AI Act does not grant affected persons any rights in relation to systems that are not part of the “classic” categorizations. Given that the existence of rights and obligations are the building blocks upon which AI literacy measures are to be designed, if they do not exist, the logical conclusion is that the definition cannot be applied in this context. If the definition under Article 3(56) of the AI Act cannot be applied, Article 4 of the AI Act, which depends entirely on this definition, cannot apply either.
Enforcement
In addition to issues around the interpretation of its scope, the enforcement of Article 4 also raises important questions. Article 99(3)-(5) of the AI Act does not establish fines for the infringement of the AI literacy obligation. As such, organizations cannot be fined for failing to fulfil their AI literacy obligations based on the AI Act (if considered in isolation). Market surveillance authorities have enforcement powers that do not entail financial sanctions, but it is still an odd situation for the AI Act to establish an obligation without a corresponding fine, which is arguably the key sanctioning tool. It also remains to be seen whether market surveillance authorities will prioritize an obligation that the EU legislator did not consider important enough to merit inclusion in Article 99(3)-(5) of the AI Act.
In addition, Member States may use their power under Article 99(1) of the AI Act to establish additional penalties and, through these, ensure the enforcement of Article 4 of the AI Act. However, this approach risks fragmentation and inconsistency, which is undesirable.
Private enforcement is also a possibility, but whether in the context of tort liability or product liability, it seems to us that proving damages and the causal link between the behaviour of the AI system and the damage may continue to be major obstacles to the success of any attempts. In this context, it is important to note that the new EU Product Liability Directive (applicable to products marketed or put into service after 9 December 2026) contains relevant provisions that may make private enforcement against producers easier in the future. In particular, Article 10(3) of the Product Liability Directive establishes that “the causal link between the defectiveness of the product and the damage shall be presumed where it has been established that the product is defective and that the damage caused is of a kind typically consistent with the defect in question”. In addition, Article 10(4) addresses situations where claimants face excessive difficulties, in particular due to technical or scientific complexity, in proving the defectiveness of the product or the causal link between its defectiveness and the damage, by allowing courts to establish a presumption. However, in this scenario, linking a breach of the obligation to ensure AI literacy to a defect in a product or a specific instance of damage in any satisfactory manner seems challenging and unlikely to be accepted by courts.
Finally, although the AI literacy obligations technically became applicable on 2 February 2025, the deadline for the appointment of Member State authorities is 2 August 2025. As such, any attempt at enforcement will likely be limited during this period.
Identification of AI systems as a preliminary step for the assessment of literacy obligations
Although, as noted above, literacy obligations are not applicable to systems outside of the risk categorization of the AI Act, from an accountability perspective, providers and deployers who want to rely on this exception should still carry out an evaluation of the AI systems for which they are responsible as a preliminary step. Only after identifying the AI systems and evaluating whether they fall outside the risk categories established by the AI Act can providers and deployers know with an adequate level of certainty that they are not subject to the literacy obligations under Article 4 of the AI Act.
The GDPR as an alternative source of literacy obligations
For providers and deployers who are acting as data controllers under the GDPR, it is important to note that the non-applicability of Article 4 of the AI Act does not exclude literacy and training obligations that may arise under other EU legal instruments. In particular, for AI systems that depend on the processing of personal data for their operation, adequate training of staff may be required to comply with controller accountability obligations and to ensure the effectiveness of the measures implemented by the controller to ensure lawful processing of personal data in the context of the organization’s use of AI (Articles 5(2) and 24 of the GDPR). Considering the wording of Article 39(1)(b) of the GDPR, data protection officers should be involved in the evaluation of training requirements.