ABSTRACT
Praised by many as a landmark ruling with wider impact on AI-powered automated decision-making systems across sectors, and equally criticised for creating systemic fissures in the GDPR framework, the recent judgment of the Court of Justice of the European Union in the SCHUFA (scoring) case (C-634/21), concerning the interpretation of Article 22(1) GDPR, deserves to be scrutinised in light of its potential implications for EU labour law and the regulation of algorithmic management practices. Employers increasingly rely on automated systems to support or fully automate their management decisions. The use of automated decision-making systems in the recruitment and/or employment context largely falls back on the general data protection rules. However, there are limits, ambiguities, and potential gaps regarding their application that could undermine workers' protection and bring regulatory compliance into question. Emerging EU legislation, such as the proposed Platform Work Directive (COM(2021) 762), aims to provide specific obligations concerning algorithmic management. Even if this legislation is passed, it will apply only in the context of platform work, and not in traditional employment, which could result in different levels of protection. The question of the nature and scope of Article 22 GDPR in the employment context therefore remains relevant. This paper will critically evaluate the findings from the SCHUFA (scoring) case and explore their impact on the current and future regulation of algorithmic management. The aim is to propose workable solutions that harness the benefits of legitimate algorithmic management practices while safeguarding workers' rights in the AI-driven world of work.
Key words: algorithmic management, automated decision-making, EU labour law, Artificial Intelligence Act, Platform Work Directive
JEL classification: J81, K31
1 Introduction
In the AI-driven and digitalised world of work, the intersection between labour law and data protection law is inevitable. This is especially true in view of increasingly used algorithmic management practices. Algorithmic management can be defined in technical terms as "the use of computer-programmed procedures for the coordination of labour input in an organisation" (Baiocco et al., 2022). In other words, it means a delegation of managerial functions to automated systems (Jarrahi et al., 2021). It involves a set of tools and "workplace practices that rely on digital devices or software to either partially or totally automate functions traditionally exercised by managers and supervisors" (Aloisi and Potocka-Sionek, 2022). In more practical terms, the definition highlights algorithmic management's potential to automate "the full range of traditional employer functions, from hiring workers and managing the day-to-day operation of the enterprise through to the termination of the employment relationship" (Adams-Prassl et al., 2022).
It is not a "new thing", but a "continuation of very long historical trends of rationalisation or bureaucratisation of economic activity and the organisation of work" (Baiocco et al., 2022). However, it has an enormous disruptive potential, as it combines technological development with the ability to collect, store, process and build massive amounts of data into organisational and work processes, thus reshaping the power balances at the workplace (Baiocco et al., 2022).
Algorithmic management has outgrown the boundaries of its original deployment in the context of platform work,2 transitioning seemingly effortlessly into conventional labour arrangements (see Aloisi and Potocka-Sionek, 2022; Adams-Prassl et al., 2022). Algorithmic management practices, such as work allocation, direction, real-time monitoring, evaluation, "nudging", etc. are increasingly used in the traditional or conventional work setting (Wood, 2021). The use of artificial intelligence (AI) tools that can boost these practices is growing in human resources management (see De Stefano and Wouters, 2022; Lechardoy, López Forés and Codagnone, 2023), as evident from the overview of available literature (Palos-Sánchez et al., 2022). As rightly claimed, the regulation of algorithmic management falls under multiple legal domains (Abraha, 2023). Where labour law currently fails to offer adequate protection to workers, data protection law, as well as anti-discrimination law or occupational health and safety law, should step in to cover the gaps, where possible. This paper aims to identify and analyse the interplay of the various existing and emerging EU rules on the protection of workers in the context of algorithmic management practices.
In the following section, we will briefly outline the subtleties of the key terms used (2), and proceed with analysing the existing and emerging EU legislation concerning algorithmic management practices. We will first focus on the prohibition of automated individual decision-making under the GDPR, based on the recent interpretation of Article 22(1) GDPR by the Court of Justice of the EU (CJEU) (2.1). We will then turn to the proposed Draft Platform Work Directive (DPWD), which aims to regulate algorithmic management in platform work (2.2). The third part of our analysis will explore algorithmic management from a different angle: the regulation of AI systems deployed in this context by the freshly adopted AI Act3 (2.3). We will then test the coherence between these instruments on a hypothetical example (3), and conclude by offering some perspectives for further development (4).
The methodology is based on desk research and analysis of academic literature, legal sources and case law.
2 Algorithmic management and automated decision-making
Algorithmic management and automated decision-making should not be used as completely synonymous terms (see, to the contrary, Abraha, 2023). Algorithmic management implies a symbiosis between technological and social forces (Jarrahi et al., 2021), and equating it with automated decision-making puts too much emphasis on the technological part, overlooking some of its important features. First, the level of automation and human intervention in algorithmic management may vary (Baiocco et al., 2022; Wood, 2021). Algorithmic management relies on data and metrics, as well as diverse technological and computing tools, but there are various degrees to which they feed into and influence the final decision affecting the workers' position (various degrees of automation, from full to supportive). Especially in the conventional work setting, the role of human actors might be more pronounced than in the context of platform work. On top of that, there is a risk that automated decision-making might be assimilated with solely automated decision-making, which is not correct either.
Second, automated decision-making may have a distinct legal meaning in various legal contexts (see Rodriguez de las Heras Ballell, 2022; Hofmann, 2023), whereas algorithmic management (still) does not (see infra 2.2.). Automated decision-making does not necessarily involve personal data processing either. But when it does, distinct legal rules on automated processing and solely automated processing apply in the field of data protection, notably those on automated individual decision-making within the meaning of Article 22(1) of the General Data Protection Regulation (GDPR), which is prohibited unless exceptions apply.
2.1 Automated decision-making and GDPR
In data protection law, the GDPR is relevant for the automated processing of personal data, for example in the context of profiling and automated individual decision-making. Under the GDPR, profiling is a specific technique which relies on automated processing of personal data with the objective of evaluating certain personal aspects, such as predicting performance at work, creditworthiness, etc. (Article 4(4), GDPR). It is important to highlight that there is no general ban on profiling: data processing for the purpose of profiling is permitted (Recital 71 GDPR; Paal, 2023). Profiling does not exclude human input (Bygrave, 2020b). However, when a profiling result is the basis for solely automated decision-making, Article 22(1) GDPR kicks in. Solely automated decision-making refers to the ability to make decisions by technological means without human involvement (WP29, 2018). This means that automated decision-making may partially overlap with or result from profiling, and that they can, but do not necessarily have to be, different activities (WP29, 2018).
In the context of automated individual decision-making, an individual has the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her (Article 22(1), GDPR). The theoretical disagreement whether this right is a qualified prohibition, protecting data subjects from being "mere objects of algorithmic-based decision" (Paal, 2023), or a right that has to be effectively exercised by the data subject (Bygrave, 2020a; Sancho, 2020), has been resolved by the CJEU in the SCHUFA (scoring)4 (C-634/21) case in favour of the former: Article 22(1) GDPR "lays down a prohibition in principle, the infringement of which does not need to be invoked individually by such a person" (SCHUFA judgment: para. 52). The exception from this provision applies if the decision (a) is necessary for entering into, or performance of, a contract between the data subject and a data controller; (b) is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject's rights and freedoms and legitimate interests; or (c) is based on the data subject's explicit consent (Article 22(2), GDPR). In the cases referred to in points (a) and (c), the data controller must implement suitable measures to safeguard the data subject's rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision (Article 22(3), GDPR). All three alternative conditions require further interpretation, and may be problematic from the labour law perspective.
For example, the imbalance of power which characterises employment relationships challenges the assumption that explicit consent by the data subject (employee) is provided freely and with a thorough understanding of its implications (EDPB, 2020; Adams-Prassl et al., 2022). Stricter rules apply to the justification of decisions based on special categories of personal data (referred to in Article 9(1) GDPR, such as data revealing racial or ethnic origin) (Article 22(4), GDPR).
As with other personal data, the data subject has the right to obtain information from the controller on whether his or her personal data is being processed, and to access the personal data and information prescribed under Article 15 GDPR. This includes information on the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) GDPR and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject (Article 15(1)(h), GDPR) (on the effective exercise of the right to information under the GDPR in general, see Wachter, Mittelstadt and Russell, 2018).
This framework is relevant for the context of algorithmic management. Evidence from the EU Member States shows increasing importance and interconnectedness between labour law and data protection law concerning the impact of automated decision-making practices at work (Barros Vale and Zanfir-Fortuna, 2022). National courts and data protection agencies tend to recognise the complexity of legal and factual considerations associated with the application of GDPR provisions and principles on automated decision-making in the field of labour law, but interpretations of the relevant legal framework are divergent (Hiessl, 2023). This is why it is important to have at least some guidance from the CJEU. However, the first time the CJEU had the opportunity to clarify and interpret Article 22(1) GDPR was in December 2023, in the SCHUFA case (C-634/21). This interpretation is liable to have an important impact on algorithmic management and the application of automated monitoring and decision-making systems at work, even though its factual background involves automated decision-making and scoring in the context of loan applications.
2.1.1 The SCHUFA case: the scoring result as a decision within the meaning of Article 22(1) GDPR
The SCHUFA case originates from a dispute involving the applicant, OQ, and the German Land Hesse, whose appointed Data Protection Officer refused to order Schufa Holding AG to grant OQ access to and erasure of her personal data. Schufa is a credit scoring company that provides its partners with information on the creditworthiness of third parties, consumers in particular. Schufa establishes its prognosis of the probability of future behaviour (score), such as the repayment of a loan, based on certain characteristics of that person, applying mathematical and statistical procedures. The outcome of such "scoring" is based on the assumption that, by assigning a person to a group of other persons with comparable characteristics who have behaved in a certain way, similar behaviour can be predicted. Relying on a negative score provided by Schufa, a bank refused to grant a loan to OQ. The underlying question was whether the establishment of a probability value, such as the credit scoring in the case at hand, constitutes automated individual decision-making under Article 22(1) GDPR, and if so, whether it covers the activity of a company such as Schufa, which does not grant loans itself. The referring national court (Verwaltungsgericht Wiesbaden) described a strong reliance of the bank on the scoring results, and highlighted a very realistic risk of a gap in legal protection if Article 22(1) GDPR were not applicable until the third party (a bank) takes a decision with regard to the data subject (i.e., to grant or refuse a loan). If that were the case, the data subject would be left entirely without protection because, first, Schufa would not be obliged to grant the data subject access to the personal data and information about the logic of automated decision-making under Article 15(1)(h) GDPR, and second, a third party (e.g. a bank refusing a loan) would not be able to provide the data subject with such information, simply because it does not have it.
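The group-based scoring mechanism described above can be sketched in a few lines of entirely hypothetical code: a person is assigned to a group of others with comparable characteristics, and the group's observed repayment rate becomes the probability value on which a third party may draw strongly. All characteristics, records, and the cutoff below are invented for illustration and do not reflect Schufa's actual methodology.

```python
# Illustrative sketch of group-based probability scoring (hypothetical data).
from collections import defaultdict

# Invented historical records: (group characteristics, whether the loan was repaid)
history = [
    (("renter", "short_tenure"), False),
    (("renter", "short_tenure"), False),
    (("renter", "short_tenure"), True),
    (("owner", "long_tenure"), True),
    (("owner", "long_tenure"), True),
]

def build_scores(records):
    """Probability value per group = share of the group that repaid."""
    totals = defaultdict(lambda: [0, 0])  # group -> [repaid, seen]
    for group, repaid in records:
        totals[group][0] += int(repaid)
        totals[group][1] += 1
    return {g: repaid / seen for g, (repaid, seen) in totals.items()}

def score(person_group, scores, cutoff=0.5):
    """The probability value and the 'suggestion' a third party may draw strongly on."""
    p = scores.get(person_group, 0.0)
    return p, ("refuse" if p < cutoff else "grant")

scores = build_scores(history)
p, suggestion = score(("renter", "short_tenure"), scores)
print(round(p, 2), suggestion)  # 0.33 refuse
```

The sketch makes the legal point tangible: the "score" is computed without any human involvement, and if the bank applies the cutoff mechanically, the probability value itself predetermines the outcome.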
Against this factual and legal background, the Court chose to adopt a broad reading of Article 22(1) GDPR, and interpreted the concept of "automated individual decision-making" from that provision to include the automated establishment of a probability value based on personal data, where a third party draws strongly on that probability value to establish, implement or terminate a contractual relationship with that person.
This reasoning is supported by literal, contextual and teleological interpretation. According to the Court, the wording of this provision entails three cumulative conditions for its application: there must be a "decision" that is based "solely on automated processing, including profiling", which "produces legal effects concerning [a person] or similarly significantly affects him or her". Although the concept of "decision" is not defined in the GDPR, it is apparent from the wording that it is very broad, capable of including various acts affecting the data subject in many ways. Examples of "decisions", as evident from Recital 71 GDPR, include the automatic refusal of an online credit application, or e-recruiting practices without any human intervention. Concerning the second condition, it was common ground in this particular case that the activity performed by Schufa is profiling within the meaning of the definition in Article 4(4) GDPR. As regards the third condition, the factual finding of the referring court, namely that a third party "draws strongly" on the probability value established by Schufa, where an insufficient probability value leads, in almost all cases, to the refusal to grant a loan, is a viable indication that the probability value itself affects the data subject significantly. This is further corroborated by the context, objectives and purpose of the GDPR. Article 22(1) GDPR gives the data subject the right not to be subject to a decision based solely on automated processing, including profiling. The Court adheres to the widely accepted doctrinal position (see Paal, 2023) that this is a prohibition in principle, which does not have to be invoked individually by each person. A combined reading of Article 22(1) and Recital 71 GDPR allows the adoption of a decision based solely on automated processing only in the cases referred to in Article 22(2) GDPR, i.e.
where a decision is necessary for entering into or performance of a contract, where it is authorised by EU or Member State law, or where the data subject provides explicit consent. Such exceptions should be accompanied by suitable measures to safeguard the data subject's rights and freedoms and legitimate interests, including at least the right to obtain human intervention, to express the data subject's point of view, and to contest a decision. These enhanced requirements for the lawfulness of automated decision-making are explained by the purpose of Article 22 GDPR, which is to protect individuals against risks to their rights and freedoms posed by such activities. Therefore, a restrictive interpretation of Article 22 GDPR, whereby the establishment of a probability value would be considered merely as a preparatory act, and only the act of a third party as a "decision" within the meaning of this provision, would risk circumventing this guarantee and creating a gap in legal protection. As highlighted by AG Pikamäe, when the final decision is purely formal because "a score established by a credit information agency and transmitted to a financial institution generally tends to predetermine the financial institution's decision to grant or refuse to grant credit to the data subject", the score itself must be regarded as a "decision" within the meaning of Article 22(1) GDPR (Opinion of AG Pikamäe, 2023).
2.1.2 Implications of the SCHUFA judgment for labour law and algorithmic management
The SCHUFA judgment brought an important clarification of Article 22(1) GDPR, but this is just the beginning. What we know after SCHUFA is that the concept of "a decision based solely on automated processing" in that provision is broad enough to encompass credit scoring, because the scoring is capable of affecting the data subject when the final decision "draws strongly" on it. It is therefore a broad concept, encompassing a "number of acts which may affect data subject in many ways" (SCHUFA judgment: para. 46). There is also instruction on how to assess whether national legal rules on profiling could be deemed an exception to the prohibition of individual automated decision-making. Given that the Court was constrained by the facts of the case, other important concepts are still in need of clarification, despite the guidance offered by the Article 29 Data Protection Working Party on automated individual decision-making and profiling (WP29, 2018). What we do not know in much detail is to what extent human involvement affects the status of "solely automated" decisions, and what, if any, legal weight should be ascribed to "legal effects" produced by solely automated individual decisions, as opposed to the "similarly significant" effects of such decisions. Should these two concepts always be viewed jointly, as the judgment seems to suggest (SCHUFA judgment: para. 50)? What characterises a "significant" effect? Could it, in the context of employment relations, be taken to mean any decision which significantly affects the core interests of workers, i.e. work opportunities and wages, as already emerging from interpretations by some national courts (Hiessl, 2023)? How should one interpret the exceptions to the prohibition, provided in Article 22(2) GDPR, especially concerning the data subject's consent?
This is not just a theoretical debate, as evidence from national jurisdictions shows that national courts do struggle with these concepts, especially in the context of (platform) work (Hiessl, 2023). This will be especially pertinent for algorithmic management practices in platform work, because that business model relies heavily on automated decisions based on data processing and taken in real time, and human involvement in and monitoring of these decisions is in many cases either technically impossible, or just a formality at best. A way to overcome this, as Adams-Prassl et al. (2023) suggest, is not simply to ban fully automated practices (human in the loop), but also to provide for human involvement at other stages of the decision-making process (review, information and consultation, and impact assessment). In any case, the scope of Article 22(1) GDPR will affect any such consideration (see Martini, 2020).
In terms of the GDPR and consumer protection, some authors point to inconsistencies in adopting a broad interpretation of "decision" in the SCHUFA judgment, concentrating on the specific relations between credit institutions and credit scoring companies. It is argued that the gap in protection ensuing from a stricter interpretation of Article 22(1) GDPR could be filled by a more proactive role of credit institutions in ensuring that customers have access to the relevant information on the processing of their data, through contractual relations with credit scoring companies, if necessary (Paal, 2023). It is also argued that it makes more sense for customers to apply for and obtain the relevant information from the credit institution, without having to go through the credit scoring company first, which partly contradicts the consumer protection guarantees (Paal, 2023). While many such objections make sense, it is necessary to turn to the broader implications of this decision. Its impact reaches beyond data protection law and consumer protection considerations, and is especially relevant in the context of AI applications for algorithmic management and automated decision-making. For example, AI recruiting tools have become omnipresent (see Kelly, 2023). They work in a similar manner: think of a programme going through hundreds of CVs to suggest the most suitable candidates for a job. It will automatically exclude certain job applications, and by doing so, it will do more than just match the applicant's skills with the job description. The inner programming is the tricky part, and AI-empowered technology goes beyond keyword matching and into deep learning techniques, resulting in transparency and explainability issues (Abuladze and Hasimi, 2022; Hunkenschroer and Luetge, 2022).
Even if it is claimed that a human makes the ultimate decision regarding, for example, whether and whom to call to a job interview, "suggestions" made by the automated system could be considered "decisions" in the light of Article 22(1) GDPR and the SCHUFA judgment, whether they are transmitted to the employer by a third party or the employer applies a system bought from and developed by a third party (see similarly, Aloisi and De Stefano, 2022). Issues could also arise in connection with the assessment of the degree of significance of the effect of screening decisions on some applicants (Parviainen, 2022). Consider automated worker monitoring systems that track workers' productivity and assign tasks or consequences based on this input: is a human truly actively involved in considering the results of monitoring, or merely serving as a passive and formal "vessel" for the ultimate decision driven primarily by automation? This becomes even more critical in real-time worker-customer matching and work allocation decisions in platform work.
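The screening scenario just described can be made concrete with a deliberately simplified, hypothetical sketch: the system ranks applications and the human reviewer only ever sees the top of the list, so candidates filtered out below the line are refused without any human involvement at all. The keywords, CV texts, and scoring rule are invented for illustration; real tools use far more opaque techniques.

```python
# Illustrative sketch of automated CV screening (hypothetical keywords and CVs).
REQUIRED = {"python", "sql"}
PREFERRED = {"etl", "airflow"}

def screen(cv_text: str) -> float:
    """Score a CV; 0.0 means automatic exclusion before any human review."""
    words = set(cv_text.lower().split())
    if not REQUIRED <= words:          # hard filter: a fully automated refusal
        return 0.0
    return 1.0 + len(PREFERRED & words)

def shortlist(cvs: dict, top_n: int = 2) -> list:
    """Rank candidates; only this shortlist ever reaches a human reviewer."""
    ranked = sorted(cvs, key=lambda name: screen(cvs[name]), reverse=True)
    return [n for n in ranked if screen(cvs[n]) > 0][:top_n]

cvs = {
    "A": "python sql etl airflow",
    "B": "python sql",
    "C": "java spring",                # excluded without human involvement
}
print(shortlist(cvs))  # ['A', 'B']
```

Candidate C never appears before a human, which illustrates why the system's "suggestion" may itself qualify as a decision within the meaning of Article 22(1) GDPR.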
2.2 Automated decision-making in employment and the regulation of platform work
As explained, apart from the mentioned Article 22(1) GDPR, there is currently no specific provision of EU labour law directly protecting workers from automated decision-making. Directive 2019/1152 on transparent and predictable working conditions prescribes minimum requirements concerning the workers' right to safe and transparent working conditions in view of the adaptation of the labour market to digital innovation. The assurances provided in the Directive, especially concerning the predictability of work schedules and work patterns, will have to be taken into account in the context of algorithmic management and automated decision-making. However, pending the adoption of the proposed Directive of the European Parliament and of the Council on improving working conditions in platform work (European Commission, 2021a; the Draft Platform Work Directive or DPWD), there is currently no specific legal regime applicable to highly automated work relations, such as those concerning platform work.
The Draft Platform Work Directive is currently in the legislative process. Drawing on specific legal challenges identified in practice (see, e.g. Agosti et al., 2023; Hauben et al., 2021), it will prescribe obligations for digital labour platforms in connection with automated monitoring and decision-making systems. Its general objective is to improve the working conditions and social rights of people working through platforms, while ensuring fairness, transparency and accountability in algorithmic management in the platform work context is one of its specific objectives. This includes protection from harm arising from algorithmic management practices, in view of their impact on income and working conditions, models of control and subordination, transparency and access to information and remedies, as well as their potential for gender bias and discrimination. The Commission's proposal refers specifically to "algorithmic management", which is the first time that this term will be used as such in legislation. It is important to mention, however, that "algorithmic management" in the proposed Draft Platform Work Directive is understood in narrow terms, confined to elements inherent to digital labour platforms' business models. There is no legal definition in the text itself, but the explanatory memorandum makes it clear that it refers to the "use of automated systems to match supply and demand for work" (European Commission, 2021a). Although many of the initially proposed provisions will probably be altered as a result of the negotiation process, it is useful to briefly analyse the provisions of Chapter III of the DPWD, entitled "Algorithmic management".
In the initial Commission's proposal, this Chapter comprises five articles (Articles 6-10, DPWD), which regulate the obligations of digital labour platforms to ensure transparency in the use of automated monitoring and decision-making systems; human monitoring of automated systems and their decisions; human review of significant decisions, including the corresponding right of platform workers to obtain an explanation from the digital labour platform for any such decision; information and consultation of platform workers or their representatives on algorithmic management decisions; as well as appropriate safeguards for genuinely self-employed platform workers in connection with automated systems. Additional guarantees were inserted in this Chapter during interinstitutional negotiations, along with amendments of the existing provisions, such as clearer limitations regarding the type of personal data to be processed by means of automated decision-making systems, and an express reference to the data protection impact assessment pursuant to Article 35 GDPR (European Parliament, 2024a).5
As evident, these are all crucial aspects of the digital platforms' business model, where important safeguards of workers' rights at EU level are currently inadequate. The explanatory memorandum particularly highlights the difficulty of drawing "the line between algorithmic decisions that do or do not affect workers in a sufficiently 'significant way'", and the ensuing difficulties in guaranteeing efficient legal protection (European Commission, 2021a). In that sense, the Draft Platform Work Directive should be compatible with, and should not prejudice, the rights and obligations under the GDPR. The GDPR itself provides for the possibility to enact specific rules to ensure the protection of workers' personal data in the context of employment, including the organisation of work (Article 88, GDPR), and in relation to platform work this will, ultimately, be provided in the context of the Draft Platform Work Directive (for a critique of this solution, see Ponce del Castillo, 2023). There are three obligations for digital labour platforms that are particularly relevant in the context of automated decision-making. The first concerns the transparency and information obligations, which are specific to the digital platform context (Article 6, DPWD), and add to those guaranteed under the Directive on transparent and predictable working conditions. The second is the obligation to ensure regular human monitoring and evaluation of individual decisions taken or supported by automated monitoring and decision-making systems on working conditions, including the guarantee of sufficient, trained, and competent human resources with the authority to perform this task, and protection against dismissal and other negative consequences if they override automated decisions (Article 7, DPWD).
The third is the human review of significant decisions, which establishes the right of platform workers to obtain an explanation from the digital labour platform for a decision taken or supported by automated systems that significantly affects their working conditions (Article 8, DPWD; European Commission, 2021a). This includes the possibility to discuss and clarify the facts, circumstances and reasons for such decisions with a human contact person at the digital labour platform; the obligation of platforms to provide a written statement of reasons for any decision to restrict, suspend or terminate the platform worker's account, to refuse remuneration for work performed by the platform worker, or affecting the platform worker's contractual status; as well as further safeguards (a substantiated reply, rectification of the decision, and compensation in case of infringement of the worker's rights).
While the proposed regulation of algorithmic management in the Draft Platform Work Directive is mostly welcome, many authors warn about the potential for abuse, which could undermine the original aims of the directive (Veale et al., 2023; Ponce del Castillo and Naranjo, 2022). The Draft Platform Work Directive will hopefully close the gap in protection that arises with certain automated decisions which do not result from personal data processing. For example, AI-powered demand predictions (taking into account, e.g. time of year, day, season, nearby events, weather forecasts, etc.) can shape working patterns and inform decisions on working schedules. In such cases, the GDPR would not apply, but the decision could nevertheless have a significant effect on the worker's status. However, the same protection would not extend to conventional working arrangements, which are outside the scope of the Draft Platform Work Directive. The only EU legal instrument that offers some reassurance for conventional working arrangements in such a case is Directive 2019/1152 on transparent and predictable working conditions, but only to a very limited extent.
Algorithmic management and automated decision-making thrive with the development of AI-powered systems. The potential harms of such applications are recognised in the newly adopted AI Act, the first binding legal instrument for the regulation of AI systems in the world. We now shift perspective and look at another avenue for safeguarding workers' rights before any individual automated decision has been made at all: ensuring the use of trustworthy AI systems in employment and workers management.
2.3 Automated decision-making in employment and the AI Act: (High-risk) AI systems used in employment and workers management?
Aside from the protections relating to automated decision-making in the context of algorithmic management itself, in the near future, with the adoption of the AI Act, the AI systems used in the context of algorithmic management will be subject to a set of rules for high-risk AI systems. Under the Draft AI Act (European Parliament, 2024b; European Commission, 2021b) it is made clear that data subjects continue to enjoy all the rights and guarantees awarded under the GDPR, including the rights related to solely automated individual decision-making, including profiling (Recital (10), Draft AI Act). In that sense, the harmonised rules for the placing on the market, the putting into service and the use of AI systems under the proposed AI Act should facilitate the effective implementation and enable the exercise of the data subjects' rights and other remedies guaranteed under Union law on the protection of personal data and of other fundamental rights.
The AI system under the AI Act is defined as "a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments" (Article 3(1), Draft AI Act). It is important to highlight that a key characteristic of AI systems falling under the AI Act is their capability to infer, which poses particular privacy harms (see Solove, 2024; Keats Citron and Solove, 2021). This particular feature distinguishes them "from simpler traditional software systems or programming approaches", and the notion should therefore "not cover systems that are based on the rules defined solely by natural persons to automatically execute operations" (Recital (12), Draft AI Act). It is imaginable, therefore, that some technological tools and programs used in algorithmic management practices might not fit the definition of an "AI system", rendering the AI Act inapplicable. The AI Act adopts a proportionate risk-based approach, differentiating between uses of AI that create an unacceptable risk, which are prohibited; a high risk, which are subject to ex ante impact and conformity assessments; a limited risk, which entail certain transparency obligations; and a minimal risk, which are left to voluntary industry standards (see more in Pošćić and Martinović, 2022).
The general criteria for high-risk AI systems are prescribed under Article 6(1) of the Draft AI Act, and include standalone systems as well as product components. Apart from this general reference, which is capable of encompassing a wide variety of AI systems, certain AI systems enumerated in Annex III are always considered high-risk because of their function or field of use (Article 6(2), Draft AI Act). They include, among others, AI systems "in the field of employment, workers management and access to self-employment", which are "(a) intended to be used for recruitment or selection of natural persons, notably to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates; or (b) intended to be used to make decisions affecting terms of the work related relationships, promotion and termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics and to monitor and evaluate performance and behaviour of persons in such relationships" (Recital (57) and Annex III(4), Draft AI Act). The risks associated with the use of such systems include discrimination and the violation of the fundamental rights to data protection and privacy, and they should therefore be subject to the strict(er) legal regime. Although there is a derogation prescribed in Article 6(3) of the Draft AI Act, whereby an AI system will not be considered high-risk if one or more prescribed criteria are fulfilled, it is expressly provided that an AI system that performs profiling is always considered high-risk and cannot benefit from the derogation. This leads us back to the definition of "profiling" under the GDPR, since the AI Act itself refers to profiling as "any form of automated processing of personal data as defined in point (4) of Article 4 of Regulation (EU) 2016/679 (…)" (Article 3(52), Draft AI Act). As explained above (supra 2.1.),
profiling and automated decision-making may, but do not necessarily, overlap.
What does this mean for the application of automated decision-making and profiling involving AI systems, as defined in the AI Act, in the context of employment relationships? It means that all such systems will fall in the high-risk category, but some of them (those not involving profiling) will be able to rely on the derogation prescribed under Article 6(3) of the Draft AI Act and will be subject to more lenient conditions for the placing on the market and use. For example, automated task assignment and scheduling might or might not involve profiling: is the task assigned to the worker whom the system predicts will be the most suitable to perform it based on his or her personal aspects (which includes profiling), or is the task assigned to any available worker (without profiling)? In anticipation of the potential legal issues (see, e.g. Kiesow Cortez and Maslej, 2023), it should be emphasised that high-risk AI systems will have to comply with a range of specific obligations before they can be put on the market, and this will fall mostly on the providers. These obligations include the establishment of a risk management system, ensuring the quality of training data and data governance, technical documentation, automatic logging of activity to ensure traceability, ensuring sufficiently transparent information for deployers, human oversight, guarantees of accuracy, robustness and security, implementing quality management systems, etc. Harmonised standards, common specifications and conformity assessments will be crucial for the effective implementation of these obligations (see Chapter III, Section 2, Draft AI Act). There is a specific obligation for deployers6 of high-risk AI systems who are employers to inform workers' representatives and the affected workers that they will be subject to the system prior to putting such a system into service or use at the workplace (Recital (92); Article 26(7), Draft AI Act).
This information shall be provided, where applicable, in accordance with the rules and procedures laid down in Union and national law and practice on information of workers and their representatives. The performance of a fundamental rights impact assessment is required from deployers that are bodies governed by public law or private entities providing public services, as well as operators deploying certain types of high-risk systems (Article 27, Draft AI Act). Compared to the initial proposal of the AI Act, the final version seems to address some of the concerns in relation to the protection of fundamental rights of workers (Cefaliello and Kullmann, 2022), under the assumption that the possibility of derogation does not become a standard rule in practice.
Concerning its relation to the GDPR, the AI Act makes it clear that it does not, by any means, displace the GDPR and other instruments aimed at the protection of personal data and other fundamental rights (Recital (10); Article 2(7), Draft AI Act). Specifically, data subjects continue to be protected against solely automated individual decision-making, including profiling, under that legal framework. There is an inherent presumption that the use of AI makes it more likely that a decision is based solely on automated processing (Sartor and Lagioia, 2020). The underlying idea is that the AI Act's harmonised rules for the placing on the market, the putting into service and the use of AI systems complement and facilitate the effective implementation and exercise of the data subjects' rights guaranteed under Union law on the protection of personal data and of other fundamental rights. General obligations of data controllers (e.g. employers) under the GDPR, such as data protection by design and by default (Article 25, GDPR) and data protection impact assessments (Article 35, GDPR), apply regardless of, and in addition to, any obligations for AI systems under the AI Act.
3 Testing the coherence: Different levels and gaps in protection?
The underlying aim of this paper was to explore the impact of the existing EU legislation on automated decision-making, as interpreted by the CJEU, in the context of algorithmic management. A comparison with the emerging and newly adopted legal instruments has revealed areas that require further attention and consideration to prevent different levels of workers' protection in the context of algorithmic management practices, depending on whether the work is performed on digital labour platforms or within a (more or less) conventional setting.
For example, despite the broad reading of the term "decision" in SCHUFA, the GDPR will not be able to solve all issues connected with automated decision-making that affects the worker's position in the employment relationship. GDPR rules cannot protect workers from automated decision-making that does not involve the processing of personal data and/or profiling. Even if personal data processing is involved, exceptions such as contractual necessity, legally prescribed authorisations, and consent can circumvent the application of Article 22(1) GDPR, as can the simple fact that many management decisions are not fully automated (Abraha, 2023; Lukács and Váradi, 2023; Parviainen, 2022; Solove, 2024). Outside of Article 22 GDPR, there is no general regime for automated decisions (Sancho, 2020), meaning that other GDPR provisions and personal data processing principles apply (for data processing at work see WP29, 2017).
Let us look at the following scenario: the manager (a human being) of a retail store decides to switch shifts, otherwise change the working time arrangements, or even terminate the employment contracts of some workers based on predictions of customer or labour demand (see e.g. Wood, 2021). This automated prediction is not based on the processing of personal data. The ultimate decision to change the worker's working time patterns, or to dismiss the worker, is made by a human. However, the prediction plays a determining role in the manager's decision. The manager may not even know how the prediction works (as long as it is accurate!), as the company relies on a system developed by a third party. So if a worker needs information on how the prediction was formed, i.e. concerning the logic involved and the scope of its impact on the worker's position, in order to challenge the employer's decision to reschedule the working patterns or to terminate the contract, he or she will not have access to that information pursuant to the GDPR: there was no automated processing of personal data involved. There is limited protection in terms of the predictability of work patterns and the right to redress under Directive 2019/1152 on transparent and predictable working conditions, but this will depend on how the work pattern was agreed upon in the contract in the first place.
If we take the same scenario, but in the context of platform work, the proposed Draft Platform Work Directive seems to accord a higher level of protection, because it does not require that automated decisions result from the processing of personal data, but solely that they concern an individual worker (see supra 2.2.). Consequently, there will be an evident discrepancy in the protection between "platform" and "conventional" workers in view of automated decision-making and algorithmic management practices, once the Draft Platform Work Directive is adopted. Admittedly, automated decision-making is an essential characteristic of platform work, which requires tailored solutions. Ultimately, however, there is not much difference between the nature and effect of automated decision-making and algorithmic management, whether they take place in the context of platform work or conventional work. There should be no differences in workers' protection either (see, similarly, concerning the relation between the DPWD and the AI Act: Aloisi and Potocka-Sionek, 2022).
If we change the presumption of this scenario and include profiling or any other processing of personal data in the prediction, thus rendering Article 22 GDPR applicable, the classification of a decision as a fully automated one could only be avoided by "meaningful" human involvement. According to the explanations of the Article 29 Working Party, meaningful human involvement implies a meaningful oversight of the decision, "rather than just a token gesture" (WP29, 2018). An interesting interpretation of "meaningful human involvement" comes from the Netherlands, where the Amsterdam Court of Appeal interpreted it to mean the involvement of a human who is sufficiently qualified, informed, and competent to make a decision (see Hiessl, 2023). The manager is expected to be qualified and competent, but whether he or she is "informed" about the inner workings of an automated system is a matter of debate. The SCHUFA judgment provides limited guidance on this issue, since the facts of that case were rather straightforward and human intervention boiled down to accepting the negative score in practically all situations. That might not always be the case. In any case, human involvement cannot be "fabricated" by the purely formal acceptance of the results of automated decision-making (WP29, 2018).
Let us broaden the above consideration to the requirements for AI systems used in the field of employment, workers management and access to self-employment. The requirements under the AI Act for high-risk AI systems would apply if the customer prediction system is an AI system as defined under the AI Act. According to Article 6(2) and Annex III(4) of the Draft AI Act, an AI system intended to be used to make decisions affecting terms of the work-related relationships, promotion, and termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics and to monitor and evaluate performance and behaviour of persons in such relationships is considered a high-risk AI system. However, under the derogation prescribed in Article 6(3) of the Draft AI Act, an AI system will not be considered high-risk if one or more of the following criteria are fulfilled: (a) the AI system is intended to perform a narrow procedural task; (b) the AI system is intended to improve the result of a previously completed human activity; (c) the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment without proper human review; or (d) the AI system is intended to perform a preparatory task to an assessment relevant for the purpose of the use of, inter alia, the mentioned AI systems in the field of employment, workers management and access to self-employment. This derogation cannot be applied to profiling, but there is no profiling involved in the first version of our example. So, either way, there would be no protection for the worker against a decision which relies on automated decision-making and algorithmic management practices, because the GDPR would not apply and thus the worker cannot gain insight into how the prediction was made in order to challenge the employer's decision.
On the other hand, the AI system itself might escape the stricter regime for high-risk AI systems, with obligatory conformity assessments and other prescribed obligations for the operators of those systems, which could, at least to a certain extent, guarantee the trustworthiness of the AI system and ensure the protection of fundamental rights. An alternative version of this scenario which includes (some) profiling or other processing of personal data, e.g. where the prediction of customer demand would incorporate the profiling or evaluation of the capability or efficiency of a worker for the performance of a certain shift or job, would render both the GDPR and the AI Act applicable, and completely change the legal assessment.
4 Concluding considerations
As the world of work becomes increasingly digitalised and powered by AI, the existing legal framework is trying to keep up with the new challenges. The adoption of the AI Act and the negotiations for the adoption of the Draft Platform Work Directive are certainly a step in the right direction. However, until they enter into force and are able to show their full potential, we will have to rely on the existing legal instruments, notably in the field of data protection, and adapt them to considerations specific to labour relations. However, this may not be enough. It is sometimes questioned whether new rules for automated decision-making are necessary, and suggested that other bodies of law, primarily in the field of data protection, non-discrimination, and human rights, suffice, provided that they are effectively supervised and enforced (see, e.g. Mackenzie-Gray Scott and Abrusci, 2023). We cannot agree with such a proposition. As our previous analysis shows, there is room for improvement and adaptation of the legal framework, particularly in view of algorithmic management practices that have spread beyond platform work into all types of work relations. This requires careful calibration of the existing and emerging EU rules to avoid overlapping and conflicting solutions, with a clear delineation between different instruments and a comprehensive approach aimed at understanding their individual and combined impact on workers' protection.
References
1. Abraha, H. (2023) "Regulating algorithmic employment decisions through data protection law", European Labour Law Journal, Vol. 14, No. 2, pp. 172-191.
2. Abuladze, L., Hasimi, L. (2023) "The Effects of Artificial Intelligence in the Process of Recruiting Candidates". In: Papadaki, M. et al. (eds.) Information Systems. EMCIS 2022. Lecture Notes in Business Information Processing, Vol 464. Cham: Springer, pp. 465-473, doi: https://doi.org/10.1007/978-3-031-30694-5_34.
3. Adams-Prassl, J. et al. (2023) "Regulating algorithmic management: A blueprint", European Labour Law Journal, Vol. 14, No. 2, pp. 124-151.
4. Agosti, C. et al. (2023) "Exercising workers' rights in algorithmic management systems. Lessons learned from the Glovo-Foodinho digital labour platform case", ETUI aisbl, Brussels.
5. Aloisi, A., De Stefano, V. (2022) Your Boss is an Algorithm. Artificial intelligence, platform work and labour. Oxford: Hart.
6. Aloisi, A., Potocka-Sionek, N. (2022) "De-gigging the labour market? An analysis of the 'algorithmic management' provisions in the proposed Platform Work Directive", Italian Labour Law e-Journal, Vol. 15, Issue 1, pp. 29-50, doi: https://doi.org/10.6092/issn.1561-8048/15027.
7. Article 29 Data Protection Working Party (WP29) (2018) Guidelines on Automated individual decision-making and profiling for the purposes of Regulation 2016/679, 17/EN, WP251rev.01.
8. Article 29 Data Protection Working Party (WP29) (2017) Opinion 2/2017 on data processing at work, 17/EN, WP 249.
9. Baiocco, S. et al. (2022) "The Algorithmic Management of Work and its Implications in Different Contexts", Background paper No. 9, International Labour Organisation, European Union. Available at: <https://www.ilo.org/employment/Whatwedo/Projects/building-partnerships-on-the-future-of-work/WCMS_849220/lang--en/index.htm> [Accessed: October 3, 2023]
10. Barros Vale, S., Zanfir-Fortuna, G. (2022) "Automated Decision-Making Under the GDPR: Practical Cases from Courts and Data Protection Authorities", Future of Privacy Forum. Available at: <https://fpf.org/wp-content/uploads/2022/05/FPF-ADM-Report-R2-singles.pdf> [Accessed: November 6, 2023]
11. Bodiroga-Vukobrat, N., Martinović, A. (2019) "Izazovi pružanja usluga na unutarnjem tržištu EU-a - usluge informacijskog društva i 'pozadinske' usluge", Zbornik Pravnog fakulteta Sveučilišta u Rijeci, Vol. 40, No. 1, pp. 37-58.
12. Bygrave, L. A. (2020a) "Article 22". In Kuner, C. et al. (eds.) The EU General Data Protection Regulation (GDPR). A Commentary. Oxford: Oxford University Press, pp. 522-542.
13. Bygrave, L. A. (2020b) "Article 4(4)". In Kuner, C., Bygrave, L. A., Docksey, C. (eds.) The EU General Data Protection Regulation (GDPR). A Commentary. Oxford: Oxford University Press, pp. 127-131.
14. Cefaliello, A., Kullmann, M. (2022) "Offering false security: How the draft artificial intelligence act undermines fundamental workers rights", European Labour Law Journal, Vol. 13, No. 4, pp. 542-562.
15. Court of Justice of the European Union, OQ v. Land Hessen, and SCHUFA Holding AG as intervener, case C-634/21 (SCHUFA (scoring)), Judgment of 7 December 2023, EU:C:2023:957.
16. Opinion of Advocate General Pikamäe delivered on 16 March 2023 in case OQ v. Land Hessen, and SCHUFA Holding AG as intervener, C-634/21 (SCHUFA (scoring)), EU:C:2023:220.
17. De Stefano, V., Wouters, M. (2022) "AI and digital tools in workplace management and evaluation. An assessment of the EU's legal framework", European Parliamentary Research Service. Available at: <https://www.europarl.europa.eu/RegData/etudes/STUD/2022/729516/EPRS_STU(2022)729516_EN.pdf> [Accessed: October 11, 2023]
18. Directive (EU) 2019/1152 of the European Parliament and of the Council of 20 June 2019 on transparent and predictable working conditions in the European Union, OJ L 186, 11.7.2019.
19. European Commission (2021a) Proposal for a Directive of the European Parliament and of the Council on improving working conditions in platform work, Brussels, 9.12.2021, COM(2021) 762 final, 2021/0414 (COD).
20. European Commission (2021b) Proposal for a Regulation laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, Brussels, 21.4.2021, COM(2021) 206 final, 2021/0106(COD).
21. European Parliament (2024a), Committee on Employment and Social Affairs, Provisional agreement resulting from interinstitutional negotiations, Proposal for a Directive of the European Parliament and of the Council on improving working conditions in platform work (COM(2021)0762 - C9 0454/2021 - 2021/0414(COD)), 11.3.2024.
22. European Parliament (2024b), European Parliament legislative resolution of 13 March 2024 on the Proposal for a Regulation of the European Parliament and of the Council on laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021)0206 - C9-0146/2021 - 2021/0106(COD)), P9_TA(2024)0138. Available at: <https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.html> [Accessed: June 8, 2024]
23. European Data Protection Board (EDPB) (2020) Guidelines 05/2020 on consent under Regulation 2016/679.
24. Grgurev, I., Bjelinski Radić, I. (2023) "Neizravna diskriminacija platformskih radnika", Zbornik Pravnog fakulteta Sveučilišta u Zagrebu, Vol. 73, No. 2-3, pp. 233-260.
25. Hauben, H., Kahancova, M., Manoudi, A. (2021) European Centre of Expertise (ECE) in the field of labour law, employment and labour market policies. Thematic Review 2021 on Platform work. Synthesis report. Luxembourg: Publications Office of the European Union. Available at: <https://ec.europa.eu/social/main.jsp?catId=738&langId=en&pubId=8419&furtherPubs=yes> [Accessed: January 3, 2024]
26. Hiessl, C. (2023) European Centre of Expertise (ECE) in the field of labour law, employment and labour market policies. Jurisprudence of national courts in Europe on algorithmic management at the workplace. Last update: 7 April 2023. European Union. Available at: <https://ssrn.com/abstract=3982735> or <http://dx.doi.org/10.2139/ssrn.3982735> [Accessed: October 15, 2023]
27. Hofmann, H. C. H. (2023) "Automated Decision-Making (ADM) in EU Public Law", Indigo Working Paper No. 2023-06, University of Luxembourg, Law Research Paper Series. Available at: <https://ssrn.com/abstract=4561116> or <http://dx.doi.org/10.2139/ssrn.4561116> [Accessed: March 3, 2024]
28. Hunkenschroer, A. L., Luetge, C. (2022) "Ethics of AI-Enabled Recruiting and Selection: A Review and Research Agenda", Journal of Business Ethics, Vol. 178, pp. 977-1007, doi: https://doi.org/10.1007/s10551-022-05049-6.
29. Jarrahi, M. H. et al. (2021) "Algorithmic management in a work context", Big Data & Society, July-December: 1-14.
30. Kiesow Cortez, E., Maslej, N. (2023) "Adjudication of Artificial Intelligence and Automated Decision-Making Cases in Europe and the USA", European Journal of Risk Regulation, Vol. 14, pp. 457-475, doi:10.1017/err.2023.61.
31. Lechardoy, L., López Forés, L., Codagnone, C. (2023) "Artificial intelligence at the workplace and the impacts on work organisation, working conditions and ethics", 32nd European Conference of the International Telecommunications Society (ITS): "Realising the digital decade in the European Union - Easier said than done?", Madrid, Spain, 19-20 June 2023, International Telecommunications Society (ITS), Calgary. Available at: <http://hdl.handle.net/10419/27799> [Accessed: April 14, 2024]
32. Lukács, A., Váradi, S. (2023) "GDPR-compliant AI-based automated decision-making in the world of work", Computer Law & Security Review, Vol. 50, 105848.
33. Martini, M. (2020) "Regulating Algorithms: How to Demystify the Alchemy of Code?". In Ebers, M., and Navas, S. (eds.) "Algorithms and Law", Cambridge: Cambridge University Press, pp. 100-135.
34. Mackenzie-Gray Scott, R., Abrusci, E. (2023) "Automated Decision-Making and the Challenge of Implementing Existing Laws", Verfassungsblog. Available at: <https://verfassungsblog.de/automated-decision-making-and-the-challenge-of-implementing-existing-laws/> [Accessed: October 5, 2023]
35. Paal, B. (2023) "Case Note: Article 22 GDPR: Credit Scoring Before the CJEU", Global Privacy Law Review, Vol. 4, No. 3, pp. 127-137.
36. Palos-Sánchez, P. R. et al. (2022) "Artificial Intelligence and Human Resources Management: A Bibliometric Analysis", Applied Artificial Intelligence, Vol. 36, No. 1, pp. 3628-3655, doi: 10.1080/08839514.2022.2145631.
37. Parviainen, H. (2022) "Can algorithmic recruitment systems lawfully utilise automated decision-making in the EU?", European Labour Law Journal, Vol. 13, No. 2, pp. 225-248.
38. Ponce del Castillo, A. (2023) "Regulating algorithmic management in the Platform Work Directive: correcting risky deviations". Available at: <https://global-workplace-law-and-policy.kluwerlawonline.com/2023/11/22/regulating-algorithmic-management-in-the-platform-work-directive-correcting-risky-deviations/> [Accessed: January 14, 2024]
39. Ponce del Castillo, A., Naranjo, D. (2022) "Regulating algorithmic management. An assessment of the EC's draft Directive on improving working conditions in platform work", ETUI Policy Brief 2022.08. Available at: <https://www.etui.org/sites/default/files/2022-08/Regulating%20algorithmic%20management-An%20assessment%20of%20the%20ECs%20draft%20Directive%20on%20improving%20working%20conditions%20in%20platform%20work-2022.pdf> [Accessed: January 14, 2024]
40. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (GDPR), OJ L 119/1, 4.5.2016.
41. Rodríguez de las Heras Ballell, T. (2022) Guiding Principles for Automated Decision-Making in the EU, ELI Innovation Paper, Vienna: European Law Institute. Available at: <https://www.europeanlawinstitute.eu/fileadmin/user_upload/p_eli/Publications/ELI_Innovation_Paper_on_Guiding_Principles_for_ADM_in_the_EU.pdf> [Accessed: January 21, 2024]
42. Sartor, G., Lagioia, F. (2020) "The impact of the General Data Protection Regulation (GDPR) on artificial intelligence", European Parliamentary Research Service, European Union. Available at: <https://www.europarl.europa.eu/RegData/etudes/STUD/2020/641530/EPRS_STU(2020)641530_EN.pdf> [Accessed: April 14, 2024]
43. Solove, D. J., Keats Citron, D. (2021) "Privacy Harms", GW Law Faculty Publications & Other Works, 1534. Available at: <https://scholarship.law.gwu.edu/faculty_publications/1534> [Accessed: January 14, 2024]
44. Wood, A. J. (2021) "Algorithmic Management Consequences for Work Organisation and Working Conditions", JRC Working Papers Series on Labour, Education and Technology 2021/07. Available at: <https://joint-research-centre.ec.europa.eu/document/download/32f3a2e1-21cf-4869-b470-3e691ec284d6_en?filename=jrc124874.pdf> [Accessed: April 20, 2024]
45. Solove, D. J. (2024) "Artificial Intelligence and Privacy" (February 1, 2024), 77 Florida Law Review (forthcoming Jan 2025). Available at: <https://ssrn.com/abstract=4713111> or <http://dx.doi.org/10.2139/ssrn.4713111> [Accessed: April 10, 2024]
46. Sancho, D. (2020) "Automatic Decision-Making under Article 22 GDPR: Towards a more substantial regime for solely automated decisionmaking". In Ebers, M., Navas, S. (eds.) Algorithms and Law. Cambridge: Cambridge University Press, pp. 136-156.
47. Veale, M., Silberman, M. S., Binns, R. (2023) "Fortifying the algorithmic management provisions in the proposed Platform Work Directive", European Labour Law Journal, Vol. 14, No. 2, pp. 308-332.
48. Wachter, S., Mittelstadt, B., Russell, C. (2018) "Counterfactual explanations without opening the black box: Automated decisions and the GDPR", Harvard Journal of Law & Technology, Vol. 31, No. 2, pp. 842-887.
2 "Platform work" usually describes the form of work performance characterised by a multi-sided market where services are provided "on-demand", and online platforms act as intermediaries between platform workers and clients (see Bodiroga-Vukobrat and Martinović, 2019; Grgurev and Bjelinski Radić, 2023).
3 At the time this paper was completed, the AI Act was adopted by the European Parliament. However, the legislative procedure is still not finalised, as the Act will have to go through the lawyer-linguist check before final endorsement in the Council. Since the completion of this process and publication in the Official Journal is expected after submission of this paper, it will refer to the latest publicly available version of the text which was subject to vote in the European Parliament, see European Parliament (2024b), available at: <https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.html>. A reference to a particular provision of the AI Act in this paper will therefore rely on the numeration of its provisions from the above document, and refer to the "Draft AI Act".
4 For simplicity, the case will be referred to hereinafter only as "SCHUFA". The case should not be confused with another case involving Schufa Holding AG as intervener, concerning the interpretation of other provisions of GDPR (see Joined Cases C-26/22 and C-64/22, SCHUFA Holding (Discharge from remaining debts), ECLI:EU:C:2023:958), and which was decided by the Court on the same day as the SCHUFA (scoring) case.
5 The text of the Provisional Agreement resulting from interinstitutional negotiations of 11 March 2024 is available at: <https://oeil.secure.europarl.europa.eu/oeil/popups/ficheprocedure.do?reference=2021/0414(COD)&l=en>. For the sake of simplicity, this paper will refer to the Articles from the initial Commission's proposal of the directive.
6 Deployer means any natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity (Article 3(4), Draft AI Act).
Copyright University of Rijeka 2025