Algorithmic liability in consumer contracts: towards a new paradigm of contractual balance
Published: Fri, 22 Aug 2025
Peer reviewed


Alberto Jaci
PhD candidate, Università degli Studi di Messina




This paper investigates the evolving role of algorithmic systems in the formation, execution and structure of consumer contracts within European private law. Against the backdrop of a technological paradigm shift, it critically re-examines classical private law concepts - such as contractual autonomy, liability and consent - in light of the growing deployment of artificial intelligence and digital assistants in contractual contexts. The proposed approach contributes to the development of a theoretically coherent and technologically responsive model of private law, grounded in principles of accountability, transparency and contestability.

Summary: I. Introduction; 1. The Algorithm as a Contractual Actor in B2C Relations; 2. The Emergence of Algorithmic Liability; 3. Opacity and Transparency: Reconstructing a Legal Equilibrium; 4. Protection of Consent and Digital Manipulation; 5. Rebalancing Contractual Autonomy in the Algorithmic Age; II. Conclusions.

Introduction

The technological evolution characterizing the current stage of digital society has profoundly reshaped contractual dynamics[1], particularly in the realm of relationships between professionals and consumers[2]. The integration of algorithmic systems into digital platforms[3], e-commerce infrastructures, and automated interaction environments represents a structural transformation in the architecture of contracts. These systems—designed to optimise market behaviour, personalise offers, and anticipate user preferences—operate through automated decision-making processes that interfere with the contractual freedom of the weaker party, namely the consumer[4].

This phenomenon, commonly referred to as ‘algorithmic contracting’[5], does not merely signify a technological innovation, but prompts a systemic reconsideration of the foundational categories of private law[6]. Contract formation, the validity and effectiveness of consent, risk allocation, and liability for performance—all these elements are now influenced, and in some cases redefined, by the presence of non-human agents acting as de facto participants in the legal relationship[7]. The ostensible neutrality of algorithmic systems conceals a structurally determined logic, informed by economic optimisation strategies and commercial imperatives that often remain opaque to the consumer.

Against this background, it becomes necessary to investigate—both theoretically and practically—the concept of algorithmic liability in consumer contracts. The key question is whether and to what extent legal systems are equipped to attribute legal relevance to the role played by algorithms in the contractual sphere, and how responsibility is to be allocated when such systems generate distortive effects. In particular, one must ask whether algorithmic behaviour may be assimilated into existing models of liability—such as fault-based or organizational fault structures—or whether a new paradigm must be developed, based on fundamentally different premises.

This paper aims to explore these issues through a systematic and comparative approach. The first section addresses the functions performed by algorithms in business-to-consumer (B2C) transactions, showing how they affect the formation, structure, and substantive fairness of contracts. It then provides a dogmatic reconstruction of algorithmic liability and examines the conceptual tension it produces within classical frameworks of contractual and precontractual responsibility. Further sections analyse issues of opacity, behavioural influence, and the erosion of meaningful consent, leading to a discussion on how contractual autonomy might be reconstructed in an algorithm-driven environment.

The ultimate objective is to assess whether private law—particularly European contract law—can respond adequately to the challenges posed by technological mediation, either through the adaptive use of existing institutions or by developing new, functionally oriented instruments capable of balancing technological efficiency with contractual justice[8].

Particular relevance should be attributed to the Guiding Principles and Model Rules[9] developed by the European Law Institute, which provide a functional normative framework for the regulation of algorithmic systems involved in consumer contracts.

This inquiry is situated within the broader context of ongoing efforts to develop a normative methodology capable of ensuring the compatibility of digital innovation with the foundational principles of European private law, including the growing role of soft law instruments such as the European Law Institute’s Model Rules.

1. The Algorithm as a Contractual Actor in B2C Relations

In the context of digital markets, the algorithm is no longer a neutral tool supporting contractual activity, but rather a functional actor that actively participates in the formation, structuring, and execution of contracts[10]. Automated contracting takes place within digital environments—platforms, marketplaces, mobile applications—where consumer choices are increasingly guided by artificial intelligence systems and machine learning technologies[11]. These systems are capable of collecting, analysing, and interpreting vast quantities of personal and behavioural data in real time, with the aim of constructing predictive models that strategically shape commercial offers[12].

The result of this process is an asymmetrical algorithmic negotiation[13], in which the algorithm substitutes or conditions the will of the professional, operating on their behalf but according to logics that often escape human oversight[14]. The selection of goods and services presented to the user, the setting of prices, the ordering of offers, and the timing of proposals are all determined or influenced by intelligent systems that function autonomously based on parameters continuously updated through learning mechanisms[15].

This dynamic gives rise to substantive legal questions, starting with whether a legally valid contractual intention can be identified and attributed[16]. If an algorithm generates a personalised offer based on data collected without any direct interaction between the parties, one must ask whether this constitutes a legally relevant ‘offer’ as defined in classical contract doctrine. Scholars have highlighted that such contracts often emerge from a technologically mediated interaction[17], where the will is not autonomously formed by each party, but rather shaped—or even overridden—by digital agents beyond the user’s control.

Furthermore, algorithmic logic introduces a cognitive misalignment between consumer and professional[18]: the latter—or more precisely, the system they deploy—has exclusive access to the information necessary to understand and manage the contractual process. The asymmetry is not merely economic or educational, but epistemic: the consumer interacts with a surface interface that simulates transparency while concealing the underlying decision-making rules, rendering the genesis of contractual terms effectively opaque[19].

In this sense, the algorithm operates as a kind of interposed technical subject, capable of influencing the contract’s structure and content without possessing autonomous legal personality[20]. This raises difficult questions of will attribution and liability: who is responsible for the algorithm’s conduct? The professional who uses it?[21] The developer?[22] Or perhaps the consumer, as the user of the system?[23]

Recent scholarship[24] suggests that algorithmic agency may be interpreted through the lens of intentionality and imputation, even absent legal personhood, provided that the system exhibits autonomous and context-sensitive behaviour in contractual settings. This calls for a more nuanced account of agency that incorporates digital actors into the functional perimeter of legal responsibility.

Current legal frameworks appear fragmented, incomplete, and often anchored to traditional schemes ill-suited to the specificity of these phenomena[25]. This calls for a theoretical investigation into the nature of algorithmic intervention, in order to assess whether it can be subsumed under the categories of auxiliary agents or intermediaries, or whether it should be treated as a source of mediated legal effects. Such a conceptualisation also demands a systematic inquiry into the role of human oversight within automated decision-making processes, as mandated by the European Law Institute’s principle of Human Agency and Oversight, which requires the implementation of mechanisms capable of ensuring effective human control at critical stages of contractual consent formation.

This reflection constitutes the necessary basis for constructing a coherent model of liability in digital contractual contexts.

2. The Emergence of Algorithmic Liability

The increasing deployment of automated systems in contractual dynamics compels legal scholars to revisit the conceptual relationship between technical decision-making and legal responsibility[26]. In the absence of legal personhood attributed to algorithmic entities[27], the law must determine how to assign legal effects to their actions[28] and, more critically, how to allocate responsibility when such actions result in harm or contractual imbalance[29].

Under classical private law doctrines, contractual liability is traditionally grounded in the attribution of fault, namely the failure to perform an obligation due to negligence or intent on the part of the debtor[30]. Yet in algorithmically mediated contracts, non-performance or deviation from expected outcomes may be the result of decisions made by a non-human system, acting according to parameters not directly controlled or even fully understood by the professional. This disconnect challenges the applicability of fault-based models and raises the question of whether liability can be meaningfully assigned in the absence of human conduct in the strict sense.

A first hypothesis could rest on an extension by analogy of the principle of «culpa in organizzando», already applied in the school context, where the heads of educational institutions are civilly liable for harm suffered by students when the institution fails to adopt organisational measures ensuring adequate supervision of minors[31].

A more advanced approach posits that algorithmic liability ought to be framed as a form of strict or quasi-strict liability arising from technological risk. Drawing analogies from existing legal regimes—such as liability for dangerous activities or defective products—the professional who employs complex algorithmic technologies could be held liable regardless of fault, merely by virtue of engaging in an activity capable of producing harm through opaque or adaptive mechanisms[32].

This perspective aligns with the emerging theory of design-based liability, which attributes responsibility not to specific acts, but to the architectural features and normative quality of the systems implemented[33]. In this context, liability becomes a tool for allocating risk in technologically asymmetrical relationships, where the consumer lacks any meaningful capacity to evaluate or contest the operation of the system.

Both approaches rest on the common understanding that algorithms, though lacking legal subjectivity, function as operative factors within contractual relations and must therefore be accounted for within the liability framework. Their activity cannot be treated as neutral or external to the contract but must be encompassed by the legal mechanisms that ensure accountability and protection for the weaker party.

Additional complexity arises in the case of self-learning algorithms, which autonomously modify their decision-making criteria over time based on accumulated data. In such scenarios, the causal connection between the professional’s original intent and the system’s eventual output may be partially or wholly dissolved[34]. This evolution demands a re-evaluation of the very structure of contractual causality and invites consideration of probabilistic or system-based models of imputation, focused less on individual conduct and more on the management and governance of algorithmic ecosystems.

The legal debate remains open between these doctrinal avenues. However, what appears clear is that the law cannot remain indifferent to algorithmic intervention. Even in the absence of a directly attributable human act, there must exist a liable subject capable of responding to distortions, asymmetries, or failures generated by the system. Algorithmic liability thus emerges not as a separate legal category, but as an evolved articulation of traditional contractual responsibility, adapted to a technological context in which human will increasingly coexists—and sometimes competes—with artificial decision-making[35]. In this regard, the ELI Model Rules identify the necessity of establishing a legally responsible party for the conduct of digital assistants, even in the absence of directly attributable human behaviour, thereby strengthening the notion of accountability as a mechanism for allocating technological risk.

3. Opacity and Transparency: Reconstructing a Legal Equilibrium

Transparency has long been a cornerstone of contract law, particularly in consumer protection regimes, where it serves as a counterbalance to the structural informational asymmetry between professional and consumer[36]. Traditionally, transparency entails obligations to provide clear, accurate, and comprehensible information about the terms and consequences of a contract. Yet in the context of algorithmically mediated contracts, this principle is undergoing a deep transformation—both in its normative meaning and in its practical applicability[37].

The use of automated decision-making systems introduces a novel form of opacity that is not merely intentional, but structural. This opacity is not linguistic, but functional, arising from the complexity, adaptiveness, and non-linear reasoning processes inherent in contemporary algorithmic architectures. In particular, systems based on deep learning techniques generate outcomes not by executing explicit instructions, but by extracting patterns from large datasets and adjusting internal parameters in ways that even their developers may not be able to fully explain. This phenomenon, often referred to as the ‘black box’ effect[38], severely limits the predictability and auditability of the algorithm’s behaviour.
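To make the structural character of this opacity concrete, consider the following minimal Python sketch. The data, feature names, and model are entirely hypothetical; the point is only that, once trained, the system's decision logic resides in numeric weight matrices that correspond to no human-readable contractual rule.

```python
# Minimal sketch of "functional opacity" (hypothetical data and feature names).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Simulated behavioural features: pages_viewed, session_minutes, past_purchases
X = rng.random((500, 3))
# Simulated target: whether the system steers the consumer to a premium-priced offer
y = (0.6 * X[:, 0] + 0.3 * X[:, 1] * X[:, 2] > 0.4).astype(int)

model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X, y)

# The decision logic now lives in these weight matrices: accurate, but opaque.
for i, weights in enumerate(model.coefs_):
    print(f"layer {i}: weight matrix of shape {weights.shape}")
print("decision for one user:", model.predict(X[:1]))  # no rule a consumer could inspect
```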

As a result, the information provided to the consumer may be formally correct yet substantively useless, failing to enable meaningful understanding or effective decision-making. This gives rise to a paradox of transparency: the more detailed and technical the information, the less accessible and functional it becomes for the average consumer. What remains is a formal appearance of transparency, devoid of the substantive clarity needed for autonomy in contractual engagement.

The legal framework, rooted in the notion of informed consent, struggles to address this disjunction. Norms designed to govern human-to-human contractual exchanges prove insufficient in a context where the decisional architecture is driven by machine logic. Legal requirements for disclosure risk becoming empty formalities, unable to address the real power imbalance embedded in the algorithmic infrastructure.

What is at stake here is not merely an increase in informational asymmetry, but a qualitative shift in its nature[39]. The professional, or more precisely, the system deployed on their behalf, holds a unilateral predictive advantage derived from exclusive access to data, computational resources, and control over the logic of personalisation. This constitutes a new kind of contractual dominance—algorithmic, opaque, and largely immune to traditional regulatory mechanisms.

To address this challenge, the principle of transparency must be reconceived in functional and contextual terms. It should not be limited to the provision of data or legal documents, but should guarantee intelligibility, traceability, and contestability of the algorithmic process. In this regard, the concept of algorithmic explainability—the ability to understand and communicate the rationale behind a decision—emerges as a normative requirement for fair and accountable contract formation[40]. While difficult to implement in technical terms, explainability is crucial for legal legitimacy and procedural fairness. Algorithmic opacity challenges the very notion of transparency as currently understood in contract law. Furthermore, transparency should be conceived not only as a matter of information accessibility but also as a condition for the meaningful contestability and reversibility of automated decisions[41].
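By way of illustration, one operational reading of explainability is the use of an interpretable surrogate model. The sketch below (hypothetical data and feature names; the surrogate technique is only one of several candidates) approximates an opaque classifier with a shallow decision tree whose rules can be stated, understood, and contested.

```python
# Hypothetical sketch: a shallow, interpretable surrogate approximating a black box.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.random((500, 3))  # simulated features: pages_viewed, session_minutes, past_purchases
y = (0.6 * X[:, 0] + 0.3 * X[:, 1] * X[:, 2] > 0.4).astype(int)  # premium offer shown?

black_box = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0).fit(X, y)

# Fit a shallow tree to mimic the black box's own outputs, not the raw labels.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate,
                  feature_names=["pages_viewed", "session_minutes", "past_purchases"]))
# The printed rules form a contestable rationale that a consumer or court can
# evaluate, unlike the raw weight matrices of the network itself.
```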

Another key element in re-establishing transparency is the reallocation of the burden of comprehension. Rather than placing the duty on the consumer to decipher complex systems, the law should impose on professionals the responsibility to design systems that are inherently understandable or, at a minimum, auditable and subject to external verification. This orientation is mirrored in Article 6 of the ELI Model Rules, which imposes on the professional a duty to provide an explanation that is clear, intelligible, and verifiable, thereby ensuring that algorithmic decisions can be meaningfully understood by the consumer. In this context, transparency becomes not merely an informational duty, but a substantive element of contractual fairness.

To remain effective, legal systems must evolve toward substantive transparency, capable of safeguarding the consumer’s capacity for autonomous decision-making in technologically mediated environments.

4. Protection of Consent and Digital Manipulation

Consent is the conceptual linchpin of contract law[42]. It embodies the principle of individual autonomy and functions as the normative foundation upon which contractual obligations are formed[43]. In order to be valid, consent must be freely given, informed, and the result of a deliberate and autonomous decision-making process. However, the growing pervasiveness of algorithmic systems in business-to-consumer (B2C) interactions has given rise to novel forms of interference with the formation of consent—interference that often operates below the threshold of legal visibility[44].

Today, digital contracting takes place within engineered environments, designed not only to facilitate transactions but also to influence and optimise user behaviour. Platforms rely on massive data collection, behavioural profiling, and predictive analytics to adapt their interface and offerings in real time. Through machine learning and behavioural targeting, these environments are structured to anticipate consumer preferences and to steer decision-making toward outcomes that serve commercial objectives.

Such architectures enable forms of silent cognitive manipulation[45]. These do not entail explicit coercion or deception, but rather strategic shaping of the choice environment, often exploiting psychological biases, heuristics, or cognitive fatigue. The consumer, though technically free to choose, acts within an interface designed to channel attention, limit options, and frame decisions in ways that subtly predetermine contractual choices. The phenomenon of ‘hypernudging’—a technologically amplified version of behavioural nudging—illustrates how personalised and dynamic choice architecture can undermine the authenticity of consent while maintaining the appearance of voluntary action[46].
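Purely for illustration, a hypernudging mechanism of this kind can be sketched as follows. The offers, scores, and steering heuristics are invented, but the structure shows how an identical catalogue can be re-ordered and re-framed for each user so as to exploit decision fatigue or anchoring effects.

```python
# Hypothetical sketch of a hypernudging choice architecture (all values invented).
from dataclasses import dataclass

@dataclass
class Offer:
    name: str
    price: float
    margin: float  # seller's profit margin on the offer

def hypernudge(offers, fatigue_score, price_sensitivity):
    """Rank the same offers differently per user, to steer rather than inform."""
    if fatigue_score > 0.7:
        # A fatigued user tends to take the first option: surface the high-margin one.
        return sorted(offers, key=lambda o: -o.margin)
    if price_sensitivity > 0.5:
        # Anchor with a decoy: an overpriced option first makes the target look cheap.
        decoy = max(offers, key=lambda o: o.price)
        rest = sorted((o for o in offers if o is not decoy), key=lambda o: -o.margin)
        return [decoy] + rest
    return sorted(offers, key=lambda o: o.price)  # neutral, price-ascending fallback

offers = [Offer("basic", 9.0, 1.0), Offer("plus", 19.0, 6.0), Offer("max", 49.0, 9.0)]
print([o.name for o in hypernudge(offers, fatigue_score=0.8, price_sensitivity=0.2)])
# -> ['max', 'plus', 'basic']: same catalogue, choice architecture tilted per user
```

The consumer in this sketch remains formally free to pick any offer; the manipulation lies entirely in the ordering and framing logic, which is invisible from the interface.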

This raises fundamental questions for legal theory: can consent obtained through such means be regarded as valid and binding? Traditional doctrines on vitiated consent—such as mistake, fraud, or duress—do not readily apply to these situations, as the manipulation is neither overtly deceptive nor forceful. Instead, it is embedded in the design of the system itself and operates through cumulative and systemic influence rather than isolated wrongful acts.

Accordingly, scholars have proposed expanding the legal understanding of consent to include not only the formal expression of will, but also the quality and conditions of its formation[47]. This implies a shift from a purely formalistic model to a substantive and contextual evaluation, in which the legitimacy of consent depends on the transparency, neutrality, and fairness of the environment in which the choice is made.

The protection of consent in algorithmic settings therefore demands a broader regulatory approach[48]. One promising avenue is the theory of structural manipulation[49], according to which contractual autonomy is compromised not necessarily by intent to deceive, but by the deliberate exploitation of cognitive vulnerabilities through system design. In this perspective, the harm lies not in a single misleading statement, but in the architecture of interaction itself, which gradually undermines the user’s ability to form independent judgments[50]. Behavioural engineering through hyper-personalised interfaces can be conceptualised as a form of structural manipulation, particularly when consent is captured not through coercion but via the exploitation of cognitive shortcuts and decision fatigue[51].

From a normative standpoint, this justifies the introduction of protective standards that address the systemic features of digital environments. These may include prohibitions on exploitative interface design, obligations of fairness in personalisation algorithms, and requirements for human oversight or contestability. Moreover, legal systems should develop new remedies that account for manipulation as a mode of relational imbalance, such as the voidability of contracts formed under undue algorithmic influence or the annulment of clauses based on predatory personalisation.

Protecting consent in the digital age requires moving beyond a formal conception of voluntariness. It entails constructing a legal framework capable of recognizing and remedying the subtle, systemic ways in which technological systems affect human choice.

These phenomena are reinforced by the behavioural design of interfaces, which often exploit cognitive biases through what has been termed ‘hypernudging’ and ‘dark patterns’, aimed at structurally distorting consent.

In this respect, it is worth recalling the Model Rules adopted by the European Law Institute, which underline the consumer’s right to contest, revoke or withdraw algorithmically influenced contractual decisions. These rules advocate the implementation of accessible and timely mechanisms—whether automated or human-mediated—that ensure effective control over digital consent processes and safeguard the substantive autonomy of the individual.

5. Rebalancing Contractual Autonomy in the Algorithmic Age

Contractual autonomy, in the civil law tradition, is regarded as a manifestation of personal freedom and the legal foundation of binding obligations[52]. It presupposes a context of equality, self-determination, and informational sufficiency between the contracting parties[53]. However, in digital markets governed by algorithmic architectures, these assumptions are increasingly out of step with empirical reality. The intervention of algorithms in the negotiation, formation, and execution of contracts profoundly alters the structural conditions under which choices are made, creating new forms of dependency, opacity, and informational imbalance[54].

In such contexts, autonomy can no longer be conceived as a purely formal faculty to choose but must be reinterpreted as a relational and context-sensitive construct, subject to the technological configuration of the contracting environment. In this sense, autonomy can no longer be presumed but must be institutionally supported through legal safeguards that recognise the algorithmic reshaping of bargaining conditions[55]. The growing asymmetry between consumers and professionals—driven not only by disparities in legal knowledge or economic leverage, but also by technological power differentials—demands a legal response aimed at rebalancing the autonomy of the weaker party.

The first dimension of this rebalancing effort concerns the recognition of new vulnerabilities[56]. Digital consumers are exposed to informational overload[57], personalised persuasion[58], and non-transparent decision-making, all of which diminish their capacity for genuine self-determination[59]. These factors constitute a new type of structural weakness that the law must address, not through paternalistic restrictions, but by ensuring the conditions for meaningful autonomy.

This can be achieved by integrating substantive fairness standards into the evaluation of contractual autonomy. Classical duties of good faith, fairness, and loyalty—found in various forms across European legal systems—should be reinterpreted as normative tools for mitigating algorithmic asymmetries[60]. Professionals would thus bear a positive duty to safeguard the integrity of the decision-making process, ensuring that the technological systems they employ do not distort the consumer’s capacity to evaluate, compare, and consent[61].

A second aspect involves the specific regulation of automated contracting behaviours. Algorithmically generated offers, dynamic pricing models, and real-time personalisation must be addressed not merely as technical processes, but as legally relevant acts with implications for the balance of contractual power[62]. This calls for the development of sector-specific obligations, such as requirements of auditability, non-discrimination in algorithmic pricing, and transparency in the logic of personalisation.
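One hypothetical form such an auditability obligation could take is an external parity test on quoted prices. The cohorts, figures, and tolerance threshold below are invented for illustration only.

```python
# Hypothetical sketch of an external price-parity audit (figures and threshold invented).
import statistics

def audit_price_parity(prices_group_a, prices_group_b, tolerance=0.05):
    """Flag the system if mean quoted prices for two cohorts diverge beyond tolerance."""
    mean_a = statistics.mean(prices_group_a)
    mean_b = statistics.mean(prices_group_b)
    gap = abs(mean_a - mean_b) / max(mean_a, mean_b)
    return {"mean_a": mean_a, "mean_b": mean_b,
            "relative_gap": round(gap, 3), "compliant": gap <= tolerance}

# Quotes for the same product shown to two behaviourally profiled cohorts
print(audit_price_parity([10.1, 9.9, 10.0, 10.2], [11.8, 12.1, 11.9, 12.0]))
# -> {'mean_a': 10.05, 'mean_b': 11.95, 'relative_gap': 0.159, 'compliant': False}
```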

Additionally, the principle of responsible design should be codified as a legal standard applicable to systems used in contract formation. Professionals should be obliged to deploy algorithmic tools that respect not only data protection requirements but also the structural conditions of fair contracting. This includes ensuring that systems are comprehensible to users, do not exploit cognitive vulnerabilities, and are subject to ongoing human oversight and evaluation.

The ELI’s regulatory framework points in the same direction, envisaging a legal standard of responsible design that obliges professionals to configure algorithmic systems in accordance with the minimum conditions of fairness, neutrality, and intelligibility required to safeguard the consumer’s self-determination.

Finally, rebalancing contractual autonomy requires the availability of tailored remedies. Legal systems should recognize the need for contract annulment or modification in cases of technologically induced imbalance, as well as for injunctive relief against systems that systematically undermine fair dealing[63]. These remedies should be grounded in the recognition that the locus of contractual injustice has shifted—from overt coercion or deception to latent, systemic distortions embedded in code.

Ultimately, the legal system must rise to the normative challenge posed by algorithmic contracting[64]. Autonomy, to remain a meaningful principle, must be reconstructed in light of the technological conditions under which it is exercised[65]. Law must move beyond formal declarations of liberty and toward a pragmatic framework for the governance of power asymmetries in digital contracting environments.

Conclusions

The analysis conducted throughout this paper has sought to demonstrate that the algorithm’s intrusion into the contractual relationship between professional and consumer constitutes a structural transformation in private law[66]. Far from being a mere technological adjunct, the algorithm functions as a decisive actor in the formation, content, and execution of contracts, thereby challenging the conceptual stability of key legal categories such as consent, fault, liability, and autonomy.

The notion of algorithmic liability emerges as a pivotal point in this transformation[67]. It is not to be understood as an entirely novel legal category, but rather as an adaptive evolution of traditional contractual liability, recalibrated to account for technological mediation[68]. The professional who deploys algorithmic systems cannot disavow responsibility for their effects. Whether via fault-based principles (organizational negligence) or risk-based approaches (strict liability), the law must ensure that there is always a legally accountable party in cases of distortion or harm caused by automated systems.

Concurrently, the analysis has highlighted the inadequacy of traditional transparency mechanisms in algorithmic environments. When systems operate opaquely or adaptively, the provision of information alone does not suffice. What is required is a shift toward substantive transparency, defined not merely by access to data but by the intelligibility and contestability of the decision-making process. Explainability, auditing, and verifiability thus become core elements of a legally functional transparency regime.

Closely linked to this is the protection of consent, which must be reimagined in light of behavioural influence and persuasive system design. In the digital environment, consent risks being reduced to a formal act devoid of autonomy. Legal systems must therefore develop normative standards that evaluate the contextual integrity of the consent process, recognizing manipulation not only in its overt forms but also in its subtle, architectural manifestations.

From this emerges the need to reconstruct contractual autonomy as a normative ideal[69]. Autonomy can no longer be presumed; it must be actively supported and protected through legal design. This includes obligations of fair system configuration, duties to prevent manipulation, and remedies for technologically induced imbalance. A legal order that fails to adjust to the realities of algorithmic power runs the risk of transforming the contract into a vehicle of asymmetrical control, rather than mutual agreement.

The broader implication is that private law must be capable of evolving in response to systemic technological change[70]. The algorithmic turn in contracting does not call for the abandonment of classical principles, but for their re-interpretation and re-functionalization. Consent, responsibility, and autonomy must be rearticulated in ways that preserve their normative essence while accommodating the realities of a digital, data-driven marketplace.

Only by doing so can private law fulfil its fundamental mission: to ensure that, even in the age of artificial intelligence, contracting remains a voluntary, intelligible, and equitable process—one in which technological innovation serves, rather than supplants, the human subject.

In this direction, the Guiding Principles and Model Rules adopted by the European Law Institute represent a valuable resource for guiding the evolution of European private contract law, translating the demands of fairness, responsibility, and transparency into operational normative standards capable of legislative or jurisprudential implementation.


Notes and bibliographical references

[1] C Iurilli, ‘Il Manierismo Consumerista nell’Era Digitale’ (2023) 2 Judicium 1.

[2] F Foltran, ‘Professionisti, Consumatori e Piattaforme Online: la Tutela delle Parti Deboli nei Nuovi Equilibri Negoziali’ (2019) 2 MediaLaws 162. The author analyses the role of online platforms in the digital economy and outlines the main legal issues concerning the relationship between users and providers of online platforms.

[3] L Ammannati/GL Greco, ‘Piattaforme Digitali, Algoritmi e Big Data: il Caso del Credit Scoring’ in V Lemma/E Venturi/D Rossano/N Casalino/A Troisi (eds), Rivista Trimestrale di Diritto dell’Economia. Rassegna di Dottrina e Giurisprudenza (2021).

[4] M Calu, ‘Automated Decision-Making: is the EU Consumer Law Fit for the Emerging Technology?’ (2024) Challenges of the Knowledge Society 148.

[5] LH Scholz, ‘Algorithmic Contracts’ (2017) 20 Stan. Tech. L. Rev. 128. The author argues that in these contracts an algorithm determines a party’s obligations: some contracts are algorithmic because the parties used algorithms as negotiators before contract formation, choosing which terms to offer or accept; others are algorithmic because the parties agree that an algorithm to be run at some time after contract formation will serve as a gap-filler.

[6] DJ Brand, ‘Algorithmic Decision-Making and the Law’ (2020) 12(1) J Democr. Open Gov. 114.

[7] J Linarelli, ‘Advanced Artificial Intelligence and Contract’ (2019) 24 Unif. L. Rev. 1.

[8] S Grundmann/P Hacker, ‘Digital Technology as a Challenge to European Contract Law. From the Existing to the Future Architecture’ (2017) 13(3) ERCL 255.

[9] European Law Institute, Guiding Principles and Model Rules on Algorithmic Digital Assistants in Consumer Contracts (2021) https://www.europeanlawinstitute.eu.

[10] B Turner, ‘The Smarts of “Smart Contracts”: Risk Management Capabilities and Applications of Self-Executing Agreements’ (2021) 2(1) J Law & Technol. 89. The author finds that the use of smart contracts appears to yield real benefits from the automation of common contractual processes, particularly in the integration of risk control techniques into the fabric of the contract. To support his thesis, he examines practical cases, each of which raises further practical and legal issues associated with implementation.

[11] D Barnhizer, ‘Contracts and Automation: Exploring the Normativity of Automation in the Context of U.S. Contract Law and E.U. Consumer Protection Directives’ (2016) 9 Yearb. Antitrust Regul. Stud. 15.

[12] S Williams, ‘Predictive Contracting’ (2019) 2 Colum. Bus. L. Rev. 623.

[13] G Alper, ‘Contract Law Revisited: Algorithmic Pricing and the Notion of Contractual Fairness’ in D Restrepo Amariles (ed), (2022) 47 Comput. Law Secur. Rev. 1.

[14] H Eidenmüller, ‘The Advent of the AI Negotiator: Negotiation Dynamics in the Age of Smart Algorithms’ (2025) 20(1) J Bus. Technol. Law 1. The author offers a cost-benefit analysis of the impact of artificial intelligence on the contracting sector, stating verbatim that «smart algorithms will drastically reduce information and transaction costs, improve the efficiency of negotiation processes, and identify optimal value creation options. The expected net welfare benefit for negotiators and societies at large is huge. At the same time, asymmetric information will assist the algorithmic negotiator, allowing them to claim the biggest share of the pie. The greatest beneficiary of this information power play could be BigTech and big businesses more generally. These negotiators will increasingly deploy specialized negotiation algorithms at scale, exploiting information asymmetries and executing value claiming tactics with precision. In contrast, smaller businesses and consumers will likely have to settle for generic tools like the free version of ChatGPT. However, who will ultimately be the big winners in AI-powered negotiations depends crucially on the laws that regulate the market for AI applications».

[15] L Ouyang/Y Yuan/FY Wang, ‘Learning Markets: An AI Collaboration Framework Based on Blockchain and Smart Contracts’ (2022) 9(16) IEEE Internet Things J. 273.

[16] M Herbosch, ‘Contracting with Artificial Intelligence: A Comparative Analysis of the Intent to Contract’ (2023) 88 RabelsZ 1.

[17] Ex multis: CHW Mak/J Mante, ‘Blockchain and Smart Contracts: a Game Changer in Mediation?’ (2023) Asian J Mediation 47.

[18] AK Kaliyamurthy/HJ Schau, ‘How Algorithms Constrain Consumer Experience’ (2025) J Consum. Res. 1.

[19] W Deng/H Wei/T Huang/C Cao, ‘Smart Contract Vulnerability Detection Based on Deep Learning and Multimodal Decision Fusion’ (2023) 23(16) Sensors 1. Through a study of smart contracts applied to the construction sector, the authors analyse the risks and vulnerabilities arising from the use of these new technologies.

[20] P Księzak/S Wojtczak, ‘Artificial Intelligence and Legal Subjectivity’ in Toward a Conceptual Network for the Private Law of Artificial Intelligence, vol. 51 Law, Governance and Technology Series (2023) 13.

[21] P Henz, ‘Ethical and Legal Responsibility for Artificial Intelligence’ (2021) 1(2) Discover Artif. Intell. 1. For the author «AI acts on information to predict the outcomes of its potential decisions. The person or organization who is using the AI, is responsible that the algorithms get feed by adequate datasets to reduce biased decisions. Not only datasets can be biased, also the algorithm itself. Best protection is to have diverse and inclusive programmer groups, and, of course, regular auditing of the algorithm, including its integration into product or system».

[22] MA Lipchanskaya/MA Eremina/SA Privalov, ‘Artificial Intelligence Responsibilities: Ethical And Legal Issues’ in I Savchenko (ed), Freedom and Responsibility in Pivotal Times: European Proceedings of Social and Behavioural Sciences (2022) 125. This doctrine considers that «holding manufacturers or developers solely responsible for the harm caused by their actions is certainly possible, but hardly advisable, as the result of training in such a case becomes dependent not only on the pre-installed in its data, but also on numerous environmental factors, which are simply impossible to predict. As we know, with respect to any offense, the factors (determinants) contributing to their commission are a generalized concept, the content of which are the causes and conditions of offenses».

[23] C Twigg-Flesner, ‘Consumers, Digital Delegates, Contract Formation and Consumer Law’ in L Di Matteo/C Poncibò/G Howells (eds), The Cambridge Handbook of AI and Consumer Law: Comparative Perspectives (2024) 1. The author introduces the concept of the “digital delegate” (for an analysis of this figure, please refer to the cited text), stating that «However, the interests of consumer protection necessitate several requirements that need to be met to give consumers sufficient control over digital delegates. It seems the best way of stipulating these requirements in law is through regulation of digital delegates directly, combined with consumer rights in respect of non-conformity/lack of satisfactory quality where this is not done. Overall, though, digital delegates are unlikely to pose significant difficulties for existing law. Existing consumer law would require some recalibration, but it is unlikely that it would need fundamental recasting. Adjustments would be needed to reflect the use of digital delegate and the potential for their manipulation that might be exploited by some traders or third parties. Furthermore, whilst digital delegates could compensate for some of the typical limitations (both cognitive and resulting from weaker bargaining positions) which are usually advanced to justify consumer law rules, this would not seem to necessitate a full overhaul of existing consumer law – at least not until very powerful and robust digital delegates become available and are very widely adopted by consumers».

[24] G Sartor, ‘Cognitive Automata and the Law’ (2009) 17(4) Artificial Intelligence and Law 253.

[25] EWK Lim, ‘Law by Algorithm’ (2023) 43(3) Oxf. J Legal Stud. 650.

[26] T Braun, ‘Liability for Artificial Intelligence Reasoning Technologies – a Cognitive Autonomy that does not Help’ (2025) Corp. Gov. 1.

[27] M Oliver, ‘Contracting by Artificial Intelligence: Open Offers, Unilateral Mistakes, and why Algorithms are not Agents’ (2021) 2(1) ANU J Law Technol. 45.

[28] D Powell, ‘Autonomous Systems as Legal Agents: Directly by the Recognition of Personhood or Indirectly by the Alchemy of Algorithmic Entities’ (2020) 18 Duke Law Technol. Rev. 306.

[29] K Geddes, ‘The Death of the Legal Subject: How Predictive Algorithms Are (Re)constructing Legal Subjectivity’ (2022) 25 Vand. J Ent. & Technol. Law 1.

[30] C Amato, ‘Responsabilità da Inadempimento dell’Obbligazione’ in E Navarretta (ed), Codice della responsabilità civile. Le fonti del diritto italiano. I testi fondamentali commentati con la dottrina e annotati con la giurisprudenza (2021) 44.

[31] G Dezio, Cyberbullismo e Profili di Responsabilità Civile (TAB Edizioni 2020) 7.

[32] R Hasting, ‘Smart Contracts: Implications on Liability and Competence’ (2019) 28 U. Miami Bus. L. Rev. 358.

[33] P Hacker/R Krestel/S Grundmann/F Naumann, ‘Explainable AI under Contract and Tort Law: Legal Incentives and Technical Challenges’ (2020) 28 Artif. Intell. Law 415.

[34] ZY Lee/ME Karim/K Ngui, ‘Deep Learning Artificial Intelligence and the Law of Causation: Application, Challenges and Solutions’ (2021) 30(3) Inf. Commun. Technol. Law 255.

[35] K Ziemianin, ‘Civil Legal Personality of Artificial Intelligence: Future or Utopia?’ (2021) 10 Internet Policy Rev. 1.

[36] S Pagliantini, ‘Trasparenza Contrattuale’ (2012) 5 Annali Enc. Dir. 1280.

[37] G Pignataro, ‘Etica, Buona fede e Governo dell’Intelligenza Artificiale Generativa’ (2024) 4 Dir. Civ. Compar. 7.

[38] A Borselli, ‘Smart Contracts in Insurance: A Law and Futurology Perspective’ in P Marano/K Noussia (eds), InsurTech: A Legal and Regulatory View (2020) 101.

[39] S Koos, ‘Artificial Intelligence as Disruption Factor in the Civil Law’ (2021) 36(1) Yuridika 235.

[40] P Hacker et al. (n 33).

[41] S Wachter/B Mittelstadt/L Floridi, ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’ (2017) 7 International Data Privacy Law 76.

[42] G Alpa, ‘Il contratto in Generale. Principi e problemi’ (2021) 2 Riv. Dir. Civ. 1.

[43] M Barela, ‘Accordo, Consenso e Assenso (Brevi Note nella Prospettiva della Crisi del Contratto)’ (2018) 2 Riv. Dir. Priv. 225.

[44] D Op Heij, ‘The Digital Content Contract in a B2C Legal Relationship from a European Consumer Protection Perspective’ (2022) 11(2) J EuCML 53.

[45] S Faraoni, ‘A Contract Law Perspective on Manipulative Persuasive Technology Led by an Artificial Intelligence’ (2024) 1.

[46] M Rizzi/N Skead, ‘Algorithmic Contracts and the Equitable Doctrine of Undue Influence: Adapting Old Rules to a New Legal Landscape’ (2020) 14(3) J Equity 1.

[47] Ex multis: J Fortuna, ‘Contractual Mistake, Smart Contract and Artificial Intelligence from a Comparative Perspective’ (2024) 15(2) Comp. Law Rev. 69.

[48] A Giannopoulou, ‘Algorithmic Systems: The Consent is the Detail?’ (2020) 9(1) Internet Policy Rev. 1.

[49] DW Slemmer, ‘Artificial Intelligence & Artificial Prices: Safeguarding Securities Markets from Manipulation by Non-Human Actors’ (2019) 14(1) Brooklyn J Corp. Fin. & Com. Law 149.

[50] L Wanting, ‘The Contract in AI Era: Vulnerability and Risk Allocation’ (2021) 9 China Legal Sci. 125.

[51] R Calo, ‘Digital Market Manipulation’ (2014) 82 George Washington Law Review 995.

[52] L Di Donna, ‘Autonomia Contrattuale’ in L Di Donna (ed), Casi di diritto contrattuale (2009) 1.

[53] See: F Criscuolo, ‘Autonomia Negoziale e Autonomia Contrattuale’ (2008) 1; G Smorto, ‘Autonomia Contrattuale e Diritto Europeo’ (2007) 2 Europa e dir. priv. 325.

[54] E Corren, ‘The Consent Burden in Consumer and Digital Markets’ (2023) 36(2) Harv. J Law & Technol. 551; S Grundmann, ‘Information, Party Autonomy and Economic Agents in European Contract Law’ (2002) 39(2) Common Mark. Law Rev. 269.

[55] L Niglia, ‘The Structural Transformation of European Private Law: A Critique of Juridical Hermeneutics’ (2023) 1 Mod. Stud. Eur. Law 20.

[56] M Ebers, ‘Liability for Artificial Intelligence and EU Consumer Law’ (2021) 12 J Intellect. Prop. Inf. Technol. & Electron. Commer. Law 204; K Niziol, ‘The Challenges of Consumer Protection Law Connected with the Development of Artificial Intelligence on the Example of Financial Services (Chosen Legal Aspects)’ (2021) 192 Procedia Comput. Sci. 4103.

[57] R Badescu/B Hrib, ‘Consumer’s Perception on Information Overload in a Digital Society’ in R Pamfilie/V Dinu/L Tăchiciu/D Pleșea/C Vasiliu (eds), 7th BASIQ International Conference on New Trends in Sustainable Business and Consumption (2021) 793.

[58] M Amazigh, ‘From Persuasion to Automation: How Digital Culture Redefines Consumer Behavior’ (2025) 4(1) Int. J Human Stud. 117.

[59] MA Islam/SI Fakir/SB Masud/MD Hossen/MT Islam/MR Siddiky, ‘Artificial Intelligence in Digital Marketing Automation: Enhancing Personalization, Predictive Analytics, and Ethical Integration’ (2024) 8(6) Edelweiss Appl. Sci. Technol. 6498.

[60] H Sargeant, ‘Algorithmic Decision-Making in Financial Services: Economic and Normative Outcomes in Consumer credit’ (2023) 3 AI Ethics 1295.

[61] T Rodriguez de Las Heras Ballell, ‘Digital Vulnerability and the Formulation of Harmonised Rules for Algorithmic Contracts: A Two-Sided Interplay’ in C Crea/A De Franceschi (eds), The New Shapes of Digital Vulnerability in European Private Law (2024) 259.

[62] B Singh/C Kaunert, ‘Future of Digital Marketing: Hyper-Personalized Customer Dynamic Experience with AI-Based Predictive Models’ in Revolutionizing the AI-Digital Landscape. A Guide to Sustainable Emerging Technologies for Marketing Professionals (2024) 188.

[63] F Martin-Bariteau/M Pavlovic, ‘AI and Contract Law’ in F Martin-Bariteau/T Scassa (eds), Artificial Intelligence and the Law in Canada (2021) 3.

[64] J Goossens, ‘Blockchain and Democracy: Challenges and Opportunities of Blockchain and Smart Contracts for Democracy in the Distributed, Algorithmic State’ in O Pollicino/G De Gregorio (eds), Blockchain and Public Law (2021) 77.

[65] A Kubiak Cyrul, ‘Challenges of Smart Contracts in Contract Law: Do Algorithmic Tools Undermine Human Autonomy?’ in L Miraut Martín/M Zalucki (eds), Artificial Intelligence and Human Rights (2021) 327.

[66] L Niglia (n 55) 20.

[67] C Coglianese/E Lampmann, ‘Contracting for Algorithmic Accountability’ (2021) 6 Admin. Law Rev. Accord 175.

[68] C Frattone, ‘Algorithmic Mistakes in Machine-Made Contracts: The Legal Consequences of Errors in Automated Contract Formation’ (2024) 28(3–4) Unif. L. Rev. 407.

[69] A Abat i Ninet, ‘Freedom and Personal Autonomy as the Foundation of Private International Law and the Cornerstone of Individual Rights in the AI Era’ (2025) 11(1) Journal of Liberty and International Affairs 22.

[70] EA Kirillova/OE Blinkov/NI Ogneva/AS Vrazhnov/NV Sergeeva, ‘Artificial Intelligence as a New Category of Civil Law’ (2020) 11 Journal of Advanced Research in Law and Economics 91.