Could an Arbitral Award Rendered by AI Systems be Recognized or Enforced? Analysis from the Perspective of Public Policy
Guillermo Argerich, María Blanca Noodt Taquela (Universidad de Buenos Aires) and Juan Jorge (Marval, O’Farrell & Mairal)/February 6, 2020
Questions About the (Inexorable?) Future
Could artificial intelligence (AI) carry out decision-making? Is it just a matter of time? Will AI replace human arbitrators? And will emotional intelligence always trump AI, or will AI enhance the arbitral process?
Despite the topicality of the subject, arbitration rules remain silent on AI. Nor is there any express provision requiring that arbitrators be human. Does this open the door to AI? In particular: should the recognition or enforcement of an award rendered by such systems be refused on the public policy grounds contained in the Convention on the Recognition and Enforcement of Foreign Arbitral Awards (New York, 1958)?
Public Policy under the New York Convention: Importance of Global Values
According to Article V(2)(b) of the New York Convention, the competent authority in the country where recognition and enforcement are sought may refuse them if the arbitral award would be contrary to the public policy of that country.
To answer whether the use of AI for decision-making would imply a violation of such public policy, it is worth starting from the premise that public policy is a variable concept, which continuously evolves to meet the changing needs of political, social, cultural and economic contexts.
The notion of public policy has been considered vague and hard to define. To overcome this difficulty, we may resort to globalization, which first appeared as an economic phenomenon but has since manifested itself in numerous aspects, including political, cultural, legal and ideological ones. Thus, we can speak of global values that prevail in a global society and that influence the assessment of international public policy when deciding on the recognition and enforcement of foreign arbitral awards.1) Indeed, assuming that countries share certain essential values, it is difficult to imagine a successful public policy defence, since the prevailing conceptions will coincide in the country of the seat of the arbitration, the country whose law governs the substance of the dispute, and the country where recognition and enforcement of the foreign arbitral award is sought.
Furthermore, the existence of global values may be one of the reasons why there are relatively few decisions in which public policy is debated as a ground for refusing recognition or enforcement of foreign arbitral awards. In addition, we believe that the existence of global values recognized by most countries may explain why some awards set aside in the country of the seat of the arbitration were later enforced in other countries.2) When the courts of the seat do not respect these global values, for example by annulling an award on the basis of conceptions that deviate from them, the "international community" nonetheless allows its enforcement in other countries. Notably, this case law arose only in the mid-1990s, precisely when globalization was at its peak; there is no record of judgments enforcing annulled awards during the first decades of application of the New York Convention.
Within these global values, party autonomy is of particular importance, as it is a cornerstone of arbitration. This is reflected in the requirement of a valid arbitration agreement for the parties to be bound to arbitrate, and in the fact that party autonomy prevails over most arbitration rules, provided the essential principles of due process are respected.
AI and Decision-Making in Arbitration
Bearing in mind that the principles of public policy reflect the needs and values upheld by a society at a given time, it should be noted that AI applied to decision-making is still at an embryonic stage, and some obstacles may therefore stand in the way of recognition or enforcement of an award rendered by such systems. Arbitration practitioners could raise ethical objections based on the absence of human qualities (e.g., emotions), or due process defences based on the so-called "black box", that is, the impossibility of directly explaining the results or predictions of an AI system.
Emotions such as empathy, or even anger, play an important role in legal decision-making. Moreover, we seem to assume that there is an intrinsic value in being heard by a human being who is subject to duties of justice and respect.3) Nappert, however, qualifies this point: it is no less true that such emotions often lead to irrational outcomes and resolutions contrary to the ideal of justice.4)
Despite a certain scepticism, some authors consider that if the applicable rules do not expressly prohibit AI systems from acting as arbitrators, and the parties have agreed to their use, a public policy defence would not succeed in defeating recognition of the award.5) Others go further and relativize any prohibition that might exist in this sense: if the parties trust AI, who has the authority to stop them from using it, particularly in arbitration, where freedom of choice is paramount? Ultimately, the answers will depend on local courts' receptiveness to technology and on the weight attached to a global value such as party autonomy.
An interesting paper based on the Korean legal regime concluded that an arbitral award rendered by AI could face certain obstacles (or at least queries) on public policy grounds, given that the Korean Arbitration Act, subsection 36(2)2(b), provides that the court may set aside an award that "is in conflict with the good morals or other public policy of the Republic of Korea."6)
As can be seen, normative certainty is extremely important if AI systems are to be used as arbitrators. For AI to be successfully integrated into the international arbitration system, its definition should be crystallized so that it can be offered as an option free of practical and theoretical uncertainties.
As Kemelmajer de Carlucci points out, novelty and transience characterize an increasingly complex society, and in order to adapt to it, the law must become more elastic and receptive to the interference of different variables.7)
At this point, two possible paths lie ahead: creating an avant-garde legal framework for arbitration and AI, or amending existing international treaties (along with national legislation and arbitration rules). The latter option does not seem the most appropriate, especially with respect to the New York Convention.
Rightly, in 2006, the UNCITRAL Working Group II (Arbitration) warned that “formally amending or creating a protocol to the New York Convention was likely to exacerbate the existing lack of harmony in interpretation and that adoption of such a protocol or amendment by a number of States would take a significant number of years and, in the interim, create more uncertainty”. Therefore, UNCITRAL prepared a recommendation concerning the interpretation of article II, paragraph 2, and article VII, paragraph 1, of the New York Convention. Here, the soft law technique triumphed over a hard law solution that probably would have not prospered.
Should AI regulation follow the same path? Instead of amending the New York Convention, some authors propose certain amendments to the UNCITRAL Secretariat Guide on the New York Convention, as it "intends to be a more dynamic tool that allows for the adjustment of the provisions of the Convention, to the mutable necessities of the international arbitration system and its changing application by local courts".8)
Although the future is unknown, one certainty seems unavoidable: AI must be studied and regulated, and either admitted or prohibited (totally or partially), bearing in mind both justice and efficiency.
Reference: Kluwer Arbitration Blog