
Justice Goh Yihan: Speech given at the Inaugural Singapore-France Judicial Roundtable

INAUGURAL SINGAPORE-FRANCE JUDICIAL ROUNDTABLE
"The Legal Implications of Recent Developments in Artificial Intelligence"
Wednesday, 31 January 2024
The Honourable Justice Goh Yihan(1)
Supreme Court of Singapore


The Honourable First President of the Court of Cassation, Mr Christophe Soulard
The Honourable the Chief Justice Sundaresh Menon
President of Chamber, Sandrine Zientara
Judge Isabelle Goanvic
Judge Referee Caroline Azar
Colleagues

Introduction

1                 In this presentation, I will touch on the legal implications of some recent developments in AI from a common law perspective, with a focus on relevant Singapore developments.

2                 More specifically, I will cover three capabilities that many advanced AI systems can be said to possess. In short, these are AI systems’ capabilities to learn, to create, and to decide.

AI’s capability to learn

3                 I turn first to consider the capacity of some AI systems to “learn”. Broadly speaking, many of today’s AI systems are considered machine learning systems. To understand the significance of machine learning systems, it is useful to contrast them with rules-based AI systems.

4                 Rules-based AI systems are more basic. Such a system takes a human user’s input, and then applies predetermined rules to that input, so as to make decisions or recommendations. One example used in law is the expert system. For instance, Singapore’s Legal Aid Bureau has a chatbot, the Divorce Assets Informative Division Estimator (or “Divorce AIDE”), which can give layman users an estimate of “how much [they] can potentially receive as [their] share of the matrimonial assets post-divorce”.(2)
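
To make the contrast concrete, the following is a minimal sketch of a rules-based system. The rules, figures and inputs are entirely hypothetical and bear no relation to Divorce AIDE’s actual logic; the point is simply that every rule is authored in advance by a human programmer.

```python
# A hypothetical rules-based estimator: human-authored rules are applied
# to the user's input; the system never "learns" anything from data.
def estimate_share(years_of_marriage: int, direct_contribution_pct: float) -> float:
    """Apply fixed, predetermined rules to estimate a percentage share."""
    if years_of_marriage >= 20:
        return 50.0  # long marriage: presume an equal division
    if years_of_marriage >= 10:
        # medium-length marriage: blend equal division with direct contributions
        return (50.0 + direct_contribution_pct) / 2
    return direct_contribution_pct  # short marriage: follow direct contributions

print(estimate_share(years_of_marriage=12, direct_contribution_pct=40.0))  # 45.0
```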

5                 In contrast, the core idea in machine learning is that examples, also called “input data”, are gathered for the relevant AI system’s algorithm to program itself based on experience.(3) The algorithm does so by applying statistics and probability to the input data to try to produce a model that identifies the associations and patterns in the data.(4)
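
By way of illustration, the sketch below shows the essence of this process under invented example data: no decision rules are written by hand; instead, a simple statistical procedure (here, least-squares fitting of a straight line) derives a model from the examples.

```python
# Machine learning in miniature: the "model" (a slope and an intercept) is
# derived statistically from example input data, not written as rules.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]  # inputs ("examples")
ys = [2.1, 3.9, 6.2, 8.1, 9.8]  # observed outputs

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
intercept = mean_y - slope * mean_x  # the learned model is (slope, intercept)

print(f"learned model: y = {slope:.2f}x + {intercept:.2f}")
print(f"prediction for x = 6: {slope * 6 + intercept:.2f}")
```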

6                 One legal controversy that has arisen from the ability of machine learning systems to learn concerns whose state of mind is relevant if a machine learning system arrives at a model that causes damage to an individual.

7                 The answer under the common law could turn on the nature of the AI system. Where the machine learning system is deterministic, the answer could lie in the programmers’ state of mind. The Singapore Court of Appeal in Quoine v B2C2(5) addressed the question of the relevant state of mind in cases concerning deterministic AI systems. For deterministic AI systems in general, the Court of Appeal held that, “working backwards from the output that emanated from the programs”, the relevant state of knowledge is that of the programmers.(6)

8                 Yet, because the court in Quoine confined its reasoning to deterministic AI systems, the logic outlined above may not apply if the AI system in question is non-deterministic. Some possibilities as to how the law could deal with this question of state of mind include attributing the state of mind of either the programmer or user to the machine learning system, regarding the system as an agent for the relevant operator,(7) or perhaps even creating a novel type of intention for non-deterministic machine learning systems.
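
The distinction can be illustrated with a short, purely hypothetical sketch (it is not, of course, the trading algorithm at issue in Quoine): a deterministic program always maps the same input to the same output, so its behaviour can be worked backwards to its programmers’ design, whereas a non-deterministic program’s output varies even on identical inputs.

```python
# Deterministic vs non-deterministic behaviour on the same input.
import random

def deterministic_quote(best_bid: float) -> float:
    # Fixed rule chosen by the programmer: always quote 1% above the best bid.
    return round(best_bid * 1.01, 2)

def non_deterministic_quote(best_bid: float) -> float:
    # Randomised spread: even the programmer cannot predict a given run.
    return round(best_bid * (1 + random.uniform(0.0, 0.02)), 2)

print(deterministic_quote(100.0), deterministic_quote(100.0))          # always 101.0
print(non_deterministic_quote(100.0), non_deterministic_quote(100.0))  # may differ
```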

AI’s capability to create

9                 Another capability that some AI systems possess, especially machine learning systems, is the ability to, loosely speaking, “create”.

10                 The uniqueness of generative AI systems is that their models, instead of merely spotting similarities in the training data, predict missing data in the media that they have been provided with. A prominent example of such an AI system is ChatGPT.(8)

11                 Yet it is important to understand the limits of generative AI systems. They do not actually “understand” the “meaning” of the input texts in the same way that a human does – they are merely making probabilistic predictions as to the word that should follow a given string of text. This explains some of ChatGPT’s limitations: for instance, its propensity for what has been referred to as “hallucinations”(9) – entirely inaccurate or fabricated answers.
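
A toy bigram model, sketched below with an invented training text, captures the underlying mechanism: the system counts which word follows which in its training text and predicts the statistically most likely successor. Real systems like ChatGPT use vastly larger neural networks, but the task remains probabilistic next-word prediction, with no human-like grasp of meaning.

```python
# A toy next-word predictor: count word successions in training text, then
# predict the most probable successor. No "understanding" is involved.
from collections import Counter, defaultdict

training_text = "the court held that the court may decide that the parties agree"
words = training_text.split()

successors = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    successors[current][nxt] += 1

def predict_next(word: str) -> str:
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))  # "court": the most frequent successor in the data
```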

12                 The ability of generative AI systems to “create” works gives rise to a number of intellectual property questions, two of which I will briefly touch on today.

13                 The first concerns how society should deal with AI “inventions” and “works”. One option is to conclude that, as the invention or work has no human inventor or author, it should fall into the public domain.(10) For example, the UK Supreme Court in Thaler v Comptroller-General of Patents, Designs and Trade Marks(11) ruled that because an inventor within the meaning of the UK Patents Act 1977 must be a natural person, it follows that Dr Thaler’s AI, DABUS, which is not a person at all, cannot be an inventor under the Act. This follows the approach taken in other jurisdictions. While the Singapore courts have yet to consider if an AI can be an inventor, s 19(1) of the Patents Act 1994 expressly refers to a “person” who may be able to make an application for a patent. It makes no mention of a “non-person”. Similarly, in the area of copyright law, the Singapore Court of Appeal has held in Asia Pacific Publishing Pte Ltd v Pioneers & Leaders (Publishers) Pte Ltd(12) that copyright protection will only arise where the work was created by a human author or authors.

14                 Against this, it has been argued that, as a matter of policy, without intellectual property rights over AI-created works, there would be “no tangible incentive for developers of AI machines to continue creating, using, and improving their capabilities”.(13) In order to prevent this undesirable outcome, there have emerged various law reform suggestions regarding how AI-created works can be protected. These include comparing AI systems to employees creating works in the course of their employment, attributing authorship to AI systems, or creating sui generis rights for AI-created works.(14)

15                 However, these suggestions uncritically accept that the policy reason behind protecting human-created works (ie, incentivising creative works) applies equally to protecting AI-created works. This may not be the case. For instance, this reasoning overlooks the possibility of new business models creating other incentives for AI developers to continue developing generative AI systems. In this regard, it has been observed that in the competitive landscape for AI music generation, a number of companies are not betting their future on the possibility of claiming copyright on the music that their AI systems generate; rather, their services are based on providing access to the generative AI system.(15)

16                 A second question in the context of generative AI is whether the training of generative AI systems using existing (human-created) works infringes the copyright in those works. This question has come into the spotlight after the New York Times sued OpenAI on the basis that OpenAI’s use of the New York Times’ articles constituted copyright infringement.

17                 In assessing this question, we must be alert to the way generative AI is framed. For instance, a broad-brush characterisation of the machine learning process, where the news articles are said to be merely “training” the AI, might lead to scepticism regarding the New York Times’ claim. After all, even human authors seek inspiration in the writings of other human authors. Thus, the use of texts to train generative AI systems could be “transformative” fair use under US copyright law.

18                 Yet a more granular understanding of how machine learning works could suggest a different conclusion. If it is true that large quantities of input texts from the Internet are passed through and stored on generative AI systems when training them,(16) then this could constitute copying. As for the question of fair use, the notion that generative AI systems like ChatGPT engage in “transformative use” may be more difficult to sustain when one recalls that generative AI systems do not really think like humans. Indeed, generative AI systems like ChatGPT are in reality producing mathematical models which are based almost entirely on the training data that is provided to them.

19                 In contrast, under changes introduced by the new Singapore Copyright Act 2021, there is now a rather permissive text and data mining exception which, subject to the requirement of lawful access, extends to commercial uses and cannot be contractually excluded or modified. This exception is said to increase the availability of AI training data.

20                 Beyond these intellectual property questions, I should mention that, just two weeks ago, the Singapore Infocomm Media Development Authority (IMDA) and the AI Verify Foundation introduced a draft Model AI Governance Framework for Generative AI. This Framework adopts a systematic and balanced approach to addressing the concerns posed by generative AI while continuing to facilitate innovation. It is based on nine dimensions, including Accountability and AI for the Public Good. Like previous frameworks, it is not binding but is intended to guide the continued adoption of AI.

AI’s capability to make decisions

21                 Finally, I will touch on the capability of many AI systems to make decisions independently. This can be loosely described as “autonomy”.

22                 AI “autonomy” has been cited as a challenge to existing liability regimes.(17) In Singapore, the Land Transport Authority (“LTA”) has approved the testing of some autonomous vehicles (“AVs”) by various private operators in certain neighbourhoods as part of the LTA’s upcoming AV pilot deployment scheme.(18) But how would the common law ascribe liability when an AV causes an accident?

23                 One might suggest that the AV’s safety operator should be responsible.(19) In support of this, one might put forward the instrumentalist argument that responsibility should fall on the driver because he or she benefits from using the AV.(20) Moral arguments include the idea that the human is expected to retain ultimate supervision over the operation of the vehicle because the human is still the “driver”.(21) Conversely, there is also the possibility of holding the developers and fleet operators of AVs liable.(22)

24                 Some commentators have observed flaws in each of the above solutions, with the result that there is a “liability gap”. However, instead of viewing “autonomous” AI systems as black boxes into which liability “disappears”, we could recast them as complex systems which are developed and acted upon by a range of parties, who thereby collectively influence and control the risks arising from the activities of such AI systems.

25                 To take the example of AVs, they are controlled by a “complex system of interconnected software elements that graft onto hardware systems”.(23) As such, fault and liability could be assigned to different actors in different proportions depending on the extent to which each party contributed to the relevant risk which materialised in an accident.(24)
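
As a purely illustrative sketch, such an apportionment could operate as follows, with each party’s share of liability tracking its contribution to the risk that materialised; the parties and figures are invented for illustration only.

```python
# Hypothetical control-centric apportionment: liability shares follow each
# party's contribution to the risk that materialised. Figures are invented.
risk_contributions = {
    "software developer": 0.5,  # e.g. a defect in the perception module
    "fleet operator": 0.3,      # e.g. inadequate maintenance or monitoring
    "safety operator": 0.2,     # e.g. a lapse in human supervision
}

total = sum(risk_contributions.values())
for party, weight in risk_contributions.items():
    print(f"{party}: {weight / total:.0%} of the damages")
```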

26                 Therefore, while it is true that complex AI systems like AVs challenge traditional common law liability regimes, this may not be in the manner that is commonly supposed. Rather than a “liability gap”, what might be important is the distribution of liability, according to the degree and manner of control over risk, amongst parties across the AV supply and consumption chain.(25) In a sense, the problem is not that there is no longer a driver, but that “in some sense, everyone is a driver, though in a different way from the past and from each other.”(26) This may not require a wholesale reimagining of the common law liability regime but, rather, a more careful understanding of the relevant AI systems in individual cases.

Conclusion

27                 To conclude, there are many other issues occasioned by the rise of AI systems in the larger context of the interaction between law and technology, and I am sure we will benefit from a mutual exchange of ideas between our two jurisdictions on these issues.

28                 Thank you.


(1)       I am grateful to my law clerk, Adam Goh, for his excellent assistance with the preparation of this paper.
(2)       Response by Senior Parliamentary Secretary Rahayu Mahzam, Committee of Supply Debate 2023 (27 February 2023): https://www.mlaw.gov.sg/news/parliamentary-speeches/response-by-sps-rahayu-mahzam-at-committee-of-supply-debate-2023/. The link to access Divorce AIDE can be found here: https://lab.mlaw.gov.sg/resources/divorce-aide-lab-matrimonial-asset-division-estimator/.
(3)       Häuselmann in Law and Artificial Intelligence at p 48.
(4)       Häuselmann in Law and Artificial Intelligence at pp 49–50.
(5)       Quoine Pte Ltd v B2C2 Ltd [2020] 2 SLR 20 (Quoine).
(6)       Quoine at [98].
(7)       Vincent Ooi, “Contracts Formed by Software: An Approach from the Law of Mistake” (2022) J.B.L. 97 discusses this suggestion in the context of contract formation but rejects it for a variety of reasons.
(8)       See Susman Godfrey LLP and Rothwell, Figg, Ernst & Manbeck, P.C., Plaintiff’s Complaint in The New York Times Company v Microsoft Corporation, OpenAI, Inc. and others, US District Court, Southern District of New York (filed 27 December 2023) (“NYT Complaint”) at [75]−[81]. Accessible at Courthouse News Service: https://www.courthousenews.com/wp-content/uploads/2023/12/new-york-times-microsoft-open-ai-complaint.pdf.
(9)       Menon CJ, “Legal Systems in a Digital Age: Pursuing the Next Frontier”, speech at the 3rd Annual France-Singapore Symposium on Law and Business (11 May 2023): https://www.judiciary.gov.sg/news-and-resources/news/news-details/chief-justice-sundaresh-menon-speech-delivered-at-3rd-annual-france-singapore-symposium-on-law-and-business-in-paris-france.
(10)       Jan Smits and Tijn Borghuis, “Generative AI and Intellectual Property Rights” in Law and Artificial Intelligence (Bart Custers and Eduard Fosch-Villaronga (eds)) (Springer, 2022) (“Smits and Borghuis in Law and Artificial Intelligence”) at p 330.
(11)       [2023] UKSC 49.
(12)       [2011] SGCA 37.
(13)       Smits and Borghuis in Law and Artificial Intelligence at p 330.
(14)       Smits and Borghuis in Law and Artificial Intelligence at pp 330−335.
(15)       Smits and Borghuis in Law and Artificial Intelligence at p 337.
(16)       NYT Complaint at [83]−[101].
(17)       Soh, “Legal Dispositionism” at p 9. Also, see generally Silvia De Conca, “Bridging the Liability Gaps: Why AI Challenges the Existing Rules on Liability and How to Design Human-empowering Solutions” in Law and Artificial Intelligence (Bart Custers and Eduard Fosch-Villaronga (eds)) (Springer, 2022) (“De Conca in Law and Artificial Intelligence”) at pp 249−253, even though the word “autonomy” is not explicitly used.
(18)       Smart Nation Singapore, “Driving Into the Future”: https://www.smartnation.gov.sg/initiatives/transport/autonomous-vehicles/; Land Transport Authority, “Autonomous Vehicles”: https://www.lta.gov.sg/content/ltagov/en/industry_innovations/technologies/autonomous_vehicles.html. The goal of having AV buses is to improve first-and-last-mile connectivity, and with it convenience and accessibility for consumers. The anticipated benefits include fewer accidents due to human error, optimal use of road space, job creation, and more efficient use of manpower.
(19)       I note that the employer of the driver or safety operator could be held liable under the doctrine of vicarious liability.
(20)       De Conca in Law and Artificial Intelligence at p 250.
(21)       Madeleine Clare Elish, “Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction” (2019) 5 Engaging Science, Technology, and Society 40 criticises this trend at 49 (in the context of autopilots used in aviation) and 53 (in the context of the potential extension of said philosophy to AVs).
(22)       Jerrold Soh, “Towards a Control-Centric Account of Tort Liability for Automated Vehicles” (2021) 26 Torts L.J. (“Soh, ‘Control-Centric Account’”) at pp 23−24.
(23)       Soh, “Control-Centric Account” at p 20. See also Liu, et al, Creating Autonomous Vehicle Systems (Morgan & Claypool Publishers, 2018), which makes the point at p 13 that autonomous driving “is not just one single technology, it is an integration of many technologies.”
(24)       See generally Soh, “Control-Centric Account”, especially at pp 13−20, which advances this thesis.
(25)       Soh, “Control-Centric Account” at pp 24−25.
(26)       Soh, “Control-Centric Account” at p 26.

