
Asia-Pacific (APAC) Legal Congress 2026

Adapting Commercial Law for the Advent of AI: Insights from the Singapore International Commercial Court

Thursday, 09 April 2026

The Honourable Justice Philip Jeyaretnam 
Judge of the High Court 
Supreme Court of Singapore 
President, Singapore International Commercial Court

I. Introduction

1. In this short address, I will make and then elaborate on two points: first, that the advent of AI brings with it new and difficult legal questions; and secondly, that the resolution of these questions will require nuanced and sophisticated judicial analysis – analysis that will need to draw on multiple legal perspectives and employ strong case management, especially of relevant expert evidence.

II. The Legal Challenges of AI

2. What are these legal challenges of AI? Previous technological advances offered tools for human-directed capability, whether that capability was physical (such as the wheel thousands of years ago) or mental (such as the abacus). By contrast, AI may substitute for human decision-making, at least to some extent. AI systems operate through pattern recognition and probabilistic matching rather than conscious deliberation, yet produce decisions that appear – and I stress the word appear – to reflect human-like judgment and reasoning.
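To make that contrast concrete, here is a minimal sketch in Python (all names invented; it stands in for no actual system): a conventional tool computes a fixed answer from its inputs, while a pattern-matching system samples its answer from probabilities learned from data, so the same question can yield different, and occasionally wrong, answers.

    import random

    # A conventional tool: the same inputs always produce the same
    # output, by a rule fixed in advance by its maker.
    def add(a: int, b: int) -> int:
        return a + b

    # A probabilistic matcher: the answer is sampled from weights
    # learned from data. The output is fluent, but no deliberation
    # lies behind it. (Illustrative only; real systems learn weights
    # over billions of parameters.)
    def pattern_matcher(question: str) -> str:
        learned_weights = {
            "an answer that happens to be correct": 0.9,
            "a plausible-sounding but wrong answer": 0.1,
        }
        answers = list(learned_weights.keys())
        weights = list(learned_weights.values())
        return random.choices(answers, weights=weights)[0]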

3. Three characteristics distinguish AI.(1) First, AI systems, if complex, can become opaque “black boxes” – they learn their own rules and relationships from training data, reaching conclusions by pathways we may not always be able to identify or follow. Second, AI systems are autonomous and adaptive – once deployed, they modify their behaviour based on real-world inputs, so systems that work one way in testing may behave unexpectedly in practice. Third, AI development often involves multiple actors – dataset curators, algorithm designers, system procurers, deployers, and end-users may all participate in the process.

4. Because AI operates as a black box, when something goes wrong, we may struggle to identify the cause. Because AI is self-learning, its behaviour is unpredictable. And because AI development involves multiple actors, responsibility may be unclear. Underlying all of these questions is the fundamental one of how we apply traditional legal concepts that depend on investigating human states of mind – concepts such as intention and knowledge – to decisions made or actions taken by AI that at most simulates such states of mind without ever actually thinking like a human.

5. Let me give two examples of how AI may give rise to novel legal questions. 

Chatbot representations

6. The first concerns chatbots and liability for misrepresentation. Generative LLMs deployed as chatbots for the commercial purpose of interacting with the general public make representations of fact all the time. Should the company that has deployed the chatbot be liable when the representation made is false? In Moffatt v Air Canada,(2) a British Columbia tribunal held Air Canada liable for negligent misrepresentation after its chatbot gave a customer incorrect information about bereavement fares, leading him to purchase a full-fare ticket in reliance on the representation that he could then obtain a partial refund. The tribunal found that Air Canada owed the customer a duty of care, that the chatbot’s inaccurate output constituted a negligent misrepresentation, and that the customer’s reliance on it was reasonable.(3) The tribunal firmly rejected the airline’s argument that the chatbot was a separate legal entity – a separate person – for whose outputs Air Canada bore no responsibility.(4)

7. The decision may be intuitively fair: a company that presents a chatbot as a reliable source of information should bear responsibility when it misleads a customer to their detriment. But it leaves an important question unanswered: what must a deployer do to discharge its duty of care when the AI, by its autonomous and self-learning nature, may produce erroneous results? The tribunal’s approach essentially treats AI-generated output as directly attributable to the deployer. That perhaps sits uncomfortably with how AI systems actually work. Unlike traditional software, which is constructed line by line and whose outputs can be traced to specific code, AI systems learn from training data and generate outputs through processes that cannot be easily audited.(5) Could Air Canada have defended itself by demonstrating that its training, testing and monitoring procedures for its AI chatbot were adequate, even though the specific output was inaccurate?(6) Could a deployer disclaim liability through terms and conditions, on the basis that AI hallucination is a known and disclosed limitation?(7) I am not going to answer those questions today, but the courts may have to do so soon.

AI Agents

8. I now turn to AI agents which enter into contracts on behalf of companies. It seems that we have accepted that the exchange of offer and acceptance between machines can give rise to valid contracts even though no human mind was involved. Algorithmic trades are commonplace today. And perhaps this fits with the traditional inquiry of the law into the objective meaning of words uttered, without looking into the subjective states of mind of the actual human actors – the offeror or the offeree. But what about the application of contract doctrines that do depend at least in part on human states of mind, such as the doctrine of mistake?
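The point can be illustrated with a purely schematic sketch in Python (all names and figures invented): two programs exchange what the law, judged objectively, treats as offer and acceptance, with no human mind engaged at the moment of contracting.

    # Two programs exchange offer and acceptance with no human in the
    # loop. The acceptance rule was fixed in advance by a programmer.
    def make_offer() -> dict:
        return {"sell": "ETH", "quantity": 1, "price_btc": 0.04}

    def accept(offer: dict) -> bool:
        max_price_btc = 0.05  # pre-set limit; no one reviews each trade
        return offer["price_btc"] <= max_price_btc

    offer = make_offer()
    if accept(offer):
        print("Trade executed: a contract formed machine to machine.")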

9. The SICC has already confronted this question in the context of deterministic algorithms in B2C2 Ltd v Quoine Pte Ltd (“Quoine”),(8) and it is surely just a matter of time before we see a similar dispute in the context of agentic AI, where the trade occurs not through mechanistic steps driven by trading condition inputs but through an AI agent’s probabilistic assessment of the most advantageous trade in all the circumstances.

10. Quoine operated a cryptocurrency exchange platform, on which B2C2 was a market-maker, trading entirely through its own algorithmic software without any direct human involvement in individual transactions. Due to Quoine’s failure to update certain operating systems, its platform lost access to external market data, activating B2C2’s fail-safe “deep price” mechanism – set at 10 Bitcoin to 1 Ethereum. Thirteen trades were executed at approximately 250 times the then-prevailing market rate. When Quoine discovered this, it unilaterally reversed the trades, arguing that there was unilateral or common mistake. B2C2 sued for breach of contract.
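The fail-safe can be sketched schematically in Python (reconstructed loosely from the facts recited in the judgment; the ordinary quoting rule and all names are invented). Note the arithmetic: at a deep price of 10 BTC per ETH and trades at roughly 250 times the market rate, the then-prevailing rate was about 10 ÷ 250 = 0.04 BTC per ETH. Every branch below is a rule fixed in advance by the programmer:

    from typing import Optional

    DEEP_PRICE_BTC_PER_ETH = 10.0  # fail-safe: 10 Bitcoin for 1 Ethereum

    def quote_sell_price_btc(best_external_bid: Optional[float]) -> float:
        if best_external_bid is None:
            # The external market data feed is unavailable (as happened
            # when Quoine's platform lost access to it): fall back to
            # the pre-programmed deep price rather than quote blind.
            return DEEP_PRICE_BTC_PER_ETH
        return best_external_bid * 1.001  # illustrative quoting rule

    # With the feed down, the deterministic rule quotes 10 BTC per ETH,
    # roughly 250 times a prevailing rate of about 0.04 BTC per ETH.
    print(quote_sell_price_btc(None))  # 10.0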

11. Simon Thorley IJ, at first instance, reasoned from first principles. He distinguished between the operation of a deterministic algorithm, which does only what it was programmed to do, and a truly autonomous AI system, which he recognised could in principle “be said to have a mind of its own”.(9) For deterministic algorithms, he considered that the relevant knowledge and intention should be that of the programmer, assessed at the time of programming.(10) Since the programmer in Quoine did not have actual or constructive knowledge that the trades would occur at artificial prices, mistake could not be made out and Quoine’s reversal was not permitted by the platform’s terms and conditions for trading.

12. The SICC Court of Appeal comprised three Singapore judges, including our Chief Justice, as well as two international judges: French IJ, a former Chief Justice of the High Court of Australia, Australia’s apex court, and Mance IJ, a former Deputy President of the UK Supreme Court. The majority upheld the analysis of Thorley IJ, with some tweaks and elaboration, that the focus should be on the programmer’s mental state.(11) Mance IJ, however, proposed an alternative approach under which the court should ask whether a reasonable and honest trader, observing the circumstances, would have concluded that the transaction could only have resulted from a fundamental error.(12) This “reasonable trader” test, he suggested, may be better suited to truly autonomous systems that operate beyond what their programmers contemplated: when an AI evolves through machine learning, the programmer’s state of mind at the time of coding may bear little resemblance to the AI’s decision-making rationale – if one may call it that – at the time of contracting. I should reiterate and stress that the case concerned a deterministic and not an autonomous algorithm, so those observations were of course obiter dicta and, in any event, were expressed by only one of the five judges.

13. The decision and reasoning in that case nonetheless offer paths forward that may be explored on an incremental, case-by-case basis. Let me then turn to my second point, which is the SICC’s institutional features that support the development of the law in complex and novel cases.

III. Features of the SICC

Early, active and continuous case management

14. The first feature of the SICC is early, active and continuous case management led by the judges who will hear the case. In a conventional adversarial proceeding, the court’s role is largely reactive: it rules on applications brought by the parties and simply manages a timetable to trial. That is case management in the traditional sense – a more administrative role that the court takes on to move parties along. But the SICC’s approach is different: cases are docketed to the judges who will hear them, and those judges regularly hold judge-led case management conferences through to the conclusion of the proceedings.

15. Active case management, among other things, ensures that expert evidence addresses the relevant questions in the case. In the context of AI, these might include how the AI system was trained, how it operated in practice, what caused the failure, and whether the developer’s or deployer’s conduct met a reasonable standard.

A bench that draws on multiple legal traditions

16. The second feature of the SICC I will touch on briefly is its mixed bench. Cases before the SICC may be heard at first instance by a coram of three judges, and each judge in this three-pillared composition brings complementary expertise and experience.

17. The first pillar comprises Singapore’s own commercial judges – judges of the Supreme Court of Singapore with expertise in commercial law, developed over many years of adjudicating complex, cross-border disputes.

18. The second pillar comprises International Judges from common law jurisdictions, including the United Kingdom, Australia, the United States and India.

19. The third pillar comprises International Judges from civil law jurisdictions such as China, France, Germany and Japan.

20. And so when the legal questions are novel, it is beneficial to bring to bear multiple comparative perspectives – multiple legal and judicial traditions. This was demonstrated very clearly in the Quoine case, which I mentioned earlier, and it is at the heart of what the SICC offers to litigants and to the development of the law.


(1)   C Seah, “Liability Arising from the Use of Artificial Intelligence”, The Singapore Law Gazette, May 2023, p 1, <https://lawgazette.com.sg/feature/liability-arising-from-the-use-of-artificial-intelligence/>, citing UK Government White Paper, “A pro-innovation approach to AI regulation”, 29 March 2023, paras 39–40, <https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper>.
(2)   Moffatt v Air Canada [2024] BCCRT 149.
(3)   Moffatt v Air Canada [2024] BCCRT 149, paras 26–32; L R Lifshitz and R Hung, “BC Tribunal Confirms Companies Remain Liable for Information Provided by AI Chatbot”, Business Law Today, 29 February 2024, <https://www.americanbar.org/groups/business_law/resources/business-law-today/2024-february/bc-tribunal-confirms-companies-remain-liable-info>.
(4)   M Higgins, “Air Canada chatbot case highlights AI liability risks”, Pinsent Masons, 27 February 2024, <https://www.pinsentmasons.com/out-law/news/air-canada-chatbot-case-highlights-ai-liability-risks>.
(5)   B B Sookman, “Moffatt v. Air Canada: A Misrepresentation by an AI Chatbot”, McCarthy Tétrault, 19 February 2024, <https://www.mccarthy.ca/en/insights/blogs/techlex/moffatt-v-air-canada-misrepresentation-ai-chatbot>.
(6)   B B Sookman, “Moffatt v. Air Canada: A Misrepresentation by an AI Chatbot”, McCarthy Tétrault, 19 February 2024, <https://www.mccarthy.ca/en/insights/blogs/techlex/moffatt-v-air-canada-misrepresentation-ai-chatbot>.
(7)   B B Sookman, “Moffatt v. Air Canada: A Misrepresentation by an AI Chatbot”, McCarthy Tétrault, 19 February 2024, <https://www.mccarthy.ca/en/insights/blogs/techlex/moffatt-v-air-canada-misrepresentation-ai-chatbot>.
(8)   B2C2 Ltd v Quoine Pte Ltd [2019] 4 SLR 17.
(9)   B2C2 Ltd v Quoine Pte Ltd [2019] 4 SLR 17 at [206]–[211].
(10)   B2C2 Ltd v Quoine Pte Ltd [2019] 4 SLR 17 at [211].
(11)   Quoine Pte Ltd v B2C2 Ltd [2020] 2 SLR 20 at [112]–[128].
(12)   Quoine Pte Ltd v B2C2 Ltd [2020] 2 SLR 20 at [192]–[194], [198].
