IT Law Series 2025: Legal and Regulatory Issues with Artificial Intelligence
The Use (and Abuse) of AI in Court
Wednesday, 30 July 2025
1 All of you would have heard and seen by now numerous presentations, videos and discussions about both the threat and the promise of AI. AI is now a given in all our lives, at work, in school and at home. It is already ubiquitous, and will only become more indispensable. The genie has been let out of the bottle and we cannot put it back. AI is, in the end, a tool. Tools can maim and kill, just as much as they can help us in our work, enabling us to do more, better and faster. What we can and must do is to be clear-eyed, use the tool wisely and guard against the risks, dangers and abuse.
2 We, in the judiciary, are trying to do just that. In the next thirty or so minutes, I hope to outline our approach to AI, bearing in mind the way it can be misused, and what we hope to achieve with it.
Abuse
3 Let me address the abuses first. Hallucinations by generative AI would be the most well known, and constantly make the headlines. Most recently, we have had two cases in England and Wales heard together by the Divisional Court of the King’s Bench, Ayinde v London Borough of Haringey and Al-Haroun v Qatar National Bank and Anor [2025] EWHC 1383 (Admin), each with fictitious authorities being cited to the court. Similarly, in a defamation case in the US District Court in Colorado, fictitious case citations were tendered. Similar instances have cropped up in Singapore, thankfully not by lawyers, who should know better, but by self-represented persons, who may not. Guidance has been issued by the Courts placing the responsibility to check and verify on those who tender documents to the court. We understand the challenges faced by self-represented persons, but they too must be cautious when they use generative AI, and make sure they verify what it produces.
4 False evidence is another issue in the use of AI. Fake videos can be readily generated by AI: we have many examples on social media of increasingly realistic fake videos, thankfully labelled as such in most of the posts. But fake voices are already being used to cheat and con. And this creates a level of distrust of information that can hamper how we function in society. Take the recent sinkhole incident: it is indeed fortunate that the lady involved was rescued by the foreign workers. What is relevant for this discussion is that I can readily admit that when I first saw some of the initial photos of the sinkhole, with the buckled road, the asphalt seemingly melting away like cake-icing and the car dramatically half submerged, my immediate reaction was, ‘Is that real? It looks fake’. I spent my time peering at the hands of the construction workers and the text in the photographs to see if I could spot any tell-tale malformed fingers or garbled text that is supposed to be characteristic of AI-generated images. You can imagine how this situation will be transposed to the courtroom: we will be spending a lot of time, with a lot of experts, trying to distinguish fake videos and documents from genuine ones.
5 But aside from false information, another threat is the possibility that AI may lead to a degradation of the legal skills that lawyers, judicial officers and judges are expected to possess. Analysis, retention, composition and the formulation of arguments are all skills that need to be honed through daily use. With generative AI potentially being used for drafting, research and analysis, the question is whether lawyers and judges will see their abilities dulled through disuse. Part of this fear is perhaps an overreaction, much in the same way that it was feared that the advent of cheap and widespread books would dull memory and retention, reducing depth of learning. We have had such fears expressed about the internet itself. But while some of this concern may be overstated, there is probably some risk that widespread AI use may remove opportunities to exercise foundational skills, reducing the effectiveness of lawyers, especially those more junior, and crucially may also remove the ability to critique the products of AI. After all, aside from football and Star Wars episodes 1-3 and 7-9, if you cannot produce the work, can you truly critique the quality and know how to do better?
6 But let me reiterate that all change involves some measure of risk, some ploughing on through the unknown. We must try to assess what the risk is, take measures to mitigate and safeguard, and if the risk is deemed to be manageable, move ahead to use the technology for our betterment.
Where we are
7 As with the other branches of the State, and like many law firms, the Singapore judiciary is exploring and testing various AI products. No one can sit still. We need to deliver justice, in ever better ways, so that we can continue to meet the needs of the community.
8 In trying to improve what we do, we are aware that we must approach our exploration and testing in a principled way, so that we do not go chasing after every single new shiny thing, and that we make the best use of the resources (manpower, money and time) available to us. We do not need to be on the bleeding edge, but neither should we be so far behind that we fail in our mission to serve the community. We must find that right balance.
9 To that end, we have a guiding philosophy that what we pursue should generally achieve efficiency in what we do, promote better access to justice, or better harness the data available to us. These three targets are our primary objectives, giving us a schema by which to prioritise and judge the many opportunities that come up.
10 Within that framework, AI’s central promise to us is as a capability booster, allowing us to do our tasks faster, cheaper and, hopefully, better. On occasion this boost may also enable us to be redeployed on other tasks, particularly those that require greater flexibility, or the human touch. For the courts, then, AI is primarily an assistant or aide for the litigant, the lawyer, the court administrator or the judge. AI is thus a bridge, allowing users to cross gaps in the justice system.(1)
11 In pursuing these objectives and weighing the risks, we cannot let edge cases hold back change where there is clear and tangible benefit to the majority. Only rare instances of change or innovation will leave everyone entirely better off. Some may, at least temporarily, need to take greater effort to adapt their practices or behaviour. Some facilities may be replaced. What is essential is that the organisation takes steps to try to facilitate the transition, but we cannot hold back change if it is needed to improve things for the majority.
12 With these in mind, let me expand on each of the innovation targets I have mentioned: efficiency, access to justice and harnessing data.
Efficiency
13 The need to improve efficiency goes beyond a response to increased volume, and demands for speed and productivity. Increasingly, there is greater complexity in the cases that we handle. At one end, in commercial cases, the length and intricacy of arguments and evidence have multiplied many times. We have not yet developed a metric for determining complexity, though some may suggest themselves, but a rough and ready measure can be seen in the length of our judgments, as well as the number of factual and legal issues that we have to deal with. Compared to even the 1990s, the amount of information and discussion dealt with in our judgments has clearly ballooned.
14 But it is not just commercial cases. Even in small claims cases, and other matters involving individuals seeking relatively limited remedies or small value awards, the amount of information tendered to the court is substantial. In part, this reflects the greater availability of information, whether it is in the form of social media posts, WhatsApp and SMS messages, emails and photographs or videos. A lot of information is therefore available and a lot more is thrown into the court process. To digest this mountain of information requires a lot of time and effort; the mental load and organisational skills required are not trivial, and can be burdensome when the volume of cases keeps increasing.
15 Thus, at both ends of the spectrum, in both commercial and everyday, Main Street cases, the Courts need to provide tools to assist at least our judges and administrators in dealing with this information. Lawyers will also need to equip themselves with their own tools; what the courts can do is to ensure that they can bring in their tools easily to conduct their cases in court with minimal trouble and fuss. The Courts should also assist self-represented persons in the conduct of their cases, as part of their systemic function. That will be tackled later, when we turn to access to justice.
16 But focusing for the moment on efficiency, we are exploring the use of Gen AI to assist our court administrators, judicial officers and judges in their work. Some of this is in common with other parts of the public sector, as well as the private sector: we are examining the use of commercial products such as CoPilot to assist our officers with their daily work, whether it is minute taking, slide deck preparation or drafting papers. We are also looking at the use of AI to assist judicial officers and judges. AI-assisted legal research is one obvious area. Pair Search, which as you would know is publicly available, allows for some analysis of Supreme Court judgments. SAL’s LawNet is of course also developing AI capabilities.
17 But aside from legal research, an important area of exploration for us is using AI to query and interrogate submissions and evidence. At its best, we think that AI tools that would allow judges to pose questions and obtain answers from submissions and evidence would greatly enable the judges to get to grips with the case before them, identify the live issues, and weigh the merits of the respective arguments and evidence. As it is, many of the available AI tools are readily able to generate chronologies, comparative tables and summaries. The aim is not to replace the submitted materials, nor to depart from the adversarial process, but to better enable the judges to recall and digest information, pursue lines of questioning, and examine the coherence and strength of what has been put before them. Hallucinations are tamped down by grounding responses in specific document references. The results thus far are promising.
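For those curious about how such grounding works in practice, the sketch below illustrates one common pattern: retrieve the relevant passages first, then require the answer to cite them, or to say that the materials do not contain the answer. The passage splitting, keyword retrieval and the ask_model() stub are illustrative assumptions, not a description of the judiciary's actual tooling.

```python
# A minimal sketch of reference-grounded querying over case documents.
# The passage splitting, keyword retrieval and ask_model() stub are
# illustrative assumptions, not the judiciary's actual tooling.
import re

def split_into_passages(doc_id, text, size=80):
    """Break a document into numbered passages of roughly `size` words."""
    words = text.split()
    return [(f"{doc_id}-{i // size + 1}", " ".join(words[i:i + size]))
            for i in range(0, len(words), size)]

def retrieve(passages, question, top_k=3):
    """Crude keyword-overlap retrieval; a real system would use embeddings."""
    q_terms = set(re.findall(r"\w+", question.lower()))
    return sorted(
        passages,
        key=lambda p: len(q_terms & set(re.findall(r"\w+", p[1].lower()))),
        reverse=True,
    )[:top_k]

def build_grounded_prompt(question, retrieved):
    """Require the model to answer only from, and to cite, the retrieved passages."""
    context = "\n".join(f"[{ref}] {text}" for ref, text in retrieved)
    return ("Answer the question using ONLY the passages below, citing the "
            "passage reference for every statement. If the passages do not "
            "contain the answer, say so.\n\n"
            f"{context}\n\nQuestion: {question}")

def ask_model(prompt):
    # Placeholder: wire this to whichever approved model endpoint is in use.
    raise NotImplementedError
```

Because every statement in the answer has to point back to a passage reference, a fabricated citation has nowhere to hide: it either matches a passage that was actually supplied, or it stands out as unsupported.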
18 But we do face a number of challenges. The first is the size of our documents. The AI tools we have access to in government are not geared for the lengthy submissions and affidavits that are the norm these days. I do not know whether this speaks to the productivity of lawyers, or whether our civil servants are clearly not reading enough. We are getting larger token limits, but they are still not quite where we want them to be. Secondly, while some of our cases do involve information that would be regarded as publicly available or open, in some areas, such as family law, arbitration and criminal law, greater care has to be taken to protect the information contained in the documents. Thirdly, it is not always easy to specify the level of detail to be given in the summaries and comparisons done up by AI models; for judicial work we often do want quite a bit of detail even in summaries. We are constantly tweaking our prompts to get better results.
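As an aside, one common workaround for documents that exceed a model's token limit is staged summarisation: chunk the document, summarise each chunk with an explicit instruction on the level of detail to retain, then summarise the summaries. The snippet below is a generic sketch of that pattern; the chunk size, the detail instruction and the ask_model() stub are assumptions for illustration.

```python
# Illustrative staged (map-reduce) summarisation for documents longer than the
# model's token limit. The chunk size and detail instruction are assumptions.
def summarise_long_document(text, ask_model, chunk_words=2000,
                            detail="retain party names, dates, figures and the issues raised"):
    words = text.split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)]
    partial = [ask_model(f"Summarise this extract; {detail}:\n\n{chunk}")
               for chunk in chunks]
    return ask_model("Combine these partial summaries into one coherent summary; "
                     f"{detail}:\n\n" + "\n\n".join(partial))
```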
19 These issues are however relatively minor. Having AI assistance in querying submissions and evidence will allow the judicial officers and judges to master their briefs and juggle their work in an increasingly demanding environment.
20 Another area we are thinking about is the use of AI in assisting the drafting of judgments. This has to be thought through carefully. What we are examining is not merely improving grammar or suggesting words. Gen AI is capable of more than that, as we all know. It is indeed possible to have Gen AI assemble relevant authorities and evidence from provided documents. Hallucination and seemingly overly creative leaps in reasoning can occur. But aside from these risks, we also need to maintain public trust and confidence in judges. People will want decisions to be made by human judges and not left by the judges to others or to machines. Letting AI loose on writing judgments would seem to go against that. We have had cases, even without AI, in which we have had to caution judges against not exercising their own minds. So it is already a live problem.
21 Nonetheless we think it is fruitful to explore AI assistance in judgment writing. A meaningful distinction can be drawn between the making of a decision and the writing of a judgment. As it is, we do have bench memoranda and parties’ submissions to assist us in our work. Having AI generate drafts that we can look at, consider and adapt would seem worthwhile, as it would save time and help with the process of composition. So we are not ruling it out, but we will be trying it out carefully, and will impose appropriate training, safeguards and supervision.
22 In our experiments thus far, the results have been quite promising. When we give appropriate prompts, with sufficient detail, and the relevant documents, gen AI has been able to create drafts that are fairly readable and fairly accurate, albeit with some hallucination or creative leaps. It is likely that, with continued progress in the development of models, better prompts and scrutiny of the output, we will be able to obtain fairly accurate, well-written and seemingly well-reasoned drafts on a regular basis.
23 One particular area of use of this capability is for red-teaming, i.e. to generate drafts for different outcomes, to allow the judge to test his or her reasoning and conclusions, to identify weaknesses and strengthen his or her own work. Again, this will require some guidance and monitoring, but we think that this will be good to explore.
24 Moving away from what may be a controversial use case, AI transcription is a more accepted area. Accurate transcription is very important for us as we have a lot of oral testimony, particularly by way of cross-examination, as well as oral arguments. Much time is spent by our contractors, staff, judicial officers and judges in preparing and checking transcripts. Automated transcription does aid considerably; indeed in many instances, some AI and automation is already being used. Its increased adoption and widening scope are to be welcomed. Some challenges remain, some of which come from the AI having to deal with multiple languages being used at the same time, and some from what can be described as the acoustically challenging environment of the courtroom.
25 A related area to transcription is translation. AI translation is fairly good, and its increased use aids efficiency. We do not see AI removing human interpreters from the courtroom for some time to come. The human touch is needed in many settings, especially in criminal and family law practice. Even in other areas of law, nuance will be important. But where it can assist is in bulk translation, which, as I will expand on when we look at access to justice, is something we are already implementing as part of our Harvey AI collaboration.
Access to Justice
26 The Courts play an active role in promoting access to justice.(2) We are not merely a passive courthouse, aloof from the needs of our users. We have to help people through the justice process, without abandoning our independence and impartiality.
27 It is against that background that we have embarked on our collaboration with Harvey AI. Harvey AI, as you know, has a presence in the Asia-Pacific, and some of you no doubt would be its customers. In addition to its commercial activity, Harvey AI has generously embarked on an important pro bono effort with the Singapore judiciary to see what Gen AI can do to assist with access to justice. We have chosen to test out this effort in the context of cases in the small claims tribunals, where parties proceed without lawyers. We want to explore and test what benefits AI can provide for self-represented persons, who face a number of challenges in navigating the justice system, as well as for the judicial officers involved, who often have to deal with a lot of unstructured, less organised information.
28 What we have been able to put in place is translation of documents, and we hope to complete summarisation of materials shortly. Such summarisation will assist the parties in understanding each other’s case and seeing how their own case comes together, and will also assist the tribunal magistrate in making sense of what is before him or her. We do need to make sure that the system is as accurate as possible, and we want to make sure that it is properly integrated with the tribunal case management and filing system. So it has been a good learning experience for us, and even as we continue our exploration with Harvey AI here, we are thinking of what else can be done. We do hope to have a substantial update by the time of TechLaw Fest in September. My personal dream is to have AI assist in presenting the case, and nudging parties towards settlement, but we will have to see whether we can get there, and indeed whether we should.
29 What I thought would be instructive is to discuss one of the areas where we have tried AI and shelved it, at least for the moment. We have experimented with AI chatbots to assist litigants with understanding court processes and rules. While the chatbots were able on many occasions to give good responses in appropriate language, we encountered issues where the underlying materials were incomplete, where the information was more complex and nuanced, or where the query was ambiguous. An example of this was when the chatbot was asked about appeal periods: there are various appeal periods that govern different contexts. The chatbots unfortunately were not geared towards asking for clarification, or owning up to not knowing; that is a challenge with many gen AI models. We could have expended time and energy to train and refine, but the team concluded that it was best to put this on the shelf for the time being. We could achieve the objective of assistance through simple FAQs or guided question and answer.
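To make the difficulty concrete, a guided flow can be designed to ask for clarification whenever a query such as "appeal period" matches more than one context, rather than guessing. The sketch below is a purely hypothetical illustration of that routing logic; the contexts and placeholder periods are invented and are not a statement of the actual rules, nor of the chatbot we tested.

```python
# Hypothetical routing for an "appeal period" query: answer only when the
# context is unambiguous, otherwise ask a clarifying question.
APPEAL_PERIODS = {
    # Illustrative entries only; not a statement of the actual rules.
    "small claims tribunal": "file within X days of the order",
    "magistrate's court civil": "file within Y days of the judgment",
    "family court": "file within Z days of the order",
}

def answer_appeal_query(query):
    matches = [ctx for ctx in APPEAL_PERIODS if ctx in query.lower()]
    if len(matches) == 1:
        return f"For a {matches[0]} matter: {APPEAL_PERIODS[matches[0]]}."
    # Ambiguous or unknown context: ask for clarification instead of guessing.
    options = ", ".join(APPEAL_PERIODS)
    return f"Which type of matter is this? I can help with: {options}."
```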
30 Another area we are considering, to promote access to justice, is deploying AI to provide first-cut indications of the likely outcome for some case types. The challenge we have in some areas of practice is a large volume of cases, pursued largely by self-represented persons, who may face difficulties in attending court, even remotely, given their work and family commitments. Where there is sufficient data on the reliefs granted, and there are clear criteria and factors influencing the outcome, there is, we think, room to consider having the Courts provide an indication of the likely outcome based on data. This can be done by AI, or perhaps by some other technology and method. The parties are then free to accept the indication and move on, or, if they consider that the indication is wrong in their case, to continue to pursue a hearing before a judicial officer. Adopting this mechanism may help parties get a rough and ready outcome, at less expense of time, effort and cost, and allow the judiciary to focus its resources on the harder cases. Much of this has parallels in some dispute resolution mechanisms used by online marketplaces and e-portals, and may be quite familiar to much of the public. But we would also note that not everyone would be a suitable user, and that it may not be suited to all case types. Our work studying this issue continues.
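By way of illustration only, one simple way such an indication could be generated, where there is enough historical data and the criteria are clear, is to look up closed cases sharing the same structured features and report the typical award, together with how many cases the figure rests on. Everything in the sketch below, the features, figures and threshold, is invented.

```python
# Hypothetical first-cut indication: median award among past cases that share
# the same structured features. All data below is invented for illustration.
from statistics import median

PAST_CASES = [
    {"claim_type": "deposit refund", "documented": True,  "award": 800},
    {"claim_type": "deposit refund", "documented": True,  "award": 1000},
    {"claim_type": "deposit refund", "documented": False, "award": 300},
    {"claim_type": "defective goods", "documented": True, "award": 450},
]

def first_cut_indication(claim_type, documented, minimum_cases=2):
    similar = [c["award"] for c in PAST_CASES
               if c["claim_type"] == claim_type and c["documented"] == documented]
    if len(similar) < minimum_cases:
        return None  # too few comparable cases; route to a hearing instead
    return {"indicated_award": median(similar), "based_on": len(similar)}

print(first_cut_indication("deposit refund", documented=True))
# -> {'indicated_award': 900.0, 'based_on': 2}
```

The point of the minimum-cases check is the caveat in the paragraph above: where the data is thin or the case does not fit the usual pattern, no indication is given and the matter proceeds to a judicial officer in the ordinary way.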
Harnessing Data
31 As for harnessing data, one aspect is to make use of AI to analyse data for better efficiency, such as scheduling and performance management. But aside from this, the wealth of data we have could potentially be useful in identifying trends among our court users, so that adequate responses and preparation can be made. AI is likely to be particularly useful in such analysis. We have not embarked yet on substantial work using AI in this area, but it remains an area of interest for us. We will, of course, implement adequate safeguards to anonymise or redact particulars, and ensure that there is safe aggregation of information.
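As a rough sketch of what such safeguards can look like in practice, the example below redacts obvious identifiers before analysis and suppresses aggregate counts below a minimum group size. The patterns and the threshold are assumptions for illustration, not our actual rules.

```python
# Illustrative safeguards: redact obvious identifiers, and suppress small groups
# when aggregating. The patterns and the threshold of 5 are assumptions.
import re
from collections import Counter

NRIC_PATTERN = re.compile(r"\b[STFGM]\d{7}[A-Z]\b")   # Singapore NRIC/FIN format
PHONE_PATTERN = re.compile(r"\b[689]\d{7}\b")          # local 8-digit numbers

def redact(text):
    """Replace identifier-like strings before the text is used for analysis."""
    text = NRIC_PATTERN.sub("[ID REDACTED]", text)
    return PHONE_PATTERN.sub("[PHONE REDACTED]", text)

def safe_counts(categories, minimum_group_size=5):
    """Count cases per category, suppressing groups too small to share safely."""
    counts = Counter(categories)
    return {k: (v if v >= minimum_group_size else "<5") for k, v in counts.items()}
```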
Conclusion
32 This has been a quick overview of where we are with AI in the judiciary. There are many interesting areas that are ripe for further exploration and discussion, including agentic AI and better reasoning models. It is likely that what will come to pass in 10 to 20 years will be quite different from what all of us picture in our heads right now. The aim in what we do in the courts is not to perfectly forecast what will or will not work, but to experiment and test out new technologies that on our current assessment will be helpful to our court users and to our judges.
33 I would emphasise that the administration of justice must retain a human element, for the foreseeable future. While AI can assist with decision making, judges do exercise coercive powers on behalf of the state, and are expected by the public to exercise justice, that is, fairness and mercy, in appropriate situations. I do not think that, even if AI could emulate the reasoning in rulings and develop a kind of simulacrum of fairness and equity, people would be ready to accept such rulings in place of human decisions. Fairness is based ultimately on empathy and humanity, and decisions about fairness are ultimately political decisions. It will take quite a cultural shift before we see acceptance of fairness by machine.
34 Thank you.