
Justice Aidan Xu: Keynote address at the seminar 'The Expanded Scope of Dispute Resolution in Civil Justice'

Civil Justice: Courts, Technology and Access to Justice

Keynote Address 

“Designing Digital Justice: Access, Effectiveness, and Confidence”

Thursday, 23 April 2026

The Honourable Justice Aidan Xu

Judge of the General Division of the High Court, Supreme Court of Singapore

Judge in Charge of Transformation and Innovation, Singapore Judiciary


1    Good morning.

2    I am grateful for the invitation from the organisers to give this speech. The promise and the risks of digital justice are important topics, both in practice and in academia. What I hope to do today is to try to give an insight into the approach that we in the Singapore judiciary take in trying to improve the delivery of justice to all those in Singapore. It is sometimes about making use of technology to improve things, but it is always about how we structure the justice system to resolve disputes fairly and efficiently, to improve the lives of all.

3    When we speak about digital justice, we are not really speaking about screens, software, or systems as ends in themselves. We are speaking about how a justice system can better serve those who depend on it, while using digital tools carefully, proportionately, and with a proper appreciation of both their promise and their risks.

4    For some years, much of the conversation on this topic has been framed around innovation in a narrower sense. Attention has focused on remote hearings, automated workflows, intelligent search, digital filing, real-time transcription, and, more recently, generative AI. These are important developments. It is important to think about the specific challenges and issues for each of these technologies, and their impact on the courts. But such discussion about the type of technology to be used is not enough. We must also look at the whole picture and not lose sight of the ultimate objective.

5    Justice is about achieving fair and just outcomes, at a cost in time and money that is proportionate. Digital justice aims to do that with the tools now available to us. It is not about exploiting or deploying technology for its own sake, but about ensuring that the aim of justice is achieved. That effort comes through three related aspects: access, effectiveness, and confidence. These are not distinct areas pulling in different directions. Rather, they are three aspects of asking whether justice is actually being done. People must reach justice, justice must be done, and they must trust the outcome and the system. A system that cannot be accessed offers justice only in theory. A system that can be entered, but only through delay, cost, confusion, and procedural exhaustion, fails as well. And a system that is accessible and efficient, yet does not command public confidence, stands on fragile foundations.

6    So the challenge before courts is not to become more technological for its own sake. It is to become better institutions of justice, using technology where it genuinely helps us widen access, improve effectiveness, and strengthen confidence.

7    But one matter to note before we delve deeper: while there are issues, especially with the use of generative AI, with occasional horror stories about hallucinations, AI is improving, and will continue to improve. We should not be blind to the possible benefits even as we put in place appropriate safeguards and risk management.

Access

8    Let us begin with access. When we think about access to justice, we sometimes think first of rights, jurisdiction, or affordability. Those are important, and their provision is a complex issue. But from the court's perspective, facilitating access includes addressing whether a litigant can understand what is happening, identify what must be done next, and remain able to participate in the process.

9    For many court users, especially those without lawyers, the problem is not that they cannot identify a dispute or grievance. They are either contemplating pursuing a claim, or defending against one lodged against them. Their problem is that they often cannot find a path from the problem they face to a remedy that exists within the system. What professionals describe as process or procedure may, to an ordinary person, feel like a maze of forms, unfamiliar language, deadlines, directions, and implied consequences. Digitalised, it may involve websites, often of various entities, that can be bewildering no matter how helpful they aim to be. Furthermore, the challenge of navigating that maze is often encountered at a time of stress. The family may be breaking apart, the person may be facing financial difficulty or unemployment, eviction may loom, or, in the worst situations, they may stand accused of some wrongdoing.

10    Users must be given tools that allow them to figure out where they are. That is part of the systemic role of the Courts, as noted by our Chief Justice in his opening address at the launch of ‘Conversations with the Community’, here at SMU. Users must be assisted to understand, at the least, where they stand: the issues that have arisen, the possible routes forward, what action may be taken now, and what may come later. Thus, one of the first tasks of digital justice is to convert the maze into a map.

11    This raises a challenge for the Courts. Traditionally, the courts stood aloof. Judges judge what others argue and adduce; they do not advise. But that is an outmoded and self-limiting approach. As the Chief Justice has emphasised, the Courts have a role in providing information and laying out the possible ways forward. The courts can provide information without giving advice or coming down on one side or the other. A court does not surrender its neutrality merely because it explains procedure clearly. Being impartial and non-partisan is not the same thing as stony indifference.

12    A justice system that remains a mystery to ordinary users is a failure. The represented litigant has someone who can decode the process. The unrepresented litigant may have no such assistance. If legal participation depends on prior legal literacy, then the promise of equal justice is weakened at the point of entry. Giving information is part of justice.

13    This is why user-centred design matters so much. A digital court system should not merely reflect the Court’s inherited ways of doing things, or the way it happened to be organised decades ago. We have had our own experience with design thinking in the Courts. Design thinking is not merely management fashion; properly implemented, it provides a useful perspective on the experience of the actual users of the system. The justice system should be built around the user's journey, and help answer the questions that users would ask:

(a) What kind of dispute is this?

(b) What do I need to do next?

(c) What documents do I need? 

(d) What are the deadlines, and what happens if I miss them?

(e) What happens when I come to court or attend online?

(f) What do I do before the judge?

(g) What happens if I win or lose?

And in answering all of these, we must aim to use simple and commonly understood language. We must know when to be legally precise but also when to be humanly understandable.

14    Thus, within the Singapore judiciary, we have embarked, through various teams in the Access to Justice Division and the Office of Transformation and Innovation, on providing such information in a form that is simple and accurate. This has taken many non-digital forms, including better signage and better location of services, but also constant examination of our online information. We have experimented with various AI chatbot systems, but are not yet at the stage where we are confident of the output. The main concern is the flattening or oversimplification of answers, especially when AI has to deal with seemingly similar rules or when there is ambiguity in language. We did not think it worthwhile to spend limited resources attempting to train and improve the output at that time, but we remain open to revisiting it when we feel the answers can be made more sound. Machine translation of court documents in the Small Claims Process is one area where we have deployed AI to improve access.

15    We have also made e-filing systems accessible on mobile. Where we are still somewhat behind is in the ability of self-represented persons to file documents on their own, and on their most used devices, their phones. While some ability for self-represented persons to file on their own exists, it is not available across the board. We have kept track of what has been made available abroad, and this remains an area in which we hope to provide substantial improvements in access as we unify and refresh our filing systems, without compromising security and integrity.

Effectiveness

16    Effectiveness is the next aspect we need to examine. Effectiveness is often confused with speed. It is also sometimes reduced to efficiency. But a justice system is not effective merely because it moves quickly. It is effective when it helps produce the right outcome through a process that applies the law accurately to the true facts, with appropriate attention to fairness and context, and indeed timeliness.

17    In a digital setting, effectiveness increasingly depends on what we might call intelligent support, meaning systems or tools that help with the necessary work in the courthouse or courtroom, reducing friction wherever possible.

18    Digital justice can help with a great deal of seemingly mundane, but often complex and impactful, activity, such as the categorisation of cases, scheduling, or the gritty details of hearings. Let me take categorisation first. This is important in allowing the Courts to decide what resources are to be deployed for what type of cases – the aim is to ensure that complex or demanding cases obtain the attention and resources required. Sometimes this may entail close case management, with various case management conferences lined up, taking up judicial time. The judge may need to be prepped to consider fairly intricate directions and orders. Some cases are ripe for mediation, and early judicial oversight of the process can be fruitful in helping parties move to settlement, while freeing up the court calendar for other cases. Counsellors may need to be brought in early for certain disputes. Sometimes, the number of days required for a specific case may require careful consideration. Certainly, rules setting out different pathways can be laid down – but such rules on their own impose a significant load on the courts and indeed the users. In all of this, AI and other digital tools hold the promise of enabling our work to be done more smoothly, for the benefit of all users of the system.

19    Thus, used responsibly, digital tools can support this allocation of attention. They may help identify urgency, complexity, non-compliance, or other indicators that a matter requires closer supervision. Properly designed, such tools do not replace judicial decision-making. They help ensure that human judgment is directed where it is most needed. At the same time, this is an area that requires care. Any system that classifies matters carries risks. The categories chosen, the proxies used, and the consequences attached to those classifications all matter. For that reason, these systems must be transparent, reviewable, and subject to continuing human oversight. Classification may assist judgment, but it must never become a hidden substitute for it.

20    We are exploring the use of AI in our systems to assist with this.

21    A second domain of effectiveness lies in dealing with evidence. Modern litigation often involves a very large volume of digital material: emails, messages, social media posts, photographs, audio and video. Widespread mobile phone use and the ubiquity of digital life have not helped. The sheer volume can be overwhelming even in simple transactions.

22    Technology, especially AI, can help in substantial ways. AI’s ability to summarise autonomously has been a great game changer. It can also generate chronologies, organise files, and reveal connections or relationships within them that could otherwise be difficult to see. None of this replaces legal analysis. But it does reduce the burden that so often consumes time and energy better devoted to reasoning and thinking.

23    We have tried to make use of this ability in our work with Harvey-AI in small claims proceedings, providing summaries for the use of the parties, so that they can understand the materials relied upon by the other side: we assume they know what their own materials say. Importantly, similar summaries are available for judges. Considerable effort was put in to ensure the accuracy and faithfulness of these summaries, both in the prompts and in the responses.

24    On a broader level, our judges are able to make use of government-provided AI models to summarise and organise arguments and evidence, especially affidavits and documents. We encourage our judges to experiment with these various tools, subject to validation and audit under our guidelines for judges. The results are generally useful. The main challenge has been the limited capacity of these models thus far – while token sizes have grown, for which we are grateful, I often have the impression that other public organisations have to contend with much less reading than the courts.

25    There is also considerable value in what might be called daily work assistance. A great deal of legal and judicial work is routine and commonplace, yet requires considerable effort. Translation is often required, and transcription almost always is. Resources have to be deployed to carry these out, and transcription has traditionally taken considerable time as well. Employing digital tools to carry out these functions holds the promise of enabling proceedings to move more expeditiously, without any major adverse impact. They allow parties, lawyers, and judges to focus more fully on the legal issues that are at stake.

26    Thus we are continuing in the Singapore judiciary to employ digital tools to transcribe. We are not yet at a point where we can let these tools loose with minimal human intervention. A number of challenges remain, but we are hopeful of continued improvement. The availability of transcribed records of proceedings at lower cost promises to reduce the cost of proceedings for all parties. Translation, which also impacts access, is another area of fruitful use of AI technology. Yet here, while machine translation can be good enough for many contexts, interpretation of oral testimony by witnesses will probably still need human translators for years to come: there are nuances in oral testimony, including emotion and ambiguity, that will probably need the human touch.

27    What we must do as we continue on this path is to ensure that courts do not become factories. Speed, efficiency, and efficacy cannot come at the expense of the trust that the people have in the justice system. Without that continued trust, all that we do would be for naught.

Confidence

28    That brings me to the third aspect, namely confidence in the Courts. Courts are not merely service providers. The authority and indeed the existence of the courts depends in substantial part on public trust that disputes will be handled fairly, independently, and in a principled way. Confidence is easy to invoke in the abstract, but in reality it is built over decades and centuries, through the slow accumulation of decided cases and public acceptance of the rulings of judges. It can be lost in a matter of days or even minutes.

29    A clear challenge to confidence from digital justice is in any perception that the decision comes from the machine, not the judge. Authority is vested in the judge. Any decision that is perceived to be emanating from some other source, such as an AI system, or from other persons, will be regarded as illegitimate and a betrayal.

30    Thus if users suspect that important decisions are being shaped by hidden systems, unclear criteria, or unchallengeable automation, trust can deteriorate very quickly. That is so even where the underlying tool is modest. In matters of justice, perceptions matter because legitimacy depends not only on fairness itself, but on the public's ability to recognise it. This is why it is important for us to be able to explain what the technology does, what it does not do, and where responsibility remains. 

31    These concerns underlie the Singapore judiciary’s approach to using AI in decision-making. We are aware of the risk of erosion of confidence if judges are seen to abandon judgment, leaving it to AI or other systems to make the decision. We do still want to see what assistance AI can give to the process without overstepping the mark. On one side of the line, we can see that digital tools, including AI, can help vet and verify judgments, for example by highlighting weaknesses in draft judgments, where parties’ arguments may have been overlooked, or where interpretations of law run counter to authorities. Red-teaming, that is, positing what the criticisms and the opposite outcome might be, also seems useful, again to ensure that judges consider and take into account the outcome opposite to the one they may be inclined towards.

32    What does give us, the courts, hesitation is the use of AI to draft entire judgments from scratch. The concern is, of course, abandonment by the judge of his role, relying solely on the AI to craft an outcome without exercising any judgment of his own. But in practice the line between such wholesale abandonment of the role and achieving the same result by way of red-teaming or the testing of drafted judgments is not a bright and clear one. We are still thinking through the challenges and issues. We note that some jurisdictions try to approach the issue by positing questions or issues requiring resolution by the judge before a machine-drafted judgment is issued. This may be one approach to consider, though it may still lead to perceptions of abandonment of judgment.

33    Another area we are experimenting with is whether we could have an initial decision on a case made by a system, perhaps using rules rather than AI. The intention is for this initial decision to be presented to the parties, leaving it to them to consider whether to accept the decision, challenge it, or have the matter heard fully before a human judge. We think this approach is suitable for situations where the decision is reached on clearly defined and common criteria, such as financial information, with a large amount of precedent available to give guidance, so that it can be readily determined what the criteria are and what the usual outcomes will be. In many ways, this would be quite similar to how online disputes are resolved by e-commerce companies. We are progressing with thought and deliberation.

34    Throughout all of this, restraint is an important aspect of maintaining confidence. Not every technological capability should be adopted simply because it is available. Innovation should be careful, staged, evidence-based, and governed by clear principles. 

The way forward

35    If access, effectiveness, and confidence are the three evaluative pillars, a further question follows: how should courts govern digital transformation so that those values remain aligned over time?

36    Every proposed tool should be tested and weighed against a series of questions, such as:

(a) Does it widen meaningful access?

(b) Does it reduce friction without impairing fairness?

(c) Do we know whether it can be wrong, and how easily this can be determined? And

(d) Can any error be identified and corrected?

37    Secondly, digital justice requires interdisciplinary responsibility. Good systems are not built by the tech team alone, nor by lawyers alone, nor even by specialists together. The users are important and must always be part of the process.

38    Thirdly, all progress requires iteration and patient improvement. Mistakes and errors matter: rejected filings, repeat queries, escalation requests, user confusion, and frustration can all be instructive. If everyone faces the same hurdle, we probably have bad design.

39    Finally, the judges must continue to be the ones deciding, and must be supported in their work. They must also be at least technologically comfortable, and be aware of the impact that technology can have on the users of the system, and the hurdles that might sometimes arise. They need not become technologists. But they must understand enough to ask the right questions about data, assumptions, error, and usability.

40    So as courts continue to evolve, the aim should not be to build a more technological judiciary simply because technology is available. The aim should be to build a more capable, more intelligible, and more trusted system of justice. Every platform, every workflow, every automated aid, and every interface should be tested against three questions:

(a) Does it widen access?

(b) Does it improve effectiveness?

(c) Does it strengthen confidence?

41    If the answer is no, then however bright and shiny it is, it is not something we should deploy. But if the answer is yes, technology can make a serious contribution. It can help make the justice process more navigable for those who would otherwise be lost in the process. The technology can help ensure that the authority of the courts remains anchored where it has always ultimately rested: in the confidence of the people whom the law serves.

42    Thank you.


2026/04/24
