Asia-Pacific (APAC) Legal Congress 2026
Opening Keynote
“Corporate Counsel: Shepherds in an Age of Generative AI”
Thursday, 09 April 2026
The Honourable the Chief Justice Sundaresh Menon
Supreme Court of Singapore
Ladies and gentlemen
I. Shepherds of a New Age
1. A very good morning to all of you. Let me first thank the SCCA for its generous invitation to be a part of its flagship Congress this year, which has as its theme, the end of the legal comfort zone. Inevitably, this is a consequence of the rise of generative AI, and that is what I wish to speak to you about today.
2. There was a time when AI-generated search results advised users to eat one rock per day, and to apply glue to pizza for better cheese adhesion.(1) Believe it or not, these anecdotes are barely two years old, yet they feel like ancient history. We inhabit a vastly different reality today – one that continues to evolve at such breakneck pace that parts of this address may be unrecognisable by the time of your next Congress. Admittedly, some part of our collective fascination with AI may be fuelled by hype.(2) But in many respects, the future is already here, and this moment is at once profoundly unsettling and genuinely exciting. To put this in perspective, over the long arc of human history, historians have identified only 24 general-purpose technologies that fundamentally reshaped the world, such as the printing press and electricity. Like many, I believe we are living through the 25th.(3)
3. Almost every significant enterprise today is a technology company in the making, if it is not already one. Nearly 9 in 10 organisations now deploy Gen AI in at least one business function,(4) and 8 in 10 professional services firms expect agentic AI to become central to their workflows by 2030.(5) These trends are unsurprising. As of last year, the best-performing frontier models were meeting or exceeding the human benchmark on nearly half of economically valuable tasks.(6)
4. In-house legal departments are inevitably in the thick of these changes. A global report published last month found that 87% of general counsel are now using AI for work and that 53% of legal teams have a formalised technology roadmap in place, doubling these figures from 2025.(7) The corporate counsel community is evidently not peripheral to the AI transition; rather, its members are being pulled to its centre. And it is not hard to see why. All of you occupy a special position at the coalface of corporate decision-making – you’re close enough to the action to understand the commercial stakes, and yet trusted enough, just detached enough, to be able to temper commercial ambition when it threatens to outpace good judgment.(8) I suggest it is this combination of proximity and trust that makes you natural shepherds of effective and responsible AI adoption.
5. The question I want to pose this morning is what makes a good AI shepherd. As I see it, the role rests on two foundations. The first is the ability to serve as a trusted advisor – one who can guide the organisation through an evolving risk landscape with clarity and confidence. The second is a willingness to lead by example, by being a skilled deployer of AI who can demonstrate what responsible AI adoption means in practice. Let me take each of those two points in turn.
II. Corporate Counsel as Trusted Advisors
6. The key to being a trusted advisor in this context is surely a deep and current understanding of how AI is redrawing the corporate risk matrix. And I wish to illustrate this by reference to three fault lines: data, duties, and displacement.
A. Data
7. Data, which refers to information in its broadest sense, is where the most immediate risks have surfaced. Indeed, a 2026 survey of legal professionals ranked the risk of data leakage as the second highest AI-related concern.(9) But leakage is only one facet of a wider spectrum of data-related concerns, running the gamut from cybersecurity and data protection, to legal privilege and IP rights.
8. Take legal privilege. Every interaction with an AI platform leaves a trail. Prompts, outputs and metadata could conceivably be the subject of discovery applications, and courts are already grappling with whether such material could be privileged.(10) In February, the Southern District of New York answered this question in the negative in USA v Bradley Heppner.(11) The defendant, a CEO anticipating an indictment for fraud, used an AI platform to prepare thirty-one documents that outlined a possible defence strategy. He claimed privilege over these documents on the basis that they incorporated information received from counsel and were prepared in anticipation of obtaining legal advice. The court rejected this argument, citing the platform’s privacy policy, which permitted the disclosure of user inputs and AI outputs to third parties. AI conversations were therefore thought to be distinguishable from notes prepared by a client to be privately shared with an attorney. The court did leave one door open: had the use of the tool been directed by counsel, this might have likened it to a highly-trained professional acting as the lawyer’s agent. It remains to be seen whether that reasoning will be followed in other courts.
9. Another data-related challenge is the erosion of data integrity. Some anticipate that by 2028, which is just two years away, half of all organisations will have adopted a zero-trust posture for the purposes of data governance. The concern is that AI-generated data will have become so pervasive and indistinguishable from human-generated data that trust can no longer be the default assumption.(12) And this risk is compounded by the use of “shadow AI” – publicly-available tools used by employees without employer sanction or oversight.(13) Together, these developments signal a proliferation of AI-generated material, which reinforces the need for clear organisational policies on AI usage, data hygiene and verification.
B. Duties
10. But the governance implications of AI do not stop there. They extend to the realm of corporate governance and directors’ duties. We can expect directors and officers to depend increasingly on AI tools to support their work and perhaps even to consider delegating selected workstreams to AI agents. In a 2025 survey, a staggering 94% of global CEOs felt that AI could provide better counsel than a human board member.(14) This appetite is not entirely new. In 2014, a venture capital firm appointed an algorithm to its board and gave it voting rights on investment decisions. While that experiment was largely confined to the making of recommendations, what was a novelty then – perhaps a fad – is emerging as a potentially serious proposition today.(15)
11. This raises important questions about the adequacy of existing corporate governance frameworks. For instance, some academics have argued that the European Union’s AI Act(16) lays the groundwork for two novel fiduciary duties of algorithmic stewardship.(17) The first, “AI due care”, calls for technological literacy and a framework of oversight of AI systems that is guided by values such as non-discrimination and explainability. Existing duties of care are said to be insufficient because AI poses a “qualitatively different category of risk”.(18) In the authors’ words, AI tools are “decision-making structures” that “mediat[e] organisational choices in ways that often elude human comprehension, auditability and normative clarity”.(19) The second proposed duty, “AI loyalty oversight”, expands the scope of existing no-conflict and no-profit rules. It calls for assurance that the AI systems themselves remain aligned with the company’s interests and do not embed partiality. This responds to the fact that many AI systems are opaque and potentially self-executing, which can generate less verifiable sources of influence on decision-making.(20)
12. Now, reasonable people may disagree on the need for such new duties and their appropriate content, but the discussion is helpful in outlining the kinds of issues that may lie on the horizon. Indeed, the Monetary Authority of Singapore recently held a public consultation on proposed Guidelines for AI Risk Management, which aim to strengthen AI oversight and life cycle controls in financial institutions. In this light, the horizon may actually be closer than it appears.
C. Displacement
13. A third cluster of risks concerns displacement, or the impact of Gen AI on the workforce.
14. The AI revolution differs from past technological shifts in at least two important ways. First, it is knowledge workers who are most directly at risk of unemployment, because AI’s capabilities are most developed in the cognitive tasks that have long been their preserve. Second, the sheer breadth of AI’s cognitive abilities suggests that there may be few safe harbours. In earlier waves of automation, many displaced workers could retrain from sunsetting roles to adjacent ones. By contrast, AI potentially constitutes a general substitute for labour that cuts across a wide spectrum of knowledge-based work.(21)
15. To be sure, economists are divided on the precise magnitude and pace of labour displacement. But most agree that a significant period of transition lies ahead of us.(22) Some employers have already stated that they will “exit” employees who are unwilling to reskill.(23) Others have required employees to train their AI replacements – an experience that some have likened to “digging [their] own digital grave[s]”.(24) It is difficult to forecast aggregate labour outcomes in the short run, especially as the diffusion of new technologies invariably takes time.(25) But even accounting for this, Anthropic’s CEO Dario Amodei estimates that we are between 1 and 5 years away from seeing half of all entry-level white-collar jobs being displaced.(26) And this accords with a recent study by the Stanford Digital Economy Lab on AI-exposed sectors, which found that of all cohorts, the largest relative decline in employment was seen among early-career workers.(27)
16. For corporate counsel, this is not a distant macroeconomic trend. Every point in the human resources pipeline – from recruitment, to performance management, to termination – is already a source of legal exposure. As more of these decisions are driven, determined or influenced by AI, that exposure will grow.
17. In summary, the fault lines of data, duties and displacement characterise a risk landscape that is less stable and, in some ways, less forgiving than before. What this demands of corporate counsel is not just legal knowledge, but the ability to keep pace with rules that are being rewritten in real time, to anticipate where the next fault lines might appear, and to render sound advice in a space that will remain uncertain for some time. That is what it means, I suggest, to be a trusted advisor. To be sure, it is challenging; but it is also why the role has never been more indispensable.
III. Corporate Counsel as Skilled Deployers
18. Yet, understanding the risk landscape is only half the battle. Corporate counsel should also model what responsible AI adoption looks like within their own teams. This brings me to the second part of my address, where I consider the promise that AI holds for legal teams, the pitfalls for the unwary, and what a better path forward might look like.
A. The Promise of Transformation
19. Let me begin with the promise of AI. The benchmarking firm Vals.AI produced two landmark reports last year. The first found that several leading AI tools already outperform lawyers on tasks such as document summarisation and analysis.(28) A follow-up study that focused on legal research found that various tools exceeded the human performance baseline on every metric evaluated – the accuracy of results, the citation of authoritative sources and the clarity of response.(29)
20. Perhaps the most vivid illustration of this promise comes from a recent post on X that garnered over 7 million views. Zack Shapiro, a transactions lawyer, described how his two-man boutique firm uses AI to compete with some of the largest firms in the market.(30) Three things stood out from his post. First, the AI did not apply a standard set of tools, but encoded the lawyer’s professional judgment. By creating structured instruction files or “skills”, he taught the tool his “analytical frameworks, preferred formats, voice, and judgment”. Notably, the files he created are transferable, which means that any junior lawyer could apply those standards from their first day on the job. Second, the tool’s ability to write code allowed it to reach inside documents, produce redlines, and draft cover emails seamlessly, without having to manually operate each application. Third, the tool provided insights into Mr Shapiro’s own working patterns, identifying where friction was the highest and where automation might have the greatest impact. In short, the tool was effectively an alter ego, an associate and a mentor all at once.
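To make the idea of a “skill” more concrete: Mr Shapiro’s post does not reproduce his actual files, so what follows is a purely hypothetical sketch of what such a structured instruction file might look like – a plain document that encodes a lawyer’s framework, preferred format and voice so that an AI tool (or a junior lawyer) can apply them consistently. Every heading and rule below is invented for illustration.

```markdown
# Skill: First-pass review of indemnity clauses (hypothetical example)

## When to apply
Any draft commercial contract containing an indemnity, at first review.

## Analytical framework
1. Identify the trigger events; confirm they are limited to third-party claims.
2. Check for a liability cap and carve-outs; flag any uncapped indemnity.
3. Compare against our standard position: mutual, capped at fees paid,
   with fraud and wilful misconduct excluded from the cap.

## Voice and format
- Report findings as a short memo: Issue / Risk / Recommended redline.
- Plain English; no double negatives; cite clause numbers for every point.
```

The point of the illustration is the one Mr Shapiro himself made: because the judgment lives in a transferable file rather than in one person’s head, the standard can be applied from day one by anyone – human or machine – working on the matter.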
B. The Pitfalls of AI Adoption
21. This gives us a glimpse of what context-aware and agentic AI makes possible – and why it may soon be more accurate to think of AI not just as a tool but as a teammate. But like any teammate, it will come with its own blind spots and limitations. Let me identify three in particular.
i. Uneven performance
22. The first pitfall is the uneven performance of AI, or what has been termed its jagged frontier. Gen AI’s abilities do not form a smooth gradient. Instead, the boundary between brilliance and bewilderment tends to be irregular and sometimes counterintuitive. AI can achieve gold-standard results in mathematical Olympiads, and yet on seemingly simpler tasks, it can fail spectacularly and without warning.(31) Mr Shapiro’s experience is real, but it is in some ways an ideal case. The reality is that even purpose-built legal AI tools continue to suffer from hallucinations, and performance weakens on more complex tasks such as research into multi-jurisdictional or novel questions.(32) More fundamentally, what we describe as AI “reasoning” differs in kind from what lawyers understand as the legal method. In general, AI generates plausible text based on learned patterns rather than a semantic understanding of precedent and principle.(33) Its results may look like legal reasoning, but they are not the same thing.
23. None of this is an argument against using AI. But it counsels us to learn about the terrain before setting out. We must be clear-eyed and cautious about AI’s limitations, so that we may more readily recognise when AI fails to work.
ii. Undue reliance
24. The second pitfall is undue reliance on AI and the resulting erosion of human accountability. A global survey of nearly 50,000 workers revealed that 46% of them critically evaluated AI outputs only “rarely or sometimes”.(34) The organisational risks of this should be obvious. But equally disconcerting are the risks to individual workers. If we treat AI as infallible and disengage our judgment, we risk allowing the capabilities that have defined lawyers as professionals to fade away. I’m talking about qualities such as critical thinking, domain expertise, and professional judgment. The goal of AI adoption should be to expand our capacity for higher-order and higher-impact work, not to provide cover for vacating our judgment.
25. This accountability gap also has an external dimension, in terms of the relationship with external counsel. Recent surveys indicate that around two-thirds of corporate respondents support AI use by external counsel, but fewer than a fifth have guidelines requiring such use,(35) while three-fifths are not aware whether their external counsel are even using AI.(36) This mismatch between expectation and reality means that some organisations are forfeiting efficiency gains while others are leaving potential exposures unmanaged.
iii. Undermined professional development
26. The third pitfall is that aggressive, unthinking AI adoption might undermine the professional development of our younger colleagues. The legal profession has long been sustained by apprenticeship, where tacit knowledge and professional instinct are acquired through years of practice and observation. The problem is that the formative tasks central to building one’s craft tend to be the very tasks that are most vulnerable to automation. Unsurprisingly, the erosion of professional skills development came up in one survey as the single largest concern with AI.(37) What is at stake here is not only the development of individual lawyers, but in the longer term, the sustainability of the profession as a whole.
C. The Path Forward
27. How then should corporate counsel respond to the pitfalls of uneven performance, undue reliance and undermined development? There is much that can be done, and much that is already being done, but let me suggest three possible priorities.
i. Preserving human wisdom
28. The first is to pair artificial intelligence with human wisdom, and to close the AI literacy gap before it widens further. When legal professionals in Singapore and Malaysia were surveyed last year about how confident they were in their understanding of how Gen AI tools work and how they can be used, 17% responded that they were “not at all confident”, while 36% responded they were “unsure”.(38) On this basis, about half of the profession is operating somewhat in the dark.
29. What is needed is not technical mastery but situational fluency – the ability to assess whether AI systems are being deployed consistently with an organisation’s objectives, frameworks and commitments. This means developing working familiarity with topics such as “model typologies, data provenance [and] fairness metrics”.(39) It means being selective about which tools we deploy and where, instead of chasing after every shiny tool and amassing a pile of shelfware. It means being alert to automation bias, or the tendency to defer to machine-generated outputs simply because they appear polished or rigorous or because we don’t understand how they got there.(40) And, given AI’s jagged frontier, it means adopting a more considered perspective about what AI can do, what it cannot do, and what it should not be allowed to do.
30. A number of initiatives are underway to concretise these principles and to strengthen the technological core of our profession. In this year’s Budget debates, the Government announced plans to support 100,000 workers in becoming AI-bilingual by 2029, and the IMDA’s expanded TechSkills Accelerator programme will place an initial focus on the accounting and legal professions.(41) This complements the ongoing work of the Singapore Academy of Law, including its regular hands-on AI workshops(42) and its prompt engineering guide that was updated just last October and features an expanded range of sample prompts for common legal tasks.(43) The SCCA has been an active contributor to many of these initiatives, and this Congress is yet another testament to that commitment.(44)
31. Alongside these initiatives, a growing body of practical resources is also taking shape. In January, the IMDA introduced the world’s first Model AI Governance Framework for Agentic AI. And in March, the Ministry of Law published its Guide for Using Generative AI in the Legal Sector, which contains, among other things, risk assessment checklists that can aid lawyers in evaluating AI use cases and external vendors.(45) In short, the infrastructure for responsible AI adoption is already being built, and I would strongly encourage all of us to capitalise on what is available and to contribute to it.
ii. Strengthening partnerships with external counsel
32. The second priority is to actively involve external counsel in conversations about AI adoption.
33. These conversations are needed because AI is changing what in-house teams can manage for themselves. AI democratises access to capabilities that were once concentrated in large and well-resourced law firms, making it increasingly feasible to insource standard and routine work. The data hints at how far this shift may go. In a 2025 survey, nearly two-thirds of corporate counsel expected to reduce their reliance on external firms for routine legal tasks,(46) with 82% expecting lower spend on contract drafting and negotiation, 45% on litigation, and 42% on M&A.(47) This points to a potential shift in the division of legal labour.(48)
34. If handled well, this transformation would not be a loss for either side, but rather would enable both sides to play to their comparative advantages in a new operating environment. This might see external counsel reallocating their time and resources to the tasks that they are best placed to perform, such as high-stakes transactions that require a seasoned hand, or novel and complex matters that demand specialist expertise. This might also see law firms leveraging AI in a more intentional way that enables productivity gains to benefit both sides of the relationship.(49) But if this reconfiguration is to be sustainable and mutually beneficial, there will need to be proactive and inclusive conversations about how the division of work should evolve and how value should be measured and shared in an AI-augmented age.(50)
iii. Enhancing mentorship
35. The third priority is to enhance mentorship for the next generation of lawyers. As opportunities for learning by repetition and osmosis contract, we must do more to safeguard the training ground for our younger colleagues. This may include introducing deliberate friction by requiring certain tasks to be performed manually or with limited AI assistance, coupled with structured feedback and a process of internalisation.(51) It also means resisting the temptation to measure junior lawyers against unattainable AI benchmarks. At the same time, it is worth noting that mentorship need not flow in a single direction. Just as senior lawyers offer judgment and experience, junior lawyers who have been raised as digital natives can be a useful source of reverse mentorship and help to close the AI literacy gap.
36. The short point is this: AI does not mark the end of lawyers. It marks the beginning, I suggest, of a different species of lawyers: one defined not by the ability to out-process the algorithm – we can’t – but by qualities that no algorithm can replicate. The ability to provide trusted counsel; to read the organisational tea leaves; to bridge legal risk and commercial reality; and to hold the ethical compass steady when navigating uncertain terrain. In this vision of a new collaborative model of human lawyers supported by high-functioning technological assistants, these are qualities that will distinguish the human half of the partnership. And when these qualities are paired with the analytical power and agility of AI, we can expect the reach of the legal team to be extended dramatically, and if done well, this will greatly benefit those who use our services.
IV. Conclusion
37. Let me close by referring to the suggestion advanced by Brice Challamel, the Head of AI Strategy at OpenAI, that we should come to see Gen AI not as a tool but as a teammate. This is a teammate that happens to be a remarkably versatile polymath: it is an assistant that can transform chaos and clutter into manageable workflows; an expert that can cut through the noise to reveal critical insights; a coach that holds up a mirror and gives timely and honest feedback; and a creative that generates valuable – if sometimes unexpected – ideas.(52)
38. The AI moment that we are all confronting is both profoundly disturbing and genuinely exciting. The teammate metaphor captures why both dimensions are true. It is the same multifaceted capacity for disruption that generates both the anxiety and the excitement. And what will determine which prevails will not be the technology alone, but the human teammate who brings her judgment to bear and who decides whether to run on autopilot or to become a copilot and the captain in this endeavour.
39. This is the role of the shepherd in the AI age – not to impede the flock’s progress, nor to push ahead without regard for the terrain, but to bring a sense of safety, direction and leadership in times of profound change.
40. Thank you and I look forward to our conversation.