DS101 – Fundamentals of Data Science
01 Dec 2025
Format:
Today is different from our usual lectures. You will lead the discussion based on case study readings about LLMs being used in legal proceedings and adjacent sensitive contexts (child protection, social work, healthcare).
Structure:
My role:
Before we start this case, let’s have a quick look at how LLMs work:
Watch this brief explanation:
Key points to remember:
Keep this in mind as you read today’s cases about legal proceedings.
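The key idea behind the explanation above can be sketched in miniature. This is a hedged toy, not a real LLM: a bigram word model over an invented corpus (the corpus, the `generate` function, and its parameters are all illustrative assumptions, not anything from the readings). Real models use neural networks with billions of parameters, but the principle is the same: generation is next-token sampling, so fluent output carries no guarantee of truth.

```python
import random
from collections import defaultdict

# Toy bigram model (illustrative only): count which word follows which
# in a tiny invented corpus, then generate by sampling those followers.
corpus = (
    "the court held that the claim was dismissed . "
    "the court held that the appeal was allowed . "
    "the judge ruled that the case was dismissed ."
).split()

# Record every observed follower of each word.
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def generate(start, n_tokens, seed=0):
    """Sample a plausible-sounding continuation one token at a time.
    The model only knows word co-occurrence, not facts, so it can splice
    fragments into fluent sentences that never appeared in the corpus."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_tokens):
        options = followers.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("the", 8))
```

Every adjacent word pair in the output was seen in the corpus, yet the whole sentence may assert something the corpus never said, which is the seed of hallucination.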
Before diving into readings, consider what’s at stake in legal proceedings:
Adjacent sensitive contexts:
❓Question for you: What unique risks might LLMs pose in these contexts?
Read 3-4 articles from different categories. Focus on understanding what happened and why it matters.
UK LEGAL CASES (start here):
US LEGAL CASES (for comparison):
CHILD PROTECTION:
HEALTHCARE (legal-adjacent):
As you read, note:
Within your table groups:
Round 1: Sharing (5-7 min)
Round 2: Analysis (8-10 min)
Choose 2-3 of these questions to explore:
Round 3: Synthesis (5-7 min)
Tip: Jot down your group’s main points to share.
Each group shares (2-3 min per group)
We’ll explore emerging themes together based on what you raise.
Possible directions:
Ground rules:
On professional responsibility:
On access and equity:
On sensitive contexts:
On hallucinations:
In light of what you know, what is your opinion of the UAE’s proposal to rewrite its laws with LLMs?
For the Financial Times article on the topic, see here
The Guardian (Sept 2023): “Lord Justice Birss promotes ‘jolly useful’ ChatGPT for lawyers”
What happened:
Questions:
Key difference: Understanding LLM limitations and maintaining human oversight.
Bar Council guidance (November 2025):
“The growth of AI tools in the legal sector is inevitable and, as the guidance explains, the best-placed barristers will be those who make the efforts to understand these systems so that they can be used with control and integrity. Any use of AI must be done carefully to safeguard client confidentiality and maintain trust and confidence, privacy, and compliance with applicable laws.” Sam Townend KC, Chair of the Bar Council, at the launch of the guidance
High Court warning (June 2025):
Dame Victoria Sharp, President of the King’s Bench Division, and Mr Justice Johnson:
“Such tools [generative AI tools] can produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may turn out to be entirely incorrect. The responses may make confident assertions that are simply untrue. They may cite sources that do not exist. They may purport to quote passages from a genuine source that do not appear in that source”
Professional consequences:
Key message: Courts are taking this very seriously and will impose severe penalties.
If you want data:
Damien Charlotin’s AI Hallucination Cases Database (as of late 2025):
Why lawyers keep making this mistake:
Pattern: Both experienced lawyers and junior barristers make these errors.
If you want deeper understanding:
Why LLMs hallucinate:
Why it’s particularly dangerous in legal contexts:
Research findings:
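One way to see why hallucinated citations read so convincingly: a language model scores word sequences, not truth. In this hedged toy (an invented two-sentence corpus; `fluency` is my own illustrative helper, not a real library API), a fabricated holding scores exactly as plausible as a genuine one.

```python
from collections import Counter

# Toy corpus (invented, illustrative only).
corpus = (
    "the court held that the appeal was allowed . "
    "the court held that the claim was dismissed ."
).split()

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def fluency(sentence):
    """Average bigram probability: how plausible the wording looks.
    This measures word-sequence likelihood only, never factual accuracy."""
    words = sentence.split()
    probs = [bigrams[(a, b)] / unigrams[a] for a, b in zip(words, words[1:])]
    return sum(probs) / len(probs) if probs else 0.0

real = "the court held that the appeal was allowed"
fabricated = "the court held that the claim was allowed"  # never occurred

# Both score identically: the model cannot tell truth from fabrication.
print(fluency(real), fluency(fabricated))
```

Nothing in the model's scoring distinguishes the real holding from the fabricated one, which is why confident, fluent output must be verified against primary sources.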
Further reading (after class):
Victorian case details:
What happened:
Why this is particularly serious:
OVIC’s order:
Key principle: Some contexts require absolute prohibition, not just caution.
Current use:
Key risks:
Unlike legal cases:
AMA position (2023):
Called for stronger AI regulations after doctors began using ChatGPT for medical notes and patient interactions without proper safeguards.
Common thread with legal: Professionals delegating judgment to systems that cannot bear responsibility.
UK approach:
US approach:
Common elements:
Cultural difference: US appears more punitive, UK more regulatory (so far).
Individual reflection (2-3 min):
Consider:
Optional sharing:
Looking ahead:
Thank you for your thoughtful engagement today.
A more detailed commentary slide deck will be shared with you after class, including:
For those interested in the technical side:
Tracking projects:
Remember: This is a rapidly evolving field. What’s true today may change tomorrow.
Questions?

LSE DS101 2025/26 Autumn Term