✍️ Coursework (Formative)

2025/26 Autumn Term

Published: 20 August 2025

🎯 OBJECTIVE:

DUE DATE: 2 December 2025, 5pm.

🗺️ Context

Governments around the world are increasingly turning to AI systems to manage welfare and social support programs. From predicting eligibility for unemployment benefits to detecting fraud in healthcare claims, algorithms are shaping how public resources are distributed. Proponents argue that automation makes welfare systems more efficient and cost-effective. Critics counter that these systems often replicate biases, lack transparency, and can wrongly deny benefits to vulnerable people. In some countries, public protests have erupted over the use of such systems, raising questions about fairness, accountability, and democracy.

📚 Case study materials

For this case study, you will rely on the following materials (note that these resources are not exhaustive, so you will need to search for your own resources too!):

🗞️ News articles

🎙️ Podcast

🎓 Academic articles

📚 Book chapters

📝 Policy reports

❓ Questions

Using the provided readings, your own knowledge, and at least 3 additional references, answer the following questions. Assume you are writing for a general audience with no technical or policy expertise. You should also assume your audience knows nothing about DS101A or its content!

  1. What are automated welfare systems? Where and how have they been implemented?
  2. What kinds of algorithms are typically used to build such systems? How do they function at a basic level? (Some level of technical detail is needed here!)
  3. What assumptions are embedded in these models about individuals and society (e.g., who is “likely” to commit fraud, who “deserves” aid)? Are these assumptions fair?
  4. What are the potential benefits of using AI in welfare systems?
  5. What risks do they pose?
  6. How do policymakers, caseworkers, and citizens perceive these systems? Are their perspectives aligned or in conflict?
  7. How might such systems be designed or governed to mitigate risks while preserving benefits (e.g., transparency, appeals processes, audits)?
  8. What is your view: should governments rely on AI for welfare distribution? Why or why not?

📝 Instructions

  1. Your answers to the questions must be written in Quarto Markdown. You are to submit an HTML file generated with Quarto Markdown.

  2. Feel free to modify the layout and aesthetics of the Quarto Markdown template. You can also add images, tables, bullet points, etc. to your answers.

  3. Each of your answers needs to be properly substantiated with evidence.

  4. You can use part or all of the materials provided but you need to include at least 3 more references than the ones provided in your answers. You must cite these references in your markdown using Zotero (revisit 💻 Quarto/Zotero tutorial).

    • Any ideas, arguments or results that were not produced by your mind must be cited in the references.
    • 👉 Avoid making explicit references to the course (e.g., writing things like “As we saw in Week 05…”), as this would go against the spirit of the exercise, which is to write to a general audience. Instead, refer to the bibliography we have provided and try to make connections between the ideas we have discussed and the case study. The same goes for AI-generated text.
  5. Make your writing clear, do not hide your thoughts behind jargon. You are not writing an academic article. Your case study is emulating a communication you would send to work colleagues who have very different educational backgrounds. You can find tips on how to write clearly and make your argumentation coherent in the Resources on clean and logical writing section of the 📄 Resources on argumentation and logical fallacies page on the course website.

  6. Do not plagiarise. It is not that difficult to spot that someone copied content from other sources and, frankly, it is very embarrassing if you get caught. Here is the link to the LSE regulation on plagiarism.

    • You are allowed to use Generative AI to help you write your essay. But you are asked to report the AI tool you used and the extent to which you used it. Read more about Generative AI in the section below. Check the Generative AI Policy for how to reference use of Generative AI in your essay.
  7. Make sure you address all the questions.
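As a reminder of what Zotero-backed citations look like in practice, here is a minimal sketch of a Quarto document with an in-text citation. The file name `references.bib` and the citation key `alston2019` are placeholders; your own keys will come from the bibliography file Zotero exports for you.

```markdown
---
title: "Automated welfare systems: a case study"
format: html
bibliography: references.bib
---

Automated welfare systems have drawn scrutiny from human rights
bodies [@alston2019].
```

When you render the document, Quarto replaces `[@alston2019]` with a formatted citation and appends a reference list automatically.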

🤖 Using AI help?

You are allowed to use Generative AI tools such as ChatGPT to help you write your essay. If you do use it, however minimal that use, you are asked to report the AI tool you used and add an extra section to your essay explaining the extent to which you used it (this won’t count towards the word limit).

Note that, while these tools can be helpful, they tend to generate responses that sound convincing but are not necessarily correct. Another problem is that they tend to generate responses that are formulaic and repetitive, thus limiting your chances of getting a high mark.

In effect, you are asked to explain the following:

  • What AI tool did you use?
  • How did you use it? For example, did you use it to generate ideas, write a draft, proofread your essay, etc.?
  • How much of your essay was written by the AI tool? For example, did you feed it the entire prompt and it wrote the entire essay? Or did you feed it guided questions?
  • If you didn’t edit the AI tool’s output, what was the output like? For example, did it produce a coherent essay?
  • What did you do to make sure that the AI tool did not produce gibberish and that the essay was not formulaic?
  • Importantly, how did you ensure that the essay did not contain any plagiarism?

✅ Submission

  • Render your Quarto Markdown file to HTML
  • ⚠️ IMPORTANT ⚠️: Rename your HTML to DS101A-2025-2026-formative-case-study-<CANDIDATE_NUMBER>.html, replacing <CANDIDATE_NUMBER> with your candidate number. For example: DS101A-2025-2026-formative-case-study-123456.html
  • Upload this file to Moodle under the appropriate assignment.
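The render-and-rename steps above can be sketched as shell commands. This is a sketch, not a required workflow: it assumes the Quarto CLI is installed and that your source file is called `case-study.qmd` (a placeholder, as is the candidate number `123456`).

```shell
# Rendering is done with the Quarto CLI, e.g.:
#   quarto render case-study.qmd --to html
# which produces case-study.html. Here we create a stand-in file for
# that rendered output so the renaming step can be demonstrated.
touch case-study.html

# Rename the output to the required pattern, substituting your own
# candidate number for the placeholder 123456.
CANDIDATE_NUMBER=123456
mv case-study.html "DS101A-2025-2026-formative-case-study-${CANDIDATE_NUMBER}.html"

# Confirm the renamed file exists before uploading it to Moodle.
ls DS101A-2025-2026-formative-case-study-*.html
```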

✋ Getting Help

  • If you have any questions about the assignment, please post them on the #help channel on Slack.
  • Book office hours.
  • Attend the drop-in session on Wednesday 27th November between 10.30 and 12.00 (COL 1.06).
  • Organise a study group with your classmates.

📑 Marking Scheme

(You will be graded as if this was a summative assessment.)

The following is the marking scheme we will use to mark your case study. Note that full marks mean that you have met a particular criterion to an extremely high standard, beyond our expectations. If you did “everything right”, you should expect about 70% of the marks on each criterion.

🧮 Understanding of technical theories and algorithms (0-20 marks)
  • <5 marks: Fail to demonstrate the technical concepts and algorithms related to the case study. You barely described any technical concept and/or algorithm related to the case study and, if you did, you did so in a very superficial way that showed a lack of understanding of the concepts described.
  • 5-10 marks: Adequate knowledge of the technical concepts and algorithms related to the case study. You show some high-level understanding of the technical concepts and algorithms associated with the automated welfare systems case study and you are able to convey that understanding to a large extent. However, there are rather major gaps in your understanding of the technical concepts and algorithms associated with the case study.
  • 11-15 marks: Very good knowledge of the technical concepts and algorithms related to the case study. You clearly understand most, if not all, of the technical concepts and algorithms that underpin the automated welfare systems case study. However, you might either still have minor misconceptions about some concepts and/or convey the concepts in a way that would be slightly confusing to a general lay audience.
  • >15 marks: 🏆 Excellent knowledge of the technical concepts and algorithms related to the case study. Each of your answers demonstrates a very thorough and detailed understanding of the technical concepts and algorithms that relate to this case study (i.e., automated welfare systems). At the same time, you understand them well enough to express them in layman’s language.
🤔 Critical thinking (0-30 marks)
  • <7 marks: Complete lack of critical thinking about sources and algorithms used. You show no critical reflection at all on the topics linked to the case study.
  • 7-15 marks: Limited degree of critical thinking about sources and algorithms used. You present ideas mostly at face value, and your reflection on them and critical examination of them remains superficial.
  • 16-22 marks: Some degree of critical thinking about sources and algorithms used. You show some degree of critical thinking relating to the topic of the case study, but your critical thinking lacks some nuance.
  • >22 marks: 🏆 Engage critically with sources and algorithms used. You are able to reflect critically on the topic of automated welfare systems and also question the quality of the materials you are engaging with to build your answers to the case study.
🧬 Organisation and structure (0-15 marks)
  • <3 marks: Poor. Information and ideas are poorly sequenced. The audience has difficulty following the thread of thought.
  • 4-7 marks: Fair. Information and ideas are presented in an order that the audience can mostly follow.
  • 8-11 marks: Good. Information and ideas are presented in a logical sequence which the reader can follow with little or no difficulty.
  • >11 marks: 🏆 Excellent. Information and ideas are presented in a logical sequence which flows naturally and is engaging to the audience.
🕵️ Use of literature and evidence (0-12 marks)
  • <3 marks: Poor. You fail to provide any, or any accurate, empirical information; you make empirical claims with no evidence to back them up; you use no or inappropriate sources.
  • 4-6 marks: Fair. You have some difficulties in identifying sufficient or relevant information; insufficient support for empirical claims from reliable sources; use of few or somewhat inappropriate sources.
  • 7-9 marks: Good. You have some success in making sufficient and relevant empirical claims and in providing sufficient support for them from a reasonable number of reliable sources.
  • >9 marks: 🏆 Excellent. You accurately identified sufficient and relevant empirical information, and drew on support from sufficient and reliable sources.
📝 Communication and formatting (0-8 marks)
  • <3 marks: Poor. Your Quarto formatting makes it difficult, if not impossible, to read your document: major elements of the HTML document are missing. Your writing style is not fit for a general audience.
  • 3-4 marks: Fair. Your Quarto formatting is very basic (though the document is readable). Your writing is generally too complex for a general audience.
  • 5-6 marks: Good. You occasionally forget to explain the odd technical concept or abbreviation that a general audience might not be familiar with, but your writing style is generally highly legible. Your Quarto formatting is neat, though a few minor elements here and there could be improved.
  • >6 marks: 🏆 Excellent. You customised your Quarto Markdown formatting, including correctly formatted and referenced tables and figures as needed. Your citations are perfectly formatted too. Your writing style is fit for a general audience, free of jargon and excessive abbreviations.
🎨 Originality in problem solving as a data scientist (0-15 marks)
  • <3 marks: Poor. Your answers lack originality: there are no new ideas, insights, or creative synthesis. You primarily reuse others’ work or perspectives without contributing a unique perspective.
  • 4-7 marks: Fair. Some originality is evident in your answers; you present original thoughts, analyses or perspectives, though they may be less fully developed or occasionally rely on conventional perspectives.
  • 8-11 marks: Good. You show considerable originality; you present well-developed, unique insights or approaches that enhance understanding of the peculiarities of the case study (i.e., its technical aspects and/or its ethical aspects).
  • >11 marks: 🏆 Excellent. You demonstrate a high level of originality, with innovative ideas or approaches. Your work is unique, showing creative synthesis or novel application of theories, concepts, or data.

References

Amaro, Silvia. 2021. “Dutch Government Resigns After Childcare Benefits Scandal.” CNBC, January 15, 2021. https://www.cnbc.com/2021/01/15/dutch-government-resigns-after-childcare-benefits-scandal-.html.
Amnesty International. 2025. “UK: Government’s Unchecked Use of Tech and AI Systems Leading to Exclusion of People with Disabilities and Other Marginalized Groups.” Amnesty International. https://www.amnesty.org/en/latest/news/2025/07/uk-governments-unchecked-use-of-tech-and-ai-systems-leading-to-exclusion-of-people-with-disabilities-and-other-marginalized-groups/.
Bandhakavi, Swagath. 2024. “UK’s AI System for Welfare Fraud Detection Faces Criticism over Bias and Transparency.” Tech Monitor. https://www.techmonitor.ai/digital-economy/ai-and-automation/uk-welfare-fraud-ai-system-faces-criticism-over-bias-and-transparency.
ten Seldam, Björn, and Alex Brenninkmeijer. 2021. “The Dutch Benefits Scandal: A Cautionary Tale for Algorithmic Enforcement.” EU Law Enforcement, April 30, 2021. https://eulawenforcement.com/?p=7941.
Booth, Robert. 2024. “Revealed: Bias Found in AI System Used to Detect UK Benefits Fraud.” The Guardian, December. https://www.theguardian.com/society/2024/dec/06/revealed-bias-found-in-ai-system-used-to-detect-uk-benefits.
Carney, Terry. 2023. “The Automated Welfare State: Challenges for Socioeconomic Rights of the Marginalised.” In Money, Power, and AI: Automated Banks and Automated States, edited by Monika Zalnieriute and Zofia Bednarz, 95–115. Cambridge: Cambridge University Press. https://doi.org/10.1017/9781009334297.009.
Enqvist, Lena. 2024. “Rule-Based Versus AI-Driven Benefits Allocation: GDPR and AIA Legal Implications and Challenges for Automation in Public Social Security Administration.” Information & Communications Technology Law 33 (2): 222–46. https://doi.org/10.1080/13600834.2024.2349835.
Constantaras, Eva, Gabriel Geiger, Justin-Casimir Braun, Dhruv Mehrotra, and Htet Aung. 2023. “Inside the Suspicion Machine.” Wired, March. https://www.wired.com/story/welfare-state-algorithms/.
Webber, Jordan Erica (presenter), and Danielle Stephens (producer). 2019. “The Digital Welfare State: Chips with Everything Podcast.” The Guardian, October. https://www.theguardian.com/technology/audio/2019/oct/21/the-digital-welfare-state-chips-with-everything-podcast.
Jurek, Łukasz. 2024. “The Use of Digital Technology in the Fight Against Welfare Fraud: Comparative Analysis of Selected National Experiences.” Publishing House of Wroclaw University of Economics; Business. https://doi.org/10.15611/2024.96.3.02.
Braun, Justin-Casimir, Eva Constantaras, Htet Aung, Gabriel Geiger, Dhruv Mehrotra, and Daniel Howden. 2023. “Suspicion Machines Methodology.” Lighthouse Reports. https://www.lighthousereports.com/suspicion-machines-methodology/.
Nicolás, Maria Alejandra, and Rafael Cardoso Sampaio. 2024. “Balancing Efficiency and Public Interest: The Impact of AI Automation on Social Benefit Provision in Brazil.” https://doi.org/10.14763/2024.3.1799.
OECD Digital Government Studies. 2016. “Digital Government Strategies for Transforming Public Services in the Welfare Areas.” OECD. https://www.oecd.org/en/publications/digital-government-strategies-for-transforming-public-services-in-the-welfare-areas_0d2eff45-en.html.
Alston, Philip. 2019. “A/74/493: Digital Welfare States and Human Rights - Report of the Special Rapporteur on Extreme Poverty and Human Rights.” OHCHR. https://www.ohchr.org/en/documents/thematic-reports/a74493-digital-welfare-states-and-human-rights-report-special-rapporteur.
Zajko, Mike. 2023. “Automated Government Benefits and Welfare Surveillance.” Surveillance & Society 21 (3): 246–58. https://doi.org/10.24908/ss.v21i3.16107.