🤖 Generative AI Policy
Using AI Tools in DS105W (Winter Term)
The policy
The use of generative AI tools is fully authorised in DS105W. This applies to lectures, labs, and all assessed work.
You can use ChatGPT, Google Gemini, Claude, Notebook LM, Microsoft Copilot, GitHub Copilot, Grammarly AI, or any similar tool. There is no restriction on which tools you use or how often you use them.
This course adopts LSE Position 3: fully authorised use of generative AI in assessment.
Source: School position on generative AI, LSE website (in effect since September 2024)
What we expect from you
Share your AI chat logs with your submissions.
When you start working on an assignment, open a fresh chat window and use it for all your questions about that assignment. When you submit, include the chat log link in one of your notebooks. Most AI tools (ChatGPT, Gemini, Claude) let you share a link to your conversation history.
Not sharing does not cost you marks. There is no penalty for choosing not to share, or for not using AI at all.
However, sharing your chat logs gives us something valuable: a window into your learning process. If we notice that your code or analysis suggests over-reliance on AI (for example, using techniques you can’t explain, or patterns that don’t match what we taught), your chat log lets us give you specific, constructive feedback on how to use AI more effectively as a learning tool.
Without that log, we can only flag the concern. We can’t help you improve your approach. That’s the missed opportunity.
🤖 In short: sharing your AI chat logs is not about surveillance. It’s about giving us the context to support you better.
GenAI tools are not academic sources. Do not cite them the way you would cite a book or paper. They are tools you used, not authorities you consulted.
Our bespoke Claude project
We have created a dedicated Claude project for DS105W students that provides AI assistance specifically tailored to your course. This project:
- Knows your course context: It understands what you’ve learned so far and what you’re working on now
- Respects learning boundaries: It won’t give you solutions to future week content or bypass the learning process
- Provides guided assistance: It asks you questions to help you think through problems rather than just giving answers
The project is particularly useful for getting unstuck on coding problems (after you’ve tried yourself first), understanding concepts through analogies, or getting feedback on your approach to assignments.
💡 How to use it effectively: Start by explaining what you’ve already tried and what you’re thinking. The more context you provide, the better it can help you learn rather than just solve the problem for you.
Building your AI skills
If you want to develop a deeper understanding of how to use AI tools effectively (not just for this course, but for your career), LSE offers a self-paced Moodle course called AI Fluency, based on Anthropic’s curriculum.
📚 AI Fluency (self-paced, on Moodle)
It covers prompt engineering, understanding model limitations, and strategies for integrating AI into your workflow. Completing it is entirely optional but worthwhile.
Why we care about this
The LSE Data Science Institute has been studying the impact of generative AI on education since Summer 2023 through the GENIAL project (project page).
What we’ve found so far:
Students who stayed in control of their learning gave AI tools detailed context, checked outputs against course materials, and rejected suggestions that didn’t match best practices. They used AI to accelerate work they already understood.
Students who lost control asked AI to generate code for topics they hadn’t yet grasped. The AI produced code that appeared to work but was incorrect, overly complex, or impossible to edit. These students felt confident but couldn’t explain or modify what they’d submitted.
The pattern is consistent: AI amplifies whatever you bring to it. If you understand the material, AI makes you faster. If you don’t, AI makes you feel like you do.
Read more in our published research:
Cardoso-Silva, J. et al. (2025) “Mapping Student-GenAI Interactions onto Experiential Learning: The GENIAL Framework”, Preprint.
Sallai, D. et al. (2024) “Approach Generative AI Tools Proactively or Risk Bypassing the Learning Process in Higher Education”, LSE Public Policy Review, 3(3).
How we use GenAI in this course
When creating material for the course:
- After Jon devises a plan of what to teach in a particular week or session, he drafts the headings and subheadings of the lecture notes in Cursor (or VS Code with the GitHub Copilot extension enabled). Very frequently, the AI autocompletes something close to what he already wanted to say, so he hits ‘Tab’ and lets it complete the sentence.
- If he gets stuck and can’t think of coding exercises that would help illustrate a concept, he goes to LSE’s Claude, imports his drafts into a project, and queries the tool for ideas on how to connect everything. Most of what it suggests is generic and gets dropped, but sometimes it gives him a good idea to use.
When grading your work:
- Your class teacher reads and marks your submission first, assessing it against the marking criteria and writing short notes on their impressions of your work.
- Jon then second-marks each submission. After forming his own assessment, he writes anonymised summary notes for each student (which criteria were met, where the gaps are, what mark range he is considering) and feeds those notes alongside the rubric into LSE’s Claude. Claude drafts structured feedback from those notes. Jon reviews and edits the draft before it reaches you.
- No student work is uploaded into any AI tool. Only Jon’s own summary notes enter the system.
- Although we may use Claude to polish the wording of the feedback, all marking decisions are made by the teaching team.
