DS105 2025-2026 Autumn Term

🤖 Generative AI Policy

Using AI Tools in DS105A

Author

Dr Jon Cardoso-Silva


🎯 Our Position
We adopt LSE Position 3: Full authorised use of generative AI in assessment. This page explains our policy, guidelines for responsible use, and how we integrate AI tools into your learning experience.

Since ChatGPT came out in November 2022, teachers and experts have been thinking about how it might affect tests and assignments. Some worry that using AI for answers amounts to cheating and think these tools should be banned. Others think AI is just another tool and should be allowed (Lau and Guo 2023). There’s no agreement on this, so each college or university has to figure it out for itself.

Because of these different opinions, the official rule at LSE is to let each department and course leader choose whether to allow AI tools in tests and assignments.

There are three official positions at LSE:

Position 1: No authorised use of generative AI in assessment. (Unless your Department or course convenor indicates otherwise, the use of AI tools for grammar and spell-checking is not included in the full prohibition under Position 1.)

Position 2: Limited authorised use of generative AI in assessment.

Position 3: Full authorised use of generative AI in assessment.
👉 This is the position we adopt in this course

Source: School position on generative AI, LSE website (in effect since September 2024)

Our policy

We subscribe to Position 3, which allows the full authorised use of generative AI in assessments. However, because there are risks to your learning associated with the use of these tools, we have some guidelines to help you use these tools responsibly.

This means you can use generative AI tools (GenAI) during lectures, labs, and assessments, subject to the guidelines below.

  1. You can use generative AI tools such as ChatGPT, Google Gemini, Claude, Notebook LM, Microsoft Copilot, GitHub Copilot, Grammarly AI, DALL·E, Midjourney, Microsoft Designer or similar during lectures, labs, and assessments.

  2. In particular for assessments, you must acknowledge the use of generative AI tools in your submission. This should identify the tool(s), and describe what you used it for and to what extent.

    💡 TIP: If you’re using ChatGPT or Gemini (or another similar AI service that lets you share a link to your chat history), you can avoid writing a detailed explanation of how you used AI if you are disciplined from the start. When you begin working on an assignment, open a new chat window on ChatGPT or Gemini, and use that chat for all your questions about the assignment. Then, include the link to the chat history in your submission.

    The point of this acknowledgement is not punitive. We want to spot early on when GenAI is negatively influencing your learning, so we can help you improve.

  3. In code-related assessments, if you did not share your chat history, you should specify which tools you used, for what purpose, and to what extent. For example:

    In Task 1, I used ChatGPT to create the skeleton of the function, then I edited the code myself to fix a problem with a variable that did not exist in the dataset. In Task 2, I typed a Python comment and let GitHub Copilot generate the code. The code worked, and it helped me realise what I had to do, but it didn’t follow the ‘no loops’ rule we learned in class. I then edited the code myself to fix this issue.

  4. In written assessments, such as an essay, report, or Jupyter Notebook, you must include a statement at the end of your submission stating precisely how you used generative AI tools. We expect you to be honest and transparent about your use of these tools, and as precise as possible. Here is an example:

    I used ChatGPT to come up with an outline of my text, but it was too generic. I added a few paragraphs myself to make it more specific to the topic and then used the Grammarly AI to connect sentences for better readability. I didn’t accept any ‘facts’ or ‘arguments’ suggested directly by ChatGPT or Grammarly AI. I only used it to improve the structure and readability of my text.

  5. GenAI tools are not sources to be cited in the same manner as human-authored sources (books, papers, academic articles, etc.).

🗒️ Good practices for using Generative AI tools

  • DO use GenAI to personalise your learning experience. For example, you can ask it to help you better understand a concept by using an analogy related to an activity you enjoy:

    “Help me better understand the concept of data types in Python using an analogy that involves an activity I like: cooking.”

  • AVOID using GenAI when you are asked questions to check your understanding or when asked to write down your thoughts and opinions. The entire purpose of these exercises is to help you think independently – don’t delegate that to a machine!

  • DO use GenAI to produce code if you already know what you want to achieve and are confident in your coding skills to review and edit the code produced.

  • AVOID using GenAI to create the first draft of code or an essay if you’re unsure where to start. The tool might make the code unnecessarily complex or fail to follow standard practices. For essays, it could generate text that seems plausible but makes it obvious to an expert that no real thinking has been done.

  • AVOID asking GenAI questions about topics you barely understand if you don’t have the time or the skills to fact-check the response. AI chatbots can generate plausible-sounding answers that are often wrong (in other words, “bullshit”).

  • ALWAYS give GenAI a lot of context. For example, if you use a tool like NotebookLM, you can attach the public links to the course materials you are studying and ask for a starting point for a task based on that content. On more traditional, chat-based tools, be sure to provide plenty of context about what you’ve learned so far in this course.

Our bespoke Claude project

We have created a dedicated Claude project for DS105A students that provides AI assistance specifically tailored to your course. This project:

  • Knows your course context: It understands what you’ve learned so far and what you’re working on now
  • Respects learning boundaries: It won’t give you solutions to future week content or bypass the learning process
  • Provides guided assistance: It asks you questions to help you think through problems rather than just giving answers
  • Maintains course voice: It responds in the same conversational, supportive tone as your course materials

The Claude project is designed to support your learning journey while ensuring you develop genuine understanding of the concepts. It’s particularly useful for:

  • Getting unstuck on coding problems (after you’ve tried yourself first)
  • Understanding concepts through analogies and examples
  • Getting feedback on your approach to assignments
  • Exploring additional challenges when you’ve completed the core exercises

💡 How to use it effectively: When you ask the Claude project for help, always start by explaining what you’ve already tried and what you’re thinking. The more context you provide, the better it can help you learn rather than just solve the problem for you.

Our position

The LSE Data Science Institute has been studying the impact of generative AI on education since Summer 2023, when we launched the GENIAL project. You can read more about it on the project page.

What we have learned so far:

Although we have not yet fully analysed all the data, it is fair to summarise the good and bad aspects of using generative AI tools in education in the following way:

  • Good: The students who made the most resourceful use of GenAI remained in control of their learning. They often gave the chatbots a lot of context (“I want to perform web scraping of this website with the library scrapy, the code must contain functions – no classes – and I want to save the data in a CSV file.”) and would always check the code/output generated by GenAI against the course materials or reputable sources. They were able to identify when the AI was suggesting something that was not correct or not following best practices and would never blindly accept the AI’s suggestions.

  • Bad: If you don’t master a subject, GenAI can make you feel like you do. This pattern was frequent, for example, among students who had gaps in their understanding of programming concepts. They would ask the AI to generate code for them, and the AI would produce code that seemed to work but produced incorrect results, or was so complex it was virtually impossible to edit.

Read more about it in our preprint:

Dorottya Sallai, Jonathan Cardoso-Silva, Marcos E. Barreto, Francesca Panero, Ghita Berrada, and Sara Luxmoore. “Approach Generative AI Tools Proactively or Risk Bypassing the Learning Process in Higher Education”, Preprint, July 2024.

How I use GenAI in this course

When creating material for the course:

  • After I devise a plan for what I want to teach in a particular week or session, I draft the headings and subheadings of the lecture notes myself in VS Code, with the GitHub Copilot extension enabled. Very frequently, the AI autocompletes something close to what I already wanted to say, so I hit ‘Tab’ and let it complete the sentence.
  • If I get stuck and I can’t think of any coding exercises that would help me illustrate a concept, I go to NotebookLM, import my drafts and query the tool for ideas on how to connect everything. Most of it is generic and I drop it, but sometimes it gives me a good idea that I can use.
  • Once I have finished the draft, I run it through Grammarly AI to improve readability and coherence. I typically highlight a paragraph and ask it to ‘Improve but keep it conversational, jargon-free’. I then review the changes and accept them if they make sense. It is important to me that the text remains accessible to all students, regardless of their background, while keeping my ‘voice’ in the text.

When grading your work:

  • I don’t upload your work to any commercial AI service.
  • I don’t trust current GenAI tools to understand the type of feedback I want to write. I find they generate more rework than they save me time.
  • But once I write my feedback comments, I tend to run them through Grammarly AI to improve readability and coherence.
  • Not AI but related: In some cases, we use autograding tools to help us check if your code is working as expected. These tools are not AI but are automated scripts that run your code against a set of tests. They are not perfect, but they help us identify common mistakes quickly.
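
To give you a sense of what autograding involves, here is a minimal sketch of the kind of test an autograder might run against your code. The function name `count_rows` and the sample data are purely hypothetical, not taken from any actual DS105A assignment:

```python
# Hypothetical student-submitted function that an autograder would import.
def count_rows(records):
    """Count the number of records without using an explicit loop."""
    return len(records)


# The autograder runs test functions like these (typically with pytest)
# and reports which ones pass or fail.
def test_count_rows_returns_correct_total():
    sample = [{"id": 1}, {"id": 2}, {"id": 3}]
    assert count_rows(sample) == 3


def test_count_rows_handles_empty_input():
    assert count_rows([]) == 0


test_count_rows_returns_correct_total()
test_count_rows_handles_empty_input()
print("All checks passed.")
```

Because the tests only check inputs against expected outputs, they can confirm that your code behaves correctly without making any judgement about its style or your reasoning.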

References

Lau, Sam, and Philip Guo. 2023. “From "Ban It Till We Understand It" to "Resistance Is Futile": How University Programming Instructors Plan to Adapt as More Students Use AI Code Generation and Explanation Tools Such as ChatGPT and GitHub Copilot.” In Proceedings of the 2023 ACM Conference on International Computing Education Research V.1, 106–21. Chicago IL USA: ACM. https://doi.org/10.1145/3568813.3600138.