🤖 Our Generative AI policy

2023/24 Autumn Term


Since ChatGPT’s release in November 2022, educators and academics have been pondering its impact in the classroom, especially regarding assessments. Some worry that students using AI to produce answers amounts to academic misconduct and argue that it should be prohibited. On the flip side, others view AI as just another tool and advocate for its use (Lau and Guo 2023). There is no consensus on the matter, and each higher education institution has to decide for itself.

At LSE, the current policy states the following:

LSE takes challenges to academic integrity and to the value of its degrees with the utmost seriousness. The School has detailed regulations and processes for ensuring academic integrity in summative work.

Unless Departments provide otherwise in guidance on the authorised use of generative AI, its use in summative and formative assessment is prohibited. Departmental Teaching Committees are strongly encouraged to define what constitutes authorised use of Generative AI tools (if any) for students taking courses in their Department. Where they do so, they must clearly communicate this to colleagues, and to students.

Source: LSE (2023) (Emphasis added)

What is Generative AI?

Generative AI refers to software capable of producing synthetic content, such as text, images, audio, or video. Machine learning algorithms power these tools, which are trained on vast datasets comprising human-created content typically scraped from the web. For example, ChatGPT generates text, having been trained on a large corpus of online data and further refined through user interaction. Similarly, DALL·E produces images, having been trained on a massive dataset of human-generated imagery.

Some arguments for using generative AI in education:

  • It’s just another tool. Students are free to use other tools, such as spell-checkers, thesauri, or even Google, so why not generative AI?
  • It’s a great learning opportunity. Students can ask the AI tools to explain why a particular answer is correct, and they can learn from it.
  • It helps to boost productivity. Claims of this kind are prevalent on social media: search for ‘ChatGPT tips’ on any platform and you will encounter productivity-boost claims from experts and non-experts alike. GitHub also claims that users of its Copilot tool feel more productive and ‘in the flow’ when using it (GitHub 2022).
  • It helps prepare students for the constructive use of these tools in the real world, e.g. the job market. Students don’t live in a vacuum or a greenhouse; they will likely have to use these tools in their daily lives or in their jobs. It would be better if they knew how to make use of them effectively, and how to question their outputs when needed.

Some arguments against using generative AI in education:

  • It’s cheating. Students should do the work themselves.
  • It’s not fair. Not all students have access to the same tools.
  • It’s not reliable. These tools can and often do produce incorrect answers, or cite sources that do not exist. Students should not rely on them.
  • It’s not transparent. Even the developers don’t fully understand how these tools work. Additionally, the companies providing these services have not been upfront about the data used to train them, nor about how they use the data they collect from users (Bender and Hanna n.d.).
  • It’s not ethical. These tools implicitly reinforce a hegemonic, imperialist worldview because of biases in their training data. Moreover, training these algorithms consumes significant amounts of energy and water, and running them also has negative environmental impacts (Bender et al. 2021).
  • It’s making students lose skills they should really gain. Excessive reliance on tools like ChatGPT or Copilot may prevent students from genuinely practising their writing and programming skills, and may give them false confidence in those abilities. As a result, we’ll be training students who cannot properly write or code!

Our position

At the LSE Data Science Institute, we typically don’t restrict the use of generative AI unless explicitly stated otherwise in the course materials for a lab, lecture, or assessment. This means students are free to explore these tools, if they want, during lectures, labs, and assessments, whether formative or summative, just as they are free to browse the Internet. Generative AI-based tools are becoming increasingly prevalent, with companies like Google and Microsoft incorporating them into their products. We believe students should have exposure to these tools and learn how to use them responsibly.

We recognise that the long-term impact and actual benefits of generative AI in education remain uncertain. However, we are committed to leading this discussion and experimenting with these technologies. We aim to learn from our students’ experiences with these tools and to assess their potential benefits as well as their drawbacks. To this end, we have initiated a study known as the GENIAL project; you can find more information on the GENIAL project page. As we progress with this project, we will update our policy accordingly.

We have redesigned many of our assessments to lower the chances of obtaining a high grade exclusively by relying on ChatGPT or similar AI-generated content. These assessments now include elements directly linked to course materials, requiring students to demonstrate engagement with the concepts taught in the course. Some assessments also call for students to exercise autonomy and creativity, which cannot be replicated solely with generative AI tools, at least not with the tools available today.¹

Our policy

  1. You can use generative AI tools such as Grammarly, ChatGPT, GitHub Copilot, Bard, Bing AI, DALL·E, Midjourney, or similar during lectures, labs, and assessments, unless it’s explicitly stated otherwise in the course materials.

  2. In particular, for assessments, you must acknowledge the use of generative AI tools in your submission. This acknowledgement should identify the tool(s) and describe what you used them for and to what extent.

  3. For a written assessment, such as an essay or an exam, you must include a statement at the end of your submission stating precisely how you used generative AI tools. We expect you to be honest and transparent about your use of these tools, and as precise as possible. Here are some examples:

    I used ChatGPT to come up with an outline of the essay, but then I filled in the details myself.

    I got Grammarly to check my essay for spelling and consistency. I then re-edited the problematic parts myself.

    I wrote a first draft of my answer to the question and then used ChatGPT to ‘critique’ it. The tool returned some good points, but since it also invented references to articles that do not exist, I had to edit my answer to fix these issues. After that, I used Grammarly to check for spelling and consistency and re-edited the problematic parts myself.

  4. ChatGPT is not a source to be cited in the same manner as human-authored sources (books, papers, academic articles, etc.).

  5. In code-related assessments, you should specify what tools were used, for what purpose, and to what extent (a short code sketch illustrating the kind of edit described here follows these examples). For example:

    I used GitHub Copilot to write the function in Question X; it created a good skeleton, but it produced the wrong output. I then had to edit the code myself to fix the issue.

    I used ChatGPT to provide an initial solution to Question X, but although the code worked, it did not meet the standards of vectorisation taught in the course. I then had to edit the code myself to fix the issue.

    I used a mix of ChatGPT and Copilot. I first got ChatGPT to generate a solution to Question X; it wasn’t good, but it gave me some ideas. I then used Copilot to write the code, but it produced some R code that referred to a very old version of the dplyr package, which I fixed myself. Next, I noticed the code was using base R instead of the tidyverse functions taught in the course, so I edited it again. Finally, I also had to rewrite the modelling parts myself, as the tool could not produce valid tidymodels code (as taught in the course).
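
To make the last example concrete, here is a minimal, hypothetical R sketch of the kind of edit it describes: a base-R aggregation loop of the sort a code assistant might generate, followed by a vectorised tidyverse rewrite. The sales data frame and its columns are invented purely for illustration and are not from any actual assessment.

    # A minimal sketch with hypothetical toy data, for illustration only
    library(dplyr)

    sales <- data.frame(
      region = c("North", "South", "North", "South"),
      amount = c(100, 250, 300, 150)
    )

    # The kind of base-R loop a code assistant might produce:
    totals <- c()
    for (r in unique(sales$region)) {
      totals <- c(totals, sum(sales$amount[sales$region == r]))
    }

    # A vectorised, tidyverse-style rewrite:
    totals_tbl <- sales |>
      group_by(region) |>
      summarise(total = sum(amount))

Both versions compute the same per-region totals; the acknowledgement statement would then record that the rewrite from the first form to the second was done by hand.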

Important

In sum, we do not consider the use of generative AI tools an offence against the LSE Code of Academic Conduct. However, failing to follow the guidelines above is an offence: we will treat any violation of them as a breach of the LSE Code of Academic Conduct.

References

Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜.” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. Virtual Event, Canada: ACM. https://doi.org/10.1145/3442188.3445922.
Bender, Emily M., and Alex Hanna. n.d. “Mystery AI Hype Theater 3000.” Podcast. Accessed September 22, 2023. https://www.buzzsprout.com/2126417.
GitHub. 2022. “Research: Quantifying GitHub Copilot’s Impact on Developer Productivity and Happiness.” https://github.blog/2022-09-07-research-quantifying-github-copilots-impact-on-developer-productivity-and-happiness/.
Lau, Sam, and Philip Guo. 2023. “From "Ban It Till We Understand It" to "Resistance Is Futile": How University Programming Instructors Plan to Adapt as More Students Use AI Code Generation and Explanation Tools Such as ChatGPT and GitHub Copilot.” In Proceedings of the 2023 ACM Conference on International Computing Education Research V.1, 106–21. Chicago, IL, USA: ACM. https://doi.org/10.1145/3568813.3600138.
LSE. 2023. “LSE Short-Term Guidance for Teachers on Artificial Intelligence, Assessment and Academic Integrity in Preparation for the 2022-23 Assessment Period.” https://info.lse.ac.uk/staff/divisions/Eden-Centre/Assets-EC/Documents/AI-web-expansion-Feb-23/Updated-Guidance-for-staff-on-AI-A-AI-March-15-2023.Final.pdf.

Footnotes

  1. Parts of the text on this page were created with the help of Grammarly, Copilot, ChatGPT, and a locally-run version of llama-gpt. After writing a first draft (a brain dump, really) with Copilot autocomplete and the Grammarly check both activated in VS Code, I would copy the text into llama-gpt and ‘ask it’ to revise it for grammar and consistency. llama-gpt is not as good as ChatGPT, and it sometimes worded things in ways that did not convey what I wanted, so I pasted some excerpts into ChatGPT until I was happy with the result. This final version was then revised by me (Jon), Prof. Ken Benoit, and colleagues in the GENIAL project.↩︎