GENIAL: GENerative AI Tools as a Catalyst for Learning
A Collaborative Focus Group

Context

The rise of Generative AI (GenAI) tools and their impact on teaching, learning, and assessment practices is currently a significant topic of discussion in higher education. Since November 2022, when OpenAI introduced ChatGPT, its online conversational AI chatbot, educators and students have been grappling with the capabilities of these new tools, which now include similar systems from rival technology companies such as Google’s Gemini, GitHub Copilot, Microsoft Copilot, and Anthropic’s Claude.

These tools provide personalised, instant help with tasks such as summarising literature, brainstorming ideas, and writing code and text, although they have known limitations in transparency and accuracy.

As educators, we saw the potential of these tools and wanted to consider how best to incorporate them into our teaching and assessments to support our students. To do this effectively, we set out to explore their practical applications and to understand how they might specifically enhance programming skills and critical thinking. Our aim was to fill a knowledge gap and obtain insights that could give valuable direction to how we approach education in light of these new technologies.

This study is led by Dr Marcos Barreto (LSE Department of Statistics) and Dr Jon Cardoso-Silva (LSE Data Science Institute).

Project

The objective of the GENIAL (Generative AI Tools as a Catalyst for Learning) study was to explore how university students in full-time undergraduate and postgraduate courses use popular GenAI tools in their learning and assessment.

The project initially launched as a small focus group initiative in June 2023, evaluating the efficacy of code generation tools. Over the 2023–2024 academic year, as interest in the field grew, the initiative evolved into a multidisciplinary research project investigating the learning behaviours of around 220 students in four undergraduate and three postgraduate courses, spanning quantitative and qualitative subjects. The courses evaluated ran in the autumn and winter terms of 2023–2024 in the LSE Departments of Statistics, Management, and Public Policy and in the Data Science Institute.

We used various data collection methods to gather reliable, high-quality data. During the first term, we ran a survey at the end of dedicated in-class activities, during which students were asked to work independently and to use the chatbots as an aid. In the second term, our data collection was no longer restricted to the use of chatbots in class: we expanded our efforts to include surveys and focus groups, and each week we asked participants to share chat logs related to their learning and participation in the course, both in and out of the classroom. We also obtained students’ assignment submissions and chat logs.

Findings

The project found that although GenAI tools can be very helpful learning aids for some students, the growing over-reliance of higher education students on these tools for learning and assessment risks circumventing rather than enhancing the learning process. The biggest pedagogical challenge is that students may use the tools as a replacement for their own learning and critical skills.

We argue that students may rely on GenAI differently for learning and for assessments, and that they tend to focus more on the output or performance than on the learning journey itself. We also observed that some students use GenAI platforms as a substitute for learning rather than as a tool to enhance learning.

Our findings raise questions about how GenAI can be successfully integrated into the curriculum without jeopardising learning. They have led to the development of policy recommendations on curriculum planning and assessment design, so that educators can adapt to these challenges and incorporate GenAI as an aid to learning.

Go to the 📃 Outputs page to learn more about our findings.