Critical and Responsible AI Integration for Undergraduates: A case study from the Teaching and Learning Exploratory Fund

This presentation introduced the AI Teaching and Learning Exploratory Fund – an initiative launched in 2024 by the Centre for Teaching and Learning in collaboration with the AI and Machine Learning Competency Centre to support departments at Oxford in trialling generative AI for use in teaching, learning, and academic administration. We shared insights from one of the Fund’s projects, AI Skills for Learning, which focused on equipping Astrophoria Foundation Year (AFY) students in Chemistry, Engineering, and Materials Science with the skills to use ChatGPT as a personalised learning tool.

Our focus in the presentation was both institutional and practical. We outlined the structure and purpose of the Exploratory Fund, highlighting how it encouraged experimentation while aligning with university policy, ethical guidance, and the Russell Group principles on AI in education. The Fund supported twelve projects in 2024–25 with technical and pedagogical expertise from the Competency Centre and the Centre for Teaching and Learning. All projects aimed to deliver not only innovation but also valuable insights into appropriate AI use.

We then used the Astrophoria Foundation Year AI Skills for Learning pilot as an example, during which ChatGPT Edu licences and structured training were provided to both a cohort of foundation year students and some of their tutors on the programme. Students were taught to use the AI critically: not to have it complete work for them, but to treat it as a tutor, study companion, and tool for generating personalised practice materials. AFY tutors were trained in using ChatGPT Edu so that they could evaluate the accuracy of AI-generated responses in their subject area and be better equipped to support student use. Over the course of the project, participants' confidence in using AI tools increased, and they adopted increasingly sophisticated prompting strategies.

Crucially, this was not uncritical adoption. We shared candid feedback from students on both the benefits and pitfalls of AI. The training, combined with hands-on experience of ChatGPT Edu, taught students to be cautious of inaccuracies, biases in feedback, and the risk of over-reliance; they also developed practical workarounds to mitigate the AI's tendency to agree too readily. Tutors, for their part, learned how the AI's style of response can inadvertently lead students into over-reliance, and found practical ways to use the tool to reduce their own administrative workload. Discussions helped students identify the skills they were trying to acquire through their studies and develop strategies to prevent the AI from replacing the practice of those skills.

The presentation ended by highlighting lessons for implementation: how to scaffold AI literacy in ways that empower students without replacing intellectual effort, and why co-developing training with students and tutors led to more responsible, realistic AI adoption. We argued that students will use – and indeed are already using – generative AI tools whether we like it or not, and so the university must equip them to use these tools thoughtfully and critically.

By showcasing both the Exploratory Fund and the AFY case study, this presentation contributed a grounded, evidence-based perspective on what responsible AI implementation could look like in practice, particularly in undergraduate STEM education.

View the full presentation here:

Webb-Davies, K., & Quarrell, R. (2025, September 29). Critical and Responsible AI Integration for Undergraduates: A case study from the Teaching and Learning Exploratory Fund. AIEOU Inaugural Conference, University of Oxford. Zenodo. https://doi.org/10.5281/zenodo.17227494