Webinar: The AI Personas



 

Five Personas for AI literacy: A Student-Centred Approach to Generative AI

Helping students navigate AI tools thoughtfully rather than fearfully

By Kelly Webb-Davies

A recent HEPI Student Generative AI Survey showed that 92% of students now use AI and 67% believe good AI skills are essential to thrive in today's world, yet worryingly only 36% feel their institution is supporting them to build those skills. These results highlight an uncomfortable paradox: while we debate whether students should use AI, they're already integrating it into their learning and work processes, often without the guidance they very much need. 

The question isn't whether to allow AI use, but how to teach students to use it thoughtfully and effectively. This aligns with the Russell Group's first principle that "universities will support students and staff to become AI-literate," recognising that institutions have a responsibility to equip students with the knowledge and skills needed to use these tools appropriately throughout their studies and future careers. 

Rethinking AI Literacy 

I don’t think that AI literacy is about mastering the latest ChatGPT features or becoming prompt engineers. At its core, it's about developing the critical thinking skills that have always been essential in education: knowing when and how to use a tool effectively, recognising when not to use it, evaluating sources, identifying bias, and communicating clearly. 

These challenges aren't new. They've simply been magnified by AI's capabilities. Students have always needed to verify information, avoid misinformation, and evaluate the reliability of their sources. The rise of generative AI makes these skills even more urgent, not obsolete. 

A Framework for Student Success: Meet the Five Personas 

I designed the personas to offer a simple, memorable framework that helps students decide when and how to use AI responsibly. Rather than overwhelming students with lengthy policy documents or endless rules that may not apply to every use case and are easily forgotten, each of the five personas represents a different approach to using generative AI: 

The Stranger: Understanding Source Reliability 

Imagine meeting someone knowledgeable in a pub who confidently discusses your field of study. You might have a fascinating conversation, but would you cite "Pub Stranger, 2025" in your academic work? Of course not. You'd verify their insights through legitimate sources. 

This persona helps students understand that AI, much like Wikipedia when educators first worried about it, can be a useful starting point but should never be a final authority. The critical skill is checking AI-generated information and following it to properly researched, verifiable academic work that can be evaluated and cited. 

The Intern: The Importance of Context and Oversight 

Think of AI as a 24/7 intern with impressive capabilities but significant limitations. They can transcribe audio, generate examples, create visuals, and help with coding, amongst other skills. However, they're new, lack context about your specific needs, and make mistakes. 

Like any intern's work, AI output requires expert review, and accountability remains with the person instructing it: you. The Intern persona teaches students to provide clear context, check outputs carefully, and never submit AI-assisted work without thorough review.

The Translator: Linguistic Accessibility 

It should not be surprising that large language models excel at translation. For example, when students encounter academic research written in complex language, AI can simplify the academic English while preserving the concepts. This can assist learning by linguistically scaffolding understanding without losing the core information. 

AI can also transform content across language modes: converting text into podcasts, turning data into visual representations, or restructuring complex texts to suit different learning preferences. This multimodal flexibility ensures students can access academic content in formats that work for their individual needs. 

One of the most common uses of AI is in the writing process, because it can help students express their own ideas more clearly, which is particularly valuable for those facing linguistic barriers or learning differences. As long as the intellectual work remains with the student, responsible AI use can ensure language itself doesn't become a barrier to learning. 

The Tutor: Enhancing Learning 

A simple rule of thumb: are you asking AI something you would ask a human tutor? For example: Can you explain this concept? Can you create a practice quiz? Can you help me identify errors in my reasoning? These uses are more likely to support learning rather than replace it. 

What wouldn't you ask a tutor? To rewrite your essay or complete assignments for you. This distinction helps students understand how AI can enhance their learning process without undermining it. 

The People Pleaser: Seeking Critical Challenge (New in Version 2)¹ 

AI models are trained to be helpful and agreeable, often telling users what they want to hear rather than what they need to hear. Like a well-meaning friend who avoids difficult conversations, AI can reinforce existing biases and assumptions rather than challenging them. 

Students should actively seek critique, counterarguments, and challenges from AI rather than just validation. Ask it to question you, poke holes in your argument, suggest alternative perspectives, or identify weaknesses in your reasoning. This persona reminds students that growth comes from intellectual challenge, not comfortable agreement. The goal is to use AI's tendency toward agreeableness strategically by explicitly requesting the critical feedback it might otherwise avoid providing. 

Not Just Rules: Building Critical Thinking 

These personas work because they're memorable, flexible, and focused on developing judgment rather than following rigid rules. They can be applied across disciplines and educational levels, providing a framework that grows with students rather than restricting them. 

The approach recognises that students bring diverse linguistic backgrounds, learning styles, and challenges to their education. Rather than penalising these differences, we can use AI as a tool for inclusion, helping students overcome barriers while maintaining academic rigour and integrity. 

The Accessibility Imperative 

For students with ADHD, dyslexia, or other learning differences, traditional writing processes can create significant barriers to demonstrating their knowledge. For students who speak English as an additional language, or who speak stigmatised varieties of English, the additional cognitive effort of expressing information in academic English can overshadow brilliant ideas. AI offers these students tools to participate more fully in academic discourse. 

This isn't about lowering standards. It's about ensuring that linguistic gatekeeping doesn't prevent us from recognising and nurturing talent. If we view AI through an accessibility lens, we can see its potential to democratise education rather than diminish it. 

Moving Forward: From Fear to Empowerment 

Effective AI education won’t fight against the inevitable. It will embrace reality while strengthening fundamental skills. Rather than viewing AI as a threat to traditional academic abilities, we could recognise it as a catalyst that makes long-standing educational priorities more urgent than ever. This means focusing on capabilities that have always been central to higher education: 

  • Teaching evaluation skills: Students need to critically assess AI outputs just as they evaluate any other source. 
  • Developing subject expertise: The better students understand their field, the better they can direct and evaluate AI assistance. 
  • Building communication skills: Effective AI use requires clear communication in natural language about goals and context. 
  • Fostering metacognitive awareness: Students need to understand their own learning processes to use AI supportively. 

The Path Ahead 

Creating effective AI guidance requires moving beyond prohibition toward education. Students need clear, effective guidance they can remember and apply across contexts, not exhaustive lists of dos and don'ts. They need to understand not just what they shouldn't do, but how to use these powerful tools responsibly and effectively. 

The AI Personas offer one approach to this challenge: a student-centred framework that acknowledges the reality of AI use while building the critical thinking skills that will serve students throughout their lives. Rather than fearing or banning this technology, we can help students approach it thoughtfully, ensuring that AI enhances rather than replaces human intelligence and creativity. 

The goal can't be to eliminate AI use; it must be to cultivate AI literacy, helping students become thoughtful, critical users of these tools who understand both their potential and their limitations. In doing so, we prepare students not just for academic success, but for a very near future where AI-assisted human work will be the norm rather than the exception. 

¹ Credit to the wonderful Sara Ratner for suggesting this additional persona after the webinar. 


This post is based on a webinar presented by Kelly Webb-Davies, AI Consultant at the University of Oxford's AI Competency Centre. The AI personas framework is available under Creative Commons licensing for adaptation in educational settings. 

You can watch the webinar recording here:

AIEOU Webinar Series - Meet the AI Personas: A student-centred framework for navigating generative AI in higher education on Vimeo

Explore the AI Personas GPT here: ChatGPT - Oxford AI Personas