AI for Inclusion: Challenging the Design of Educational AI through Equity & Disability Perspectives

Artificial intelligence is often positioned as a tool for educational inclusion, yet without intentional design, it frequently reinforces existing inequalities. Drawing from lived experience and global case studies, this session explores how AI can genuinely support inclusion from equity and disability perspectives. Educational AI tools are often developed without input from those most affected by exclusion: learners with disabilities, students in under-resourced regions, and diverse linguistic communities. We propose human-centered, context-aware systems that accommodate diverse learning approaches. Participants will receive a practical framework for conducting 'inclusion audits' to identify exclusionary patterns and implement equity-centered alternatives.

The Promise Versus Reality

The educational technology sector promotes AI as a democratizing force that will personalize learning, break down barriers, and create equal opportunities for all students regardless of background or ability. However, the reality in classrooms reveals a troubling disconnect between these aspirations and actual outcomes. Many AI systems perpetuate bias, exclude marginalized voices, and create new forms of digital inequality rather than addressing existing educational disparities. Office for Students data shows that disabled students make up 20% of England's student population yet consistently report worse educational experiences than their non-disabled peers. This statistic alone should prompt serious reflection about how our technological solutions are serving, or failing to serve, significant portions of our student communities.

Missing Voices in Design

The root of exclusionary AI lies in who is absent from design processes. Students with disabilities are often excluded from initial development phases, leading to interfaces that cannot accommodate screen readers or diverse learning needs. Multilingual learners find themselves confronting AI systems trained predominantly on English datasets that struggle with accents, dialects, and cultural contexts. Rural communities face bandwidth-heavy solutions designed without consideration for infrastructure limitations or local educational contexts. This systematic exclusion is not mere oversight; it represents a fundamental failure to prioritize inclusive design principles from the outset. When educational AI development proceeds without meaningful consultation with affected communities, the resulting tools inevitably replicate and amplify existing educational inequities.

Real-World Consequences

Recent research provides stark evidence of these failures. Stanford studies reveal that AI plagiarism detection tools falsely flag non-native English speakers as cheating over 60% of the time, leading to unfair academic penalties for international students who are already navigating language barriers. Meanwhile, a 2024 study published in AERA Open found that university admission algorithms systematically underestimate Black and Hispanic student success while overestimating outcomes for other demographic groups. These examples represent thousands of students whose educational potential is being limited not by their abilities or circumstances, but by algorithmic assumptions embedded in supposedly neutral systems. Each false positive in plagiarism detection, each biased prediction in admissions algorithms, represents a human cost that extends far beyond individual educational experiences.

Inclusive Design

Genuine inclusion in educational AI requires fundamental shifts in design philosophy. Human-centered approaches must prioritize dignity, agency, and diverse learning needs over technical efficiency or market considerations. Context-aware systems need deep understanding of infrastructure limitations, linguistic diversity, and local educational practices. Participatory design processes must engage marginalized communities throughout development, not merely during testing phases.

The framework we propose centers on systematic inclusion audits that examine three critical dimensions. First, representation audits assess whose perspectives are present and absent from development teams and user testing processes. Second, accessibility assessments evaluate whether diverse learners can effectively use systems regardless of ability, language, or technical access. Third, impact evaluations measure outcomes across demographic groups to identify potential harm or exclusion patterns.
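One way to make the impact-evaluation dimension concrete is to compare error rates across learner groups. The sketch below is illustrative only, with fabricated data, and is not part of the authors' framework: it computes per-group false positive rates for a binary flagging tool (such as a plagiarism detector), where a large gap between groups is exactly the kind of exclusion pattern an audit should surface.

```python
from collections import defaultdict

def false_positive_rates(records):
    """Per-group false positive rates for a binary flagging tool.

    Each record is (group, flagged, actually_violated). A false positive
    is a case that was flagged despite involving no actual violation.
    """
    false_pos = defaultdict(int)  # wrongly flagged cases per group
    negatives = defaultdict(int)  # genuinely non-violating cases per group
    for group, flagged, violated in records:
        if not violated:
            negatives[group] += 1
            if flagged:
                false_pos[group] += 1
    return {g: false_pos[g] / negatives[g] for g in negatives if negatives[g]}

# Fabricated audit sample: the hypothetical tool over-flags one group.
records = [
    ("native", True, False), ("native", False, False),
    ("native", False, False), ("native", False, False),
    ("non_native", True, False), ("non_native", True, False),
    ("non_native", True, False), ("non_native", False, False),
]
rates = false_positive_rates(records)
print(rates)  # {'native': 0.25, 'non_native': 0.75}
```

In a real audit, the disparity between groups (here 0.75 versus 0.25) would be the signal to investigate the tool before deployment, rather than after students have been harmed.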

Implementation and Next Steps

For educators, this means advocating for transparency and explainability from vendors, documenting inclusion challenges, and insisting on co-design in AI procurement decisions. Developers must implement accessibility-by-design principles, ensure transparent data practices, and build user configurability from the system's foundation. Policymakers need to mandate participatory processes in public AI procurement and establish accountability mechanisms for educational AI systems.

A Call for Justice

Every AI decision in education is fundamentally a justice decision. Whether choosing procurement tools, designing systems, or setting policies, we are either advancing equity or entrenching inequality. The framework for inclusion audits is not merely best practice; it represents a moral imperative for educational technology that genuinely serves all learners. The path forward requires recognizing that inclusion cannot be retrofitted onto exclusionary systems. Instead, we must reimagine design processes, governance structures, and success metrics through the lens of justice and inclusion. Only then can educational AI fulfill its promise of genuinely democratizing learning opportunities for all students. The evidence is clear, the solutions exist, and the expertise is available. The question that remains is whether we will choose to act on our commitment to inclusive education in the age of artificial intelligence.

View the presentation in full here:

Okonkwo, S., & Schmid-Meier, C. (2025, September 17). AI for Inclusion?! Challenging the Design of Educational AI through Equity & Disability Perspectives. AIEOU Inaugural Conference, University of Oxford. Zenodo. https://doi.org/10.5281/zenodo.17226470

The content expressed here is that of the author(s) and does not necessarily reflect the position of the website owner. All content provided is shared in the spirit of knowledge exchange with our AIEOU community of practice. The author(s) retains full ownership of the content, and the website owner is not responsible for any errors or omissions, nor for the ongoing availability of this information. If you wish to share or use any content you have read here, please ensure to cite the author appropriately. Thank you for respecting the author's intellectual property.