"We are really the responsible ones": Ethical University Leadership in the AI Age
As artificial intelligence (AI) tools rapidly spread across college campuses and the world, higher education leaders are asking how they can effectively and ethically navigate this technological transformation. Skilled leadership is necessary regardless of whether you are an innovator, an early adopter, or a laggard. A recent study at James Madison University (Virginia, United States) highlights that successful AI exploration and adoption requires more than technical expertise – it demands a new kind of leadership that prioritizes human relationships, ethical considerations, and stakeholder engagement. These insights move AI ethics leadership beyond important (but vague) calls to action and toward the specific skills necessary to provide ethical and effective technology leadership in the AI age.
Through interviews with employees in roles across the institution, we identified three main themes in conversations about the use of AI tools in higher education institutions (HEIs). The three themes pertain not to the tools themselves but to the leadership that educators and education professionals need right now: encouraging and supportive leadership, leaders who empower people and deploy empowering processes, and leaders who lead with clarity. The most insightful finding reveals what technology leadership actually looks like in the current AI landscape. While many discuss the importance of ethics or leadership, very few, if any, have made AI leadership operational in terms of the knowledge, skills, and attitudes leaders may need. This knowledge is helpful for diverse higher education leaders, with or without an IT background, as AI will spread and impact diverse functions and departments throughout HEIs.
Beyond the Tech Hype: The Human Element of AI Adoption
"Technology cannot be a stand-alone solution," explains one administrative staff member who participated in our study. "It needs humans, leadership, and a change of culture for it to be successfully implemented and used in organizations long-term."
This observation cuts to the heart of the challenge facing higher education institutions today. Of course, keeping humans involved in AI-based processes is not new. Keeping the “human in the loop,” for example, is a refrain often heard in educational technology spaces regarding AI use. It means not relying exclusively on AI for reasoning, task completion, or whatever other work you are doing with AI-based tools.
If your institution is using AI to help identify struggling students, keeping the human in the loop means not letting an AI-based tool be the sole factor identifying struggling students or the interventions they receive. Rather, institutions should have human professionals making sense of AI-based recommendations, validating them, or integrating them with other sources before acting on them. While AI-powered tools like early alert systems (EAS) – the central technology of our study – promise to improve student retention and administrative efficiency, their successful implementation depends heavily on leadership that can bridge the gap between technological capability and human needs. Technology leadership in the AI landscape may mean that human relationships are a precondition for the so-called AI/human loop. Integrating AI-based tools into higher education spaces requires leadership that remains person-centered while simultaneously exploring and experimenting with AI. Person-centered technology leadership is not novel, but this piece offers a picture of what such leadership looks like in the emerging and quickly changing AI landscape. Keeping humans in the loop is necessary, but not sufficient, for the ethical and successful exploration and integration of AI-based tools. Our study reveals that effective technology leadership in today's AI-driven environment requires three key qualities.
1. Encouraging and Supportive Leadership
Leaders must create an environment where experimentation with AI is safe and supported. This means leading by example – actually using AI tools themselves – while being transparent about successes and failures and supporting employee experimentation.
One study participant noted that, "They [our leaders] are constantly encouraging us to better ourselves, to learn more, to participate, to take whatever classes we think that we need... they are more than supportive." Employees should be learning and experimenting alongside their leaders when it comes to AI use. Knowing that leaders are experimenting with AI also provides a clear message that employees themselves will not get in trouble or risk their career through their own professional exploration.
2. Empowering People and Processes
Rather than implementing AI tools through top-down mandates, successful leaders engage stakeholders in the process and help them understand the larger purpose behind technological adoption.
"Leadership looks like teaching people, both faculty, students, and staff, not to be terrified of it," shared one study participant. "We can be much more symbiotic about it. We can be much more clever about it if we're just not afraid." Leader experimentation not only provides support and encouragement; it also communicates that employees themselves are empowered to learn, play, and not be afraid of the new technologies. Furthermore, technology leadership needs to guide institutions of higher education with new policies, infrastructure, and culture appropriate to the AI landscape. And in doing so, good leadership engages all interested and impacted parties early and often.
3. Leading with Clarity
In an environment of technological uncertainty, leaders must provide clear guidelines and policies about AI use. This is particularly crucial in academic settings where faculty and students need consistent parameters for AI use in teaching and learning. One interview participant shared,
How do we address these concerns, well, it's through transparency and through clear policies. And I keep hearing that over and over again from colleagues who are teaching at the faculty level. And they're saying, we need to have clear, established policies coming from, or even recommendations coming top down in terms of how do we deal with these AI issues in the classroom? Are they part of the honor council policies and procedures?
That faculty are asking for institutional policies and norms demonstrates how important technology leadership is in the AI landscape. When no identifiable policy or norm exists, a vacuum of potentially infinite approaches opens up, making it nearly impossible for students, staff, and faculty to ethically and effectively explore, learn about, and adopt AI-based tools.
Conclusion
"We are really the responsible ones," concluded one study participant, emphasizing the crucial role of higher education in shaping how AI will be used in society. By focusing on ethical, human-centered leadership, institutions can work toward ensuring that AI adoption enhances rather than diminishes the educational experience.
The key is developing leadership that can guide this transition while maintaining focus on higher education's core mission and values. This means creating spaces for critical discussion about AI's impact, establishing clear ethical guidelines, and ensuring that technological adoption serves rather than supersedes human needs.
Leaders must recognize that employee resistance often stems from deeper emotional and professional concerns rather than simple technological unfamiliarity. This understanding is crucial as institutions navigate the introduction of AI tools that can automate various administrative processes. Whether it's fear of skill obsolescence, concerns about professional autonomy, or anxiety about inadvertent misuse of AI tools, these apprehensions require thoughtful leadership responses that go beyond technical training.
Success lies in helping employees understand how AI can enhance rather than replace their work, while providing clear guidelines and support for its appropriate use. This includes creating safe spaces for experimentation and learning, where mistakes are viewed as learning opportunities rather than grounds for punishment. As one participant put it, leaders need to be "encouraging us to better ourselves, to learn more, to participate" in the technological transformation of higher education.
As higher education continues to navigate these transformations, the success of AI integration will likely depend less on the sophistication of the tools themselves and more on the quality of leadership guiding their adoption. The challenge ahead is not just technological – it's fundamentally human.
Author 1: Tatjana Titareva, PhD, AI Policy Lab, Umeå University, Sweden.
Author 2: Paul Mabrey, PhD, Student Success Analytics & School of Communication Studies, James Madison University, Virginia, USA.
The content expressed here is that of the author(s) and does not necessarily reflect the position of the website owner. All content provided is shared in the spirit of knowledge exchange with our AIEOU community of practice. The author(s) retains full ownership of the content, and the website owner is not responsible for any errors or omissions, nor for the ongoing availability of this information. If you wish to share or use any content you have read here, please ensure to cite the author appropriately. Thank you for respecting the author's intellectual property.
Suggested citation:
Titareva, T., & Mabrey, P. (2025, November 5). We are really the responsible ones: ethical university leadership in the AI age. AIEOU. https://aieou.web.ox.ac.uk/article/we-are-really-responsible-ones-ethical-university-leadership-ai-age