Issues to Address Before AGI Becomes Reality

The increasing role of generative AI (genAI) tools across higher education is undeniable. Their influence spans from the admissions process to research and teaching. The Chronicle of Higher Education (CHE) released the report Communicating with Students in the Age of AI (2024), based on a survey of over 800 administrators and faculty at community colleges and universities in the United States, which found that the most frequent applications of AI chatbots were in recruitment, admissions, and financial aid. In 2023, CHE published the views of 12 scholars and administrators on the beginning of the ChatGPT revolution (Chronicle of Higher Education, 2023), emphasizing how the adoption of genAI tools has been transforming instructional and administrative roles in colleges and universities.

The mass adoption of large language models (LLMs) like OpenAI’s ChatGPT, Anthropic’s Claude, Microsoft’s Copilot, Google’s Gemini, and others presents both opportunities and challenges for higher education, calling for special attention to issues such as equity, data privacy, transparency, and ethical concerns, including bias and discrimination. Additionally, genAI tools that produce new text, images, audio, and video offer a broad range of applications. However, their potential for misuse, as seen when a lawyer cited court case examples provided by ChatGPT without verifying the facts (Weiser, 2023), or in the tragic case of a 14-year-old's death following interactions with an AI chatbot (Roose, 2024), calls for responsibility in their use.

As Vegard Kolbjørnsrud highlights in his article ‘Designing the Intelligent Organization: Six Principles for Human-AI Collaboration’ (2023), AI is outgrowing its role as a mere tool and increasingly acting as a collaborator in organizational contexts. This new dynamic between humans and technology in the workplace requires thoughtful consideration. Bill Gates, in his post ‘The Age of AI has begun’ (2023), emphasized that we are only beginning to discover the potential of AI technologies. He pointed out that current limitations will soon be overcome, leading to more powerful technologies. This should remind educational leaders to prioritize student development and enhance faculty and administrative operations, ensuring that AI tools are integrated responsibly as assistants rather than replacements.

Predictions from experts like Shane Legg of DeepMind and technology futurist Ray Kurzweil point to the approaching arrival of Artificial General Intelligence (AGI) - a system with human-like intelligence that would surpass current narrow AI and most human beings at specific tasks, with the potential to outperform the best human minds. Legg estimates a 50% likelihood of achieving AGI by 2028 (TED, 2023), while Kurzweil predicts that computers will attain human-level intelligence by 2029 (Kurzweil, 2024), with the Singularity - the point at which machine and human intelligence converge - arriving by 2045 (Corbyn, 2024).

Thus, the main aim of this post is to encourage higher education institutions and their leaders to address several key areas: the role of AI technologies in teaching and learning, the ethical use of AI in content creation, and AI's potential implications for the job market. Since one development scenario for today's narrow AI envisions AGI becoming a reality in the near future, higher education institutions face the challenge of addressing these critical issues now. It is essential to engage in this conversation and approach the AI revolution with responsibility and strategic foresight.

AI in Teaching and Learning (T&L)

Oregon State University has developed an innovative update to Bloom's Taxonomy, a widely recognized educational framework for categorizing learning objectives (Oregon State University, 2024; Zaphir et al., 2024). This update integrates the traditional stages of the learning process – remember, understand, apply, analyze, evaluate, and create – with genAI’s capacity to supplement learning. For each stage, the university has added specific AI capabilities, such as defining terms, constructing chronologies and timelines, describing concepts, providing students with feedback on their learning, inferring trends, and suggesting the pros and cons of different real-life situations or actions.

Moreover, the updated taxonomy emphasizes the importance of human skills in the learning process. These skills include contextualizing AI outputs with emotional, moral, and ethical considerations, executing and testing Human-AI generated ideas in real-world scenarios, using imagination and creativity in developed solutions, engaging in metacognitive reflections, evaluating ethical considerations holistically, and formulating final solutions based on human judgment.

This integration of AI capabilities into Bloom's Taxonomy represents a significant step in recognizing the evolving landscape of learning and the role of technology in it. By combining genAI's analytical and processing strengths with human cognitive, ethical, and creative abilities, this updated framework offers a more comprehensive and relevant approach to learning in an Industry 4.0 era dominated by AI, robotics, and automation.

Additionally, the United Nations Educational, Scientific and Cultural Organization (UNESCO) has developed an AI competency framework for teachers (2024), relevant to educators in both K-12 and higher education (HE) settings (Miao and Cukurova, 2024). The proposed framework includes five key areas: Human-Centered Mindset, focusing on integrating AI while prioritizing human values and needs; Ethics of AI, addressing ethical aspects to ensure responsible use; AI Foundations and Applications, covering basic principles and the diverse AI tools used in educational contexts; AI Pedagogy, exploring effective methods of teaching and learning with AI technologies; and AI for Professional Development, using AI to support educators' continuous learning and development.

The UNESCO framework progresses through stages of Acquisition (initial learning and understanding), Deepening (advanced knowledge and application), and Creation (developing new AI-driven methodologies and tools). This structure aims to equip educators with the necessary skills and knowledge to effectively integrate AI tools into their teaching practices, ensuring they are prepared for the evolving educational landscape.

The application of AI in Teaching and Learning (T&L) is not a matter of if, but rather how, as these tools are widely available to both educators and students. It is critical to ensure safety, maintain a non-biased approach, and consider ethical factors when integrating various AI-based tools into the T&L process.

Transparent Use of AI in Content Creation in Academic Research

Since November 2022, the landscape of student academic work has experienced a significant shift with the mass introduction of LLMs like ChatGPT. These tools can assist in generating academic paper titles and outlines, identifying keywords, developing code for article retrieval, summarizing academic articles, analyzing data, and producing first rough drafts of the main components of academic research.
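As an illustration of the "code for article retrieval" mentioned above, consider a minimal sketch of the kind of script an LLM might draft on request. It queries the public arXiv API (a real, free endpoint); the helper function name and the example search terms are illustrative assumptions, not part of any cited source.

```python
from urllib.parse import urlencode

# Public arXiv Atom-feed search endpoint (no API key required)
ARXIV_API = "http://export.arxiv.org/api/query"

def build_arxiv_query(terms, start=0, max_results=5):
    """Build an arXiv search URL (hypothetical helper, for illustration only)."""
    params = {
        # Join terms so every term must match in any metadata field
        "search_query": " AND ".join(f"all:{t}" for t in terms),
        "start": start,
        "max_results": max_results,
    }
    return f"{ARXIV_API}?{urlencode(params)}"

# Example: look up articles on generative AI in higher education
url = build_arxiv_query(["generative AI", "higher education"])
# The URL returns Atom XML, which can be fetched with urllib.request.urlopen(url)
# and parsed with xml.etree.ElementTree to extract titles, authors, and abstracts.
```

A student could ask an LLM for exactly this kind of scaffold and then verify and extend it, which is itself a useful exercise in the transparent, supervised use discussed below.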

The human effort required in this process, particularly for those proficient in LLMs and prompt engineering (McKinsey & Company, 2024), has decreased. The traditional demands on graduate students, such as extensive reading, analyzing large amounts of data, sustained thinking and writing, and searching for innovation, have been significantly reduced. LLMs' capabilities are rapidly evolving. While in spring 2023 professors noted that ChatGPT struggled to produce proper citations, by fall of the same year the tool had improved, generating real and more accurate citations and references when provided with precise prompts (though still prone to hallucinations). Moreover, AI literacy experts like Ethan Mollick regularly demonstrate how LLMs can handle numerous tasks in research, test taking, and take-home assignments (Co-Intelligence: AI in the Classroom with Ethan Mollick, 2024), marking a new era in LLM precision (One Useful Thing, 2024).

However, the primary concern is not whether students and researchers should use LLMs - completely banning them seems impractical and unlikely to be a long-term solution - but rather how and to what extent they should be used. As one respondent to the Chronicle of Higher Education’s survey ‘Communicating with Students in the Age of AI’ (2024) stated, ‘Students should be clear on when they use AI, why they are using it, and how they are using it.’ This starts with honest conversations between faculty and students. Initial practices could include asking students to keep records, such as screenshots of the prompts used, and to show their own input into the final product. Instead of shaming students for using freely available tools, trust and transparency among educational stakeholders could lead to a better understanding of when, how, and whether students are encouraged to use the tools for certain tasks. This way, both sides will learn what is and is not useful in students' learning and self-development.

AI and the Job Market

We as educators should be aware of how the AI Revolution might affect students' future job prospects. McKinsey & Company's report (2023) on generative AI's economic potential has shown that, unlike previous automation technologies that primarily impacted lower-skilled workers, generative AI is expected to have a greater effect on more-educated workers, automating some of their activities (Exhibits 12 and 13 in the report). This shift challenges the traditional reliance on multiyear university degree programs as the primary indicator of skills, supporting a more skills-based approach to workforce development. Previous automation mostly affected middle-income jobs, but genAI is set to transform the work of higher-wage knowledge workers, whose tasks were once considered safe from automation. This change in the landscape of work automation, moving from lower- and middle-income roles to higher-wage knowledge positions, calls for a significant shift in the skills and training required for future employment.

Additionally, Goldman Sachs' report (2023) suggests that genAI could automate up to 300 million full-time jobs, highlighting the potential for substantial job displacement due to AI tools’ adoption. This prospect of extensive job loss has been a recurrent theme in the media, raising concerns in societies around the world. In support, Siau (2017) argues that professions with well-established procedures and policies are ready for automation. He believes that anything that can be automated will eventually be automated.

Walter Frick, in his Harvard Business Review article ‘AI Is Making Economists Rethink the Story of Automation’ (2024), states that economists are rethinking their traditionally optimistic views about automation and its impact on labor markets. Previously they believed that technology would always create new opportunities to balance job losses. However, recent evidence shows that digital technologies have increased inequality and can potentially reduce wages for all workers. The key distinction emerging from new economic models is between technologies that automate existing work and those that create entirely new job categories, with the latter being more beneficial for workers. The article highlights that about 60% of current jobs did not exist in 1940, showing technology's potential to create new forms of work. However, the positive impact of AI and automation will depend mainly on whether workers have a voice in how these technologies are implemented in their organizations. The author cites the contract between the Writers Guild of America and the Hollywood film and television studios, in which employees achieved transparency over AI's role in, and the extent of its application to, their tasks (Romine & Delouya, 2023).

Further, the National Academies report ‘Artificial Intelligence and the Future of Work’ (2024) recognizes that while AI might increase productivity across sectors, its benefits may not be distributed equally, potentially leading to greater income and wealth inequality similar to previous technological revolutions. AI is expected to automate both mass expertise tasks (like retail inventory) and portions of elite expertise work (such as legal document preparation), while also potentially augmenting workers' capabilities in fields requiring judgment and problem-solving. A key concern is that AI advances could put pressure on wages and raise issues around worker surveillance, privacy, and creative output ownership. The report emphasizes that achieving positive workforce outcomes will require investments in worker training, skill development, and transition support programs, along with ensuring workers have input into how AI is implemented in their workplaces. The authors recommend developing robust policies to handle various possible technological advances while protecting employee interests.

However, there are also positive outlooks. Dr. Joseph E. Aoun (2017), in his book ‘Robot-Proof: Higher Education in the Age of Artificial Intelligence’, suggests that the AI Revolution might free workers from monotonous tasks, offering more innovative and creative jobs in which the intrinsically human qualities of compassion, kindness, understanding, and wisdom will be valued. This points to a potential path towards more creative and innovative jobs as a result of automation during Industry 4.0. This digital transformation therefore gives societies and higher education institutions an opportunity to adapt to a dynamic workplace where humans and AI technologies coexist and complement each other.

Conclusion

It is vital to maintain a balanced approach to the opportunities and risks of AI tools, including in the context of higher education. Popenici and Kerr (2017) caution against overestimating AI's role in education. They refer to Ruchir Puri, the Chief Architect of IBM’s AI supercomputer Watson, who stressed that, despite widespread excitement about AI, its limitations are significant and its actual capabilities are still quite restricted. Nevertheless, the potential evolution and growing autonomy of AI tools should not be underestimated. It is important for stakeholders to establish accountability measures, governance frameworks, and an off switch for forthcoming versions of AI tools (Wall Street Journal podcasts, 2023).

The rapid advancement of AI technologies requires higher education institutions’ shift from a reactive to a proactive mindset. They need to anticipate emerging trends and changes, integrating them into their strategic objectives and operational frameworks. Adopting this forward-thinking approach in technology management positions universities at the forefront of innovation, offering education that is relevant and future-proof.

Should higher education leaders and stakeholders fail to adequately address the concerns and key issues raised, there could be significant consequences. These include ineffective and unethical use of AI tools in HEIs, job displacement due to automation, and difficulties in keeping pace with rapid technological changes. Such outcomes might result in universities lagging in innovation, not preparing students adequately for future job markets, and negative effects on educational equity and data privacy (Smalley, 2022; Titareva, 2025).

References

Aoun, J. E., Robot-Proof: Higher Education in the Age of Artificial Intelligence, Cambridge, MA: MIT Press, 2017.

Chronicle of Higher Education, 'How Will Artificial Intelligence Change Higher Ed? ChatGPT is just the beginning. 12 scholars and administrators explain', 25 May 2023, https://www.chronicle.com/article/how-will-artificial-intelligence-change-higher-ed?sra=true

Co-Intelligence: AI in the Classroom with Ethan Mollick [online video], Presenter Ethan Mollick, 16 April 2024, https://www.youtube.com/watch?v=8FnOkxj0ZuA&ab_channel=GlobalSiliconValley

Corbyn, Z., 'AI scientist Ray Kurzweil: We are going to expand intelligence a millionfold by 2045', The Guardian, 29 June 2024, https://www.theguardian.com/technology/article/2024/jun/29/ray-kurzweil-google-ai-the-singularity-is-nearer

Frick, W., 'AI Is Making Economists Rethink the Story of Automation', Harvard Business Review, 27 May 2024, https://hbr.org/2024/05/ai-is-making-economists-rethink-the-story-of-automation

Gates, B., 'The Age of AI has begun', Gates Notes, 21 March 2023, https://www.gatesnotes.com/The-Age-of-AI-Has-Begun

Goldman Sachs Economic Research, 'The Potentially Large Effects of Artificial Intelligence on Economic Growth', Briggs/Kodnani Global Economics Analyst, 26 March 2023, https://www.key4biz.it/wp-content/uploads/2023/03/Global-Economics-Analyst_-The-Potentially-Large-Effects-of-Artificial-Intelligence-on-Economic-Growth-Briggs_Kodnani.pdf

Kolbjørnsrud, V., 'Designing the Intelligent Organization: Six Principles for Human-AI Collaboration', California Management Review, vol. 66, no. 2, 2023, pp. 44 – 64.

Kurzweil, R., The Singularity Is Nearer: When We Merge with AI, New York: Viking, 2024.

McKinsey & Company, 'The economic potential of generative AI: The next productivity frontier', McKinsey Digital, June 2023, https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier#introduction

McKinsey & Company, 'What is prompt engineering?', McKinsey Featured Insights, 22 March 2024, https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-prompt-engineering

Miao, F. and Cukurova, M., 'AI competency framework for teachers', UNESCO, 8 August 2024, https://www.unesco.org/en/articles/ai-competency-framework-teachers

Mollick, E., Co-Intelligence: Living and Working with AI, New York: Portfolio, 2024.

Mollick, E., 'Scaling: The State of Play in AI', One Useful Thing, 16 September 2024, https://www.oneusefulthing.org/p/scaling-the-state-of-play-in-ai

National Academies of Sciences, Engineering, and Medicine, 'Artificial Intelligence and the Future of Work', Washington, DC: The National Academies Press, 2024, https://nap.nationalacademies.org/catalog/27644/artificial-intelligence-and-the-future-of-work

Oregon State University, Bloom's Taxonomy Revisited, Version 2.0, 2024, https://ecampus.oregonstate.edu/faculty/artificial-intelligence-tools/blooms-taxonomy-revisited-v2-2024.pdf

Chronicle of Higher Education, 'Communicating with Students in the Age of AI', 2024, https://connect.chronicle.com/communicating-with-students-ai.html

Popenici, S. A. D. and Kerr, S., 'Exploring the impact of artificial intelligence on teaching and learning in higher education', Research and Practice in Technology Enhanced Learning, vol. 12, no. 22, 2017, pp. 12 – 22.

Romine, T. and Delouya, S., 'Writers union ratifies its new contract with Hollywood and TV studios', CNN, 9 October 2023, https://edition.cnn.com/2023/10/09/media/wga-contract-ratified/index.html

Roose, K., 'Can A.I. Be Blamed for a Teen's Suicide?', The New York Times, 23 October 2024, https://www.nytimes.com/2024/10/23/technology/characterai-lawsuit-teen-suicide.html

Siau, K., 'Impact of Artificial Intelligence, Robotics, and Automation on Higher Education', AMCIS 2017 Proceedings, Association for Information Systems (AIS), Boston, United States, August 2017, https://scholars.cityu.edu.hk/en/publications/publication(e68960c5-d212-4752-b38b-13340b121609).html

Smalley, S., 'A Data Collection Project at GW Leads to Privacy Questions', Inside Higher Ed, 21 February 2022, https://www.insidehighered.com/news/2022/02/22/gw-data-collection-effort-sparks-campus-privacy-concerns

The Transformative Potential of AGI — and When It Might Arrive [online video], Presenters Shane Legg and Chris Anderson, TED, 7 December 2023, https://www.youtube.com/watch?v=kMUdrUP-QCs

Titareva, T., ‘Technology Leadership Influence on Intention to Use AI-based Early Alert Systems (EAS) at Higher Education Institutions (HEIs)’, ProQuest Dissertations and Theses Global, 2025, https://commons.lib.jmu.edu/diss202029/150/

Wall Street Journal [podcast], 'Microsoft's Brad Smith Wants an AI Off Switch', WSJ What's News, 3 November 2023, https://www.wsj.com/podcasts/whats-news/microsofts-brad-smith-wants-an-ai-off-switch/7a5e0ff5-0fe2-4ddd-b40d-9067b554b2d6

Weiser, B., 'Here's What Happens When Your Lawyer Uses ChatGPT', The New York Times, 27 May 2023, https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html

Zaphir, L., Lodge, J. M., Lisec, J., McGrath, D. and Khosravi, H., 'How critically can an AI think? A framework for evaluating the quality of thinking of generative artificial intelligence', arXiv, 20 June 2024, https://arxiv.org/abs/2406.14769

Author: titareva

 

The content expressed in this blog is that of the author(s) and does not necessarily reflect the position of the website owner. All content provided is shared in the spirit of knowledge exchange with our AIEOU community of practice. The author(s) retains full ownership of the content, and the website owner is not responsible for any errors or omissions, nor for the ongoing availability of this information. If you wish to share or use any content you have read here, please ensure to cite the author appropriately. Thank you for respecting the author's intellectual property.