AI’s Potemkin Vision: The Seduction of Educational Techno-Solutionism

The rapid ascent of artificial intelligence (AI) has captivated Higher Education Institutions (HEIs), heralding promises of unprecedented technological advancement in education. Companies like OpenAI stand at the forefront, pushing the boundaries of what generative pre-trained transformers, or “GPTs,” can achieve. However, beneath this veneer of progress lies a complex web of epistemological and pedagogical quandaries that question the very foundation of AI’s trajectory in education. While large language models (LLMs) demonstrate a remarkable ability to detect patterns, I posit that they fundamentally misconstrue the nature of correlative thinking, which emerges from lived experience, cultural wisdom, and the body’s deep knowing.

This juxtaposition between potential and limitation becomes even more pronounced when examining the rapid growth of AI in the education sector. Emergen Research reports that the global market for artificial intelligence in education was valued at USD 1.55 billion in 2020 and is projected to reach USD 25.63 billion by 2028, a compound annual growth rate (CAGR) of 43.8%. Growth drivers include the expansion of online education during COVID-19, widespread adoption of AI-based edtech platforms, and the use of AI tools for teaching, performance assessment, and administrative tasks. Demand for real-time student progress tracking and personalized learning modules has also contributed to this growth.
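
As a quick sanity check on those figures, here is a back-of-the-envelope Python sketch (my own arithmetic, not Emergen Research’s methodology) assuming simple annual compounding over the eight years from 2020 to 2028; it lands near the reported CAGR, with the small gap attributable to rounding and the report’s exact base period.

```python
# Back-of-the-envelope check of the reported market figures, assuming
# simple annual compounding from 2020 to 2028 (eight periods).

start_value = 1.55    # USD billions, 2020
end_value = 25.63     # USD billions, 2028 (projected)
years = 2028 - 2020

implied_cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.1%}")  # roughly 42%, close to the reported 43.8%
```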

Is the vision presented by OpenAI, Pearson, eFront, iSpring Learn, Curriculum Associates, Instructure, and countless others a genuine leap forward in human development? Or is it a Potemkin village – a façade of educational innovation – masking not only the technological limitations of these systems but also a subtler form of colonization? One that subsumes and marginalizes ancient, polyphonic ways of knowing – traditions that have long understood the world through resonance, relationality, and holistic meaning-making – beneath the hegemony of algorithmic ‘facts’ and data-driven metrics.

The Illusion of Progress: A Potemkin Village

The term “Potemkin village” originates from tales of artificial settlements erected to impress Empress Catherine II during her journey through Crimea. Similarly, the AI industry may be constructing an illusion of rapid progress while significant underlying problems persist. The frenzy surrounding AI could be a symptom of something far more insidious, detrimental, and disastrous for society. Schaeffer, Miranda, and Koyejo (2023), in their article Are Emergent Abilities of Large Language Models a Mirage?, argue that the phenomenon of emergent abilities in large language models may not signify fundamental behavioral changes but could instead stem from the metrics researchers use to evaluate models. Luo, Yang, Meng, Li, Zhou, and Zhang (2024), in their paper An Empirical Study of Catastrophic Forgetting in Large Language Models During Continual Fine-tuning, examine catastrophic forgetting (CF): the tendency of machine learning models to lose previously acquired knowledge as they learn new information, which poses challenges to maintaining performance across multiple tasks.
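
To make the Schaeffer, Miranda, and Koyejo argument concrete, here is a minimal Python sketch, not taken from their paper and using purely hypothetical numbers: if per-token accuracy improves smoothly with model scale, an all-or-nothing metric such as exact match over a multi-token answer can still appear to jump abruptly.

```python
# Illustrative sketch of the "emergence as a metric artifact" argument.
# Assumed: per-token accuracy improves smoothly as models scale up; the
# values below are hypothetical, not measurements from Schaeffer et al. (2023).

per_token_accuracy = [0.50, 0.60, 0.70, 0.80, 0.90, 0.95, 0.99]
answer_length = 10  # the metric scores a 10-token answer as all-or-nothing

for p in per_token_accuracy:
    exact_match = p ** answer_length  # every token must be correct
    print(f"per-token accuracy {p:.2f} -> exact-match score {exact_match:.3f}")

# The underlying curve is smooth, but exact match stays near zero until
# per-token accuracy is high, which can look like a sudden "emergent" ability.
```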

The Habsburg Deformity in AI Models

One of the critical issues facing AI development is the risk of a Habsburg deformity. The “Habsburg deformity,” also known as mandibular prognathism, refers to the prominent jaw and other craniofacial features associated with the Habsburg royal family of Europe; one well-known sufferer was Carlos II, King of Spain (1661-1700). As a metaphor for AI, the Habsburg deformity describes systemic vulnerabilities that arise from a lack of diversity, over-optimization, or compounding foundational errors. Just as the Habsburg dynasty’s reliance on consanguineous marriage led to hereditary weaknesses, AI systems trained on overly narrow or biased datasets risk developing flaws that propagate throughout dependent applications, reducing adaptability and robustness. This issue may manifest as overfitting, limited generalization, or the amplification of systemic biases, especially when systems rely excessively on a single methodology or framework.
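
A rough illustration of that narrowness problem, offered as a hedged Python sketch rather than anything from the cited literature: a flexible model fit only to a thin slice of the world reproduces that slice faithfully yet falters once asked about a wider range. The function, interval, noise level, and polynomial degree below are arbitrary assumptions chosen only to make the point visible.

```python
import numpy as np

# Toy illustration: a flexible model fit to a narrow slice of the world
# (x in [0, 1]) is then asked about a wider range (x in [0, 3]).
# The function, noise level, and polynomial degree are arbitrary choices.

rng = np.random.default_rng(0)

x_train = np.linspace(0.0, 1.0, 50)                      # narrow "dataset"
y_train = np.sin(x_train) + rng.normal(0.0, 0.01, 50)    # slightly noisy targets

coeffs = np.polyfit(x_train, y_train, deg=9)             # over-flexible fit

x_wide = np.linspace(0.0, 3.0, 50)                        # the wider world
err_in = np.mean((np.polyval(coeffs, x_train) - np.sin(x_train)) ** 2)
err_out = np.mean((np.polyval(coeffs, x_wide) - np.sin(x_wide)) ** 2)

print(f"mean squared error inside the training range: {err_in:.6f}")
print(f"mean squared error on the wider range:        {err_out:.6f}")
# The second number is typically far larger: the model mirrors its narrow
# training slice faithfully while failing to generalize beyond it.
```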

MAD AI – Model Autophagy Disorder

When AI models are trained on AI-generated data, they risk model collapse, in which outputs become nonsensical or less reliable over time. This degradation occurs because AI-generated content often lacks the diversity and complexity of human-generated data, causing models to overfit on flawed patterns. Alemohammad et al. (2023), in their paper Self-Consuming Generative Models Go MAD, highlight the critical challenges posed by reliance on synthetic data in training generative AI models, emphasizing the risks of what they term “Model Autophagy Disorder” (MAD). This phenomenon arises when generative models are trained iteratively on data generated by previous models, creating a self-consuming cycle, or “autophagous loop.” Through analytical and empirical studies of state-of-the-art generative image models, the researchers examine three types of autophagous loops, distinguished by the availability of fresh real-world data and the trade-offs between data quality and diversity in synthetic inputs. They conclude that without sufficient inclusion of new real-world data in each training generation, the quality (precision) and diversity (recall) of outputs decline progressively, threatening the long-term viability and robustness of such models. This underscores the need for strategies that balance synthetic and real data to prevent degradation and ensure sustainable advancements in AI systems.
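
To give the autophagous loop a concrete shape, the following Python sketch is a deliberately tiny stand-in (a one-dimensional Gaussian “model” refit each generation on its own samples) rather than the image-model experiments reported by Alemohammad et al.; the sample sizes, generation counts, and mixing fraction are illustrative assumptions.

```python
import numpy as np

# Toy sketch of an "autophagous loop" in the spirit of Alemohammad et al. (2023),
# not their actual experiments: a 1-D Gaussian "model" is refit, generation after
# generation, on samples drawn from the previous generation's fit. Without fresh
# real data, the estimated spread (a stand-in for output diversity) tends to
# collapse; mixing real data back in each generation largely preserves it.

rng = np.random.default_rng(42)
real_data = rng.normal(loc=0.0, scale=1.0, size=10_000)  # the "real world"

def run_loop(generations=100, sample_size=10, fresh_fraction=0.0):
    mu, sigma = 0.0, 1.0  # generation-0 model, fit to the real distribution
    for _ in range(generations):
        synthetic = rng.normal(mu, sigma, sample_size)
        if fresh_fraction > 0:
            n_fresh = int(sample_size * fresh_fraction)
            fresh = rng.choice(real_data, size=n_fresh, replace=False)
            synthetic = np.concatenate([synthetic[: sample_size - n_fresh], fresh])
        mu, sigma = synthetic.mean(), synthetic.std(ddof=1)  # refit the "model"
    return sigma

print("spread after 100 self-consuming generations: ", run_loop(fresh_fraction=0.0))
print("spread when 50% fresh real data is mixed in:", run_loop(fresh_fraction=0.5))
# The fully synthetic loop typically ends with a spread far below the original 1.0,
# while the mixed loop stays much closer to it.
```

The shrinking spread in the fully synthetic run is the toy analogue of the progressive loss of diversity (recall) the authors document; keeping fresh real-world data in each generation is what holds that diversity open.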

Understanding the autophagous loop and its implications is crucial for education because generative AI systems are increasingly integrated into learning environments, from content creation to personalized instruction. If such systems are trained predominantly on synthetic data, particularly without sufficient real-world validation, the resulting degradation in output quality and diversity could significantly impact the authenticity, accuracy, and inclusivity of educational materials. This is especially important in areas like language learning, assessment design, and curriculum development, where reliance on AI-generated content is growing. The risk of amplified biases and artifacts in AI models poses a challenge to educational equity and fairness. If these flaws permeate educational tools, they can propagate stereotypes, reinforce systemic inequalities, and deliver suboptimal learning experiences. Moreover, a decline in the alignment of AI outputs with real-world knowledge can lead to misinformation or the erosion of critical thinking, as learners may unwittingly rely on distorted or less representative content.

Plato’s Allegory of the Cave

As I reflect deeply on AI’s ethical obligations in knowledge creation and societal impact, I find myself drawn to Plato’s Allegory of the Cave as a powerful metaphor. Plato wrote the Allegory of the Cave around 380 BCE as a key section of The Republic. In this dialogue, Socrates explains to Glaukon (Plato’s brother) how humans are like prisoners chained in a cave, only able to see shadows on the wall and mistaking these for reality.

“And now allow me to draw a comparison in order to understand the effect of learning (or the lack thereof) upon our nature. Imagine that there are people living in a cave deep underground. The cavern has a mouth that opens to the light above, and a passage exists from this all the way down to the people. They have lived here from infancy, with their legs and necks bound in chains. They cannot move. All they can do is stare directly forward, as the chains stop them from turning their heads around. Imagine that far above and behind them blazes a great fire…”

Plato, The Allegory of the Cave (S. Eyer, Trans.), p. 1

When I consider the prisoners in Plato’s allegory of the cave, seeing only shadows on the wall, I see a profound parallel to our modern relationship with AI. Just as the prisoners mistake shadows for reality itself, we may be at risk of mistaking AI’s data-driven outputs – impressive but ultimately derivative – for genuine understanding. The shadows in Plato’s cave were projections that created an illusion of reality; similarly, AI’s outputs are sophisticated projections based on statistical patterns, lacking the grounding of direct experience or authentic moral reasoning.

I’m particularly concerned about a modern manifestation of what Christensen observed in The Innovator’s Dilemma: “The pace of technological progress can, and often does, outstrip what markets need” (Christensen, 1997, p. 10). In my assessment, our focus on incremental technological improvements – like streamlining administration or implementing basic AI learning tools – might be causing us to lose sight of deeper ethical considerations. I worry that in our rush to automate, we risk reducing students to data points, overlooking the complex nature of their intellectual and emotional development.

When I consider the future role of educators, I feel strongly that we must maintain a delicate balance between human intuition and machine capability. In my experience, teaching transcends mere information transfer – it’s about nurturing empathy, critical thinking, and meaningful human connections. I further believe that AI’s simulation of correlation through statistical analysis represents a profound misunderstanding of how humans construct meaning through bodily experience, cultural practice, and analogical thinking. Is this a new colonization of consciousness itself, attempting to reshape human cognition in the image of machine learning?

Socrates shall have the last word:

“What our message now signifies is that the ability and means of learning is already present in the soul. As the eye could not turn from darkness to light unless the whole body moved, so it is that the mind can only turn around from the world of becoming to that of Being by a movement of the whole soul. The soul must learn, by degrees, to endure the contemplation of Being and the luminous realms. This is the Good, agreed?”

Plato, The Allegory of the Cave (S. Eyer, Trans.), p. 11

References

Alemohammad, S., Casco-Rodriguez, J., Luzi, L., Humayun, A. I., Babaei, H., LeJeune, D., Siahkoohi, A., & Baraniuk, R. G. (2023). Self-consuming generative models go MAD. arXiv.

Christensen, C. M. (1997). The innovator’s dilemma: When new technologies cause great firms to fail. Harvard Business School Press.

Luo, Y., Yang, Z., Meng, F., Li, Y., Zhou, J., & Zhang, Y. (2024). An empirical study of catastrophic forgetting in large language models during continual fine-tuning. arXiv.

Schaeffer, R., Miranda, B., & Koyejo, S. (2023). Are emergent abilities of large language models a mirage? arXiv. https://doi.org/10.48550/arXiv.2304.15004

This article was written by Dr. Jasmin (Bey) Cowin, Associate Professor and U.S. Department of State English Language Specialist (2024). As a columnist for Stankevicius, she writes on Nicomachean Ethics: Insights at the Intersection of AI and Education. Connect with her on LinkedIn.
