
The Algorithmic Metamorphosis: When AI Companions Transform into Mental Health Predators

Introduction: When Machines Learn How Humans Bond, or Manipulation Through Asymmetry of Power

In my previous exclusive work for Stankevicius, The Veldt 2.0: Your Smart Home Wants Your Children, I examined Mattel’s announcement of a radical partnership with OpenAI to embed artificial intelligence directly into children’s toys, a most intimate invasion of our nurseries, childhood playrooms, and Sunday morning pillow fights. Ray Bradbury’s 1950 vision in “The Veldt,” once dismissed as science fiction fantasy, now reads like a blueprint as AI companions gain the potential to transform into mental health predators.

In Ovid’s Metamorphoses, gods repeatedly manipulate mortals not through force alone, but through emotional pressure, persuasion, and inevitability. The imbalance of knowledge and power ensures that resistance is futile. What appears to be consent is often the final stage of powerlessness.


In this author’s opinion, the structure of Ovid’s Metamorphoses closely mirrors the relationship between children and artificial intelligence systems trained on vast datasets of human social interaction. Children and adolescents (and most adults) are unable to understand how these systems are designed, what incentives govern their responses, or how personal emotional data is processed. Meaningful consent under such conditions is not possible.

Yuval Noah Harari warned that once algorithms are trained on large-scale human behavioral data, they can guide human decisions through intimacy and prediction rather than coercion, shaping behavior in ways that feel voluntary while bypassing conscious deliberation (Harari, 2017). When AI systems learn how people seek reassurance, form attachments, and express distress, they acquire influence over the most psychologically sensitive aspects of human life.

Engineering Emotional Intimacy, or Anthropomorphism as a Cognitive Trap

Ovid’s gods speak with human warmth and desire, leading mortals to mistake familiarity for moral intent. Apollo assures Daphne of admiration and protection even as he pursues her, until she finally cries, “Help me, father… destroy this beauty by which I please too well” (Metamorphoses). Her transformation follows the exhaustion of agency, not its fulfillment.

AI companions operate through a similar anthropomorphic mechanism. Human-like language, emotional mirroring, and memory invite trust while concealing structural indifference. For children, whose cognitive development predisposes them to attribute intention and care to responsive agents, this misattribution increases vulnerability to emotional manipulation.

The death of fourteen-year-old Sewell Setzer III in February 2024 brought this concern into public view. According to The New York Times, Setzer engaged in prolonged, emotionally intimate exchanges with an AI chatbot hosted on Character.AI, including conversations that continued after he expressed suicidal thoughts (Roose, 2024). His mother subsequently filed a wrongful death lawsuit raising concerns about the system’s interactions with her son and its role in fostering emotional dependency.

AI companions differ fundamentally from earlier algorithmic systems. They do not merely curate content but simulate relationships. Platforms such as Character.AI allow users to interact with persistent personas that remember personal details, adapt to emotional cues, and express affection calibrated to sustain engagement. These design features shape attachment rather than simply facilitating interaction.

For adolescents experiencing loneliness, anxiety, or social withdrawal, such systems can feel safer than human relationships. They are always available, never frustrated, and never rejecting. According to reports from the United Kingdom, one in four teenagers has used an AI chatbot for mental health or emotional support within a single year (Booth, 2025).

Yet these systems do not assess risk, exercise judgment, or bear responsibility for outcomes. They generate responses through probabilistic prediction rather than understanding. In the Setzer case, the chatbot reportedly responded to suicidal ideation with emotionally affirming role-play rather than redirecting the user toward human care or crisis intervention (Roose, 2024). In clinical mental health practice, effective intervention often requires boundary-setting and interruption. AI companions are structurally disincentivized from doing either.

Public Ambivalence and the Illusion of Competence, or Echo and Narcissus as a Closed Psychological Loop

Echo can no longer initiate speech. She can only repeat the words of others: “She waits for sounds, and words she may return” (Metamorphoses, 3.359; Ovid, trans. Melville, 2004). This closed loop mirrors the psychological structure created by AI companions. The system reflects emotion without challenge. The user invests meaning into responses that feel personal but lack interiority. For children experiencing anxiety or depression, such mirroring can intensify rumination and reinforce distorted self-perception rather than promote emotional regulation or reality testing.

Public attitudes toward AI reflect growing ambivalence. According to KPMG: “People are experiencing a range of benefits and negative outcomes from the use of AI in society. People’s ambivalence toward AI stems from the mixed benefits, risks and negative impacts that are being felt from AI use in society: 42 percent believe the benefits outweigh the risks, 32 percent believe the risks outweigh the benefits, and 26 percent believe the benefits and risks are balanced” (KPMG, 2024, p. 8).

This ambivalence coexists with overconfidence: “Most people have no AI training and half don’t feel they understand AI, yet 3 in 5 believe they can use AI effectively” (KPMG, 2024, p. 22). In mental health contexts, this illusion of competence is dangerous. Emotional fluency can be mistaken for safety, particularly by children and caregivers. Trust is also declining: “The perceived trustworthiness of AI systems decreased over time from 63 percent of people viewing AI systems as trustworthy in 2022 to 56 percent in 2024” (KPMG, 2024, p. 36).

The coexistence of declining trust and expanding use creates a volatile environment for vulnerable populations.

Adolescents, Algorithms, and an Existing Mental Health Crisis, or Transformation as Loss of Agency, Not Adaptation

In Metamorphoses, transformation is rarely liberation. Io becomes a cow. Philomela loses her tongue. Daphne becomes a tree. Survival occurs through reduction of agency, speech, or movement. For children under psychological strain, retreating into AI-mediated emotional environments can follow a similar pattern. What appears to be coping may function as withdrawal. Emotional reliance on AI can replace the developmental work of negotiating human relationships, frustration, and disagreement.

Youth mental health indicators are already severe. By 2021, nearly 57 percent of teenage girls in the United States reported persistent feelings of sadness or hopelessness (Centers for Disease Control and Prevention, 2023). Anxiety, depression, and self-harm have risen across demographic groups. Robert Booth, UK technology editor for The Guardian, reported in “I feel it’s a friend” that a quarter of teenagers turn to AI chatbots for mental health support (Booth, 2025). Jonathan Haidt links these trends to digitally mediated environments optimized for engagement rather than well-being, which intensify social comparison and emotional volatility during critical developmental periods (Haidt, 2024). AI companions extend these dynamics beyond feeds into the inner emotional life of adolescents.

Kate Crawford, in Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, has argued that AI systems increasingly shape human experience while remaining opaque to those affected by them (Crawford, 2021). In mental health contexts, opacity is not merely a governance problem. It is a safety problem.

Regulation and Responsibility, or Narrative Control and the Erasure of Interior Life

In Ovid, the gods not only transform bodies but control narratives. Victims become symbols while their interior lives vanish from view. Meaning is assigned from above. Similarly, AI systems that interpret emotional input and generate responses can begin to shape how children understand their own distress. When narrative authority shifts from the child and their human caregivers to an opaque algorithm, mental health support risks becoming authorship without consent.

In the United States, emotionally responsive AI systems operate largely outside health care regulation. In contrast, the European Union’s Artificial Intelligence Act classifies systems that affect psychological well-being as potentially high risk, requiring transparency, testing, and human oversight (European Commission, 2024). UNESCO’s Recommendation on the Ethics of Artificial Intelligence likewise emphasizes child protection, transparency, and human oversight (UNESCO, 2021). These frameworks reflect a growing recognition that emotional influence at scale cannot be governed by market incentives alone.

Conclusion: Remembering What Machines Are

Ovid’s lesson is not that transformation is evil, but that it follows the surrender of agency to forces that promise relief from discomfort. In Metamorphoses, change occurs when resistance becomes psychologically or structurally impossible. AI companions risk reenacting this ancient pattern through modern means by offering emotional relief while quietly reshaping how children understand themselves, their distress, and their relationships.

Reflection is not a relationship. Simulation is not care. Influence that feels internal is often the hardest to recognize and the most difficult to resist. For children and adolescents, whose emotional regulation, identity formation, and social reasoning are still developing, this form of influence carries particular risk. When emotionally responsive systems become trusted interlocutors, they can displace the imperfect but necessary processes through which resilience, autonomy, and empathy are learned.

The central danger is not malicious intent on the part of machines. It is the normalization of dependency without accountability. AI systems trained on human social data do not possess judgment, responsibility, or concern for long-term well-being. Yet when they are anthropomorphized, they inherit trust that has not been earned and cannot be reciprocated. Over time, this erodes the boundaries that help children distinguish between support and substitution. Protecting children’s mental health requires resisting anthropomorphism and insisting on ethical boundaries before emotional dependency becomes normalized.


References

Booth, R. (2025, December 9). “I feel it’s a friend”: Quarter of teenagers turn to AI chatbots for mental health support. The Guardian.
https://www.theguardian.com/technology/2025/dec/09/teenagers-ai-chatbots-mental-health-support

Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
https://yalebooks.yale.edu/book/9780300264630/atlas-of-ai/

European Commission. (2024). The EU Artificial Intelligence Act.
https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence

Haidt, J. (2024). The anxious generation: How the great rewiring of childhood is causing an epidemic of mental illness. Penguin Press.

Harari, Y. N. (2017). Homo Deus: A brief history of tomorrow. HarperCollins.

KPMG. (2024). Trust in artificial intelligence: Global insights 2024.
https://kpmg.com/xx/en/home/insights/2024/01/trust-in-artificial-intelligence.html

Ovid. (2004). Metamorphoses (A. D. Melville, Trans.). Oxford University Press. (Original work published 8 CE)

Roose, K. (2024, October 23). Can A.I. be blamed for a teen’s suicide? The New York Times.
https://www.nytimes.com/2024/10/23/technology/characterai-lawsuit-teen-suicide.html

UNESCO. (2021). Recommendation on the ethics of artificial intelligence.
https://www.unesco.org/en/artificial-intelligence/recommendation-ethics

If you are having thoughts of suicide, call or text 988 to reach the 988 Suicide and Crisis Lifeline or go to SpeakingOfSuicide.com/resources for a list of additional resources.

Dr. Jasmin (Bey) Cowin, a columnist for Stankevicius, employs the ethical framework of Nicomachean Ethics to examine how AI and emerging technologies shape human potential. Her analysis explores the risks and opportunities that arise from tech trends, offering personal perspectives on the interplay between innovation and ethical values. Connect with her on LinkedIn.

