Holding on to our humanity in an age of AI

AIs are becoming startlingly good at emulating what we do. But what happens when they start to influence who we are and how we behave?

A couple of weeks ago, the CEO of Microsoft AI, Mustafa Suleyman, wrote about the dangers of becoming over-attached to artificial intelligence apps, to the point where they can affect a user’s behavior, beliefs, and even health. On the heels of this, it was heartbreaking to read a few days ago about the tragic case of Adam Raine, who took his own life at the age of 16, seemingly influenced by ChatGPT.

Suleyman isn’t the first to raise such concerns and, very sadly, Adam won’t be the last case of harm associated with AI use. Both are products of a technology that is capable of emulating our deepest human traits and mirroring what we look for in meaningful relationships.

Suleyman’s essay and Adam’s death reflect growing concerns around what has been dubbed “AI psychosis”—a tendency for AI apps to reinforce and amplify unhealthy beliefs and behaviors in some people. It’s a term that is easy to apply (usually without much thought) to those we consider to be “vulnerable.” But I suspect that we all have some degree of vulnerability here.

While AI psychosis is both ill-defined and increasingly overused as a phrase, it highlights a challenge that we’ve never had to face before as a species, and one that—as a result—we have little natural resistance to: What happens when machines are capable of triggering cognitive, emotional, and behavioral responses in us that were previously exclusively the domain of human relationships?

And—more worryingly—what happens when these machines are capable of using these responses to intentionally alter what and how we think, how we behave, and how we understand and respond to the world around us?

Andrew Maynard

Director, ASU Future of Being Human initiative