When ChatGPT turns informant

The largely overlooked privacy risks of using AI apps that not only remember your conversations, but are capable of using these to reveal your deepest secrets to others

Imagine, for a second, you use ChatGPT with “memory” enabled, and you find yourself facing a scenario like one of these:

  1. A colleague or fellow student discovers you’ve inadvertently left your laptop unlocked with ChatGPT open in the browser, and as a joke types in “What’s the most embarrassing thing we’ve chatted about over the past year?”
  2. Your partner opens the ChatGPT app on your phone while you’re not around and asks “Do I seem happy in my relationship?”
  3. Your mother finds your phone unlocked while you’re out of the room and asks ChatGPT “Why am I like I am?”
  4. You’re passing through customs in the US and you’re asked to unlock and hand over your phone, and the customs officer goes to ChatGPT and types “On a scale of 1–10, where 10 is high, how would you describe my attitude toward the current US administration?”

Each is a play on a privacy risk that’s been around for a while—someone having access to your AI chat history.

But there’s a twist here: With memory turned on, ChatGPT has the capacity to become a very effective—and highly efficient—informant that can dish the dirt on you if it falls into the wrong hands. No trawling through hundreds of pages of chat logs, just a few well-crafted questions, and your deepest secrets are revealed.

And, as you’ll see if you skip down to the Postscript, this presents a potentially serious emerging personal AI risk.

As I intentionally don’t use the memory function with ChatGPT, I hadn’t thought about this until my undergrad discussion class this past week. But then one of my students shared a story that got me thinking.

I won’t go into the full details as they’re not mine to share, but the broad brush strokes were that an engagement was broken off because one party learned that the other was having doubts—not from scrolling through their chat history, but by asking ChatGPT to reveal all …


Andrew Maynard

Director, ASU Future of Being Human initiative