Designing the technological futures we aspire to

Why transformative technology demands responsible innovation — and the critical questions we need to be asking before it’s too late. From the Future of Being Human Substack.

Last summer I published an article on “Why all undergrads should take at least one course where they watch sci-fi movies in class” — some of you will remember it. In it I wrote about Arizona State University’s Moviegoer’s Guide to the Future class, and how it equips students to think about technology innovation and the future in ways few other classes do.

I’m beginning to gear up to teach the class again this fall (for the 9th time, I believe). And as I was preparing, I was reminded of just how relevant the class’s trailer (because every movie class has to have a trailer!) is to how anyone might approach the benefits and risks of transformative technologies — not just my students.

This “trailer” was a short video I put together to give prospective students a sense of what the course is about. It was shamelessly inspired by some work I did with a couple of producers. But it also captures the essence of the questions we address during the course.

I remember thinking when I made the video that the themes it touches on transcend the course. And revisiting it, I was struck by how it’s more relevant than ever to the questions all of us should probably be asking about transformative technologies, from AI and gene editing to brain-computer interfaces, cloning, and much more.

So I thought I’d share it again here:

Of course, I’m well aware that not everyone who sits down to read this Substack appreciates a multimedia segment when all they were hoping for was a quiet read. And so I’ve also included the transcript below.

It’s a little basic without the news clips, AIs, dinosaurs, and fantasy nanotech — but you get the gist …


Andrew Maynard

Director, ASU Future of Being Human initiative