What does responsible innovation mean in an age of accelerating AI?

The new AI 2027 scenario suggests artificial intelligence may outpace our ability to develop it responsibly. How seriously should we take this? From the Future of Being Human Substack.

A new speculative scenario on AI futures is currently doing the rounds and attracting a lot of attention. AI 2027 maps out what its authors consider to be plausible near-term futures for artificial intelligence, leading to the emergence of superintelligence by 2027, followed by radical shifts in the world order.

If this sounds like science fiction to you, you wouldn’t be the only one to think so. Yet despite its highly speculative nature, the AI 2027 scenario (or, more accurately, scenarios, as this is something of a “choose your own ending” story) is sufficiently grounded in current trends and emerging capabilities to give serious pause for thought.

It also reflects at least some of the thinking of a growing number of leaders and developers at the cutting edge of AI.

The scenario was published just as I was heading into a workshop on AI and responsible innovation this past week, and so the question of how we ensure artificial intelligence is developed and managed appropriately was on my mind. It’s not surprising, therefore, that my first reaction on reading AI 2027 was to worry that, even if the projections represent an edge case, we might be facing a near-term future where current efforts to develop artificial intelligence responsibly seem futile.

I hope they are not, and the scenario has already attracted considerable pushback for being too alarmist. Yet it’s also been cautiously welcomed by some big names in cutting-edge AI as a salutary warning of where we may be heading.

The scenario depends on a number of assumptions, all of which can be contested but are nevertheless useful for exploring potential (if not necessarily likely) near-term AI futures.

These include …

Andrew Maynard
Director, ASU Future of Being Human initiative