Universities need to step up their AGI game

As Sam Altman and others push toward a future where AI changes everything, universities need to decide if they’re going to be leaders or bystanders in helping society navigate advanced AI transitions. From the Future of Being Human Substack.

In his reflections for the new year, OpenAI CEO Sam Altman boldly predicts that “in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies.”

It’s tempting to dismiss this as hyperbole. But given recent advances in using AI models to simulate reasoning and exert control over external systems, I suspect there’s a reasonable chance he might be right.

If he is — and if AI continues along the path that Altman and OpenAI are forging toward “Superintelligent tools [that] could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity” — we urgently need new understanding and thinking on how humanity will successfully navigate the coming advanced AI transition.

But where this new understanding and thinking will come from, and how we’ll ensure that it truly benefits society, is far from clear.

We’re already seeing AI companies like OpenAI, Anthropic, and Google dominate the intellectual space around advanced frontier models and their societally responsible development. But responsible as these companies claim to be (and I think they’re trying hard), they still lack the breadth of vision and understanding that’s necessary to succeed here …

Andrew Maynard

Director, ASU Future of Being Human initiative