Spiky surfaces and jagged edges: Moving beyond what’s known in an Age of AI

Do we need to move beyond simple models of knowledge creation and discovery as AI becomes increasingly capable?

Will AI ever be able to transcend the “knowledge closure boundary” — the conceptual barrier between what’s known (or what can be inferred from what’s known and understood) and what is not — and make truly novel and transformative discoveries?

There’s a growing debate around what might be possible here versus what lies in the domain of optimistic (and, according to some, fanciful) speculation — and much of this is grounded in different theories and ideas around how discoveries are made, and new knowledge generated.

I was reminded of this by my colleague Subbarao (Rao) Kambhampati in a post on X this past week — and it got me thinking about how we conceptualize knowledge generation in an age of AI.

In the diagram above, Rao neatly captured a widely held perspective on new discoveries. At its core is the sum total of what we collectively know (if you want to get Rumsfeldian, the “known knowns”). Beyond this is the domain of knowledge that we can infer from what we know, including inference through combining what we know in new and interesting ways.

This is contained within a hard boundary (the knowledge closure boundary) — a boundary that cannot simply be transcended by novel combinations of what’s already known.

Rao’s argument — and that of many others in the field of AI — is that machines are incapable of unaided discovery beyond this boundary, at least without being embodied in a form where they can fully experience the physical universe and discover new things through “hands on” experimentation.

The diagram is a useful way of deflating some of the hype around AI-generated discovery (including speculations that artificial intelligence is somehow going to make the impossible possible within the next decade or so). And yet, I worry that it’s an over-simplification that potentially obscures what might indeed be possible as AI models become increasingly capable.

Part of my reasoning here is that there’s a long history of thinking around the nature of knowledge generation and discovery which suggests that things aren’t as simple as we might like to think — from the “paradigm shifts” of Thomas Kuhn, to the concept of the “adjacent possible” proposed by Stuart Kauffman and popularized by Steven Johnson in his 2010 book Where Good Ideas Come From, and beyond.1 Much of this thinking focuses on human-centric processes (remembering that we represent embodied and embedded intelligences with the ability to learn through real-world experience and experimentation). But it’s part of a body of understanding which suggests that discovery is complex, and not as well understood as we sometimes might like to think.

And so, spurred on by Rao’s post, I started playing around with what a conceptual diagram might look like that moves beyond the one he showed …

Andrew Maynard

Director, ASU Future of Being Human initiative