Can AI Reason Like Humans?

Modem Futura Podcast

Episode 7, November 26, 2024

Sean and Andrew explore the challenges and limitations of AI reasoning, especially in large language models (LLMs). They discuss recent Apple research questioning whether LLMs truly reason, which suggests these models rely heavily on pattern recognition rather than genuine understanding. The conversation covers the hype around AI, the fragility of current models, and the importance of fostering AI literacy to avoid misplaced trust. They also examine AI’s potential as a writing partner, the critical need for accuracy in sensitive areas like healthcare and education, and the ethical implications of AI’s role in digital communication, advocating a nuanced, responsible approach to AI development.

Links: 

Gary Marcus on AI [Substack]

Apple white paper – GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models

Whisper in Hospitals [AP]

Samsung’s Moon Picture Controversy

Also available on Apple Podcasts, YouTube, and wherever you get your podcasts.

Chapters

00:00 Introduction to AI Reasoning Challenges

04:46 Exploring AI’s Limitations in Reasoning

12:36 The Fragility of AI Models

20:48 The Hype vs. Reality of AI Capabilities

25:56 AI Literacy and Trust Issues

28:58 Future Directions for AI Development

30:48 The Future of AI as a Writing Partner

33:39 Trust and Literacy in AI Applications

39:13 Critical Applications and the Need for Accuracy

43:46 Manipulation in Digital Communication

51:50 The Ethics of AI in High-Stakes Interactions


Modem Futura is a production of the Future of Being Human initiative at Arizona State University. Be sure to subscribe on Apple Podcasts, Spotify, or wherever you listen to your favorite shows. To learn more about the Future of Being Human initiative and all of our other projects, visit https://futureofbeinghuman.asu.edu

Host Bios:

Sean M. Leahy, PhD – ASU Bio
Sean is an internationally recognized technologist, futurist, and educator who develops humanistic approaches to emerging technology grounded in Futures Studies. He is a Foresight Catalyst for the Future of Being Human initiative, a Research Scientist in the School for the Future of Innovation in Society, and a Senior Global Futures Scholar with the Julie Ann Wrigley Global Futures Laboratory at Arizona State University.

Andrew Maynard, PhD – ASU Bio
Andrew is a scientist, author, thought leader, and Professor of Advanced Technology Transitions in the ASU School for the Future of Innovation in Society. He founded the ASU Future of Being Human initiative, directs the ASU Risk Innovation Nexus, and previously served as Associate Dean in the ASU College of Global Futures.