Where the Vatican and AI Scientists Converge, Collide, and Challenge Us to Think Differently

From the Future of Being Human Substack.
Artificial intelligence is not just a technology. It is a mirror.
Depending on where you stand, it reflects a world of breathtaking potential or looming peril. Sometimes, both at once.
Two major reports released this past week—the International AI Safety Report, crafted by an elite assembly of AI scientists and policymakers, and the Vatican’s Antiqua et Nova, a theological and ethical treatise on AI’s implications—offer vastly different vantage points. But what’s fascinating isn’t just where they diverge. It’s where they unexpectedly converge, and what insights emerge when we read them not as competing narratives but as puzzle pieces forming a larger, more profound picture.
A Synergy of Caution and Calling
At first glance, these reports might seem worlds apart—one grounded in empirical risk assessments, the other in centuries-old theological reflection. But they both start from a similar premise: AI represents a fundamental shift in humanity’s relationship with knowledge, power, and responsibility.
Both reports call for urgent oversight. Both recognize that AI is not simply a tool but a force that can reshape social, economic, and even ontological structures. And, crucially, both warn that if we let AI’s development be dictated solely by market forces or technological momentum, we may end up in a world that is not just dangerous—but deeply misaligned with human flourishing.
Where the International AI Safety Report catalogs a sobering list of risks—cyber threats, misinformation, automation-driven inequality, and even the specter of losing control over advanced systems—the Vatican’s document raises equally pressing, if less quantifiable, concerns: the erosion of truth, the commodification of intelligence, and the dehumanization of decision-making.
Put together, these insights paint a more complete picture than either report could alone. One emphasizes external threats, the other internal crises. The real challenge is that these dangers are not separate—they feed into one another.
If we fail to control AI-generated disinformation (as the safety report warns), we don’t just face a technical problem—we face the breakdown of trust in shared reality, a concern the Vatican highlights as an existential crisis. If we use AI to maximize efficiency without ethical guardrails, we don’t just create economic winners and losers—we risk eroding human dignity, reducing creativity, and offloading moral decision-making to systems that do not, and cannot, care.
The synergy here is clear: AI safety and AI ethics must evolve together …