The “hard” concept of care in technology innovation
Does a lack of care in how we develop and use new technologies risk turning us and our creations into “monsters”? From the Future of Being Human Substack.

AI and the lure of permissionless innovation
What could possibly go wrong as we prize the guardrails of responsibility off artificial intelligence and other advanced technologies? From the Future of Being Human Substack.

An AI model that can decode and design living organisms
The Arc Institute’s new EVO 2 model does for DNA what ChatGPT did for words. From the Future of Being Human Substack.

Does OpenAI’s Deep Research signal the end of human-only scholarship?
Having got my hands on Sam Altman’s latest AI offering yesterday, I’m beginning to wonder when research and scholarship that isn’t augmented by AI will be seen as an anachronism. From the Future of Being Human Substack.

AI turned my class into a song — and I can’t stop listening
One of my students used ChatGPT and Suno to create a musical ode to my Pizza and a Slice of Future class, and it’s awesome! From the Future of Being Human Substack.

AI at a Crossroads: The Unfinished Work of Aligning Technology with Humanity
Where the Vatican and AI Scientists Converge, Collide, and Challenge Us to Think Differently. From the Future of Being Human Substack.

I asked ChatGPT to create three video games – this is what happened
Generative AI is making it easier than ever for people with no prior experience to experiment with writing code. But just how big a deal is the emergence of “conversational coding?” From the Future of Being Human Substack.

Frequently Asked Questions on Using ChatGPT in the Classroom
After two years of procrastinating, I finally updated a set of FAQs that I compiled for colleagues back in 2023 – in part because I couldn’t find anything better out there! From the Future of Being Human Substack.

Universities need to step up their AGI game
As Sam Altman and others push toward a future where AI changes everything, universities need to decide if they’re going to be leaders or bystanders in helping society navigate advanced AI transitions. From the Future of Being Human Substack.

Sora has a bias problem
Sora seems to think all academics are men, and predominantly white men at that. And this is a problem. From the Future of Being Human Substack.