Does a lack of care in how we develop and use new technologies risk turning us and our creations into “monsters”? From the Future of Being Human Substack.
A little over a year ago I was sitting in a meeting where some of the world’s leading experts were discussing emerging developments in AI. But what caught my attention was not the latest frontier models or novel advances in machine learning; it was a psychologist talking about the concept of care.
That psychologist was Alison Gopnik, and she was exploring the intersection between care, caregiving, and artificial intelligence.
Alison leads the Social Science of Caregiving project at Stanford University and studies, amongst other things, the nature and roles of care and caregiving in human society, something that’s intimately intertwined with what it means to be human. And this extends to the technologies we create and interact with, and that become an integral part of our lives.
Alison’s ideas grabbed my attention at that meeting because I was already beginning to think about how the idea of care (what might be called the “hard” concept of care¹, as opposed to soft and woolly notions of it) plays into socially beneficial and responsible innovation. But I was approaching this from a very different angle.
I’ve been grappling with navigating advanced technology transitions for the best part of two decades. Part of this involves the complex challenge of managing the unexpected and unintended consequences of novel advances. As anyone in this field knows, this is not easy. But a series of casual conversations with my colleague Emma Frow had planted the seeds of a new way of thinking about socially responsible and beneficial technology innovation.
Emma has been working for some time now on how the concept of care can inform the development and use of new technologies, particularly in the area of synthetic biology …