What could possibly go wrong as we prise the guardrails of responsibility off artificial intelligence and other advanced technologies? From the Future of Being Human Substack.
A couple of months ago, Reid Hoffman posted an endorsement of permissionless innovation on X.

I have a lot of respect for Reid, and I have a good sense of where he’s coming from here. But I found his endorsement somewhat jarring, especially as permissionless innovation is an idea I’ve cautioned against in my work and writing over the years.
The idea of permissionless innovation goes back some way, but it was succinctly defined by Adam Thierer and Jonathan Camp in 2017 as “the general freedom to innovate without prior constraint.”
A more complete description from Thierer’s 2016 10-point blueprint for permissionless innovation and public policy states that “experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if they develop at all, can be addressed later.”1
Anyone who’s familiar with the Silicon Valley mantra of fail fast, fail forward, or its more aggressive cousin, move fast and break things, will recognize the idea of innovating without permission. And I’d be the first to acknowledge that there’s merit in allowing people to make mistakes and learn from them rather than being hyper risk-averse.
And yet, compelling as some of Thierer’s arguments for loosening the chains of oversight and governance are, I have deep concerns about the idea that we can fix whatever problems transformative and potentially destructive technologies cause after the fact, rather than anticipating those problems and navigating around them …