Having got my hands on Sam Altman’s latest AI offering yesterday, I’m beginning to wonder when research and scholarship that isn’t augmented by AI will be seen as an anachronism. From the Future of Being Human Substack.
This past Sunday, OpenAI launched Deep Research — an extension of its growing platform of AI tools, and one which the company claims is an “agent that can do work for you independently … at the level of a research analyst.”
I got access to the new tool first thing yesterday morning, and immediately put it to work on a project I’ve been meaning to explore for some time: writing a comprehensive framing paper on navigating advanced technology transitions.
I’m not quite sure what I was expecting, but I didn’t anticipate being as impressed as I was.
I’m well aware of the debates and discussions around whether current advances in AI are substantial, or merely smoke-and-mirrors hype. But even given the questions and limitations here, I find myself beginning to question the value of human-only scholarship in the emerging age of AI. And my experiences with Deep Research have only reinforced this.
Deep Research pulls together a number of features from emerging AI platforms to create what is, in essence, a “research agent” that can iteratively reason its way through complex questions, while drawing on web-based resources and constantly checking its work. Using it feels like handing PhD-level questions to a team of some of the best minds around, and having them come back with PhD-level responses — all within a few hours.
Except that, in some respects, the results far surpass what most PhD students are capable of — not because they lack the intellectual capacity, but because they lack the ability to synthesize ideas, thinking, and data from a vast array of disciplines and sources, and then crunch them into something that transcends the restrictive training and perspective that comes with disciplinary boundaries.
Not surprisingly, Deep Research has its flaws, and is still a few steps away from where I’d like an AI research agent to be. But I suspect that reasoning research agents like this will eventually make non-AI-augmented scholarship and research look intellectually limited and somewhat quaint.
But back to my first experiences with Deep Research, and what I learned along the way:
Yesterday morning I got up at 5:45 AM as usual, emptied the dishwasher, brewed a cup of tea, and by 6:00 AM I was starting to engage with Deep Research.
I’d spent some time the previous day challenging OpenAI’s o1-pro reasoning model to develop and write a series of seven papers on foundational thinking around navigating advanced technology transitions. The process was much faster than what I could have achieved on my own (by a matter of months). But I still found myself feeling that there was something lacking in the depth of simulated thinking and scholarship that the model was showing …