Can ChatGPT be used to extract useful learning insights from long and meandering conversations between learners and AI?
Last week I posted what was intended as a slightly cheeky piece about why instructors shouldn’t simply say “show me your prompt” when students use AI in assignments.
I hadn’t intended to take it any further. But a couple of exchanges this past week planted a “what if” question in my brain that refused to go away: what if AI models could be used to extract useful insights from the long, complex, sometimes tangential, and often non-linear conversations between students and AI apps?
It’s not a particularly original question, and it’s one that educational theorists, practitioners, and learning companies are increasingly grappling with. For instance, Instructure (maker of the Learning Management System Canvas) recently announced a partnership with OpenAI that will, amongst other things, “create visible learning evidence” based on student-AI interactions. There are also approaches like the Prompt Analytics Dashboards described by Kim et al. for analyzing student-ChatGPT interactions in English as a Foreign Language writing. I’d also be remiss if I didn’t mention my colleague Punya Mishra’s work and thinking here.
But despite these and a few other instances, I was surprised by how little work has been done on assessing student progress from informal, convoluted conversations with AI. And so the rabbit hole opened up …
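To make the idea a little more concrete before diving in: here’s a minimal sketch of the kind of thing I mean, in Python, using the OpenAI SDK. The transcript, the rubric prompt, and the model name are all illustrative placeholders I made up for this post, not a tested instrument.

```python
# Minimal sketch: ask a chat model to pull learning evidence out of a
# student-AI conversation log. Assumes the OpenAI Python SDK (v1+) and
# an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# A hypothetical excerpt of a student-AI conversation.
transcript = """
Student: Can you explain why my simulation keeps diverging?
AI: A few things could cause that. What integration step size are you using?
Student: 0.1 seconds. Oh wait -- should I try a smaller one?
AI: That's worth testing. Why do you think step size might matter here?
"""

# An illustrative rubric prompt; a real one would need far more care.
RUBRIC_PROMPT = """You are reviewing a conversation between a student and an
AI assistant. From the transcript, identify:
1. Concepts the student appears to understand.
2. Misconceptions or gaps revealed by their questions.
3. Evidence of the student refining their thinking as the conversation unfolds.
Quote the transcript to support each point."""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable chat model would do
    messages=[
        {"role": "system", "content": RUBRIC_PROMPT},
        {"role": "user", "content": transcript},
    ],
)

print(response.choices[0].message.content)
```

Even a naive pass like this hints at why the question is interesting: the evidence of learning sits in the arc of the conversation, not in any single prompt.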
Related:
More on artificial intelligence at ASU.