When is it OK to use AI to craft emails, and when do you risk stepping into a whole lot of unexpected hurt? Intrigued by the question, I dusted off my risk hat and dived in.
Imagine you email a colleague with a suggestion, and the response you get is personable (“thank you for your enthusiasm …”), empathetic (“I appreciate that you’re eager to contribute …”), affirmative (“it’s clear you bring valuable experience and energy to the office …”), but also reminds you of your place (“that said, I want to be clear about how we handle …”).
Now imagine that, at the end of the email, you see this line:
“Would you like me to make the tone slightly sharper (more ‘firm correction’) or slightly warmer (more ‘mentorship with a gentle reminder’) depending on how you want this person to walk away feeling?”
It’s the kind of scenario that makes headlines with some regularity these days: lawyers, academics, and others inadvertently copy and paste AI-generated text into papers and reports without noticing they’ve also pasted part of the model’s meta-response.
Professional cut-and-paste jobs like this are embarrassing, but they don’t directly affect most of us. When missteps like this occur in emails, though, and especially in personal emails, the fallout can be considerable, potentially jeopardizing the sender, the organization they’re part of, and the recipient.
Despite this, there’s a growing trend of using AI to craft emails in professional settings. This isn’t necessarily a bad thing, as AI can be useful here in a number of ways. The trouble is that, without clear guidelines on how and when to use AI in email communications, and when not to, the risks can get serious pretty fast if they’re not managed effectively.
And while a lot is known about institutional communication dynamics, and some work has been done on AI-mediated communication, surprisingly little has been written about how to avoid serious AI email mishaps.
So I thought I’d put my risk hat back on and dive a little deeper, including developing a simple risk model …
Related:
More on artificial intelligence at ASU.