ChatGPT and Emotional Outsourcing
Plenty of ink has been spilled over AI’s potential to help students plagiarize college essays or to automate people’s jobs. But what about writing that’s meant to be more personal?
Take, for example, the letter Vanderbilt sent to its students after the shooting at Michigan State University. This letter expresses the administration’s desire for the community to “reflect on the impact of such an event and take steps to ensure that we are doing our best to create a safe and inclusive environment.” It was not written by a human being.
The letter was written by an AI tool called ChatGPT, a user-friendly interface to a large language model (LLM). Like the predictive text on your phone, an LLM is trained on a large body of text and produces sentences by repeatedly choosing words that are likely to come next.
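To make that description concrete, here is a toy sketch in Python of next-word prediction. It is not how ChatGPT works under the hood (real LLMs use neural networks trained on enormous corpora of subword tokens); it only illustrates the basic idea of extending a sentence by repeatedly picking a likely next word, and the miniature corpus and function name are invented for the example.

```python
# Toy next-word prediction from bigram counts (illustrative only).
# Real LLMs use neural networks over subword tokens and vastly more data.
from collections import Counter, defaultdict

corpus = (
    "we offer our deepest condolences to the community . "
    "we offer our sincerest sympathies to the community . "
    "we offer our support to the students ."
).split()

# Count, for each word, which words tend to follow it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Greedily extend `start` by repeatedly choosing the most common next word."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("we"))  # -> "we offer our deepest condolences to the community ."
```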
Many people were upset to learn that Vanderbilt’s letter was written using ChatGPT — so much so that the administration issued an apology. But it’s not clear what exactly was worth apologizing for. The content expressed in the original letter was not insincere, nor was it produced illegally. Nothing about the wording was objectionable.
This case raises questions about tasking AI with what I’ll call emotional writing: writing that is normally accompanied by certain emotions.
Examples include an apology, an offer of support, a thank-you note, a love letter. What exactly is the source of unease when a human being off-loads emotional writing to an AI model? And does that unease point to something morally wrong? When we consider a few related cases, I think we’ll find that the lack of a human author is not the main concern.
Let’s start by noting that the normal writing process for a university letter is similar to the process ChatGPT uses. Normally, someone within the administration might be asked to write the first draft. That person researches similar letters, using them as a guide. This draft is then vetted, edited lightly as necessary, and sent to the campus community. It’s natural to think that the main difference is that there’s a human at one end of the process in the normal case, and not (or not really) in the ChatGPT case.
Will any human do? Consider other cases where emotional writing is done by someone outside the situation. A high schooler gets their mom to write an apology for them. A university pays a freelancer to express sympathy for its students. A man with no game hires Will Smith to tell him what to say to his crush. In these cases as well, the recipient of the speech might reasonably be disappointed to discover the source of the words.
These considerations suggest that what’s objectionable in the AI case is not specifically the lack of a human author. The problem is that the author is not bound up in the relationship for which the words are written.
What all these cases have in common is that they involve emotional outsourcing: someone avoiding an emotional task by giving it to someone (or something) else. In these cases, the deeply personal writing becomes a kind of mercenary task.
Surprisingly, even having the right person write the text may not be enough to avoid this problem! Suppose someone writes a love letter to their romantic partner, and after their breakup reuses the letter by sending it to someone new. I would be peeved. Wouldn’t you? The emotional work has been done by the right person, but not with the right aim; not with the current recipient in mind. The work has been outsourced to the writer’s prior self.
There are a couple of aspects of emotional outsourcing that might seem problematic. First, outsourcing emotional writing draws attention to the fact that much of our communication is socially scripted. If even a well-trained computer model can perform the task, then that task is shown to be formulaic. In a society that prizes individuality and spontaneity as signs of authenticity, relying on a formula can seem subpar. (Consider how you might react if a person used a template for a letter of condolences: “Dear [recipient], We offer our [sincerest / most heartfelt / deepest] [condolences / sympathies] in the wake of the [tragedy / tragic event / tragic events / atrocity] of [month, day].”)
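For what it’s worth, that bracketed template really is something a few lines of code can fill in. The hypothetical sketch below (the function name and example inputs are my own) does nothing more than pick an option for each slot, which is part of why relying on the formula can feel subpar.

```python
# Hypothetical sketch: filling in the condolence template above by
# choosing one option for each bracketed slot. Nothing here "feels" anything.
import random

TEMPLATE = ("Dear {recipient}, We offer our {intensifier} {noun} "
            "in the wake of the {event} of {date}.")

def condolence(recipient: str, date: str) -> str:
    """Fill each slot of the template with one of its allowed options."""
    return TEMPLATE.format(
        recipient=recipient,
        intensifier=random.choice(["sincerest", "most heartfelt", "deepest"]),
        noun=random.choice(["condolences", "sympathies"]),
        event=random.choice(["tragedy", "tragic event", "tragic events", "atrocity"]),
        date=date,
    )

print(condolence("the campus community", "June 1"))
```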
I think objecting to this feature of emotional outsourcing is a mistake. Social scripts are to some extent unavoidable, and in fact they make possible many of the actions we perform with our speech. The rule against drawing attention to the script is also ableist, insofar as it disadvantages neurodivergent people for whom explicitly acknowledged social scripts can be more hospitable. While drawing attention to the formulaic nature of communication is taboo — and that partly explains people’s disapproval of emotional outsourcing — that’s not enough to make emotional outsourcing morally objectionable.
The second issue is more serious: emotional outsourcing misses some of the action behind the speech that gives the speech its meaning. Language not only means things; it also does things. A promise binds. A statement asserts. An apology repairs. (Often the action a piece of speech performs depends on whether the audience takes it up. I can say “I do” as often as I’d like, but I haven’t married someone unless that person accepts it.)
Emotional writing performs specific actions — consoling, thanking, wooing — not only through the words it uses, but also in part through the act that produces those words.
Writing out a thank-you note is itself an act of appreciation. Thinking through how to express care for your community is itself an act of care. Putting words to your love is itself an act of love.
Part of what makes the words meaningful is lost when those prior actions are absent — that is, when someone (or something) else produces them. People often say with respect to gestures of kindness, “it’s the thought that counts.” When ChatGPT is used for emotional writing, at least some of that thought is missing.
Keeping these issues in mind, it’s worth asking whether outsourcing emotional writing to AI is entirely bad. Thinking deeply about grief can put people in a challenging place emotionally. It could trigger past trauma, for example. Could it be a mercy to the person who would otherwise be tasked with writing a sympathy letter to leave the first draft to an LLM that feels nothing? Or is it appropriate to insist that a human feel the difficult emotions involved in putting words to sympathy?
There may also be cases where a person feels that they are simply unable to express themselves in a way that the other person deserves. Seeking outside help in such a case is understandable — perhaps even an act of care for the recipient.
I have argued that emotional outsourcing is an important part of what people find objectionable about tasking AI with emotional writing. Emotional outsourcing draws attention to the formulaic nature of communication, and it can mean missing out on the thought that counts. However, much remains to be explored about the moral dimensions of emotional outsourcing, including what features of a case, if any, could make emotional outsourcing the best choice.