With the advent of high-quality computer-generated text, writers suddenly have a half-decent writing buddy who at least wants to do what they ask (even if it doesn’t always succeed) and has no desire to take any credit. Never before could writers get paragraphs of fluent text on a topic of their choice, except from another writer. (Ghostwriting may be an appropriate analogy for these writerly use cases of AI.)

This poses questions for writers everywhere: Which parts of writing are so tedious you’d be happy to see them go? Which parts bring you the inexplicable joy of creating something from nothing? And what is it about writing you hold most dear?

I’ve spent the past five years working with computer-generated text systems as part of my PhD work in human-AI interaction, and talking to writers about how they do—or do not—want to incorporate them into their practice. The woman above, working on her fantasy novel, is based on a woman I interviewed as part of a study on the social dynamics of writers requesting and incorporating support from computers.

It’s useful to think about three different parts of writing: planning, drafting, and revising. I consider these to be parts, not stages, of writing; they are cognitive processes involved in writing, and planning can occur at the beginning of a writing project, but also in the middle and even at the end. By thinking through specific parts of writing, we can understand in more detail how computers will end up affecting writing as a whole. This exploration will not only help us understand the future of writing, but also help us create the kind of future we’re happy to live in.

Let’s go back to the writer working on her first novel. She was happy to let a computer push her through writer’s block. But she was adamant that crafting the plot line was fundamentally human. The plot was the story she wanted to tell. This is where she felt her intention lay, what she workshopped and worked over.

Other writers agree with her: there is a kind of problem-solving involved in figuring out where a story or poem or essay is going, and many writers feel they are uniquely positioned to solve such problems. An AI writing system proposing a potential solution can be seen as a challenge, where the computer’s ideas represent a low bar the writer must improve on, a first step toward crafting an even better solution.

One writer I interviewed was working on a TV pilot script. She was told that when it comes to TV and comedy, “If you feel like turning right, turn left.” Her writing had to be uniquely original, because people have seen so much TV already. No matter how good a computer-generated line of dialog might be, she felt she would always have to be better. She imagined using an AI writing system that would finish a scene for her; then she’d know there was something more unexpected and insightful she would have to find.

What about the tricky act of getting words on the page? In cognitive psychology research, this is often called “translating,” because we’re translating amorphous ideas into discrete words. Most writers, or really most people who have to write, know the feeling of a mind gone blank. The average writer trains themselves out of this fear, but no matter how many times you’ve put words on the page, you’re bound to encounter that moment when you don’t know what comes next. This is literally the task most computer systems are trained to do: predict what comes next.
The role of AI writing systems as drafting buddies is a big departure from how writers typically get help, yet so far it is their biggest selling point and use case. Most writing tools available today will do some drafting for you, either by continuing where you left off or by responding to a more specific instruction. Sudowrite, a popular AI writing tool for novelists, does all of these, with options to “write” where you left off, “describe” a highlighted noun, or “brainstorm” ideas based on a situation you describe. Systems like Jasper.ai or Lex will complete your paragraph or draft copy based on instructions, and Laika is similar but more focused on fiction and drama.

These tools are good and getting better; an AI writing system draws on more text than any one person can read, and its ability to lean into the unexpected can be perfect for writers looking to make their writing feel fresher. Computer-generated text has been likened to automatic writing, or to a well-read but deranged parrot, giving it abilities almost tangential to those of human writers, perhaps even complementary ones.

Yet it’s interesting that so many AI writing systems are created to finish our sentences, or predict our next ones, because when I’ve talked to writers about what they typically want help with, no one ever talks about asking a person to write for them. This isn’t the way writers typically interact with people when it comes to their work, even though it’s what computers are best at, and what they’re mostly being used for right now.

While some writers are eager to get sentences on demand, others are hesitant to let an external entity choose their words. As several writers told me, once something is on the page, it’s just a little bit harder to imagine anything else. This is one reason many writers don’t like to get feedback early on in a project: the work is too delicate, and they need to shore up the idea so that others can see its potential. A computer, while not explicitly bringing its own intention, can disrupt the writer’s intention. And other writers simply take pride in sitting down and pumping out a thousand words. It’s like exercise: you need to keep it up, or your skills atrophy.

A computational perspective on your work is an old idea, dating back to early research on statistics in language that used the frequency of words like “the” and “with” to discover authorial fingerprints in essays. But new technologies will sound more like a person, able to note that you’ve introduced a new term without properly defining it, or to describe why a scene might be moving too slowly.

Most writers are eager to get eyes on their work, and a computational eye may feel less frightening than your best friend’s; the computer might judge you, but not in a way that will affect your future relationship with it. Some writers think workshopping with a computer might be akin to talking to yourself, in that it’s private and feels internal; it might not feel like someone else is in the room. Computers may become early-stage editors, giving writers a way to get feedback before showing their work to someone more important.

But writers are also bound to mistrust computers because, after all, where is the computer coming from? We can stand all manner of sin when it comes to feedback as long as we can contextualize it: the feedback is coming from that professor who never liked our work, or our friend who is always positive, or an editor with decades of experience.
It will take some time for us to understand what the computer really brings, and how often, or in what situations, we should trust it. Writers I have talked to worry about the normativity of AI writing systems, that they will reflect a straight white male perspective; but they also fear that a system attempting to represent some other particular perspective may lead to tokenization and stereotyping. In this way, computers will be in a bind, but system designers may find ways to bring context to a computer without needing to liken it to a person or identity group.

I want to come back to this idea of what we hold most dear, and the human nature of writing. It’s easy to be for or against including computer-generated text in your work, but I predict the conversation will get much more nuanced as we encounter the various ways computers can affect our writing.

Writers want to protect their authenticity and intention. It may be useful to think of computer-generated text as dancing with the writer’s text. Most writers aren’t against collaboration. They just want to be in control of their own dance. As long as the computer can match the writer’s footwork, writers are happy to let a computer contribute to the performance. But the moment the computer steps on the writer’s toes, writers may get spooked.

A plot point here, a stunning sentence there, an editorial comment that pushes the writer a little further: no one task is off the table. But when a computer ends up changing what the writer set out to achieve, writers may start to wonder whether their vision is getting muddied by an entity that lacks any experience of the real world.

It’s also worth reflecting on computer-generated text from the perspective of a reader. When will computers start to interfere with our sense that a person, and not a machine, had something to tell us? It may be that readers, not writers, draw the line on what’s acceptable, but we haven’t yet reached the point where that decision has to be made. We may get there in this century, and then we’re going to need to ask ourselves what it is about communicative intent that we find so important.

If I have an idea for a novel, but the computer writes most of it, is it still my story? This question will not be answered by counting how many words I wrote and how many the computer wrote. It’s going to be answered culturally; it’s going to be a feeling we have about where authenticity or truth really lies. But thinking more concretely about where computers can get involved will help us answer this question, and make sure we answer it in a way that respects what we really value about writing.