Co-Writing with an LLM
It probably doesn’t come as a huge shock that most of the images used in Reversal are created using generative AI. I’m sure many of you, like me, can clock that distinct LLM vibe on an image from a mile away: people with faces just a little too Pixar-ish, a plethora of clutter, weirdly repetitive colorways between background and foreground…
Using AI-generated images helps me establish the right look and feel for the story and gets me into the right headspace for writing. It also helps when I’m figuring out how to describe the characters and which details to pull out while polishing scenes and dialogue.
But I also use LLMs on the actual writing itself.
That’s not to say LLMs are good authors. They’re actually pretty terrible, especially with fiction. They overwrite, fill paragraphs with cliché, and have no sense of plot, characterization, or pacing … which, duh, of course. An LLM is a statistical model. It generates text by predicting what comes next. It’s basically fancy autocomplete, so naturally it gravitates towards bland, generic language. Its prose often borders on trite.
It also can’t actually reason about things. It can generate outlines of plots but will never be able to write an actual novel. At least not a good one.



That being said…
Some of the most fun and creative experiences for me as a writer have been writing in tandem with other writers. A bit like improv, a bit like role play, a bit like a prompt game, responding to the output of other creatives supercharges my own creativity. But those moments of collaboration rely on recruiting willing coauthors, which can be tricky.
AI, if configured and trained correctly, could be that perfect compromise. The coauthor who is always online, always answers “yes and…” and never fights me on characterization.
And, in fact, LLMs are great tools to use as a writer. I find they help me get into the flow of writing faster and stay in that flow for longer.
While experimenting with different ways to have the tandem experience with AI that I longed for, I ended up building myself a simple word processor. When I feel like my flow is slowing down, I copy and paste a block of text into this simple piece of software and highlight the place where I want the LLM to make suggestions.
Sometimes I ask AI to complete a sentence, sometimes I ask it to add the next sentence. I have the number of tokens capped around fifty by default, so it never returns more than about a sentence and that’s usually more than I actually need.
The goal is not to have the LLM do my writing for me. The goal is to have the LLM nipping at my heels so that I keep writing. Most of its suggestions are bad, but looking at a bad idea often gives me a clearer mental picture of what I want to be there instead.
Look at this passage from Chapter 1.13:
“Gene…”
“How? How is it possible?”
“It’s not,” Ethan shrugged. “Like you said.”
He had taken Gene to a Starbucks nearby to try to get him to calm down and sit still long enough to be talked out of believing the unbelievable.
I knew I wanted to fill things out by describing the coffee shop a little … but what to say about it? Nothing came to mind. I had a sense that it was probably a bit crowded and that the extra people made Ethan feel more comfortable that their conversation would not be overheard.
Here are the suggestions the LLM made for the sentence fragment “The coffee shop…”
bustled with noise and activity, a backdrop filled with distant laughter and espresso machines hissing—true snapshots of life’s mundane normalcy
was abuzz with laid-back conversations and soft cafe rhythms, adding an uncanny contrast to Gene’s angst a moment earlier
was buzzing with activity—at 8 am, it was the hub for the campus students and early-risers from the neighborhood—perhaps having something normal around Gene would help him snap out of his funny fever
was bustling with the usual hustle bustle—baristas yelling out complex drink orders, students quietly typing away on their laptops and contemporary music fighting to be heard over the din
vibrating with neutral tunes, mediocre jokes from students cramming for anatomy finals conveniently intermingled with silent aesthetes mutely judging each painting put up for sale around them.
I don’t like any of those suggestions as a whole, but there are several phrases I think could work. In particular I like the expression “vibrating with neutral tunes”. Once I’ve selected that option I can edit it to my heart’s delight.



I ended up cutting the rest of the sentence and incorporating bits of description inspired by details in the other options:
He had taken Gene to a Starbucks nearby to try to get him to calm down and sit still long enough to be talked out of believing the unbelievable. The coffee shop vibrated with neutral tunes, the white noise of baristas working their machinery and customers shuffling one by one to the cashier. Everything smelled caramelized and smokey. There was enough background noise for the place to feel safe, and enough distance from any one individual eavesdropper to feel private. Still, Gene seemed more distressed than he had been on the open plaza of the Capitol.
Sometimes all I need is a little help in between two passages I’ve already written. Sometimes I’m stuck and I need to do a couple rounds of back and forth with the LLM in order to get through the writer’s block.
It’s the same type of creative exercise that you might find in Eno’s Oblique Strategies, and I find that it works really well for me. Some days I can write a good 600-700 words without assistance, but other days I need that little nudge.
More Than Prompts
When I first started co-writing with LLMs I wanted to train one on my own writing, but most of my experiments in doing that didn’t result in anything especially impressive. In the end I defaulted to working with a more generic GPT-4 model.



If you’ve only used ChatGPT, you may not realize how much control you can get over the models beyond their “chat” interface. LLMs take a couple of different inputs other than the written prompt, and these influence what they return.
I typically interact with models over an API, but you can actually enter all of this information into a ChatGPT window and it will treat it as configuration.
These are the most useful ones to know:
Temperature
How spicy do you want the model to get? A low temperature returns common phrases, as the model gravitates towards the most likely combinations of words. A high temperature encourages the model to select less traditional options as it constructs sentences. At the highest temperatures, responses risk being absolute gibberish.
Tokens
How long do you want the response to be? You can think of the number of tokens as roughly the number of words. I keep this short, because I want just enough to kickstart my creative flow. Paragraphs of text are just paragraphs I have to delete.
N
How many responses do you want from the model? I have this set to 5 so that I can see a bunch of different options.
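Put together, these settings map onto a request like this. This is a minimal sketch assuming the OpenAI Python SDK; the model name and exact values are illustrative, and the actual network call (which needs an API key) is left commented out:

```python
# Sketch of a completion request using the three knobs described above.
# Assumes the OpenAI Python SDK; model name and values are illustrative.

def build_request(prompt: str) -> dict:
    """Assemble keyword arguments for a short, varied completion."""
    return {
        "model": "gpt-4",           # any chat-capable model works
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.9,         # fairly spicy: favor less common word choices
        "max_tokens": 50,           # about a sentence -- just enough of a nudge
        "n": 5,                     # five alternative continuations to pick from
    }

# To actually send it:
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(**build_request("The coffee shop..."))
# for choice in response.choices:
#     print(choice.message.content)
```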
Prompts
The prompt in my custom word processor is assembled by detecting the text I’ve highlighted and wrapping it with some metadata I’ve programmed into the request. By default this is: Making each option unique, what comes next: "${text}". I’m also able to configure additional metadata through the word processor itself (sometimes I write things like “focus on giving me descriptions” or something similar).
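In code, that assembly looks roughly like this. It’s a simplified Python sketch, not the word processor’s exact implementation, and the function name is my own:

```python
# Sketch of how the word processor wraps highlighted text into a prompt.
# The default template mirrors the one described above.
DEFAULT_TEMPLATE = 'Making each option unique, what comes next: "{text}"'

def build_prompt(text, extra=None):
    """Wrap the highlighted text in the default template, prefixed with
    any extra instructions configured in the word processor."""
    prompt = DEFAULT_TEMPLATE.format(text=text)
    if extra:
        prompt = f"{extra}\n{prompt}"
    return prompt
```

So highlighting “The coffee shop” with the extra instruction “focus on giving me descriptions” would send both lines to the model, with the template wrapping the highlighted fragment.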
This starts to dip our toes into the world of “prompt engineering,” which is worth its own discussion.
What’s Next?
I haven’t completely given up on training a model on my own writing just yet. I didn’t care for the results, but I was training on Google Colab, which has limited horsepower. A hardware upgrade should allow me to train more sophisticated models, which might give me better results.
The other thing I’d like to try, once I’ve trained a model to write like me, is whether an LLM can normalize the work of two different writers: take a passage from one writer and revise it into the style of the other. That would allow multiple writers to collaborate on the same story without disrupting its flow.