Using AI as a writing partner

I’ve been curious about GPT-3 as a creative tool since reading about Matt Webb’s experiments in 2020. GPT-3 (Generative Pre-trained Transformer 3) is a language model that can create realistic text. The results are impressive, and it has even been used to write a Guardian editorial. In his experiments, Webb was confronted by phrases and ideas that did not exist before. The model produced original concepts such as “The public bank of Britain”, and passages about “a three-mile wide black ring [that] was found in the ocean using sonar”.

The GPT-3 model is trained on billions of words of Internet content, and Webb has described elsewhere how “Reading GPT-3’s output, for me, feels like dowsing the collective unconscious. I’ve never seen anything so Jungian.”

You can get a quick feel for GPT by playing with the Talk to Transformer page, which allows you to experiment with the basic trained model. There’s a good overview by The Verge, ‘OpenAI’s latest breakthrough is astonishingly powerful, but still fighting its flaws.’ Or, for a more whimsical experiment, Janelle Shane tried asking the model how many legs a horse has, concluding, “It’s grammatically correct, it’s confident, and it’s using a lot of the right vocabulary. But it’s also almost completely wrong. I’m sure I’ve had conversations like this at parties.” The model’s origins mean it’s also particularly well informed about topics such as Miley Cyrus and Harry Potter.

Sadly, I’ve got no chance of getting my hands on GPT-3 any time soon, since it is kept under tight control to stop it from being used for evil. But then I remembered that Shardcore had used the earlier GPT-2 model for his software-generated, book-length collaboration with John Higgs, The Future Has Already Begun.

I realised that GPT-2 ought to be sophisticated enough to produce something worthwhile, so I decided to give the basic GPT-2 model some additional training based on my creative writing. I’ve read recommendations that you need 25MB–100MB of text, but I’m using 6MB of my writing as input (generated from the source documents using Apache Tika). I was then able to use this with a colab notebook built by Max Woolf to do the hard work.
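For anyone wanting to try something similar, here’s a rough sketch of the two steps involved, using the tika-python wrapper for Apache Tika and Max Woolf’s gpt-2-simple library (which his notebook is built around). The file names, folder and step counts here are placeholders rather than my actual settings:

```python
# Sketch: extract plain text with Apache Tika, then fine-tune GPT-2 on it
# using Max Woolf's gpt-2-simple library. Paths and settings are examples only.

import glob
from tika import parser          # pip install tika (needs Java for the Tika server)
import gpt_2_simple as gpt2      # pip install gpt-2-simple (TensorFlow 1.x)

# 1. Flatten the source documents into one plain-text training file
with open("corpus.txt", "w", encoding="utf-8") as out:
    for path in glob.glob("writing/**/*.*", recursive=True):
        parsed = parser.from_file(path)
        if parsed.get("content"):
            out.write(parsed["content"] + "\n\n")

# 2. Download the small 124M-parameter GPT-2 model and fine-tune it on the corpus
gpt2.download_gpt2(model_name="124M")
sess = gpt2.start_tf_sess()
gpt2.finetune(sess,
              dataset="corpus.txt",
              model_name="124M",
              steps=1000,          # more steps = closer fit to the corpus
              sample_every=200,    # print a sample while training
              save_every=500)      # checkpoint the run periodically

# 3. Generate new text in (something like) your own voice
gpt2.generate(sess, length=200, temperature=0.8)
```

The notebook wraps all of this up so it runs in the browser, but it’s striking how little code is actually involved.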

(I’d not used colab notebooks before, but I am stunned at how they combine workbook and instructions, along with a free VM to run it all on. For more detail, check out Robin Sloan’s post The Slab and the Permacomputer. It’s amazing to see how lots of people’s hard work has combined, allowing me to play with sophisticated models without knowing much about Python or machine learning.)

The snippets of text generated are identifiably mine in a strange way, but there are flights of fancy that surprise me. A description of a character: “He was a man of his word, not a man of action.” A phrase: “Nobody felt safe watching another human being do something with their lives”. There was a whole mad fantasy about “a group of ‘dusk-blue crabs’ who went by the name of ‘the great snout’”. There are also moments where the model just goes on and on repeating “Wax tins! Wax tins! Wax tins!”. Weirdly enough, there was also a passage about a John Higgs:

John Higgs, the English economist and writer, died on 26th October, 2001. He was 83 years old. He was happy to join the world scene, and for good reason. He and many of his ideas were burned at the stake for their uselessness.

The main issue I have is my training data, which is unbalanced in various ways – a few novel-length texts, lots of notes. As clever as machine learning is, it’s only as good as your inputs.

Writing with GPT-X is not simply about churning out text – this text does need to be worked on (this is not ‘cheating’ – Burroughs used to screen his manual cut-ups, looking for poignant and interesting generated sections). There are also different ways to work with the system – Robin Sloan has described some of the techniques he has used, such as hiding prompts from the reader (but not the model) to produce effective writing. These techniques are all waiting to be explored.
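As a rough illustration of the hidden-prompt idea (my own sketch of the general approach, not Sloan’s actual setup), you can seed gpt-2-simple with a prompt and then strip it from the output before showing the result to a reader:

```python
import gpt_2_simple as gpt2

# Sketch of the "hidden prompt" idea: steer the model with a prompt,
# then show the reader only what comes after it. The prompt text is just an example.
sess = gpt2.start_tf_sess()
gpt2.load_gpt2(sess)   # load a previously fine-tuned checkpoint

hidden_prompt = "Describe the sea at night, slowly, as if remembering it."
raw = gpt2.generate(sess,
                    prefix=hidden_prompt,
                    length=150,
                    temperature=0.8,
                    return_as_list=True)[0]

visible_text = raw[len(hidden_prompt):].strip()   # the reader never sees the prompt
print(visible_text)
```

The reader only ever sees the continuation, while the prompt quietly steers the model’s tone and subject.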

Matt Webb has written in detail about his experience of this collaboration in ‘GPT-3 is an idea machine’:

Using GPT-3 is work, it’s not a one-shot automation like spellcheck or autocomplete. It’s an interactive, investigative process, and it’s down to the human user to interview GPT-3. There will be people who become expert at dowsing the A.I., just as there are people who are great at searching using Google or finding information in research libraries. I think the skill involved will be similar to being a good improv partner, that’s what it reminds me of.

GPT-3 is capable of novel ideas but it takes a human to identify the good ones. It’s not a replacement for creative imagination. In a 15 minute session with the A.I., I can usually generate one or two concepts, suitable for being worked up into a short story, or turned into a design brief for a product feature, or providing new perspectives in some analysis – it feels very much like a brainstorming workshop, or talking something through with a colleague or an editor.

GPT-X can produce text faster than anyone can read it, but as Sloan writes, “it’s clear that the best thing on the page, the thing that makes it glow, is the part supplied by a person”.

For me, the question is whether it can produce interesting art (particularly art that is not solely interesting because of its process). What I’ve seen so far is both spooky and exciting. Whether this is more than a cheap trick of text remains to be seen, but my initial explorations make me very excited about collaborating further with this model.
