This week I commissioned a promising young poet, the artificial intelligence chatbot ChatGPT, to write a haiku about the Great Fire of London in 1666. A few seconds later, out came the following: “Ash falls like snow/Great fire sweeps through London town/Destruction reigns supreme.”
Well, not bad. The first line carries a haiku-like echo of the seasons. I am less convinced by the second, which reads suspiciously like the prompt itself. The third has too many syllables, but also a double meaning in “reigns”, evoking both the English monarchy and a rain of ash. Was that deliberate?
It was better than the sonnet I asked ChatGPT to write on the same subject, which had dodgy metre and ponderous rhymes (“In the end the fire was finally quenched/Leaving a legacy of courage and strength”). Best to pass over the country chorus it produced about New Year’s Eve: “I’ll raise a glass to the old/And toast the new/Bring on the fireworks and cheers/It’s time to start fresh.”
Where all this spare creativity comes from is hard to say. As ChatGPT told me when I asked, “Large language models like me are trained on massive amounts of text data, ranging in size from hundreds of gigabytes to several terabytes.” But there is plainly more in there: whatever prompts I typed into its text box, it fulfilled most of them within seconds.
Many others have been playing around with ChatGPT since it was launched by OpenAI, a California company, last week. The machine-learning oracle has creaked under the strain of requests from more than 1mn users, ranging from composing short essays to answering questions. It has written letters, given basic medical advice and summarised history.
ChatGPT is extremely impressive, as is Dall-E, the AI generator of digital images from text prompts that OpenAI first unveiled last year. Having tried both, you cannot escape the feeling that natural language agents will revolutionise everything from music and video games to law, medicine and journalism. The chatbots are coming for us professionals, and fast.
But ChatGPT is also like some people I know: it can turn sketchy information into fluent and persuasive answers. It sounds right even when it is making things up on the basis of something it has read somewhere, itself regurgitated from other sources. Its smooth, articulate voice is usually convincing, but not wholly reliable.
Take the five-paragraph essay it produced when I asked it to describe Hamlet’s treatment of Ophelia in Shakespeare’s play. It was a fair exposition (“Throughout the play, Hamlet is caught between his duty to avenge his father’s murder and his love for Ophelia”), but it asserted that “Hamlet’s actions are motivated by a desire to protect those whom he loves”. Really?
Then there was the legal letter it wrote at my instruction to the other driver in a fictitious car accident. “According to the police report, you were speeding and did not stop at a red light, which caused you to collide with my car . . . I therefore request you to fully and fairly compensate the damage I have suffered,” it wrote. Convincing, but imaginary.
The danger is that ChatGPT and other AI agents could produce a technological version of Gresham’s 16th-century law of debased coinage, under which “bad money drives out good”. When an unreliable linguistic mash-up is freely available while original research is costly and laborious, the former will thrive.
That is why Stack Overflow, an advice forum for coders and programmers, this week temporarily banned its users from sharing answers from ChatGPT. “The primary problem is that while the answers which [it] produces have a high rate of being incorrect, they typically look like they might be good,” the moderators wrote.
ChatGPT’s creative works are less vulnerable to being exposed, since they are imaginative by design. Even if they are mediocre, its sonnets and Dall-E’s images cannot be definitively wrong. But Sam Altman, OpenAI’s chief executive, believes his agent will become a useful research tool. “We can imagine an ‘AI clerk’ taking natural language queries like a human,” he wrote.
I can picture that too: it already feels close. Its essays on Hamlet ran long rather than deep, but it could accurately list the scenes in which Hamlet and Ophelia both appear. It also produced a succinct summary of Formula 1’s best drivers of the past. That kind of basic research saves people time.
But it has to be used carefully, and there lies the catch. ChatGPT is like an urbane, confident version of Wikipedia or Google search: useful as a starting point, but not for complete answers. It conforms all too well to the journalist Nicholas Tomalin’s summary of the essential qualities for his trade: “Rat-like cunning, a plausible manner and a little literary ability.”
There is no point in trying to stop ChatGPT now that it has been unleashed, and it will probably keep improving. In time, we will discover uses for natural language AI agents that we cannot yet imagine. In the meantime, I hope destruction does not reign supreme.