Generative AI is quickly transforming–some might say disrupting–the practice of copywriting. Those of us who’ve made the craft of writing central to our careers are wondering where this is all going to lead.
As SADA Content Editor, I’ve been using generative AI to help produce copy on a variety of subjects, from cloud security to generative AI itself. While it still seems too early for anyone to claim to be an expert on generative AI, certain best practices and strategies are beginning to emerge.
We’re also seeing some interesting signals from Google that Workspace is about to get a lot more interesting, with the integration of generative AI tools. I don’t think it’s an exaggeration to suggest that AI is poised to upend the way we all conduct our day-to-day business as radically as when we leapt, decades ago, from hand-written memos to email.
Consider this post an attempt to contextualize not just how to use generative AI, but how to think about it as we use it. I’m going to focus specifically on writing, but know that there are larger implications here for designers, project managers, engineers–honestly, anyone who uses a computer–as well. It’s important not only to experiment with these powerful new tools, but also to keep a conversation going around how they influence our understanding of what it means to communicate, and to what end.
1. The Casio keyboard dilemma
When I was 12 years old, I got a Casio keyboard for Christmas. This was an incredibly exciting gift for about three minutes. After I tried out all the various fake instrument sounds and drum beats, this dazzling toy made one fact abundantly and painfully clear–I had no clue how to play the piano.
Generative AI isn’t a magic wand that can turn STEM-focused engineers into wordsmiths or, conversely, writers like me into coders. Like a Casio keyboard under the fingers of a concert pianist, generative AI can produce impactful copy when it’s in the hands of writers with a strong sense of what goes into creating a compelling reading experience, and how to achieve it through certain road-tested craft principles. If you possess strong opinions about the Oxford comma, semicolons, the passive voice, the objective correlative, how many spaces to include around an em dash, and the difference between “that” and “which,” you’re well prepared to make your generative AI copy sing.
As an editor, you often find yourself in a position of considering two or more equally valid points of view or ways to express something. In such cases, you juggle how the copy supports business objectives, how a particular message will land for a particular audience or professional role, and even how the copy will be read by search algorithms. Some of these judgment calls are obvious, like correcting a typo or misspelled word. Most are less obvious, like deciding what features to emphasize in a product roll-out, where to give readers a break with white space, or how to manage subtext.
2. Subtext
A few words explicitly on the concept of subtext… What is it, exactly? Here’s Merriam-Webster: “the implicit or metaphorical meaning (as of a literary text).”
You pick up on subtext whenever you “read between the lines” and perceive a message that transcends the literal meaning of the words. For instance, say you’re reading a well-crafted blog post on gardening. While the subject of the post may be how to cultivate dahlias, the style in which it is written, the writer’s mastery of language, the design elements of the blog–even the font–all convey the message that this writer is an authority on this subject and can be trusted. Conversely, a sloppily composed, typo-riddled post by the most brilliant horticulturist in the world makes them appear less trustworthy than they actually are.
Communication happens simultaneously on both textual and subtextual levels. Masterful editorial practices involve subtly managing subtext through tone, rhythm, sentence structure, and pacing in order to slip a layer of meaning beneath the surface meaning of the content.
As election season nears, pay close attention to political ads–you’ll see the virtuosic use of subtext in nearly every frame. A particularly nasty attack ad might not say anything explicitly negative about a candidate, but the way the video is edited, the images that are chosen to accompany the narration, and the tone of voice of the narrator can convey a message that slides beneath the surface of the words. That’s subtext.
It’s debatable whether generative AI is capable of producing copy that is rich in subtext beyond reproducing artifacts of subtext latent in its training data. Unlike a human, an algorithm can’t think about what it’s writing while it’s writing it, and can’t think about what it’s reading while it’s reading it. That’s where human editorial oversight remains, for now, indispensable, leading us to our next point…
3. The human/AI/human sandwich
Is generative AI going to eliminate all sorts of copywriting jobs? Having played with generative AI and LLMs for a couple of years now, I worry about that question less every day. But make no mistake, the process of writing and editing copy is absolutely being turned on its head, with human brains now occupying the first and last steps of a three-step process:
- Prompt engineering
- Output
- Editing
Prior to generative AI, the typical copywriting process might work something like this:
- An editor gives you a vague writing assignment with an overly ambitious deadline.
- You produce a draft through trial and error, and in the process figure out what you’re actually trying to say.
- You hand the work over to your editor, who informs you that this wasn’t what they were hoping for at all, and that they actually want it to be about X as opposed to Y.
- You scrap/revise/amend the draft/start all over again.
This cycle is repeated a number of times and typically involves caffeine. Gradually, iteratively, a piece of writing that everyone is happy with emerges through a process of negotiation, correction, clarification, addition, and deletion.
Gen AI forever upends this process, with an inexhaustible machine collaborator sitting in the middle of the figurative copy room, spouting occasionally incorrect yet consistently confident-sounding copy. It’s like playing basketball with someone who sinks flawless three-pointers and who occasionally shouts “touchdown!”
With generative AI, the onus on the human writer shifts toward prep work–prompt engineering. This requires a more premeditated, granular articulation of how you want a particular piece of copy to read.
One fascinating prompt engineering strategy involves telling the AI to adopt a particular persona. “You’re a PhD-educated researcher specializing in machine learning solutions for the healthcare industry…” or, “You’re an entertaining blogger who writes about the latest trends in cloud security.” This can feel a bit like playing on the playground with your friends when you were a kid. Role-playing, essentially.
Your human editorial expertise also comes into play when you craft one- or multi-shot prompts. These are prompts in which you provide an example of the kind of output you’re hoping to generate, for instance by writing the first two paragraphs of a blog post in the voice and tone you wish to convey. Think of one- and multi-shot prompts as the sourdough starter of generative AI.
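To make this concrete, here’s a minimal sketch of how a persona and a one-shot example might be stitched together into a single prompt. Everything below is illustrative placeholder text, and the call to an actual model is deliberately left out; send the assembled prompt to whichever LLM interface you happen to use.

```python
# A minimal sketch of combining a persona with a one-shot example.
# All strings are illustrative placeholders; pass the assembled prompt
# to whatever text-generation API you actually use.

persona = (
    "You're an entertaining blogger who writes about the latest "
    "trends in cloud security."
)

# One-shot example: a sample opening written in the voice and tone
# you want the model to echo back.
example = (
    "Example opening:\n"
    "Zero trust sounds like a grim philosophy of life, but as a "
    "security posture it's closer to common sense: verify everyone, "
    "every time, no exceptions."
)

task = (
    "Now write the opening paragraph of a blog post, in the same "
    "voice and tone, about securing GKE workloads."
)

prompt = "\n\n".join([persona, example, task])
print(prompt)
```

The example paragraph is doing the sourdough-starter work: it carries the voice you want, so the model has something concrete to imitate.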
Once your prompt generates its output, the human editorial process kicks into gear, as you first attempt to spot landmines of misinformation. This is especially true of any writing pertaining to technology. The kind of editorial work I do for SADA requires me to frequently wander into technical areas that I only partially understand. I depend a lot on SADA experts who have come to their expertise through rigorous study, trial and error, and praxis (the practical application of a theory).
Can I write a blog post about cloud security posture management with authority? What about GKE or virtual machines? When there’s a real risk of generative AI just making stuff up, and when the stuff it’s potentially making up is related to Python code, Kubernetes clusters, geospatial data, identity and access management, etc., it behooves me to seek the guidance of actual, breathing Homo sapiens who understand what these terms actually, you know, mean.
Importantly, this part of human editorial oversight isn’t simply about catching blatantly false information. I’ve had a number of conversations with SADA subject matter experts who will review a piece of copy and say that, well, it sort of describes a particular system or program, but doesn’t quite get it right. These discussions are crucial for making distinctions between AI-generated content that is completely hallucinatory and content that is merely improperly nuanced. Only humans seem capable of making these judgment calls, at least for now.
4. Do you actually want to read this? Does an algorithm?
Gut check. Are you still reading this obnoxiously lengthy blog post? Consider all the other amazing things that could be occupying your attention instead. I mean, off the top of my head, I can think of at least a dozen articles, blogs, podcasts, and TV shows (not to mention so many books) that I might be enjoying instead of this post about how to use generative AI to produce top-notch copy. And oh man, I haven’t even gotten to today’s Wordle yet.
This is to say that every writer must respect the most powerful force in communications–the ability of a reader to stop paying attention. When it comes to writing for the web in the age of TL;DR, we can even measure how many readers bounce from a particular post or page. We definitely know when we’ve lost your attention.
Generative AI is fantastic at churning out the roughage of the written word: paragraphs and bullet points, headers and subheaders, brick after brick of content, content, content. What it does less well is provide a reason to continue reading, because it has no mechanism for empathizing with the reader.
Writing is an act of empathy stemming from an understanding that the drama of language isn’t happening on a page but on the proscenium of human consciousness. Claims that artificial intelligence is conscious–at this point anyway–make the same mistake as the cat that thinks it sees another cat in a mirror. A convincing facsimile of sentience generated by neural networks is categorically different from consciousness arising via the neurons, astrocytes, microglia, and oligodendrocytes that make up the squishy, mysterious, and skull-bound human brain.
Cards on the table. As I’ve been writing this blog post, I’ve remained aware of how boring it’s in danger of becoming. In the coming weeks, I’ll be monitoring certain metrics like bounce rate, which indicates the percentage of visitors who navigate away from a website after viewing just one page, and thereby measures precisely how insufferable or insightful this post actually is. To boost my metrics, I’ve attempted to hold your attention through rhetorical flourishes like the neuroscience vocabulary in the previous paragraph, and by varying the length and structure of sentences.
Also white space.
And sentence fragments.
They give readers a place to pause. To absorb information.
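As for the bounce rate mentioned above, the arithmetic behind the metric is as simple as it sounds. Here’s a tiny sketch, with made-up numbers:

```python
# Bounce rate: the share of sessions in which a visitor viewed only
# one page before leaving. The numbers here are invented for illustration.

single_page_sessions = 640   # visitors who left after reading just this post
total_sessions = 1_000       # all visits to the page

bounce_rate = single_page_sessions / total_sessions * 100
print(f"Bounce rate: {bounce_rate:.1f}%")   # -> Bounce rate: 64.0%
```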
Given that this copy is appearing on a website in 2023, under the banner of a Google Cloud partner, there’s another factor at play–humans aren’t the only ones reading this post. Search Engine Optimization, the practice of producing content so that it is easier to find in search results, has trained a generation of web-focused copywriters to keep one eye on the algorithms. Consequently, many of the same attributes that make a piece of writing appealing to a human reader–plenty of links, references to out-of-the-ordinary topics–convince the algorithms that this is something worth reading, and worth bringing to the attention of readers.
Ultimately, SEO is only useful insofar as it connects people who seek information with people who provide it. A mind that understands how easily distracted minds can be is one that’s equipped to recognize when copy produced by generative AI expects too much from our attention. Stepping outside of the writer role and constantly putting yourself in the position of the reader prevents the sort of copy that merely gobbles up space, and it’s a skill that, for now, appears restricted to humans.
5. A matter of attention and focus
Here’s where one’s brain might start to ache a little bit.
Recently, some of us at SADA were treated to a presentation on the future of Google Workspace, which is set to include myriad improvements and generative AI integrations. This includes not only gen-AI copy and imagery, but also the ability to summarize, elaborate, and translate multimodally (say, turning an email into a Slides deck or a spreadsheet into a document). It’s super exciting, and I can’t wait to dive in.
Take a moment and imagine what’s coming. You’re going to be able to ask an AI to write an ebook and attach it to an email that you also asked the AI to write. Then you’ll send this email to someone who will ask their AI assistant to summarize the email and ebook for them, then compose a reply on their behalf.
It takes a while for the implications of this new way of working to sink in. If an AI is writing what “you” write, and reading what “you” read, then where exactly are… you?
I posed a version of this question to an engineer at Google, and two words stood out in his response–attention and focus. He noted that he’s currently following over 100 chat threads and that his emails come in hot and heavy all day. He described an all-too-familiar mode of professional existence. These new features are going to help those of us who find ourselves drinking from the digital firehose.
Maybe using AI to create and summarize content isn’t about making content better. Maybe it’s about making content more manageable. Just think of how many emails I could have an AI write on my behalf, and how many people wouldn’t need to actually read them. Sounds crazy, right? Are we heading into a future in which no one actually reads what no one actually writes? How will that change the way we work and collaborate?
Perhaps what’s really happening is that electronic communication is developing a meta-layer that reinforces three skills we use every day–the ability to focus on what’s most important, ignore what isn’t, and learn about unfamiliar subjects through contextualization. At the very center of these three practices is human willpower, expressed through the decisions that move information, matter, and money around the world.
So picture this. You’re sitting in your home office, reading summaries of chat and email threads, stopping occasionally to drill down into various correspondences, coming back up, making decisions, articulating the reasons behind those decisions in emails and one-pagers, sending these off to other decision makers who operate in the same manner, entwined in a fractal of articulation, elaboration, and summarization.
As artificial intelligence learns from our every expression and its outputs become ever more refined, we’ll plunge further into a future where our frequently inarticulate and occasionally poetic species continues its relentless pursuit of meaning and purpose through work.