I’ll admit it: I’ve yet to use ChatGPT. And I am still skeptical that it is going to fulfill all the marketing potential I see touted on LinkedIn.
But I did just write an article for a client and was given AI-generated content as the starting point. And below is what I learned during my first experience.
I pass this along to help out anyone else who is either a) handed an outline generated by ChatGPT as I was, or b) generating such outlines and giving them to writers to use.
Because, regardless of my skepticism, I realize that AI and tools like ChatGPT are here to stay. And I’m not trying to dissuade anyone from using them. I am only trying to raise awareness of some pitfalls to watch out for.
First Lesson: Don’t Trust the Citations
First off, most of the citations in the document I was given were useless. By citation, I mean the source for information ChatGPT provided. For example, if ChatGPT said 23% of companies with such and such were 11% more profitable, I—as a responsible writer and editor—would verify that.
And in most cases, I couldn’t.
Sometimes no source was provided, leaving me to hunt down the statistic or statement to verify it; sometimes a link simply didn't work. Sometimes ChatGPT was just plain wrong.
In one case, ChatGPT gave me a statement that was embellished: it had added information to the citation that was not true. Most of the statement was accurate, but not the few words inserted between commas. Removing those few words made the sentence truthful and accurate, but once I realized I couldn't trust what was in front of me, I had to spend time verifying not just that citation but every single one.
Another citation pointed to an academic paper that could only be accessed by creating a login, which I was not going to do. I tried to verify the information in other ways but couldn't, so I left that statistic out and searched for something similar.
The lesson here? Don’t trust the citations. Verify every single one—and not just the source, but the information cited. It’s on you to make sure it’s factual and true.
Second Lesson: The Need for a Human Touch
Although the inaccurate citations were concerning, I think the biggest flaw was the lack of context. ChatGPT spewed out a long list of points, but there was no context. It read like a checklist, and who wants to read a checklist?
That was where I came in as both a writer and an editor: to provide an introduction that set the stage for the article, and to give context and flow.
It’s one thing to have a list of facts (or pseudo-facts), which AI can provide, but quite another to create something enjoyable to read, something a human being chooses to spend time with. (Because let’s remember that we write to be read. We aren’t writing simply to spew words out into the world. Well, sometimes we are, but that’s word puking.)
ChatGPT also lacked a deeper insight based on human experience. For example, it cited lower turnover as a cost benefit due to money saved when a company doesn’t have to pay for recruiting and training. However, it didn’t include the more intangible but important benefit of keeping institutional knowledge within the company. That was a point I added based on my own knowledge.
The lesson here? Humans aren’t machines, so they don’t consume information the way AI produces it. For now, at least, make sure there’s a human touch when you use ChatGPT for content, so what you create actually gets read.
(The Plain English Foundation did a comparison of ChatGPT editing to human editing. Read the report here.)
Third Lesson: A First Draft? Or a Starting Point?
I’ve heard people say they will use ChatGPT for a first draft and then polish it. That’s a scary thought. If my experience is typical, AI doesn’t give you enough for a first draft. It gave me some talking points I could start with, but not even a structure. It had a numbered list of points, but a numbered list is not structure. What it gave me took some serious reordering to make it flow.
The lesson here? Don’t assume that you have a structure to start with. Make sure the flow is logical to your reader. You are building an argument or making a case or somehow trying to persuade someone of something. After all, that’s why you’re writing, right? Make sure it makes sense.
And that is my note of caution for marketers in particular: Be wary of using AI to churn out quantity over quality content.
Case in point: I recently talked to a CMO who complained that he didn’t know what content to produce because his competitors are churning out clickbait full of fluff, not substance, with no real value in it. He didn’t see how his content could stand out. I suggested several topic ideas that got him thinking, but the reality of the firehose of quantity-over-quality content is still something he’ll have to contend with.
So Where Does AI Fit?
Now that I’ve had my first experience with AI as a starting point for an article, I can see how a writer can save time by quickly gathering a lot of useful information. Perhaps it wasn’t as much time saved as you might think at first, since every citation had to be verified, corrected, or in some cases discarded. But it was admittedly better than staring at a blank page (or screen) when getting started.
Am I sold on AI for content creation? Not yet. As I’ve quoted Ben Franklin as saying before, either write something worth reading, or do something worth writing. If ChatGPT can’t do that, you need to.
Photo by Markus Winkler: https://www.pexels.com/photo/the-word-chatgpt-is-spelled-out-in-scrabble-tiles-18512795/