AI for Marketing Content—Wild Success or Epic Fail?

Disillusioned—but Hopeful

Gartner identifies a “trough of disillusionment” in its hype cycles—right after the “peak of inflated expectations.” At least for me, generative AI authoring of original marketing content is in the disillusionment phase. I’m not alone. As one Hootsuite survey insight puts it: “…marketers find themselves trapped in a time-consuming cycle of manual labor and subpar outcomes, revealing a deep disconnect between expectations and the actual utility of generative AI tools in marketing.”

My long-time product marketing client kept hearing the same question from its executives: “Can’t AI do it?”—implying a need to reduce the internal and external resources (and budget) spent writing content, from web pages to white papers, from case studies to solution briefs. I am fully committed to helping them get there.

I had high hopes. I have a degree in linguistics, a background in programming, and a lot of writing experience. I enjoyed science fiction where a self-aware AI built into a starship could be instructed via a natural language voice interface. This initiative was made for me, and I was up for the challenge! Or so I thought.

I haven’t given up—hope springs eternal! But I, like many, have run into some “epic fails.” I’ll share what worked well, what didn’t, and what I’ve learned to improve AI’s results. If it saves you even a couple of hours of frustration with AI, I will have done my job.

Why Letting AI Write Your Content May Not Work—Yet

To state the obvious, AI is not human. It is powered by large language models (LLMs)—probabilistic systems that generate text by predicting the next word based on patterns in their training data. LLMs rely on statistical associations, not true logical reasoning or understanding. At least that’s my non-techie view of things.

My biggest mistake? Assuming that because AI can mimic human conversation and tap into seemingly endless data, it can reason like a human and flawlessly follow instructions.

Spoiler alert: It can’t! AI’s very strengths often make it seem like a more human and relatable Mr. Spock. (If you’ve never had a fun conversation with an AI, give it a try.)

But the bottom line is this: AI is not at its best when authoring original content. So if you—or your management—think it can do everything with high quality and in record time, you’re sadly mistaken. Consider these failings:

  • Speed isn’t everything: AI turns out text fast, but it often misses context, nuance, or structure. This can leave you with a big mess to clean up.
  • AI hallucinations are real: I have seen AI invent customer quotes and product features, swearing they are legit. 
  • Quantity over quality: Ask for brevity, and you might still get long-winded, repetitive text that buries the good stuff.
  • Poor sourcing: Even with clear sources provided, AI can ignore them, citing non-existent or wrong articles.
  • Off-the-rails outputs: Despite clear prompts, AI can skip requested details, add irrelevant ones, or ignore your guidelines entirely. And then forget what it has done.

These issues show it’s a pattern-based word generator, not a reasoning machine. So don’t be surprised if fixing the content takes more time than writing it the old-fashioned way would have. Another spoiler: There is some light at the end of this tunnel.

Prompt Engineering Matters—but It Isn’t Everything

Depending upon where you are in your AI journey, you may see prompts as a simple way to ask questions or search for information. As you gain experience, you’ll find you can give the AI much more complex instructions. For a good introduction, see Prompt Like a Pro: How Aventi Consultants Are Actually Using AI (And What You Can Learn). That blog offers practical tips—and shows why prompting is now a core skill for consultants and marketers alike.

Good prompts do not guarantee good results—but bad prompts will certainly lead to poor ones. Prompts will vary based on scope, complexity, audience, desired output, and the amount of source material involved. A vague or poorly defined prompt leaves AI guessing, resulting in generic or off-target content that requires heavy editing.  

Ironically, AI itself is a great source for learning good prompt design. To illustrate, I asked ChatGPT to create sample prompts that might seem reasonable but would not produce usable results. The pair below, reconstructed along the lines of what the AI suggested, shows a weak prompt and a revised one that includes the right elements.
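
A weak prompt leaves the AI guessing:

“Write a case study about our customer using our product. Make it sound good and keep it short.”

A stronger revision spells out the essentials (the details here are placeholders of my own, not a required formula):

“Write a 600-word customer case study aimed at IT decision-makers at mid-size B2B companies. Structure it as challenge, solution, and results, with one pull quote. Focus on time to value. Keep the tone professional and authoritative. Use only the attached transcript and product summary, quote the customer verbatim or not at all, and do not mention pricing.”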

To summarize, almost every prompt—except for simple questions—should include:

  • Purpose or desired asset, with background and context. Examples: Web copy, solution brief, promo email, customer story
  • Target audience. Examples: IT decision-makers at mid-size B2B companies, data leaders who are prospective subscribers, tech-savvy millennials
  • Structure for output. Examples: Word count, bullets, number of options, prescribed format
  • Key message or focus. Examples: Time to value, security, ease of deployment, ROI, benefits of specific product features
  • Tone, style, brand guidelines. Examples: Professional and authoritative, casual and approachable, consultative, detailed brand guidelines
  • Constraints or guardrails—things to avoid. Examples: Don’t mention pricing, use only exact customer quotes, avoid technical jargon
  • Source material or context and how it should be used. Examples: Product summaries, draft copy, competitive positioning, past content examples

AI itself was also useful for improving my prompts. I often ask it to review my prompts and propose corrections. Not only does that help clarify what I need, but it also increases my chances of getting a usable result.
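
A review request can be as simple as this (the wording is mine; there is no magic formula):

“Before executing anything, review the prompt below. Point out ambiguities, missing constraints, or conflicting instructions, and suggest a tighter version. Do not generate the content yet.”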

To help structure things more clearly, a colleague advised me to format prompts in markdown—a lightweight, plain-text formatting language that requires no special software. It’s easy to learn and makes it simpler to organize headings, bold text, lists, links, and more. You don’t have to know markdown to write effective prompts, but if you write a lot of them, it’s probably worth your time (see: https://www.markdownguide.org).
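
To give you a feel for it, here is a bare-bones prompt skeleton in markdown (the headings and details are placeholders I made up, not a prescribed template):

# Asset
Solution brief, two pages maximum

## Audience
IT decision-makers at mid-size B2B companies

## Key message
Time to value and ease of deployment

## Tone
Professional and authoritative; no technical jargon

## Guardrails
- Do not mention pricing
- Use only the attached product summary
- Quote sources verbatim or not at all

In markdown, # marks a heading, ## a subheading, and - a bullet item; that covers most of what a prompt needs.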

Once I had my prompt formatted in markdown and the AI confirmed it was clear and complete, I’d move on to the next step: generating content in a format I could use with minimal editing.

Unfortunately, AIs often don’t know their own limitations—and neither did I. I had visions of creating a prompt library that others on my marketing team could use. Just paste in the right source materials, and the prompt (with some tweaks from me) would take care of the rest. I created templates for case studies, solution briefs, and web pages to start.

Because my client had standardized on ChatGPT, I used it for a project to generate customer stories based on presentation transcripts from a user conference. But even my best prompts couldn’t stop it from derailing the project.

In short, it all hit the fan.

Why Human Review Is Mandatory: Lessons from the Trenches

I learned the hard way that prompts aren’t everything—and why I needed to carefully review each step. My custom GPT (named PMM Team Member, a version of ChatGPT configured for this client) had blessed my template for a customer web page as complete, clear, and covering all the bases. I included a lot of source material for it to use (most in markdown format):

  • Transcript and presentation from the customer
  • Links to example customer pages
  • Web page template with desired sections, character limitations, and more
  • Style and messaging guidelines

I expected it to faithfully execute the prompt. But when I put it into action for a high-profile customer story, things quickly went sideways. So what went wrong? I let ChatGPT count the ways.

I didn’t expect this epic failure. And while I’d like to say this was an isolated occurrence, it was not. I’ve seen too many admissions of error from ChatGPT, many starting with “Good catch!” or “My bad.”

Pulling exact customer quotes from the transcript produced the most repeated—and most serious—errors. If I had not checked every quote myself, I would have published eight bogus quotes and missed two actual, high-value ones. This could have damaged the relationship between my client and an important customer.

Here’s the simplest example I could find from ChatGPT (I have many): 

Me: Shorten “Support territory planning, segmentation, and whitespace analysis with complete, governed data” by 6-10 characters if you can.

It: Here’s a tightened version that saves 9 characters:

“Support territory planning, segmentation, and whitespace analysis with complete, governed data”

Me: You just repeated the same exact bullet yet said it was 9 characters shorter!

It: Touché—and fully deserved! 😅 That was a swing and a miss.

I’m not laughing. But this taught me valuable lessons and caused me to lower my expectations.

Keep AI From Going off the Rails: My Best Practices

I learned quickly that AI was unable—at this time!—to meet my high expectations. It’s not human, and it doesn’t follow prompt instructions well. It often didn’t see the flaws in its results until I pointed them out. And it definitely did NOT save time in many cases.

Creating my PMM Team Member custom GPT did produce better results, but it took more work than expected, and the output was often still poor. If you haven’t tried custom GPTs yet, you might want to check out Liza Adams’ post on LinkedIn.

My tips from the trenches:

  • Less is often more. AIs can lose their way when overloaded with many sources, extensive GPT configuration, and long prompts. It’s counter-intuitive, I know, but adding complexity often produces worse results.
  • Check everything. Treat AI as if it were a new intern and verify anything important. Check what IS there, such as quotes and product facts, but also look for what IS NOT there (key messaging, missing proof points).
  • Don’t accept corrections blindly. Its rewrites might sound confident—but that doesn’t mean they’re right. Question edits as you might push back on a junior teammate. 
  • Refine configuration or prompts. Some AIs allow configurations, some don’t. When not available, dedicate early conversations to defining your preferences and guardrails.
  • Force step-by-step execution. With longer tasks, make it stop and confirm before moving on—or feed instructions one chunk at a time (see the sample instruction block after this list). This reduces drift, gives you more control, and avoids long, repetitive conversations.
  • Block what it should not do. Tell it to use only the sources you provide, ignore any other loaded documents, and avoid fabricating quotes or “facts.” This won’t eliminate hallucinations, but it helps.
  • Demand source traceability. Ask for file names, slide numbers, page numbers—or better yet, excerpts. Don’t trust “this is directly from the transcript” without proof. It lies.
  • Expect hallucinations. It answers with confidence and rarely says it doesn’t know. You may get fake or incorrect URLs, fake quotes, fabricated references. Always double check.
  • Use one AI to check another. I often have Grok check ChatGPT or vice versa—and sometimes combine outputs to get a “best of both worlds” version.
  • Don’t let it restate your rules. Have it follow your rules, not paraphrase them. It often rewrites your instructions and misses key constraints.
  • Ask it to review your prompt before execution. Request ways to improve clarity, tighten scope, or add guardrails before you proceed, to save rework later.
  • Plan for extra time. AI is blazingly fast—but reviewing, refining, and verifying its output takes time. Set realistic timelines with stakeholders, especially if they assume AI means instant results.
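
To make the step-by-step and traceability tips concrete, here is the kind of instruction block I add to longer prompts (illustrative phrasing, not a formula):

“Work on one section at a time. After each section, stop and wait for my approval before continuing. Use only the transcript and product summary I provide, and ignore any other documents. For every customer quote, include the source file name and the exact excerpt so I can verify it. If you cannot find support for a claim, say so rather than inventing one.”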

These aren’t comprehensive, but they can help you avoid some of the mistakes AI and I made together.

My process looks like this: I provide a shorter prompt—or just a portion—and get feedback. If I need to revise prompts or configuration, I do it immediately. I then feed the prompt to the AI one step at a time. I question, edit, and revise interactively as we go. We continue with the next steps—with plenty of rewrites and verification along the way. Anything I want to keep, I paste into a Google Doc so I don’t have to dig it out of a long chat later.

The Light at the End of the Tunnel

I promised you my ideas of what AI is best at today in the world of product marketing. I’m sure you have your own favorites, but these are mine:

  • Brainstorming ideas, headlines, or angles
  • Quick answers to factual or directional questions
  • Tone, phrasing, and copy refinement when I’m in a hurry
  • First-pass reviews of others’ content for correctness and completeness
  • Summarizing long but non-critical documents
  • Drafting simpler content (e.g., web landing pages, content summaries, short messaging)
  • Analyzing customer data, competitor capabilities, or industry trends for insights

These seem to be the biggest time-savers for me, as they play to AI’s strengths today. Tomorrow, the stars!

Written By

Jan Gardiner

Jan brings over 25 years of product management, product marketing, and strategy experience, with a focus on enterprise tech, AI, and cloud-based solutions. As a product marketing consultant for Aventi Group, she partners with clients to shape messaging that’s clear, credible, and customer-focused—often with a healthy dose of AI support. A self-declared AI enthusiast, Jan actively works with both free and paid versions of tools like ChatGPT, Grok, Gemini, Claude, and Perplexity to stay sharp, fast, and future-ready. She’s also skilled in presentation development, video and image editing, content creation and publishing, and web content and design—making her a versatile go-to for executing high-impact marketing deliverables from start to finish.