Discovering (Marketing) Ideas With Generative AI

Bruce Clark
19 min read · Nov 11, 2023

A five-step process

Source: Dall E 3.

Over the past several months, I have been using generative AI tools to explore new creative ideas. Some of these experiments have been on my own while others have involved students in my marketing classes. (For a description of one set of classroom exercises, see this: https://medium.com/@bruceclarkprof/a-generative-ai-teaching-exercise-for-marketing-classes-6fb09af0a565.)

At this point I have come up with a rough process that I now teach to students and invite you to explore, to see if it can help you come up with good ideas. The process involves five steps:

1. Ask

2. Regenerate

3. Refine

4. Develop

5. Critique

I will walk through how I think about each of these steps. While I will use marketing examples, I hope the general framework may prove useful beyond marketing. Finally, I will show a mix of mediocre and helpful outputs by working through a single long example of prompts and responses.

A Reminder of How LLMs Work

LLMs (Large Language Models) are predictive tools. Based on patterns a model has seen in its training data, it tries to predict how a string of words might evolve. Little is publicly known at the time of this writing about exactly what is in LLM training sets, but it is clear that they draw on a broad swathe of digital material collected from across the internet. Tom Goodwin, an influential thinker in marketing, has described the answer we get from AIs as “the average of the internet.”

How does that average manifest in LLM output? The New York Times has a nice simple explainer (https://www.nytimes.com/interactive/2023/02/17/business/ai-text-detection.html) that I will borrow from.

Imagine a human types into an LLM, “Who is LeBron James?”

The LLM knows from the patterns it has seen online that a likely form of answer to this question begins, “LeBron James is . . .”

The question becomes: what is the next word in that sentence? Given what the internet has written about LeBron James, the next word is much more likely to be something along the lines of “a” or “an” followed by a word such as “American” or “basketball” than “French” or “soccer”. Roughly, LLMs assign a score to each candidate next word indicating its probability and pick one of the highest-scoring options.

Let’s say that means “LeBron James is an American” is the next part of the phrase. OK, what kind of American? “Basketball” is much more likely than “soccer.” “LeBron James is an American basketball” what? “Player” probably scores more highly than “court”! So, “LeBron James is an American basketball player.” That answer is correct and sounds like the LLM really knows what it is talking about, when in fact it has simply produced a probabilistically likely sequence of words based on its training set.
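The word-by-word walkthrough above can be sketched as a toy program. Everything here is invented for illustration — the contexts and scores are made up, and real LLMs work over tens of thousands of tokens with learned probabilities, often sampling rather than always taking the top-scoring word:

```python
# Toy sketch of next-word prediction. NOT a real LLM: the scores below
# are invented to mirror the LeBron James example in the text.
NEXT_WORD_SCORES = {
    "LeBron James is": {"an": 0.55, "a": 0.30, "French": 0.01},
    "LeBron James is an": {"American": 0.80, "athlete": 0.15},
    "LeBron James is an American": {"basketball": 0.85, "soccer": 0.03},
    "LeBron James is an American basketball": {"player": 0.90, "court": 0.01},
}

def continue_phrase(context: str, steps: int) -> str:
    """Greedily append the highest-scoring next word, one step at a time."""
    for _ in range(steps):
        scores = NEXT_WORD_SCORES.get(context)
        if scores is None:  # no pattern for this context; stop
            break
        best = max(scores, key=scores.get)  # pick the top-scoring word
        context = f"{context} {best}"
    return context

print(continue_phrase("LeBron James is", steps=4))
# With these made-up scores: LeBron James is an American basketball player
```

The point of the sketch is only that each word is chosen because it scores well given the words so far, not because the system “knows” anything about basketball.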

This has consequences to which I will return.

Ask

Give it a simple prompt related to the space you want to explore.

There is much (correct) advice out in the world that longer, more detailed prompts can be more effective. This is true, but mostly when you already know the kind of answer you want. So, for example, if you want something in a particular style or format, ask for that. Giving an AI constraints and context can be helpful. Ethan Mollick at the Wharton School has nice general prompting advice here: https://www.oneusefulthing.org/p/working-with-ai-two-paths-to-prompting.

But for broad creativity, I’ll argue a simple prompt is better. Give yourself the opportunity for the AI to surprise you. That will likely turn out to be both good and bad, but you can always get complicated later.

Here is a prompt I used with a group of undergraduates recently: “Suggest three creative product ideas that Apple could sell”. (Without bounds, my experience is that LLMs will routinely produce between 6 and 10 ideas for this kind of prompt.)

Here is what GPT 4 from OpenAI gave me in response to this prompt:

1. Apple Home: A modular smart home system. Think of it as “Lego for your home.” Users can purchase different modules (security, lighting, heating, etc.) and snap them together seamlessly. Controlled by Apple software, these homes can adjust to user needs, learn from daily patterns, and optimize for energy savings. The system would also include AR and VR integration for design, customization, and entertainment, working in conjunction with Apple Glasses.

2. iGrow: A smart indoor garden system. Users can grow vegetables, fruits, and plants in their own home using a hydroponic or aeroponic system. The device will have integrated sensors to monitor water, nutrient levels, and light, adjusting them accordingly for optimal plant growth. Integrated with Apple’s Health app, iGrow could even suggest plants and herbs to grow based on your health needs and dietary preferences.

3. Apple Adapt: Wearable adaptive tech clothing. This could be a line of clothing items that have built-in sensors and capabilities, such as heating or cooling based on body temperature, changing color or pattern with a tap on your iPhone, or even charging your devices with built-in solar threads. Imagine a jacket that adjusts its insulation based on the weather or a shirt that tracks your vitals during workouts.

Not all ideas will be novel, feasible, or both. The first product sounds like an incremental extension of the Apple Home app that already exists. The second would, I suspect, face many technical issues. But these are all at some level plausible extensions of the Apple brand.

The key advantage of GenAI is that it is fast and cheap. If I get ten cheap ideas in 30 seconds and nine of them are awful, that doesn’t matter as long as the remaining one is good.

There is a long-standing pre-AI debate in creativity research regarding whether a higher quantity of ideas leads to higher-quality ideas. Do you simply get more hits the more often you try? The dominant theory has been that the creative process has an essential element of randomness in it, meaning more is better (sometimes called the “equal odds” rule). In general — this is not my area of expertise — my reading suggests the correlation is positive, but not all studies agree (sigh): there may be other factors. But more is not obviously worse.
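A back-of-envelope calculation shows why quantity can help under the randomness view. Assume — a big simplification, and exactly what the equal-odds debate is about — that each idea independently has some fixed chance p of being good:

```python
# Toy calculation: if each idea is independently good with probability p,
# the chance that at least one of n ideas is good is 1 - (1 - p)^n.
# The value p = 0.1 below is an arbitrary illustration, not a finding.

def chance_of_at_least_one_good_idea(p: float, n: int) -> float:
    """P(at least one good idea in n independent tries)."""
    return 1 - (1 - p) ** n

for n in (1, 10, 30):
    print(n, round(chance_of_at_least_one_good_idea(0.1, n), 3))
```

Under these assumptions, ten cheap tries make a hit far more likely than one careful try, which is the intuition behind generating ideas in bulk and screening hard.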

More specifically, is the average of the internet any good at this? Research is coming out at a blinding pace on this topic, and I can’t say I’ve read all of it (see the resources at the end of the article for a couple of blogs on the subject), but again, it is not obviously worse.

Give it a try and see what you think.

Regenerate

Ask it again.

The easiest way is to hit the “regenerate” button or ask your AI to try again. But there are a couple of other ways to conceptually regenerate. First, you can have other people enter the same prompt in their accounts. Second, you can enter the same prompt into a different system.

Why would you do any of this? In most cases it will generate variations on the initial ideas, but in some you will get something new, and new is good (especially if ideas are fast and cheap). The other thing regeneration does is identify things that are truly “average.”

You will sometimes hear that, since regenerations can produce different results, it is hard to say what a “true” answer is. But even if an LLM doesn’t repeat itself exactly, at this point its answers often conceptually rhyme. To the extent the same answers keep popping up, that suggests there is a pretty strong pattern somewhere in the training set. I think about this in terms of what academics call “interrater reliability.” If two people sat down and independently evaluated some text, would they come to similar conclusions? The closer the agreement, the higher the interrater reliability.
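The simplest version of interrater reliability is percent agreement, which is easy to make concrete. The labels below are invented; imagine two raters (or two regenerations of the same prompt) each tagging the same five ideas:

```python
# Toy interrater-reliability check. The labels are invented for
# illustration: two raters classify the same five ideas.
rater_a = ["novel", "average", "average", "novel", "average"]
rater_b = ["novel", "average", "novel", "novel", "average"]

def percent_agreement(a: list[str], b: list[str]) -> float:
    """Share of items on which the two raters gave the same label."""
    matches = sum(x == y for x, y in zip(a, b))
    return matches / len(a)

print(percent_agreement(rater_a, rater_b))  # 4 of 5 items agree -> 0.8
```

(Academics usually prefer chance-corrected measures such as Cohen’s kappa, but the intuition is the same: the more two independent passes agree, the stronger the underlying pattern.)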

So, when I put my same “give me Apple ideas” prompt into Claude (from Anthropic), the first idea that it replied with was virtually the same Apple Home idea as GPT gave me. When one of my classes asked the same Apple prompt of GPT across groups, there were lots of sustainability and appliance suggestions. The average of the internet apparently thinks (well, exhibits patterns) that Apple could (or should?) be offering these kinds of products.

Refine

In an AI world where ideas are cheap and plentiful, screening becomes much more important. AI may suggest dozens of ideas. You have to decide which ones are good for you.

LLMs can help humans with this. One of the things that I find powerful about LLMs is they can refine ideas in interesting ways.

I’ll recommend three basic techniques here: More, Less, Different.

More

More simply says, “give me more ideas like that one.” It may be that an idea seems interesting but is not quite right. I have found this useful. When I asked GPT to give me more ideas like the iGrow indoor garden concept earlier, it produced the following:

1. iBloom: A smart flowerpot designed for ornamental plants, equipped with sensors to monitor soil moisture, light levels, and nutrient needs. The pots could also have a self-watering feature and integrate with Apple’s HomeKit to allow users to manage their plant care routines seamlessly.

2. iAqua: A smart aquarium system that simplifies the maintenance of a freshwater or marine aquarium. It could monitor water quality, temperature, and integrate automatic feeding schedules, with alerts and controls available through an Apple app.

3. iHarvest: A smart composting unit that fits into a kitchen, using technology to accelerate composting, reduce odors, and sync with users’ waste disposal patterns to optimize the process. The resulting compost could be used in one’s iGrow system, creating a circular ecosystem of waste and growth.

All are in the same iGrow space of some kind of “managing organic life forms,” and perhaps one would be more feasible than the iGrow.

Less

Less means steering away from ideas you don’t like. This is sometimes referred to as giving the LLM constraints. For example, when I asked GPT to come up with more ideas but to eliminate any that required hardware, it gave me a number of software-based concepts, including:

1. Apple Study: A substitute for the traditional learning management systems, tailored for students and educators. Integrating with the existing Calendar and Reminders apps, it can help manage assignments, provide study resources, and facilitate group projects, all while using AI to adapt to individual learning styles.

2. Apple Mentor: Adapt the concept of professional networking into a mentorship and career development app. It would leverage the user’s professional data from LinkedIn or similar services to connect them with potential mentors, industry leaders, and peer groups.

Different

Last, getting “different” means pushing your LLM away from the average. So, for example, I gave Claude a follow-up prompt on its initial Apple ideas: “Go further afield from the current Apple product line as long as the brand extension is plausible. Be more creative.” One of the more interesting suggestions was:

Apple fitness centers — Brick and mortar gyms/studios that use Apple technology to deliver personalized health and fitness experiences. Track progress using Apple Watch.

I, at least, did not see that one coming.

Interestingly, in the past month I have seen multiple people comment on social media that they are explicitly using GenAI to get away from the average. So, if the average of the internet thinks some kind of Apple Home device is a good idea, maybe that’s not a very innovative idea.

Think about the occasional complaint that all car ads basically look alike (e.g., https://www.thedrum.com/opinion/2021/10/14/why-are-car-ads-running-empty-creative-tank). The first ideas an LLM comes up with will often be mediocre. Pushing it to be more, less, or different can be an effective way of getting away from mediocrity. One commenter even indicated they actively asked LLMs for the “obvious” ideas so they could eliminate them from consideration.

Develop

Pick an idea and ask your LLM to help you build it out.

What would you have to do to make your idea a reality? What are the implications of your idea?

In class sessions, I have had students build out elements of a marketing campaign that would help commercialize a product idea, e.g., creative pricing, advertising, distribution, etc. One thing I’ve repeatedly done is to have them ask their LLM to suggest creative target market ideas for a product. Here is some of what GPT 4 gave me when I asked it “what kind of target market might like the iGrow” concept it developed earlier:

1. Health Enthusiasts: Individuals who prioritize organic, non-GMO, and fresh produce. They understand the health benefits of consuming freshly harvested greens and vegetables. iGrow offers them a chance to have a garden-to-table experience within their homes.

2. Urban Dwellers: People living in urban areas with limited space for traditional gardening would benefit from the compactness and efficiency of the iGrow system. Those living in apartments, condos, or houses with small yards would find this especially appealing.

. . .

7. Elderly or Individuals with Mobility Issues: Gardening has therapeutic benefits. iGrow would allow those who might have difficulty with traditional gardening due to physical constraints to engage in the activity.

8. Preppers and Self-sustainability Advocates: People who emphasize self-reliance and want to ensure they have a consistent supply of fresh produce regardless of external factors might be interested in iGrow.

The first two are pretty boring at some level and I’d argue kind of fall out of the concept. The last two are a lot more interesting to me. All raise the issue of bias in AI, but in a more nuanced way than I often see it discussed.

There have been lots of discussions about the biases inherent in LLMs. They’re based on whatever internet they have averaged, and if their training set is biased, the LLM will be, too. When my most recent student session used these Apple prompts, there were lots of “eco-conscious” and “tech-savvy” target markets suggested across groups regardless of the product involved. Apparently, the average of the internet thinks these are people who will buy pretty much anything Apple makes. When I asked GPT 4 for a persona of the Health Enthusiast, it gave me a 30-something female yoga practitioner that was so stereotypical that you could practically smell the herbal tea.

But, we humans have biases as well. For example, there has been much chatter and recrimination about how advertising people are different from “normal” people (e.g., https://appliedbrandscience.com/blog/advertising-people-are-not-normal). As an able person, thinking about products for people with mobility issues does not always come naturally to me. And I have never thought about “Apple” and “preppers” in the same sentence. Maybe that’s shame on me.

A student group I worked with this fall had a similarly illuminating experience. They had come up with an idea for a different kind of neck pillow, and unsurprisingly had thought of airline passengers as a target market. When they asked their LLM, one of its suggestions was “truck drivers.” At my expensive private university, airlines are a part of life for many students. That’s less true of truck drivers. Both they and I were impressed. Maybe that’s shame on us. But sometimes, LLMs will get you out of your own bubble.

LLMs can go beyond campaign suggestions. For example, when I asked GPT 4, “What are three milestone metrics that Apple might want to track during the first year of launch to determine whether iGrow was headed for success?”, it returned the following:

1. Sales and Adoption Rate:

o Metric: Number of iGrow units sold.

o Rationale: An obvious metric, but a crucial one. The raw number of units sold would provide an immediate indication of market interest and acceptance. Comparing monthly or quarterly sales figures can indicate whether interest is sustained or waning after the initial launch.

o Sub-metrics to consider:

§ Return Rate: A high return rate could indicate dissatisfaction or challenges faced by the users.

§ Repeat Purchases: If Apple offers consumables (like nutrient packs or seed pods), tracking repeat purchases can indicate ongoing use and satisfaction.

2. User Engagement and Software Interaction:

o Metric: Frequency and depth of user interaction with the iGrow companion app or software.

o Rationale: If iGrow integrates with a software application, monitoring how often users engage with the app and which features they use most can provide insights into the product’s utility and user satisfaction.

o Sub-metrics to consider:

§ Feature Usage: Are there features in the app that are frequently used or largely ignored? This could guide future software updates.

§ Integration with Apple Health: If there’s a nutrition tracking component, how often is data from iGrow imported to Apple Health or other Apple ecosystem apps?

3. Customer Feedback and Satisfaction:

o Metric: Net Promoter Score (NPS) or Customer Satisfaction Score (CSAT).

o Rationale: These metrics gauge the likelihood of customers to recommend iGrow to others (NPS) or express their satisfaction (CSAT). High scores indicate positive customer experiences, while lower scores can help Apple identify areas for improvement.

o Sub-metrics to consider:

§ Customer Reviews and Ratings: Monitoring reviews on Apple’s website, app store, or third-party platforms can provide qualitative insights into user experiences, concerns, or areas of delight.

§ Customer Support Interactions: Tracking the number of support tickets, common issues, and resolution times can offer insights into product pain points.

As GPT 4 itself observes, some of this is screamingly obvious, but I found some of the behavioral sub-metrics thoughtful. “Is anyone actually using this thing?” would be an important question for iGrow in the long term.

I also asked GPT 4, “What implementation issues might Apple want to pay attention to during the first year of the iGrow launch?” It came up with ten possible issues. Among the more interesting:

Quality Control: Given the interdisciplinary nature of iGrow — combining electronics, plant biology, and software — strict quality control is vital. Any malfunction, be it in the growth lights, nutrient delivery system, or the software, could result in user dissatisfaction.

User Education and Support: For many users, iGrow might be their first foray into indoor gardening. Apple would need robust educational resources — tutorials, FAQs, video guides — to help users get started. Furthermore, a strong customer support system would be necessary to address questions and concerns promptly.

Environmental and Sustainability Concerns: Given the increasing focus on sustainability, Apple should be prepared to address questions about the environmental impact of iGrow, from the materials used in its construction to its energy consumption.

Expansion of Plant Pods or Kits: If the system uses proprietary seed pods or plant kits, ensuring a diverse, high-quality, and readily available selection will be essential to keep users engaged and satisfied.

Any individual might come up with a few of these, but even an experienced person might not think of all of them, or not all at once. This kind of thing could be particularly valuable for less experienced employees — there is some evidence that less skilled people particularly benefit from LLMs — or those who have a very narrowly focused job where cross-functional issues might be less appreciated. And there can be a tendency among managers in love with their ideas to wish implementation away. “Here are ten things to watch” can be a useful tonic in this regard, which brings me to my last step.

Critique

Ask your LLM what is wrong with your idea.

I find LLMs can be good critics of ideas, including their own. Just as they can find plausible patterns leading to ideas, they can find plausible patterns leading to flaws in ideas.

In my academic research this summer I had a moment where I was working with some data and my instinct was that there was something wrong with my approach. I couldn’t put my finger on it, though.

I fired up Bing GPT and asked it, “Imagine you are an expert statistician. Identify potential flaws in the following data analysis approach.” It came back with a crystallization of the problem I had been unable to articulate, which I was therefore able to address.

For marketing ideas of the kind discussed here, I’ll recommend what is sometimes called a “pre-mortem,” in which you imagine an initiative has failed and try to write down reasons why. Here’s the Wikipedia version: https://en.wikipedia.org/wiki/Pre-mortem. Pre-mortems are designed to help you identify and prioritize risks you might not have sufficiently considered.

Ethan Mollick has pointed out that LLMs can be good at this and gives an example prompt and output in this paper (which includes other prompts as well): https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4475995.

Here is an example using the iGrow concept. When instructed, “Identify five possible reasons iGrow might fail commercially,” GPT 4 returned the following:

1. High Cost of Entry: Apple products are often priced at a premium, reflecting their design and technology. If the iGrow system is too expensive, it could deter a large portion of potential customers who might turn to more affordable alternatives.

2. Maintenance and Operational Challenges: While the concept of an indoor garden system is appealing, the reality of maintaining it — from changing water, ensuring nutrients are balanced, to dealing with potential plant diseases — can be daunting. If iGrow proves too complicated or time-consuming, users might abandon it.

3. Technology Overcomplication: One of Apple’s strengths is its user-friendly interface. However, if the tech features of iGrow become too complex or seem unnecessary for growing plants, it might alienate users who simply want a straightforward indoor garden.

4. Limited Crop Variety: The variety of plants that can be grown in such systems is often limited by space, light, and nutrient delivery. If users can’t grow a diverse range of their favorite plants or if the system isn’t versatile enough to accommodate different growth requirements, it might dissuade potential customers.

5. Market Saturation and Competition: The indoor garden market has seen a surge in recent years with various hydroponic and aeroponic systems available. Brands like AeroGarden and Click & Grow already have established products. If iGrow doesn’t offer significant advantages or differentiation, it might struggle to carve out a market share.

The first and last points seem obvious: don’t have an unaffordable price point, and think about competition. These would apply to pretty much any new product. (Note that it does correctly identify two potential competitors.)

The three middle points are more interesting: iGrow needs to produce an indoor gardening experience that is simple enough to be alluring but varied enough to garner and maintain interest. That strikes me as a tough sweet spot to hit and would be something I might think about more. How could I address this? Recall the “user education and support” item that GPT identified as an implementation issue previously: maybe I ought to think about that more.

One of the nice things about using an LLM for this is that the criticism is not personal. Everyone loves their own idea. If I criticize your idea, maybe it’s not because it’s a bad idea but because I don’t like you or have some ulterior political motive. Or maybe an idea is terrible, but it’s the CEO’s idea so no one is willing to criticize it out loud.

The AI doesn’t know any of this. If AI criticizes your idea, it’s, well, the AI. While it has biases, it doesn’t have your biases.

Other Considerations

Can you use a more structured approach?

One can provide more structure to a creativity prompt. For example, I have asked GPT to apply the SCAMPER framework (https://www.designorate.com/a-guide-to-the-scamper-technique-for-creative-thinking/) to suggest new products based on existing products, and it applies it reasonably well. It also can plausibly evaluate ideas using Rogers’ five innovation characteristics (https://en.wikipedia.org/wiki/Diffusion_of_innovations). You have to check that any analysis it produces is true, but it’s another starting point.

A recent Harvard Business Review article suggests GPT can be used to evaluate ideas on the criteria of Novelty, Feasibility, Specificity, Impact, and Workability (https://hbr.org/2023/07/how-generative-ai-can-augment-human-creativity). I found GPT’s output somewhat speculative for this — you would have to know a lot about the inside of a company to accurately answer some of these questions.

Will this work for small businesses?

One might argue that Apple is an overly rosy example here. As one of the most followed brands in the world, LLM training sets likely carry extensive information that would help it do well in suggesting ideas about Apple. And you are not Apple.

As a test, I tried the same initial ideation prompt for a pair of restaurants in my local community owned by a couple I happen to know. I simply told GPT 4 to look up the two restaurants and suggest possible offerings they could provide. I received a better list than I expected. That said, judging by my private knowledge of the business, GPT 4 produced a high proportion of ideas that my restaurant friends had already tried or considered, because the average of the internet doesn’t “know” these things. The main thing a smaller business might need to do here is describe its situation and business in more detail.

Will “agents” change all this?

Everything here is conditional on technology at the time of writing, and OpenAI’s developer day occurred in the middle of my preparation of this piece.

A new feature that emerged from this day was the ability to use GPT to build independent agents that are pre-programmed to address certain issues in a certain way. OpenAI described these “GPTs” as “custom versions of ChatGPT that combine instructions, extra knowledge, and any combination of skills.” One can create a GPT that will implement most if not all of what I do in this article. Other agent-building technologies will surely come along.

For this particular application, the “Product Idea GPT” I built didn’t produce very different results than my manual process, and as an individual I find the manual process more attractive. But, for frequently executed and/or highly structured tasks, the efficiency of giving instructions once and then “pressing a button” each time you want an answer may be more appealing. GPTs may also be shared — though currently only with other paying users — so if you have a successful process, you can export it to others. Maybe I would not have to write this article!

Conclusion

Start.

I’ll finish by suggesting you simply try an LLM or two. GPT 3.5 (which does appear inferior to GPT 4), Claude, Bing, and Bard are (so far) all free. My sense of using these is that, like a lot of software, you only really learn to use it through practice. I work mostly in marketing areas, and Gen AI has been a useful addition to my repertoire. Things might be different in your domain. Try the five steps I have suggested and see if they work for you.

The higher-level skills at this point are prompting, iterating, and editing. Prompt, iterate on the results, and filter the ideas. The answers are not going to be perfect: initial hits will often be mediocre or just weird. Go back to the image I generated using Dall E 3 to illustrate this article. When you look closely, the woman is wearing one shoe with a heel and one that is flat. The table is oddly canted relative to the floor. And her legs are embedded inside the table! But it is a place to start. And poor starts and mediocre outputs can be fixed:

Source: Dall E 3.

Wishing you good discoveries . . .

Resources

· Ethan Mollick has become an important resource for educational change for business schools in this area, and he frequently reviews new developments in GenAI. His substack is well worth a look if you are interested in more ideas and examples: https://www.oneusefulthing.org/.

o Here is Ethan’s take on automating creativity with AI, which includes links to some academic studies: https://www.oneusefulthing.org/p/automating-creativity

· Here is a blog post from August 2023 talking about various studies: https://jakobnielsenphd.substack.com/p/ideation-is-free-ai-strong-creativity

· This is an interesting recent piece on how to use AI as part of a broader creative process: https://www.sps.nyu.edu/homepage/emerging-technologies-collaborative/blog/2023/embracing-creativity-how-ai-can-enhance-the-creative-process.html

Bruce Clark is an Associate Professor of Marketing at the D’Amore-McKim School of Business at Northeastern University. He researches, writes, speaks, and consults on managerial decision-making, especially regarding marketing and branding strategy, and how managers learn about their markets. You can find him on LinkedIn at https://www.linkedin.com/in/bruceclarkprof/.

Beyond the two images and the quoted text, no AI was used in the writing of this article!


Bruce Clark

A practical business professor musing on marketing and management from his not quite ivory tower. Writings do not represent the views of Northeastern University