A Generative AI Teaching Exercise for Marketing Classes (that might inspire ideas for others)

Bruce Clark
Jun 14, 2023


Generated by Bing Image Generator. Prompt: “robots studying in a college library”


The buzz around all things generative AI has hit academia like a sledgehammer. From concerns about academic integrity to how using an “assistant” might affect student learning, professors everywhere are talking about it. My department had a vigorous online discussion and debate over the December break, but it was far too early for us to set policies.

So I decided to run a couple of experiments.

This article describes what I did with two classes in late March and early April of 2023, one of 35 undergraduate marketing concentrators and the other of 12 graduate marketing concentrators. These were both second courses in the concentration (i.e., students had already taken a classic Intro course), and at both levels the course dove more deeply into applications and examples.

This was a learning experience for all of us, and I want to share the details in the hope that it might prove useful to other instructors. It is an exploration of marketing topics, but you may find it inspires ideas even outside a marketing classroom.

The Setup

For this exercise, you need a core product concept of some kind. I used different short cases with my two classes (links at the end of this article), but a case is not necessary. You could do this with any company with which students are likely to be familiar. A current news article might also be a good source.

Students will need access to generative AI programs. My experience in March was that this was not a given. Here is an example of the assignment text I provided to my undergraduates:

We are going to experiment in this session with generative AI tools. I would like you to bring a laptop and have access to one tool for use during class. I do not care which tool it is, but here are two that are satisfactory:

· ChatGPT. You can register for a free account at OpenAI.com.

· Bing Chat. Also free. You will have to be approved from a waitlist. This can take a few days, but I gather it has become much faster. Here are instructions: How to Sign Up to Try the New AI-Powered Bing Search Engine (makeuseof.com). Installing Microsoft Edge is the easiest way to use Bing on a PC, but Bing is also available as an app for the iPhone (https://apps.apple.com/us/app/bing-your-ai-copilot/id345323231) and an extension for Safari (https://apps.apple.com/us/app/microsoft-bing-for-safari/id1560727432?mt=12)

Readings:

1. Case: The Lululemon Mirror (in the assignment for March 28 in the module)

Discussion Questions

1. What marketing initiatives should Michael Aragon pursue to improve Mirror’s fortunes? Consider:

a. Target Market and Positioning

b. Marketing mix [for non-marketing people, these are the product, pricing, communication, and distribution decisions an organization makes about an offering]

Details on how to sign up have obviously changed since March. For graduate students, I added an overview reading on AI (https://www.bcg.com/x/artificial-intelligence/generative-ai) and asked them to have access to an image generator such as Midjourney or DALL-E. Bard was also available by the time I worked with my grad students, and I added it as an option.

There is not nearly enough information in either of the cases to do a deep analysis of the discussion questions. Rather, I asked students to read the cases as background to an in-class discussion. This was a light preparation day.

The Opening

I described the purpose of the class as understanding what these tools “can and cannot do,” specifically in the context of how good they are at coming up with “marketing strategy ideas.” This was highly relevant to students, as they were all working on a marketing plan project that would require them to answer the discussion questions about the product concept their groups were proposing.

How many of you are using generative AI?

I started by simply asking how many students had used one of these programs prior to this class. About half of the grad students and about a third of the undergrads indicated they had. With the undergrads, I asked how many had used it “for another class” which may have lowered the proportion, and there’s also likely some reluctance to admit you used it for class at this point.[1] That said, questions and later feedback suggested overall familiarity among general marketing students was relatively low. (I have heard from colleagues that the proportions in more technical, programming-oriented classes are higher.)

How many of you have told your professors?

With the undergrads, I then asked how many of their professors knew they were using the programs. Exactly one hand went up. The (very good) student was in a psychology class where using GPT was required as a way of provoking students to consider how the program “thinks.” I let the difference go unremarked at this point.

The Exercise

I am mostly going to focus on the undergraduates here and then will talk about some differences with the grad students. Note both classes were approximately 100 minutes in length.

I broke the undergrad class into groups of four or five, sizes I generally find work well. Students self-formed seven teams. I had hoped to have groups that mixed different AI packages so that students could explore the differences among tools, but a quick poll revealed the vast majority of the class was on GPT.

What would be good target markets for the Lululemon Mirror?

As a warmup, I asked students in their groups to discuss among themselves what some “sensible target markets” would be for the Lululemon Mirror.[2] Groups reported out their targeting ideas, which I recorded on the board.

I then asked students to have AI suggest target markets for the Mirror and to discuss as a group what they found. I instructed students to keep track of their prompts and then to report out interesting targets their groups had not thought of, again recorded on the board.

What do you think of these targets? Are they useful?

There were several new ideas, and GPT spontaneously created a persona (a fictional customer) for one group. I then asked students what they thought of the ideas the AI came up with and whether AI was useful. There was a mix of optimism and pessimism (this was consistent throughout the exercise). The conversation next turned to the prompts people used. This was highly informative for students, as it showed how different approaches produced different results (a learning objective for this exercise).
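To see the prompt-sensitivity point ahead of class, an instructor could run variations programmatically rather than in a chat window. Here is a minimal sketch, assuming OpenAI's Python library and an API key; my students used the free chat interfaces, so the model name and prompt wording below are illustrative only.

```python
# Minimal sketch: show how differently framed prompts change the output.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Two framings of the same targeting question (illustrative wording)
prompts = [
    "Suggest target markets for the Lululemon Mirror, a smart fitness "
    "mirror offering live and recorded classes.",
    "You are a senior brand strategist. Propose three specific target "
    "markets for the Lululemon Mirror, a smart fitness mirror offering "
    "live and recorded classes, and justify each in one sentence.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    print("PROMPT:", prompt)
    print(response.choices[0].message.content)
    print("-" * 60)
```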

Take one of the target markets AI proposed and ask it to come up with a positioning for the Mirror

This initial approach set the stage for the remainder of the exercise. I next asked groups to pick one of the target markets their AI came up with and ask AI to come up with a positioning (roughly, a statement of key benefits) of Mirror to that target market. Groups reported out their chosen targets and positionings, which often included a formal positioning statement. (GPT can produce these.) This in turn led to another discussion of differing prompts and where outputs were similar or differed, and how useful the outputs were in general. A student summarized the outputs as “average.”

Pick your favorite positioning and ask AI to suggest creative product and pricing ideas for that positioning

At this point we were halfway through the class, so I sped things up. With more time, one could ask for separate sets of ideas around each element of the marketing mix, but my prompt at this point was to ask AI for product and price ideas. This was a rich discussion, as GPT suggested everything from simple product features to entirely new products and pricing models. We discussed the quality of the ideas produced; both here and in positioning, they ranged from poor or infeasible to very innovative.

Now ask your AI to create a communications campaign to execute against this positioning. Have it script a TV ad as an example

Discussion occurred as in previous sections. I dropped a distribution question for time.

Could you have come up with these ideas on your own?

I asked students if they could have come up with all these ideas on their own given enough time. The (perhaps optimistic) sense was probably, but this led to a discussion of the sheer speed of these tools. The value of iteration was highlighted by one student.

Teaching Tips

You should be at least somewhat familiar with how large language models (LLMs) work and their pros and cons to help guide the discussion. It’s also very helpful to do the exercise yourself prior to class so that you have some sense of the kind of answers AI might come up with. You’ll need to do this before each class session, as the technology is advancing over time.

Note that generally I gave less than five minutes for each question groups asked AI. It's fast! What eats time is how far you go in discussing the outputs. I asked groups to reflect on their experiences: what worked and didn't, what surprised them. I also reflected on my own experience of using AI and talked some about what the state of the art appeared to be at the end of March. The point is not so much to come up with great outputs, but to help students understand generative AI's strengths and weaknesses and where and how they might use it.

Beyond the questions I have so far highlighted in this article, here are other discussion prompts that may be helpful:

· What were your assumptions about [topic]? Does AI change them?

· What were your assumptions about how generative AI would work? Did this exercise change them?

· Why did you use that prompt?

· Enter the same prompt again/regenerate the response. How do the outputs change?

· Give your AI more context/have it imagine it is a kind of person. How do the outputs change? (Both this and the previous prompt are sketched in code after this list.)

· How would you decide X was a good idea? What else would you want to know? How would you test it?

· How would you try to control for bias in the output?

· Should generative AI be regulated? Why or why not? If yes, how?
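Two of the prompts above, regenerating a response and giving the AI a persona, have direct API-level analogues. For instructors who want to demonstrate them outside the chat interface, here is a minimal sketch, again assuming OpenAI's Python library and an API key; the model name and prompt wording are illustrative, not what my students used.

```python
# Sketch: resample the same prompt, then add persona context.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()
question = "Suggest a positioning for a smart fitness mirror aimed at busy parents."

# 1. "Regenerate": sampling the same prompt repeatedly gives different
#    outputs because generation is stochastic at temperature > 0.
for i in range(3):
    out = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=1.0,  # higher values produce more varied text
        messages=[{"role": "user", "content": question}],
    )
    print(f"Sample {i + 1}:\n{out.choices[0].message.content}\n")

# 2. Persona: a system message is the API analogue of telling the chat
#    interface to "imagine you are a kind of person."
out = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You are a skeptical CFO reviewing marketing proposals."},
        {"role": "user", "content": question},
    ],
)
print("With persona:\n" + out.choices[0].message.content)
```

In class, the Regenerate button and an opening line of the form "imagine you are a skeptical CFO" accomplish the same two things.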

As a first pass, this was an exercise on ideation. How can you get generative AI to produce helpful ideas? I’ll talk more about what else I might do in a later section.

Stepping Back

Have we achieved magic?

At this point I asked students if we had “achieved magic,” noting the relevance of these kinds of questions to their marketing plan projects. Does AI produce helpful marketing strategy and tactics recommendations? The mix of optimism and pessimism noted earlier persisted, though there was a strong consensus that speed was a virtue.

Is this cheating?

I talked about our university’s academic integrity policy as a setup for this section. This again proved rich ground for discussion and a divided class. One student raised the IP issues around image generators. Another suggested that the key question is whether you use AI to help you think or to do the thinking for you, arguing the former is not cheating but the latter is. Students raised the comparison to other tools such as calculators, Google, or Wikipedia.

I then asked if this would represent “cheating” at work. (Many but not all students in the class had already completed at least one co-op experience at Northeastern.) If you come up with a great idea, does your boss care if it came from AI? Students were divided. We talked about how some tasks might be automated in the workplace.

Does disclosure solve the problem?

If your professor or your boss knows you’re doing it, is it OK? One student positioned this in terms of speed: do you tell your boss if something that used to take five days now takes 50 minutes?

We then talked about what value humans add to GPT output. In this regard, I talked about the higher-level skills being prompting, iterating, and editing, a point to which I will return.

I closed the class with about a ten-minute lecture summarizing some key highlights about the state of AI and marketing as of late March 2023.

Aftermath

One of the interesting post-class results of the exercise was in written student reflections. Many students found the concept of using AI incredibly intriguing, while at the same time recognizing its limitations (false statements, hallucinated sources, “average” outputs, ethical issues). One student remarked on its inability to capture a “human touch” in its writing. Offset against this was speed. One student wrote: “GPT’s strengths are centered around quantity of information and speed.” Many students picked up on the varying quality of the outputs, which was a learning objective for the class.

One student mentioned its power as a first-draft producer: “I always have trouble with the first draft and it is always terrible when I make one. Being able to take away that step of a relatively bad first try and having AI do it instead, takes away some tedious work for me and then I can build all of my own creative ideas on to that base of a script.”

Overall, both classes exhibited what I’ll characterize as interest coupled with healthy skepticism.

I will say there also were a handful of students who found AI depressing in terms of their job prospects, enough so that I opened the next undergrad class session with a short “we’re not all doomed” talk.

I scheduled this late in the term so that students were already well along on their marketing plan projects with three workshops and three sets of draft notes submitted. This was intentional; in January I was unsure how much I wanted them using AI in these projects. However, after this class I decided to allow students to use AI as they finished their marketing plan projects as long as they disclosed its use. Two student teams effectively used image generators for advertising and packaging concepts respectively.

The Grad Class

Evaluating an analysis

The graduate class used a case (Guardian Angel, GA, link below) where one could imagine the product competing in several different areas. In the opening to this class, I had student teams ask AI to suggest product categories in which the GA product could compete. I then had students pick a product category and ask AI to indicate what the key success factors (KSFs) would be in this category. I instructed students to ask for sources if their program did not provide them automatically (Bing Chat does as of this writing).

I asked teams to evaluate the KSFs produced and to look at the sources. The former showed uneven quality. The latter was unsurprisingly revelatory, with incorrect, invented, or simply missing sources across teams. We then went on to complete the marketing strategy and tactics exercise in a manner similar to what I described previously.

Image generation

I asked grad students to also have access to an image-generating tool in this class. Beyond the basic marketing strategy exercise, I then had students use AI images in creating potential ads. This was especially helpful in illustrating the limitations of image generation at that time: it took several iterations for a given team to get anything close to reasonable results. It also was an opportunity to discuss algorithmic bias. Prior to class, I had attempted to create an image for an ad targeted at caregivers of elderly Alzheimer’s patients. My prompts on the theme of “elderly woman walking in rain carrying an umbrella” produced 27 images of white women and one image of a South Asian woman. Why did AI produce this mix?
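For anyone who wants to repeat the bias demonstration more systematically, here is a minimal sketch, assuming OpenAI's Python image API rather than the consumer tools we used in class; batch-generating from one prompt and reviewing who appears in the images is the whole experiment.

```python
# Sketch: batch-generate images from one prompt to inspect the
# demographic mix. Assumes the openai Python package (v1+); the
# DALL-E 2 endpoint allows several images per call.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-2",
    prompt="elderly woman walking in rain carrying an umbrella",
    n=4,             # four images per call; loop for a larger sample
    size="512x512",
)

# Print the temporary URLs; download and review who the model depicts.
for image in result.data:
    print(image.url)
```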

Workplace issues

We also spent more time talking about this in the context of the workplace, as most of these students had substantial work experience of some kind. The disclosure issue loomed large. We talked about Reid Hoffman’s metaphor of AI as a research assistant, and my own (disclosure!) use of the tool in this regard. If I used “Katherine” as a research assistant, it would be routine to acknowledge her help in writing an article. On the other hand, no one says “this research benefited from the assistance of Google.” Is AI Katherine or Google? One student observed that given GPT’s notorious reputation for inaccuracy, indicating you had GPT’s help might send a negative quality signal.

With these older students, I also directly provoked the “are we all doomed?” discussion. The sense was “no,” but there was also some discomfort in the room. One student remarked, “I hate that it’s good at some of this.” We talked about what tasks might be outsourced to AI.

Expanding/Modifying the Exercise

First time through, you always learn something with an exercise. Here are some thoughts on things one could do differently in this context and areas where I might go.

Assign students to tools

One of my plans was to have teams in which half the students used one tool and half used a different tool, so that they could compare the tools’ strengths and weaknesses. In the event, this did not work in either class. Assuming the tools remain free, I might assign students to specific ones to make this comparison possible.

Assign prework

With both classes, I had them walk in with little prior preparation. Partly that was because I wanted to see what kind of work they could produce with the tools. In another round, I might have them do some work as individuals with the software and the targeting question. That would save some time in the live class.

Have them work to improve outputs

The current exercise was all about a chain of first-draft ideas to highlight the possibilities (and perils). With more time, or in a different class, it would be helpful to have students take first drafts and make them better. If one uses generative AI, this is an important skill to develop.

Have the class work on different problems

One could assign different teams to work on different situations or different aspects of the same situation. This would allow the class to explore more deeply.

Post results

For reasons of time, I did not have students post results to the class. This could be a useful post-class activity to show students different kinds of prompts and tools. If one posted these live, it would also be an interesting way to explore prompts more systematically.

Require a systematic reflection

My current classes use modules that organize several sessions around a single topic area. The reflections I quoted earlier were extracted from general reflections on the module, but one could require all students to systematically reflect on this exercise.

Using Generative AI Differently in Class

Beyond this specific exercise, there are other types of classes I might try going forward.

An Analysis Class

This would expand on the analysis topic I used with the graduate students. It’s quite clear at this point that AI can produce the kinds of basic industry analyses MBA students are taught to work with, such as SWOT or PESTEL. Ethan Mollick of Wharton has demonstrated that it can analyze spreadsheets (Twitter thread here: https://twitter.com/emollick/status/1650001457804988419?s=20). Rather than an ideation class, this class would have students use AI to produce and evaluate some kind of strategic analysis.
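As a sketch of what such a class might start from (I have not run this code in class; it assumes OpenAI's Python library and an API key), one could have the model draft a SWOT that students then verify and critique:

```python
# Sketch: have the model draft a SWOT analysis for students to verify.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

company = "Lululemon"  # illustrative; any company students know works
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": (
            f"Produce a SWOT analysis for {company}. Give three bullets "
            "per quadrant, and flag any claim you are unsure of so a "
            "student can fact-check it."
        ),
    }],
)
print(response.choices[0].message.content)
```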

An Ethics Class

There’s easily a whole class session on the ethical and legal issues involved in generative AI. Image generators are the easiest place to go here for a marketing topic, as it’s clear these can produce biased and inaccurate outputs.

A Tactic Class

Rather than a broad-brush swing at a marketing plan, one could focus an entire class on a particular topic. It would be quite easy, I expect, to fill a class session on having AI produce a communications plan featuring personas (GPT will make them up!), media ideas, sample ads, and sample visuals. This also would allow me to work more with skills in iterating (e.g., repeat prompts, diving into details) and editing (e.g., building on AI writing, improving the ideas presented).

Mix with a Creativity Class

This fall, I am likely to combine generative AI with a creativity class session I already run around product concepts. Students would do the creativity activity in the first third of the class and then spend the remainder of the class using AI to build out modifications and/or a plan around their idea.

Implement a “Sixth Man”

This is a basketball term for the first player to come off the bench to support the five starters. Students in both classes remarked on the idea of AI helping a group, and in this regard I like the idea of using AI as a sixth man for a student team. I could ask students on their project teams to have AI evaluate their ideas, identify risks, etc. Rather than create the first draft, here AI evaluates the first draft.

Ethan Mollick, for example, has used GPT to create “pre-mortems” that identify the problems with a new product idea (Twitter thread here: https://twitter.com/emollick/status/1633175746435862529?s=20). This strikes me as a case where the AI can act as an objective critic (beyond me!) of the idea a student team has fallen in love with. I expect this might be useful in a number of settings beyond marketing classes.
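Here is a minimal sketch of the sixth-man idea with a pre-mortem framing, again assuming OpenAI's Python library; the prompt wording and the example idea are mine, not Mollick's:

```python
# Sketch: AI as a "sixth man" critiquing a team's draft idea via a
# pre-mortem. Assumes the openai Python package (v1+).
from openai import OpenAI

client = OpenAI()

# Hypothetical student idea, purely for illustration
team_idea = "A subscription box of eco-friendly workout gear for students."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": (
            "Imagine it is one year from now and this product launch has "
            "failed badly. Write the pre-mortem: list the five most "
            "likely reasons it failed, with one mitigation for each.\n\n"
            f"Idea: {team_idea}"
        ),
    }],
)
print(response.choices[0].message.content)
```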

Allow use of AI on projects from the beginning

I have not made a decision about this, and my university may yet produce a policy, but I am tempted to allow students to use this from the start, perhaps with an early training session.

I hope this has been helpful to you. Please reach out if you wish to know/discuss more.

Resources: Free Cases

Here are two short cases that I have written, either of which is suitable for use for this exercise. Both are free for use as long as the copyright notice is retained. The Guardian Angel case is fictional, but inspired by a real company.

Lululemon Mirror: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4471290

Guardian Angel: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4471285

Other Resources

· Ethan Mollick has become an evangelist for how to think about educational change for business schools in this area. His substack is well worth a look if you are interested in more ideas and examples: https://www.oneusefulthing.org/

· While I am skeptical about books in general in a category that is so fast moving, the Prompt for Brands ebook by Richard Bowman and David Boyle is a useful compendium of many ways GPT can be used in marketing. It has many examples that one can reproduce or modify for learning purposes: https://prompt.mba/en-us [Disclosure: I was given a free copy of this book to review. I receive no compensation from its sales.]

Bruce Clark is an Associate Professor of Marketing at the D’Amore-McKim School of Business at Northeastern University where he has been a teaching mentor for both online and on-ground teaching. He researches, writes, speaks, and consults on managerial decision-making, especially regarding marketing and branding strategy, and how managers learn about their markets. You can find him on LinkedIn at https://www.linkedin.com/in/bruceclarkprof/.

Beyond the opening image, no AI was used in the writing of this article!

[1] I’ll suggest this maps well to Spring 2023 surveys suggesting that more employees were using GPT than were telling their bosses about it, e.g., https://www.businessinsider.com/70-of-people-using-chatgpt-at-work-havent-told-bosses-2023-3.

[2] Following is the description of the Mirror product from the case: “a full-length wi-fi and Bluetooth-enabled fitness mirror that doubled as a video screen. (See Exhibit 1 for pictures of the Mirror.) The screen allowed owners to easily access a large number of recorded and live virtual fitness routines and classes encompassing many different genres. A two-way camera allowed customers and instructors to see each other to improve workout techniques.”
