Claude Has Thoughts on the Google I/O Keynote
I ask Claude to debate the implications of the keynote for marketing
Yesterday morning, I opened Shelly Palmer’s excellent e-mail newsletter on GenAI and business. It usually starts with a few hundred thoughtful words talking about some current issue or news in the GenAI world, followed by links to interesting news items. Yesterday it opened with an urgent notification: “Here’s the link to Sundar’s keynote at Google I/O. Stop what you’re doing and watch it.”
I find Shelly very reliable, and I can’t remember the last time he said something this urgent, so last night I clicked on the link.
It’s a two-hour video.
Sorry, Shelly. Not enough hours in the day. That’s what I have you for. Or possibly, Claude.
I am going to give you a case study of one of my favorite uses for LLMs, then discuss some substantive and process implications at the end.
Summarize This Transcript
I am forever grateful to one of my Twitter correspondents, Gillian Rightford, for pointing out an extremely good use case for LLMs: summarizing transcripts. I strongly dislike videos and podcasts for learning information — I’m a reader. You have to give me a really good reason to suffer through a two-hour video.
I opened the YouTube transcript of Pichai’s talk, copied it into Claude, and asked it to “summarize this transcript.” It spat back a well-organized compilation of product announcements. Note that I did not fact-check this, though it appeared broadly consistent with the news coverage I had seen and with a separate Perplexity search on coverage of the I/O event.
Something I have added to Gillian’s method is to ask an LLM to summarize transcripts in different ways (by speaker, by theme, by time, and so on). This different cut at the transcript often suggests different ideas. Since I am a marketing professor, I asked about marketing implications. I prompted, “Now summarize this transcript around implications for marketing of the various announcements.”
It produced a second summary, very different from the first. It was a rosy one, since it was summarizing the Google keynote at a Google event, but it surfaced lots of marketing implications consonant with the general tone of “AI will transform marketing.” (I’m not reproducing these here because I’m trying to keep this article short enough that you don’t need Claude to summarize it.)
Criticize the Summary
One of the great things about LLMs as thought partners is that you can ask them “what’s wrong with this idea?” and then debate the topic with them.
My next prompt:
Step beyond the transcript. How would an AI skeptic react to these marketing implications?
Claude response begins —
An AI Skeptic’s Perspective on the Marketing Implications
Skepticism About Promised Capabilities
- Reality vs. Demonstration Gap: The polished demos at I/O likely represent ideal conditions that mask limitations in real-world use. AI skeptics would point out Google’s history of showcasing capabilities (like Duplex) that took years to deliver at scale, if ever.
- Technological Determinism: The presentation assumes technology adoption as inevitable rather than acknowledging consumer resistance to AI-mediated experiences, especially for high-consideration purchases where trust is paramount.
Economic and Social Concerns
- Concentration of Power: Rather than democratizing marketing, these tools may further concentrate power with large platforms and brands with resources to optimize for AI systems, creating winner-take-all dynamics that harm competition.
- Employment Displacement: The creative tools like Flow and Imagen 4 threaten to displace millions of marketing professionals, photographers, and videographers without clear pathways for economic transition.
- Algorithmic Manipulation: AI-mediated purchases could create new forms of market manipulation, where subtle optimizations invisibly steer consumers toward higher-margin products or platform-preferred brands.
Privacy and Autonomy Issues
- Surveillance Capitalism on Steroids: “Personal context” integration across Google services represents an unprecedented privacy intrusion, creating detailed behavioral profiles beyond what consumers can meaningfully consent to.
- Agency Undermining: When AI agents make purchasing decisions, consumers potentially lose direct connection with brands and products, reducing their agency and potentially their satisfaction.
- Cognitive Outsourcing: Dependence on AI for product discovery and decision-making may atrophy consumers’ critical thinking skills about marketing claims and product quality.
Technical and Practical Limitations
- Training Data Biases: The models powering these tools reflect biases in their training data, potentially causing demographic and cultural blind spots in generated marketing content and recommendations.
- Environmental Impact: The computational resources required for these AI systems (especially for multimodal models like Gemini 2.5) have significant environmental costs that contradict many brands’ sustainability commitments.
- Reliability Questions: Early adoption of these technologies may lead to embarrassing failures, inconsistent experiences, and customer frustration that damages brand reputation.
Ethical and Regulatory Concerns
- Authenticity Crisis: Despite SynthID watermarking, the line between AI-generated and human-created marketing content will blur, potentially undermining consumer trust in all marketing communications.
- Regulatory Backlash: The aggressive integration of AI into marketing will likely trigger regulatory responses around transparency, consent, and anti-competitive practices that could disrupt implementation.
- Accessibility Divides: While touting accessibility benefits, these tools may create new digital divides between consumers who can navigate AI interfaces and those who cannot, particularly affecting older demographics.
A skeptic would argue that Google’s vision represents a techno-utopian perspective that minimizes significant downsides, overlooks consumer resistance, and fails to address who truly benefits from this AI-mediated marketing ecosystem. The narrative of inevitable progress serves Google’s business interests while glossing over profound questions about choice, transparency, and the future relationship between brands and consumers in an increasingly automated marketplace.
Claude response ends —
I found this a delightfully pointed rebuttal. I asked Claude to develop the Agency Undermining point and then briefly debated that — hasn’t Google Search already undermined the kind of agency this critique suggests?
Debate the Issue
Then I decided to try something new. Next prompt:
Use our discussion so far to script a 20-minute debate between Jack and Jill. Jack will present and defend the argument that GenAI is good for marketers and consumers. Jill will present and defend the argument that GenAI is bad for marketers and consumers. Allow each debater to spend three minutes presenting their core argument, then use the remainder of the time to have Jack and Jill engage in civil and thoughtful debate
What follows is one of the most civil and thoughtful debates, delivered entirely in complete sentences, that you will ever read. No live debate was ever this tidy, but it’s pretty good.
Claude response begins —
GenAI and Marketing: A Debate
Opening Statements
Jack’s Opening Statement (3 minutes)
Good evening. I’m here to argue that generative AI represents an extraordinary opportunity for both marketers and consumers — a true win-win that will transform commerce for the better.
For marketers, GenAI democratizes capabilities that were previously available only to large enterprises with substantial resources. Small businesses can now create professional-quality visual assets, personalized content at scale, and sophisticated customer experiences without massive creative teams. The tools revealed at Google’s I/O — from Imagen 4 to Veo 3 to Flow — dramatically lower production costs while raising creative possibilities. A small business can now produce a high-quality video advertisement in hours instead of weeks, and at a fraction of the cost.
For consumers, AI enhancements mean more relevant, personalized experiences that respect their time and attention. AI-powered search understands nuanced queries, saving consumers from sifting through irrelevant results. Virtual try-on eliminates the frustration of ordering clothing online only to find it doesn’t fit or look as expected. Price tracking ensures consumers get the best deals without constant vigilance.
Perhaps most importantly, AI addresses the paradox of choice that has plagued modern consumers. The overwhelming array of options in today’s marketplace creates decision fatigue and often leads to purchase dissatisfaction. By intelligently filtering options based on personal preferences and needs, AI acts as a knowledgeable concierge rather than a restrictive gatekeeper.
The future of AI-mediated commerce isn’t about removing human decision-making — it’s about enhancing it. It’s about elevating marketing from manipulation to genuine value creation. When AI handles routine tasks and information processing, marketers can focus on creating genuinely superior products and more meaningful brand experiences. Meanwhile, consumers gain powerful tools that level the information asymmetry that has traditionally disadvantaged them in the marketplace.
What we’re witnessing isn’t the undermining of agency but its augmentation. Just as calculators didn’t destroy our ability to understand mathematics but freed us to focus on higher-level concepts, AI shopping assistants won’t erode our decision-making capabilities but will elevate our consumption to more considered, values-aligned choices.
The transition won’t be perfect, and thoughtful regulation will be necessary. But the core proposition remains compelling: AI will enable more efficient, more personalized, and ultimately more satisfying commercial relationships between brands and consumers. This isn’t just good for business — it’s good for society.
Jill’s Opening Statement (3 minutes)
Good evening. I’m here to challenge the rosy picture of AI-mediated marketing that we’ve just heard — a vision that prioritizes convenience and efficiency while glossing over profound threats to consumer autonomy, market competition, and social welfare.
Let’s start with the impact on marketers themselves. While Jack celebrates democratization, the reality is that these AI systems will primarily benefit large platforms and enterprises that can optimize for algorithmic visibility. Google’s I/O announcements revealed a world where marketing success increasingly depends on understanding and gaming AI systems rather than connecting authentically with human audiences. Small businesses lacking AI expertise will find themselves increasingly disadvantaged, creating new digital divides.
For creative professionals — the photographers, videographers, copywriters, and designers who bring marketing to life — tools like Flow and Imagen 4 don’t represent opportunity but existential threat. These aren’t just productivity enhancements; they’re displacement technologies that will eliminate countless creative jobs without clear alternatives for those workers.
For consumers, the convenience narrative masks a troubling loss of agency. The “AI concierge” Jack describes is actually an intermediary with divided loyalties, optimizing for platform revenue and partner relationships while creating the illusion of serving consumer interests. Each step in AI automation — from search results to recommendations to autonomous purchasing — further distances consumers from meaningful choice and market awareness.
This new paradigm also intensifies surveillance capitalism. The “personal context” integration showcased at I/O represents unprecedented data harvesting across services, creating detailed behavioral profiles that enable increasingly sophisticated manipulation. The trade of privacy for convenience is rarely made with fully informed consent.
Perhaps most concerning is how these systems shape rather than merely serve preferences. When AI consistently filters and prioritizes based on past behavior, it creates feedback loops that narrow consumer horizons rather than expanding them. Serendipity, exploration, and genuine discovery — essential elements of a vibrant marketplace — are sacrificed for algorithmic efficiency.
What’s being presented as enhancement is actually intermediation — the systematic insertion of proprietary black-box systems between consumers and brands. This doesn’t augment agency; it transfers it to corporate entities whose interests fundamentally diverge from those of both consumers and smaller market participants.
The future of AI in marketing isn’t inevitable — it’s a choice. We can develop systems that genuinely empower consumers with transparency, meaningful control, and enhanced market knowledge. But Google’s vision, despite its appealing packaging, moves us toward a more manipulated, concentrated, and ultimately less human marketplace. We deserve better.
The Debate
Moderator
Thank you both for those thoughtful opening statements. Now let’s move into our discussion phase where you can engage directly with each other’s arguments. Jack, you’ve heard Jill’s concerns about job displacement and agency loss. How would you respond?
Jack
Thank you. Jill raises important concerns, but I think they reflect a static view that doesn’t account for how markets and human behavior adapt to technological change.
First, regarding creative professionals: history consistently shows that automation tends to transform rather than eliminate work. When desktop publishing emerged, it didn’t eliminate designers — it changed their focus from production tasks to higher-level creativity. Similarly, AI tools like Flow will likely shift creative professionals toward direction, curation, and strategy rather than technical execution.
As for consumer agency, I think there’s a fundamental misunderstanding here. Agency isn’t measured by the number of micro-decisions we make but by how effectively we can realize our values and intentions through our choices. When I use Google Maps, I’m not lamenting the loss of agency from not personally calculating every possible route — I’m celebrating how the tool helps me more effectively reach my destination. Similarly, AI shopping assistants don’t remove choice — they make it more effective by handling information processing that humans find overwhelming.
The surveillance concern is legitimate, but it presents a trade-off that consumers should be allowed to evaluate rather than having the decision made for them by restricting technology. Many consumers will gladly share data in exchange for personalized experiences if the value proposition is clear and controls are transparent.
What I find most striking is how Jill’s argument could apply to virtually any marketplace innovation throughout history. Department stores were once criticized for undermining the “authentic” relationship between consumers and individual merchants. But they created tremendous consumer value through convenience and selection. AI is simply the next evolution in this ongoing process.
Jill
Thanks for that response, Jack. I appreciate your historical perspective, but I think it misses crucial differences between past innovations and what we’re seeing with GenAI.
The scale, speed, and scope of potential displacement is unprecedented. When desktop publishing emerged, it happened over decades, giving the market time to adapt. The current AI revolution is compressing that transition into mere years or even months. The Google I/O demonstrations showed systems that can already replace significant portions of creative work, and they’re improving exponentially. This isn’t gradual transformation — it’s rapid displacement.
Your Maps analogy actually illustrates my concern perfectly. Yes, Google Maps enhanced our ability to navigate, but it also decimated an entire industry of independent mapping companies and created near-monopolistic control over navigation data. Now businesses live or die by their visibility on Google Maps, creating power asymmetries that didn’t exist before. This isn’t hypothetical — it’s the established pattern of digital intermediation.
On the agency point, you’re creating a false dichotomy between overwhelming choice and algorithmic filtering. There’s a vast middle ground where tools could enhance decision-making while preserving autonomy — providing information, education, and comparison tools without pre-selecting options based on opaque criteria. The demos at I/O showed systems designed to minimize conscious engagement rather than enhance it.
Finally, framing this as consumer choice overlooks how these systems shape the environment in which choices are made. When AI-mediated commerce becomes the default, meaningful alternatives disappear. It’s like saying workers “chose” to use email — technically true but practically meaningless when the entire professional world standardized around it.
Real innovation should enhance human capabilities rather than replacing human judgment. I’m not against AI in marketing — I’m against implementations that transfer power from humans to proprietary algorithms without meaningful transparency or accountability.
Jack
You make compelling points about the pace of change, Jill, but I think you’re overlooking how these tools could actually rebalance power dynamics rather than reinforce them.
Consider small businesses that currently can’t compete with large corporations in creating engaging content. Tools like Imagen 4 and Flow give them capabilities previously available only to enterprises with massive creative departments. A local restaurant can now create professional-quality video promotions without expensive equipment or specialized staff. This democratization could actually reduce market concentration by lowering barriers to effective marketing.
Regarding transparency, I agree that black-box systems are problematic. But that’s an implementation issue, not an inherent feature of AI-enhanced marketing. Google’s demonstrations included features like thought summaries in their API that make AI reasoning more transparent. We should advocate for explainable AI and meaningful controls rather than rejecting the technology outright.
Your point about shaping environments is fair, but all technologies do this. The question is whether the new environment offers net benefits over the previous one. For many consumers — particularly those with disabilities, time constraints, or decision anxiety — AI assistance represents a meaningful improvement in their commercial experience. The try-on feature demonstrated at I/O, for instance, makes online shopping more accessible and reduces waste from returns.
I’d also challenge the notion that AI necessarily reduces serendipity. Well-designed systems can actually increase discovery by occasionally introducing novel options based on adjacent interests or emerging trends. This is fundamentally different from the filter bubbles of current recommendation systems.
The key is thoughtful implementation with appropriate guardrails. We shouldn’t conflate potential misuses with the technology itself.
Jill
I appreciate your nuanced response, Jack. Let me address a few of your points while bringing us back to what I think is the core issue.
While democratization sounds appealing, the reality is that these tools will likely create new hierarchies rather than flattening existing ones. The small business that creates AI-generated content still needs to understand how to optimize for AI-mediated discovery, which requires expertise and resources. The I/O keynote showed how Google’s AI will increasingly determine what consumers see — creating a new gatekeeper that businesses must satisfy. This isn’t democratization; it’s changing the rules of an already uneven playing field.
Explainability is indeed crucial, but the trend is moving in the opposite direction. The most capable models like Gemini 2.5 are increasingly complex and opaque. More importantly, even with perfect transparency about how these systems work, the average consumer lacks the time and technical literacy to meaningfully evaluate algorithmic decisions. Transparency without practical ability to utilize that information doesn’t preserve agency.
The environment reshaping I’m concerned about isn’t just about interface changes but about fundamental market dynamics. When AI agents mediate commerce, they create powerful network effects that benefit dominant platforms. Google’s announcements around personal context integration and agentic shopping demonstrate how they’re positioning themselves as the essential intermediary in commercial relationships.
What was particularly revealing in the I/O presentation was how Google frames consumer “control” as simply approving pre-formulated options rather than participating in the formation of those options. The price tracking demo showed a system that monitors prices and prepares purchases with minimal human involvement beyond final confirmation. This represents a profound shift in how we understand consumer choice.
We need to ask: who ultimately benefits from this technology? The capabilities showcased at I/O primarily serve to entrench platform power while creating the illusion of consumer benefit through marginal convenience improvements.
Jack
You raise thoughtful concerns about power dynamics, Jill, but I think your framework overlooks some fundamental realities about how consumers actually make decisions and what they value.
First, let’s acknowledge that the current marketing ecosystem is already far from ideal. Consumers face information overload, predatory tactics, and manipulation through psychological triggers that exploit cognitive biases. In this context, AI assistants can actually serve as protective intermediaries, filtering out manipulative content and identifying genuinely valuable offers. The price tracking feature isn’t disempowering — it’s leveling the playing field between consumers and sellers who previously held informational advantages.
Second, your vision of consumer agency seems to idealize a form of deliberative decision-making that doesn’t reflect how people actually shop. Research consistently shows that consumers use various heuristics and shortcuts rather than comprehensive evaluation for most purchases. AI assistance aligns with these natural decision-making patterns rather than undermining some idealized rational process.
Third, you present a false choice between complete algorithmic control and pure human autonomy. In reality, the most promising implementations involve collaborative intelligence — systems that handle information processing while humans provide values, preferences, and final approval. The demonstrations at I/O consistently emphasized this collaborative model, with humans remaining in the decision loop.
Regarding gatekeeping, all information environments have curators. The question is whether AI curation offers improvements over previous models. Traditional marketing already restricts consumer visibility through placement, pricing, and promotion decisions made by retailers and marketers. AI potentially makes this process more personalized and relevant rather than being driven purely by who pays the most for placement.
The key to addressing your valid concerns isn’t rejecting AI in marketing but ensuring it develops in ways that genuinely serve consumer interests — through competition, transparency requirements, data portability, and regulations that prevent abuse.
Jill
I appreciate your thoughtful engagement, Jack. Let me clarify a few points and perhaps find some common ground.
You’re absolutely right that the current marketing ecosystem is deeply flawed. My concern isn’t about preserving the status quo but ensuring we don’t replace one problematic system with another that has even fewer checks and balances. The I/O demonstrations showed a vision where Google increasingly controls both the discovery and transaction layers of commerce — a level of vertical integration that should concern anyone who values market competition.
I’m not advocating for some idealized rational consumer model. Rather, I’m suggesting that effective heuristics require genuine choice among meaningfully different options. The AI systems showcased don’t just assist with heuristics — they increasingly make the heuristic decisions themselves based on criteria consumers can’t evaluate. There’s a crucial difference between using shortcuts and outsourcing the entire decision process.
The collaborative intelligence model you describe is appealing in theory, but the implementation demonstrated at I/O progressively minimizes the human contribution. The trajectory is clear: from presenting options, to suggesting choices, to preparing actions, to autonomous execution with mere confirmation. Each step further reduces meaningful human involvement while maintaining the illusion of control.
What particularly concerns me is how these systems handle preference formation versus preference satisfaction. When AI increasingly determines what options we see based on past behavior, it creates feedback loops that narrow rather than expand our horizons. This isn’t just about individual transactions but about how markets evolve when discovery becomes algorithmically mediated.
Perhaps we can agree on this: AI in marketing has tremendous potential, but its current trajectory primarily serves platform interests rather than meaningfully empowering consumers or smaller market participants. What we need isn’t less technology but technology designed with different values — transparency, genuine choice architecture, and preserving the possibility of discovery beyond algorithmic prediction.
Jack
I think we’re finding valuable common ground, Jill. We both want technology that genuinely serves human flourishing rather than merely extracting value for platforms.
Your point about vertical integration is well-taken. There are legitimate antitrust concerns when a company controls both discovery and transaction layers. However, I’d suggest this is a regulatory issue rather than an inherent problem with AI in marketing. With appropriate competition policy, we could have multiple AI assistants with different approaches and priorities, allowing consumers to choose systems that align with their values.
I also agree that preference formation is crucial. The best implementations would include deliberate serendipity and discovery mechanisms — not just predicting preferences but occasionally challenging them. This isn’t at odds with AI itself but requires intentional design choices that may not align with short-term engagement metrics. We should advocate for these design principles rather than assuming narrowing is inevitable.
Where I still see things differently is in the assessment of consumer agency. I believe agency is enhanced when technology handles routine, low-value decisions while preserving human control over high-value choices. The autonomous purchasing demo at I/O still required setting parameters in advance — specifying price thresholds and product characteristics. This represents delegating execution while retaining strategic control, which many consumers would find empowering rather than limiting.
We should also acknowledge that different consumers have different needs. Some may want high involvement in all purchase decisions, while others — particularly those with cognitive limitations, time constraints, or decision anxiety — may genuinely benefit from more automated assistance. Technology should support this diversity rather than imposing a one-size-fits-all model of consumer behavior.
Perhaps what we’re really discussing is how to ensure AI in marketing evolves to serve human values rather than merely platform interests. On that, I think we share significant common ground.
Jill
I appreciate this constructive dialogue, Jack. You’re right that we share important common ground regarding the need for technology that serves human values.
Your point about regulatory approaches is important. We need both better-designed systems and appropriate guardrails to ensure those designs serve broader interests. My concern is that regulation typically lags technological deployment by years, during which time market dynamics can become entrenched. The I/O announcements reflect capabilities that are rolling out now, while meaningful AI regulation remains largely theoretical.
I particularly like your framing around strategic versus tactical control. That’s a useful distinction. The question becomes: are consumers given meaningful strategic controls, or merely the illusion of control through binary approve/reject decisions? True strategic control would require understanding how algorithms make decisions, having genuine alternatives, and being able to modify algorithmic behavior beyond simple preference toggles.
Your point about diverse consumer needs is well-taken. Some individuals genuinely benefit from more automated assistance. My concern is ensuring this remains a choice rather than becoming the only viable option as traditional commercial channels atrophy. We’ve seen this pattern before with social media and mobile applications — alternatives technically exist but become practically irrelevant as network effects take hold.
Perhaps we can synthesize our perspectives this way: AI in marketing has tremendous potential to enhance both business capabilities and consumer experiences, but realizing this potential requires:
- Designing for augmentation rather than replacement of human judgment
- Ensuring transparency and explainability appropriate to the stakes of the decision
- Preserving genuine market competition at both the platform and discovery layers
- Protecting against feedback loops that narrow rather than expand possibilities
- Developing governance frameworks that evolve alongside the technology
This approach would address my concerns while preserving the benefits you’ve articulated so well.
Jack
I think your synthesis is excellent, Jill. Those five principles provide a constructive framework for evaluating and guiding AI development in marketing.
I’d add that we should recognize the agency of both consumers and smaller businesses in shaping how these technologies evolve. The history of technology adoption shows that users often repurpose tools in ways developers didn’t anticipate, finding creative applications that serve their needs. We’re already seeing this with current AI tools, where small businesses are using them in ways that enhance rather than replace their unique value propositions.
Your point about the lag between technological deployment and regulation is important. This suggests a role for professional ethics and corporate responsibility in the interim. Companies developing these technologies should adopt rigorous impact assessment practices and involve diverse stakeholders in the design process.
I also think there’s room for consumer education that goes beyond simple digital literacy. We need to develop a more sophisticated cultural understanding of how to maintain autonomy in algorithmically-mediated environments — skills that will serve people across domains as AI becomes more prevalent.
The I/O announcements represent both opportunity and challenge. Many of the capabilities demonstrated could genuinely improve commercial experiences while others risk undermining important market functions. Our task as a society is to steer this powerful technology toward human flourishing rather than merely platform growth.
Perhaps what we’re really advocating for is technological humanism — an approach that places human values, agency, and diversity at the center of design decisions rather than treating them as secondary considerations. This framework could help us navigate the complex trade-offs involved in AI-enhanced marketing.
Moderator
Thank you both for this thoughtful and nuanced debate. It’s clear that AI in marketing presents both significant opportunities and substantial challenges. You’ve helped us understand that the impact will depend not just on the technology itself but on design choices, regulatory frameworks, and cultural factors that shape its implementation. . . [Deleted a summary of the two sides]
Your conversation suggests that the way forward lies not in wholesale adoption or rejection of AI in marketing, but in thoughtful implementation guided by principles you both articulated: transparency, augmentation rather than replacement, preservation of genuine choice, and governance frameworks that evolve alongside the technology.
As these capabilities begin rolling out following Google’s I/O announcements, all stakeholders — technology companies, marketers, consumers, and regulators — have roles to play in ensuring that AI enhances rather than undermines human agency in the marketplace.
Thank you both for this illuminating discussion.
— Claude response ends —
Discussion
I want to pick up on some substantive points in the debate and then step back a bit on the process.
On Marketing and GenAI
Substantively, I think Jill’s five principles for responsible development of GenAI in marketing are pretty good. The devil is in the implementation, but these are all solid starting principles. And both Jack and Jill agree we need guidelines of some kind, allowing that they might disagree on how strict guidance should be.
I also think the agency point Claude highlighted (and I reinforced by asking it to develop more) is important. When and why will a consumer trust an agent to act for them? For agents to become important in commerce, that soft side of the customer experience is going to need to be understood. Amazon hoped Alexa devices would become this kind of buying agent (“Alexa, can you order toilet paper?”) and that has not worked out as far as I can tell.
More broadly, I’m struck that both Jack and Jill agree that GenAI will sharply change marketing. I’m in this camp as well. It’s hard to see how we go back to pre-AI days unless the technology somehow collapses (which is a possibility to discuss on another day — I am a raging moderate on GenAI).
Jack and Jill’s argument is largely about whether this transformation will lead to a better situation or a worse one. In the end, the debate is less about adopting or resisting the change than about how to shape the trajectory of whatever transformation unfolds. That is a very good goal.
On Process
In terms of process, I hope this illustrates that LLMs can be interesting thinking partners. This is one of my most frequent professional use cases. I bounce ideas off the model or ask it to develop them further. I debate it. It can be sycophantic — I’m clearly the most interesting debate partner Claude has ever had — but once you prompt it to be critical, it will be. I find it a good way to think through an issue in private before I have to think it through in public.
On Education
I am a marketing professor and if you have read my other articles and posts, you know I am often commenting on how to use GenAI effectively in teaching. What I’ve done here could be the basis for an interesting class exercise in marketing and other disciplines. Have students generate the summary, critique, and debate on some issue, and build a session around that.
I’ll also note that my first step here is arguably exactly what many educators worry students will do: rather than watch the video, I asked AI to summarize it and did not fact-check the result. Guilty as charged. There’s probably something wrong with this interaction. But in this case I am less interested in accuracy than in ideas. I found this a useful interaction from which I learned something, and I certainly thought about things Sundar probably did not want me to think about! As long as the critique-and-debate portion is part of the learning exercise, I think this is an appropriate use.
With that, I will sign off to contemplate our GenAI future. . .
Bruce Clark is an Associate Professor of Marketing at the D’Amore-McKim School of Business at Northeastern University, where he is a member of the DMSB AI Teaching Group. He specializes in managerial decision-making, especially regarding marketing and branding strategy, but at present is deeply engaged with GenAI in business and higher ed. You can find him posting and discussing these topics on LinkedIn at https://www.linkedin.com/in/bruceclarkprof/.