As I have more conversations with AI leaders and use more generative tools (both at work and for learning’s sake), it’s becoming clear to me that there’s a huge positioning opportunity for generative AI companies that no one has claimed. The idea crystallized yesterday when I received an embargoed preview of Adobe’s new genAI tool. I wrote this one fast and am still developing my thinking, so I’ll be reading your comments closely (though of course I always do). 💥
You’re here because you recently subscribed or signed up for one of my resources—my course waitlist on Maven, lightning lesson, or Notion templates.
If someone sent you this post and you’re not subscribed, join the readers learning how to tactically advocate for brand at your company. 📬
Today, Adobe launched the latest iteration of Firefly (their image and video generation tool) with an accompanying video spot. This slick 60-second ad is made up of over 500 images and video clips and was created entirely with AI in 10 days. The production cost was effectively zero beyond a $199.99/month software subscription and music licensing.
Firefly’s described as an “all-in-one solution for AI-assisted content ideation, creation and production” (you know my feelings on all-in-one) and the video itself is pure whimsy—floating objects, impossible physics, fantasy worlds that plainly scream “AI-generated.” It’s beautiful, it’s impressive, and it’s completely missing the point.
Adobe markets AI as a playground for imagination, but the same tools can create photorealistic humans, believable news footage, and fake product demonstrations that are indistinguishable from reality. Their campaign stays safely in fantasy land, but the tech they’re showcasing operates in murkier territory.
And that’s the opportunity I’m seeing: while the hottest companies in the generative AI race all position themselves around “limitless creativity,” an entirely different conversation about trust, transparency, and authentic creativity is begging for a market leader.
The whimsy-washing of generative AI
Adobe isn’t alone; the leading lights of the generative AI industry are working to perfect what I call “whimsy-washing”: marketing AI’s creative potential while avoiding any real engagement with synthetic realism, deepfakes, and the erosion of objective reality. Look at how a few of these companies position their tools:
Midjourney’s explore page exclusively showcases dreamy artwork and surreal compositions (often with an outsized focus on sexualized female figures).
DALL-E, too, focuses on playful, obviously artificial images.
Runway leans into creative use cases, though we start to see more “realistic” examples as well.
Same deal with Veo (Google). (Why is it always giraffes walking around?)
Yet all of these platforms (especially when combined with generative audio tools) can produce personalized deepfakes, synthetic news anchors, and fake product reviews or imagery that bad actors can spread across the internet (and beyond) with impunity. No links (not that kind of newsletter), but if you haven’t at least scratched the surface of how far AI-generated porn has already come, you may want to lock the door and take a look (and hope the face looking back at you isn’t your own).
The pattern is obvious: market the magic, ignore the misconduct.
The integrity opportunity nobody’s taking (yet 🤞)
Apple didn’t invent privacy, but they owned the narrative at the precise moment it mattered. They marketed around it proactively, turning consumer anxiety about data collection into a competitive advantage and aiming to make “privacy” synonymous with their brand identity. That investment is paying off in pricing power, customer loyalty, and differentiation that competitors still can’t match.
Adobe or Midjourney could do the same with responsible AI creativity, but so far they’re not. Instead of “AI without limits” (scary?), Adobe could own “AI with integrity.” Instead of showcasing only fantastical impossibilities like whales leaping out of swimming pools and pink cats underwater (inexplicably surrounded by cake), they could demonstrate transparent creativity—showing exactly how content was generated, what sources were used, and where human creativity intersected with machine assistance. Nobody wants to be the company that teaches people how easy it is to fake things, but there’s real brand value in being the company that helps people navigate AI literacy.
Would this sell, though? Maybe consumers won’t ultimately care how content gets created, just like we don’t get into the technical production weeds of every movie or photo or cool transition effect we see on TikTok. But I’m still thinking about another writer’s take on “proof of reality” content as an upcoming social trend, and about how an inevitable AI backlash is coming. The opportunity feels obvious to me. The first major brand to authentically address AI’s reality problem will achieve real differentiation. Perhaps that won’t equate to record-setting revenue growth. And perhaps it will hurt more than help in a world where people seem to be flocking en masse to cake cats and their ilk.
But I doubt it, because differentiation is extremely valuable in commoditized spaces (audio and visual asset generation is already there) and enough people out there understand the risks of this tech that there will be a market. The Veo 3 (Google’s video generation model) “fake news anchors” video that went viral on social two weeks ago is proof enough without going into the darker rabbit holes of the web.
“I made this video to warn my parents about AI scams (and to test out Veo 3 to see firsthand how these programs are evolving).
The other day I saw a video of a grandma who thought PS3 Grand Theft Auto was the news. And then Google released Veo 3, which looks about a hundred times better than that … so yeah … we’re pretty much cooked. But maybe this video can help us boil a little slower.”
While today’s competitors converge on the already-tired “endless possibilities” angle, trust, transparency, and authentic creativity are still waiting to be owned by a market leader.
What brand leaders should actually do
If you’re building and executing brand strategy in the age of AI, you probably have a sense of these dynamics already. But maybe, like me, you’re trying to figure out exactly how to bring this alternative positioning to life. Especially because, clearly, whimsy-washing is powering the industry’s current stage of growth. What would owning authentic creativity look like in practice? How can I convince ARR-hungry executives to deviate from founder and VC groupthink? I don’t have the answers to these questions, but I’m pretty sure of a few steps that will set your brand up to stay flexible and prepared as consumer sentiment shifts:
Develop disclosure frameworks now. Don’t wait for regulation to force your hand. Create clear policies about when and how you’ll label AI-generated content. Your customers are already suspicious! And transparency builds trust.
Make transparency a differentiator. Transparency is trending. When everyone can create “perfect” content, imperfection becomes valuable. Human flaws, behind-the-scenes process, and vulnerability (even the corporate version) might be your strongest competitive advantages.
Reframe your idea of premium positioning. I think we’re heading toward a world where “human-made” becomes the premium position. I’m not sure how that will extend to software, but it’s worth starting to think about how you’ll show off the human role behind your brand touchpoints. Adobe’s Firefly ad could easily have included the production details (how humans and AI worked together, how long it took, etc.) in the creative itself, and it would’ve been more compelling.
When generative tools are accessible to every competitor, and perfect product photography costs pennies, and every brand can generate Hollywood-quality video content overnight—what’s your sustainable competitive advantage? (I’ll leave you to ponder that… but I have some thoughts.)
The higher stakes
Beyond marketing tactics, we’re all experiencing a massive change in how we as a society process information and truth. When synthetic content becomes totally indistinguishable from reality, shared truth, already fragile, shatters. Without shared truth, social cohesion erodes. Without social cohesion, life gets worse for everyone.
Leaders (like you, reading this) have a responsibility that extends beyond quarterly metrics. The choices you make about AI transparency and limitations today shape the information ecosystem your business (and you, and your family and friends) will operate in tomorrow. I try not to get too personal in this newsletter, but people, we’re at a turning point, and we are among the relatively few positioned to influence these powerful, emerging technologies, even in small ways.
Whimsy-washing your brand is the easy route that sidesteps a harder conversation about responsibility in an age of synthetic reality. That conversation is coming whether brands participate in it or not.
And that’s why I think the most successful generative AI brands of the next decade are not going to be those with the most impressive or realistic generated content. They’ll be the ones customers trust to tell the truth in a world where truth has become optional. 🐋
Thanks for reading.
Did this take resonate with you? If you liked what you read, consider:
saying hi or dropping a question in the comments!
connecting with me on LinkedIn: 👩🏼‍💻 Kira Klaas
sending to a friend 💌 or coworker 💬