Dove’s Takedown Of AI: Brilliant But Troubling Brand Marketing

The Dove brand has just placed a substantial stake in the battleground over the use of AI in media. In a campaign called “Keep Beauty Real”, the brand released a 2-minute video showing how AI can create an unattainable and highly biased (read “white”) view of what beauty is.

If we’re talking branding strategy, this campaign is a master class. It’s totally on-brand for Dove, which introduced its “Campaign for Real Beauty” 18 years ago. Since then, the company has consistently fought digital manipulation of advertising images, promoted positive body image and reminded us that beauty can come in all shapes, sizes and colors. The video itself is brilliant. You really should take a couple of minutes to see it if you haven’t already.

But what I found just as interesting is that Dove chose to use AI as a brand differentiator. The video starts by telling us, “By 2025, artificial intelligence is predicted to generate 90% of online content.” It wraps up with a promise: “Dove will never use AI to create or distort women’s images.”

This makes complete sense for Dove. It aligns perfectly with its brand. But it can only work because AI now has what psychologists call emotional valency. And that has a number of interesting implications for our future relationship with AI.

“Hot Button” Branding

Emotional valency is just a fancy way of saying that a thing means something to someone. The valence can be positive or negative. The term comes from the German word Valenz, which means “to bind.” So, if something has valency, it’s carrying emotional baggage, either good or bad.

This is important because emotions allow us to — in the words of Nobel laureate Daniel Kahneman — “think fast.” We make decisions without really thinking about them at all. It is the opposite of rational and objective thinking, or what Kahneman calls “thinking slow.”

Brands are all about emotional valency. The whole point of branding is to create a positive valence attached to a brand. Marketers don’t want consumers to think. They just want them to feel something positive when they hear or see the brand.

So for Dove to pick AI as an emotional hot button to attach to its brand, it must believe that the negative valence of AI will add to the positive valence of the Dove brand. That’s how branding mathematics sometimes work: a negative added to a positive may not equal zero, but may equal 2 — or more. Dove is gambling that with its target audience, the math will work as intended.

I have nothing against Dove, as I think the points it raises about AI are valid — but here’s the issue I have with using AI as a brand reference point: It reduces a very complex issue to a knee-jerk reaction. We need to be thinking more about AI, not less. The consumer marketplace is not the right place to have a debate on AI. It will become an emotional pissing match, not an intellectually informed analysis. And to explain why I feel this way, I’ll use another example: GMOs.

How Do You Feel About GMOs?

If you walk down the produce or meat aisle of any grocery store, I guarantee you’re going to see a “GMO-Free” label. You’ll probably see several. This is another example of squeezing a complex issue into an emotional hot button in order to sell more stuff.

As soon as I mentioned GMO, you had a reaction to it, and it was probably negative. But how much do you really know about GMO foods? Did you know that GMO stands for “genetically modified organisms”? I didn’t, until I just looked it up now. Did you know that you almost certainly eat foods that contain GMOs, even if you try to avoid them? If you eat anything with sugar harvested from sugar beets, you’re eating GMOs. And over 90% of all canola, corn and soybean crops are GMOs.

Further, did you know that genetic modifications make plants more resistant to disease, more stable for storage and more likely to grow in marginal agricultural areas? If it wasn’t for GMOs, a significant portion of the world’s population would have starved by now. A 2022 study suggests that GMO foods could even slow climate change by reducing greenhouse gases.

If you do your research on GMOs — if you “think slow” about them — you’ll realize that there is a lot to think about, both good and bad. For all the positives I mentioned before, there are at least an equal number of troubling things about GMOs. There is no easy answer to the question, “Are GMOs good or bad?”

But by bringing GMOs into the consumer world, marketers have shut down that debate. They are telling you, “GMOs are bad. And even though you consume GMOs by the shovelful without even realizing it, we’re going to slap some GMO-free labels on things so you will buy them and feel good about saving yourself and the planet.”

AI appears to be headed down the same path. And if GMOs are complex, AI is exponentially more so. Yes, there are things about AI we should be concerned about. But there are also things we should be excited about. AI will be instrumental in tackling the many issues we currently face.

I can’t help worrying when complex issues like AI and GMOs are painted with the same broad brush, especially when that brush is in the hands of a marketer.

Feature image: Body Scan 002 by Ignotus the Mage, used under CC BY-NC-SA 2.0 / Unmodified

Fooling Some of the Systems Some of the Time

If there’s a system, there’s a way to game it. Especially when those systems are tied to someone making money.

Buying a Best Seller

Take publishing, for instance. New books that make the New York Times best-seller list sell more copies than ones that don’t. A 2004 study by University of Wisconsin economics professor Alan Sorensen found the bump is about 57%. That’s certainly motivation for a publisher to game the system.

There’s also another motivating factor. According to a Times op-ed, Michael Korda, former editor in chief of Simon and Schuster, said that an author’s contract can include a bonus of up to $100,000 for hitting No. 1 on the list.

This amplifying effect is not a one-shot deal. Make the list for just one week, in any slot under any category, and you can forever call yourself a “NY Times bestselling author,” reaping the additional sales that that honor brings with it. Given the potential rewards, you can guarantee that someone is going to be gaming the system.

And how do you do that? Typically, by making a bulk purchase through an outlet that feeds its sales numbers to The Times. That’s what Donald Trump Jr. and his publisher did for his book “Triggered,” which hit No. 1 on its release in November 2019, according to various reports. Just before the release, the Republican National Committee reportedly placed a $94,800 order with a bookseller, which would equate to about 4,000 books, enough to ensure that “Triggered” would end up on the Times list. (Note: The Times does flag suspicious entries with a dagger symbol when it believes someone may be gaming the system by buying in bulk.)
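As a quick back-of-the-envelope check on those reported figures (the per-copy price is my own inference from the two numbers in the reports, not a figure anyone published):

```python
# Reported figures: a $94,800 bulk order equated to roughly 4,000 copies.
order_total = 94_800   # dollars, as reported
approx_books = 4_000   # copies, as reported

# Implied average price per copy, consistent with a lightly discounted hardcover.
price_per_copy = order_total / approx_books
print(f"${price_per_copy:.2f} per copy")  # -> $23.70 per copy
```

The numbers hang together: at roughly $23.70 a copy, the order size is plausible for a new hardcover bought at near-retail prices.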

But it’s not only book sales where you’ll find a system primed for rigging. Even those supposedly objective 5-star buyer ratings you find everywhere have been gamed.

5-Star Scams

A 2021 McKinsey report said that, depending on the category, a small bump in a star rating on Amazon can translate into a 30% to 200% boost in sales. Given that potential windfall, it’s no surprise that fake review scams proliferate on the gargantuan retail platform.

A recent Wired exposé on these fake reviews found a network that had achieved a sobering level of sophistication. It included active recruitment of human reviewers (called “Jennies” — if you haven’t been recruited yet, you’re a “Virgin Jenny”) willing to write a fake review for a small payment or free products. These networks include recruiting agents in Pakistan, Bangladesh and India working for sellers in China.

But the fake review ecosystem also included reviews cranked out by AI-powered automated agents. As AI improves, these types of reviews will be harder to spot and weed out of the system.

Some recent studies have found that, depending on the category, over one-third of the reviews you see on Amazon are fake. Books, baby products and large appliance categories are the worst offenders.

Berating Ratings…

Back in 2014, Itamar Simonson and Emanuel Rosen wrote a book called “Absolute Value: What Really Influences Customers in the Age of (Nearly) Perfect Information.” Spoiler alert: they posited that consumer reviews and other sources of objective information were replacing traditional marketing and branding in terms of what influenced buyers.

They were right. The stats I cited above show how powerful these supposedly objective factors can be in driving sales. But unfortunately, thanks to the inevitable attempts to game these systems, the information they provide can often be far from perfect.

A Look Back at 2023 from the Inside

(Note: This refers to the regular feature on MediaPost – The Media Insider – which I write for every Tuesday.)

It seems that every two years, I look back at what the Media Insiders were musing about over the past year. The ironic part is that I’m not an Insider. I haven’t been “inside” the media industry for over a decade. Maybe that affords me just enough distance to be what I hope could be called an “informed observer.”

I first did this in 2019, and then again in 2021. This year, I decided to grab the back of an envelope (literally) and redo this far-from-scientific poll. Categorization of themes is always a challenge when I do this, but some themes have definitely been consistent across the past five years. I have tremendous respect for my fellow Insiders, and I always find it enlightening to learn what was on their minds.

In 2019, the top three things we were thinking about were (in order): disruptions in the advertising business, how technology is changing us and how politics changed social media.

In 2021, the top three topics included (again) how technology was changing us, general marketing advice and the toxic impact of social media.

So, what about 2023? What were we writing about? After eliminating the columns that were reruns, I ended up with 230 posts in the past year.

It probably comes as a surprise to no one that artificial intelligence was the number one topic by a substantial margin. Almost 15% of all our Insider posts talked about the rise of AI and its impact on – well – pretty much everything!

The number two topic – at 12% – was TV, video and movies. Most of the posts touched on how this industry is going through ongoing disruption in every aspect – creation, distribution, buying and measurement.

Coming in at number three, at just under 12%, was social media. As in previous years, most of the posts were about the toxic nature of social media, but there was a smattering of case studies about how social platforms were used for positive change.

We Insiders have always been an existential bunch, and last year was no different. Our number four topic was our struggle to stay human in a world increasingly dominated by tech. This accounted for almost 11% of all our posts.

The next two most popular topics were both firmly grounded in the marketing industry itself. Posts about how to be a better marketer generated almost 9% of Insider content for 2023 and various articles about the business of tech marketing added another 8% of posts.

Continuing down the list, we have world events and politics (Dave Morgan’s columns about Ukraine were a notable addition to this topic), examples of marketing gone wrong and the art and science of brand building.

We also looked at the phenomenon of fame and celebrity, sustainability, and the state of the news industry. In what might have been a wistful look back at what we remember as simpler times, there were even a few columns about retro-media, including the resurgence of the LP.

Interestingly, former hot topics like performance measurement, data and search all clustered near the bottom of the list in terms of number of posts covering these topics.

With 2023 in our rearview mirror, what are the takeaways? What can we glean from the collected year-long works of these very savvy and somewhat battle-weary veterans of marketing?

Well, the word “straddle” comes to mind. We all seem to have one foot still planted in the world and industry we thought we knew, and the other tentatively dipping into the murky waters of what might come. You can tell that the Media Insiders are no less passionate about the various forms of media we write about, but we go forward with the caution that comes from having been there and done that.

I think that, in total, I found a potentially worrying duality in this review of our writing. Give or take a few years, all my fellow Insiders are of the same generation. But we are not your typical Gen-Xers/Baby Boomers (or, in my case, caught in the middle as a member of Generation Jones). We have worked with technology all our lives. We get it. The difference is, we have also accumulated several decades of life wisdom. We are past the point where we’re mesmerized by bright shiny objects. I think this gives us a unique perspective. And, based on what I read, we’re more than a little worried about what the future might bring.

Take that for what it’s worth.

When AI Love Goes Bad

When we think about AI and its implications, it’s hard to wrap our non-digital, flesh-and-blood brains around the magnitude of it. Try as we might, it’s impossible to forecast the impact of this massive wave of disruption that’s bearing down on us. So today, to see what the unintended consequences might be, I’d like to zoom in on one particular example.

There is a new app out there. It’s called Anima and it’s an AI girlfriend. It’s not the only one. When it comes to potential virtual partners, there are plenty of fish in the sea. But – for this post, let’s stay true to Anima. Here’s the marketing blurb on her website: “The most advanced romance chatbot you’ve ever talked to. Fun and flirty dating simulator with no strings attached. Engage in a friendly chat, roleplay, grow your love & relationship skills.”

Now, if there’s one area where our instincts should kick in and alarm bells should start going off about AI, it should be sexual attraction. If there were one human activity that seems bound by necessity to being ITRW (in the real world), it should be this one.

If we start to imagine what might happen when we turn to AI for love, we could ask filmmaker Spike Jonze. He already imagined it 10 years ago, when he wrote the screenplay for “her,” the movie starring Joaquin Phoenix. Phoenix plays Theodore Twombly, a soon-to-be-divorced man who upgrades his computer to a new OS, only to fall in love with the virtual assistant (voiced by Scarlett Johansson) that comes as part of the upgrade.

Predictably, complications ensue.

To get back to Anima, I’m always amused by the marketing language developers use to lull us into the acceptance of things we should be panicking about. In this case, it was two lines: “No strings attached” and “grow your love and relationship skills.”

First, about that “no strings attached” thing – I have been married for 34 years now and I’m here to tell you that relationships are all about “strings.” Those “strings” can also be called by other names: empathy, consideration, respect, compassion and – yes – love. Is it easy to keep those strings attached – to stay connected with the person at the other end of those strings? Hell, no! It is a constant, daunting, challenging work in progress. But the alternative is cutting those strings and being alone. Really alone.

If we get the illusion of a real relationship through some flirty version of ChatGPT, will it be easier to cut the strings that keep us connected to other real people out there? Will we be fooled into thinking something is real when it’s just a seductive algorithm? In “her,” Jonze brings Twombly back to the real world, ending with the promise of a relationship with a real person as they both gaze at the sunset. But I worry that that’s just a Hollywood ending. I think many people – maybe most people – would rather stick with the “no strings attached” illusion. It’s just easier.

And will AI adultery really “grow your love and relationship skills”? No. No more than scrolling through your Facebook feed will grow your ability to determine what is accurate and reliable information. That’s just a qualifier the developer threw in so they don’t feel crappy about leading their customers down the path to “AI-rmageddon.”

Even if we put all this other stuff aside for the moment, consider the vulnerable position we put ourselves in when we start mistaking robotic love for the real thing. All great cons rely on one of two things – either greed or love. When we think we’re in love, we drop our guard. We trust when we probably shouldn’t.

Take the Anima artificial girlfriend app, for example. We know nothing about the makers of this app. We don’t know where the data it collects goes. We certainly have no idea what their intentions are. Is this really who you want to start sharing your most intimate chit-chat with? Even if their intentions are benign, this is an app built by a for-profit company, which means there needs to be a revenue model in it somewhere. I’m guessing that all your personal data will be sold to the highest bidder.

You may think all this talk of AI love is simply stupid. We humans are too smart to be sucked in by an algorithm. But study after study has shown we’re not. We’re ready to make friends with a robot at the drop of a hat. And once we hit friendship, can love be far behind?

In Defense of SEO

Last week, my social media feeds blew up with a plethora (yes – a plethora!) of indignant posts about a new essay that had just dropped on The Verge.

It was penned by Amanda Chicago Lewis and was entitled “The People that Ruined the Internet.”

The reason for the indignation? Those “people” included me and many of my past colleagues. The essay was an investigation of the industry I used to be in. One might even call me one of the original pioneers of said industry. The intro read:

“As the public begins to believe Google isn’t as useful anymore, what happens to the cottage industry of search engine optimization experts who struck content oil and smeared it all over the web? Well, they find a new way to get rich and keep the party going.”

Am I going to refute the observations of Ms. Lewis?

No, because they are not lies. They are observations. And observations happen through the lens the observer uses to observe. What struck me is the lens Lewis chose to see my former industry through, and the power of a lens in media.

Lewis is an investigative journalist. She writes exposés. If you look at the collection of her articles, you don’t have to scroll very far before you see the words “boondoggle,” “hustler,” “lies,” “whitewashing” and “hush money” pop up in her titles. Her journalistic style veers heavily toward being a “hammer,” which makes all that lies in her path “nails.”

This was certainly true for the SEO article. She targeted many of the more colorful characters still in the SEO biz and painted them with the same acerbic, snarky brush. Ironically, she lampoons outsized personalities without once considering that all of this is filtered through her own personality. I have never met Lewis, but I suspect she’s no shrinking violet. In the article, she admits a grudging admiration for the hustlers and “pirates” she interviewed.

Was that edginess part of the SEO industry? Absolutely. But contrary to the picture painted by Lewis, I don’t believe it defined the industry. And I certainly don’t believe we ruined the internet. Google’s organic search results are better than they were 10 years ago. We all have a better understanding of how people actually search, and a good part of that research was done by those in the SEO industry (myself included). The examples of bad SEO that Lewis uses are at least two decades out of date.

I think Lewis, and perhaps others of her generation, suffer from “rosy retrospection” – a cognitive bias that automatically assumes things were better yesterday. I have been searching for the better part of 3 decades and – as a sample of one – I don’t agree. I can also say with some empirical backing that the search experience is quantitatively better than it was when we did our first eye tracking study 20 years ago. A repeat study done 10 years ago showed time to first click had decreased and satisfaction with that click had increased. I’m fairly certain that a similar study would show that the search experience is better today than it was a decade ago. If this is a “search optimized hellhole”, it’s much less hellish than it was back in the “good old days” of search.

One of the reasons for that improvement is that millions of websites have been optimized by SEOs (a label which, by the way, Amanda, has absolutely nothing to do with wanting to be mistaken for a CEO) to unlock unindexable content, fix broken code, improve usability, tighten up and categorize content and generally make the Internet a less shitty and confusing place. Not such an ignoble pursuit for “a bunch of megalomaniacal jerks (who) were degrading our collective sense of reality because they wanted to buy Lamborghinis and prove they could vanquish the almighty algorithm.”

Amanda Chicago Lewis did interview those who sat astride the worlds of search providers and SEO: Danny Sullivan (“angry and defensive,” according to Lewis), Barry Schwartz (“an unbelievably fast talker”), Duane Forrester (a “consummate schmoozer”) and Matt Cutts (an “SEO celebrity”). Each tried to refute her take that things are “broken” and SEOs are to blame, but she brushed those refutations aside, intent on caricaturing them as a cast of characters from a carnival sideshow. Out of the entire scathing diatribe, one scant paragraph grudgingly acknowledges that maybe not all SEO is bad. That said, Lewis immediately spins around and says it doesn’t matter, because the bad completely negates the good.

Obviously, I don’t agree with Lewis’s take on the SEO industry. Maybe it’s because I spent the better part of 20 years in the industry and know it at a level Lewis never could. But what irritates me the most is that she made no attempt to go beyond taking the quick and easy shots. She had picked the lens through which she viewed SEO before the very first interview, and everything was colored by it. Was her take untrue? Not exactly. But it was unfair. And that’s why reporters like Lewis have degraded journalism to the point where it’s just clickbait with a few more words thrown in.

Lewis gleefully stereotypes SEOs as “content goblin(s) willing to eschew rules, morals, and good taste in exchange for eyeballs and mountains of cash.” That’s simply not true. It’s no more true than saying all investigative journalists are “screeching acid-tongued harpies who are hopelessly biased and cover their topics with all the subtlety of a flame-thrower.”

P.S. I did notice the article was optimized for search, with keywords prominently shown in the URL. Does that make The Verge and Lewis SEOs?

X Marks the Spot

Elon Musk has made his mark. Twitter and its cute little birdy logo are dead. Like Monty Python’s famous parrot, this bird has shuffled off its mortal coil.

So, Twitter is dead. Long live X?

I know — that seems weird to me, too.

Musk clearly has a thing for the letter X. He founded a company called X.com that, through a merger in 2000, eventually became PayPal. In his portfolio of companies, you’ll find SpaceX, xAI and X Corp. It’s seldom you see so much devotion to 1/26th of the Latin alphabet.

It’s not unprecedented to pick a letter and turn it into a brand. Steve Jobs managed to make the letter “i” the symbol for everything Apple. Mind you, he also tacked on helpful product descriptors to keep us from getting confused. If he had changed the name of Apple to “I” and just left it at that, it might not have worked so well.

At their best, brands immediately bridge the gap between the DNA of a company and a long-term niche in the brains of those of us in the marketplace. Twitter did that. When you saw the iconic bird logo or heard the word Twitter, you knew exactly what it referred to.

This is easier when the company is known for a handful of products. But when companies stretch into multiple areas, it’s tough to make one brand synonymous with hundreds or thousands of products. 

This brand diffusion is common in the hyper-accelerated world of tech. You launch a product and it’s so successful, it becomes a mega-corporation. At some point you’re stuck with an awkward transition: you leave the original brand associated with that product and create an umbrella brand that is vague enough to shelter a diverse and expanding portfolio of businesses. That’s why Google created the generic Alphabet brand, and why Facebook became Meta.

But Musk didn’t create an umbrella to shelter Twitter and its brand. He used it to beat the brand to death. Maybe he just doesn’t like blue birds.

When a brand does its job well, we feel a personal relationship with it. Twitter’s brand did this. It was unique in tech branding, primarily because it was cute and organic. It was an accessible brand, a breath of fresh air in a world of cryptic acronyms and made-up terms with weird spellings. It made sense to us. And we are sorry to see it go.

In fact, some of us are flat-out refusing to admit the bird is dead. One programmer has already whipped together a Chrome extension that strips out the X branding and brings our favorite little Tweeter back from the beyond. Much as I admire this denial, I suspect this is only delaying the inevitable. It’s time to say bye-bye birdy. 

The current backlash against Musk’s rebranding could be a natural outcome of his effort to move the brand from one tied to a single product to one that creates a bigger tent for multiple products. He has been pretty vocal about X becoming an “everything” app, a la China’s WeChat.

I suspect the road to making X a viable brand is going to be a rocky one. First of all, if you were going to pick the most generic symbol imaginable, X would be your choice. It has literally been a stand-in for pretty much anything you could think of for centuries now. Even my great-great-grandfather signed his name with an “X.”

We Hotchkisses have always been ahead of our time.

But the ubiquity of “X” brings up another problem, this time on the legal front. According to a lengthy analysis of Twitter’s rebranding by Emma Roth, you can trademark a single letter, but trying to make X your brand will come with some potentially litigious baggage. Microsoft has a trademark on X. So does Meta.

As long as Musk’s X sticks to its knitting, that might not be a problem. Microsoft registered X for its Xbox gaming console. Meta’s trademark also has to do with gaming. Apparently, as long as you don’t cross industries and confuse customers, having the same trademark shouldn’t be an issue.

But the chances of Elon Musk playing nice and following the rules of trademark law while pursuing his plan for world domination are somewhat less than zero. In this case, I think it’s fair to speculate that the formula for the future will be: X = a shitload of lawyer fees.

Also, even if you succeed in making X a recognized and unique brand, protecting that brand will be a nightmare. How do you build a legal fence around X when the choice of it as a brand was literally to tear down fences?

But maybe Musk has already foreseen all this. Maybe he has some kind of superpower to see things we can’t.

Kind of like Superman’s X-Ray vision.

The Challenge in Regulating AI

A few weeks ago, MediaPost’s Wendy Davis wrote a commentary on the Federal Trade Commission’s investigation of OpenAI. Of primary concern to the FTC was ChatGPT’s tendency to hallucinate. I found this out for myself when ChatGPT told some whoppers about who I was and what I’ve done in the past.

Davis wrote, “The inquiry comes as a growing chorus of voices — including lawmakers, consumer advocates and at least one business group — are pushing for regulations governing artificial intelligence. OpenAI has also been hit with lawsuits over copyright infringement, privacy and defamation.”

This highlights a problem with trying to legislate AI. First, the U.S. is using its existing laws and trying to apply them to a disruptive and unpredictable technology. Laws, by their nature, have to be specific, which means you have to be able to anticipate circumstances in which they’d be applied. But how do you create or apply laws for something unpredictable? All you can do is regulate what you know. When it comes to predicting the future, legislators tend to be a pretty unimaginative bunch. 

In the intro to a Legal Rebels podcast on the American Bar Association’s website, Victor Li included this quote: “At present, the regulation of AI in the United States is still in its early stages, and there is no comprehensive federal legislation dedicated solely to AI regulation. However, there are existing laws and regulations that touch upon certain aspects of AI, such as privacy, security and anti-discrimination.”

The ironic thing was, the quote came from ChatGPT. But in this case, ChatGPT got it mostly right. The FTC is trying to use the laws at its disposal to corral OpenAI by playing a game of legal whack-a-mole: hammering things like privacy, intellectual property rights, defamation, deception and discrimination as they pop their heads up.

But that’s only addressing the problems the FTC can see. It’s like repainting the deck railings on the Titanic the day before it hit the iceberg. It’s not what you know that’s going to get you, it’s what you don’t know.

If you’re attacking ChatGPT’s tendency to fabricate reality, you’re probably tilting at the wrong windmill. This is a transitory bug. OpenAI benefits in no way from ChatGPT’s tendency to hallucinate. The company would much rather have a large language model that is usually truthful and accurate. You can bet they’re working on it. By the time the ponderous wheels of the U.S. legislative system get turned around and rolling in the right direction, chances are the bug will be fixed and there won’t really be anything to legislate against.

What we need before we start talking about legislation is something more fundamental. We need an established principle, a framework of understanding from which laws can be created as situations arise.

This is not the first time we’ve faced a technology that came packed with potential unintended consequences. In February 1975, 140 people gathered at a conference center in Monterey, California, to attempt to put a leash on genetic manipulation, particularly recombinant DNA engineering.

This group, made up mainly of biologists with a smattering of lawyers and physicians, established principle-based guidelines that took their name from the conference center where the group met: the Asilomar Conference agreement.

The guidelines were based on the level of risk involved in proposed experiments. The higher the risk, the greater the required precautions.

These guidelines were flexible enough to adapt as the science of genetic engineering evolved. It was one of the first applications of something called “the precautionary principle” – which is just what it sounds like: if the future is uncertain, go forward slowly and cautiously.

While the U.S. is late to the AI legislation party, the European Union has been taking the lead. And if you look at its first attempt at AI regulation, drafted in 2021, you’ll see the precautionary principle written all over it. Like the Asilomar guidelines, it has different rules for different risk levels. While the U.S. attempts at legislation are mired in spotty specifics, the EU is establishing a universal framework that can adapt to the unexpected.

This is particularly important with AI, because it’s an entirely different ballgame than genetic engineering. Those driving the charge are for-profit companies, not scientists working in a lab.

OpenAI is intended as a platform that others will build on. It will move quickly, and new issues will pop up constantly. Unless the regulating bodies are incredibly nimble and quick to plug loopholes, they will constantly be playing catch-up.

The Seedy, Seedy World of Keto Gummies

OK, I admit it. I play games on my phone.

Also, I’m cheap, so I play the free, ad-supported versions.

You might call this a brain-dead waste of time, but I prefer to think of it as diligent and brave investigative journalism.  The time I spend playing Bricks Ball Crusher or Toy Blast is, in actuality, my research into the dark recesses of advertising on behalf of you, the more cerebral and discerning readers of this blog. I bravely sacrifice my own self-esteem so that I might tread the paths of questionable commerce and save you the trip.

You see, it was because of my game playing that I was introduced to the seediest of seedy slums in the ad world, the underbelly known as the in-game ad. One ad, in particular, reached new levels of low.

If you haven’t heard of the Keto Gummies Scam, allow me to share my experience.

This ad hawked miracle gummies that “burn the fat off you” with no dieting or exercising. Several before-and-after photos showed the results of these amazing little miracle drops of gelatin. They had an impressive supporting cast. The stars of the TV pitchfest “Shark Tank” had invested in them. Both Rebel Wilson and Adele had used them to shed pounds. And then — the coup de grâce — Oprah (yes, the Oprah!) endorsed them.

The Gummy Guys went right to the top of the celebrity endorsement hierarchy when they targeted the big O.

As an ex-ad guy, I couldn’t ignore this ad. It was like watching a malvertising train wreck. There was so much here that screamed scam, I couldn’t believe it. The celebrity pics were painfully obvious Photoshop jobs. The claims were about as solid as a toilet-paper Taj Mahal. The entire premise reeked of snake oil.

I admit, I was morbidly fascinated.

First, of all the celebrities in all the world, why would you misappropriate Oprah’s brand? She is famously protective of it. If you’re messing with Oprah, you’ve either got to be incredibly stupid or have some serious stones. So which was it?

I started digging.

First of all, this isn’t new. The Keto Gummy Scam has been around for at least a year. In addition to Oprah, the scammers have also targeted Kevin Costner, Rihanna, Trisha Yearwood, Tom Selleck, Kelly Clarkson, Melissa McCarthy — even Wayne Gretzky.

Last fall, Oprah shared a video on Instagram warning people that she had nothing to do with the gummies and asking them not to fall for the scam. Other celebrities have followed suit and issued their own warnings.

Snopes.com has dug into the Keto Gummy Scam a couple of times. The first report, from a year ago, focused on the supposed Oprah Winfrey endorsement; a later exposé focused on the false claims that the gummies were featured on “Shark Tank.” That means these fraudulent ads have been associated with Oprah for at least a year and, legally, she has been unable to stop them.

To me, that rules out my first supposition. These people aren’t stupid.

This becomes apparent when you start trying to pick your way through the maze of misinformation they have built to support these ads. If you click on the ad, you’re taken to a webpage that looks like it’s from a reliable news source. The one I found looked like it was Time’s website. There you’ll find a “one-on-one interview” with Oprah about how she launched a partnership with Weight Watchers to create the Max Science Keto gummies. According to the interview, she called the CEO of Weight Watchers and said, “If you can’t create a product that helps people lose weight faster without diet and exercise, then I’m backing out of my investment and moving on.”

This is all complete bullshit. But it’s convincing bullshit.

It doesn’t stop there. Clickbait texts with outrageous claims, including the supposed death of Oprah, drive clicks through to more bogus sites with more outrageous claims about the gummies. While the sites mimic legitimate news organizations like Time, they reside on bogus domains such as genuinesmother.com and newsurvey22offer.com. Or, if you reach them through an in-app link, the URLs are cloaked and remain invisible.

If you turn to a search engine to do some due diligence, the scammers will be waiting for you. Search for “keto gummies scam” and the results page is stuffed with both sponsored and organic spam that appears to support the outrageous claims made in the ads. Paid-content outlets like Outlook India carry placed articles offering reviews of the “best keto gummies,” fake reviews, and assurances to potential victims that the gummies are not a scam but a proven way to lose weight.

As the Snopes investigators found, it’s almost impossible to trace these gummies to any company. Even if you get gummies shipped to you, there’s no return address or phone number. Orders come from a shadowy “Fulfillment Center” in places like Smyrna, Tennessee. Once they have your credit card, the unauthorized charges start.

Even the name of the product seems to be hard to nail down. The scammers seem to keep cycling through a roster of names.

This is, by every definition, predatory advertising. It is the worst example of what we as marketers do. But, like all predators, it can only exist because an ecosystem allows it to exist. It’s something we have to think about.

I certainly will. More on that soon.

Search and ChatGPT – You Still Can’t Get There From Here

I’m wrapping up my ChatGPTrilogy with a shout-out to an old friend who will be familiar to many Mediaposters: Aaron Goldman. Thirteen years ago, Aaron wrote a book called Everything I Know About Marketing I Learned from Google. Just a few weeks ago, he shared a post entitled “In a World of AI, is Everything I Know about Marketing (still) Learned from Google”. In it, he looked at the last chapter of the book, which he called Future-Proofing. Part of that chapter was based on a conversation Aaron and I had back in 2010 about what search might look like in the future.

Did we get it right? Well, remarkably, we got a lot more right than we got wrong, especially with the advent of Natural Language tools such as ChatGPT and virtual assistants like Siri.

We talked a lot about something I called “app-sistants”. I explained, “the idea of search as a destination is an idea whose days are numbered. The important thing won’t be search. It will be the platform and the apps that run on it. The next big thing will be the ability to seamlessly find just the right app for your intent and utilize it immediately.” In this context, “the information itself will become less and less important and the app that allows utilization of the information will become more and more important.”

To be honest, this evolution in search has taken a lot longer than I thought it would. Back then, I predicted, “Intent will be more fully supported from end to end. Right now, we have to keep our master ‘intent’ plan in place as we handle the individual tasks on the way to that intent.”

As it currently stands, searching for complex answers requires a lot of heavy lifting. In that discussion, I used the example of planning a trip. “Imagine if there were an app that could keep my master intent in mind for the entire process. It would know what my end goal was, would be tailored to understand my personal preferences and would use search to go out and gather the required information. When we look at alignment of intent, [a shift from search to apps is] a really intriguing concept for marketers to consider.”

So, the big question is, do we have such a tool? Is it ChatGPT? I decided to give it a try and see. After feeding ChatGPT a couple of carefully crafted prompts about a trip I’d like to take to Eastern Europe someday, I decided the answer is no. We’re not quite there yet. But we’re closer.

After a couple of iterations, ChatGPT did a creditable job of assembling a potential itinerary for a trip to Croatia and Slovenia. It even made me aware of some options I hadn’t run across in my previous research. But it left me hanging well short of the “app-sistant” I was dreaming of in 2010. Essentially, I got a suggestion, but all the detail work to turn it into an actual trip still required me to do hundreds of searches in various places.

The problem with ChatGPT is that it gets stuck between the millions of functionality silos – or “walled gardens” – that make up the internet. Those walled gardens exist because they represent opportunities for monetization. For an app-sistant to be able to multitask and make our lives easier, we need a virtual “commonage” that gets rid of some of those walls. And that’s probably the biggest reason we haven’t seen a truly useful iteration of the functionality I predicted more than a decade ago.

This conflict between capitalism and the concept of a commonage goes back at least to the Magna Carta. As England’s economy transitioned from feudalism to capitalism, enclosure saw fences built and lands held as commonage wiped out. The actual landscape became a collection of walled gardens that enforced the property rights of each parcel and protected the future production value of those parcels.

This history, which played out over hundreds of years, was repeated and compressed into a few decades online. We went from the naïve idealism of a “free for all” internet in the early days to the balkanized patchwork of monetization siloes that currently make up the Web.

Right now, search engines are the closest thing we have to a commonage on the virtual landscape. Search engines like Google can pull data from within many gardens, but if we actually try to use the data, we won’t get far before we run into a wall.

To go back to the idea of trip planning, I might be able to see what it costs to fly to Rome or what the cost of accommodations in Venice is on a search engine, but I can’t book a flight or reserve a room. To do that, I have to visit an online booking site. If I’m on a search engine, I can manually navigate this transition fairly easily. But it would stop something like ChatGPT in its tracks.

When I talked to Aaron 13 years ago, I envisioned search becoming a platform that lived underneath apps, which could provide more functionality to the user. But I was also skeptical about Google’s willingness to do this, as I stated in a later post here on MediaPost. In that post, I thought this might be an easier transition for Microsoft.

Whether it was prescience or just dumb luck, it is indeed Microsoft taking the first steps toward integrating search with ChatGPT, through its recent integration with Bing. Expedia (which also has Microsoft DNA in its genome) has also taken a shot at integrating ChatGPT in a natural language chat interface.

This flips my original forecast on its head. Rather than the data becoming common ground, it’s the chat interface that’s popping up everywhere. Rather than tearing down the walls that divide the online landscape, ChatGPT is being tacked up as window decoration on those walls.

I did try planning that same trip on both Bing and Expedia. Bing – alas – also left me well short of my imagined destination. Expedia – being a monetization site to begin with – got me a little closer, but it still didn’t seem that I could get to where I wanted to go.

I’m sorry to say search didn’t come nearly as far as I hoped it would 13 years ago. Even with ChatGPT thumbtacked onto the interface, we’re just not there yet.

(Feature Image: OpenAI Art generated from the prompt: “A Van Gogh painting of a chatbot on a visit to Croatia”)

Little White Paper Lies

When I was writing last week’s post about poor customer service, I remembered a study I wrote about back in 2019. The study was about how so many companies were terrible at responding to customer service emails. It was released by the Norwegian CRM provider SuperOffice.

At the time, the study was mentioned in a number of articles. The findings were compelling:

Sixty-two percent of companies didn’t respond to customer service emails. Ninety percent of companies didn’t let the customer know their email had been received.

Given the topic of my post, this was exactly the type of empirical evidence I was looking for.

There was just one problem: the original study was done in 2018. I wondered if it had been updated. After a quick search, I thought I had hit pay dirt. Based on the landing page (which came at the top of the results page for “customer service benchmark report”), a new 2023 study was available.

Perfect, I thought.  I filled in the lead contact form, knowing I was tossing my name into a lead-generation mill. I figured, “What the hell. I’m willing to trade that for some legit research.” I eagerly downloaded the report.

It was the same one I had seen four years earlier. Nothing was new.

Puzzled, I carefully went over the landing page wording. Sure enough, it said a new report had just been released. It gave some tidbits of the new findings, all of which were exactly the same as the 2018 report. After each “finding,” I was told “Tweet this!”

I was starting to get the whiff of something rotten from the State of Norway.

I tracked down the post author through LinkedIn. He was an SEO contractor based in Estonia. He replied saying he thought the company was still working on the new report.

I then reached out to the company. I not only wanted to see what they said about the report, I also wanted to see if they responded to my email. Did they walk their own talk?

To their credit, they did respond, with this, “We are sorry that the report have [sic] not been updated, and right now we have no plans to do that.”

So, the landing page was a bald-faced lie? I mentioned this in an email back to them. They apologized and said they would update the landing page to be more accurate. Based on the current version, it has been nudged in that direction, but it is still exceedingly misleading.

This is just one example of how corporate white papers are churned out to grab attention, earn some organic search rankings and collect leads. I fell for it, and I should have known better. I had already seen this sausage factory from the inside.

Back in the days when we used to do usability research, we had been asked by more than one company to do a commissioned study. These discussions generally started with these words: “Here is what we’d like the research to say.”

I’m guessing things haven’t changed much since then. Most of the corporate research I quote in this column is commissioned by companies who are selling solutions to the problems the research highlights.

For any of you in the research biz, you know what an ethically slippery slope it can be. Even in the supposedly pristine world of academic research, you don’t have to turn over too many rocks to uncover massive fraud, as documented in this Nature post. Imagine, then, the world of corporate commissioned white-paper research, where there is no such thing as peer review or academic rigor. It’s the gloves-off, no-holds-barred, grimy underbelly of research.

With our research, I tried to always make sure the research itself was done well. When we did do commissioned research, we tried to make the people who paid the bills happy by the approach we took to interpreting the research. That’s probably why we didn’t get a lot of commissions. Most of the research we did was for our own purposes, and we did our best to keep it legit. If we did get sponsors, they went in with the understanding that we were going to let the results frame the narrative, rather than the other way around.  I wanted to produce research that people could trust.

That was the biggest letdown of the SuperOffice experience. When I saw how cavalier the company was in presenting the research on its landing page, I realized that not only could I not trust its promotion of the research, I had trouble trusting the original research itself. I suspected I may have been duped into passing questionable information along the first time.

Fool me once…