Dove’s Takedown Of AI: Brilliant But Troubling Brand Marketing

The Dove brand has just staked out a substantial position in the battle over the use of AI in media. In a campaign called “Keep Beauty Real”, the brand released a 2-minute video showing how AI can create an unattainable and highly biased (read “white”) view of what beauty is.

If we’re talking branding strategy, this campaign is a master class. It’s totally on-brand for Dove, which introduced its “Campaign for Real Beauty” 18 years ago. Since then, the company has consistently fought digital manipulation of advertising images, promoted positive body image and reminded us that beauty can come in all shapes, sizes and colors. The video itself is brilliant. You really should take a couple of minutes to see it if you haven’t already.

But what I found just as interesting is that Dove chose to use AI as a brand differentiator. The video starts by telling us, “By 2025, artificial intelligence is predicted to generate 90% of online content.” It wraps up with a promise: “Dove will never use AI to create or distort women’s images.”

This makes complete sense for Dove. It aligns perfectly with its brand. But it can only work because AI now has what psychologists call emotional valency. And that has a number of interesting implications for our future relationship with AI.

“Hot Button” Branding

Emotional valency is just a fancy way of saying that a thing means something to someone. The valence can be positive or negative. The term was borrowed into psychology from chemistry, where valence describes an atom’s capacity to bind with others. So, if something has valency, it’s carrying emotional baggage, either good or bad.

This is important because emotions allow us to — in the words of Nobel laureate Daniel Kahneman — “think fast.” We make decisions without really thinking about them at all. It is the opposite of rational and objective thinking, or what Kahneman calls “thinking slow.”

Brands are all about emotional valency. The whole point of branding is to create a positive valence attached to a brand. Marketers don’t want consumers to think. They just want them to feel something positive when they hear or see the brand.

So for Dove to pick AI as an emotional hot button to attach to its brand, it must believe that the negative valence of AI will add to the positive valence of the Dove brand. That’s how branding math sometimes works: a negative added to a positive may not equal zero, but may equal 2 — or more. Dove is gambling that with its target audience, the math will work as intended.

I have nothing against Dove, as I think the points it raises about AI are valid — but here’s the issue I have with using AI as a brand reference point: It reduces a very complex issue to a knee-jerk reaction. We need to be thinking more about AI, not less. The consumer marketplace is not the right place to have a debate on AI. It will become an emotional pissing match, not an intellectually informed analysis. And to explain why I feel this way, I’ll use another example: GMOs.

How Do You Feel About GMOs?

If you walk down the produce or meat aisle of any grocery store, I guarantee you’re going to see a “GMO-Free” label. You’ll probably see several. This is another example of squeezing a complex issue into an emotional hot button in order to sell more stuff.

As soon as I mentioned GMO, you had a reaction to it, and it was probably negative. But how much do you really know about GMO foods? Did you know that GMO stands for “genetically modified organism”? I didn’t, until I just looked it up now. Did you know that you almost certainly eat foods that contain GMOs, even if you try to avoid them? If you eat anything with sugar harvested from sugar beets, you’re eating GMOs. And over 90% of all canola, corn and soybean crops are genetically modified.

Further, did you know that genetic modifications make plants more resistant to disease, more stable for storage and more likely to grow in marginal agricultural areas? If it weren’t for GMOs, a significant portion of the world’s population would have starved by now. A 2022 study suggests that GMO foods could even slow climate change by reducing greenhouse gases.

If you do your research on GMOs — if you “think slow” about them — you’ll realize that there is a lot to think about, both good and bad. For all the positives I mentioned before, there are at least an equal number of troubling things about GMOs. There is no easy answer to the question, “Are GMOs good or bad?”

But by bringing GMOs into the consumer world, marketers have shut down that debate. They are telling you, “GMOs are bad. And even though you consume GMOs by the shovelful without even realizing it, we’re going to slap some GMO-free labels on things so you will buy them and feel good about saving yourself and the planet.”

AI appears to be headed down the same path. And if GMOs are complex, AI is exponentially more so. Yes, there are things about AI we should be concerned about. But there are also things we should be excited about. AI will be instrumental in tackling the many issues we currently face.

I can’t help worrying when complex issues like AI and GMOs are painted with the same broad brush, especially when that brush is in the hands of a marketer.

Feature image: Body Scan 002 by Ignotus the Mage, used under CC BY-NC-SA 2.0 / Unmodified

AI Customer Service: Not Quite Ready For Prime Time

I had a problem with my phone, which is a landline (and yes, I’ve heard all the smartass remarks about being the last person on earth with a landline, but go ahead, take your best shot).

The point is, I had a problem. Actually, the phone had a problem, in that it didn’t work. No tone, no life, no nothing. So that became my problem.

What did I do? I called my provider (from my cell, which I do have) and after going through this bizarre ID verification process that basically stopped just short of a DNA test, I got routed through to their AI voice assistant, who pleasantly asked me to state my problem in one short sentence.

As soon as I heard that voice, which used the same dulcet tones as Siri, Alexa and the rest of the AI Geek Chorus, I knew what I was dealing with. Somewhere at a board table in the not-too-distant past, somebody had come up with the brilliant idea of using AI for customer service. “Do you know how much money we could save by cutting humans out of our support budget?” After pointing to a chart with a big bar and a much smaller bar to drive the point home, there would have been much enthusiastic applause and back-slapping.

Of course, the corporate brain trust had conveniently forgotten that they can’t cut all humans out of the equation, as their customers still fall into that category. And I was one of them, now dealing face to face with the “Artificially Intelligent” outcome of corporate cost-cutting. I stated my current state of mind more succinctly than the one short sentence I was instructed to use. It was, instead, one short word — four letters long, to be exact. Then I realized I was probably being recorded. I sighed and thought to myself, “Buckle up. Let’s give this a shot.”

I knew before starting that this wasn’t going to work, but I wasn’t given an alternative. So I didn’t spend too much time crafting my sentence. I just blurted something out, hoping to bluff my way to the next level of AI purgatory. As I suspected, Ms. AI was stumped. But rather than admit she was scratching her metaphysical head, she repeated the previous instruction, preceded by a patronizing “pat on my head” recap that sounded very much like it was aimed at someone with the IQ of a soap dish. I responded again with my four-letter reply — repeated twice, just for good measure.

Go ahead, record me. See if I care.

This time I tried a roundabout approach, restating my issue in terms that hopefully could be parsed by the cybernetic sadist that was supposedly trying to help me. Needless to say, I got no further. What I did get was a helpful text with all the service outages in my region. Which I knew wasn’t the problem. But no one asked me.

I also got a text with some troubleshooting tips to try at home. I had an immediate flashback to my childhood, trying to get my parents’ attention while they were entertaining friends at home: “Did you try to figure it out yourself, Gordie? Don’t bother Mommy and Daddy right now. We’re busy doing grown-up things. Run along and play.”

At this point, the scientific part of my brain started toying with the idea of making this an experiment. Let’s see how far we could push the boundaries of this bizarre scenario, which was equal parts frustrating and entertaining. My AI tormenter asked me, “Do you want to continue to try to troubleshoot this on the phone with me?”

I was tempted, I really was. Probably by the same part of my brain that forces me to smell sour milk or open the lid of that unidentified container of green fuzz that I just found in the back of the fridge.  And if I didn’t have other things to do in my life, I might have done that. But I didn’t. Instead, in desperation I pleaded, “Can I just talk to a human, please?”

Then I held my breath. There was silence. I could almost hear the AI wheels spinning. I began to wonder if some well-meaning programmer had included a subroutine for contrition. Would she start pleading for forgiveness?

After a beat and a half, I heard this, “Before I connect you with an agent, can I ask you for a few more details so they’re better able to help you?” No thanks, Cyber-Sally, just bring on a human, posthaste! I think I actually said something to that effect. I might have been getting a little punchy in my agitated state.

As she switched me to my requested human, I swore I could hear her mumble something in her computer-generated voice. And I’m pretty sure it was an imperative with two words, the first a verb with four letters, the second a subject pronoun with three letters.

And, if I’m right, I may have newfound respect for AI. Let’s just call it my version of the Turing Test.

Privacy’s Last Gasp

We’ve been sliding down the slippery slope of privacy rights for some time. But like everything else in the world, the rapid onslaught of disruption caused by AI is unfurling a massive red flag when it comes to any illusions we may have about our privacy.

We have been giving away a massive amount of our personal data for years now without really considering the consequences. If we do think about privacy, we do so as we hear about massive data breaches. Our concern typically is about our data falling into the hands of hackers and being used for criminal purposes.

But when you combine AI and data, a bigger concern should catch our attention. Whatever degree of anonymity we may have managed to retain, it is gone. Everything we do is now traceable back to us.

Major tech platforms generally deal with any privacy concerns with the same assurance: “Don’t worry, your data is anonymized!” But in reality, it takes very few connected dots to relink even anonymized data back to your identity.

Here is an example from the Electronic Frontier Foundation. Let’s say there is data that includes your name, your ZIP or postal code, your gender and your birthdate. If you remove your name, but include those other identifiers, technically that data is now anonymized.

But, says the EFF:

  • First, think about the number of people that share your specific ZIP or postal code. 
  • Next, think about how many of those people also share your birthday. 
  • Now, think about how many people share your exact birthday, ZIP code, and gender. 

According to a study from Carnegie Mellon University, those three factors are all that’s needed to identify 87% of the US population. If we fold in AI and its ability to quickly crunch massively large data sets to identify patterns, that percentage effectively becomes 100% and the data horizon expands to include pretty much everything we say, post, do or think. We may not think so, but we are constantly in the digital data spotlight and it’s a good bet that somebody, somewhere is watching our supposedly anonymous activities.
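To make the EFF’s point concrete, here is a minimal, purely illustrative sketch in Python. Every name, record and dataset below is invented, and the reidentify function is just a hypothetical helper; the sketch simply shows how an “anonymized” table can be re-linked to identities by joining it against any public list that still carries names (a voter roll, a loyalty program, a data-broker file) on those same three quasi-identifiers.

```python
# Illustrative only: re-identifying "anonymized" records by joining on
# quasi-identifiers (ZIP code, birthdate, gender). All data is made up.

anonymized_records = [
    {"zip": "90210", "birthdate": "1980-04-02", "gender": "F", "diagnosis": "asthma"},
    {"zip": "90210", "birthdate": "1975-11-17", "gender": "M", "diagnosis": "diabetes"},
]

# A separate, public dataset that still carries names (e.g., a voter roll).
public_roll = [
    {"name": "Jane Doe",   "zip": "90210", "birthdate": "1980-04-02", "gender": "F"},
    {"name": "John Smith", "zip": "90210", "birthdate": "1975-11-17", "gender": "M"},
]

def reidentify(anon_rows, named_rows):
    """Match rows whose ZIP, birthdate and gender all line up."""
    def key(row):
        return (row["zip"], row["birthdate"], row["gender"])
    names_by_key = {key(row): row["name"] for row in named_rows}
    return [
        {"name": names_by_key[key(row)], **row}
        for row in anon_rows
        if key(row) in names_by_key
    ]

for match in reidentify(anonymized_records, public_roll):
    print(match)  # The "anonymous" diagnosis is now attached to a name.
```

No AI is needed for a toy example like this. The point of that 87% figure is that, for most of us, the three-column key is already unique; AI just makes the matching trivial at population scale and across far messier data.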

The other shred of comfort we tend to cling to when we trade away our privacy is that at least the data is held by companies we are familiar with, such as Google and Facebook. But according to a recent survey by Merkle, reported on in MediaPost by Ray Schultz, even that small comfort may be slipping from our grasp. Fifty-eight percent of respondents said they were concerned about whether their data privacy and identity were being protected.

Let’s face it. If a platform is supported by advertising, then that platform will continue to develop tools to more effectively identify and target prospects. You can’t do that and also effectively protect privacy. The two things are diametrically opposed. The platforms are creating an ecosystem where it will become easier and easier to exploit individuals who thought they were protected by anonymity. And AI will exponentially accelerate the potential for that exploitation.

The platforms’ failure to protect individuals is currently being investigated by the US Senate Judiciary Committee. The individuals in this case are children, and the protection that has failed is against sexual exploitation. None of the platform executives giving testimony intended for this to happen. Mark Zuckerberg apologized to the parents at the hearing, saying, “I’m sorry for everything you’ve all gone through. It’s terrible. No one should have to go through the things that your families have suffered.”

But this exploitation didn’t happen just because of one little crack in the system or because someone slipped up. It’s because Meta has intentionally and systematically been building a platform that collects the data and assembles the audience that make this exploitation possible. It’s like a gun manufacturer standing up and saying, “I’m sorry. We never imagined our guns would be used to actually shoot people.”

The most important question is: do we care that our privacy has effectively been destroyed? Sure, when we’re asked in a survey if we’re worried, most of us say yes. But our actions say otherwise. Would we trade away the convenience and utility these platforms offer us in order to get our privacy back? Probably not. And all the platforms know that.

As I said at the beginning, our privacy has been sliding down a slippery slope for a long time now. And with AI now in the picture, it’s probably going down for the last time. There is really no more slope left to slide down.

Fooling Some of the Systems Some of the Time

If there’s a system, there’s a way to game it. Especially when those systems are tied to someone making money.

Buying a Best Seller

Take publishing, for instance. New books that make the New York Times best-seller list sell more copies than ones that don’t. A 2004 study by University of Wisconsin economics professor Alan Sorensen found the bump is about 57%. That’s certainly motivation for a publisher to game the system.

There’s also another motivating factor. According to a Times op-ed, Michael Korda, former editor in chief of Simon and Schuster, said that an author’s contract can include a bonus of up to $100,000 for hitting No. 1 on the list.

This amplifying effect is not a one-shot deal. Make the list for just one week, in any slot under any category, and you can forever call yourself a “NY Times bestselling author,” reaping the additional sales that that honor brings with it. Given the potential rewards, you can guarantee that someone is going to be gaming the system.

And how do you do that? Typically, by doing a bulk purchase through an outlet that feeds its sales numbers to The Times. That’s what Donald Trump Jr. and his publisher did for his book “Triggered,” which hit No. 1 on its release in November of 2019, according to various reports. Just before the release, the Republican National Committee reportedly placed a $94,800 order with a bookseller, which would equate to about 4,000 books, enough to ensure that “Triggered” would end up on the Times list. (Note: The Times does flag suspicious entries with a dagger symbol when it believes someone may be gaming the system by buying in bulk.)

But it’s not only book sales where you’ll find a system primed for rigging. Even those supposedly objective 5-star buyer ratings you find everywhere have also been gamed.

5-Star Scams

A 2021 McKinsey report said that, depending on the category, a small bump in a star rating on Amazon can translate into a 30% to 200% boost in sales. Given that potential windfall, it’s no surprise that fake review scams proliferate on the gargantuan retail platform.

A recent Wired exposé on these fake reviews found a network that had achieved a sobering level of sophistication. It included active recruitment of human reviewers (called “Jennies” — if you haven’t been recruited yet, you’re a “Virgin Jenny”) willing to write a fake review for a small payment or free products. The networks rely on recruiting agents in Pakistan, Bangladesh and India working on behalf of sellers from China.

But the fake review ecosystem also included reviews cranked out by AI-powered automated agents. As AI improves, these types of reviews will be harder to spot and weed out of the system.

Some recent studies have found that, depending on the category, over one-third of the reviews you see on Amazon are fake. Books, baby products and large appliance categories are the worst offenders.

Berating Ratings…

Back in 2014, Itamar Simonson and Emanuel Rosen wrote a book called “Absolute Value: What Really Influences Customers in the Age of (Nearly) Perfect Information.” Spoiler alert: they posited that consumer reviews and other sources of objective information were replacing traditional marketing and branding in terms of what influenced buyers.

They were right. The stats I cited above show how powerful these supposedly objective factors can be in driving sales. But unfortunately, thanks to the inevitable attempts to game these systems, the information they provide can often be far from perfect.

A Column About Nothing

What do I have to say in my last post for 2023? Nothing.

Last week, I talked about the cost of building a brand. Then, this week, I (perhaps being the last person on earth to do so) heard about Nothing. No – not small “n” nothing as in the absence of anything – Big “N” Nothing as in the London-based tech startup headed by Chinese-born entrepreneur Carl Pei.

Nothing, according to their website, crafts “intuitive, flawlessly connected products that improve our lives without getting in the way. No confusing tech-speak. No silly product names. Just artistry, passion and trust. And products we’re proud to share with our friends and family. Simple.”

Now, just like the football talents of David Beckham I explored in my last post, the tech Nothing produces is good – very good – but not uniquely good. The Nothing Phone (1) and the just-released Nothing Phone (2) are capable mid-range smartphones. Again, from the Nothing website, you are asked to “imagine a world where all your devices are seamlessly connected.”

It may just be me, but isn’t that what Apple has been promising (and occasionally delivering) for the better part of the last quarter century? Doesn’t Google make the same basic promise? Personally, I see nothing earth-shaking in Nothing’s mission. It all feels very “been there, done that.” Or, if you’ll allow me – it all seems like much ado about Nothing (sorry). Yet people paid thousands over the asking price when 100 units of the first Nothing Phone were put up for auction prior to its public launch.

Why? Because of the value of the Nothing brand. And that value comes from one place. No, not the tech. The community. Pei may be a pretty good builder of phones, but he’s an even better builder of community. He has expertly built a fan base who love to rave about Nothing. On the “Community” section of the Nothing website, you’re invited to “abandon the glorification of I and open up to the potential of We.” I’m not sure exactly what that means, but it all sounds very cool and idealistic, if a little vague.

Another genius move by Pei was to open up to the potential of Nothing itself. In what is probably a latent (or perhaps not-so-latent) backlash against over-advertising and in-your-face branding, we were eager to jump on the Nothing bandwagon. It seems like anti-branding, but it’s not. It’s actually expertly crafted, by-the-book branding. Just like Seinfeld, a show about nothing that became one of the most popular TV shows in history, Nothing proves there is some serious branding swagger in the concept of nothing. I can’t believe no one thought to stake a claim to this branding goldmine before now.

The Branding Case Study of David Beckham

I have to admit, I’m not a sports fan. And of the few sports I know a little about, European football is certainly not one of them. So my choice to watch the recent Beckham documentary on Netflix is certainly not typical. That said, I did find it a fascinating case study in something I was not expecting: the making and valuation of a personal brand.

First, a controversial question must be posed: was Beckham a good player? According to those who know much more about the sport than I do, the answer is definitely “Yes” – but he wasn’t the GOAT (Greatest of All Time) – he wasn’t even a GOHT (Greatest of His Time). The closest Beckham ever came to winning the Ballon d’Or, given to the best player of the year, was to place second behind Rivaldo in 1999. During his time at Real Madrid CF, he wasn’t even the best player on the team. Granted, it was a stacked team and Beckham was one of the “galácticos” (superstars), along with Figo, Zidane and Ronaldo. But, unlike Beckham, all those other players have at least one Ballon d’Or in their trophy case. (Note: fellow MediaPost columnist Jon Last recently took an interesting look at this topic in his column, “The Death of Meritocracy in Sports Pay.”)

But despite this, Beckham was certainly the highest-paid player in the world when Timothy Leiweke lured him to the LA Galaxy, where his contract also gave him a piece of the profits. So, if he wasn’t the greatest player but was the most valuable one, what created that value? Why was David Beckham worth hundreds of millions of dollars?

As the documentary showed, there was a dimension to signing Beckham that went far beyond his ability to put a round ball in the net. He was a global brand – the most famous football player in the world. And that’s what Real Madrid president Florentino Pérez and Timothy Leiweke each bought when they signed Beckham.

As I said, the documentary revealed some interesting truths about branding. What creates brand value? Who owns that value? What is the price paid for the value of a personal brand?

What the Beckham documentary showed, more than anything, is that brand value is determined in a public market. Beckham certainly brought brand assets to the table: his own athletic ability, being exceedingly good looking, a kaleidoscope of hair styles, and a marriage to one of the most popular pop stars in the world, Victoria Adams – Posh Spice from the Spice Girls. Those were the table stakes for establishing his brand value, the price of entry.

But beyond that, the value of his brand was really whatever the public determined it to be. For example, after he was red-carded in a critical match against Argentina at the 1998 World Cup, all of Britain decided that Beckham had cost them the championship. Whether that was true or not (there are a lorry-full of “ifs” in that opinion), it caused his brand value to plummet. There was really nothing Beckham could do. His brand was out of his control. It was owned by the media and the public.

The documentary really highlights the viral and frenzied nature of the market that determines the value of a personal brand. And remember, this all took place in the days before social media and the very real impact of being publicly cancelled! Since Beckham’s prime in the 1990s and early 2000s, the market effect of branding has been both amplified and compressed. The market of public opinion is now wired, meaning network effects happen on incredibly short timelines and without even the illusion of control.

Certainly, the monetary benefits of a brand usually accrue to its supposed owner. David and Victoria Beckham are reportedly worth a half billion dollars, making Beckham one of the richest athletes in the world. But the documentary makes it clear that there was a price paid that was not monetary. Much of what we would all call “our lives” had to be traded by the Beckhams for a brand that was controlled by the public and the press. There were no boundaries, no privacy, no refuge from fame.

When we pull back from the story of David and Victoria Beckham, there are takeaways for anyone attempting to build a brand, whether it be personal or corporate. You may be able to plant the seeds, but after that, everything else is going to be largely out of your control.

OpenAI’s Q* – Why Should We Care?

OpenAI CEO Sam Altman’s ouster and reinstatement has rolled through the typical news cycle, and we’re now back to blissful ignorance. But I think this will be one of the sea-change moments: a tipping point we’ll look back on in the future, when AI has changed everything we thought we knew, and wonder, “How the hell did we let that happen?”

Sometimes I think that tech companies use acronyms and cryptic names for new technologies to allow them to sneak game changers in without setting off the alarm bells. Take OpenAI’s Q-Star, for example. How scary does that sound? Not very. It’s just one more vague label for something we really don’t understand.

 If I’m right, we do have to ask the question, “Who is keeping an eye on these things?”

This week I decided to dig into the whole Sam Altman firing/hiring episode a little more closely so I could understand if there’s anything I should be paying attention to. Granted, I know almost nothing about AI, so what follows is very much at the layperson level, but I think that’s probably true for the vast majority of us. I don’t run into AI engineers that often in my life.

So, should we care about what happened a few weeks ago at OpenAI? In a word – YES.

First of all, a little bit about the dynamics of what led to Altman’s original dismissal. OpenAI started with the best of altruistic intentions: to “ensure that artificial general intelligence benefits all of humanity.” That was an ideal – many would say a naïve ideal – that Altman and OpenAI’s founders imposed on themselves. As Google discovered with its “Don’t Be Evil” mantra, it’s really hard to be successful and idealistic at the same time. In our world, success is determined by profits, and idealism and profitability almost never play in the same sandbox. Google quietly watered down the “Don’t Be Evil” motto until it virtually disappeared in 2018.

OpenAI’s non-profit board was set up as a kind of internal “kill switch” to prevent the development of technologies that could be dangerous to the human race. That theoretical structure was put to the test when the board received a letter this year from some senior researchers at the company warning of a new artificial intelligence discovery that might take AI past the threshold where it could be harmful to humans. The board then did what it was set up to do, firing Altman, removing board chairman Greg Brockman from the board, and putting the brakes on the potentially dangerous technology. Then Big Brother Microsoft (which has invested $13 billion in OpenAI) stepped in, and suddenly Altman was back. (Note: for a far more thorough and fascinating look at OpenAI’s unique structure and the endemic problems with it, read through Alberto Romero’s series of thoughtful posts.)

There were probably two things behind Altman’s ouster: the potential capabilities of a new development called Q-Star, and a fear that OpenAI would follow its previous path of throwing the technology out into the world without considering the potential consequences. So, why is Q-Star so troubling?

Q-Star could be a major step closer to AI that can reason and plan. This moves us closer to the overall goal of artificial general intelligence (AGI), the holy grail for every AI developer, including OpenAI. Artificial general intelligence, as per OpenAI’s own definition, means “AI systems that are generally smarter than humans.” Q-Star, through its ability to tackle grade-school math problems, showed the promise of being artificial intelligence that could plan and reason. And that is an important tipping point, because something that can reason and plan pushes us forever past the boundary of a tool under human control. It’s technology that thinks for itself.

Why should this worry us? It should worry us because of Herbert Simon’s concept of “bounded rationality”, which explains that we humans are incapable of pure rationality. At some point we stop thinking endlessly about a question and come up with an answer that’s “good enough”. And we do this because of limited processing power. Emotions take over and make the decision for us.

But AGI throws those limits away. It can process exponentially more data at a rate we can’t possibly match. If we’re looking at AI through Sam Altman’s rose-colored glasses, that should be a benefit. Wouldn’t it be better to have decisions made rationally, rather than emotionally? Shouldn’t that be a benefit to mankind?

But here’s the rub. Compassion is an emotion. Empathy is an emotion. Love is also an emotion. What kind of decisions do we come to if we strip that out of the algorithm, along with any type of human check and balance?

Here’s an example. Let’s say that at some point in the future an AGI superbrain is asked the question, “Is the presence of humans beneficial to the general well-being of the earth?”

I think you know what the rational answer to that is.

X Marks the Spot

Elon Musk has made his mark. Twitter and its cute little birdy logo are dead. Like Monty Python’s famous parrot, this bird has shuffled off its mortal coil.

So Twitter is dead. Long live X?

I know — that seems weird to me, too.

Musk clearly has a thing for the letter X. He founded a company called X.com, which merged with Confinity in 2000 and went on to become PayPal. In his portfolio of companies, you’ll find SpaceX, xAI and X Corp. It’s seldom you see so much devotion to 1/26th of the Latin alphabet.

It’s not unprecedented to pick a letter and turn it into a brand. Steve Jobs managed to make the letter “i” the symbol for everything Apple. Mind you, he also tacked on helpful product descriptors to keep us from getting confused. If he had changed the name of Apple to “I” and just left it at that, it might not have worked so well.

At their best, brands should immediately bridge the gap between the DNA of a company and a long-term niche in the brains of those of us in the marketplace. Twitter did that. When you saw the iconic bird logo or heard the word Twitter, you knew exactly what it referred to.

This is easier when the company is known for a handful of products. But when companies stretch into multiple areas, it’s tough to make one brand synonymous with hundreds or thousands of products. 

This brand diffusion is common in the hyper-accelerated world of tech. You launch a product and it’s so successful, it becomes a mega-corporation. At some point you’re stuck with an awkward transition: You leave the original brand associated with that product and create an umbrella brand that is vague enough to shelter a diverse and expanding portfolio of businesses. That’s why Google created the generic Alphabet brand, and why Facebook became Meta.

But Musk didn’t create an umbrella to shelter Twitter and its brand. He used it to beat the brand to death. Maybe he just doesn’t like blue birds.

When a brand does its job well, we feel a personal relationship with it. Twitter’s brand did this. It was unique in tech branding, primarily because it was cute and organic. It was an accessible brand, a breath of fresh air in a world of cryptic acronyms and made-up terms with weird spellings. It made sense to us. And we are sorry to see it go.

In fact, some of us are flat-out refusing to admit the bird is dead. One programmer has already whipped together a Chrome extension that strips out the X branding and brings our favorite little Tweeter back from the beyond. Much as I admire this denial, I suspect this is only delaying the inevitable. It’s time to say bye-bye birdy. 

This current backlash against Musk’s rebranding could be a natural outcome of his effort to move the brand from one tied to a single product to one that creates a bigger tent for multiple products. He has been pretty vocal about X becoming an “everything” app, a la China’s WeChat.

I suspect the road to making X a viable brand is going to be a rocky one. First of all, if you were going to pick the most generic symbol imaginable, X would be your choice. It has literally been a stand-in for pretty much everything you could think of for centuries now. Even my great-great-grandfather signed his name with an “X.”

We Hotchkisses have always been ahead of our time.

But the ubiquity of “X” brings up another problem, this time on the legal front. According to a lengthy analysis of Twitter’s rebranding by Emma Roth, you can trademark a single letter, but trying to make X your brand will come with some potentially litigious baggage. Microsoft has a trademark on X. So does Meta.

As long as Musk’s X sticks to its knitting, that might not be a problem. Microsoft registered X for its Xbox gaming console. Meta’s trademark also has to do with gaming. Apparently, as long as you don’t cross industries and confuse customers, having the same trademark shouldn’t be an issue.

But the chances of Elon Musk playing nice and following the rules of trademark law while pursuing his plan for world domination are somewhat less than zero. In this case, I think it’s fair to speculate that the formula for the future will be: X = a shitload of lawyer fees

Also, even if you succeed in making X a recognized and unique brand, protecting that brand will be a nightmare. How do you build a legal fence around X when the choice of it as a brand was literally to tear down fences?

But maybe Musk has already foreseen all this. Maybe he has some kind of superpower to see things we can’t.

Kind of like Superman’s X-Ray vision.

It’s All in How You Spin It

I generally get about 100 PR pitches a week. And I’m just a guy who writes a post on tech, people and marketing now and then. I’m not a journalist. I’m not even gainfully employed by anyone. I am just one step removed — thanks to the platform  MediaPost has provided me — from “some guy” you might meet at your local coffee shop.

But still, I get 100 PR pitches a week. Desperation for coverage is the only reason I can think of for this to be so. 99.9999% of the time, they go straight to my trash basket. And the reason they do is that they’re almost never interesting. They are — well, they’re pitches for free exposure.

Now, the average pitch, even if it isn’t interesting, should at least try to match the target’s editorial interest. It should be in the strike zone, so to speak.

Let’s do a little postmortem on one I received recently. It was titled “AI in Banking.” Fair enough. I have written a few posts on AI. Specifically, I have written a few posts on my fear of AI.

I have also written about my concerns about misuse of data. When it comes to the nexus between AI and data, I would be considered more than a little pessimistic. So, something linking AI and banking did pique my interest, but not in a good way. I opened the email.

There, in the first paragraph, I read this: “AI is changing how banks provide personalized recommendations and insights based on enriched financial data offering tailored suggestions, such as optimizing spending, suggesting suitable investment opportunities, or identifying potential financial risks.”

This, for those of you not familiar with “PR-ese,” is what we in the biz call “spin.” Kellyanne Conway once called it — more euphemistically — an alternative fact.

Let me give you an example. Let’s say that during the Tour de France half the peloton crashes and the cyclists get a nasty case of road rash. A PR person would spin that to say that “Hundreds of professional cyclists discover a new miracle instant exfoliation technique from the South of France.”

See? It’s not a lie, it’s just an alternative fact.

Let’s go on. The second paragraph of the pitch continued: “Bud, a company that specializes in data intelligence is working with major partners across the country (Goldman Sachs, HSBC, 1835i, etc.) to categorize and organize financial information and data so that users are empowered to make informed decisions and gain a deeper understanding of their financial situation.”

Ah — we’re now getting closer to the actual fact. The focus is beginning to switch from the user, empowered to make better financial decisions thanks to AI, to what is actually happening: a data marketplace being built on the backs of users for sale to corporate America.

Let’s now follow the link to Bud’s website. There, in big letters on the home page, you read:

“Turn transactional data into real-time underwriting intelligence

Bud’s AI platform and data visualizations help lenders evaluate risk, reduce losses and unlock hidden revenue potential.”

Bingo. This is not about users, at least, not beyond using them as grist in a data mill. This is about slipping a Trojan Horse into your smartphone in the form of an app and hoovering your personal data up to give big banks an intimate glimpse into not just your finances, but also your thinking about those finances. As you bare your monetary soul to this helpful “Bud,” you have established a direct pipeline to the very institutions that hold your future in their greedy little fingers. You’re giving an algorithm everything it needs to automatically deny you credit.

This was just one pitch that happened to catch my eye long enough to dig a little deeper. But it serves as a perfect illustration of why I don’t trust big data or AI in the hands of for-profit corporations.

And that will continue to be true — no matter how you PR pros spin it.

The Spark in the Jar: Jon Ive and Steve Jobs

I sold all my Apple stock shortly after Steve Jobs passed away. It was premature (which is another word for stupid). Apple stock is today worth about 10 times what I sold it for.

My reasoning was thus: Apple couldn’t function without Steve Jobs – not for long, anyway.

Well, 12 years later, it’s doing quite well, thank you. It has a stock price of almost $200 per share (as of this writing). Sales have never been stronger. While replacement CEO Tim Cook is no Steve Jobs, financially he has grown Apple into a monolithic force with a market capitalization of almost 3 trillion dollars. There is no other company even close to that.

Now, with the benefit of hindsight, I realize I underestimated Tim Cook. But I stand with my original instinct: whatever Apple was under Steve Jobs, it couldn’t survive without him. And to understand why, let’s take a quick look back.

Jobs was infamously ousted from Apple in 1985. He remained in “NeXT-ile” for 12 years, coming back in 1997 to lead Apple into what many believe was its Golden Era. He passed away in 2011.

In the 14 years Jobs led Apple in his second run, the stock price went from about 20 cents to about 12 dollars (split-adjusted). That’s growth of about 6,000%. Steve Jobs brought Apple back from the brink of death. If it wasn’t for a lifeline thrown to it by its number one competitor, Microsoft, in 1997, Apple would be no more. As Jobs himself said: “Apple was in very serious trouble. And what was really clear was that if the game was a zero-sum game where for Apple to win, Microsoft had to lose, then Apple was going to lose.”

But those growth numbers are a little misleading. For you to be one of the fastest growing companies in history, it helps when you start with a very, very small number. A share price of $0.20 is a very, very small number.

Much as everyone lauds Steve Jobs for the turnaround of Apple, I would argue that Tim Cook’s performance is even more impressive. To say that Apple was already on a roll when Cook took over is an understatement. In 2011, Apple was going from success to success and could seem to do no wrong. That was one of the reasons I was pessimistic about its future. I thought it couldn’t sustain its run, especially when it came to introducing new products. How many Jobs-inspired home runs could it possibly have in its pipeline?

But what Tim Cook was great at was logistics. He took that pipeline and managed to squeeze out another decade plus of value building thanks to what may be the best supply chain strategy in the world. Analysts have said that half of Apple’s 3 trillion dollars in value is directly attributable to that supply chain.

But when you squeeze every last ounce of efficiency out of a supply chain, something has to give. And in this case, it may have been creativity.

The Jobs-era Apple was a very rare and delicate thing in the corporate world: a leader who was uncompromising on user experience and a design team able to rise and meet the challenge. Was it dictatorial? Absolutely. Was it magical? Almost always. It was like catching a spark in a jar.

That design team was headed by Jonathan Ive. And when you have a team that’s the absolute best in the world, you can put up with an asshole here and there, especially when that asshole keeps challenging you to be better. And when you keep delivering.

The alchemy that made Apple spectacularly successful from 1997 to 2011 was a fragile thing. It wouldn’t take much to change the formula forever. For example, if you removed the catalyst – which was Steve Jobs – it couldn’t survive. But equally important to that formula was Jon Ive.

As David Price, the editor of Macworld said,

“What Ive brought to Apple was a coherent personal vision. That doesn’t mean Apple’s designs on his watch were always perfect, of course; there were plenty of missteps. In broader terms, his arch-minimalism could be frustrating for those who wanted more physical controls”

Ive and Jobs were, by all accounts, inseparable. In a heartfelt tribute to Jobs published shortly after his passing, Ive remembered,

“We worked together for nearly 15 years. We had lunch together most days and spent our afternoons in the sanctuary of the design studio. Those were some of the happiest, most creative and joyful times of my life,” Ive wrote. “I loved how he saw the world. The way he thought was profoundly beautiful.”

For Jobs and Ive, “Think Different” was both a manifesto and a mantra. That philosophy started a not-so-slow death the minute Jobs passed from this earth. Finally, in June 2019, Ive announced his departure “after years of frustration, seeing the company migrate from a design-centric entity to one that was more utilitarian.”

It seems that companies can excel at either creativity or execution. It’s very difficult – perhaps impossible – to do both. The Apple of Steve Jobs was the world’s most creative corporation. The Apple of Tim Cook is a world leader in execution. But for one to happen, the other had to make room. Today, Apple is trying to be creative by committee. Macworld’s David Price mourns the Apple that was: “Maybe Apple is no longer a company that focuses on individual personality, or indeed on thinking different. This week we also got the news that Ive’s replacement will not be replaced, with a core group of 20 designers instead reporting directly to the chief operating officer, who is no stranger to design and likely has his own ideas. If design by committee has been the de facto approach for the past four years, it’s now been made official.”

And committees always suck all the oxygen from the room. In that atmosphere, the spark that once was Apple inevitably had to go out.