The Branding Case Study of David Beckham

I have to admit, I’m not a sports fan. And of the few sports I know a little about, European football is certainly not one of them. So my choice to watch the recent Beckham documentary on Netflix was hardly typical. That said, I did find it a fascinating case study in something I was not expecting: the making and valuation of a personal brand.

First, a controversial question must be posed: was Beckham a good player? According to those who know much more about the sport than I do, the answer is definitely “Yes” – but he wasn’t the GOAT (Greatest of All Time) – he wasn’t even a GOHT (Greatest of His Time). The closest Beckham ever came to winning the Ballon d’Or, given to the best player of the year, was to place second behind Rivaldo in 1999. During his time at Real Madrid CF, he wasn’t even the best player on the team. Granted, it was a stacked team and Beckham was one of the “galácticos” (superstars), along with Figo, Zidane and Ronaldo. But, unlike Beckham, all those other players have at least one Ballon d’Or in their trophy case. (Note: fellow MediaPost columnist Jon Last recently took an interesting look at this topic in his column, “The Death of Meritocracy in Sports Pay.”)

But despite this, Beckham was certainly the highest-paid player in the world when Timothy Leiweke lured him to the LA Galaxy, where his contract also gave him a piece of the profits. So, if he wasn’t the greatest player but was the most valuable one, what created that value? Why was David Beckham worth hundreds of millions of dollars?

As the documentary showed, there was a dimension to Beckham’s signing to a team that went far beyond his ability to put a round ball in the net. He was a global brand – the most famous football player in the world. And that’s what Real Madrid president Florentino Pérez and Timothy Leiweke each bought when they signed Beckham.

As I said, the documentary revealed some interesting truths about branding. What creates brand value? Who owns that value? What is the price paid for the value of a personal brand?

What the Beckham documentary showed, more than anything, is that brand value is determined in a public market. Beckham certainly brought brand assets to the table: his own athletic ability, being exceedingly good looking, a kaleidoscope of hairstyles, and a marriage to one of the most popular pop stars in the world, Victoria Adams – Posh Spice from the Spice Girls. Those were the table stakes for establishing his brand value, the price of entry.

But beyond that, the value of his brand was really whatever the public determined it to be. For example, after he was red-carded in a critical match against Argentina in the 1998 World Cup, all of Britain decided that Beckham had cost them the championship. Whether that was true or not (there is a lorry-load of “ifs” in that opinion), it caused his brand value to plummet. There was really nothing Beckham could do. His brand was out of his control. It was owned by the media and the public.

The documentary really highlights the viral and frenzied nature of the market that determines the value of a personal brand. And remember, this all took place in the days before social media and the very real impact of being publicly cancelled! Since Beckham’s prime in the 1990s and early 2000s, the market effect of branding has been amplified and compressed. The market of public opinion is now wired, meaning network effects happen on incredibly short timelines and without even the illusion of control.

Certainly, the monetary benefits of a brand usually accrue to its supposed owner. David and Victoria Beckham are reportedly worth a half billion dollars, making him one of the richest athletes in the world. But the documentary makes it clear that there was a price paid that was not monetary. Much of what we would all call “our lives” had to be traded by the Beckhams for a brand that was controlled by the public and the press. There were no boundaries, no privacy, no refuge from fame.

When we pull back from the story of David and Victoria Beckham, there are takeaways for anyone attempting to build a brand, whether it be personal or corporate. You may be able to plant the seeds, but after that, everything else is going to be largely out of your control.

OpenAI’s Q* – Why Should We Care?

OpenAI founder Sam Altman’s ouster and reinstatement has rolled through the typical news cycle and we’re now back to blissful ignorance. But I think this will be one of the sea-change moments: a tipping point that we’ll look back on in the future, when AI has changed everything we thought we knew, and wonder, “How the hell did we let that happen?”

Sometimes I think that tech companies use acronyms and cryptic names for new technologies to allow them to sneak game-changers in without setting off the alarm bells. Take OpenAI, for example. How scary does Q-Star sound? It’s just one more vague label for something we really don’t understand.

If I’m right, we do have to ask the question, “Who is keeping an eye on these things?”

This week I decided to dig into the whole Sam Altman firing/hiring episode a little more closely so I could understand if there’s anything I should be paying attention to. Granted, I know almost nothing about AI, so what follows is very much at the layperson level, but I think that’s probably true for the vast majority of us. I don’t run into AI engineers that often in my life.

So, should we care about what happened a few weeks ago at OpenAI? In a word – YES.

First of all, a little bit about the dynamics of what led to Altman’s original dismissal. OpenAI started with the best of altruistic intentions: to “ensure that artificial general intelligence benefits all of humanity.” That was an ideal – many would say a naïve ideal – that Altman and OpenAI’s founders imposed on themselves. As Google discovered with its “Don’t Be Evil” mantra, it’s really hard to be successful and idealistic at the same time. In our world, success is determined by profits, and idealism and profitability almost never play in the same sandbox. Google quietly watered down the “Don’t Be Evil” motto until it virtually disappeared in 2018.

OpenAI’s non-profit board was set up as a kind of internal “kill switch” to prevent the development of technologies that could be dangerous to the human race. That theoretical structure was put to the test when the board received a letter this year from some senior researchers at the company warning of a new artificial intelligence discovery that might take AI past the threshold where it could be harmful to humans. The board then did what it was set up to do, firing Altman, removing board chairman Greg Brockman, and putting the brakes on the potentially dangerous technology. Then, Big Brother Microsoft (which has invested $13 billion in OpenAI) stepped in and suddenly Altman was back. (Note: for a far more thorough and fascinating look at OpenAI’s unique structure and the endemic problems with it, read through Alberto Romero’s series of thoughtful posts.)

There were probably two things behind Altman’s ouster: the potential capabilities of a new development called Q-Star and a fear that it would follow OpenAI’s previous path of throwing it out there to the world, without considering potential consequences. So, why is Q-Star so troubling?

Q-Star could be a major step closer to AI that can reason and plan. This moves us closer to the overall goal of artificial general intelligence (AGI), the holy grail for every AI developer, including OpenAI. Artificial general intelligence, per OpenAI’s own definition, means “AI systems that are generally smarter than humans.” Q-Star, through its ability to tackle grade-school math problems, showed the promise of being artificial intelligence that could plan and reason. And that is an important tipping point, because something that can reason and plan pushes us forever past the boundary of a tool under human control. It’s technology that thinks for itself.

Why should this worry us? It should worry us because of Herbert Simon’s concept of “bounded rationality,” which holds that we humans are incapable of pure rationality. Our processing power is limited, so at some point we stop deliberating endlessly about a question and settle for an answer that’s “good enough” – what Simon called “satisficing.” When reason runs out, emotions take over and make the decision for us.

But AGI throws those limits away. It can process exponentially more data at a rate we can’t possibly match. If we’re looking at AI through Sam Altman’s rose-colored glasses, that should be a benefit. Wouldn’t it be better to have decisions made rationally, rather than emotionally? Shouldn’t that be a benefit to mankind?

But here’s the rub. Compassion is an emotion. Empathy is an emotion. Love is also an emotion. What kind of decisions do we come to if we strip that out of the algorithm, along with any type of human check and balance?

Here’s an example. Let’s say that at some point in the future an AGI superbrain is asked the question, “Is the presence of humans beneficial to the general well-being of the earth?”

I think you know what the rational answer to that is.

When AI Love Goes Bad

When we think about AI and its implications, it’s hard to wrap our own non-digital, flesh-and-blood brains around the magnitude of it. Try as we might, it’s impossible to forecast the impact of this massive wave of disruption that’s bearing down on us. So, today, in order to see what might be the unintended consequences, I’d like to zoom in on one particular example.

There is a new app out there. It’s called Anima and it’s an AI girlfriend. It’s not the only one. When it comes to potential virtual partners, there are plenty of fish in the sea. But – for this post, let’s stay true to Anima. Here’s the marketing blurb on her website: “The most advanced romance chatbot you’ve ever talked to. Fun and flirty dating simulator with no strings attached. Engage in a friendly chat, roleplay, grow your love & relationship skills.”

Now, if there’s one area where our instincts should kick in and alarm bells should start going off about AI, it’s sexual attraction. If there were one human activity that seems bound by necessity to happen ITRW (in the real world), it would be this one.

If we start to imagine what might happen when we turn to AI for love, we could ask filmmaker Spike Jonze. He already imagined it, 10 years ago when he wrote the screenplay for “her”, the movie with Joaquin Phoenix. Phoenix plays Theodore Twombly, a soon-to-be divorced man who upgrades his computer to a new OS, only to fall in love with the virtual assistant (voiced by Scarlett Johansson) that comes as part of the upgrade.

Predictably, complications ensue.

To get back to Anima, I’m always amused by the marketing language developers use to lull us into accepting things we should be panicking about. In this case, it was two lines: “No strings attached” and “grow your love and relationship skills.”

First, about that “no strings attached” thing – I have been married for 34 years now and I’m here to tell you that relationships are all about “strings.” Those “strings” can also be called by other names: empathy, consideration, respect, compassion and – yes – love. Is it easy to keep those strings attached – to stay connected with the person at the other end of those strings? Hell, no! It is a constant, daunting, challenging work in progress. But the alternative is cutting those strings and being alone. Really alone.

If we get the illusion of a real relationship through some flirty version of ChatGPT, will it be easier to cut the strings that keep us connected to other real people out there? Will we be fooled into thinking something is real when it’s just a seductive algorithm? In “her”, Jonze brings Twombly back to the real world, ending with a promise of a relationship with a real person as they both gaze at the sunset. But I worry that that’s just a Hollywood ending. I think many people – maybe most people – would rather stick with the “no strings attached” illusion. It’s just easier.

And will AI adultery really “grow your love and relationship skills”? No. No more than you will grow your ability to determine accurate and reliable information by scrolling through your Facebook feed. That’s just a qualifier the developer threw in so they didn’t feel crappy about leading their customers down the path to “AI-rmageddon”.

Even if we put all this other stuff aside for the moment, consider the vulnerable position we put ourselves in when we start mistaking robotic love for the real thing. All great cons rely on one of two things – either greed or love. When we think we’re in love, we drop our guard. We trust when we probably shouldn’t.

Take the Anima artificial girlfriend app, for example. We know nothing about the makers of this app. We don’t know where the data collected goes. We certainly have no idea what their intentions are. Is this really who you want to start sharing your most intimate chit-chat with? Even if their intentions are benign, this is an app built by a for-profit company, which means there needs to be a revenue model in it somewhere. I’m guessing that all your personal data will be sold to the highest bidder.

You may think all this talk of AI love is simply stupid. We humans are too smart to be sucked in by an algorithm. But study after study has shown we’re not. We’re ready to make friends with a robot at the drop of a hat. And once we hit friendship, can love be far behind?