Deconstructing a Predatory Marketplace

Last week, I talked about a predatory ad market that was found in — of all places — in-game ads. And the predators are — of all things — the marketers of Keto Gummies. This week, I’d like to look at why this market exists, and why someone should do something about it.

First of all, let’s understand what we mean by “predatory.” In biological terms, predation is a zero-sum game: for a predator to win, someone has to lose. Wikipedia phrases it a little differently: “Predatory marketing campaigns may [also] rely on false or misleading messaging to coerce individuals into asymmetrical transactions.”

“Asymmetrical” means the winner is the predator and the loser is the prey.

In the example of the gummy market, there are three winners — predators — and three losers, or prey. The winners are the marketers who sell the gummies, the publishers who receive the ad revenue, and the supply-side platform that mediates the marketplace and takes its cut.

The losers — in ascending order of loss — are the users of the games who must suffer through these crappy ads, the celebrities who have had their names and images illegally co-opted by the marketers, and the consumers who are duped into actually buying a bottle of these gummies.

You might argue the order of the last two, depending on what value you put on the brand of the celebrity. But in terms of sheer financial loss, consumer fraud is a significant issue, and one that gets worse every year. In February, the Federal Trade Commission reported that U.S. consumers lost $8.8 billion to scams last year, many of which occurred online. Losses were up 30% over 2021, and 70% higher than in 2020.

So it’s not hard to see why this market is predatory. But is it fraudulent? Let’s apply a legal litmus test. Fraud is generally defined as “any form of dishonest or deceptive behavior that is intended to result in financial or personal gain for the fraudster, and does harm to the victim.”

Based on this, fraud does seem to apply. So why doesn’t anyone do anything?

For one, we’re talking about a lot of potential money here. Statista pegs the in-game ad market at $32.5 billion worldwide in 2023, with a projected annual growth rate of 9.1%. That kind of money provides a powerful incentive for publishers and supply-side platforms (SSPs) to look the other way.
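To put that growth rate in perspective, here’s a quick back-of-the-envelope projection. The two inputs come from Statista; the compounding arithmetic is my own, offered purely as an illustration:

```python
# Back-of-the-envelope projection of the worldwide in-game ad market.
# Inputs: Statista's 2023 estimate ($32.5B) and projected growth (9.1%).
# The compounding itself is my own illustrative arithmetic.

market_2023 = 32.5  # billions of USD (Statista)
cagr = 0.091        # projected annual growth rate

for year in range(2023, 2028):
    size = market_2023 * (1 + cagr) ** (year - 2023)
    print(f"{year}: ${size:.1f}B")

# 2023: $32.5B ... 2027: $46.0B -- roughly $13.5B of new money in four years
```

At that pace, the market adds more than $13 billion in four years. That’s a lot of reasons not to look too closely at where the ads come from.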

I think it’s unreasonable to expect the marketers of the gummies to police themselves. They have gone to great pains to move themselves beyond the reach of litigation. These corporations are generally registered in jurisdictions like China or Cyprus, where enforcement of copyright or consumer protections is effectively nonexistent. If someone like Oprah Winfrey has been unable to legally shut down the fraudulent use of her image and brand for two years, you can bet the average consumer who has been ripped off has no recourse.

But perhaps one of the winners in this fraudulent ecosystem — the SSPs — should consider cracking down on this practice.

In nature, predators are kept in check by something called a predator-prey relationship. If predators become too successful, they eliminate their prey and seal their own doom. But this relationship only works if there are no new sources of prey. If we’re talking about an ecosystem that constantly introduces new prey, nothing keeps predators in check.

Let’s look at the incentive for the game publishers to police the predators. True, allowing fraudulent ads does no favours for the users of their games. A large-scale study by Gao, Zeng, Lu et al. found that bad ads lead to a bad user experience.

But do game publishers really care? There is no real user loyalty to games, so churn and burn seems to be the standard operating procedure. This creates an environment particularly conducive to predators.

So what about the SSPs?

GeoEdge, an ad security solution that guards against malvertising, among other things, has just released its Q1 Ad Quality Report. In an interview, Yuval Shiboli, the company’s director of product marketing, said that while malicious ads are common across all channels, in-game advertising is particularly bad because of a lack of active policing: “The fraudsters are very selective in who they show their malicious ads, looking for users who are scam-worthy, meaning there is no security detection software in the environment.”

Quality of advertising is usually directly correlated with the pricing of the ad inventory. The cheaper the ad, the poorer the quality. In-game ads are relatively cheap, giving fraudulent predators an easy environment to thrive in. And this entire environment is created by the SSPs.

According to Shiboli, it’s a little surprising to learn who the biggest culprits are on the SSP side: “Everybody on both the sell side and buy side works with Google, and everyone assumes that its platforms are clean and safe. We’ve found the opposite is true, and that of all the SSP providers, Google is the least motivated to block bad ads.”

By allowing — even encouraging — a predatory marketplace to exist, Google and other SSPs are doing nothing less than aiding and abetting criminals. In the short term, this may add incrementally to their profits, but at what long-term price?

It Took a Decade, but Google Glass is Finally Broken

Did you hear that Google finally pulled the plug on Google Glass?

Probably not. The announcement definitely flew under the radar, coming with far less fanfare than the original rollout in 2013. The technology, which had been quietly on life support as an enterprise tool aimed at select industries, finally met its end with this simple statement on its support page:

Thank you for over a decade of innovation and partnership. As of March 15, 2023, we will no longer sell Glass Enterprise Edition. We will continue supporting Glass Enterprise Edition until September 15, 2023.

Talk about your ignoble demises. Google is offering a mere six months of support for those stubbornly hanging on to their Glass. Glass has been thrown into the ever-growing Google Graveyard, along with Google Health, Google+, Google Buzz, Google Wave, Knol — well, you get the idea.

It’s been 10 years, almost to the day, since Google invited 8,000 people to become “Glass Explorers” (others had a different name: “Glassholes”) and plunge into the world of augmented reality.

I was not a believer — for a few reasons I talked about way back then. That led me to say, “Google Glass isn’t an adoptable product as it sits.” It took 10 years, but I can finally say, “I told you so.”

I did say that wearable technology, in other forms, would be a game changer. I just didn’t think Google Glass was the candidate to do it. To be honest, I hadn’t really thought much more about it until I saw the muted news that this particular Glass was a lot more than half empty. I think there are some takeaways about the fading dividing line between technology and humans that we should keep in mind.

First of all, I think we’ve learned a little more about how our brains work with “always on” technologies like Google Glass. The short answer is, they don’t — at least not very well. And this is doubly ironic because, according to an interview with Google Glass product director Steve Lee on The Verge back in 2013, that was the whole point:

“We all know that people love to be connected. Families message each other all the time, sports fanatics are checking live scores for their favorite teams. If you’re a frequent traveler you have to stay up to date on flight status or if your gate changes. Technology allows us to connect in that way. A big problem right now are the distractions that technology causes.”

The theory was that it was much less distracting to have information right in the line of sight, rather than having to go to a connected screen that might be in your pocket.

Lee went on: “We wondered, what if we brought technology closer to your senses? Would that allow you to more quickly get information and connect with other people but do so in a way — with a design — that gets out of your way when you’re not interacting with technology? That’s sort of what led us to Glass.”

The problem here was one of incompatible operating systems — the one that drove Google Glass and the one baked into our brains. It turned out that maybe the technology was a little too close to our senses. A 2016 study (Lewis and Neider) found that trying to split attention between two different types of tasks — one scanning information on a heads-up display (HUD) and one trying to focus on the task at hand — left the brain unable to focus effectively on either. The researchers ended with this cautionary conclusion: “Our data strongly suggest that caution should be exercised when deploying HUD-based informational displays in circumstances where the primary user task is visual in nature. Just because we can, does not mean we should.”

For anyone who spends even a little time wondering how the brain works, this should not come as a surprise. There is an exhaustive body of research showing that the brain is not that great at multitasking. Putting a second cognitive task in our line of sight simply means the distraction is that much harder to ignore.

Maybe there’s a lesson here for Google. I think the company sometimes gets a little starry-eyed about its own technological capabilities and forgets to factor in the human element. I remember talking to a roomful of Google engineers more than a decade ago about search behaviors. I asked them if any of them had heard of Pirolli and Card’s pioneering work on information foraging theory. Not one hand went up. I was gobsmacked. That should be essential reading for anyone working on a search interface. Yet, on that day, the crickets were chirping loudly in Mountain View.

If the Glass team had done their human homework, they would have found that the brain needs to focus on one task at a time. If you’re looking to augment reality with additional information, that information has to be synthesized into a single cohesive task for the brain. This means that for augmented reality to be successful, the use case has to be carefully studied to make sure the brain isn’t overloaded.

But I suspect there was another sticking point that prevented Google Glass from being widely adopted. It challenged the very nature of our relationship with technology. We like to believe we control technology, rather than the other way around. We have defined the online world as somewhere we “go” to through our connected devices. We are in control of when and where we do this. Pulling a device out and initiating an action keeps this metaphorical divide in place.

But Google Glass blurred this line in a way that made us uncomfortable. Again, a decade ago, I talked about the inevitable tipping point that would come with the merging of our physical and virtual worlds. Back then, I said, “as our technology becomes more intimate, whether it’s Google Glass, wearable devices or implanted chips, being ‘online’ will cease to be about ‘going’ and will become more about ‘being.’ As our interface with the virtual world becomes less deliberate, the paradigm becomes less about navigating a space that’s under our control and more about being an activated node in a vast network.”

I’m just speculating, but maybe Google Glass was just a step too far in this direction — for now, anyway.

(Feature image: Tim.Reckmann, CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0, via Wikimedia Commons)

It Should Be No Surprise that Musk is Messing Up Twitter

I have to admit: I’m somewhat bemused by all the news rolling out of Elon Musk’s V2.0 edition of Twitter. Here is just a quick roundup of headlines grabbed from a Google News search last week:

Elon Musk took over a struggling business with Twitter and has quickly made it worse – CNBC

Elon Musk is Bad at This – The Atlantic

The Elon Musk (Twitter) Era Has Been a Complete Mess – Vanity Fair

Elon Musk “Straight-up Alone,” “Winging” Twitter Changes – Business Insider

To all these, I have to say, “What the Hell did you expect?”

Look, I get that Musk is on a different plane of smart from most of us. No argument there.

The same is true, I suspect, for most tech CEOs who are the original founders of their companies. The issue is that the kind of smart they are is not necessarily the kind of smart you need to run a big, complex corporation. If you look at the various types of intelligence, they would excel at logical-mathematical intelligence — or what I would call “geek-smart.” But this intelligence can often come at the expense of other kinds of intelligence that would be a better fit in the CEO’s role. Both interpersonal and intrapersonal intelligence immediately come to mind.

Musk is not alone. There is a bushel load of tech CEOs who have pulled off a number of WTF moves. In his Atlantic article “Silicon Valley’s Horrible Bosses,” Charlie Warzel gives us a few examples ripped straight from the handbook of the “Elon Musk School of Management.” Most of them involve making hugely impactful HR decisions with little concern for the emotional impact on employees, and then doubling down on the mistake by choosing to communicate through Twitter.

For most of us with even a modicum of emotional intelligence, this is unimaginable. But if you’re geek-smart, it probably seems logical. Twitter is a perfect communication medium for geek-smart people: it’s one-sided, as black and white as you can get, and conveniently limited to 280 characters. There is no room for emotional nuance or context on Twitter.

The disconnect in intelligence types comes in looking at the type of problems a CEO faces. I was CEO of a very small company, and even at that scale, with a couple dozen employees, I spent the majority of my time dealing with HR issues. I was constantly trying to navigate my way through these thorny and perplexing issues. I did learn one thing: issues that involve people, whether they be employees or customers, generally fall into the category of what is called a “complex problem.”

In 1999, an IBM manager named Dave Snowden realized that not every problem you run into when managing a corporation requires the same approach. He put together a decision-making model to help managers identify the best decision strategy for the issue they’re dealing with. He called the model Cynefin, which is the Welsh word for habitat. In the model, there are five decision domains: Clear, Complicated, Complex, Chaotic and Confusion. Cynefin is really a sense-making tool to help guide managers through problems that are complicated or complex in the hope that chaos can be avoided.
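For readers who like their frameworks concrete, here’s a minimal sketch of Cynefin as a lookup table. The domain names and response patterns come from Snowden’s model; the code structure itself is just my illustration:

```python
# A minimal sketch of the Cynefin domains as a decision lookup.
# Domain names and response patterns follow Snowden's model;
# the code structure is illustrative, not canonical.

CYNEFIN = {
    "clear":       "sense -> categorize -> respond (apply best practice)",
    "complicated": "sense -> analyze -> respond (bring in the experts)",
    "complex":     "probe -> sense -> respond (run safe-to-fail experiments)",
    "chaotic":     "act -> sense -> respond (stabilize first, analyze later)",
    "confusion":   "break the situation apart until the pieces fit a domain",
}

def recommend(domain: str) -> str:
    """Return the suggested decision strategy for a Cynefin domain."""
    return CYNEFIN[domain.lower()]

print(recommend("complicated"))  # the expert's home turf
print(recommend("complex"))      # where emergent answers live
```

Snowden’s point is that each domain calls for a different strategy, and misclassifying the domain is where managers get into trouble.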

Geek-smart people are very good at complicated problems. This is the domain of the “expert,” who can rapidly sift through the “known unknowns.”

Give an expert a complicated problem and they’re the perfect fit for the job. They have the ability to home in on the relevant details and parse out the things that would distract the rest of us. Cryptography is an example of a complicated problem. So is most coding. This is the natural habitat of the tech engineer.

Tech founders initially become successful because they are very good at solving complicated problems. In fact, in our culture, they are treated like rock stars, celebrated for their “expertise.” Typically, this comes with a “smartest person in the room” level of smugness. They have no time for those who don’t see through the complications of the world the same way they do.

Here we run into a cognitive obstacle uncovered by political science writer Philip E. Tetlock in his 2005 book, “Expert Political Judgment: How Good Is It? How Can We Know?”

As Tetlock discovered, expertise in one domain doesn’t always mean success in another, especially if one domain has complicated problems and the other has complex problems.

Complex problems, like predicting the future or managing people in a massive organization, lie in the realm of “unknown unknowns.” Here, the answer is emergent. These problems are, by their very nature, unpredictable. The very toughest complex problems fall into a category I’ve talked about before: Wicked Problems. And, as Tetlock discovered, experts are no better at dealing with complexity than the rest of us. In fact, in a complex scenario like predicting the future, you’d probably have just as much success with a dart-throwing chimpanzee.

But it gets worse. There’s no shame in not being good at complex problems. None of us are. The problem with expertise lies not in a lack of knowledge, but in experts sticking to a cognitive style ill-suited to the task at hand: trying to apply complicated brilliance to complex situations. I call this the “everything is a nail” syndrome. When all you have is a hammer, everything looks like a nail.

Tetlock explains: “They [experts] are just human in the end. They are dazzled by their own brilliance and hate to be wrong. Experts are led astray not by what they believe, but by how they think.”

A geek-smart person believes they know the answer better than anyone else because they see the world differently. They are not open to outside input. Yet it is exactly that kind of open-minded thinking that wrestling with complex problems requires.

When you consider all that, is it any wonder that Musk is blowing up Twitter — and not in a good way?

The Joe Rogan Experiment in Ethical Consumerism

We are watching an experiment in ethical consumerism take place in real time. I’m speaking of the Joe Rogan/Neil Young controversy that’s happening on Spotify. I’m sure you’ve heard of it, but if not, Canadian musical legend Neil Young had finally had enough of Joe Rogan’s spreading of COVID misinformation on his podcast, “The Joe Rogan Experience.” He gave Spotify an ultimatum: “You can have Rogan or Young. Not both.”

Spotify chose Rogan. Young pulled his library. Since then, a handful of other artists have followed Young, including former bandmates David Crosby, Stephen Stills and Graham Nash, along with fellow Canuck Hall of Famer Joni Mitchell.

But it has hardly been a stampede. One of the reasons is that — if you’re an artist — leaving Spotify is easier said than done. In an interview with Rolling Stone, Rosanne Cash said most artists don’t have the luxury of jilting Spotify: 

“It’s not viable for most artists. The public doesn’t understand the complexities. I’m not the sole rights holder to my work… It’s not only that a lot of people who aren’t rights holders can’t remove their work. A lot of people don’t want to. These are the digital platforms where they make a living, as paltry as it is. That’s the game. These platforms own, what, 40 percent of the market share?”

Cash also brings up a fundamental issue with capitalism: it follows profit, and it’s consumers who determine what’s profitable. Consumers make decisions based on self-interest: what’s in it for them. Corporations use that behavior to make the biggest profit possible, and it has been perfectly predictable for hundreds of years. It’s the driving force behind Adam Smith’s Invisible Hand. It was also succinctly laid out by economist Milton Friedman in 1970:

“There is one and only one social responsibility of business — to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game, which is to say, engages in open and free competition without deception or fraud.”

We all want corporations to be warm and fuzzy — but it’s like wishing a shark were a teddy bear. It just ain’t gonna happen.

One artist who indulged in this wishful thinking was a somewhat less well-known Canadian who also pulled his music from Spotify: Ontario singer/songwriter Danny Michel. He told the CBC:

“But for me, what it was was seeing how Spotify chose to react to Neil Young’s request, which was, you know: You can have my music or Joe. And it seems like they just, you know, got out a calculator, did some math, and chose to let Neil Young go. And they said, clear and loud: We don’t need you. We don’t need your music.”

Well, yes, Danny, I’m pretty sure that’s exactly what Spotify did. It made a decision based on profit. For one thing, Joe Rogan is exclusive to Spotify. Neil Young isn’t. And Rogan produces a podcast, which can have sponsors. Neil Young’s catalog of songs can’t be brought to you by anyone.

That makes Rogan a much better bet for revenue generation. That’s why Spotify paid Rogan $100 million. Music journalist Ted Gioia made the business case for the Rogan deal pretty clear in a tweet:

“A musician would need to generate 23 billion streams on Spotify to earn what they’re paying Joe Rogan for his podcast rights (assuming a typical $.00437 payout per stream). In other words, Spotify values Rogan more than any musician in the history of the world.”
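Gioia’s arithmetic is easy to check. Note that the per-stream payout is his assumption; Spotify doesn’t publish an official rate:

```python
# Checking Ted Gioia's math on the Rogan deal.
# The per-stream payout is Gioia's assumption, not an official Spotify figure.

rogan_deal = 100_000_000     # reported value of the Rogan deal, USD
payout_per_stream = 0.00437  # assumed payout per stream, USD

streams_needed = rogan_deal / payout_per_stream
print(f"{streams_needed / 1e9:.1f} billion streams")  # ~22.9 billion
```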

I hate to admit that Milton Friedman is right, but he is. I’ve said it time and time again: to expect corporations to put ethics ahead of profits is to ignore the DNA of a corporation. Spotify is doing what corporations will always do: strive to be profitable. The decision between Rogan and Young was made with a calculator. And for Danny Michel to expect anything else from Spotify is simply naïve. If we’re going to play this ethical capitalism game, we must realize what the rules of engagement are.

But what about us? Are we any better than the corporations we keep putting our faith in?

We have talked about how we consumers want to trust the brands we deal with, but when a corporation drops the ethics ball, do we really care? We have been gnashing our teeth about Facebook’s many, many indiscretions for years now, but how many of us have quit Facebook? I know I haven’t.

I’ve seen some social media buzz about migrating from Spotify to another service. I personally have started down this road. Part of it is because I agree with Young’s stand. But I’ll be brutally honest here. The bigger reason is that I’m old and I want to be able to continue to listen to the Young, Mitchell and CSNY catalogs. As one of my contemporaries said in a recent post, “Neil Young and Joni Mitchell? Wish it were artists who are _younger_ than me.”

A lot of pressure is put on companies to be ethical, with no real monetary reasons why they should be. If we want ethics from our corporations, we have to make it important enough to us to impact our own buying decisions. And we aren’t doing that — not in any meaningful way.

I’ve used this example before, but it bears repeating. We all know how truly awful and unethical caged egg production is. The birds are kept in what is known as a battery cage holding 5 to 10 birds, and each is confined to a space of about 67 square inches. To help you visualize that, it’s roughly three-quarters of a standard sheet of paper. This is the hell we inflict on other animals solely for our own gain. No one can be for this. Yet 97% of us buy these eggs, just because they’re cheaper.

If we’re looking for ethics, we have to look in other places than brands. And — much as I wish it were different — we have to look beyond consumers as well. We have proven time and again that our convenience and our own self-interest will always come ahead of ethics. We might wish that were different, but our spending patterns say otherwise.

What Media Insiders Were Thinking (And Writing) In 2021

Note: This is a look back at the past year of posts in the Media Insider column on MediaPost, for which I write every Tuesday. All the writers for the column have been part of the marketing and media business for decades, so there’s a lot of wisdom there to draw on. This is the second time I’ve done this look back at what we’ve written about in the previous year.

As part of the group of Media Insiders, I’ve always considered myself in sterling company. I suspect if you added up all the years of experience in this stable of industry experts, we’d be well into the triple digits. Most of the Insiders are still active in the world of marketing. For myself, although I’m no longer active in the business, I’m still fascinated by how it impacts our lives and our culture.

For all those reasons, I think the opinions of this group are worth listening to — and, thankfully, MediaPost gives you those opinions every day.

Three years ago, I thought it would be interesting to do a “meta-analysis” of those opinions over the span of the year, to see what has collectively been on the minds of the Media Insiders. I meant to do it again last year, but just never got around to it — as you know, global pandemics and uprisings against democracy were a bit of a distraction.

This year, I decided to give it another shot. And it was illuminating. Here’s a summary of what has been on our collective minds:

I don’t think it’s stretching things to say that your Insiders have been unusually existential in their thoughts over the past 12 months. Now, granted, this is one column on MediaPost that lends itself to existential musings. That’s why I ended up here. I love the fact that I can write about pretty much anything and it generally fits under the “Media Insider” masthead. I suspect the same is true for the other Insiders.

But even with that in mind, this year was different. I think we’ve all spent a lot of the last year thinking about what the moral and ethical boundaries for marketers are — for everyone, really — in the world of 2021. Those ponderings broke down into a few recurring themes.

Trying to Navigate a Substantially Different World

Most of this was naturally tied to the ongoing COVID pandemic.  

Surprisingly, given that three years ago it was one of the most popular topics, Insiders said little about politics. Of course, we were then squarely in the middle of “Trump time.” There were definitely a few posts after the Jan. 6 insurrection, but most of it was just trying to figure out how the world might permanently change after 2021. Almost 20% of our columns touched on this topic.

A notable subset of this was how our workplaces might change. With many of us being forced to work from home, 4% of the year’s posts talked about how “going to work” may never look the same again.

Ad-Tech Advice

The next most popular topic from Insiders (especially those still in the biz, like Corey, Dave, Ted and Maarten) was ongoing insight on how to manage the nuts and bolts of your marketing. A lot of this focused on using ad tech effectively. That made up 15% of last year’s posts.

And Now, The Bad News

I will say your Media Insiders (myself included) are a somewhat pessimistic bunch. Even when we weren’t talking about wrenching change brought about by a global pandemic, we were worrying about the tech world going to hell in a handbasket. About 13.5% of our posts talked about social media, and it was almost all negative, with most of it aimed squarely at Facebook — sorry, Meta.

Another 12% of our posts talked about other troubling aspects of technology. Privacy concerns over data usage and targeting took the lead here. But we were also worried about other issues, like the breakdown of person-to-person relationships, disappearing attention spans, and tears in our social fabric. When we talked about the future of tech, we tended to do it through a dystopian lens.

Added to this was a sincere concern about the future of journalism, which accounted for another 5% of all our posts. That brings us to almost a full third of all posts with a decidedly gloomy outlook on tech and digital media’s impact on society.

The Runners-Up

If there was one branch of media that seemed the most popular among the Insiders (especially Dave Morgan), it was TV and streaming video. I also squeezed a few posts about online gaming into this category. Together, this topic made up 10.5% of all posts.

Next in line: social marketing and ethical branding. We all took our own spins on this, and together we devoted almost 9.5% of all posts in 2021 to it. I’ve talked before about the irony of a world that has little trust in advertising but growing trust in brands. Your Insiders have tried to thread the needle between the two sides of this seeming paradox.

Finally, we did cover a smattering of other topics, but one in particular rose above the others as something increasingly on our radar. We touched on the Metaverse and its implications in almost 3% of our posts.

Summing Up

To try to wrap up 2021 in one post is difficult, but if there was a single takeaway, I think it’s that both marketing and media are faced with some very existential questions. Ad-supported revenue models have now been pushed to the point where we must ask what the longer-term ethical implications might be.

If anything, I would say the past year has marked the beginning of our industry realizing that a lot of unintended consequences have now come home to roost.

It’s the Buzz That Will Kill You

If you choose to play in the social arena, you have to accept that the typical peaks and valleys of business success can suddenly become impossibly steep.

In social media networks, your brand message is whatever meme happens to emerge from the collective activity of this connected market. Marketers have little control — and sometimes, they have no control. At best, all they can do is react by throwing another carefully crafted meme into the social-sphere and hope it picks up some juice and is amplified through the network.

That’s exactly what happened to Peloton in the past week and a half.

On Dec. 9, the HBO Max sequel to “Sex and the City” killed off a major character — Chris Noth’s Mr. Big — by giving him a heart attack after his one thousandth Peloton ride. Apparently, HBO Max gave Peloton no advance warning of this branding backhand.

On Dec. 10, according to Axios, there was a dramatic spike in social interactions talking about Mr. Big’s last ride, peaking near 80,000. As you can imagine, the buzz was not good for Peloton’s business.

On Dec. 12, Peloton struck back with its own ad, apparently produced in just 24 hours by Ryan Reynolds’ Maximum Effort agency. This turned the tide of the social buzz. Again, according to data from NewsWhip and Axios, social media mentions spiked. This time, they were much more positive toward the Peloton brand.

It should be all good — right? Not so fast. On Dec. 16, two sexual assault allegations were made against Chris Noth, chronicled in The Hollywood Reporter. Peloton rapidly scrubbed its ad campaign. Again, the social sphere lit up, and Peloton was forced back into defensive mode.

Now, you might call all this marketing froth, but that’s the way it is in our hyper-connected world. You just have to dance the dance — be nimble and respond.

But my point is not about the marketing side of this brouhaha, which has been covered to death, at least at MediaPost (sorry, pardon the pun). I’m more interested in what happens to the people who have some real skin in this particular game, whose lives depend on the fortunes of the Peloton brand. Because all this froth does have some very IRL consequences.

Take Peloton’s share price, for one.

The day before the HBO show aired, Peloton’s shares were trading at $45.91. The next day, they tumbled 16%, to $38.51.
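For the record, here’s the math on that drop (the prices come from the paragraph above; the percentage calculation is mine):

```python
# Percent change in Peloton's share price around the episode's airing.
before, after = 45.91, 38.51  # closing prices cited above, USD

drop = (after - before) / before
print(f"{drop:.1%}")  # -16.1%
```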

And that’s just one chapter in the ongoing story of Peloton’s stock performance, which has been a hyper-compressed roller coaster ride: the pandemic and a huge amount of social media buzz kept the foot firmly on the accelerator through 2020, and then the stock dropped like a rock for most of 2021. After peaking as high as $162 a share exactly a year ago, the share price is back within spitting distance of its pre-pandemic levels.

Obviously, Peloton’s share price is not just dependent on the latest social media meme. There are business fundamentals to consider as well.

Still, you have to accept that a more connected meme-market is going to naturally accelerate the speed of business upticks and declines. Peloton signed up for this dance — and when you do that, you have to accept all that comes with it.

In terms of the real-world consequences of betting on the buzz, there are three “insider” groups (not including customers) that will be affected: the management, the shareholders and the employees. The first of these supposedly went into this with their eyes open. The second of these also made a choice. If they did their due diligence before buying the stock, they should have known what to expect. But it’s the last of these — the employees — that I really feel for.

With ultra-compressed business cycles like Peloton has experienced, it’s tough for employees to keep up. On the way up the peak, the company is running ragged trying to scale for hyper-growth. If you check employee review sites like Glassdoor.com, there are tales of creaky recruitment processes not being able to keep up. But at least the ride up is exciting. The ride down is something quite different.

In psychological terms, there is something called the locus of control: the set of things you feel you have at least some degree of control over. And there is an ever-increasing body of evidence showing that locus of control and employee job satisfaction are strongly correlated. No one likes to be constantly waiting for the other shoe to drop. It just ramps up your job stress. Granted, job stress that comes with big promotions and generous options on a rocket-ship stock can perhaps be justified. But stress that’s packaged with panicked downsizing and imminent layoffs is not a fun employment package for anyone.

That’s the current case at Peloton. On Nov. 5, it announced an immediate hiring freeze. And while there’s been no official announcement of layoffs that I could find, there have been rumors of such posted to the site thelayoff.com. This is not a fun environment for anyone to function in. Here’s what one post said: “I left Peloton a year ago when I realized it was morphing into the type of company I had no intention of working for.”

We have built a business environment that is highly vulnerable to buzz. And as Peloton has learned, what the buzz giveth, the buzz can also taketh away.

The Tech Giant Trust Exercise

If we look at those that rule in the Valley of Silicon — the companies that determine our technological future — it seems, as I previously wrote, that Apple alone is serious about protecting our privacy.

MediaPost editor in chief Joe Mandese shared a post late last month about how Apple’s new privacy features are increasingly taking aim at the various ways in which advertising can be targeted to specific consumers. The latest victim in those sights is geotargeting.

Then Steve Rosenbaum mentioned last week that as Apple and Facebook gird their loins and prepare to do battle over the next virtual dominion — the metaverse — they are taking two very different approaches. Facebook sees this next dimension as an extension of its hacker mentality, a “raw, nasty network of spammers.” Apple is, as always, determined to exert top-down control over who plays in its sandbox, only welcoming those who are willing to play by its rules. In that approach, the company is also signaling that it will take privacy in the metaverse seriously. Apple CEO Tim Cook said he believes “users should have the choice over the data that is being collected about them and how it’s used.”

Apple can take this stand because its revenue model doesn’t depend on advertising. To find a corporation’s moral fiber, you always, always, always have to follow the money. Facebook depends on advertising for revenue. And it has repeatedly shown it doesn’t really give a damn about protecting the privacy of users. Apple, on the other hand, takes every opportunity to unfurl the privacy banner as its battle standard because its revenue stream isn’t really impacted by privacy.

If you’re looking for the rot at the roots of technology, a good place to start is anything that relies on advertising. In my 40 years in marketing, I have come to the inescapable conclusion that it is impossible for business models that rely on advertising as their primary source of revenue to stay on the right side of privacy concerns. There is an inherent conflict that cannot be resolved. In a recent earnings call, Facebook CEO Mark Zuckerberg said it about as clearly as it could be said: “As expected, we did experience revenue headwinds this quarter, including from Apple’s [privacy rule] changes that are not only negatively affecting our business, but millions of small businesses in what is already a difficult time for them in the economy.”

Facebook has proven time and time again that when the need for advertising revenue runs up against a question of ethical treatment of users, it will always be the ethics that give way.

It’s also interesting that Europe is light-years ahead of North America in introducing legislation that protects privacy. According to one Internet Privacy Ranking study, four of the five top countries for protecting privacy are in Northern Europe. Australia is the fifth. My country, Canada, shares many of their characteristics; we rank seventh. The U.S. ranks 18th.

There is an interesting corollary here I’ve touched on before. All these top-ranked countries are social democracies. All have strong public broadcasting systems. All have a very different relationship with advertising than the U.S. Those of us who live in these countries are not immune to the dangers of advertising (this is certainly true for Canada), but our media structure is not wholly dependent on it. The U.S., right from the earliest days of electronic media, took a different path — one that relied almost exclusively on advertising to pay the bills.

As we start thinking about things like the metaverse or other forms of reality that are increasingly intertwined with technology, this reliance on advertising-funded platforms is something we must consider long and hard. It won’t be the companies that initiate the change. An advertising-based business model follows the path of least resistance, making it the shortest route to that mythical unicorn success story. The only way this will change will be if we — as users — demand that it changes.

And we should — we must — demand it. Ad-based tech giants that have no regard for our personal privacy are one of the greatest threats we face. The more we rely on them, the more they will ask of us.

Whatever Happened to the Google of 2001?

Having lived through it, I can say that the decade from 2000 to 2010 was an exceptional time in corporate history. I was reminded of this as I was reading media critic and journalist Ken Auletta’s book, “Googled: The End of the World as We Know It.” Auletta, along with many others, sensed a seismic disruption in the way media worked. A ton of books came out on this topic in the same time frame, and Google was the company most often singled out as the cause of the disruption.

Auletta’s book was published in 2009, near the end of this decade, and it’s interesting reading it in light of the decade plus that has passed since. There was a sort of breathless urgency in the telling of the story, a sense that this was ground zero of a shift that would be historic in scope. The very choice of Auletta’s title reinforces this: “The End of the World as We Know It.”

So, with 10 years plus of hindsight, was he right? Did the world we knew end?

Well, yes. And Google certainly contributed to this. But it probably didn’t change in quite the way Auletta hinted at. If anything, Facebook ended up having a more dramatic impact on how we think of media, but not in a good way.

At the time, we all watched Google take its first steps as a corporation with a mixture of incredulous awe and not a small amount of schadenfreude. Larry Page and Sergey Brin were determined to do it their own way.

We in the search marketing industry had front row seats to this. We attended social mixers on the Google campus. We rubbed elbows at industry events with Page, Brin, Eric Schmidt, Marissa Mayer, Matt Cutts, Tim Armstrong, Craig Silverstein, Sheryl Sandberg and many others profiled in the book. What they were trying to do seemed a little insane, but we all hoped it would work out.

We wanted a disruptive and successful company to not be evil. We welcomed its determination — even if it seemed naïve — to completely upend the worlds of media and advertising. We even admired Google’s total disregard for marketing as a corporate priority.

But there was no small amount of hubris at the Googleplex — and for this reason, we also hedged our hopeful bets with just enough cynicism to be able to say “we told you so” if it all came crashing down.

In that decade, everything seemed so audacious and brashly hopeful. It seemed like ideological optimism might — just might — rewrite the corporate rulebook. If a revolution did take place, we wanted to be close enough to golf clap the revolutionaries onward without getting directly in the line of fire ourselves.

Of course, we know now that what took place wasn’t nearly that dramatic. Google became a business: a very successful business with shareholders, a grown-up CEO and a board of directors, but still a business not all that dissimilar to other Fortune 100 examples. Yes, Google did change the world, but the world also changed Google. What we got was more evolution than revolution.

The optimism of 2000 to 2010 would be ground down in the next 10 years by the same forces that have been driving corporate America for the past 200 years: the need to expand markets, maximize profits and keep shareholders happy. The brash ideologies of founders would eventually morph to accommodate ad-supported revenue models.

As we now know, the world was changed by the introduction of ways to make advertising even more pervasively influential and potentially harmful. The technological promise of 20 years ago has been subverted to screw with the very fabric of our culture.

I didn’t see that coming back in 2001. I probably should have known better.

Getting Bitch-Slapped by the Invisible Hand

Adam Smith first talked about the invisible hand in 1759. He was looking at the divide between the rich and the poor and said, in essence, that “greed is good.”

Here is the exact wording:

“They (the rich) are led by an invisible hand to make nearly the same distribution of the necessaries of life, which would have been made, had the earth been divided into equal portions among all its inhabitants, and thus without intending it, without knowing it, advance the interest of the society.”

The effect of “the hand” is most clearly seen in the wide-open market that emerges after established players collapse and make way for new competitors riding a wave of technical breakthroughs. Essentially, it is a cycle.

But something is happening that may never have happened before. For the past 300 years of our history, the one constant has been the trend of consumerism. Economic cycles have rolled through, but all have been in the service of us having more things to buy.

Indeed, Adam Smith’s entire theory depends on greed: 

“The rich … consume little more than the poor, and in spite of their natural selfishness and rapacity, though they mean only their own conveniency, though the sole end which they propose from the labours of all the thousands whom they employ, be the gratification of their own vain and insatiable desires, they divide with the poor the produce of all their improvements.”

It’s the trickle-down theory of gluttony: Greed is a tide that raises all boats.

The theory of the Invisible Hand assumes there are infinite resources available. Waste is necessarily built into the equation. But we have now gotten to the point where consumerism has been driven past the planet’s ability to sustain our greedy grasping for more.

Nobel Prize-winning economist Joseph Stiglitz, for one, recognized that environmental impact is not accounted for in this theory. Also, if the market alone drives things like research, it will inevitably become biased toward benefits for the individual rather than the common good.

There needs to be a more communal counterweight to balance the effects of individual greed. Given this, the new age of consumerism might look significantly different.

There is one outcome of market-driven economics that is undeniable: all the power lies in the connection between producers and consumers. Because the world has been built on the predictable truth of our always wanting more, we have the ability to disrupt that foundation simply by changing our value equation: buying for the greater good rather than our own self-interest.

I’m skeptical that this is even possible.

It’s a little daunting to think that our future survival relies on our choices as consumers. But this is the world we have made. Consumption is the single greatest driver of our society. Everything else is subservient to it.

Government, science, education, healthcare, media, environmentalism: All the various planks of our societal platform rest on the cross-braces of consumerism. It is the one behavior that rules all the others. 

This becomes important to think about because this shit is getting real — so much faster than we thought possible.

I write this from my home, which is about 100 miles from the village of Lytton, British Columbia. You might have heard it mentioned recently. On June 29, Lytton reported the highest temperature ever recorded in Canada: a scorching 121.3 degrees Fahrenheit (49.6 degrees C for my Canadian readers). That’s higher than the hottest temperature ever recorded in Las Vegas. Lytton is 1,000 miles north of Las Vegas.

As I said, that was how Lytton made the news on June 29. But it also made the news again on June 30. That was when a wildfire burned almost the entire town to the ground.

In one week of an unprecedented heat wave, hundreds of sudden deaths occurred in my province. It’s believed the majority of them were caused by the heat.

We are now at the point where we have to shift the mental algorithms we use when we buy stuff. Our consumer value equation has always been self-centered, based on the calculus of “what’s in it for me?” It was this calculation that made Smith’s Invisible Hand possible.

But we now have to change that behavior and make choices that embrace individual sacrifice. We have to start buying based on “What’s best for us?”

In a recent interview, a climate-change expert said he hoped we would soon see carbon-footprint stickers on consumer products. Given a choice between two pairs of shoes, one that was made with zero environmental impact and one that was made with a total disregard for the planet, he hoped we would choose the former, even if it was more expensive.

I’d like to think that’s true. But I have my doubts. Ethical marketing has been around for some time now, and at best it’s a niche play. According to the Canadian Coalition for Farm Animals, the vast majority of egg buyers in Canada — 98% — buy caged eggs even though we’re aware that the practice is hideously cruel. We do this because those eggs are cheaper.

The sad fact is that consumers really don’t seem to care about anything other than their own self-interest. We don’t make ethical choices unless we’re forced to by government legislation. And then we bitch like hell about our rights as consumers. “We should be given the choice,” we chant. “We should have the freedom to decide for ourselves.”

Maybe I’m wrong. I sure hope so. I would like to think — despite recent examples to the contrary, like people refusing to wear face masks or get vaccinated during a global pandemic that took millions of lives — that we can listen to the better angels of our nature and make choices that extend our ability to care beyond our circle of one.

But let’s look at our track record on this. From where I’m sitting, 300 years of continually making bad choices have now brought us to the place where we no longer have the right to make those choices. This is what The Invisible Hand has wrought. We can bitch all we want, but that won’t stop more towns like Lytton B.C. from burning to the ground.

Why Our Brains Struggle With The Threat Of Data Privacy

It seems contradictory. We don’t want to share our personal data but, according to a recent study reported on by MediaPost’s Laurie Sullivan, we want the brands we trust to know us when we come shopping.

But it’s not — really. It ties in with the way we’ve always thought.

Again, we just have to understand that we really don’t understand how the data ecosystem works — at least, not on an instant and intuitive level. Our brains have no evolved mechanisms for dealing with new concepts like data privacy, so we have borrowed other parts of the brain that do exist. Evolutionary biologists call this “exaptation.”

For example, the way we deal with brands seems to be the same way we deal with people — and we have tons of experience doing that. Some people we trust. Most people we don’t. With the people we trust, we have no problem sharing something of ourselves. In fact, it’s exactly that sharing that nurtures relationships and helps them grow.

It’s different with people we don’t trust. Not only do we not share with them, we work to avoid them, putting physical distance between us and them. We’d cross to the other side of the street to avoid bumping into them.

In a world that was ordered and regulated by proximity, this worked remarkably well. Keeping our enemies at arm’s length generally kept us safe from harm.

Now, of course, distance doesn’t mean the same thing it used to. We now maneuver in a world of data, where proximity and distance have little impact. But our brains don’t know that.

As I said, the brain doesn’t really know how digital data ecosystems work, so it does its best to substitute concepts it has evolved to handle those it doesn’t understand at an intuitive level.

The proxy for distance the brain seems to use is task focus. If we’re trying to do something, everything related to that thing is “near” and everything not relevant to it is “far.” But this is an imperfect proxy at best and an outright misleading one at worst.

For example, we will allow our data to be collected in order to complete the task. The task is “near.” In most cases, the data we share has little to do with the task we’re trying to accomplish. It is labelled by the brain as “far” and therefore poses no immediate threat.

It’s a bait-and-switch tactic that data harvesters have perfected. Our trust-warning systems are not engaged because there are no proximate signs to trigger them. Any potential breaches of trust happen well after the fact — if they happen at all. Most times, we’re simply not aware of where our data goes or what happens to it. All we know is that allowing that data to be collected takes us one step closer to accomplishing our task.

That’s what sometimes happens when we borrow one evolved trait to deal with a new situation: the fit is not always perfect. Some aspects work, others don’t.

And that is exactly what is happening when we try to deal with the continual erosion of online trust. In the moment, our brain is trying to apply the same mechanisms it uses to assess trust in a physical world. What we don’t realize is that we’re missing the warning signs our brains have evolved to intuitively look for.

We also drag this evolved luggage with us when we’re dealing with our favorite brands. One of the reasons you trust your closest friends is that they know you inside and out. This intimacy is a product of a physical world. It comes from sharing the same space with people.

In the virtual world, we expect the brands we know and love to have this same knowledge of us. It frustrates us when we are treated like a stranger. Think of how you would react if the people you love the most gave you the same treatment.

This jury-rigging of our personal relationship machinery to do double duty for the way we deal with brands may sound far-fetched, but marketing brands have only been around for a few hundred years. That is just not enough time for us to evolve new mechanisms to deal with them.

Yes, the rational, “slow loop” part of our brains can understand brands, but the “fast loop” has no “brand” or “data privacy” modules. It has no choice but to use the functional parts it does have.

As I mentioned in a previous post, there are multiple studies that indicate that it’s these parts of our brain that fire instantly, setting the stage for all the rationalization that will follow. And, as our own neuro-imaging study showed, it seems that the brain treats brands the same way it treats people.

I’ve been watching this intersection between technology and human behaviour for a long time now. More often than not, I see the brain making split-second decisions in environments where it just doesn’t have the proper equipment to make those decisions. When we stop to think about these things, we believe we understand them. And we do, but we had to stop to think. In the vast majority of cases, that’s just not how the brain works.