What Media Insiders Were Thinking (And Writing) In 2021

Note: This is a look back at the past year of posts in the Media Insider column on MediaPost, for which I write every Tuesday. All the writers for the column have been part of the marketing and media business for decades, so there’s a lot of wisdom there to draw on. This is the second time I’ve done this review of what we’ve written about in the previous year.

As part of the group of Media Insiders, I’ve always considered myself in sterling company. I suspect if you added up all the years of experience in this stable of industry experts, we’d be well into the triple digits. Most of the Insiders are still active in the world of marketing. As for me, although I’m no longer active in the business, I’m still fascinated by how it impacts our lives and our culture.

For all those reasons, I think the opinions of this group are worth listening to — and, thankfully, MediaPost gives you those opinions every day.

Three years ago, I thought it would be interesting to do a “meta-analysis” of those opinions over the span of the year, to see what has collectively been on the minds of the Media Insiders. I meant to do it again last year, but just never got around to it — as you know, global pandemics and uprisings against democracy were a bit of a distraction.

This year, I decided to give it another shot. And it was illuminating. Here’s a summary of what has been on our collective minds:

I don’t think it’s stretching things to say that your Insiders have been unusually existential in their thoughts over the past 12 months. Now, granted, this is one column on MediaPost that lends itself to existential musings. That’s why I ended up here. I love the fact that I can write about pretty much anything and it generally fits under the “Media Insider” masthead. I suspect the same is true for the other Insiders.

But even with that in mind, this year was different. I think we’ve all spent a lot of the last year thinking about what the moral and ethical boundaries for marketers are — for everyone, really — in the world of 2021. Those ponderings broke down into a few recurring themes.

Trying to Navigate a Substantially Different World

Most of this was naturally tied to the ongoing COVID pandemic.  

Surprisingly, given that three years ago it was one of the most popular topics, the Insiders said little about politics. Of course, back then we were squarely in the middle of “Trump time.” There were definitely a few posts after the Jan. 6 insurrection, but most of what we wrote was about trying to figure out how the world might permanently change after 2021. Almost 20% of our columns touched on this topic.

A notable subset of this was how our workplaces might change. With many of us being forced to work from home, 4% of the year’s posts talked about how “going to work” may never look the same again.

Ad-Tech Advice

The next most popular topic from Insiders (especially those still in the biz, like Corey, Dave, Ted and Maarten) was ongoing insight on how to manage the nuts and bolts of your marketing. A lot of this focused on using ad tech effectively. That made up 15% of last year’s posts.

And Now, The Bad News

I will say your Media Insiders (myself included) are a somewhat pessimistic bunch. Even when we weren’t talking about wrenching change brought about by a global pandemic, we were worrying about the tech world going to hell in a handbasket. About 13.5% of our posts talked about social media, and it was almost all negative, with most of it aimed squarely at Facebook — sorry, Meta.

Another 12% of our posts talked about other troubling aspects of technology. Privacy concerns over data usage and targeting took the lead here. But we were also worried about other issues, like the breakdown of person-to-person relationships, disappearing attention spans, and tears in our social fabric. When we talked about the future of tech, we tended to do it through a dystopian lens.

Added to this was a sincere concern about the future of journalism, which accounted for another 5% of all our posts. That makes for almost a full third of all posts with a decidedly gloomy outlook on tech and digital media’s impact on society.

The Runners-Up

If there was one branch of media that seemed the most popular among the Insiders (especially Dave Morgan), it was TV and streaming video. I also squeezed a few posts about online gaming into this category. Together, this topic made up 10.5% of all posts.

Next in line: social marketing and ethical branding. We all took our own spins on this, and together we devoted almost 9.5% of all posts in 2021 to it. I’ve talked before about the irony of a world that has little trust in advertising but growing trust in brands. Your Insiders have tried to thread the needle between the two sides of this seeming paradox.

Finally, we did cover a smattering of other topics, but one in particular rose above the others as something increasingly on our radar. We touched on the Metaverse and its implications in almost 3% of our posts.

Summing Up

To try to wrap up 2021 in one post is difficult, but if there was a single takeaway, I think it’s that both marketing and media are faced with some very existential questions. Ad-supported revenue models have now been pushed to the point where we must ask what the longer-term ethical implications might be.

If anything, I would say the past year has marked the beginning of our industry realizing that a lot of unintended consequences have now come home to roost.

When Social Media Becomes the Message

On Nov. 23, U.K. cosmetics firm Lush said it was deactivating its Instagram, Facebook, TikTok and Snapchat accounts until the social media environment “is a little safer.” And by a “safer” environment, the company didn’t mean for advertisers, but for consumers. Jack Constantine, chief digital officer and product inventor at Lush, explains in an interview with the BBC:

“[Social media channels] do need to start listening to the reality of how they’re impacting people’s mental health and the damage that they’re causing through their craving for the algorithm to be able to constantly generate content regardless of whether it’s good for the users or not.”

This was not an easy decision for Lush. It came with the possibility of a substantial cost to its business: “We already know that there is potential damage of £10m in sales and we need to be able to gain that back,” said Constantine. “We’ve got a year to try to get that back, and let’s hope we can do that.”

In effect, Lush is making a bet on the unpredictable network effects of social media. Would the potential loss to its bottom line be offset by the brand uptick it would receive by being true to its core values? Talking about Lush’s move on the Wharton Business Daily podcast, marketing lecturer Annie Wilson pointed out the issues in play here:

“There could be positive effects on short-term loyalty and brand engagement, but it will be interesting to see the long-term effect on acquiring new consumers in the future.”

I’m not trying to minimize Lush’s decision here by categorizing it as a marketing ploy. The company has been very transparent about how hard it’s been to drop — even temporarily — Facebook and its other properties from the Lush marketing mix. The brand had previously closed several of its U.K. social media accounts, but eventually found itself “back on the channels, despite the best intentions.”

You can’t overstate how fundamental a decision this is for a profit-driven business. But I’m afraid Lush is probably an outlier. The brand is built on making healthy choices, and Lush eventually decided it had to stay true to that mission even if it hurt the bottom line.

Other businesses are far from wearing their hearts on their sleeves to the same extent. For every Lush out there, there are thousands of brands that continue to feed their budgets to Facebook and its properties, even though they fundamentally disagree with the tactics of the channel.

There has been pushback against these tactics before. In July of 2020, 1,000 advertisers joined the #StopHateForProfit boycott against Facebook. That sounds impressive, until you realize that Facebook has 9 million clients. The boycotters represented just over 0.01% of all advertisers. Even with the support of other advertisers who didn’t join the boycott but still scaled back their ad spend, it had only a fleeting effect on Facebook’s bottom line. Almost all the advertisers eventually returned after the boycott.

As The New York Times reported at the time, the damage wasn’t so much to Facebook’s pocketbook as to its reputation. Stephen Hahn-Griffiths, the executive vice president of the public opinion analysis company RepTrak, wrote in a follow-up post:

“What could really hurt Facebook is the long-term effect of its perceived reputation and the association with being viewed as a publisher of ‘hate speech’ and other inappropriate content.”

Of course, that was all before the emergence of a certain Facebook whistleblower by the name of Frances Haugen. The former Facebook product manager released thousands of internal documents to The Wall Street Journal, which began publishing them this past September in a series called “The Facebook Files.” If we had any doubt about the culpability of Zuckerberg et al., this pretty much laid it to rest.

Predictably, after the story broke, Facebook made some halfhearted attempts to clean up its act by introducing new parental controls on Instagram and Facebook. This follows the typical Facebook playbook for dealing with emerging shitstorms: do the least amount possible, while talking about it as much as possible. It’s a tactic known as “purpose-washing.”

The question is, if this is all you do after a mountain of evidence points to you being truly awful, how sincere are you about doing the right thing? This puts Facebook in the same category as Big Tobacco, and that’s pretty crappy company to be in.

Lush’s decision to quit Facebook also pinpoints an interesting dilemma for advertisers: What happens when an advertising platform that has been effective in attracting new customers becomes so toxic that it damages your brand just by being on it? What happens when, as Marshall McLuhan famously said, the medium becomes the message?

Facebook is not alone in this. With the systematic dismantling of objective journalism, almost every news medium now carries its own message. This is certainly true for channels like Fox News. By supporting these platforms with advertising, advertisers are putting a stamp of approval on their respective editorial biases and — in Fox’s case — the deliberate spreading of misinformation that has been shown to carry a negative social cost.

All this points to a toxic cycle becoming more commonplace in ad-supported media: The drive to attract and effectively target an audience leads a medium to embrace questionable ethical practices. Those practices then taint the platform itself, potentially making it brand-toxic. Advertisers must then choose between reaching an audience that can expand their business and avoiding the toxicity of the platform. The challenge for the brand becomes a contest to see how long it can hold its nose while it continues to maximize sales and profits.

For Lush, the scent of Facebook’s bullshit finally grew too much to bear — at least for now.

Why Are Podcasts so Popular?

Everybody I know is listening to podcasts. According to eMarketer, the number of monthly U.S. podcast listeners will increase by over 10% this year, to a total of 117.8 million. And this growth is driven by younger consumers. Apparently, more than 60% of U.S. adults ages 18 to 34 will listen to podcasts.

That squares with my anecdotal evidence. Both my daughters are podcast fans. But the popularity of podcasts declines with age. Again, according to eMarketer, less than one-fifth of adults in the U.S. over 65 listen to podcasts.

I must admit, I’m not a regular podcast listener. Nor are most of my friends. I’m not sure why. You’d think we’d be the ideal target. Many of us listen to public radio, so the format of a podcast should be a logical extension of that. But maybe it’s because we’ve already made our choice, and we’re fine with listening to old-fashioned radio.

In theory, I should love podcasts. At the beginning of my career, I was a radio copywriter. I even wrote a few radio plays in my 20s. As a creator, I am very intrigued by the format of a podcast. I’m even considering experimenting in this medium for my own content. I just don’t listen to them that often.

What’s also perplexing about the recent popularity of podcasts is that they’re nothing new. Podcasts have been around forever, at least in Internet terms.

A Brief History of Podcasting

The idea of bite-sized broadcasts goes back to the 1980s and ‘90s, but the spread of the consumer Internet around 2000 opened up digital delivery of audio files to the average listener. This content found a new home in 2001, when Apple introduced the iPod. For the next ten-plus years, podcasts were generally just another delivery option for existing content.

But in 2014, “This American Life” launched season one of its true-crime “Serial” podcast. Suddenly, something gelled in the medium, and the audiences started to grow. The true crime bandwagon gathered speed. Both producers and audiences found their groove; the content became more compelling, and more people started listening.

In 2013, just over 10% of the U.S. population listened to podcasts monthly. This year, podcasting will become a $1 billion industry, and more than half of Americans have listened to at least one podcast.

So why did podcasting, a medium with relatively few technical bells and whistles, suddenly become so hot?

A Story Well Told

The first clue to the popularity of podcasts is that many of them (certainly the most popular ones) focus on storytelling. And we are innately connected to the power of a good story.

The one genre of podcast that has proven most popular is the true crime series. Humans have a need to resolve mysteries, and these podcasts have become very good at creating a curiosity gap that itches to be closed. They hit many of our hard-wired hot buttons.

Still, there are many, many ways to tell a murder mystery. So, beyond a compelling story, what else is it about podcasts that makes them so addictive?

The Beauty of Brain Bonding

When you think of how our brain interprets messages, an audio-based one seems to thread the needle between the effort of imagination and the joy of focused relaxation. It opens the door to our theater of the mind, allowing us to fill in the sensory gaps needed to bring the story alive.

As I mentioned in last week’s post, the brain works by retrieving and synthesizing memories and experiences when prompted by a stimulus. It’s a process that makes the stories a little more personal for us, a little more intimate; these are stories self-tailored for us by our own experiences and beliefs.

But there are other audio-only formats available. This clue gets us closer to understanding the popularity of podcasts, but still leaves us a bit short. For the final answer, we have to explore one more aspect of them.

An Intimate Invitation

When you google “why are podcasts popular?” you’ll often see that their appeal lies in their convenience. You can listen to them at your own pace, in your own place and on your own timeline. They are not as restrictive as a radio broadcast.

You could take that at face value, but I think there’s more than meets the ear here. There is something about the portability and convenience of podcasts that sets them up to be perhaps the most intimate of media.

When we listen to a podcast, we do so in an environment of our own choosing. Perhaps it’s in our vehicle during our daily commute. Maybe it’s just sitting in our favorite recliner by a fireplace.

Whatever the surroundings, we can make sure it’s a safe space that allows us to connect with the content at a very intimate level. We generally listen to them with our earbuds in, so the juicy details don’t leak out to the world at large.

And the best podcast producers have realized this. This is not a broadcast; it’s a one-sided conversation with your smartest friend talking about the most interesting thing they know.

Whatever lies behind their popularity, it’s a safe bet that half the people you know listen to podcasts on a regular basis.

I’ll have to give them another try.

The Complexities Of Understanding Each Other

How our brain understands things that exist in the real world is a fascinating and complex process.

Take a telephone, for example.

When you just saw that word in print, your brain went right to work translating nine abstract symbols (including the same one repeated three times), the letters we use to write “telephone,” into a concept that means something to you. And for each of you reading this, the process could be a little different. There’s a very good likelihood you’re picturing a phone. The visual cortex of your brain is supplying you with an image that comes from your real-world experience with phones.

But perhaps you’re thinking of the sound a phone makes, in which case the audio center of your brain has come to life and you’re reimagining the actual sound of a phone.

A recent study from the Max Planck Institute found there’s a hierarchy of understanding that activates in the brain when we think of things, running from the concrete at the lowest levels to the abstract at higher levels. It can all get quite complex — even for something relatively simple like a phone.

Imagine what a brain must go through to try to understand another person.

Another study, from Ruhr University Bochum in Germany, tried to unpack that question. The research team found, again, that the brain pulls many threads together to try to understand what another person might be going through. It gathers clues that come through our senses. But, perhaps most importantly, in many cases it attempts to read the other person’s mind. The research team believes it’s this ability that’s central to social understanding. “It enables us to develop an individual understanding of others that goes beyond the here and now,” explains researcher Julia Wolf. “This plays a crucial role in building and maintaining long-term relationships.”

In both these cases of understanding, our brains rely on our experience in the real world to create an internal representation. The richer those experiences are, the more we have to work with when we build those representations in our minds.

This becomes important when we try to understand how we understand each other. The more real-world experience we have with each other, the more successful we will be when it comes to truly getting into someone else’s head. This only comes from sharing the same physical space and giving our brains something to work with. “All strategies have limited reliability; social cognition is only successful by combining them,” says study co-researcher Sabrina Coninx.

I have talked before about the danger of substituting a virtual world for a physical one when it comes to truly building social bonds. We just weren’t built to do this. What we get through our social media channels is a mere trickle of input compared to what we would get through a real flesh-and-blood interaction.

Worse still, it’s not even an unbiased trickle. It’s been filtered through an algorithm that is trying to interpret what we might be interested in. At best it is stripped of context. At worst, it can be totally misleading.

Despite these worrying limitations, more and more of us are relying on this very unreliable signal to build our own internal representations of reality, especially those involving other people.

Why is this so dangerous? The negative impact of social media is twofold: First, it strips us of the context we need to truly understand each other; then, it creates an isolation of understanding. We become ideologically balkanized.

Balkanization is the process through which those who don’t agree with each other become formally isolated from each other. The term was first used to refer to the drawing of boundaries between regions (originally in the Balkan Peninsula) that were ethnically, politically or religiously different from each other.

Balkanization increasingly relies on internal representations of the “other,” avoiding real world contact that may challenge those representations. The result is a breakdown of trust and understanding across those borders. And it’s this breakdown of trust we should be worried about.

Our ability to reach across boundaries to establish mutually beneficial connections is a vital component in understanding the progress of humans. In fact, in his book “The Rational Optimist,” Matt Ridley convincingly argues that this ability to trade with others is the foundation that has made Homo sapiens dominant on this planet. But, to successfully trade and prosper, we have to trust each other. “As a broad generalisation, the more people trust each other in a society, the more prosperous that society is, and trust growth seems to precede income growth,” Ridley explains.

As I said, balkanization is a massive breakdown of trust. Throughout the history of humankind, a breakdown of trust has led societies to regress rather than advance. But when we take every opportunity to build trust and break down the borders of balkanization, we prosper.

Neuroeconomist Paul Zak, who has called the neurotransmitter oxytocin the “trust molecule,” says, “A 15% increase in the proportion of people in a country who think others are trustworthy, raises income per person by 1% per year for every year thereafter.”

We evolved to function in a world that was messy, organic and, most importantly, physical. Our social mechanisms work best when we keep bumping into each other, whether we want to or not. Technology might be wonderful at making the world more efficient, but it doesn’t do a very good job at making it more human.

The Unusual Evolution of the Internet

The Internet we have today evolved out of improbability. It shouldn’t have happened the way it did. It evolved as a wide-open network forged by starry-eyed academics and geeks who really believed it might make the world better. It wasn’t supposed to win against walled gardens like CompuServe, Prodigy and AOL — but it did. If you rolled back the clock, knowing what we know now, you could be sure it would never play out the same way again.

To use the same analogy that Eric Raymond did in his now-famous essay on the development of Linux, these were people who believed in bazaars rather than cathedrals. The internet was cobbled together to scratch an intellectual and ethical itch, rather than a financial one.

But today, as this essay in The Atlantic by Jonathan Zittrain warns us, the core of the internet is rotting. Because it was built by everyone and no one, all the superstructure that was assembled on top of that core is teetering. Things work, until they don’t: “The internet was a recipe for mortar, with an invitation for anyone, and everyone, to bring their own bricks.”

The problem is, it’s no one’s job to make sure those bricks stay in place.

Zittrain talks about the holes in humanity’s store of knowledge. But there’s another thing about this evolution that is either maddening or magical, depending on your perspective: It was never built with a business case in mind.

Eventually, commerce pipes were retrofitted into the whole glorious mess, and billions of dollars were made. Google alone has managed to pull over a trillion dollars in revenue in less than 20 years by becoming the de facto index to the world’s most haphazard library of digital stuff. Amazon went one better, using the Internet to reinvent humanity’s marketplace and pulling in $2 trillion in revenue along the way.

But despite all this massive monetization, the beneficiaries still had to at least pay lip service to that original intent: the naïve belief that technology could make us better, and that it didn’t just have to be about money.

Even Google, which is on its way to posting $200 billion in revenue, making it the fifth biggest media company in the world (after Netflix, Disney, Comcast, and AT&T), stumbled on its way to making a buck. Perhaps it’s because its founders, Larry Page and Sergey Brin, didn’t trust advertising. In their original academic paper, they said that “advertising-funded search engines will inherently be biased toward the advertisers and away from the needs of consumers.” Of course, they ultimately gave in to the dark side of advertising. But I watched the Google user experience closely from 2003 to 2011, and that dedication to the user was always part of a delicate balancing act that was generally successful.

But that innocence of the original Internet is almost gone, as I noted in a recent post. And there are those who want to make sure that the next thing — whatever it is — is built on a framework that has monetization built in. It’s why Mark Zuckerberg is feverishly hoping that his company can build the foundations of the Metaverse. It’s why Google is trying to assemble the pipes and struts that build the new web. Those things would be completely free of the moral — albeit naïve — constraints that still linger in the original model. In the new one, there would only be one goal: making sure shareholders are happy.

It’s also natural that many of those future monetization models will likely embrace advertising, which is, as I’ve said before, the path of least resistance to profitability.

We should pay attention to this. The very fact that the Internet’s original evolution was as improbable and profit-free as it was puts us in a unique position today. What would it look like if things had turned out differently, and the internet had been profit-driven from day one? I suspect it might have been better-maintained but a lot less magical, at least in its earliest iterations.

Whatever that new thing is, it will form a significant part of our reality. It will be even more foundational and necessary to us than the current internet. We won’t be able to live without it. For that reason, we should worry about the motives that may lie behind whatever “it” will be.

The Relationship between Trust and Tech: It’s Complicated

Today, I wanted to follow up on last week’s post about not trusting tech companies with your privacy. In that post, I said, “To find a corporation’s moral fiber, you always, always, always have to follow the money.”

A friend from back in my industry-show days — the always insightful Brett Tabke — reached out to comment, suggesting that the “holier-than-thou” privacy stand Tim Cook and Apple have taken in the current brouhaha with Facebook is one of convenience.

“I really wonder though if it is a case of do-the-right-thing privacy moral stance, or one of convenience that supports their ecosystem, and attacks a competitor?” he asked.

It’s hard to argue against that. As Brett mentioned, Apple really can’t lose by “taking money out of a side-competitor’s pocket and using it to lay more foundational cornerstones in the walled garden, [which] props up the illusion that the garden is a moral feature, and not a criminal antitrust offence.”

But let’s look beyond Facebook and Apple for a moment. As Brett also mentioned to me, “So who does a privacy action really impact more? Does it hit Facebook or ultimately Google? Facebook is just collateral damage here in the real war with Google. Apple and Google control their own platform ecosystems, but only Google can exert influence over the entire web. As we learned from the unredacted documents in the States vs Google antitrust filings, Google is clearly trying to leverage its assets to exert that control — even when ethically dubious.”

So, if we are talking trust and privacy, where is Google in this debate? Given the nature of Google’s revenue stream, its stand on privacy is not quite as blatantly obvious (or as self-serving) as Facebook’s. Both depend on advertising to pay the bills, but the nature of that advertising is significantly different.

According to its most recent annual report, 57% of the annual $182 billion revenue stream of Alphabet (Google’s parent company) still comes from search ads. And search advertising is relatively immune to crackdowns on privacy.

When you search for something on Google, you have already expressed your intent, which is the clearest possible signal with which you can target advertising. Yes, additional data taken with or without your knowledge can help fine-tune ad delivery — and Google has shown it’s certainly not above using this — but Apple tightening up its data security will not significantly impair Google’s ability to make money through its search revenue channel.

Facebook’s advertising model, on the other hand, targets you well before any expression of intent. For that reason, it has to rely on behavioral data and other targeting to effectively deliver those ads. Personal data is the lifeblood of such targeting. Turn off the tap, and Facebook’s revenue model dries up instantly.

But Google has always had ambitions beyond search revenue. Even today, 43% of its revenue comes from non-search sources. Google has always struggled with the inherently capped nature of search-based ad inventory. There are only so many searches against which you can serve advertising. And, as Brett points out, that leads Google to look at the very infrastructure of the web to find new revenue sources. And that has led to signs of a troubling collusion with Facebook.

Again, we come back to my “follow the money” mantra for rooting out rot in the system. And in this case, the money we’re talking about is the premium that Google skims off the top when it determines which ads are shown to you. That premium depends on Google’s ability to use data to target the most effective ads possible through its own “Open Bidding” system. According to the unredacted documents released in the antitrust suit, that premium can amount to 22% to 42% of the ad spend that goes through that system.

To sum up, it appears that if you want to know who can be trusted most with your data, it’s the companies that don’t depend on that data to support an advertising revenue model. Right now, that’s Apple. But as Brett also pointed out, don’t mistake this for any warm, fuzzy feeling that Apple is your knight in shining armour: “Apple has shown time and time again they are willing to sacrifice strong desires of customers in order to make money and control the ecosystem. Can anyone look past headphone jacks, MacBook jacks, or the absence of MacBook touch screens without getting the clear indication that these were all robber-baronesque choices of a monopoly in action? If so, then how can we go ‘all in’ on privacy with them just because we agree with the stance?”

The Tech Giant Trust Exercise

If we look at those who rule in the Valley of Silicon — the companies that determine our technological future — it seems, as I previously wrote, that Apple alone is serious about protecting our privacy.

MediaPost editor in chief Joe Mandese shared a post late last month about how Apple’s new privacy features are increasingly taking aim at the various ways in which advertising can be targeted to specific consumers. The latest victim in those sights is geotargeting.

Then Steve Rosenbaum mentioned last week that as Apple and Facebook gird their loins and prepare to do battle over the next virtual dominion — the metaverse — they are taking two very different approaches. Facebook sees this next dimension as an extension of its hacker mentality, a “raw, nasty network of spammers.” Apple is, as always, determined to exert top-down restriction on who plays in its sandbox, welcoming only those willing to play by its rules. In that approach, the company is also signaling that it will take privacy in the metaverse seriously. Apple CEO Tim Cook said he believes “users should have the choice over the data that is being collected about them and how it’s used.”

Apple can take this stand because its revenue model doesn’t depend on advertising. To find a corporation’s moral fiber, you always, always, always have to follow the money. Facebook depends on advertising for revenue. And it has repeatedly shown it doesn’t really give a damn about protecting the privacy of users. Apple, on the other hand, takes every opportunity to unfurl the privacy banner as its battle standard because its revenue stream isn’t really impacted by privacy.

If you're looking for the rot at the roots of technology, a good place to start is anything that relies on advertising. In my 40 years in marketing, I have come to the inescapable conclusion that it is impossible for business models that rely on advertising as their primary source of revenue to stay on the right side of privacy concerns. There is an inherent conflict that cannot be resolved. In a recent earnings call, Facebook CEO Mark Zuckerberg said it about as clearly as it could be said: "As expected, we did experience revenue headwinds this quarter, including from Apple's [privacy rule] changes that are not only negatively affecting our business, but millions of small businesses in what is already a difficult time for them in the economy."

Facebook has proven time and time again that when the need for advertising revenue runs up against a question of ethical treatment of users, it will always be the ethics that give way.

It's also interesting that Europe is light years ahead of North America in introducing legislation that protects privacy. According to one Internet Privacy Ranking study, four of the top five countries for protecting privacy are in Northern Europe. Australia is the fifth. My country, Canada, shares these characteristics and ranks seventh. The U.S. ranks 18th.

There is an interesting corollary here I've touched on before. All these top-ranked countries are social democracies. All have strong public broadcasting systems. All have a very different relationship with advertising than the U.S. Those of us who live in these countries are not immune from the dangers of advertising (this is certainly true for Canada), but our media structure is not wholly dependent on it. The U.S., right from the earliest days of electronic media, took a different path — one that relied almost exclusively on advertising to pay the bills.

As we start thinking about things like the metaverse or other forms of reality that are increasingly intertwined with technology, this reliance on advertising-funded platforms is something we must consider long and hard. It won’t be the companies that initiate the change. An advertising-based business model follows the path of least resistance, making it the shortest route to that mythical unicorn success story. The only way this will change will be if we — as users — demand that it changes.

And we should — we must — demand it. Ad-based tech giants that have no regard for our personal privacy are one of the greatest threats we face. The more we rely on them, the more they will ask from us.

The Terrors of New Technology

My neighbour just got a new car. And he is terrified. He told me so yesterday. He has no idea how the hell to use it. This isn’t just a new car. It’s a massive learning project that can intimidate the hell out of anyone. It’s technology run amok. It’s the canary in the coal mine of the new world we’re building.

Perhaps – just perhaps – we should be more careful in what we wish for.

Let me provide the backstory. His last car was his retirement present to himself, which he bought in 2000. He loved the car. It was a hard-top convertible. At the time he bought it, it was state of the art. But this was well before the Internet of Things and connected technology. The car did pretty much what you expected it to. Almost anyone could get behind the wheel and figure out how to make it go.

This year, under much prompting from his son, he finally decided to sell his beloved convertible and get a new car. But this isn’t just any car. It is a high-end electric sports car. Again, it is top of the line. And it is connected in pretty much every way you could imagine, and in many ways that would never cross any of our minds.

My neighbour has had this new car for about a week. And he’s still afraid to drive it anywhere. “Gord,” he said, “the thing terrifies me. I still haven’t figured out how to get it to open my garage door.” He has done online tutorials. He has set up a Zoom session with the dealer to help him navigate the umpteen zillion screens that show up on the smart display. After several frustrating experiments, he has learned he needs to pair it with his wifi system at home to get it to recharge properly. No one could just hop behind the wheel and drive it. You would have to sign up for an intensive technology boot camp before you were ready to climb a near-vertical learning curve. The capabilities of this car are mind boggling. And that’s exactly the problem. It’s damned near impossible to do anything with a boggled mind.

The acceptance of new technology has generated a vast body of research. I myself did an exhaustive series of blog posts on it back in 2014. Ever since sociologist Everett Rogers did his seminal work on the topic back in 1962, we have known that there are hurdles to overcome in grappling with something new, and we don't all clear the hurdles at the same rate. Some of us never clear them at all.

But I also suspect that marketers, especially at the high end, have become so enamored with embedding technology that they have forgotten how difficult it might be for some of us to adopt that technology, especially those of us of a certain age.

I am and always have been an early adopter. I geek out on new technology. That’s probably why my neighbour has tapped me to help him figure out his new car. I’m the guy my family calls when they can’t get their new smartphone to work. And I don’t mind admitting I’m slipping behind. I think we’re all the proverbial frogs in boiling water. And that water is technology. It’s getting harder and harder just to use the new shit we buy.

Here’s another thing that drives me batty about technology. It’s a constantly moving target. Once you learn something, it doesn’t stay learnt. It upgrades itself, changes platforms or becomes obsolete. Then you have to start all over again.

Last year, I started retrofitting our home to be a little bit smarter. And in the space of that year, I have sensors that mysteriously go offline, hubs that suddenly stop working, automation routines that are moodier than a hormonal teenager and a lot of stuff that just fits into the "I have no idea" category. When it all works it's brilliant. I remember that one day — it was special. The other 364 have been a pain in the ass of varying intensity. And that's for me, the tech guy. My wife sometimes feels like a prisoner in her own home. She has little appreciation for the mysterious gifts of technology that allow me to turn on our kitchen lights when we're in Timbuktu (should we ever go there and if we can find a good wifi signal).

Technology should be a tool. It should serve us, not hold us slave to its whims. It would be so nice to be able to just make coffee from our new coffee maker, instead of spending a week trying to pair it with our toaster so breakfast is perfectly synchronized.

Oops, got to go. My neighbour’s car has locked him in his garage.

Why Is Willful Ignorance More Dangerous Now?

In last week’s post, I talked about how the presence of willful ignorance is becoming something we not only have to accept, but also learn how to deal with. In that post, I intimated that the stakes are higher than ever, because willful ignorance can do real damage to our society and our world.

So, if we’ve lived with willful ignorance for our entire history, why is it now especially dangerous? I suspect it’s not so much that willful ignorance has changed, but rather the environment in which we find it.

The world we live in is more complex because it is more connected. But there are two sides to this connection, one in which we’re more connected, and one where we’re further apart than ever before.

Technology Connects Us…

Our world and our society are made of networks. And when it comes to our society, connection creates networks that are more interdependent, leading to complex behaviors and non-linear effects.

We must also realize that our rate of connection is accelerating. The pace of technology has long been governed by Moore's Law: the observation that the number of transistors on a chip — and, by extension, the speed and capability of our computers — doubles roughly every two years. For almost 60 years, this observation has held surprisingly well.
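To put that doubling in perspective, here is the back-of-the-envelope arithmetic (my illustration, not from the original column): anything that doubles every two years grows roughly a billionfold over 60 years.

```python
# Compound a two-year doubling over 60 years.
years = 60
doublings = years // 2        # one doubling every two years
growth = 2 ** doublings       # 2**30, a bit over a billion

print(f"{doublings} doublings over {years} years -> a {growth:,}x increase")
```

That billionfold figure is why "exponential" is not hyperbole here: no linear trend in human history compares.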

What this has meant for our ability to connect digitally is that the number and impact of our connections have also increased exponentially, and they will continue to increase in our future. This creates a much denser and more interconnected network, but it has also created a network that overcomes the naturally self-regulating effects of distance.

For the first time, we can have strong and influential connections with others on the other side of the globe. And, as we forge more connections through technology, we are starting to rely less on our physical connections.

And Drives Us Further Apart

The wear and tear of a life spent bumping into each other in a physical setting tends to smooth out our rougher ideological edges. In face-to-face settings, most of us are willing to moderate our own personal beliefs in order to conform to the rest of the crowd. Some 70 years ago, psychologist Solomon Asch showed how willing we are to ignore the evidence of our own eyes in order to conform to the majority opinion of a crowd.

For the vast majority of our history, physical proximity has forced social conformity upon us. It levels out our own belief structure in order to keep the peace with those closest to us, satisfying one of our strongest evolutionary urges.

But, thanks to technology, that's also changing. We are spending more time physically separated but technically connected. Our social conformity mechanisms are being short-circuited by filter bubbles where everyone seems to share our beliefs. This creates something called an availability bias: the things we see coming through our social media feeds form our view of what the world must be like, even though statistically they are not representative of reality.

It gives the willfully ignorant the illusion that everyone agrees with them — or, at least, enough people agree with them that it overcomes the urge to conform to the majority opinion.

Ignorance in a Chaotic World

These two things make our world increasingly fragile and subject to what chaos theorists call the Butterfly Effect, where seemingly small things can make massive differences.

It’s this unique nature of our world, which is connected in ways it never has been before, that creates at least three reasons why willful ignorance is now more dangerous than ever:

One: The impact of ignorance can be quickly amplified through social media, causing a Butterfly Effect cascade. Case in point: the falsehood that the U.S. election results weren't valid, which led to the Capitol insurrection of Jan. 6.

The mechanics of social media that led to this issue are many, and I have cataloged most of them in previous columns: the nastiness that comes from arm’s-length discourse, a rewiring of our morality, and the impact of filter bubbles on our collective thresholds governing anti-social behaviors.

Two: Probably a bigger cause for concern, the willfully ignorant are very easily consolidated into a power base for politicians willing to play to their beliefs. The far right — and, to a somewhat lesser extent, the far left — has learned this to devastating effect. All you have to do is abandon your predilection for telling the truth so you can help them rationalize their deliberate denial of facts. Do this and you have tribal support that is almost impossible to shake.

The move of populist politicians to use the willfully ignorant as a launch pad for their own purposes further amplifies the Butterfly Effect, ensuring that the previously unimaginable will continue to be the new state of normal.

Three: There is our expanding impact on the physical world. It's not just our degree of connection that technology is changing exponentially. It's also the degree of impact we have on our physical world.

For almost our entire time on earth, the world has made us. We have evolved to survive in our physical environment, where we have been subject to the whims of nature.

But now, increasingly, we humans are shaping the nature of the world we live in. Our footprint has an ever-increasing impact on our environment, and that footprint is also increasing exponentially, thanks to technology.

The earth and our ability to survive on it are — unfortunately — now dependent on our stewardship. And that stewardship is particularly susceptible to willful ignorance. In the area of climate change alone, willful ignorance has already led to events with massive consequences: one recent study links non-optimal temperatures to roughly five million deaths a year.

For all these reasons, willful ignorance is now something that can have life and death consequences.

Making Sense of Willful Ignorance

Willful ignorance is nothing new. Depending on your beliefs, you could say it was willful ignorance that got Adam and Eve kicked out of the Garden of Eden. But the visibility of it is higher than it’s ever been before. In the past couple of years, we have had a convergence of factors that has pushed willful ignorance to the surface — a perfect storm of fact denial.

Those factors include the social media effect, the erosion of traditional journalism and a global health crisis that has us all focusing on the same issue at the same time. The net result is that we all have a very personal interest in the degree of ignorance prevalent in our society.

In one very twisted way, this may be a good thing. As I said, the willfully ignorant have always been with us. But we’ve always been able to shrug and move on, muttering “stupid is as stupid does.”

Now, however, the stakes are getting higher. Our world and society are at a point where willful ignorance can inflict some real and substantial damage. We need to take it seriously and we must start thinking about how to limit its impact.

So, for myself, I’m going to spend some time understanding willful ignorance. Feel free to come along for the ride!

It’s important to understand that willful ignorance is not the same as being stupid — or even just being ignorant, despite thousands of social media memes to the contrary.

Ignorance is one thing. It means we don’t know something. And sometimes, that’s not our fault. We don’t know what we don’t know. But willful ignorance is something very different. It is us choosing not to know something.

For example, I know many smart people who have chosen not to get vaccinated. Their reasons may vary. I suspect fear is a common denominator, and there is no shame in that. But rather than seek information to allay their fears, these folks have doubled down on beliefs based on little to no evidence. They have made a choice to ignore the information that is freely available.

And that’s doubly ironic, because the very same technology that enables willful ignorance has made more information available than ever before.

Willful ignorance is defined as “a decision in bad faith to avoid becoming informed about something so as to avoid having to make undesirable decisions that such information might prompt.”

And this is where the problem lies. The explosion of content means there is always information available to support any point of view. Add to that the breakdown of journalistic principles over the past 40 years, and we have a dangerous pool of information that has been deliberately falsified in order to appeal to a segment of the population that has chosen to be willfully ignorant.

It seems a contradiction: The more information we have, the more that ignorance is a problem. But to understand why, we have to understand how we make sense of the world.

Making Sense of Our World

Sensemaking is a concept that was first introduced by organizational theorist Karl Weick in the 1970s. The concept has since been borrowed by those working in machine learning and artificial intelligence. At the risk of oversimplification, it provides a model to help us understand how we "give meaning to our collective experiences."

[Diagram: the sensemaking process, from D.T. Moore and R. Hoffman, 2011]

The above diagram (from a 2011 paper by David T. Moore and Robert R. Hoffman) shows the sensemaking process. It starts with a frame — our understanding of what is true about the world. As we get presented with new data, we have to make a choice: Does it fit our frame or doesn’t it?

If it does, we preserve the frame and may elaborate on it, fitting the new data into it. If the data doesn’t support our existing frame, we then have to reframe, building a new frame from scratch.
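The preserve/elaborate/reframe choice can be sketched as a simple loop. This is my own toy illustration, not code from Moore and Hoffman's paper; the names and the numeric "frame" are invented for the example.

```python
# Toy sketch of the sensemaking loop. A "frame" is a belief about the world,
# modeled here as a predicate plus the observations that support it.

def make_sense(frame, observations):
    """Process observations: elaborate the frame while data fits it,
    rebuild (reframe) around any observation that contradicts it."""
    fits, evidence = frame
    for obs in observations:
        if fits(obs):
            evidence.append(obs)  # frame preserved and elaborated
        else:
            # reframe: build a new frame centered on the surprising data
            fits = lambda x, center=obs: abs(x - center) < 10
            evidence = [obs]
    return fits, evidence

# Start with a frame expecting small numbers; 42 forces a reframe,
# and 45 then fits the rebuilt frame.
frame = (lambda x: x < 10, [])
fits, evidence = make_sense(frame, [3, 7, 42, 45])
```

The point of the sketch is the asymmetry: elaborating is one cheap append, while reframing throws the old evidence away and starts over, which is exactly why brains (and the model) prefer to keep the frame they have.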

Our brains love frames. It's much less work for the brain to keep a frame than to build a new one. That's why we tend to stick with our beliefs — another word for a frame — until we're forced to discard them.

But, as with all human traits, our ways of making sense of our world vary across the population. Some of us are more apt to spend time on the right side of the diagram, always open to evidence that may cause us to reframe.

That, by the way, is exactly how science is supposed to work. We refer to this capacity as critical thinking: the objective analysis and evaluation of data in order to form a judgment, even if it causes us to have to build a new frame.

Others hold onto their frames for dear life. They go out of their way to ignore data that may cause them to have to discard the frames they hold. This is what I would define as willful ignorance.

It's misleading to think of this as just being ignorant; that would simply indicate a lack of available data. It's also misleading to attribute it to a lack of intelligence; that would be an inability to process the data. With willful ignorance, we're talking about neither of those things. We are talking about a conscious and deliberate decision to ignore available data. And I don't believe you can fix that.

We fall into the trap of thinking we can educate, shame or argue people out of being willfully ignorant. We can’t. This post is not intended for the willfully ignorant. They have already ignored it. This is just the way their brains work. It’s part of who they are. Wishing they weren’t this way is about as pointless as wishing they were a world-class pole vaulter, that they were seven feet tall or that their brown eyes were blue.

We have to accept that this situation is not going to change. And that’s what we have to start thinking about. Given that we have willful ignorance in the world, what can we do to minimize its impact?