It’s the Buzz That Will Kill You

If you choose to play in the social arena, you have to accept that the typical peaks and valleys of business success can suddenly become impossibly steep.

In social media networks, your brand message is whatever meme happens to emerge from the collective activity of this connected market. Marketers have little control — and sometimes, they have no control. At best, all they can do is react by throwing another carefully crafted meme into the social-sphere and hope it picks up some juice and is amplified through the network.

That’s exactly what happened to Peloton in the past week and a half.

On Dec. 9, the HBO Max sequel to “Sex and the City” killed off a major character — Chris Noth’s Mr. Big — by giving him a heart attack after his one thousandth Peloton ride. Apparently, HBO Max gave Peloton no advance warning of this branding backhand.

On Dec. 10, according to Axios, there was a dramatic spike in social interactions about Mr. Big’s last ride, peaking near 80,000. As you can imagine, the buzz was not good for Peloton’s business.

On Dec. 12, Peloton struck back with its own ad, apparently produced in just 24 hours by Ryan Reynolds’ Maximum Effort agency. This turned the tide of the social buzz. Again, according to data from NewsWhip and Axios, social media mentions spiked. This time, they were much more positive toward the Peloton brand.

It should be all good — right? Not so fast. On Dec. 16, two sexual assault allegations against Chris Noth were chronicled in The Hollywood Reporter. Peloton rapidly scrubbed its ad campaign. Again, the social sphere lit up, and Peloton was forced back into defensive mode.

Now, you might call all this marketing froth, but that’s the way it is in our hyper-connected world. You just have to dance the dance — be nimble and respond.

But my point is not about the marketing side of this brouhaha – which has been covered to death, at least at MediaPost (sorry, pardon the pun). I’m more interested in what happens to the people who have some real skin in this particular game, whose lives depend on the fortunes of the Peloton brand. Because all this froth does have some very IRL consequences.

Take Peloton’s share price, for one.

The day before the HBO show aired, Peloton’s shares were trading at $45.91. The next day, they tumbled 16% to $38.51.

And that’s just one chapter in the ongoing story of Peloton’s stock performance, which has been a hyper-compressed roller coaster ride: the pandemic and a huge amount of social media buzz kept the foot firmly on the accelerator through 2020, and then the stock dropped like a rock for most of 2021. After peaking as high as $162 a share exactly a year ago, the share price is back within spitting distance of its pre-pandemic levels.

Obviously, Peloton’s share price is not just dependent on the latest social media meme. There are business fundamentals to consider as well.

Still, you have to accept that a more connected meme-market is going to naturally accelerate the speed of business upticks and declines. Peloton signed up for this dance — and when you do that, you have to accept all that comes with it.

In terms of the real-world consequences of betting on the buzz, there are three “insider” groups (not including customers) that will be affected: the management, the shareholders and the employees. The first of these supposedly went into this with their eyes open. The second of these also made a choice. If they did their due diligence before buying the stock, they should have known what to expect. But it’s the last of these — the employees — that I really feel for.

With ultra-compressed business cycles like Peloton has experienced, it’s tough for employees to keep up. On the way up the peak, the company is running ragged trying to scale for hyper-growth. If you check employee review sites like Glassdoor.com, there are tales of creaky recruitment processes not being able to keep up. But at least the ride up is exciting. The ride down is something quite different.

In psychological terms, there is something called the locus of control: the degree to which you feel you have at least some control over what happens to you. And there is an ever-increasing body of evidence showing that locus of control and employee job satisfaction are strongly correlated. No one likes to be constantly waiting for the other shoe to drop. It just ramps up your job stress. Granted, job stress that comes with big promotions and generous options on a rocket-ship stock can perhaps be justified. But stress that’s packaged with panicked downsizing and imminent layoffs is not a fun employment package for anyone.

That’s the current case at Peloton. On Nov. 5, it announced an immediate hiring freeze. And while there’s been no official announcement of layoffs that I could find, there have been rumors of them posted to the site thelayoff.com. This is not a fun environment for anyone to function in. Here’s what one post said: “I left Peloton a year ago when I realized it was morphing into the type of company I had no intention of working for.”

We have built a business environment that is highly vulnerable to buzz. And as Peloton has learned, what the buzz giveth, the buzz can also taketh away.

When Social Media Becomes the Message

On Nov. 23, U.K. cosmetics firm Lush said it was deactivating its Instagram, Facebook, TikTok and Snapchat accounts until the social media environment “is a little safer.” And by a “safer” environment, the company didn’t mean for advertisers, but for consumers. Jack Constantine, chief digital officer and product inventor at Lush, explains in an interview with the BBC:

“[Social media channels] do need to start listening to the reality of how they’re impacting people’s mental health and the damage that they’re causing through their craving for the algorithm to be able to constantly generate content regardless of whether it’s good for the users or not.”

This was not an easy decision for Lush. It came with the possibility of a substantial cost to its business: “We already know that there is potential damage of £10m in sales and we need to be able to gain that back,” said Constantine. “We’ve got a year to try to get that back, and let’s hope we can do that.”

In effect, Lush is rolling the dice on the unpredictable network effects of social media. Would the potential loss to its bottom line be offset by the brand uptick it would receive by being true to its core values? In talking about Lush’s move on the Wharton Business Daily podcast, marketing lecturer Annie Wilson pointed out the issues in play here:

“There could be positive effects on short-term loyalty and brand engagement, but it will be interesting to see the long-term effect on acquiring new consumers in the future.”

I’m not trying to minimize Lush’s decision here by categorizing it as a marketing ploy. The company has been very transparent about how hard it’s been to drop — even temporarily — Facebook and its other properties from the Lush marketing mix. The brand had previously closed several of its U.K. social media accounts, but eventually found itself “back on the channels, despite the best intentions.”

You can’t overstate how fundamental a decision this is for a profit-driven business. But I’m afraid Lush is probably an outlier. The brand is built on making healthy choices, and Lush eventually decided it had to stay true to that mission even if it hurt the bottom line.

Few other businesses wear their hearts on their sleeves to the same extent as Lush. For every Lush that’s out there, there are thousands of brands that continue to feed their budgets to Facebook and its properties, even though they fundamentally disagree with the tactics of the channel.

There has been pushback against these tactics before. In July of 2020, 1,000 advertisers joined the #StopHateForProfit boycott against Facebook. That sounds impressive – until you realize that Facebook has 9 million advertising clients. The boycotters represented just over 0.01% of all advertisers. Even with the support of other advertisers who didn’t join the boycott but still scaled back their ad spend, it had only a fleeting effect on Facebook’s bottom line. Almost all the advertisers eventually returned after the boycott.
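If you want to check that math, here’s a quick back-of-the-envelope calculation using the figures cited above (a throwaway Python snippet, nothing more):

```python
# Rough check of the boycott math, using the numbers quoted above.
boycotters = 1_000              # advertisers who joined #StopHateForProfit
total_advertisers = 9_000_000   # Facebook's reported client base
share = boycotters / total_advertisers * 100
print(f"Boycotters as a share of all advertisers: {share:.3f}%")  # ~0.011%
```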

As The New York Times reported at the time, the damage wasn’t so much to Facebook’s pocketbook as to its reputation. Stephen Hahn-Griffiths, the executive vice president of the public opinion analysis company RepTrak, wrote in a follow-up post:

“What could really hurt Facebook is the long-term effect of its perceived reputation and the association with being viewed as a publisher of ‘hate speech’ and other inappropriate content.”

Of course, that was all before the emergence of a certain Facebook data engineer by the name of Frances Haugen. The whistleblower released thousands of internal documents to The Wall Street Journal this past fall, and the Journal went public with them in September of this year in a series called “The Facebook Files.” If we had any doubt about the culpability of Zuckerberg et al., this pretty much laid it to rest.

Predictably, after the story broke, Facebook made some halfhearted attempts to clean up its act by introducing new parental controls on Instagram and Facebook. This follows the typical Facebook playbook for dealing with emerging shit storms: do the least amount possible, while talking about it as much as possible. It’s a tactic known as “purpose-washing.”

The question is, if this is all you do after a mountain of evidence points to you being truly awful, how sincere are you about doing the right thing? This puts Facebook in the same category as Big Tobacco, and that’s pretty crappy company to be in.

Lush’s decision to quit Facebook also pinpoints an interesting dilemma for advertisers: What happens when an advertising platform that has been effective in attracting new customers becomes so toxic that it damages your brand just by being on it? What happens when, as Marshall McLuhan famously said, the medium becomes the message?

Facebook is not alone in this. With the systematic dismantling of objective journalism, almost every news medium now carries its own message. This is certainly true for channels like Fox News. By supporting these platforms with advertising, advertisers are putting a stamp of approval on their respective editorial biases and — in Fox’s case — the deliberate spreading of misinformation that has been shown to carry a negative social cost.

All this points to a toxic cycle becoming more commonplace in ad-supported media: The drive to attract and effectively target an audience leads a medium to embrace questionable ethical practices. Those practices then taint the platform itself, potentially making it brand-toxic. Advertisers must then choose between reaching an audience that can expand their business and avoiding the toxicity of the platform. The challenge for the brand becomes a contest to see how long it can hold its nose while it continues to maximize sales and profits.

For Lush, the scent of Facebook’s bullshit finally grew too much to bear — at least for now.

Why Are Podcasts So Popular?

Everybody I know is listening to podcasts. According to eMarketer, the number of monthly U.S. podcast listeners will increase by over 10% this year, to a total of 117.8 million. And this growth is driven by younger consumers: apparently, more than 60% of U.S. adults ages 18 to 34 will listen to podcasts this year.

That squares with my anecdotal evidence. Both my daughters are podcast fans. But the popularity of podcasts declines with age. Again, according to eMarketer, less than one-fifth of adults in the U.S. over 65 listen to podcasts.

I must admit, I’m not a regular podcast listener. Nor are most of my friends. I’m not sure why. You’d think we’d be the ideal target. Many of us listen to public radio, so the format of a podcast should be a logical extension of that. But maybe it’s because we’ve already made our choice, and we’re fine with listening to old-fashioned radio.

In theory, I should love podcasts. At the beginning of my career, I was a radio copywriter. I even wrote a few radio plays in my 20s. As a creator, I am very intrigued by the format of a podcast. I’m even considering experimenting in this medium for my own content. I just don’t listen to them that often.

What’s also perplexing about the recent popularity of podcasts is that they’re nothing new. Podcasts have been around forever, at least in Internet terms.

A Brief History of Podcasting

The idea of bite-sized broadcasts goes back to the 1980s and ’90s, but the spread of the Internet around 2000 put digital delivery of an audio file within reach of the average listener. This content found a new home in 2001, when Apple introduced the iPod. For the next 10-plus years, podcasts were generally just another delivery option for existing content.

But in 2014, “This American Life” launched season one of its true-crime “Serial” podcast. Suddenly, something gelled in the medium, and audiences started to grow. The true-crime bandwagon gathered speed. Both producers and audiences found their groove; the content became more compelling, and more people started listening.

In 2013, just over 10% of the U.S. population listened to podcasts monthly. This year, podcasting will become a $1 billion industry and over 50% of Americans listen regularly.

So why did podcasting, a medium with relatively few technical bells and whistles, suddenly become so hot?

A Story Well Told

The first clue to the popularity of podcasts is that many of them (certainly the most popular ones) focus on storytelling. And we are innately connected to the power of a good story.

The genre of podcast that has been most popular is true crime. Humans have a need to resolve mysteries, and these podcasts have become very good at creating a curiosity gap that itches to be closed. They hit many of our hard-wired hot buttons.

Still, there are many, many ways to tell a murder mystery. So, beyond a compelling story, what else is it about podcasts that make them so addictive?

The Beauty of Brain Bonding

When you think of how our brain interprets messages, an audio-based one seems to thread the needle between the effort of imagination and the joy of focused relaxation. It opens the door to our theater of the mind, allowing us to fill in the sensory gaps needed to bring the story alive.

As I mentioned in last week’s post, the brain works by retrieving and synthesizing memories and experiences when prompted by a stimulus. It’s a process that makes the stories a little more personal for us, a little more intimate; these are stories self-tailored for us by our own experiences and beliefs.

But there are other audio-only formats available. This clue gets us closer to understanding the popularity of podcasts, but still leaves us a bit short. For the final answer, we have to explore one more aspect of them.

An Intimate Invitation

When you google “why are podcasts popular?” you’ll often see that their appeal lies in their convenience. You can listen to them at your own pace, in your own place and on your own timeline. They are not as restrictive as a radio broadcast.

You could take that at face value, but I think there’s more than meets the ear here. There is something about the portability and convenience of podcasts that sets them up to be possibly the most intimate of media.

When we listen to a podcast, we do so in an environment of our own choosing. Perhaps it’s in our vehicle during our daily commute. Maybe it’s just sitting in our favorite recliner by a fireplace.

Whatever the surroundings, we can make sure it’s a safe space that allows us to connect with the content at a very intimate level. We generally listen to them with our earbuds in, so the juicy details don’t leak out to the world at large.

And the best podcast producers have realized this. This is not a broadcast; it’s a one-sided conversation with your smartest friend, talking about the most interesting thing they know.

Whatever lies behind their popularity, it’s a safe bet that half the people you know listen to podcasts on a regular basis.

I’ll have to give them another try.

The Complexities Of Understanding Each Other

How our brain understands things that exist in the real world is a fascinating and complex process.

Take a telephone, for example.

When you just saw that word in print, your brain went right to work translating nine abstract symbols (including the same one repeated three times), the letters we use to write “telephone,” into a concept that means something to you. And for each of you reading this, the process could be a little different. There’s a very good likelihood you’re picturing a phone. The visual cortex of your brain is supplying you with an image that comes from your real-world experience with phones.

But perhaps you’re thinking of the sound a phone makes, in which case the audio center of your brain has come to life and you’re reimagining the actual sound of a phone.

A recent study from the Max Planck Institute found there’s a hierarchy of understanding that activates in the brain when we think of things, going from the concrete at the lowest levels to the abstract at higher levels. It can all get quite complex — even for something relatively simple like a phone.

Imagine what a brain must go through to try to understand another person.

Another study, from Ruhr University in Bochum, Germany, tried to unpack that question. The research team found, again, that the brain pulls many threads together to try to understand what another person might be going through. It draws on clues that come through our senses. But, perhaps most importantly, in many cases it attempts to read the other person’s mind. The research team believes it’s this ability that’s central to social understanding. “It enables us to develop an individual understanding of others that goes beyond the here and now,” explains researcher Julia Wolf. “This plays a crucial role in building and maintaining long-term relationships.”

In both these cases, our brains rely on our experience in the real world to build internal representations. The richer those experiences are, the more we have to work with when we construct those representations in our minds.

This becomes important when we try to understand how we understand each other. The more real-world experience we have with each other, the more successful we will be when it comes to truly getting into someone else’s head. This only comes from sharing the same physical space and giving our brains something to work with. “All strategies have limited reliability; social cognition is only successful by combining them,” says study co-researcher Sabrina Coninx.

I have talked before about the danger of substituting a virtual world for a physical one when it comes to truly building social bonds. We just weren’t built to do this. What we get through our social media channels is a mere trickle of input compared to what we would get through a real flesh-and-blood interaction.

Worse still, it’s not even an unbiased trickle. It’s been filtered through an algorithm that is trying to interpret what we might be interested in. At best it is stripped of context. At worst, it can be totally misleading.

Despite these worrying limitations, more and more of us are relying on this very unreliable signal to build our own internal representations of reality, especially those involving other people.

Why is this so dangerous? The negative impact of social media is twofold: first, it strips us of the context we need to truly understand each other; second, it creates an isolation of understanding. We become ideologically balkanized.

Balkanization is the process through which those who don’t agree with each other become formally isolated from each other. The term was first used to refer to the drawing of boundaries between regions (originally in the Balkan peninsula) that were ethnically, politically or religiously different from each other.

Balkanization increasingly relies on internal representations of the “other,” avoiding real world contact that may challenge those representations. The result is a breakdown of trust and understanding across those borders. And it’s this breakdown of trust we should be worried about.

Our ability to reach across boundaries to establish mutually beneficial connections is a vital component in understanding the progress of humans. In fact, in his book “The Rational Optimist,” Matt Ridley convincingly argues that this ability to trade with others is the foundation that has made Homo sapiens dominant on this planet. But, to successfully trade and prosper, we have to trust each other. “As a broad generalisation, the more people trust each other in a society, the more prosperous that society is, and trust growth seems to precede income growth,” Ridley explains.

As I said, balkanization is a massive breakdown of trust. In every single instance in the history of humankind, a breakdown of trust has led to a society that regresses rather than advances. But if we take every opportunity to build trust and break down the borders of balkanization, we prosper.

Neuroeconomist Paul Zak, who has called the neurotransmitter oxytocin the “trust molecule,” says, “A 15% increase in the proportion of people in a country who think others are trustworthy, raises income per person by 1% per year for every year thereafter.”

We evolved to function in a world that was messy, organic and, most importantly, physical. Our social mechanisms work best when we keep bumping into each other, whether we want to or not. Technology might be wonderful at making the world more efficient, but it doesn’t do a very good job at making it more human.

Moving Beyond Willful Ignorance

This is not the post I thought I’d be writing today. Two weeks ago, when I started to try to understand willful ignorance, I was mad. I suspect I shared that feeling with many of you. I was tired of the deliberate denial of fact that had consequences for all of us. I was frustrated with anti-masking, anti-vaxxing, anti-climate change and, most of all, anti-science. I was ready to go to war with those I saw in the other camp.

And that, I found out, is exactly the problem. Let me explain.

First, to recap. As I talked about two weeks ago, willful ignorance is a decision based on beliefs, so it’s very difficult – if not impossible – to argue, cajole or inform people out of it. And, as I wrote last week, willful ignorance has some very real and damaging consequences. This post was supposed to talk about what we do about that problem. I intended to find ways to isolate the impact of willful ignorance and minimize its downside. In doing so, I was going to suggest putting up even more walls to separate “us” from “them.”

But the more I researched this and thought about it, the more I realized that that was exactly the wrong approach. Because this recent plague of willful ignorance is many things, but – most of all – it’s one more example of how we love to separate “us” from “them.” And both sides, including mine, are equally guilty of doing this. The problem we have to solve here is not so much to change the way that some people process information (or don’t) in a way we may not agree with. What we have to fix is a monumental breakdown of trust.

Beliefs thrive in a vacuum. In a vacuum, there’s nothing to challenge them. And we have all been forced into a kind of ideological vacuum for the past year and a half. I talked about how our physical world creates a more heterogeneous ideological landscape than our virtual world does. In a normal life, we are constantly rubbing elbows with those of all leanings. And, if we want to function in that life, we have to find a way to get along with them, even if we don’t like them or agree with them. For most of us, that natural and temporary social bonding is something we haven’t had to do much lately.

It’s this lowering of our ideological defence systems that starts to bridge the gaps between us and them. And it also starts pumping oxygen into our ideological vacuums, prying the lids off our air-tight belief systems. It might not have a huge impact, but this doesn’t require a huge impact. A little trust can go a long way.

After World War II, psychologists and sociologists started to pick apart a fundamental question – how did our world go to war with itself? How, in the name of humanity, did the atrocities of the war occur? One of the areas they started to explore with vigour was this fundamental need of humans to sort ourselves into the categories of “us” and “them”.

In the 1970s, psychologist Henri Tajfel found that we barely need a nudge to start creating in-groups and out-groups. We’ll do it over anything, even something as trivial as which abstract artist, Klee or Kandinsky, we prefer. Once sorted on the flimsiest of premises, these groups showed a strong preference to favour their own group and punish the other. There was no pre-existing animosity between the groups, but in games such as the Banker’s Game, members would even forgo rewards for themselves if it meant depriving the other group of their share.

If we do this for completely arbitrary reasons such as those used by Tajfel, imagine how nasty we can get when the stakes are much higher, such as our own health or the future of the planet.

So, if we naturally sort ourselves into in-groups and out-groups, and if our likelihood of considering perspectives other than our own increases the more we’re exposed to those perspectives in a non-hostile environment, how do we start taking down those walls?

Here’s where it gets interesting.

What we need to break down the walls between “us” and “them” is to find another “them” that we can then unite against.

One of the theories about why the U.S. is so polarized now is that, with the end of the Cold War, the country lost a common enemy that united “us” in opposition to “them.” Without the USSR, our natural tendency to categorize ourselves into in-groups and out-groups had no option but to turn inwards. You might think this is hogwash, but before you throw me into the “them” camp, let me tell you about what happened in Robbers Cave State Park in Oklahoma.

One of the experiments into this in-group/out-group phenomenon was conducted by psychologist Muzafer Sherif in the summer of 1954. He and his associates took 22 boys of similar backgrounds (i.e., they were all white, Protestant and from two-parent homes) to a summer camp at Robbers Cave and randomly divided them into two groups. First, they built team loyalty; then they gradually introduced a competitive environment between the two groups. Predictably, animosity and prejudice soon developed between them.

Sherif and his assistants then introduced a four-day cooling-off period and tried to reduce conflict by mixing the two groups. It didn’t work. In fact, it just made things worse. Things didn’t improve until the two groups were brought together to overcome a common obstacle, when the experimenters purposely sabotaged the camp’s water supply. Suddenly, the two groups came together to overcome a bigger challenge. This, by the way, is exactly the same theory behind the process that NASA and Blue Origin use to build trust in their flight crews.

As I said, when I started this journey, I was squarely in the “us” vs “them” camp. And – to be honest – I’m still fighting my instinct to stay there. But I don’t think that’s the best way forward. I’m hoping that as our world inches towards a better state of normal, everyday life will start to force the camps together and our evolved instincts for cooperation will start to reassert themselves.

I also believe that the past 19 months (and counting) will be a period that sociologists and psychologists will study for years to come, as it’s been an ongoing experiment in human behavior at a scope that may never happen again.

We can certainly hope so.

Why Is Willful Ignorance More Dangerous Now?

In last week’s post, I talked about how the presence of willful ignorance is becoming something we not only have to accept, but also learn how to deal with. In that post, I intimated that the stakes are higher than ever, because willful ignorance can do real damage to our society and our world.

So, if we’ve lived with willful ignorance for our entire history, why is it now especially dangerous? I suspect it’s not so much that willful ignorance has changed, but rather the environment in which we find it.

The world we live in is more complex because it is more connected. But there are two sides to this connection: in one sense, we’re more connected than ever; in another, we’re further apart than ever before.

Technology Connects Us…

Our world and our society are made of networks. And when it comes to our society, connection creates networks that are more interdependent, leading to complex behaviors and non-linear effects.

We must also realize that our rate of connection is accelerating. The pace of technology has long been governed by Moore’s Law, the observation that the speed and capability of our computers double roughly every two years. For almost 60 years, this law has been surprisingly accurate.
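To get a feel for what “doubling every two years” compounds to over those six decades, here’s a rough, purely illustrative calculation (my own sketch, not drawn from any cited source):

```python
# Compounding implied by Moore's Law: one doubling roughly every two years.
years = 60
doublings = years // 2          # about 30 doublings over 60 years
growth_factor = 2 ** doublings
print(f"{doublings} doublings -> roughly a {growth_factor:,}x increase")
# 30 doublings works out to a factor of about a billion.
```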

What this has meant for our ability to connect digitally is that the number and impact of our connections have also increased exponentially, and they will continue to increase into the future. This creates a much denser and more interconnected network, but it has also created a network that overcomes the naturally self-regulating effects of distance.

For the first time, we can have strong and influential connections with others on the other side of the globe. And, as we forge more connections through technology, we are starting to rely less on our physical connections.

And Drives Us Further Apart

The wear and tear of a life spent bumping into each other in a physical setting tends to smooth out our rougher ideological edges. In face-to-face settings, most of us are willing to moderate our own personal beliefs in order to conform to the rest of the crowd. In the early 1950s, psychologist Solomon Asch showed how willing we are to ignore the evidence of our own eyes in order to conform to the majority opinion of a crowd.

For the vast majority of our history, physical proximity has forced social conformity upon us. It smooths out our own belief structure in order to keep the peace with those closest to us, fulfilling one of our strongest evolutionary urges.

But, thanks to technology, that’s also changing. We are spending more time physically separated but technically connected. Our social conformity mechanisms are being short-circuited by filter bubbles where everyone seems to share our beliefs. This creates something called an availability bias: the things we see coming through our social media feeds form our view of what the world must be like, even though statistically they are not representative of reality.

It gives the willfully ignorant the illusion that everyone agrees with them — or, at least, enough people agree with them that it overcomes the urge to conform to the majority opinion.

Ignorance in a Chaotic World

These two things make our world increasingly fragile and subject to what chaos theorists call the Butterfly Effect, where seemingly small things can make massive differences.

It’s this unique nature of our world, which is connected in ways it never has been before, that creates at least three reasons why willful ignorance is now more dangerous than ever:

One: The impact of ignorance can be quickly amplified through social media, causing a Butterfly Effect cascade. Case in point: the falsehood that the U.S. election results weren’t valid, which led to the Capitol insurrection of Jan. 6.

The mechanics of social media that led to this issue are many, and I have cataloged most of them in previous columns: the nastiness that comes from arm’s-length discourse, a rewiring of our morality, and the impact of filter bubbles on our collective thresholds governing anti-social behaviors.

Two: Probably a bigger cause for concern is that the willfully ignorant are very easily consolidated into a power base for politicians willing to play to their beliefs. The far right — and, to a somewhat lesser extent, the far left — has learned this to devastating effect. All you have to do is abandon your predilection for telling the truth so you can help them rationalize their deliberate denial of facts. Do this and you have tribal support that is almost impossible to shake.

The move of populist politicians to use the willfully ignorant as a launch pad for their own purposes further amplifies the Butterfly Effect, ensuring that the previously unimaginable will continue to be the new state of normal.

Three: Our expanding impact on the physical world. It’s not just our degree of connection that technology is changing exponentially; it’s also the degree of impact we have on our physical world.

For almost our entire time on earth, the world has made us. We have evolved to survive in our physical environment, where we have been subject to the whims of nature.

But now, increasingly, we humans are shaping the nature of the world we live in. Our footprint has an ever-increasing impact on our environment, and that footprint is also increasing exponentially, thanks to technology.

The earth and our ability to survive on it are — unfortunately — now dependent on our stewardship. And that stewardship is particularly susceptible to the impact of willful ignorance. In the area of climate change alone, willful ignorance could lead — and has led — to events with massive consequences. A recent study estimates that climate change is directly responsible for 5 million deaths a year.

For all these reasons, willful ignorance is now something that can have life and death consequences.

Making Sense of Willful Ignorance

Willful ignorance is nothing new. Depending on your beliefs, you could say it was willful ignorance that got Adam and Eve kicked out of the Garden of Eden. But the visibility of it is higher than it’s ever been before. In the past couple of years, we have had a convergence of factors that has pushed willful ignorance to the surface — a perfect storm of fact denial.

Some of those factors include the rise of social media, the erosion of traditional journalism and a global health crisis that has us all focusing on the same issue at the same time. The net result of all this is that we all have a very personal interest in the degree of ignorance prevalent in our society.

In one very twisted way, this may be a good thing. As I said, the willfully ignorant have always been with us. But we’ve always been able to shrug and move on, muttering “stupid is as stupid does.”

Now, however, the stakes are getting higher. Our world and society are at a point where willful ignorance can inflict some real and substantial damage. We need to take it seriously and we must start thinking about how to limit its impact.

So, for myself, I’m going to spend some time understanding willful ignorance. Feel free to come along for the ride!

It’s important to understand that willful ignorance is not the same as being stupid — or even just being ignorant, despite thousands of social media memes to the contrary.

Ignorance is one thing. It means we don’t know something. And sometimes, that’s not our fault. We don’t know what we don’t know. But willful ignorance is something very different. It is us choosing not to know something.

For example, I know many smart people who have chosen not to get vaccinated. Their reasons may vary. I suspect fear is a common denominator, and there is no shame in that. But rather than seek information to allay their fears, these folks have doubled down on beliefs based on little to no evidence. They have made a choice to ignore the information that is freely available.

And that’s doubly ironic, because the very same technology that enables willful ignorance has made more information available than ever before.

Willful ignorance is defined as “a decision in bad faith to avoid becoming informed about something so as to avoid having to make undesirable decisions that such information might prompt.”

And this is where the problem lies. The explosion of content has meant there is always information available to support any point of view. Add to that the breakdown of journalistic principles that has occurred over the past 40 years, and we have a dangerous world of information that has been deliberately falsified in order to appeal to a segment of the population that has chosen to be willfully ignorant.

It seems a contradiction: The more information we have, the more that ignorance is a problem. But to understand why, we have to understand how we make sense of the world.

Making Sense of Our World

Sensemaking is a concept that was first introduced by organizational theorist Karl Weick in the 1970s. The concept has been borrowed by those working in the areas of machine learning and artificial intelligence. At the risk of oversimplification, it provides us a model to help us understand how we “give meaning to our collective experiences.”

[Diagram: the sensemaking process, from D.T. Moore and R. Hoffman, 2011]

The above diagram (from a 2011 paper by David T. Moore and Robert R. Hoffman) shows the sensemaking process. It starts with a frame — our understanding of what is true about the world. As we get presented with new data, we have to make a choice: Does it fit our frame or doesn’t it?

If it does, we preserve the frame and may elaborate on it, fitting the new data into it. If the data doesn’t support our existing frame, we then have to reframe, building a new frame from scratch.
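As a purely illustrative sketch of that frame-or-reframe choice (my own rendering, not anything from Moore and Hoffman’s paper; the fits, elaborate and rebuild_frame functions below are hypothetical placeholders):

```python
# A minimal, hypothetical sketch of the sensemaking loop described above.
def make_sense(frame, incoming_data, fits, elaborate, rebuild_frame):
    """Update a frame (our working model of the world) as new data arrives."""
    for data in incoming_data:
        if fits(frame, data):
            # The data supports the frame: preserve it and elaborate on it.
            frame = elaborate(frame, data)
        else:
            # The data contradicts the frame: reframe, building a new one.
            frame = rebuild_frame(data)
    return frame
```

In this picture, willful ignorance amounts to rigging the fits test so it never fails, or simply refusing to let contradictory data into the loop at all.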

Our brains love frames. It’s much less work for the brain to keep a frame than to build a new one. That’s why we tend to stick with our beliefs — another word for frames — until we’re forced to discard them.

But, as with all human traits, our ways of making sense of the world vary across the population. Some of us are more apt to spend time on the right side of the diagram above, always open to evidence that may cause us to reframe.

That, by the way, is exactly how science is supposed to work. We refer to this capacity as critical thinking: the objective analysis and evaluation of data in order to form a judgment, even if it causes us to have to build a new frame.

Others hold onto their frames for dear life. They go out of their way to ignore data that may cause them to have to discard the frames they hold. This is what I would define as willful ignorance.

It’s misleading to think of this as just being ignorant: that would simply indicate a lack of available data. It’s also misleading to attribute it to a lack of intelligence: that would be an inability to process the data. With willful ignorance, we’re not talking about either of those things. We are talking about a conscious and deliberate decision to ignore available data. And I don’t believe you can fix that.

We fall into the trap of thinking we can educate, shame or argue people out of being willfully ignorant. We can’t. This post is not intended for the willfully ignorant. They have already ignored it. This is just the way their brains work. It’s part of who they are. Wishing they weren’t this way is about as pointless as wishing they were a world-class pole vaulter, that they were seven feet tall or that their brown eyes were blue.

We have to accept that this situation is not going to change. And that’s what we have to start thinking about. Given that we have willful ignorance in the world, what can we do to minimize its impact?

Re-engineering the Workplace

What happens when over 60,000 Microsoft employees are forced to work from home because of a pandemic? Funny you should ask. Microsoft just came out with a large-scale study that looks at exactly that question. The good news is that employees feel more included and supported by their managers than ever. But there is bad news as well:

“Our results show that firm-wide remote work caused the collaboration network of workers to become more static and siloed, with fewer bridges between disparate parts. Furthermore, there was a decrease in synchronous communication and an increase in asynchronous communication. Together, these effects may make it harder for employees to acquire and share new information across the network.”

To me, none of this is surprising. On a much smaller scale, we experienced exactly this when we experimented with a virtual workplace a decade ago. In fact, it echoes the pros and cons of a virtual workplace that I have talked about in previous posts, particularly the two (one, two) that dealt with the concept of “burstiness” – those magical moments of collaborative creativity experienced when a room full of people gets “on a roll.”

What this study does do, however, is provide empirical evidence to back up my hunches. There is nothing like a global pandemic to allow the recruitment of a massive sample to study the impact of working from home.

In many, many aspects of our society, COVID was a game changer. It forcefully pushed us along the adoption curve, mandating widescale adoption of technologies that we probably would have been much happier to simply dabble in. The virtual workplace was one of these, but there were others.

Yet this example, because of the breadth of its impact, gives us an insightful glimpse into one particular trend: we are increasingly swapping the ability to be physically together for a virtual connection mediated through technology. The first is a huge part of our evolved social strategies, hundreds of thousands of years in the making. The second is barely a couple of decades old. There are bound to be consequences, both intended and unintended.

In today’s post, I want to take another angle to look at the pros and cons of a virtual workplace – by exploring how music has been made over the past several decades.

Supertramp and Studio Serendipity

My brother-in-law is a walking encyclopedia of music trivia. He put me on to this particular tidbit from one of my favorite bands of the ’70s and ’80s – Supertramp.

The band was in the studio working on its “Breakfast in America” album. In the corner of the studio, someone was playing a handheld video game during a break in the recording: Mattel’s Football. The game made a distinctive double beep on fourth down. Roger Hodgson heard it, and now that same sound can be heard at the 3:24 mark of “The Logical Song,” just after the lyric “d-d-digital.”

This is just one example of what I would call “Studio Serendipity.” For every band, every album, every song that was recorded collaboratively in the studio, there are examples like this of creativity that just sprang from people being together. It is an example of that “burstiness” I was talking about in my previous posts.

Billie Eilish and the Virtual Studio

But for this serendipity to even happen, you had to get into a recording studio. And the barriers to doing that were significant. You had to get a record deal or, if you were going independent, save up enough money to rent a studio.

For the other side of the argument, let’s talk about Billie Eilish. She and her brother Finneas embody virtual production. We first heard about Billie in 2015, when they recorded “Ocean Eyes” in a bedroom in the family’s tiny L.A. bungalow and uploaded it to SoundCloud. Billie was 14 at the time. The song went viral overnight, and it did lead to a record deal, but their breakout album, “When We All Fall Asleep, Where Do We Go?”, was recorded in that same bedroom.

Digital technology dismantled the vertical hierarchy of record labels and democratized the industry. If that hadn’t happened, we might never have heard of Billie Eilish.

The Best of Both Worlds

Choosing between virtual and physical workplaces is not a binary choice. In the two examples I gave, creativity was a hybrid that came from both solitary inspiration and collaborative improvisation. The first thrives in a virtual workplace and the second works best when we’re physically together. There are benefits to both models, and these benefits are non-exclusive.

A hybrid model can give you the best of both worlds, but you have to take into account a number of things that might be a stretch for typical HR policies – things like evolutionary psychology, cognition and attentional focus, non-verbal communication strategies, and something that neuroscientist Antonio Damasio calls “somatic markers.” According to Damasio, we think as much with our bodies as we do with our brains.

Our performance in anything is tied to our physical surroundings. And when we are looking to replace a physical workplace with a virtual substitute, we have to appreciate the significance this has on us subconsciously.

Re-engineering Communication

Take communication, for example. We may feel that we have more ways than ever to communicate with our colleagues, including an entire toolbox of digital platforms. But none of them account for one simple fact: the majority of our communication is non-verbal. We communicate with our eyes, our hands, our bodies, our expressions and the tone of our voice. Trying to squeeze all this through the trickle of bandwidth that technology provides, even when we have video available, is just going to produce frustration. It is no substitute for being in the same room together, sharing the same circumstances. It would be like trying to race a car with only one cylinder firing.

This is perhaps the single biggest drawback of the virtual workplace – the lack of “somatic” connection, the shared physical bond that underlies so much of how we function. When you boil it down, it is the essential ingredient for “burstiness.” And I just don’t think we have a technological substitute for it – not at this point, anyway.

But the same person who discovered burstiness does have one rather counterintuitive suggestion. If we can’t be in the same room together, perhaps we have to “dumb down” the technology we use. Anita Williams Woolley suggests the good, old-fashioned phone call might truly be the next best thing to being there.

Adrift in the Metaverse

Humans are nothing if not chasers of bright, shiny objects. Our attention is always focused beyond the here and now. That is especially true when here and now is a bit of a dumpster fire.

The ultrarich know that this is part of the human psyche, and they are doubling down their bets on it. Jeff Bezos and Elon Musk are betting on space. But others — including Mark Zuckerberg — are betting on something called the metaverse.

Just this past summer, Zuck told his employees about his master plan for Facebook:

“Our overarching goal across all of (our) initiatives is to help bring the metaverse to life.”

So what exactly is the metaverse? According to Wikipedia, it is

“a collective virtual shared space, created by the convergence of virtually enhanced physical reality and physically persistent virtual space, including the sum of all virtual worlds, augmented reality, and the Internet.”

The metaverse is a world of our own making, which exists in the dimensions of a digital reality. There we imagine we can fix what we screwed up in the maddeningly unpredictable real world. It is the ultimate in bright, shiny objects.

Science fiction and the entertainment industry have been toying with the idea of the metaverse for some time now. The term itself comes from Neal Stephenson’s 1992 novel “Snow Crash.” It has been given the Hollywood treatment numerous times, notably in “The Matrix” and “Ready Player One.” But Silicon Valley venture capitalists are rushing to make fiction into fact.

You can’t really blame us for throwing in the towel on the world we have systematically wrecked. There are few glimmers of hope out there in the real world. What we have wrought is painful to contemplate. So we are doing what we’ve always done: reaching for what we want rather than fixing what we have. Take the Reporters Without Borders Uncensored Library, for example.

There are many places in the real world where journalism is censored, like Russia, the Middle East, Vietnam and China. But in the metaverse, there is the option of leapfrogging over all the political hurdles we stumble over in the real world. So Reporters Without Borders and two German creative agencies built a meta library in the meta world of Minecraft. Here, censored articles are made into virtual books, accessible to all who want to check them out.

It’s hard to find fault with this. Censorship is a tool of oppression. Here, a virtual world offered an inviting loophole to circumvent it. The metaverse came to the rescue. What is the problem with that?

The biggest risk is this: We weren’t built for the metaverse. We can probably adapt to it, somewhat, but everything that makes us tick has evolved in a flesh-and-blood world, and, to quote a line from Joni Mitchell’s “Big Yellow Taxi,” “you don’t know what you’ve got till it’s gone.”

It’s fair to say that right now the metaverse is a novelty. Most of your neighbors, friends and family have never heard of it. But odds are it will become part of our lives. In a 2019 Wired article called “Welcome to the Mirror World,” Kevin Kelly explained, “we are building a 1-to-1 map of almost unimaginable scope. When it’s complete, our physical reality will merge with the digital universe.”

In a Forbes article, futurist Cathy Hackl gives us an example of what this merger might look like:

“Imagine walking down the street. Suddenly, you think of a product you need. Immediately next to you, a vending machine appears, filled with the product and variations you were thinking of. You stop, pick an item from the vending machine, it’s shipped to your house, and then continue on your way.”

That sounds benign — even helpful. But if we’ve learned one thing it’s this: When we try to merge technology with human behavior, there are always unintended consequences that arise. And when we’re talking about the metaverse, those consequences will likely be massive.

It is hubristic in the extreme to imagine we can engineer a world that will be a better match for our evolved humanware mechanics than the world we actually evolved within. It’s sheer arrogance to imagine we can build that world, and also arrogant to imagine that we can thrive within it.

We have a bright, shiny bias built into us that will likely lead us to ignore the crumbling edifice of our reality. German futurist Gerd Leonhard, for one, warns us about an impending collision between technology and humanity:

“Technology is not what we seek but how we seek: the tools should not become the purpose. Yet increasingly, technology is leading us to ‘forget ourselves.’”

Imagine a Pandemic without Technology

As the writer of a weekly post that tends to look at the intersection between human behavior and technology, the past 18 months have been interesting – and by interesting, I mean a twisted ride through gut-wrenching change unlike anything I have ever seen before.

I can’t even narrow it down to 18 months. Before that, there was plenty more that was “unprecedented” – to berrypick a word from my post from a few weeks back. I have now been writing for MediaPost in one place or another for 17 years. My very first post was on August 19, 2004. That was 829 posts ago. If you add the additional posts I’ve done for my own blog – outofmygord.com – I’ve just ticked over 1,100 on my odometer. That’s a lot of soul-searching about technology. And the last several months have still been in a class by themselves.

Now, part of this might be where my own head is at. Believe it or not, I do sometimes try to write something positive. But as soon as my fingers hit the keyboard, things seem to spiral downwards. Every path I take seems to take me somewhere dark. There has been precious little that has sparked optimism in my soul.

Today, for example, prior to writing this, I took three passes at writing something else. Each quickly took a swerve towards impending doom. I’m getting very tired of this. I can only imagine how you feel, reading it.

So I finally decided to try a thought experiment. “What if,” I wondered, “we had gone through the past 17 months without the technology we take for granted? What if there was no Internet, no computers, no mobile devices? What if we had lived through the pandemic with only the technology we had – say – a hundred years ago, during the global Spanish flu pandemic that started in 1918? Perhaps the best way to determine the sum total of technology’s contribution is to do it by process of elimination.”

The Cons

Let’s get the negatives out of the way. First, you might say that technology enabled the flood of misinformation and conspiracy theorizing that has been so top-of-mind for us. Well, yes – and no.

Distrust in authority is nothing new. It’s always been there, at one end of a bell curve that spans the attitudes of our society. And nothing brings the outliers of society into global focus faster than a crisis that affects all of us.

There was public pushback against the very first vaccine ever invented: the smallpox vaccine. Now, granted, the early method was to rub pus from a cowpox blister into a cut in your skin and hope for the best. But it worked. Smallpox is now a thing of the past.

And, if we are talking about pushback against public health measures, that’s nothing new either. Exactly the same thing happened during the 1918-1919 pandemic. Here’s one eerily familiar excerpt from a journal article looking at the issue: “Public-gathering bans also exposed tensions about what constituted essential vs. unessential activities. Those forced to close their facilities complained about those allowed to stay open. For example, in New Orleans, municipal public health authorities closed churches but not stores, prompting a protest from one of the city’s Roman Catholic priests.”

What is different, thanks to technology, is that public resistance is so much more apparent than it’s ever been before. And that resistance comes with faces and names we know attached. People are posting opinions on social media that they would probably never voice in a face-to-face setting, especially if they knew you disagreed with them. Our public and private discourse is now held at arm’s length by technology. Gone are all the moderating effects that come with sharing the same physical space.

The Pros

Try as I might, I couldn’t think of another “con” that technology has brought to the past 17 months. The “pro” list, however, is far too long to cover in this post, so I’ll just mention a few that come immediately to mind.

Let’s begin with the counterpoint to the aforementioned “con” – the misinformation factor. While misinformation was definitely spread over the past year and a half, so was reliable, factual information. And for those willing to pay attention to it, it enabled us to find out what we needed to know in order to practice public health measures at a speed previously unimagined. Without technology, we would have been slower to act and – perhaps – fewer of us would have acted at all. At worst, in this case technology probably nets out to zero.

But technology also enabled the world to keep functioning, even if it was in a different form. Working from home would have been impossible without it. Commercial engines kept chugging along. Business meetings switched to online platforms. The Dow Jones Industrial Average, as of this writing, is over 20% higher than it was before the pandemic. In contrast, over the course of the 1918-1919 pandemic, the stock market was almost 32% lower at the end of the third wave than it was at the start of the first. Of course, there are other factors to consider, but I suspect we can thank technology for at least some of that difference.

It’s easy to point to the negatives that technology brings, but if you consider it as a whole, technology is overwhelmingly a blessing.

What was interesting to me in this thought experiment was how apparent it was that technology keeps the cogs of our society functioning more effectively, but if there is a price to be paid, it typically comes at the cost of our social bonds.