Moving Beyond Willful Ignorance

This is not the post I thought I’d be writing today. Two weeks ago, when I started trying to understand willful ignorance, I was mad. I suspect I shared that feeling with many of you. I was tired of the deliberate denial of fact that had consequences for all of us. I was frustrated with anti-masking, anti-vaxxing, climate change denial and, most of all, anti-science sentiment. I was ready to go to war with those I saw in the other camp.

And that, I found out, is exactly the problem. Let me explain.

First, to recap. As I talked about two weeks ago, willful ignorance is a decision based on beliefs, so it’s very difficult – if not impossible – to argue, cajole or inform people out of it. And, as I wrote last week, willful ignorance has some very real and damaging consequences. This post was supposed to talk about what we do about that problem. I intended to find ways to isolate the impact of willful ignorance and minimize its downside. In doing so, I was going to suggest putting up even more walls to separate “us” from “them.”

But the more I researched this and thought about it, the more I realized that that was exactly the wrong approach. Because this recent plague of willful ignorance is many things, but – most of all – it’s one more example of how we love to separate “us” from “them.” And both sides, including mine, are equally guilty of doing this. The problem we have to solve here is not so much to change the way that some people process information (or don’t) in a way we may not agree with. What we have to fix is a monumental breakdown of trust.

Beliefs thrive in a vacuum. In a vacuum, there’s nothing to challenge them. And we have all been forced into a kind of ideological vacuum for the past year and a half. I talked about how our physical world creates a more heterogeneous ideological landscape than our virtual world does. In a normal life, we are constantly rubbing elbows with those of all leanings. And, if we want to function in that life, we have to find a way to get along with them, even if we don’t like them or agree with them. For most of us, that natural and temporary social bonding is something we haven’t had to do much lately.

It’s this lowering of our ideological defence systems that starts to bridge the gaps between us and them. And it also starts pumping oxygen into our ideological vacuums, prying the lids off our air-tight belief systems. It might not have a huge impact, but this doesn’t require a huge impact. A little trust can go a long way.

After World War II, psychologists and sociologists started to pick apart a fundamental question – how did our world go to war with itself? How, in the name of humanity, did the atrocities of the war occur? One of the areas they started to explore with vigour was this fundamental need of humans to sort ourselves into the categories of “us” and “them”.

In the 1970s, psychologist Henri Tajfel found that we barely need a nudge to start creating in-groups and out-groups. We’ll do it over anything, even something as trivial as which abstract artist, Klee or Kandinsky, we prefer. Once sorted on the flimsiest of premises, these groups showed a strong preference to favour their own group and punish the other. There was no pre-existing animosity between the groups, but in games such as the Banker’s Game, they showed that they would even forego rewards for themselves if it meant depriving the other group of their share.

If we do this for completely arbitrary reasons such as those used by Tajfel, imagine how nasty we can get when the stakes are much higher, such as our own health or the future of the planet.

So, if we naturally sort ourselves into in-groups and out-groups, and if exposure to other perspectives in a non-hostile environment makes us more likely to consider them, how do we start taking down those walls?

Here’s where it gets interesting.

What we need to break down the walls between “us” and “them” is to find another “them” that we can then unite against.

One of the theories about why the US is so polarized now is that with the end of the Cold War, the US lost a common enemy that united “us” in opposition to “them”. Without the USSR, our natural tendency to categorize ourselves into in-groups and out-groups had no option but to turn inwards. You might think this is hogwash, but before you throw me into the “them” camp, let me tell you about what happened in Robbers Cave State Park in Oklahoma.

One of the experiments into this in-group/out-group phenomenon was conducted by psychologist Muzafer Sherif in the summer of 1954. He and his associates took 22 boys of similar backgrounds (i.e., all white, Protestant and from two-parent homes) to a summer camp at Robbers Cave and randomly divided them into two groups. First they built team loyalty, and then they gradually introduced a competitive environment between the two groups. Predictably, animosity and prejudice soon developed between them.

Sherif and his assistants then introduced a four-day cooling-off period and tried to reduce the conflict by simply mixing the two groups. It didn’t work. In fact, it just made things worse. Things didn’t improve until the two groups were brought together to overcome a common obstacle, when the experimenters purposely sabotaged the camp’s water supply. Suddenly, the two groups united to overcome a bigger challenge. This, by the way, is the same principle NASA and Blue Origin use to build trust in their flight crews.

As I said, when I started this journey, I was squarely in the “us” vs “them” camp. And – to be honest – I’m still fighting my instinct to stay there. But I don’t think that’s the best way forward. I’m hoping that as our world inches towards a better state of normal, everyday life will start to force the camps together and our evolved instincts for cooperation will start to reassert themselves.

I also believe that the past 19 months (and counting) will be a period that sociologists and psychologists study for years to come, as it has been an ongoing experiment in human behavior on a scale that may never be repeated.

We can certainly hope so.

Why Is Willful Ignorance More Dangerous Now?

In last week’s post, I talked about how the presence of willful ignorance is becoming something we not only have to accept, but also learn how to deal with. In that post, I intimated that the stakes are higher than ever, because willful ignorance can do real damage to our society and our world.

So, if we’ve lived with willful ignorance for our entire history, why is it now especially dangerous? I suspect it’s not so much that willful ignorance has changed, but rather the environment in which we find it.

The world we live in is more complex because it is more connected. But this connection has two sides: in some ways we’re more connected than ever, and in others we’re further apart than ever before.

Technology Connects Us…

Our world and our society are made of networks. And when it comes to our society, connection creates networks that are more interdependent, leading to complex behaviors and non-linear effects.

We must also realize that our rate of connection is accelerating. The pace of technology has long been governed by Moore’s Law: the observation that the number of transistors on a chip (and, roughly, the speed and capability of our computers) doubles about every two years. For almost 60 years, this observation has proven surprisingly accurate.

What this has meant for our ability to connect digitally is that the number and impact of our connections has also increased exponentially, and it will continue to increase in our future. This creates a much denser and more interconnected network, but it has also created a network that overcomes the naturally self-regulating effects of distance.
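To see why this compounding matters, here is a back-of-the-envelope sketch of what repeated two-year doubling produces. The 1965 baseline year and the clean doubling period are illustrative assumptions for the arithmetic, not measured figures:

```python
# A toy illustration of compounding under a Moore's Law-style doubling.
# The baseline year (1965) and exact two-year period are assumptions
# chosen for round numbers, not measured data.

def doublings(start_year: int, end_year: int, period_years: int = 2) -> int:
    """Number of complete doubling periods between two years."""
    return (end_year - start_year) // period_years

def relative_capability(start_year: int, end_year: int, period_years: int = 2) -> int:
    """Capability relative to the start-year baseline after repeated doubling."""
    return 2 ** doublings(start_year, end_year, period_years)

# Sixty years of two-year doublings is 30 doublings,
# which is roughly a billion-fold increase over the baseline.
print(doublings(1965, 2025))            # 30
print(relative_capability(1965, 2025))  # 1073741824
```

The point of the sketch is simply that exponential processes are front-loaded toward the recent past: the last doubling adds more raw capability than all the previous doublings combined.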

For the first time, we can have strong and influential connections with others on the other side of the globe. And, as we forge more connections through technology, we are starting to rely less on our physical connections.

And Drives Us Further Apart

The wear and tear of a life spent bumping into each other in a physical setting tends to smooth out our rougher ideological edges. In face-to-face settings, most of us are willing to moderate our personal beliefs in order to conform to the rest of the crowd. In the early 1950s, psychologist Solomon Asch showed how willing we are to ignore the evidence of our own eyes in order to conform to the majority opinion of a crowd.

For the vast majority of our history, physical proximity has forced social conformity upon us. It smooths out our own belief structures in order to keep the peace with those closest to us, satisfying one of our strongest evolutionary urges.

But, thanks to technology, that’s also changing. We are spending more time physically separated but technologically connected. Our social conformity mechanisms are being short-circuited by filter bubbles where everyone seems to share our beliefs. This creates something called an availability bias: the things we see coming through our social media feeds form our view of what the world must be like, even though they are not statistically representative of reality.

It gives the willfully ignorant the illusion that everyone agrees with them — or, at least, enough people agree with them that it overcomes the urge to conform to the majority opinion.

Ignorance in a Chaotic World

These two things make our world increasingly fragile and subject to what chaos theorists call the Butterfly Effect, where seemingly small things can make massive differences.

It’s this unique nature of our world, which is connected in ways it never has been before, that creates at least three reasons why willful ignorance is now more dangerous than ever:

One: The impact of ignorance can be quickly amplified through social media, causing a Butterfly Effect cascade. Case in point, the falsehood that the U.S. election results weren’t valid, leading to the Capitol insurrection of Jan. 6.

The mechanics of social media that led to this issue are many, and I have cataloged most of them in previous columns: the nastiness that comes from arm’s-length discourse, a rewiring of our morality, and the impact of filter bubbles on our collective thresholds governing anti-social behaviors.

Two: Probably a bigger cause for concern, the willfully ignorant are very easily consolidated into a power base by politicians willing to play to their beliefs. The far right (and, to a somewhat lesser extent, the far left) has learned this to devastating effect. All you have to do is abandon your predilection for telling the truth and help them rationalize their deliberate denial of facts. Do this and you have tribal support that is almost impossible to shake.

The move of populist politicians to use the willfully ignorant as a launch pad for their own purposes further amplifies the Butterfly Effect, ensuring that the previously unimaginable will continue to be the new state of normal.

Three: Our expanding impact on the physical world. It’s not just our degree of connection that technology is changing exponentially. It’s also the degree of impact we have on our physical world.

For almost our entire time on earth, the world has made us. We have evolved to survive in our physical environment, where we have been subject to the whims of nature.

But now, increasingly, we humans are shaping the nature of the world we live in. Our footprint has an ever-increasing impact on our environment, and that footprint is also increasing exponentially, thanks to technology.

The earth and our ability to survive on it are — unfortunately — now dependent on our stewardship. And that stewardship is particularly susceptible to the impact of willful ignorance. In the area of climate change alone, willful ignorance could — and has — led to events with massive consequences. A recent study estimates that climate change is directly responsible for 5 million deaths a year.

For all these reasons, willful ignorance is now something that can have life and death consequences.

Making Sense of Willful Ignorance

Willful ignorance is nothing new. Depending on your beliefs, you could say it was willful ignorance that got Adam and Eve kicked out of the Garden of Eden. But the visibility of it is higher than it’s ever been before. In the past couple of years, we have had a convergence of factors that has pushed willful ignorance to the surface — a perfect storm of fact denial.

Some of those effects include the social media effect, the erosion of traditional journalism and a global health crisis that has us all focusing on the same issue at the same time. The net result of all this is that we all have a very personal interest in the degree of ignorance prevalent in our society.

In one very twisted way, this may be a good thing. As I said, the willfully ignorant have always been with us. But we’ve always been able to shrug and move on, muttering “stupid is as stupid does.”

Now, however, the stakes are getting higher. Our world and society are at a point where willful ignorance can inflict some real and substantial damage. We need to take it seriously and we must start thinking about how to limit its impact.

So, for myself, I’m going to spend some time understanding willful ignorance. Feel free to come along for the ride!

It’s important to understand that willful ignorance is not the same as being stupid — or even just being ignorant, despite thousands of social media memes to the contrary.

Ignorance is one thing. It means we don’t know something. And sometimes, that’s not our fault. We don’t know what we don’t know. But willful ignorance is something very different. It is us choosing not to know something.

For example, I know many smart people who have chosen not to get vaccinated. Their reasons may vary. I suspect fear is a common denominator, and there is no shame in that. But rather than seek information to allay their fears, these folks have doubled down on beliefs based on little to no evidence. They have made a choice to ignore the information that is freely available.

And that’s doubly ironic, because the very same technology that enables willful ignorance has made more information available than ever before.

Willful ignorance is defined as “a decision in bad faith to avoid becoming informed about something so as to avoid having to make undesirable decisions that such information might prompt.”

And this is where the problem lies. The explosion of content has meant there is always information available to support any point of view. We also have the breakdown of journalistic principles that occurred in the past 40 years. Combined, we have a dangerous world of information that has been deliberately falsified in order to appeal to a segment of the population that has chosen to be willfully ignorant.

It seems a contradiction: The more information we have, the more that ignorance is a problem. But to understand why, we have to understand how we make sense of the world.

Making Sense of Our World

Sensemaking is a concept that was first introduced by organizational theorist Karl Weick in the 1970s. It has since been borrowed by those working in machine learning and artificial intelligence. At the risk of oversimplification, it gives us a model for understanding how we “give meaning to our collective experiences.”

[Diagram: the sensemaking process, from D.T. Moore and R. Hoffman, 2011]

The above diagram (from a 2011 paper by David T. Moore and Robert R. Hoffman) shows the sensemaking process. It starts with a frame — our understanding of what is true about the world. As we get presented with new data, we have to make a choice: Does it fit our frame or doesn’t it?

If it does, we preserve the frame and may elaborate on it, fitting the new data into it. If the data doesn’t support our existing frame, we then have to reframe, building a new frame from scratch.

Our brains love frames. It’s much less work for the brain to keep a frame than to build a new one. That’s why we tend to stick with our beliefs (another word for a frame) until we’re forced to discard them.

But, as with all human traits, our ways of making sense of the world vary across the population. Some of us spend more time on the right side of the diagram: more open to reframing, and more willing to consider evidence that may force us to rebuild our frames.

That, by the way, is exactly how science is supposed to work. We refer to this capacity as critical thinking: the objective analysis and evaluation of data in order to form a judgment, even if it forces us to build a new frame.

Others hold onto their frames for dear life. They go out of their way to ignore data that may cause them to have to discard the frames they hold. This is what I would define as willful ignorance.

It’s misleading to think of this as just being ignorant. That would simply indicate a lack of available data. It’s also misleading to attribute this to a lack of intelligence. That would be an inability to process the data. With willful ignorance, we’re not talking about either of those things. We are talking about a conscious and deliberate decision to ignore available data. And I don’t believe you can fix that.

We fall into the trap of thinking we can educate, shame or argue people out of being willfully ignorant. We can’t. This post is not intended for the willfully ignorant. They have already ignored it. This is just the way their brains work. It’s part of who they are. Wishing they weren’t this way is about as pointless as wishing they were a world-class pole vaulter, that they were seven feet tall or that their brown eyes were blue.

We have to accept that this situation is not going to change. And that’s what we have to start thinking about. Given that we have willful ignorance in the world, what can we do to minimize its impact?

Imagine a Pandemic without Technology

As the writer of a weekly post that tends to look at the intersection between human behavior and technology, the past 18 months have been interesting – and by interesting, I mean a twisted ride through gut-wrenching change unlike anything I have ever seen before.

I can’t even narrow it down to 18 months. Before that, there was plenty more that was “unprecedented” – to cherry-pick a word from my post from a few weeks back. I have now been writing for MediaPost in one place or another for 17 years. My very first post was on August 19, 2004. That was 829 posts ago. If you add the additional posts I’ve done for my own blog – outofmygord.com – I’ve just ticked over 1,100 on my odometer. That’s a lot of soul searching about technology. And the last several months have still been in a class by themselves.

Now, part of this might be where my own head is at. Believe it or not, I do sometimes try to write something positive. But as soon as my fingers hit the keyboard, things seem to spiral downwards. Every path I take seems to take me somewhere dark. There has been precious little that has sparked optimism in my soul.

Today, for example, prior to writing this, I took three passes at writing something else. Each quickly took a swerve towards impending doom. I’m getting very tired of this. I can only imagine how you feel, reading it.

So I finally decided to try a thought experiment. “What if,” I wondered, “we had gone through the past 17 months without the technology we take for granted? What if there was no Internet, no computers, no mobile devices? What if we had lived through the Pandemic with only the technology we had – say – a hundred years ago, during the global pandemic of the Spanish Flu starting in 1918? Perhaps the best way to determine the sum total contribution of technology is to do it by process of elimination.”

The Cons

Let’s get the negatives out of the way. First, you might say that technology enabled the flood of misinformation and conspiracy theorizing that has been so top-of-mind for us. Well, yes – and no.

Distrust in authority is nothing new. It’s always been there, at one end of a bell curve that spans the attitudes of our society. And nothing brings the outliers of society into global focus faster than a crisis that affects all of us.

There was public pushback against the very first vaccine ever invented: the smallpox vaccine. Granted, the early method was to rub pus from a cowpox blister into a cut in your skin and hope for the best. But it worked. Smallpox is now a thing of the past.

And, if we are talking about pushback against public health measures, that’s nothing new either. Exactly the same thing happened during the 1918-1919 Pandemic. Here’s one eerily familiar excerpt from a journal article looking at the issue, “Public-gathering bans also exposed tensions about what constituted essential vs. unessential activities. Those forced to close their facilities complained about those allowed to stay open. For example, in New Orleans, municipal public health authorities closed churches but not stores, prompting a protest from one of the city’s Roman Catholic priests.”

What is different, thanks to technology, is that public resistance is so much more visible than it’s ever been before. And that resistance comes with faces and names we know attached. People post opinions on social media that they would probably never say to you in a face-to-face setting, especially if they knew you disagreed with them. Our public and private discourse is now held at arm’s length by technology. Gone are all the moderating effects that come with sharing the same physical space.

The Pros

Try as I might, I couldn’t think of another “con” that technology has brought to the past 17 months. The “pro” list, however, is far too long to cover in this post, so I’ll just mention a few that come immediately to mind.

Let’s begin with the counterpoint to the aforementioned “con”: the misinformation factor. While misinformation definitely spread over the past year and a half, so did reliable, factual information. And for those willing to pay attention, it let us learn what we needed to know in order to practice public health measures at a speed previously unimaginable. Without technology, we would have been slower to act and, perhaps, fewer of us would have acted at all. At worst, in this case, technology probably nets out to zero.

But technology also enabled the world to keep functioning, even if in a different form. Working from home would have been impossible without it. Commercial engines kept chugging along. Business meetings moved to online platforms. The Dow Jones Industrial Average, as of this writing, is over 20% higher than it was before the pandemic. By contrast, over the 1918-1919 pandemic, the stock market was almost 32% lower at the end of the third wave than it was at the start of the first. Of course, there are other factors to consider, but I suspect we can thank technology for at least some of that difference.

It’s easy to point to the negatives that technology brings, but if you consider it as a whole, technology is overwhelmingly a blessing.

What was interesting to me in this thought experiment was how apparent it was that technology keeps the cogs of our society functioning more effectively, but if there is a price to be paid, it typically comes at the cost of our social bonds.

Why is Everything Now ‘Unprecedented’?

Just once, I would like to get through one day without hearing the word “unprecedented.” And I wonder, is that just the media trying to get a click, or is the world truly that terrible?

Take the Olympics. In my lifetime, I’ve never seen an Olympics like this one. Empty stands. Athletes having to leave within 48 hours of their last event. Opening and closing ceremonies unlike anything we have ever seen. It’s, well — unprecedented.

The weather is unprecedented. What is happening in politics is unprecedented. The pandemic is unprecedented, at least in our lifetimes. I don’t know about you, but I feel like I’m watching a blockbuster where the world will eventually end — but we just haven’t got to that part of the movie yet. I feel the palpable sensation of teetering on the edge of a precipice. And I’m pretty sure it’s happened before.

Take the lead-ups to the two world wars, for example. If you plot a timeline of the events that led to either July 28, 1914 or Sept. 1, 1939, there is a noticeable acceleration of momentum. At first, the points on the timeline are spread apart, giving the world a chance to once again catch its collective breath. But as we get closer and closer to those dates circled in red, things pick up. Cascades of events eventually lead to the crisis point. Are we in the middle of such a cascade?

Part of this might just be the network knock-on effects that happen in complex environments. But I also wonder if we’ve become a little shell-shocked, nudged into a numb acceptance of things we would once have found intolerable.

Author and geographer Jared Diamond calls this “creeping normality.” In his book “Collapse: How Societies Choose to Fail or Succeed,” he used the example of the deforestation and environmental degradation that happened on Easter Island — and how, despite the impending doom, the islanders still decided to chop down the last tree: “I suspect, though, that the disaster happened not with a bang but with a whimper. After all, there are those hundreds of abandoned statues to consider. The forest the islanders depended on for rollers and rope didn’t simply disappear one day—it vanished slowly, over decades.”

Creeping normality continually and imperceptibly nudges us from the unacceptable to the acceptable and we don’t even notice it’s happening. It’s a cognitive bias that keeps us from seeing reality for what it is. Creeping normality is what happens when our view of the world comes through an Overton Window.

I have mentioned the concept of the Overton Window before. The term was coined by political analyst Joseph Lehman, who named it after his colleague, Joseph Overton. It was initially meant to describe how the range of political policies the public finds acceptable shifts over time. What was once considered unthinkable can eventually become acceptable, or even popular, given the shifting sensitivities of the public. As an example, the antics of Donald Trump would once have been considered unacceptable in any public venue — but as our reality shifted, we saw them become mainstream behavior for an American president.

I suspect that the media does the same thing with our perception of the world in general. The news media demands the exceptional. We don’t click on “ordinary.” So it consistently shifts our Overton Window of what we pay attention to, moving us toward the outrageous. Things that once would have caused riots are now greeted with a yawn. This is combined with the unrelenting pace of the news cycle. What was outrageous today slips into yesterday, to be replaced with what is outrageous today.

And while I’m talking about outrageous, let’s look at the root of that term. The whole point of something being outrageous is to prompt us into being outraged — or moved enough to take action. And, if our sensitivity to outrage is constantly being numbed, we are no longer moved enough to act.

When we become insensitive to things that are unprecedented, we’re in a bad place. Our trust in information is gone. We seek information that comforts us that the world is not as bad as we think it is. And we ignore the red flags we should be paying attention to.

If you look at the lead-ups to both world wars, you see this same pattern. Things that happened regularly in 1914 or 1939, just before the outbreak of war, would have been unimaginable just a few years earlier. The momentum of mayhem picked up as the world raced to follow a rapidly moving Overton Window. Soon, before we knew it, all hell broke loose and the world was left with only one alternative: going to war.

An Overton Window can just happen, or it can be intentionally planned. Politicians from the fringes, especially the right, have latched on to the Window, taking something intended to be an analysis and turning it into a strategy. They now routinely float “policy balloons” that they know are on the fringe, hoping to trigger a move in our Window to either the right or left. Over time, they can use this strategy to introduce legislation that would once have been vehemently rejected.

The danger in all this is the embedding of complacency. Ultimately, our willingness to take action against threats is all that keeps our society functioning. Whether it’s our health, our politics or our planet, we have to be moved to action before it’s too late.

When the last tree falls on Easter Island, we don’t want to be the ones with the axe in our hands.

The Privacy War Has Begun

It started innocently enough…

My iPhone just upgraded itself to iOS 14.6, and the privacy protection purge began.

In late April, Apple added App Tracking Transparency (ATT) to iOS (technically in 14.5, but for reasons mentioned in this Forbes article, I hadn’t noticed the change until the most recent update). Now, whenever I launch an app that is part of the online ad ecosystem, I’m asked whether I want to share data to enable tracking. I always opt out.

These alerts have been generally benign. They reference benefits like “more relevant ads,” a “customized experience” and “helping to support us.” Some assume you’re opting in, making opting out a much more circuitous and time-consuming process. Most also avoid the words “tracking” and “privacy.” One referred to it in these terms: “Would you allow us to refer to your activity?”

My answer is always no. Why would I want to customize an annoyance and make it more relevant?

All in all, it’s a deceptively innocent wrapper to put on what will prove to be a cataclysmic event in the world of online advertising. No wonder Facebook is fighting it tooth and nail, as I noted in a recent post.

This shot across the bow of online advertising marks an important turning point for privacy. It’s the first time that someone has put users ahead of advertisers. Everything up to now has been lip service from the likes of Facebook, telling us we have complete control over our privacy while knowing that actually protecting that privacy would be so time-consuming and convoluted that the vast majority of us would do nothing, thus keeping its profitability flowing through the pipeline.

The simple fact of the matter is that without its ability to micro-target, online advertising just isn’t that effective. Take away the personal data, and online ads are pretty non-engaging. Also, given our continually improving ability to filter out anything that’s not directly relevant to whatever we’re doing at the time, these ads are very easy to ignore.

Advertisers need that personal data to stand any chance of piercing our non-attentiveness long enough to get a conversion. It’s always been a crapshoot, but Apple’s ATT just stacked the odds very much against the advertiser.

It’s about time. Facebook and online ad platforms have had little to no real pushback against the creeping invasion of our privacy for years now. We have no idea how extensive and invasive this tracking has been. The only inkling we get is when the targeting nails the ad delivery so well that we swear our phone is listening to our conversations. And, in a way, it is. We are constantly under surveillance.

In addition to Facebook’s histrionic bitching about Apple’s ATT, others have started to find workarounds, as reported by 9to5Mac. ATT specifically targets the IDFA (Identifier for Advertisers), which enables cross-app tracking via a unique identifier. Chinese ad networks backed by the state-endorsed China Advertising Association have been encouraging the adoption of CAID identifiers as an alternative to the IDFA. Apple has gone on record as saying ATT will be globally implemented and enforced. While CAID can’t be policed at the OS level, Apple has said that apps that track users without their consent by any means, including CAID, could be removed from the App Store.

We’ll see. Apple doesn’t have a very consistent track record when it comes to holding the line against Chinese app providers. WeChat, for one, has been granted exceptions to Apple’s developer restrictions that have not been extended to anyone else.

For its part, Google has taken a tentative step toward following Apple’s lead with its new privacy initiative on Android devices, as reported in SlashGear. Google Play has asked developers to share what data they collect and how they use that data. At this point, it won’t be requiring opt-in prompts as Apple does.

All of this marks a beginning. If it continues, it will throw a Kong-sized monkey wrench into the works of online advertising. The entire ecosystem is built on ad-supported models that depend on collecting and storing user data. Apple has begun nibbling away at that foundation.

The toppling has begun.

The Problem With Woke Capitalism

I’ve been talking a lot over the last month or two about the concept of corporate trust. Last week I mentioned that we as consumers have a role to play in this: It’s our job to demand trustworthy behavior from the companies we do business with.

The more I thought about that idea, the more I found myself putting it in the current context of cancel culture and woke capitalism. In this world of social media hyperbole, is this how we can flex our consumer muscles?

I think not. When I think of cancel culture and woke capitalism, I think of signal-to-noise ratio. And when I look at how most corporations signal their virtue, I see a lot of noise but very little signal.

Take Nike, for example. There is probably no corporation in the world that practices more virtue signaling than Nike. It is the master of woke capitalism. But if you start typing Nike into Google, the first suggested search you’ll see is “Nike scandal.” And if you launch that search, you’ll get a laundry list of black eyes in Nike’s day-to-day business practices, including sweatshops, doping, and counterfeit Nike product rings.

The corporate watchdog site ethicalconsumer.org has an extensive entry on Nike’s corporate faux pas. Perhaps Nike needs to spend a little less time preaching and a little more time practicing.

Then there’s 3M. There is absolutely nothing flashy about the 3M brand. 3M is about as sexy as Mr. Wood, my high school Social Studies teacher. Mr. Wood wore polyester suits (granted, it was the ‘70s) and had a look that was more Elvis Costello than Elvis Presley. But he was by far my favorite teacher. And you could trust him with anything.

I think 3M might be the Mr. Wood of the corporate world.

I had the pleasure of working with 3M as a consultant for the last three or four years of my professional life. I still have friends who were and are 3Mers. I have never, in one professional setting, met a more pleasant group of people.

When I started writing this and thought about an example of a trustworthy corporation, 3M was the first that came to mind. The corporate ethos at 3M is, as one vice president described it to me, “Minnesota nice.”

Go ahead. Try Googling “3M corporate scandal.” Do you know what comes up? 3M investigating other companies that are selling knockoff N95 facemasks. The company is the one investigating the scandal, not causing it. (Just in case you’re wondering, I tried searching on ethicalconsumer.org for 3M. Nothing came up.)

That’s probably why 3M has been chosen as one of the most ethical companies in the world by the Ethisphere Institute for the last eight consecutive years.

Real trust comes from many places, but a social media campaign is never one of them. It comes from the people you hire and how you treat those people. It comes from how you handle HR complaints, especially when they’re about someone near the top of the corporate ladder. It comes from how you set your product research goals, where you make those products, who you sell those products to, and how you price those products. It comes from how you conduct business meetings, and the language that’s tolerated in the lunchroom.

Real trust is baked in. It’s never painted on.

Social media has armed consumers with a voice, as this lengthy essay in The Atlantic shows. But if we go back to our signal-versus-noise comparison, everything on social media tends to be mostly “noise” and very little signal. Protesting through online channels tends to create hyper-virtuous bubbles that are far removed from the context of day-to-day reality. And — unfortunately — companies are getting very good at responding in kind. Corrupt internal power structures and business practices are preserved, while scapegoats are publicly sacrificed and marketing departments spin endlessly.

As Helen Lewis, the author of The Atlantic piece, said,

“That leads to what I call the “iron law of woke capitalism”: Brands will gravitate toward low-cost, high-noise signals as a substitute for genuine reform, to ensure their survival.”

Empty “mea culpas” and making hyperbolic noise just for the sake of looking good is not how you build trust. Trust is built on consistency and reliability. It is built on a culture that is committed to doing the right thing, even when that may not be the most profitable thing. Trust is built on being “Minnesota nice.”

Thank you, 3M, for that lesson. And thank you, Mr. Wood.

The Profitability Of Trust

Some weeks ago, I wrote about the crisis of trust identified by the Edelman Trust Barometer study and its impact on brands. In that post, I said that trust in all institutions had been blown apart, hoist by the petard of our political divides.

We don’t trust our government. We definitely don’t trust the media – especially the media that sits on the other side of the divide. Weirdly, our trust in NGOs has also slipped, perhaps because we suspect them to be politically motivated.

So whom — or what — do we trust? Well, apparently, we still trust corporations. We trust the brands we know. They, alone, seem to have been able to stand astride the chasm that is splitting our culture.

As I said before, I’m worried about that.

Now, I don’t doubt there are well-intentioned companies out there. I know there are several of them. But there is something inherent in the DNA of a for-profit company that I feel makes it difficult to trust them. And that something was summed up years ago by economist Milton Friedman, in what is now known as the Friedman Doctrine. 

In his eponymous doctrine, Friedman says that a corporation should have only one purpose: “An entity’s greatest responsibility lies in the satisfaction of the shareholders.” The corporation should, therefore, always endeavor to maximize its revenues to increase returns for the shareholders.

So, a business will be trustworthy as long as it fits its financial interest to be trustworthy. But what happens when those two things come into conflict, as they inevitably will?

Why is it inevitable, you ask? Why can’t a company be profitable and worthy of our trust? Ah, that’s where, sooner or later, the inevitable conflict will come.

Let’s strip this down to the basics with a thought experiment.

In a 2017 article in the Harvard Business Review, neuroscientist Paul J. Zak talks about the neuroscience of trust. He explains how he discovered that oxytocin is the neurochemical basis of trust — what he has since called The Trust Molecule.

To do this, he set up a classic trust task borrowed from Nobel laureate economist Vernon Smith:

“In our experiment, a participant chooses an amount of money to send to a stranger via computer, knowing that the money will triple in amount and understanding that the recipient may or may not share the spoils. Therein lies the conflict: The recipient can either keep all the cash or be trustworthy and share it with the sender.”

The choice of this task speaks volumes. It also lays bare the inherent conflict that sooner or later will face all corporations: money or trust? This is especially true of companies that have shareholders. Our entire capitalist ethos is built on the foundation of the Friedman Doctrine. Imagine what those shareholders will say when given the choice outlined in Zak’s experiment: “Keep the money, screw the trust.” Sometimes, you can’t have both. Especially when you have a quarterly earnings target to hit.
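The payoff logic of the trust task described above can be sketched in a few lines of code. This is only an illustrative model, not the actual experimental protocol: the endowment size, the half-split, and the function names are all assumptions made for the example.

```python
# Minimal sketch of the trust game described above (sender, tripled
# transfer, recipient who may or may not share). All specific numbers
# here are illustrative assumptions, not the experiment's real values.

def trust_game(sent, endowment=10, multiplier=3, share_fraction=0.5):
    """Return (sender_payoff, recipient_payoff) for one round.

    The sender keeps (endowment - sent); the amount sent is multiplied
    on the way to the recipient, who returns share_fraction of it.
    """
    transferred = sent * multiplier
    returned = transferred * share_fraction
    sender_payoff = endowment - sent + returned
    recipient_payoff = transferred - returned
    return sender_payoff, recipient_payoff

# A trustworthy recipient who returns half leaves both sides better off...
print(trust_game(sent=10, share_fraction=0.5))  # -> (15.0, 15.0)
# ...while a recipient who keeps everything punishes the sender's trust.
print(trust_game(sent=10, share_fraction=0.0))  # -> (0.0, 30.0)
```

The numbers make the conflict concrete: trusting pays off for everyone only if the other party reciprocates, and the recipient always earns more in the short term by keeping the cash.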

For humans, trust is our default position. Game theory research using the Prisoner’s Dilemma has shown that the best strategy for evolutionary success is one called “Tit for Tat.” In Tit for Tat, our opening position is one of trust and cooperation. But if we’re taken advantage of, we raise our defenses and respond in kind.
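Tit for Tat is simple enough to express directly. Here is a minimal sketch of an iterated Prisoner’s Dilemma, using the standard payoff values from Axelrod’s tournaments; the function names and round count are illustrative assumptions.

```python
# Iterated Prisoner's Dilemma sketch showing the "open with trust,
# retaliate if exploited" pattern of Tit for Tat. Payoffs are the
# standard Axelrod tournament values.

COOPERATE, DEFECT = "C", "D"

# (my_move, their_move) -> my_payoff
PAYOFFS = {
    (COOPERATE, COOPERATE): 3,  # mutual trust
    (COOPERATE, DEFECT): 0,     # I was exploited
    (DEFECT, COOPERATE): 5,     # I exploited them
    (DEFECT, DEFECT): 1,        # mutual suspicion
}

def tit_for_tat(opponent_moves):
    """Open with cooperation, then copy the opponent's last move."""
    return COOPERATE if not opponent_moves else opponent_moves[-1]

def always_defect(opponent_moves):
    return DEFECT

def play(strategy_a, strategy_b, rounds=6):
    """Return total payoffs for two strategies over repeated rounds."""
    seen_by_a, seen_by_b = [], []  # each side's record of the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (18, 18)
print(play(tit_for_tat, always_defect))  # exploited once, then retaliates: (5, 10)
```

Run against itself, Tit for Tat cooperates forever; run against a defector, it gets burned exactly once and then matches suspicion with suspicion, which is the brand-trust dynamic in miniature.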

So, when we look at the neurological basis of trust, consistency is another requirement. We will be willing to trust a brand until it gives a reason not to. The more reliable the brand is in earning that trust, the more embedded that trust will become. As I said in the previous post, consistency builds beliefs and once beliefs are formed, it’s difficult to shake them loose.

Trying to thread this needle between trust and profitability can become an exercise in marketing “spin”: telling your customers you’re trustworthy while doing everything possible to maximize your profits. A case in point — which we’ve seen repeatedly — is Facebook and its increasingly transparent efforts to maximize advertising revenue while gently whispering in our ear that we should trust it with our most private information.

Given the potential conflict between trust and profit, is trusting a corporation a lost cause? No, but it does put a huge amount of responsibility on the customer. The Edelman study has made abundantly clear that if there is such a thing as a “market” for trust, then trust is in dangerously short supply. This is why we’re turning to brands and for-profit corporations as a place to put our trust. We have built a society where we believe that’s the only thing we can trust.

Mark Carney, the former governor of both the Bank of England and the Bank of Canada, puts this idea forward in his new book, “Value(s).” In it, he shows how “market economies” have evolved into “market societies,” where price determines the value of everything. And corporations will follow profit, wherever it leads.

If we understand that fundamental characteristic of corporations, it does bring an odd kind of power that rests in the hands of consumers.

Markets are not unilateral beasts. They rely on the balance between supply and demand, and we form half that equation. It is our willingness to buy that determines prices in Carney’s “market societies.” So, if we are willing to place our trust in a brand, we can also demand that the brand prove our trust has not been misplaced, through the rewards and penalties built into the market.

Essentially, we have to make trust profitable.

Media: The Midpoint of the Stories that Connect Us

I’m in the mood for navel gazing: looking inward.

Take the concept of “media,” for instance. Based on the masthead above this post, it’s what this site — and this editorial section — is all about. I’m supposed to be on the “inside” when it comes to media.

But media is also “inside” — quite literally. The word means “middle layer,” so it’s something in between.

There is a nuance here that’s important. Based on the very definition of the word, it’s something equidistant from both ends. And that introduces a concept we in media must think about: We have to meet our audience halfway. We cannot take a unilateral view of our function.

When we talk about media, we have to understand what gets passed through this “middle layer.” Is it information? Well, then we have to decide what information is. Again, the etymology of the word “inform” shows us that informing someone is to “give form to their mind.” But that mind isn’t a blank slate or a lump of clay to be molded as we want. There is already “form” there. And if, through media, we are meeting them halfway, we have to know something about what that form may be.

We come back to this: Media is the midpoint between what we, the tellers, believe, and what we want our audience to believe. We are looking for the shortest distance between those two points. And, as self-help author Patti Digh wrote, “The shortest distance between two people is a story.”

We understand the world through stories — so media has become the platform for the telling of stories. Stories assume a common bond between the teller and the listener. That bond puts media squarely in the middle ground that defines its purpose, the point halfway between us. When we are on the receiving end of a story, our medium of choice is the one closest to us in terms of our beliefs and our world narrative. These media are built on common ideological ground.

And, if we look at a recent study that helps us understand how the brain builds models of the things around us, we begin to understand the complexity that lies within a story.

This study from the Max Planck Institute for Human Cognitive and Brain Sciences shows that our brains are constantly categorizing the world around us. And if we’re asked to recognize something, our brains have a hierarchy of concepts they will activate, depending on the situation. The higher you go in the hierarchy, the more parts of your brain are activated.

For example, if I asked you to imagine a phone ringing, the same auditory centers in your brain that activate when you actually hear the phone would kick into gear and give you a quick and dirty cognitive representation of the sound. But if I asked you to describe what your phone does for you in your life, many more parts of your brain would activate, and you would step up the hierarchy into increasingly abstract concepts that define your phone’s place in your own world. That is where we find the “story” of our phone.

As psychologist Robert Epstein says in this essay, we do not process a story like a computer. It is not data that we crunch and analyze. Rather, it’s another type of pattern match, between new information and what we already believe to be true.

As I’ve said many times, we have to understand why there is such a wide gap in how we all interpret the world. And the reason can be found in how we process what we take in through our senses.

The immediate sensory interpretation is essentially a quick and dirty pattern match. There would be no evolutionary purpose to store more information than is necessary to quickly categorize something. And the fidelity of that match is just accurate enough to do the job — nothing more.

For example, if I asked you to draw a can of Coca-Cola from memory, how accurate do you think it would be? The answer, proven over and over again, is that it probably wouldn’t look much like the “real thing.”

That’s coming from one sense, but the rest of your senses are just as faulty. You think you know how Coke smells and tastes and feels as you drink it, but these are low-fidelity tags that act in a split second to help us recognize the world around us. They don’t have to be exact representations, because that would take too much processing power.

But what’s really important to us is our “story” of Coke. That was clearly shown in one of my favorite neuromarketing studies, done by Read Montague and his team at Baylor College of Medicine.

He and his team reenacted the famous Pepsi Challenge — a blind taste test pitting Coke against Pepsi. But this time, they scanned the participants’ brains while they were drinking. The researchers found that when Coke drinkers didn’t know what they were drinking, only certain areas of their brains activated, and it didn’t really matter whether they were drinking Coke or Pepsi.

But when they knew they were drinking Coke, suddenly many more parts of the brain started lighting up, including the prefrontal cortex, the part of the brain that is usually involved in creating our own personal narratives to help us understand our place in the world.

And while the actual can of Coke doesn’t change from person to person, our story of Coke can be as individual to us as our own fingerprints.

We in the media are in the business of telling stories. This post is a story. Everything we do is a story. Sometimes they successfully connect with others, and sometimes they don’t. But to make effective use of the media we choose as a platform, we must remember we can only take a story halfway. On the other end is our audience, each of whom has their own narratives that define them. Media is the middle ground where those two things connect.

The Split-Second Timing of Brand Trust

Two weeks ago, I talked about how brand trust can erode so quickly and cause so many issues. I intimated that advertising and branding have become decoupled — and advertising might even erode brand trust, leading to a lasting deficit.

Now I think that may be a little too simplistic. Brand trust is a holistic thing — the sum total of many moving parts. Taking advertising in isolation is misleading. Will one social media ad for a brand lead to broken trust? Probably not. But there may be a cumulative effect that we need to be aware of.

In looking at the Edelman Trust Barometer study closer, a very interesting picture emerges. Essentially, the study shows there is a trust crisis. Edelman calls it information bankruptcy.

The slide in trust is probably not surprising. It’s hard to be trusting when you’re afraid, and if there’s one thing the Edelman Barometer shows, it’s that we are globally fearful. Our collective hearts are in our mouths. And when this happens, we are hardwired to respond by lowering our trust and raising our defenses.

But our traditional sources for trusted information — government and media — have also abdicated their responsibilities to provide it. They have instead stoked our fears and leveraged our divides for their own gains. NGOs have suffered the same fate. So, if you can’t trust the news, your leaders or even your local charity, who can you trust?

Apparently, you can trust a corporation. Edelman shows that businesses are now the most trusted organizations in North America. Media, especially social media, is the least trusted institution. I find this profoundly troubling, but I’ll put that aside for a future post. For now, let’s just accept it at face value.

As I said in that previous column, we want to trust brands more than ever. But we don’t trust advertising. This creates a dilemma for the marketer.

This all brings to mind a study I was involved with a little over 10 years ago. Working with Simon Fraser University, we wanted to know how the brain responded to trusted brands. The initial results were fascinating — but unfortunately, we never got the chance to do the follow-up study we intended.

This was an ERP study (event-related potential), where we looked at how the brain responded when we showed brand images as a stimulus. ERP studies are useful to better understand the immediate response of the brain to something — the fast loop I talk so much about — before the slow loop has a chance to kick in and rationalize things.

We know now that what happens in this fast loop really sets the stage for what comes after. It essentially makes up the mind, and then the slow loop adds rational justification for what has already been decided.

What we found was interesting: The way we respond to our favorite brands is very similar to the way we respond to pictures of our favorite people. The first hint of this occurred in just 150 milliseconds, about one-sixth of a second. The next reinforcement was found at 400 milliseconds. In that time — less than half a second in total — our minds were made up. In fact, the mind was basically made up in about the time it takes to blink an eye. Everything that followed was just window dressing.

This is the power of trust. It takes a split second for our brains to recognize a situation where it can let its guard down. This sets in motion a chain of neurological events that primes the brain for cooperation and relationship-building. It primes the oxytocin pump and gets it flowing. And this all happens just that quickly.

On the other side, if a brand isn’t trusted, a very different chain of events occurs just as quickly. The brain starts arming itself for protection. Our amygdala starts gearing up. We become suspicious and anxious.

This platform of brand trust — or lack of it — is built up over time. It is part of our sense-making machinery. Our accumulating experience with the brand either adds to our trust or takes it away.

But we must also realize that if we have strong feelings about a brand, one way or the other, it then becomes a belief. And once this happens, the brain works hard to keep that belief in place. It becomes virtually impossible at that point to change minds. This is largely because of the split-second reactions our study uncovered.

This sets very high stakes for marketers today. More than ever, we want to trust brands. But we also search for evidence that this trust is warranted in a very different way. Brand building is the accumulation of experience over all touch points. Each of those touch points has its own trust profile. Personal experience and word of mouth from those we know is the highest. Advertising on social media is one of the lowest.

The marketer’s goal should be to leverage trust-building for the brand in the most effective way possible. Do it correctly, through the right channels, and you have built trust that’s triggered in an eye blink. Screw it up, and you may never get a second chance.