Adrift in the Metaverse

Humans are nothing if not chasers of bright, shiny objects. Our attention is always focused beyond the here and now. That is especially true when here and now is a bit of a dumpster fire.

The ultrarich know that this is part of the human psyche, and they are doubling down on it. Jeff Bezos and Elon Musk are betting on space. But others — including Mark Zuckerberg — are betting on something called the metaverse.

Just this past summer, Zuck told his employees about his master plan for Facebook:

“Our overarching goal across all of (our) initiatives is to help bring the metaverse to life.”

So what exactly is the metaverse? According to Wikipedia, it is

“a collective virtual shared space, created by the convergence of virtually enhanced physical reality and physically persistent virtual space, including the sum of all virtual worlds, augmented reality, and the Internet.”

The metaverse is a world of our own making, which exists in the dimensions of a digital reality. There we imagine we can fix what we screwed up in the maddeningly unpredictable real world. It is the ultimate in bright, shiny objects.

Science fiction and the entertainment industry have been toying with the idea of the metaverse for some time now. The term itself comes from Neal Stephenson’s 1992 novel “Snow Crash.” It has been given the Hollywood treatment numerous times, notably in “The Matrix” and “Ready Player One.” But Silicon Valley venture capitalists are rushing to make fiction into fact.

You can’t really blame us for throwing in the towel on the world we have systematically wrecked. There are few glimmers of hope out there in the real world. What we have wrought is painful to contemplate. So we are doing what we’ve always done: reaching for what we want rather than fixing what we have. Take the Reporters Without Borders Uncensored Library, for example.

There are many places in the real world where journalism is censored, like Russia, the Middle East, Vietnam and China. But in the metaverse, there is the option of leapfrogging over all the political hurdles we stumble over in the real world. So Reporters Without Borders and two German creative agencies built a meta library in the meta world of Minecraft. Here, censored articles are made into virtual books, accessible to all who want to check them out.

It’s hard to find fault with this. Censorship is a tool of oppression. Here, a virtual world offered an inviting loophole to circumvent it. The metaverse came to the rescue. What is the problem with that?

The biggest risk is this: We weren’t built for the metaverse. We can probably adapt to it, somewhat, but everything that makes us tick has evolved in a flesh-and-blood world, and, to quote a line from Joni Mitchell’s “Big Yellow Taxi”: “You don’t know what you’ve got till it’s gone.”

It’s fair to say that right now the metaverse is a novelty. Most of your neighbors, friends and family have never heard of it. But odds are it will become our life. In a 2019 Wired article called “Welcome to the Mirror World,” Kevin Kelly explained, “we are building a 1-to-1 map of almost unimaginable scope. When it’s complete, our physical reality will merge with the digital universe.”

In a Forbes article, futurist Cathy Hackl gives us an example of what this merger might look like:

“Imagine walking down the street. Suddenly, you think of a product you need. Immediately next to you, a vending machine appears, filled with the product and variations you were thinking of. You stop, pick an item from the vending machine, it’s shipped to your house, and then continue on your way.”

That sounds benign — even helpful. But if we’ve learned one thing, it’s this: When we try to merge technology with human behavior, unintended consequences always arise. And when we’re talking about the metaverse, those consequences will likely be massive.

It is hubristic in the extreme to imagine we can engineer a world that will match our evolved humanware better than the world we actually evolved within. It is sheer arrogance to imagine we can build that world, and just as arrogant to imagine we can thrive within it.

We have a bright, shiny bias built into us that will likely lead us to ignore the crumbling edifice of our reality. German futurist Gerd Leonhard, for one, warns us about an impending collision between technology and humanity:

“Technology is not what we seek but how we seek: the tools should not become the purpose. Yet increasingly, technology is leading us to ‘forget ourselves.’”

Imagine a Pandemic without Technology

For someone who writes a weekly post about the intersection of human behavior and technology, the past 18 months have been interesting – and by interesting, I mean a twisted ride through gut-wrenching change unlike anything I have ever seen before.

I can’t even narrow it down to 18 months. Before that, there was plenty more that was “unprecedented” – to cherry-pick a word from my post from a few weeks back. I have now been writing for MediaPost in one place or another for 17 years. My very first post was on August 19, 2004. That was 829 posts ago. If you add the additional posts I’ve done for my own blog – outofmygord.com – I’ve just ticked over 1,100 on my odometer. That’s a lot of soul searching about technology. And the last several months have still been in a class by themselves.

Now, part of this might be where my own head is at. Believe it or not, I do sometimes try to write something positive. But as soon as my fingers hit the keyboard, things seem to spiral downwards. Every path I take seems to take me somewhere dark. There has been precious little that has sparked optimism in my soul.

Today, for example, prior to writing this, I took three passes at writing something else. Each quickly took a swerve towards impending doom. I’m getting very tired of this. I can only imagine how you feel, reading it.

So I finally decided to try a thought experiment. “What if,” I wondered, “we had gone through the past 17 months without the technology we take for granted? What if there were no Internet, no computers, no mobile devices? What if we had lived through the pandemic with only the technology we had – say – a hundred years ago, during the global Spanish Flu pandemic that began in 1918? Perhaps the best way to determine the sum total contribution of technology is by process of elimination.”

The Cons

Let’s get the negatives out of the way. First, you might say that technology enabled the flood of misinformation and conspiracy theorizing that has been so top-of-mind for us. Well, yes – and no.

Distrust in authority is nothing new. It’s always been there, at one end of a bell curve that spans the attitudes of our society. And nothing brings the outliers of society into global focus faster than a crisis that affects all of us.

There was public pushback against the very first vaccine ever invented: the smallpox vaccine. Now granted, the early method was to rub pus from a cowpox blister into a cut in your skin and hope for the best. But it worked. Smallpox is now a thing of the past.

And if we are talking about pushback against public health measures, that’s nothing new either. Exactly the same thing happened during the 1918-1919 pandemic. Here’s one eerily familiar excerpt from a journal article looking at the issue: “Public-gathering bans also exposed tensions about what constituted essential vs. unessential activities. Those forced to close their facilities complained about those allowed to stay open. For example, in New Orleans, municipal public health authorities closed churches but not stores, prompting a protest from one of the city’s Roman Catholic priests.”

What is different, thanks to technology, is that public resistance is far more visible than it has ever been before. And that resistance comes with faces and names we know attached. People post opinions on social media that they would probably never voice in a face-to-face setting, especially if they knew you disagreed with them. Our public and private discourse is now held at arm’s length by technology. Gone are all the moderating effects that come with sharing the same physical space.

The Pros

Try as I might, I couldn’t think of another “con” that technology has brought to the past 17 months. The “pro” list, however, is far too long to cover in this post, so I’ll just mention a few that come immediately to mind.

Let’s begin with the counterpoint to the aforementioned con: the misinformation factor. While misinformation was definitely spread over the past year and a half, so was reliable, factual information. For those willing to pay attention, it let us learn what we needed to know in order to practice public health measures at a speed previously unimagined. Without technology, we would have been slower to act and – perhaps – fewer of us would have acted at all. At worst, technology in this case probably nets out to zero.

But technology also enabled the world to keep functioning, even if in a different form. Working from home would have been impossible without it. Commercial engines kept chugging along. Business meetings switched to online platforms. The Dow Jones Industrial Average, as of this writing, is over 20% higher than it was before the pandemic. By contrast, over the 1918 – 1919 pandemic, the stock market ended the third wave almost 32% lower than it was at the start of the first. Of course, there are other factors to consider, but I suspect we can thank technology for at least some of that difference.

It’s easy to point to the negatives that technology brings, but if you consider it as a whole, technology is overwhelmingly a blessing.

What was interesting to me in this thought experiment was how apparent it became that technology keeps the cogs of our society turning more effectively – but when there is a price to be paid, it is typically paid in our social bonds.

Why is Everything Now ‘Unprecedented’?

Just once, I would like to get through one day without hearing the word “unprecedented.” And I wonder, is that just the media trying to get a click, or is the world truly that terrible?

Take the Olympics. In my lifetime, I’ve never seen an Olympics like this one. Empty stands. Athletes having to leave within 48 hours of their last event. Opening and closing ceremonies unlike anything we have ever seen. It’s, well — unprecedented.

The weather is unprecedented. What is happening in politics is unprecedented. The pandemic is unprecedented, at least in our lifetimes. I don’t know about you, but I feel like I’m watching a blockbuster where the world will eventually end — but we just haven’t got to that part of the movie yet. I feel the palpable sensation of teetering on the edge of a precipice. And I’m pretty sure it’s happened before.

Take the lead-ups to the two world wars, for example. If you plot a timeline of the events that led to either July 28, 1914 or Sept. 1, 1939, there is a noticeable acceleration of momentum. At first, the points on the timeline are spread apart, giving the world a chance to once again catch its collective breath. But as we get closer and closer to those dates circled in red, things pick up. There are cascades of events that eventually lead to the crisis point. Are we in the middle of such a cascade?

Part of this might just be the network knock-on effects that happen in complex environments. But I also wonder if we have become a little shell-shocked, nudged into a numb acceptance of things we would once have found intolerable.

Author and geographer Jared Diamond calls this “creeping normality.” In his book “Collapse: How Societies Choose to Fail or Succeed,” he used the example of the deforestation and environmental degradation that happened on Easter Island — and how, despite the impending doom, the islanders still decided to chop down the last tree: “I suspect, though, that the disaster happened not with a bang but with a whimper. After all, there are those hundreds of abandoned statues to consider. The forest the islanders depended on for rollers and rope didn’t simply disappear one day—it vanished slowly, over decades.”

Creeping normality continually and imperceptibly nudges us from the unacceptable to the acceptable and we don’t even notice it’s happening. It’s a cognitive bias that keeps us from seeing reality for what it is. Creeping normality is what happens when our view of the world comes through an Overton Window.

I have mentioned the concept of the Overton Window before. The Overton Window was first introduced by political analyst Joseph Lehman and named after his colleague, Joseph Overton. It was initially coined to show that the range of political policies the public finds acceptable shifts over time. What was once considered unthinkable can eventually become acceptable, or even popular, given the shifting sensitivities of the public. As an example, the antics of Donald Trump would once have been considered unacceptable in any public venue — but as our reality shifted, we saw them become mainstream from an American president.

I suspect that the media does the same thing with our perception of the world in general. The news media demands the exceptional; we don’t click on “ordinary.” So it consistently shifts the Overton Window of what we pay attention to, moving us toward the outrageous. Things that once would have caused riots are now greeted with a yawn. Combine this with the unrelenting pace of the news cycle: yesterday’s outrage slips from view, replaced by today’s.

And while I’m talking about outrageous, let’s look at the root of that term. The whole point of something being outrageous is to prompt us into being outraged — or moved enough to take action. And, if our sensitivity to outrage is constantly being numbed, we are no longer moved enough to act.

When we become insensitive to things that are unprecedented, we’re in a bad place. Our trust in information is gone. We seek information that comforts us that the world is not as bad as we think it is. And we ignore the red flags we should be paying attention to.

If you look at the lead-ups to both world wars, you see this same pattern. Things that happened regularly in 1914 or 1939, just before the outbreak of war, would have been unimaginable just a few years earlier. The momentum of mayhem picked up as the world raced to follow a rapidly moving Overton Window. Soon, before we knew it, all hell broke loose and the world was left with only one alternative: going to war.

An Overton Window can shift on its own, or the shift can be deliberately engineered. Politicians from the fringes, especially the right, have latched on to the Window, taking something intended as analysis and turning it into strategy. They now routinely float “policy balloons” they know are on the fringe, hoping to drag our Window in their direction. Over time, they can use this strategy to introduce legislation that would once have been vehemently rejected.

The danger in all this is the embedding of complacency. Ultimately, our willingness to take action against threat is all that keeps our society functioning. Whether it’s our health, our politics or our planet, we have to be moved to action before it’s too late.

When the last tree falls on Easter Island, we don’t want to be the ones with the axe in our hands.

A Hybrid Work Approach To Creativity

Last week I introduced the concept of burstiness, meaning the bursts of creativity that can happen when a group is on a roll.

Burstiness requires trust: a connection in the group that creates psychological safety. But I would go one step further. It also requires respect — an intuitive acknowledgement of the value of contribution from everyone in the group. It’s a type of recursive high that builds on itself, as each contribution sparks something else from the group. It’s like the room has caught fire and, as the burstiness continues, everyone tries to add to the flames.

We’ve used jazz as an example of burstiness. But there are other great examples, like theater improv. Research has found that the brain actually changes how it acts when it’s engaged in these activities, according to a Psychology Today article.

A 2008 fMRI study found that different parts of the brain lit up when musicians improvised rather than just playing scales. The brain shifted into a different gear: activity in the dorsolateral prefrontal cortex decreased, while activity in the medial prefrontal cortex increased. This is a fascinating finding, because the dorsolateral prefrontal cortex is the part of the brain where we look at ourselves critically, and the medial prefrontal cortex is linked with language and creativity. A follow-up study was done on improv actors, and the findings were remarkably similar.

This modality of the brain is important to understand. If we can create the conditions that lead to creativity, magic can happen.

Also, this is a team sport. Creativity is almost never exclusively a solo pursuit.

In 1995, Alfonso Montuori and Ronald Purser wrote an essay deconstructing the myth of the lone genius. In it, they showed that creativity almost always relies on social interaction. There is a system of creativity, an ecology that creates the conditions necessary for inspiration.

We love the story of the eccentric solitary genius toiling away in a loft somewhere, but it almost never happens that way. Da Vinci and Michelangelo had “schools” of apprentices that helped turn out their masterpieces. Mozart was a pretty social guy whose creativity fed off interactions with his court patrons and other composers of the era.

But we also have to understand that a little creative magic can go a long way. You don’t have to be 100% creative all the time. In a corporate setting, creativity is a spark. Then there is a lot of non-creative work required to fan it into a flame.

Given this, perhaps the advent of hybrid virtual-traditional workplace models might be a suitable fit for encouraging inspiration, if we use them correctly and don’t try to force-fit our intentions into the wrong workplace framework.

A virtual work-from-home environment is great for efficiency and getting stuff done. Our boss isn’t hovering over our cubicle asking us if we “have a second” to discuss whatever happens to be on his mind at this particular moment. We’re not wasting hours in tedious, unproductive meetings or on a workplace commute.

On the flip side, if creativity is our goal, there is no substitute for being “in the room where it happens.” A full bandwidth of human interaction is required for the psychological safety we need to take creative risks. These creative summits need to be in person and carefully constructed to provide the conditions needed for creativity. Interdisciplinary and diverse teams who know and trust each other implicitly need to be physically brought together for “improv” sessions. The rules of engagement should be clearly understood.

And unless bosses can participate fully “in kind” (a great example of this is Trevor Noah in the “Daily Show” example I mentioned last week from Adam Grant’s “Worklife” podcast), they should stay the hell out of the room.

Be ruthless about limiting attendance for creative sessions to just those who bring something to the table and have already built a psychological “safe space” with each other through face-to-face connections. Just one wrong person in the room can short-circuit the entire exercise.

This hybrid model doesn’t allow for the serendipity of creativity — that chance interaction in the lunchroom or the offhand comment that is the first domino to fall in an inspirational chain reaction. It also puts a constrained timeline on creativity, forcing it into specific squares on a calendar. But at least it recognizes the unique prerequisites of creativity and addresses them in an honest manner.

One last thought on creativity. Again, we go back to Anita Williams Woolley, the Carnegie Mellon professor who first identified “burstiness.” In a 2018 study with Christopher Riedl, she showed that even in a remote workplace, “bursty” communications can lead to more innovative teams.

“People often think that constant communication is most effective, but actually, we find that bursts of rapid communication, followed by longer periods of silence, are telltale signs of successful teams,” she notes.

This communication template mimics the hybrid model I mentioned before. It compartmentalizes our work activities, adopting the communication style that best suits each modality: the effectiveness of collaboration and innovation, and the efficiency of getting the work done. Woolley suggests using a synchronous form of communication for the “bursts” — perhaps even the old-fashioned phone — and then leaving everybody alone for a period of radio silence to let them get their work done.

Seeking “Burstiness” When Working from Home

I was first introduced to the concept of “burstiness” by psychologist Adam Grant in his podcast, “Worklife.” In one episode, he visits the writers’ room at “The Daily Show” and probes the creativity that crackles when those writers are on a roll. A big part of that energy, according to Grant, comes from “burstiness.”

The term was initially coined by Anita Williams Woolley, associate professor of organizational behavior and theory at Carnegie Mellon University.

Burstiness is, according to Grant,

“like the best moments in improv jazz. Someone plays a note, someone else jumps in with a harmony, and pretty soon, you have a collective sound that no one planned. Most groups never get to that point, but you know burstiness when you see it. At ‘The Daily Show,’ the room just literally sounds like it’s bursting with ideas.”

Last week, we reran a post I wrote at the beginning of the pandemic wondering if we might be forsaking some important elements of team effectiveness in our rush to embrace the virtual workplace. Our brains have evolved to be most effective at creating relationships with others when we’re face-to-face. There is a rich bandwidth of communication through which we build trust in others, and it relies on physical proximity.

Zoom just doesn’t cut it.

So, would this idea of burstiness be sacrificed in a remote work environment? Let’s dig a little deeper.

Grant outlines the things that need to be in place for burstiness to occur:

  • Spending time with each other
  • Psychological safety
  • A proper balance of structure
  • The right people in the room

Let’s look at these in reverse order.

The right people in the room

First, how do you get the right people in the room – or, in the case of a remote workforce, on the same Zoom call? Here, diversity seems to be the key. You need different perspectives. Creativity comes from diversity, not sameness.

Dr. Woolley offers the example of the Kennedy and Lincoln presidential cabinets. Kennedy’s cabinet was composed of Ivy League intellectual elites who all came from similar backgrounds and shared the same ideological view of the world. Lincoln’s cabinet was fractious, to say the least. After his election, Lincoln reached out to bitter rivals who had run against him for the presidency — including Salmon Chase and William Seward — and gave them senior positions in his cabinet. Historians generally consider Lincoln’s cabinet the most effective political team in American history. Kennedy’s cabinet suffered from a debilitating case of “groupthink” that launched the Bay of Pigs invasion and almost ignited another world war.

There is no reason why a virtual workplace cannot embrace diversity. You just have to recruit the right people through bias-resistant practices like blind auditions and using multiple interviewers.

A proper balance of structure

Grant says the right structure provides the rules of engagement for creative bursts. You need some basic guidelines so you can focus on the work and not the mechanics of the process. To use Grant’s example, jazz improv seems unstructured, but there are actually some commonly understood ground rules on which the improvisation is built.

This brings to mind psychologist Mihaly Csikszentmihalyi’s concept of Flow, the condition where creativity just flows naturally. Structure allows Flow to happen by giving the brain the framework it needs to focus wholly on the task at hand. There is no reason why this structure can’t apply equally to traditional and virtual work teams.

But the next two conditions get a little trickier for the virtual workplace. Let’s look at them together:

Psychological safety and spending time together

Psychological safety is a term coined by Harvard Business School professor Amy Edmondson. When it comes to promoting “burstiness,” psychological safety gives us the confidence to contribute without being punished or ridiculed. It allows us to take creative risks. Another word for it would be trust.

And that brings us to the second part — spending time together — and the challenge it poses for a virtual workplace. Trust is not built overnight, and it is not built over Zoom or Slack.

As I said in my previous post, organizational behavior specialist Mahdi Roghanizad from Ryerson University has found that the connections in our brains that create trust may not even be activated unless we’re face-to-face with someone. We need eye contact to nudge this part of ourselves into life.

So, if creativity is a requirement in the workplace, and connecting face-to-face is required to foster creativity, is a virtual office a non-starter? Not necessarily. In my next post, I’ll look at some ways we might still be able to have burstiness — even when we’re at home in our pajamas.

Getting Bitch-Slapped by the Invisible Hand

Adam Smith first talked about the invisible hand in 1759. He was looking at the divide between the rich and the poor and said, in essence, that “greed is good.”

Here is the exact wording:

“They (the rich) are led by an invisible hand to make nearly the same distribution of the necessaries of life, which would have been made, had the earth been divided into equal portions among all its inhabitants, and thus without intending it, without knowing it, advance the interest of the society.”

The effect of “the hand” is most clearly seen in the wide-open market that emerges after established players collapse and make way for new competitors riding a wave of technical breakthroughs. Essentially, it is a cycle.

But something is happening that may never have happened before. For the past 300 years of our history, the one constant has been the trend of consumerism. Economic cycles have rolled through, but all have been in the service of us having more things to buy.

Indeed, Adam Smith’s entire theory depends on greed: 

“The rich … consume little more than the poor, and in spite of their natural selfishness and rapacity, though they mean only their own conveniency, though the sole end which they propose from the labours of all the thousands whom they employ, be the gratification of their own vain and insatiable desires, they divide with the poor the produce of all their improvements.”

It’s the trickle-down theory of gluttony: Greed is a tide that raises all boats.

The Theory of The Invisible Hand assumes there are infinite resources available. Waste is necessarily built into the equation. But we have now gotten to the point where consumerism has been driven past the planet’s ability to sustain our greedy grasping for more.

Nobel-Prize-winning economist Joseph Stiglitz, for one, recognized that environmental impact is not accounted for with this theory. Also, if the market alone drives things like research, it will inevitably become biased towards benefits for the individual and not the common good.

There needs to be a more communal counterweight to balance the effects of individual greed. Given this, the new age of consumerism might look significantly different.

There is one outcome of market-driven economics that is undeniable: All the power lies in the connection between producers and consumers. Because the world has been built on the predictable truth of our always wanting more, we have the ability to disrupt that foundation simply by changing our value equation: buying for the greater good rather than our own self-interest.

I’m skeptical that this is even possible.

It’s a little daunting to think that our future survival relies on our choices as consumers. But this is the world we have made. Consumption is the single greatest driver of our society. Everything else is subservient to it.

Government, science, education, healthcare, media, environmentalism: All the various planks of our societal platform rest on the cross-braces of consumerism. It is the one behavior that rules all the others. 

This becomes important to think about because this shit is getting real — so much faster than we thought possible.

I write this from my home, which is about 100 miles from the village of Lytton, British Columbia. You might have heard it mentioned recently. On June 29, Lytton reported the highest temperature ever recorded in Canada: a scorching 121.3 degrees Fahrenheit (49.6 degrees C for my Canadian readers). That’s higher than the hottest temperature ever recorded in Las Vegas. Lytton is 1,000 miles north of Las Vegas.

As I said, that was how Lytton made the news on June 29. But it also made the news again on June 30. That was when a wildfire burned almost the entire town to the ground.

In one week of an unprecedented heat wave, hundreds of sudden deaths occurred in my province. It’s believed the majority of them were caused by the heat.

We are now at the point where we have to shift the mental algorithms we use when we buy stuff. Our consumer value equation has always been self-centered, based on the calculus of “what’s in it for me?” It was this calculation that made Smith’s Invisible Hand possible.

But we now have to change that behavior and make choices that embrace individual sacrifice. We have to start buying based on “What’s best for us?”

In a recent interview, a climate-change expert said he hoped we would soon see carbon-footprint stickers on consumer products. Given a choice between two pairs of shoes, one that was made with zero environmental impact and one that was made with a total disregard for the planet, he hoped we would choose the former, even if it was more expensive.

I’d like to think that’s true. But I have my doubts. Ethical marketing has been around for some time now, and at best it’s a niche play. According to the Canadian Coalition for Farm Animals, the vast majority of egg buyers in Canada — 98% — buy caged eggs even though we’re aware that the practice is hideously cruel.  We do this because those eggs are cheaper.

The sad fact is that consumers really don’t seem to care about anything other than their own self-interest. We don’t make ethical choices unless we’re forced to by government legislation. And then we bitch like hell about our rights as consumers. “We should be given the choice,” we chant.  “We should have the freedom to decide for ourselves.”

Maybe I’m wrong. I sure hope so. I would like to think — despite recent examples to the contrary of people refusing to wear face masks or get vaccinated despite a global pandemic that took millions of lives — that we can listen to the better angels of our nature and make choices that extend our ability to care beyond our circle of one.

But let’s look at our track record on this. From where I’m sitting, 300 years of continually making bad choices have now brought us to the place where we no longer have the right to make those choices. This is what The Invisible Hand has wrought. We can bitch all we want, but that won’t stop more towns like Lytton B.C. from burning to the ground.

Why Our Brains Struggle With The Threat Of Data Privacy

It seems contradictory. We don’t want to share our personal data, but according to a recent study reported on by MediaPost’s Laurie Sullivan, we want the brands we trust to know us when we come shopping. It seems paradoxical.

But it’s not — really. It ties in with the way we’ve always thought.

Again, we just have to understand that we really don’t understand how the data ecosystem works — at least, not on an instant and intuitive level. Our brains have no evolved mechanisms for dealing with new concepts like data privacy, so we borrow parts of the brain that evolved for other purposes. Evolutionary biologists call this “exaptation.”

For example, we seem to deal with brands the same way we deal with people — and we have tons of experience doing that. Some people we trust. Most people we don’t. With the people we trust, we have no problem sharing something of ourselves. In fact, it’s exactly that sharing that nurtures relationships and helps them grow.

It’s different with people we don’t trust. Not only do we not share with them, we work to avoid them, putting physical distance between us and them. We’d cross to the other side of the street to avoid bumping into them.

In a world that was ordered and regulated by proximity, this worked remarkably well. Keeping our enemies at arm’s length generally kept us safe from harm.

Now, of course, distance doesn’t mean the same thing it used to. We now maneuver in a world of data, where proximity and distance have little impact. But our brains don’t know that.

As I said, the brain doesn’t really know how digital data ecosystems work, so it does its best to substitute concepts it has evolved to handle those it doesn’t understand at an intuitive level.

The proxy for distance the brain seems to use is task focus. If we’re trying to do something, everything related to that task is “near” and everything not relevant to it is “far.” But this is an imperfect proxy at best and an outright misleading one at worst.

For example, we will allow our data to be collected in order to complete the task. The task is “near.” In most cases, the data we share has little to do with the task we’re trying to accomplish. It is labelled by the brain as “far” and therefore poses no immediate threat.

It’s a bait-and-switch tactic that data harvesters have perfected. Our trust-warning systems are not engaged because there are no proximate signs to trigger them. Any potential breaches of trust happen well after the fact — if they happen at all. Most times, we’re simply not aware of where our data goes or what happens to it. All we know is that allowing that data to be collected takes us one step closer to accomplishing our task.

That’s what sometimes happens when we borrow one evolved trait to deal with a new situation: The fit is not always perfect. Some aspects work; others don’t.

And that is exactly what is happening when we try to deal with the continual erosion of online trust. In the moment, our brain is trying to apply the same mechanisms it uses to assess trust in a physical world. What we don’t realize is that we’re missing the warning signs our brains have evolved to intuitively look for.

We also drag this evolved luggage with us when we’re dealing with our favorite brands. One of the reasons you trust your closest friends is that they know you inside and out. This intimacy is a product of a physical world. It comes from sharing the same space with people.

In the virtual world, we expect the brands we know and love to have this same knowledge of us. It frustrates us when we are treated like a stranger. Think of how you would react if the people you love the most gave you the same treatment.

This jury-rigging of our personal relationship machinery to do double duty for the way we deal with brands may sound far-fetched, but marketing brands have only been around for a few hundred years. That is just not enough time for us to evolve new mechanisms to deal with them.

Yes, the rational, “slow loop” part of our brains can understand brands, but the “fast loop” has no “brand” or “data privacy” modules. It has no choice but to use the functional parts it does have.

As I mentioned in a previous post, there are multiple studies that indicate that it’s these parts of our brain that fire instantly, setting the stage for all the rationalization that will follow. And, as our own neuro-imaging study showed, it seems that the brain treats brands the same way it treats people.

I’ve been watching this intersection between technology and human behaviour for a long time now. More often than not, I see this tendency of the brain to make split-second decisions in environments where it just doesn’t have the proper equipment to make those decisions. When we stop to think about these things, we believe we understand them. And we do, but we had to stop to think. In the vast majority of cases, that’s just not how the brain works.

The Profitability Of Trust

Some weeks ago, I wrote about the crisis of trust identified by the Edelman Trust Barometer study and its impact on brands. In that post, I said that the trust in all institutions had been blown apart, hoisted on the petard of our political divides.

We don’t trust our government. We definitely don’t trust the media – especially the media that sits on the other side of the divide. Weirdly, our trust in NGOs has also slipped, perhaps because we suspect them to be politically motivated.

So whom — or what — do we trust? Well, apparently, we still trust corporations. We trust the brands we know. They, alone, seem to have been able to stand astride the chasm that is splitting our culture.

As I said before, I’m worried about that.

Now, I don’t doubt there are well-intentioned companies out there. I know there are many of them. But there is something inherent in the DNA of a for-profit company that I feel makes it difficult to trust. And that something was summed up years ago by economist Milton Friedman, in what is now known as the Friedman Doctrine.

In that doctrine, Friedman argued that a corporation should have only one purpose: “An entity’s greatest responsibility lies in the satisfaction of the shareholders.” The corporation should, therefore, always endeavor to maximize its revenues to increase returns for those shareholders.

So a business will be trustworthy as long as it’s in its financial interest to be trustworthy. But what happens when those two things come into conflict, as they inevitably will?

Why is it inevitable, you ask? Why can’t a company be both profitable and worthy of our trust? Ah, that’s where, sooner or later, the conflict comes in.

Let’s strip this down to the basics with a thought experiment.

In a 2017 article in the Harvard Business Review, neuroscientist Paul J. Zak talks about the neuroscience of trust. He explains how he discovered that oxytocin is the neurochemical basis of trust — what he has since called The Trust Molecule.

To demonstrate this, he set up a classic trust task borrowed from Nobel laureate economist Vernon Smith:

“In our experiment, a participant chooses an amount of money to send to a stranger via computer, knowing that the money will triple in amount and understanding that the recipient may or may not share the spoils. Therein lies the conflict: The recipient can either keep all the cash or be trustworthy and share it with the sender.”

The choice of this task speaks volumes. It also lays bare the inherent conflict that sooner or later will face all corporations: money or trust? This is especially true of companies that have shareholders. Our entire capitalist ethos is built on the foundation of the Friedman Doctrine. Imagine what those shareholders will say when given the choice outlined in Zak’s experiment: “Keep the money, screw the trust.” Sometimes, you can’t have both. Especially when you have a quarterly earnings target to hit.
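
As a sketch only, the payoff structure of that task can be written out in a few lines. The $10 endowment and the return fractions here are hypothetical, and the function and variable names are my own; only the tripling rule comes from the experiment as quoted.

```python
# A minimal sketch of the trust task described above. The $10 endowment
# and the return fractions are hypothetical; only the tripling rule
# comes from the experiment as quoted.

def trust_game(endowment, amount_sent, fraction_returned):
    """Return (sender_payoff, recipient_payoff) for one round."""
    assert 0 <= amount_sent <= endowment
    tripled = amount_sent * 3                # the transfer triples in transit
    returned = tripled * fraction_returned   # the recipient's choice
    sender = endowment - amount_sent + returned
    recipient = tripled - returned
    return sender, recipient

# Full trust met with a fair split: both sides come out ahead.
print(trust_game(10, 10, 0.5))   # (15.0, 15.0)

# Full trust met with pure self-interest: the sender loses everything.
print(trust_game(10, 10, 0.0))   # (0.0, 30.0)
```

The second call is the shareholder’s single-round optimum: keeping everything maximizes the recipient’s immediate payoff, which is exactly the conflict between short-term profit and trust.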

For humans, trust is our default position. It has been shown through game theory research using the Prisoner’s Dilemma that the best strategy for evolutionary success is one called “Tit for Tat.” In Tit for Tat, our opening position is typically one of trust and cooperation. But if we’re taken advantage of, then we raise our defences and respond in kind.
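
The Tit for Tat strategy just described is easy to make concrete. Here is a minimal sketch of an iterated Prisoner’s Dilemma using the payoff values standard in Axelrod-style tournaments; the function names and the ten-round length are my own choices.

```python
# An iterated Prisoner's Dilemma with the standard tournament payoffs,
# showing why Tit for Tat does well: it opens by cooperating, then
# simply mirrors the opponent's previous move.

PAYOFFS = {  # (my move, their move) -> (my points, their points)
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I'm exploited
    ("D", "C"): (5, 0),  # I exploit
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(opponent_history):
    """Cooperate first; afterwards, copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    score_a = score_b = 0
    moves_a, moves_b = [], []
    for _ in range(rounds):
        a = strategy_a(moves_b)  # each strategy sees the other's history
        b = strategy_b(moves_a)
        pa, pb = PAYOFFS[(a, b)]
        score_a += pa
        score_b += pb
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

# Tit for Tat is exploited only in the first round, then defends itself.
print(play(tit_for_tat, always_defect))  # (9, 14)
```

Against an unconditional defector, Tit for Tat is exploited exactly once and then responds in kind, which is the “raise our defences” behavior described above; against another cooperator, it cooperates forever.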

So, when we look at the neurological basis of trust, consistency is another requirement. We will be willing to trust a brand until it gives us a reason not to. The more reliably the brand earns that trust, the more embedded that trust becomes. As I said in the previous post, consistency builds beliefs, and once beliefs are formed, it’s difficult to shake them loose.

Trying to thread this needle between trust and profitability can become an exercise in marketing “spin”: telling your customers you’re trustworthy while you’re doing everything possible to maximize your profits. A case in point — one we’ve seen repeatedly — is Facebook and its increasingly transparent efforts to maximize advertising revenue while gently whispering in our ear that we should trust it with our most private information.

Given the potential conflict between trust and profit, is trusting a corporation a lost cause? No, but it does put a huge amount of responsibility on the customer. The Edelman study has made it abundantly clear that if there is such a thing as a “market” for trust, then trust is in dangerously short supply. This is why we’re turning to brands and for-profit corporations as a place to put our trust. We have built a society where we believe they’re the only thing we can trust.

Mark Carney, former governor of both the Bank of England and the Bank of Canada, puts this idea forward in his book, “Value(s).” In it, he shows how “market economies” have evolved into “market societies,” where price determines the value of everything. And corporations will follow profit, wherever it leads.

If we understand that fundamental characteristic of corporations, an odd kind of power comes to rest in the hands of consumers.

Markets are not unilateral beasts. They rely on the balance between supply and demand, and we form half of that equation. It is our willingness to buy that determines prices in Carney’s “market societies.” So, if we are willing to place our trust in a brand, we can also demand that the brand prove our trust has not been misplaced, through the rewards and penalties built into the market.

Essentially, we have to make trust profitable.

Media: The Midpoint of the Stories that Connect Us

I’m in the mood for navel gazing: looking inward.

Take the concept of “media,” for instance. Based on the masthead above this post, it’s what this site — and this editorial section — is all about. I’m supposed to be on the “inside” when it comes to media.

But media is also “inside” — quite literally. The word means “middle layer,” so it’s something in between.

There is a nuance here that’s important. Based on the very definition of the word, it’s something equidistant from both ends. And that introduces a concept we in media must think about: We have to meet our audience halfway. We cannot take a unilateral view of our function.

When we talk about media, we have to understand what gets passed through this “middle layer.” Is it information? Well, then we have to decide what information is. Again, the etymology of the word “inform” shows us that informing someone is to “give form to their mind.” But that mind isn’t a blank slate or a lump of clay to be molded as we want. There is already “form” there. And if, through media, we are meeting them halfway, we have to know something about what that form may be.

We come back to this: Media is the midpoint between what we, the tellers, believe, and what we want our audience to believe. We are looking for the shortest distance between those two points. And, as self-help author Patti Digh wrote, “The shortest distance between two people is a story.”

We understand the world through stories — so media has become the platform for telling them. Stories assume a common bond between the teller and the listener. That puts media squarely in the middle ground that defines its purpose: the point halfway between us. When we are on the receiving end of a story, our medium of choice is the one closest to us in terms of our beliefs and our world narrative. These media are built on common ideological ground.

And, if we look at a recent study that helps us understand how the brain builds models of the things around us, we begin to understand the complexity that lies within a story.

This study, from the Max Planck Institute for Human Cognitive and Brain Sciences, shows that our brains are constantly categorizing the world around us. If we’re asked to recognize something, the brain has a hierarchy of concepts it will activate, depending on the situation. The higher you go in the hierarchy, the more parts of your brain are activated.

For example, if I asked you to imagine a phone ringing, the same auditory centers in your brain that activate when you actually hear the phone would kick into gear and give you a quick and dirty cognitive representation of the sound. But if I asked you to describe what your phone does for you in your life, many more parts of your brain would activate, and you would step up the hierarchy into increasingly abstract concepts that define your phone’s place in your own world. That is where we find the “story” of our phone.

As psychologist Robert Epstein says in this essay, we do not process a story the way a computer does. It is not data that we crunch and analyze. Rather, it’s another type of pattern match: between new information and what we already believe to be true.

As I’ve said many times, we have to understand why there is such a wide gap in how we all interpret the world. And the reason can be found in how we process what we take in through our senses.

The immediate sensory interpretation is essentially a quick and dirty pattern match. There would be no evolutionary purpose to store more information than is necessary to quickly categorize something. And the fidelity of that match is just accurate enough to do the job — nothing more.

For example, if I asked you to draw a can of Coca-Cola from memory, how accurate do you think it would be? The answer, proven over and over again, is that it probably wouldn’t look much like the “real thing.”

That’s just one sense, but the rest of your senses are just as faulty. You think you know how Coke smells and tastes and feels as you drink it, but these are low-fidelity tags that act in a split second to help us recognize the world around us. They don’t have to be exact representations, because that would take too much processing power.

But what’s really important to us is our “story” of Coke. That was clearly shown in one of my favorite neuromarketing studies, done at Baylor College of Medicine by Read Montague.

He and his team reenacted the famous Pepsi Challenge — a blind taste test pitting Coke against Pepsi. But this time, they scanned the participants’ brains while they were drinking. The researchers found that when Coke drinkers didn’t know what they were drinking, only certain areas of their brains activated, and it didn’t really matter whether they were drinking Coke or Pepsi.

But when they knew they were drinking Coke, suddenly many more parts of the brain started lighting up, including the prefrontal cortex, the part of the brain that is usually involved in creating our own personal narratives to help us understand our place in the world.

And while the actual can of Coke doesn’t change from person to person, our Story of Coke can be as individual to us as our own fingerprints.

We in the media are in the business of telling stories. This post is a story. Everything we do is a story. Sometimes they successfully connect with others, and sometimes they don’t. But to make effective use of the media we choose as a platform, we must remember we can only take a story halfway. On the other end is our audience, each member of which has their own narratives that define them. Media is the middle ground where those two things connect.

The Split-Second Timing of Brand Trust

Two weeks ago, I talked about how brand trust can erode so quickly and cause so many issues. I intimated that advertising and branding have become decoupled — and advertising might even erode brand trust, leading to a lasting deficit.

Now I think that may be a little too simplistic. Brand trust is a holistic thing — the sum total of many moving parts. Taking advertising in isolation is misleading. Will one social media ad for a brand lead to broken trust? Probably not. But there may be a cumulative effect that we need to be aware of.

Looking more closely at the Edelman Trust Barometer study, we see a very interesting picture emerge. Essentially, the study shows there is a trust crisis. Edelman calls it “information bankruptcy.”

The slide in trust is probably not surprising. It’s hard to be trusting when you’re afraid, and if there’s one thing the Edelman Barometer shows, it’s that we are globally fearful. Our collective hearts are in our mouths. And when this happens, we are hardwired to respond by lowering our trust and raising our defenses.

But our traditional sources for trusted information — government and media — have also abdicated their responsibilities to provide it. They have instead stoked our fears and leveraged our divides for their own gains. NGOs have suffered the same fate. So, if you can’t trust the news, your leaders or even your local charity, who can you trust?

Apparently, you can trust a corporation. Edelman shows that businesses are now the most trusted organizations in North America. Media, especially social media, is the least trusted institution. I find this profoundly troubling, but I’ll put that aside for a future post. For now, let’s just accept it at face value.

As I said in that previous column, we want to trust brands more than ever. But we don’t trust advertising. This creates a dilemma for the marketer.

This all brings to mind a study I was involved with a little over 10 years ago. Working with Simon Fraser University, we wanted to know how the brain responded to trusted brands. The initial results were fascinating — but unfortunately, we never got the chance to do the follow-up study we intended.

This was an ERP (event-related potential) study, in which we looked at how the brain responded when we showed brand images as stimuli. ERP studies are useful for understanding the brain’s immediate response to something — the fast loop I talk so much about — before the slow loop has a chance to kick in and rationalize things.

We know now that what happens in this fast loop really sets the stage for what comes after. It essentially makes up the mind, and then the slow loop adds rational justification for what has already been decided.

What we found was interesting: The way we respond to our favorite brands is very similar to the way we respond to pictures of our favorite people. The first hint of this occurred in just 150 milliseconds, about one-sixth of a second. The next reinforcement was found at 400 milliseconds. In that time, less than half a second in total, our minds were made up. In fact, the mind was basically made up in about the same time it takes to blink an eye. Everything that followed was just window dressing.

This is the power of trust. It takes a split second for our brains to recognize a situation where it can let its guard down. This sets in motion a chain of neurological events that primes the brain for cooperation and relationship-building. It primes the oxytocin pump and gets it flowing. And this all happens just that quickly.

On the other side, if a brand isn’t trusted, a very different chain of events occurs just as quickly. The brain starts arming itself for protection. Our amygdala starts gearing up. We become suspicious and anxious.

This platform of brand trust — or lack of it — is built up over time. It is part of our sense-making machinery. Our accumulating experience with the brand either adds to our trust or takes it away.

But we must also realize that if we have strong feelings about a brand, one way or the other, it then becomes a belief. And once this happens, the brain works hard to keep that belief in place. It becomes virtually impossible at that point to change minds. This is largely because of the split-second reactions our study uncovered.

This sets very high stakes for marketers today. More than ever, we want to trust brands. But we also search for evidence that this trust is warranted in a very different way. Brand building is the accumulation of experience across all touch points, and each touch point has its own trust profile. Personal experience and word of mouth from those we know rank highest. Advertising on social media ranks among the lowest.

The marketer’s goal should be to leverage trust-building for the brand in the most effective way possible. Do it correctly, through the right channels, and you have built trust that’s triggered in an eye blink. Screw it up, and you may never get a second chance.