Harry, Meghan and the Curse of Celebrity

The new Netflix series on Harry and Meghan is not exactly playing out according to plan. A few weeks ago, MediaPost TV Columnist Adam Buckman talked about the series, which promised an unprecedented, intimate view into the lives of the wayward Royal and his partner; its aim being, “– to give the rest of us a full-access pass into every nook and cranny of the lives and minds of Harry and Meghan.”

Since then, reviews have been mixed. While it is (according to Netflix) their most watched documentary ever, the world seems to be responding with a collective yawn. It is certainly not turning out to be the PR boost the two were hoping for, at least based on some viewer reviews on Rotten Tomatoes. Here is just one sample: “A massive whinge fest based on a string of lies, half-truths, and distortions of reality from two of the most privileged people on the planet.”

What I found interesting in this is the complex concept of celebrity, and how it continues to evolve – or more accurately, devolve – in our culture. This is particularly true when we mix our attitudes toward modern celebrity with the hoary construct of royalty.

If it does anything, I think Harry and Meghan shows how the very concept of celebrity has turned toxic and has poisoned whatever nominal value you may find in sustaining a monarchy. And, if we are going to dissect the creeping disease of celebrity, we must go to the root of the problem: the media. Our current concept of celebrity didn’t really exist before modern mass media.

We have evolved to keep an eye on those that are at the top of the societal pyramid. It was a good survival tactic to do so. Our apex figureheads – whether they be heroes or gods – served as role models; a literal case of monkey see, monkey do. But it also ensured political survival. There is a bucketload of psychology tucked up in our brains reinforcing this human trait.

In many mythologies, the line between heroes and gods was pretty fuzzy. Also, interestingly, gods were always carnal creatures. The Greek and Roman mythical gods and heroes ostensibly acted as both role models and moral cautionary tales. With great power came great hedonistic appetites.

This gradually evolved into royalty. With kings and queens, there was a very deliberate physical and societal distance kept between royalty and the average subject.  The messy bits of bad behavior that inevitably come with extreme privilege were always kept well hidden from the average subject.  It pretty much played out that way for thousands of years.

There was a yin and yang duality to this type of celebrity that evolved over time. If we trace the roots of the word notorious, we see the beginnings of this duality and get some hints of when it began to unravel.

Notorious comes from the Latin notus – meaning known. Its current meaning – to be known for something negative – only emerged in the 17th century. It seems we could accept the duality of notoriety when it came to the original celebrities – our heroes and gods – but with the rise of Christianity and, later, Puritanism (which also hit its peak in the 17th century), we started a whitewash campaign on our own God’s image. This had a trickle-down effect in a more strait-laced society. We held our heroes, our God, as well as our kings and queens to a higher standard. We didn’t want to think of them as carnal creatures.

Then, thanks to the media, things got a lot more complicated.

Up until the 19th century, there was really no such thing as a celebrity the way we know them today. Those that care about such things generally agree that French actress Sarah Bernhardt was the first modern celebrity. She became such because she knew how to manipulate media. She was the first to get her picture in the press. She was able to tour the world, with the telegraph spreading the word before her arrival. As the 19th century drew to a close, our modern concept of celebrity was being born.

It took a while for this fascination with celebrity to spill over to monarchies. In the case of the House of Windsor (a made-up name – the family’s actual name was Saxe-Coburg-Gotha, a decidedly Germanic name that became problematic when England was at war with Germany in World War I), this problem came to a head rather abruptly with King Edward VIII. He was the first royal who revelled in celebrity and who tried to use the media to his advantage. The worlds of celebrity and royalty collided with his abdication in 1936.

In watching Harry and Meghan, I couldn’t help but recount the many, many collisions between celebrity and the Crown since then. The monarchy has always tried to control its image through the media, and one can’t help feeling it has been hopelessly naïve in that attempt. Celebrity feeds on itself – it is the nature of the beast – and control is not an option.

Celebrity gives us the illusion of intimacy. We mistakenly believe we know the person who is famous, the same as we know those closest to us in our own social circle. We feel we have the right to judge them based on the distorted image of them that comes through the media. Somehow, we believe we know what motivates Harry and Meghan, what their ethics entail, what type of people they are.

I suppose one can’t fault Harry and Meghan for trying – yet again – to add their own narrative to the whirling pool of celebrity that surrounds them. But, if history is any indicator, it’s not really a surprise that it’s not going according to their plan.

The Ten Day Tech Detox

I should have gone cold turkey on tech. I really should have.

It would have been the perfect time – should have been the perfect time.

But I didn’t. As I spent 10 days on BC’s gorgeous Sunshine Coast with family, I also trundled along my assortment of connected gadgets.

But I will say it was a partially successful detox. I didn’t crack open the laptop as much as I usually do. I generally restricted use of my iPad to reading a book.

But my phone – it was my phone, always within reach, that tempted me with social media’s siren call.

In a podcast, Andrew Selepak, social media professor at the University of Florida, suggests that rather than doing a total detox that is probably doomed to fail, you use vacations as an opportunity to use tech as a tool rather than an addiction.

I will say that for most of the time, that’s what I did. As long as I was occupied with something I was fine. 

Boredom is the enemy. It’s boredom that catches you. And the sad thing was, I really shouldn’t have been bored. I was in one of the most beautiful places on earth. I had the company of people I loved. I saw humpback whales – up close – for Heaven’s sake. If ever there was a time to live in the moment, to embrace the here and now, this was it. 

The problem, I realized, is that we’re not really comfortable any more with empty spaces – whether they be in conversation, in our social life or in our schedule of activities. We feel guilt and anxiety when we’re not doing anything.

It was an interesting cycle. As I decompressed after many weeks of being very busy, the first few days were fine. “I need this,” I kept telling myself. It’s okay just to sit and read a book. It’s okay not to have every half-hour slot of the day meticulously planned to jam as much in as possible.

That lasted about 48 hours. Then I started feeling like I should be doing something. I was uncomfortable with the empty spaces.

The fact is, as I learned, boredom has always been part of the human experience. It’s a feature – not a bug. As I said, boredom creates the empty spaces that creativity can fill. Alicia Walf, a neuroscientist and a senior lecturer in the Department of Cognitive Science at Rensselaer Polytechnic Institute, says it is critical for brain health to let yourself be bored from time to time.

“Being bored can help improve social connections. When we are not busy with other thoughts and activities, we focus inward as well as looking to reconnect with friends and family.

“Being bored can help foster creativity. The eureka moment when solving a complex problem when one stops thinking about it is called insight.

“Additionally, being bored can improve overall brain health. During exciting times, the brain releases a chemical called dopamine which is associated with feeling good. When the brain has fallen into a predictable, monotonous pattern, many people feel bored, even depressed. This might be because we have lower levels of dopamine.”

That last bit, right there, is the clue why our phones are particularly prone to being picked up in times of boredom. Actually, three things are at work here. The first is that our mobile devices let us carry an extended social network in our pockets. In an article from Harvard, this is explained: “Thanks to the likes of Facebook, Snapchat, Instagram, and others, smartphones allow us to carry immense social environments in our pockets through every waking moment of our lives.”

As Walf said, boredom is our brain’s way of cueing us to seek social interaction. Traditionally, this was us getting the hell out of our cave – or cabin – or castle – and getting some face time with other humans.

But technology has short circuited that. Now, we get that social connection through the far less healthy substitution of a social media platform. And – in the most ironic twist – we get that social jolt not by interacting with the people we might happen to be with, but by each staring at a tiny little screen that we hold in our hand.

The second problem is that mobile devices are not designed to leave us alone, basking in our healthy boredom. They are constantly beeping, buzzing and vibrating to get our attention. 

The third problem is that – unlike a laptop or even a tablet – mobile devices are our device of choice when we are jonesing for a dopamine jolt. It’s our phones we reach for when we’re killing time in a lineup, riding the bus or waiting for someone in a coffee shop. This is why I had a hard time relegating my phone to being just a tool while I was away.

As a brief aside – even the term “killing time” shows how we are scared to death of being bored. That’s a North American saying – boredom is something to be hunted down and eradicated. You know what Italians call it? “Il dolce far niente” – the sweetness of doing nothing. Many are the people who try to experience life by taking endless photos and posting on various feeds, rather than just living it. 

The fact is, we need boredom. Boredom is good, but we are declaring war on it, replacing it with a destructive need to continually bathe our brains in the dopamine high that comes from checking our Facebook feed or latest TikTok reel.

At least one of the architects of this vicious cycle feels some remorse (also from the article from Harvard): “‘I feel tremendous guilt,’ admitted Chamath Palihapitiya, former Vice President of User Growth at Facebook, to an audience of Stanford students. He was responding to a question about his involvement in exploiting consumer behavior. ‘The short-term, dopamine-driven feedback loops that we have created are destroying how society works.’”

That is why we have to put the phone down and watch the humpback whales. That, miei amici, is il dolce far niente!

With Digital Friends Like These, Who Needs Enemies?

Recently, I received an email from Amazon that began:

“You’re amazing. Really, you’re awesome! Did that make you smile? Good. Alexa is here to compliment you. Just say, ‘Alexa, compliment me.’”

“What,” I said to myself, “sorry-assed state is my life in that I need to depend on a little black electronic hockey puck to affirm my self-worth as a human being?”

I realize that the tone of the email likely had tongue at least part way implanted in cheek, but still, seriously – WTF, Alexa? (Which, incidentally, Alexa also has covered. Pose that question and Alexa responds: “I’m always interested in feedback.”)

My next thought was, maybe I think this is a joke, but there are probably people out there that need this. Maybe their lives are dangling by a thread and it’s Alexa’s soothing voice digitally pumping their tires that keeps them hanging on until tomorrow. And – if that’s true – should I be the one to scoff at it?

I dug a little further into the question, “Can we depend on technology for friendship, for understanding, even – for love?”

The answer, it turns out, is probably yes.

A few studies have shown that we will share more with a virtual therapist than a human one in a face-to-face setting. We feel heard without feeling judged.

In another study, patients with a virtual nurse ended up creating a strong relationship with it that included:

  • Using close forms of greeting and goodbye
  • Expressing happiness to see the nurse
  • Using compliments
  • Engaging in social chat
  • And expressing a desire to work together and speak with the nurse again

Yet another study found that robots can even build a stronger relationship with us by giving us a pat on the hand or touching our shoulder. We are social animals and don’t do well when we lose that sociability. If we go too long without being touched, we experience something called “skin hunger” and start feeling stressed, depressed and anxious. The use of these robots is being tested in seniors’ care facilities to help combat extreme loneliness.

In reading through these studies, I was amazed at how quickly respondents seemed to bond with their digital allies. We have highly evolved mechanisms that determine when and with whom we seem to place trust. In many cases, these judgements are based on non-verbal cues: body language, micro-expressions, even how people smell. It surprised me that when our digital friends presented none of these, the bonds still developed. In fact, it seems they were deeper and stronger than ever!

Perhaps it’s the very lack of humanness that is the explanation. As in the case of the success of a virtual therapist, maybe these relationships work because we can leave the baggage of being human behind. Virtual assistants are there to serve us, not judge or threaten us. We let our guards down and are more willing to open up.

Also, I suspect that the building blocks of these relationships are put in place not by the rational, thinking part of our brains but the emotional, feeling part. It’s been shown that self-affirmation works by activating the reward centers of our brain, the ventral striatum and ventromedial prefrontal cortex. These are not pragmatic, cautious parts of our cognitive machinery. As I’ve said before, they’re all gas and no brakes. We don’t think a friendship with a robot is weird because we don’t think about it at all, we just feel better. And that’s enough.

AI companionship seems a benign – even beneficial – use of technology, but what might the unintended consequences be? Are we opening ourselves up to potential dangers by depending on AI for our social contact – especially when the lines are blurred between for-profit motives and the affirmation we become dependent on?

In the therapeutic use cases of virtual relationships outlined up to now, there is no “for-profit” motive. But Amazon, Apple, Facebook, Google and the other providers of consumer-directed AI companionship are definitely in it for the money. Even more troubling, two of those – Facebook and Google – depend on advertising for their revenue. Much as this gang would love us to believe that they only have our best interests in mind, over $1.2 trillion in combined revenue says otherwise. I suspect they have put a carefully calculated price on digital friendship.

Perhaps it’s that – more than anything – that threw up the red flags when I got that email from Amazon. It sounded like it was coming from a friend, and that’s exactly what worries me.

Does Social Media “Dumb Down” the Wisdom of Crowds?

We assume that democracy is the gold standard of sustainable political social contracts. And it’s hard to argue against that. As Winston Churchill said, “democracy is the worst form of government – except for all the others that have been tried.”

Democracy may not be perfect, but it works. Or, at least, it seems to work better than all the other options. Essentially, democracy depends on probability – on being right more often than we’re wrong.

At the very heart of democracy is the principle of majority rule. And that is based on something called the jury theorem, put forward by the Marquis de Condorcet in his 1785 work, Essay on the Application of Analysis to the Probability of Majority Decisions. Essentially, it says that the probability of making the right decision increases when you aggregate the decisions of as many people as possible. This was the basis of James Surowiecki’s 2004 book, The Wisdom of Crowds.
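Condorcet’s claim is easy to sanity-check with a quick simulation. This is just an illustrative sketch – the voter accuracy (60%) and the group sizes are my own assumed numbers, not figures from Condorcet or Surowiecki:

```python
import random

def majority_correct_rate(n_voters, p_correct, trials=2000, seed=42):
    """Estimate how often a simple majority of independent voters,
    each individually right with probability p_correct, lands on
    the right answer."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        # Each voter decides independently of the others.
        correct_votes = sum(rng.random() < p_correct for _ in range(n_voters))
        if correct_votes > n_voters / 2:
            wins += 1
    return wins / trials

# Voters who are each only slightly better than a coin flip become
# a near-certain majority as the crowd grows.
for n in (1, 11, 101):
    print(n, round(majority_correct_rate(n, 0.6), 2))
```

Run it and the estimated accuracy climbs from roughly the individual rate toward near-certainty as the group grows – which is exactly why the theorem breaks down once voters stop being independent.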

But here’s the thing about the wisdom of crowds – it only applies when those individual decisions are reached independently. Once we start influencing each other’s decision, that wisdom disappears. And that makes social psychologist Solomon Asch’s famous conformity experiments of 1951 a disturbingly significant fly in the ointment of democracy.

You’re probably all aware of the seminal study, but I’ll recap anyway. Asch gathered groups of people and showed them a card with three lines of obviously different lengths. Then he asked participants which line was the closest to the reference line. The answer was obvious – even a toddler can get this test right pretty much every time.

But unknown to the test subject, all the rest of the participants were “stooges” – actors paid to sometimes give an obviously incorrect answer. And when this happened, Asch was amazed to find that the test subjects often went against the evidence of their own eyes just to conform with the group. When wrong answers were given, a third of the subjects always conformed, 75% of the subjects conformed at least once, and only 25% stuck to the evidence in front of them and gave the right answer.

The results baffled Asch. The most interesting question to him was why this was happening. Were people making a decision to go against their better judgment – choosing to go with the crowd rather than what they were seeing with their own eyes? Or was something happening below the level of consciousness? This was something Solomon Asch wondered about right until his death in 1996. Unfortunately, he never had the means to explore the question further.

But, in 2005, a group of researchers at Emory University, led by Gregory Berns, did have a way. Here, Asch’s experiment was restaged, only this time participants were in an fMRI machine so Berns and his researchers could peek at what was actually happening in their brains. The results were staggering.

They found that conformity actually changes the way our brain works. It’s not that we change what we say to conform with what others are saying, despite what we see with our own eyes. What we see is changed by what others are saying.

If, Berns and his researchers reasoned, you were consciously making a decision to go against the evidence of your own eyes just to conform with the group, you should see activity in the frontal areas of the brain that are engaged in monitoring conflicts, planning and other higher-order mental activities.

But that isn’t what they found. In those participants that went along with obviously incorrect answers from the group, the parts of the brain that showed activity were only in the posterior parts of the brain – those that control spatial awareness and visual perception. There was no indication of an internal mental conflict. The brain was actually changing how it processed the information it was receiving from the eyes.

This is stunning. It means that conformity isn’t a conscious decision. Our desire to conform is wired so deeply in our brains, it actually changes how we perceive the world. We never have the chance to be objectively right, because we never realize we’re wrong.

But what about those that resisted conformity and stuck to the evidence they were seeing with their own eyes? Here again, the results were fascinating. The researchers found that in these cases, they saw a spike of activity in the right amygdala and right caudate nucleus – areas involved in the processing of strong emotions, including fear, anger and anxiety. Those that stuck to the evidence of their own eyes had to overcome emotional hurdles to do so. In the published paper, the authors called this the “pain of independence.”

This study highlights a massively important limitation in the social contract of democracy. As technology increasingly imposes social conformity on our culture, we lose the ability to collectively make the right decision. Essentially, it shows that this effect not only erases the wisdom of crowds, but actively works against it by exacting an emotional price for being an independent thinker.

The Physical Foundations of Friendship

It’s no secret that I worry about what the unintended consequences might be for us as we increasingly substitute a digital world for a physical one. What might happen to our society as we spend less time face-to-face with people and more time face-to-face with a screen?

Take friendship, for example. I have written before about how Facebook friends and real friends are not the same thing. A lot of this has to do with the mental work required to maintain a true friendship. This cognitive requirement led British anthropologist Robin Dunbar to come up with something called Dunbar’s Number – a rough rule-of-thumb that says we can’t really maintain a network of more than 150 friends, give or take a few.

Before you say, “I have way more friends on Facebook than that,” realize that I don’t care what your Facebook Friend count is. Mine numbers at least 3 times more than Dunbar’s 150 limit. But they are not all true friends. Many are just the result of me clicking a link on my laptop. It’s quick, it’s easy, and there is absolutely no requirement to put any skin in the game. Once clicked, I don’t have to do anything to maintain these friendships. They are just part of a digital tally that persists until I might click again, “unfriending” them. Absent is the ongoing physical friction that demands the maintenance required to keep a true friendship from slipping into entropy.

So I was wondering – what is that magical physical and mental alchemy that causes us to become friends with someone in the first place? When we share physical space with another human, what is the spark that causes us to want to get to know them better? Or – on the flip side – what are the red flags that cause us to head for the other end of the room to avoid talking to them? Fortunately, there is some science that has addressed those questions.

We become friends because of something in sociology called homophily – being like each other. In today’s world, that leads to some unfortunate social consequences, but in our evolutionary environment, it made sense. It has to do with kinship ties and what ethologist Richard Dawkins called The Selfish Gene. We want family to survive to pass on our genes. The best way to motivate us to protect others is to have an emotional bond to them. And it just so happens that family members tend to look somewhat alike. So we like – or love – others who are like us.

If we tie in the impact of geography over our history, we start to understand why this is so. Geography that restricted travel and led to inbreeding generally dictated a certain degree of genetic “sameness” in our tribe. It was a quick way to sort in-groups from out-groups. And in a bloodier, less politically correct world, this was a matter of survival.

But this geographic connection works both ways. Geographic restrictions lead to homophily, but repeated exposure to the same people also increases the odds that you’ll like them. In psychology, this is called the mere-exposure effect.

In these two ways, the limitations of a physical world have a deep, deep impact on the nature of friendship. But let’s focus on the first for a moment.

It appears we have built-in “friend detectors” that can actually sense genetic similarities. In a rather fascinating study, Nicholas Christakis and James Fowler found that friends are so alike genetically, they could actually be family. If you drill down to the individual building blocks of a gene at the nucleotide level, your friends are as alike genetically to you as your fourth cousin. As Christakis and Fowler say in their study, “friends may be a kind of ‘functional kin’.”

This shows how deeply friendship bonds are hardwired into us. Of course, this doesn’t happen equally across all genes. Evolution is nothing if not practical. For example, Christakis and Fowler found that specific systems do stay “heterophilic” (not alike) – such as our immune system. This makes sense. A group of people who stay in close proximity to each other will remain more resistant to epidemics if there is some variety in what they’re individually immune to. If everyone had exactly the same immunity profile, the group would be highly resistant to some bugs and completely vulnerable to others. It would be putting all your disease-prevention eggs in one basket.

But in another example of extreme genetic practicality, how similar we smell to our friends can be determined genetically.  Think about it. Would you rather be close to people who generally smell the same, or those that smell different? It seems a little silly in today’s world of private homes and extreme hygiene, but when you’re sharing very close living quarters with others and there’s no such thing as showers and baths, how everyone smells becomes extremely important.

Christakis and Fowler found that our olfactory sensibilities tend to trend to the homophilic side between friends. In other words, the people we like smell alike. And this is important because of something called olfactory fatigue. We use smell as a difference detector. It warns us when something is not right. And our nose starts to ignore smells it gets used to, even offensive ones. It’s why you can’t smell your own typical body odor. Or, in another even less elegant example, it’s why your farts don’t stink as much as others. 

Given all this, it would make sense that if you had to spend time close to others, you would pick people who smelled like you. Your nose would automatically be less sensitive to their own smells. And that’s exactly what a new study from the Weizmann Institute of Science found. In the study, the scent signatures of complete strangers were sampled using an electronic sniffer called an eNose. Then the strangers were asked to engage in nonverbal social interactions in pairs. After, they were asked to rate each interaction based on how likely they would be to become friends with the person. The result? Based on their smells alone, the researchers were able to predict with 71% accuracy who would become friends.

The foundations of friendship run deep – down to the genetic building blocks that make us who we are. These foundations were built in a physical world over millions of years. They engage senses that evolved to help us experience that physical world. Those foundations are not going to disappear in the next decade or two, no matter how addictive Facebook or TikTok becomes. We can continue to layer technology over these foundations, but to deny them is to ignore human nature.

Don’t Be Too Quick To Dismiss The Metaverse

According to my fellow Media Insider Maarten Albarda, the metaverse is just another in a long line of bright shiny objects that — while promising to change the world of marketing — will probably end up on the giant waste heap of overhyped technologies.

And if we restrict Maarten’s caution to specifically the metaverse and its impact on marketing, perhaps he’s right. But I think this might be a case of not seeing the forest for the trees.

Maarten lists a number of other things that were supposed to revolutionize our lives: Clubhouse, AI, virtual reality, Second Life. All seemed to amount to much ado about nothing.

But as I said almost 10 years ago, when I first started talking about one of those overhyped examples, Google Glass — and what would eventually become the “metaverse” (in rereading this, perhaps I’m better at predictions than I thought) — the overall direction of these technologies does mark a fundamental shift:

“Along the way, we build a “meta” profile of ourselves, which acts as both a filter and a key to the accumulated potential of the ‘cloud.’ It retrieves relevant information based on our current context and a deep understanding of our needs, it unlocks required functionality, and it archives our extended network of connections.”

As Wired founder and former executive editor Kevin Kelly has told us, technology knows what it wants. Eventually, it gets it. Sooner or later, all these things are bumping up against a threshold that will mark a fundamental shift in how we live.

You may call this the long awaited “singularity” or not. Regardless, it does represent a shift from technology being a tool we use consciously to enhance our experiences, to technology being so seamlessly entwined with our reality that it alters our experiences without us even being aware of it. We’re well down this path now, but the next decade will move us substantially further, beyond the point of no return.

And that will impact everything, including marketing.

What is interesting is the layer technology is building over the real world, hence the term “meta.” It’s a layer of data and artificial intelligence that will fundamentally alter our interactions with that world. It’s technology that we may not use intentionally — or, beyond the thin layer of whatever interface we use, may not even be aware of.

This is what makes it so different from what has come before. I can think of no technical advance in the past that is so consequential to us personally yet functions beyond the range of our conscious awareness or deliberate usage. The eventual game-changer might not be the metaverse. But a change is coming, and the metaverse is a signal of that.

Technology advancing is like the tide coming in. If you watch the individual waves coming in, they don’t seem to amount to much. One stretches a little higher than the last, followed by another that fizzles out at the shoreline. But cumulatively, they change the landscape — forever. This tide is shifting humankind’s relationship with technology. And there will be no going back.

Maybe Maarten is right. Maybe the metaverse will turn out to be a big nothingburger. But perhaps, just perhaps, the metaverse might be the Antonio Meucci  of our time: an example where the technology was inevitable, but the timing wasn’t quite right.

Meucci was an Italian immigrant who started working on the design of a workable telephone in 1849, a full two decades before Alexander Graham Bell even started experimenting with the concept.  Meucci filed a patent caveat in 1871, five years before Bell’s patent application was filed, but was destitute and didn’t have the money to renew it.  His wave of technological disruption may have hit the shore a little too early, but that didn’t diminish the significance of the telephone, which today is generally considered one of the most important inventions  of all time in terms of its impact on humanity.

Whatever is coming, and whether or not the metaverse represents the sea change catalyst that alters everything, I fully expect at some point in the very near future to pinpoint this time as the dawn of the technological shift that made the introduction of the telephone seem trivial in comparison.

Why Outré is En Vogue

Last week, I talked about the planeload of social media influencers that managed to ruffle the half-frozen feathers of us normally phlegmatic Canadians. But that example got me thinking. Outrage – or, as the French say, “outré” – sells. The more outrageous it is, the better it seems to work. James William Awad – the man behind the Plane of Shame – knew this. And we all just obligingly fell into his trap.

This all depends on understanding how social networks work. Let’s begin by admitting that humans love to gossip. Information gives us status. The more interesting the information, the higher its value and, accordingly, the higher our social status. The currency of social networks is curiosity: having something that people will pay attention to.

But there is also the element of tribal identification. We signal who we are by the information we share. To use Canadian sociologist Erving Goffman’s analogy, we are all actors, and what we share is part of the role we have built for ourselves.

But these roles are not permanent. They shift depending on what stage we’re on and who the audience is. In today’s world, social media has given us a massive stage. And I suspect this might overload our normal social mechanisms. On this stage, we know that the things that spread on social media tend to be in outlier territory, far from the boring middle ground of the everyday; they could be things we love or things that shock and outrage. Whether we love or hate the things we share depends on which tribe we identify with at the time.

Think of us humans as having a sharing thermostat where the trigger point is set depending on how strongly our emotions are triggered. If a post with new information doesn’t hit the threshold, it doesn’t get shared. Once that threshold is passed, the likelihood of sharing increases with the intensity of our emotions. It’s true for us, and because we’re human, it’s also true for everyone else who sees our post. The benefit of sharing juicy information is immediately reinforced through the dopamine-releasing mechanism of getting likes and shares. The higher the number, the bigger the natural high.
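The thermostat idea can be sketched as a toy simulation. This is purely illustrative: the audience size, threshold value and sharing probability are assumptions I’ve invented for the sketch, not measurements of any real network.

```python
import random

def simulate_cascade(n_users=1000, intensity=0.0, threshold=0.5,
                     avg_followers=20, seed=42):
    """Toy model of the sharing thermostat: a post spreads only if its
    emotional intensity exceeds each viewer's threshold, and above that
    threshold the chance of sharing scales with the intensity."""
    rng = random.Random(seed)
    shares = 0
    frontier = avg_followers  # people who see the initial post
    seen = 0
    while frontier > 0 and seen < n_users:
        new_frontier = 0
        for _ in range(frontier):
            if seen >= n_users:
                break
            seen += 1
            # Below the threshold, nobody shares; above it,
            # sharing probability rises with emotional intensity.
            if intensity > threshold and rng.random() < intensity:
                shares += 1
                new_frontier += avg_followers
        frontier = new_frontier
    return shares

print(simulate_cascade(intensity=0.4))  # prints 0: never crosses the threshold
print(simulate_cascade(intensity=0.9))  # a large number: near-total cascade
```

The point of the sketch is the non-linearity: a mildly interesting post dies at the shoreline, while one that clears the emotional threshold saturates the whole network.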

But even when they lie well out in outlier territory, good news and bad news are not created equal. In evolutionary terms, we are hardwired to pay more attention to bad news. Good news might make us temporarily feel better, but bad news might kill us. If we want to survive long enough to pass on our genes, we better pay attention to the things that threaten us. That’s why traditional broadcasters know, “if it bleeds, it leads.”

Harvard Business School professor Amit Goldenberg found the same is true for social networks. “Although people produce much more positive content on social media in general, negative content is much more likely to spread,” says Goldenberg.

This creates an interesting – and potentially dangerous – arena for social and influencer marketing to play out in. The influencer stunt I wrote about in my last post is a perfect example. If you can outrage people, you win. It will spread virally through social networks, creating so much noise that eventually traditional media will pick it up. This then connects the story to a broader audience. You get an amplification feedback loop that keeps reaching more and more people. Yes, the majority of people will be outraged, but your target market will be delighted. Again, it all depends on which tribe you identify with.

It’s this appeal to the basest of human instincts that is troubling about this new spin on “earned” media. Savvy marketers have learned to game the system by pushing our hot buttons, leaving us in a perpetual state of pissed-off-edness.

The most frustrating thing is – it works.

The Complexities Of Understanding Each Other

How our brain understands things that exist in the real world is a fascinating and complex process.

Take a telephone, for example.

When you just saw that word in print, your brain went right to work translating nine abstract symbols (including the same one repeated three times), the letters we use to write “telephone,” into a concept that means something to you. And for each of you reading this, the process could be a little different. There’s a very good likelihood you’re picturing a phone. The visual cortex of your brain is supplying you with an image that comes from your real-world experience with phones.

But perhaps you’re thinking of the sound a phone makes, in which case the audio center of your brain has come to life and you’re reimagining the actual sound of a phone.

A recent study from the Max Planck Institute found that there’s a hierarchy of understanding that activates in the brain when we think of things, going from the concrete at the lowest levels to the abstract at higher levels. It can all get quite complex — even for something relatively simple like a phone.

Imagine what a brain must go through to try to understand another person.

Another study, from Ruhr University in Bochum, Germany, tried to unpack that question. The research team found, again, that the brain pulls many threads together to try to understand what another person might be going through. It draws on clues that come through our senses. But, perhaps most importantly, in many cases it attempts to read the other person’s mind. The research team believes it’s this ability that’s central to social understanding. “It enables us to develop an individual understanding of others that goes beyond the here and now,” explains researcher Julia Wolf. “This plays a crucial role in building and maintaining long-term relationships.”

In both these cases of understanding, our brains rely on our experience in the real world to create an internal realization in our own brains. The richer those experiences are, the more we have to work with when we build those representations in our mind.

This becomes important when we try to understand how we understand each other. The more real-world experience we have with each other, the more successful we will be when it comes to truly getting into someone else’s head. This only comes from sharing the same physical space and giving our brains something to work with. “All strategies have limited reliability; social cognition is only successful by combining them,” says study co-researcher Sabrina Coninx.

I have talked before about the danger of substituting a virtual world for a physical one when it comes to truly building social bonds. We just weren’t built to do this. What we get through our social media channels is a mere trickle of input compared to what we would get through a real flesh-and-blood interaction.

Worse still, it’s not even an unbiased trickle. It’s been filtered through an algorithm that is trying to interpret what we might be interested in. At best it is stripped of context. At worst, it can be totally misleading.

Despite these worrying limitations, more and more of us are relying on this very unreliable signal to build our own internal representations of reality, especially those involving other people.

Why is this so dangerous? The negative impact of social media is twofold. First, it strips us of the context we need to truly understand each other, and then it creates an isolation of understanding. We become ideologically balkanized.

Balkanization is the process through which those that don’t agree with each other become formally isolated from each other. It was first used to refer to the drawing of boundaries between regions (originally in the Balkan peninsula) that were ethnically, politically or religiously different from each other.

Balkanization increasingly relies on internal representations of the “other,” avoiding real world contact that may challenge those representations. The result is a breakdown of trust and understanding across those borders. And it’s this breakdown of trust we should be worried about.

Our ability to reach across boundaries to establish mutually beneficial connections is a vital component in understanding the progress of humans. In fact, in his book “The Rational Optimist,” Matt Ridley convincingly argues that this ability to trade with others is the foundation that has made homo sapiens dominant on this planet. But, to successfully trade and prosper, we have to trust each other. “As a broad generalisation, the more people trust each other in a society, the more prosperous that society is, and trust growth seems to precede income growth,” Ridley explains.

As I said, balkanization is a massive breakdown of trust. In every single instance in the history of humankind, a breakdown of trust leads to a society that regresses rather than advances. But if we take every opportunity to build trust and break down the borders of balkanization, we prosper.

Neuroeconomist Paul Zak, who has called the neurotransmitter oxytocin the “trust molecule,” says, “A 15% increase in the proportion of people in a country who think others are trustworthy raises income per person by 1% per year for every year thereafter.”
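It’s worth pausing on what “1% per year for every year thereafter” compounds to. A quick back-of-the-envelope calculation (my own interpretation of the claim, not Zak’s model) shows the effect over a generation:

```python
# If higher trust adds an extra 1% to per-capita income growth each year,
# the compounded effect over 30 years is roughly a 35% income gain.
extra_growth = 1.01  # 1% extra growth per year (assumed interpretation)
years = 30
print(f"Income multiplier after {years} years: {extra_growth ** years:.2f}")
```

A seemingly modest annual edge, compounded, is the difference between a stagnant society and a prosperous one — which is Ridley’s point about trust in a nutshell.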

We evolved to function in a world that was messy, organic and, most importantly, physical. Our social mechanisms work best when we keep bumping into each other, whether we want to or not. Technology might be wonderful at making the world more efficient, but it doesn’t do a very good job at making it more human.

Moving Beyond Willful Ignorance

This is not the post I thought I’d be writing today. Two weeks ago, when I started to try to understand willful ignorance, I was mad. I suspect I shared that feeling with many of you. I was tired of the deliberate denial of fact that had consequences for all of us. I was frustrated with anti-masking, anti-vaxxing, anti-climate change and, most of all, anti-science. I was ready to go to war with those I saw in the other camp.

And that, I found out, is exactly the problem. Let me explain.

First, to recap. As I talked about two weeks ago, willful ignorance is a decision based on beliefs, so it’s very difficult – if not impossible – to argue, cajole or inform people out of it. And, as I wrote last week, willful ignorance has some very real and damaging consequences. This post was supposed to talk about what we do about that problem. I intended to find ways to isolate the impact of willful ignorance and minimize its downside. In doing so, I was going to suggest putting up even more walls to separate “us” from “them.”

But the more I researched this and thought about it, the more I realized that it was exactly the wrong approach. Because this recent plague of willful ignorance is many things, but – most of all – it’s one more example of how we love to separate “us” from “them.” And both sides, including mine, are equally guilty of doing this. The problem we have to solve here is not so much to change the way that some people process information (or don’t) in a way we may not agree with. What we have to fix is a monumental breakdown of trust.

Beliefs thrive in a vacuum. In a vacuum, there’s nothing to challenge them. And we have all been forced into a kind of ideological vacuum for the past year and a half. I talked about how our physical world creates a more heterogeneous ideological landscape than our virtual world does. In a normal life, we are constantly rubbing elbows with those of all leanings. And, if we want to function in that life, we have to find a way to get along with them, even if we don’t like them or agree with them. For most of us, that natural and temporary social bonding is something we haven’t had to do much lately.

It’s this lowering of our ideological defence systems that starts to bridge the gaps between us and them. And it also starts pumping oxygen into our ideological vacuums, prying the lids off our air-tight belief systems. It might not have a huge impact, but this doesn’t require a huge impact. A little trust can go a long way.

After World War II, psychologists and sociologists started to pick apart a fundamental question – how did our world go to war with itself? How, in the name of humanity, did the atrocities of the war occur? One of the areas they started to explore with vigour was this fundamental need of humans to sort ourselves into the categories of “us” and “them”.

In the 1970s, psychologist Henri Tajfel found that we barely need a nudge to start creating in-groups and out-groups. We’ll do it over anything, even something as trivial as which abstract artist, Klee or Kandinsky, we prefer. Once sorted on the flimsiest of premises, these groups showed a strong preference to favour their own group and punish the other. There was no pre-existing animosity between the groups, but in games such as the Banker’s Game, members would even forgo rewards for themselves if it meant depriving the other group of their share.

If we do this for completely arbitrary reasons such as those used by Tajfel, imagine how nasty we can get when the stakes are much higher, such as our own health or the future of the planet.

So, if we naturally sort ourselves into in-groups and out-groups, and our willingness to consider perspectives other than our own grows the more we’re exposed to those perspectives in a non-hostile environment, how do we start taking down those walls?

Here’s where it gets interesting.

What we need to break down the walls between “us” and “them” is to find another “them” that we can then unite against.

One of the theories about why the US is so polarized now is that, with the end of the Cold War, the US lost a common enemy that united “us” in opposition to “them”. Without the USSR, our natural tendency to sort ourselves into in-groups and out-groups had no option but to turn inwards. You might think this is hogwash, but before you throw me into the “them” camp, let me tell you about what happened in Robbers Cave State Park in Oklahoma.

One of the experiments into this in-group/out-group phenomenon was conducted by psychologist Muzafer Sherif in the summer of 1954. He and his associates took 22 boys of similar backgrounds (they were all white, Protestant and from two-parent homes) to a summer camp at Robbers Cave and randomly divided them into two groups. First, they built team loyalty, and then they gradually introduced a competitive environment between the two groups. Predictably, animosity and prejudice soon developed.

Sherif and his assistants then introduced a four-day cooling-off period and tried to reduce conflict by mixing the two groups. It didn’t work. In fact, it just made things worse. Things didn’t improve until the two groups were brought together to overcome a common obstacle, when the experimenters purposely sabotaged the camp’s water supply. Suddenly, the two groups came together to overcome a bigger challenge. This, by the way, is the same theory behind the process that NASA and Amazon’s Blue Origin use to build trust in their flight crews.

As I said, when I started this journey, I was squarely in the “us” vs “them” camp. And – to be honest – I’m still fighting my instinct to stay there. But I don’t think that’s the best way forward. I’m hoping that as our world inches towards a better state of normal, everyday life will start to force the camps together and our evolved instincts for cooperation will start to reassert themselves.

I also believe that the past 19 months (and counting) will be a period that sociologists and psychologists will study for years to come, as it’s been an ongoing experiment in human behavior at a scope that may never happen again.

We can certainly hope so.

Why Is Willful Ignorance More Dangerous Now?

In last week’s post, I talked about how the presence of willful ignorance is becoming something we not only have to accept, but also learn how to deal with. In that post, I intimated that the stakes are higher than ever, because willful ignorance can do real damage to our society and our world.

So, if we’ve lived with willful ignorance for our entire history, why is it now especially dangerous? I suspect it’s not so much that willful ignorance has changed, but rather the environment in which we find it.

The world we live in is more complex because it is more connected. But there are two sides to this connection, one in which we’re more connected, and one where we’re further apart than ever before.

Technology Connects Us…

Our world and our society are made of networks. And when it comes to our society, connection creates networks that are more interdependent, leading to complex behaviors and non-linear effects.

We must also realize that our rate of connection is accelerating. The pace of computing has long been governed by Moore’s Law: the observation that the number of transistors on a chip, and with it the speed and capability of our computers, doubles roughly every two years. For almost 60 years, this law has been surprisingly accurate.
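The scale of that claim is easy to underestimate. A quick sketch of what sixty years of clean two-year doublings implies (an idealization, since real transistor counts only roughly track this curve):

```python
# Idealized Moore's Law: one doubling every two years.
def doublings(years, period=2):
    """Number of doublings in the given span of years."""
    return years // period

factor = 2 ** doublings(60)  # 30 doublings over 60 years
print(f"Growth factor over 60 years: {factor:,}")  # about 1.07 billion
```

A billion-fold increase in capability is why the "network effects" described below are exponential rather than linear.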

What this has meant for our ability to connect digitally is that the number and impact of our connections has also increased exponentially, and it will continue to increase in our future. This creates a much denser and more interconnected network, but it has also created a network that overcomes the naturally self-regulating effects of distance.

For the first time, we can have strong and influential connections with others on the other side of the globe. And, as we forge more connections through technology, we are starting to rely less on our physical connections.

And Drives Us Further Apart

The wear and tear of a life spent bumping into each other in a physical setting tends to smooth out our rougher ideological edges. In face-to-face settings, most of us are willing to moderate our own personal beliefs in order to conform to the rest of the crowd. In the early 1950s, psychologist Solomon Asch showed how willing we are to ignore the evidence of our own eyes in order to conform to the majority opinion of a crowd.

For the vast majority of our history, physical proximity has forced social conformity upon us. It levels out our own belief structure in order to keep the peace with those closest to us, fulfilling one of our strongest evolutionary urges.

But, thanks to technology, that’s also changing. We are spending more time physically separated but technically connected. Our social conformity mechanisms are being short-circuited by filter bubbles where everyone seems to share our beliefs. This creates something called an availability bias: the things we see coming through our social media feeds form our view of what the world must be like, even though statistically they are not representative of reality.

It gives the willfully ignorant the illusion that everyone agrees with them — or, at least, enough people agree with them that it overcomes the urge to conform to the majority opinion.

Ignorance in a Chaotic World

These two things make our world increasingly fragile and subject to what chaos theorists call the Butterfly Effect, where seemingly small things can make massive differences.

It’s this unique nature of our world, which is connected in ways it never has been before, that creates at least three reasons why willful ignorance is now more dangerous than ever:

First, the impact of ignorance can be quickly amplified through social media, causing a Butterfly Effect cascade. Case in point: the falsehood that the U.S. election results weren’t valid, leading to the Capitol insurrection of Jan. 6.

The mechanics of social media that led to this issue are many, and I have cataloged most of them in previous columns: the nastiness that comes from arm’s-length discourse, a rewiring of our morality, and the impact of filter bubbles on our collective thresholds governing anti-social behaviors.

Second, and probably a bigger cause for concern, the willfully ignorant are very easily consolidated into a power base for politicians willing to play to their beliefs. The far right — and, to a somewhat lesser extent, the far left — has learned this to devastating effect. All you have to do is abandon your predilection for telling the truth so you can help them rationalize their deliberate denial of facts. Do this and you have tribal support that is almost impossible to shake.

The move of populist politicians to use the willfully ignorant as a launch pad for their own purposes further amplifies the Butterfly Effect, ensuring that the previously unimaginable will continue to be the new state of normal.

Third, there is our expanding impact on the physical world. It’s not just our degree of connection that technology is changing exponentially. It’s also the degree of impact we have on our physical world.

For almost our entire time on earth, the world has made us. We have evolved to survive in our physical environment, where we have been subject to the whims of nature.

But now, increasingly, we humans are shaping the nature of the world we live in. Our footprint has an ever-increasing impact on our environment, and that footprint is also increasing exponentially, thanks to technology.

The earth and our ability to survive on it are — unfortunately — now dependent on our stewardship. And that stewardship is particularly susceptible to the impact of willful ignorance. In the area of climate change alone, willful ignorance could lead — and has led — to events with massive consequences. A recent study estimates that climate change is directly responsible for 5 million deaths a year.

For all these reasons, willful ignorance is now something that can have life and death consequences.