Why The World is Conspiring Against Us

With all the other things 2020 will go down in history for, it has also proven to be a high-water mark for conspiracy theories. And that shouldn’t surprise us. Research has shown that when the going gets tough, the paranoid get weirder. Add to this the craziness multiplier effect of social media, and it’s no wonder that 2020 has given us a bumper crop of batshit crazy.

As chronicled for you, my dear reader, I kicked over my own little hornet’s nest of conspiracy craziness a few weeks ago. I started by probing a little COVID anti-vaxxing lunacy right here in my home and native land, Canada. The next thing I knew, the QAnoners were lurching out of the woodwork like the coming of the zombie apocalypse.

I have since run for cover.

But as I was running, I noticed two things. One, most of the people sharing the theories were from the right side of the political spectrum. And two, while they’ve probably always been inclined to indulge in conspiratorial thinking, it seems (anecdotally, anyway) that it’s getting worse.

So I decided to dig a little deeper to find the answers to two questions: Why them, and why now?

Let’s start with the first question: Why them?

My Facebook experience started with the people I grew up with in a small town in Alberta. It’s hard to imagine a more conservative place. The primary industries are oil, gas and farming. Cowboys — real cowboys wearing real Levi’s — still saunter down Main Street. This was the first place in Western Canada to elect a representative whose goal was to take Western Canada out of a liberal (and Eastern intellectual elitist)-dominated confederation. If you wanted to find the equivalent of Trumpism in Canada, you’d stand a damn good chance of finding it in this part of Alberta.

So I wondered: What is it about conservatives, especially those from the extreme right side of conservatism, that makes them more susceptible to spreading conspiracy theories?

It turns out it’s not just the extreme right that believes in conspiracies. According to one study, those on the extreme right or left are more apt to believe in conspiracies. It’s just that it happens more often on the right.

And that could be explained by looking at the types of personalities who tend to believe in conspiracies. According to a 2017 analysis of U.S. data by Daniel Freeman and Richard Bentall, over a quarter of the American population are convinced that “there is a conspiracy behind many things in the world.” 

Not surprisingly, when you dig down to the roots of these beliefs, it comes down to a crippling lack of trust, closely tying those ideas to paranoia. Freeman and Bentall noted, “Unfounded conspiracy beliefs and paranoid ideas are both forms of excessive mistrust that may be corrosive at both an individual and societal level.”

So, if one out of every four people in the U.S. (and apparently a notable percentage of Canadians) leans this way, what are these people like? It turns out there is a cluster of personality traits likely to lead to belief in conspiracy theories.

First, these people tend to be anxious about things in general. They have a lower level of education and are typically in the bottom half of income ranges. More than anything, they feel disenfranchised and that the control that once kept their world on track has been lost. 

Because of this, they feel victimized by a powerful elite. They have a high “BS receptivity.” And they believe that only they and a small minority of the like-minded know the real truth. In this way, they gain back some of the individual control they feel they’ve lost.

Given the above, you could perhaps understand why, during the Obama years, conspiracy theorists tended to lean to the right. But if anything, there are more conspiratorial conservatives than ever after almost four years of Trump. Those in power were put there by people who don’t trust those in power. So that brings us to the second question: Why now?

Obviously, it’s been a crappy year that has cranked up everybody’s anxiety level. But the conspiracy wave was already well-established when COVID-19 came along. And that wave started when Republicans (and hard right-wing politicians worldwide) decided to embrace populism as a strategy. 

The only way a populist politician can win is by dividing the populace. Populism is – by its nature – antagonistic. There needs to be an enemy, and that enemy is always on the other side of the political divide. As Ezra Klein points out in his book “Why We’re Polarized,” population density and the U.S. Electoral College system make populism a pretty effective strategy for the right.

This is why Republicans are actually stoking the conspiracy fires, including outright endorsement of the QAnon-sense. Amazing as it seems, Republicans are like Rocky Balboa: Even when they win, they seem able to continue being the underdog. 

The core that has been whipped up by populism keeps shadow boxing with its avowed enemy: the liberal elite. This political weaponization of conspiracy theories continues to find a willing audience who eagerly amplify it through social media. There is some evidence that extreme conservatives are more willing to embrace conspiracies than extreme liberals, but the biggest problem is that there is a highly effective conspiracy machine continually pumping out right-targeted theories.

It seems there were plenty of conspiracy theories making the rounds well before now. The shitstorm that became known as the year 2020 is simply adding fuel to an already raging fire.

Our Brain And Its Junk News Habit

Today, I’m going to return to the Reuters Digital News Report and look at the relationship between us, news and social media. But what I’m going to talk about is probably not what you think I’m going to talk about.

Forget all the many, many problems that come with relying on social media to be informed. Forget about filter bubbles and echo chambers. Forget about misleading or outright false stories. Forget about algorithmic targeting. Forget about the gaping vulnerabilities that leave social media open to nefarious manipulation. Forget all that (but just for the moment, because those are all horrible and very real problems that we need to focus on).

Today, I want to talk about one specific problem that comes when we get our news through social media. When we do that, our brains don’t work the way they should if we want to be well informed.

First, let’s talk about the scope of the issue here. According to the Reuters study, more people in the U.S. — 72% — turn to online sources for news than to any other source. Television comes in second at 59%. If we single out social media, it comes in third at 48%. Trailing the pack is print media at just 20%.

Reuters Digital News Study 2020 – Sources of News in US

If we plot this on a chart over the last seven years, print and social media have basically swapped spots, with their respective lines crossing each other in 2014, one trending up and one trending down. In 2013, 47% of us turned to print as a primary news source and just 27% of us went to social media.

If we further look at those under 35, accessing news through social media jumps to the number-one spot by a fairly wide margin. And because they’re young, we’re not talking Facebook here. Those aged 18 to 24 are getting their news through Instagram, Snapchat and TikTok.

The point, if it’s not clear by now, is that many of us get our news through a social media channel — and the younger we are, the more that’s true. The paradox is that the vast majority of us — over 70% — don’t trust the news we see on our social media feeds. If we were to pick an information source we trusted, we would never go to social media.

This brings up an interesting juxtaposition in how we’re being informed about the world: almost all of us are getting our news through social media, but almost none of us are looking for it when we do.

According to the Reuters Report, 72% of us (all ages, all markets) get our news through the “side door.” This means we are delivered news — primarily through social media and search — without us intentionally going directly to the source of the information. For those aged 18 to 24, “side door” access jumps to 84% and, of that, access through social media jumps to 38%.

Our loyalty to the brand and quality of an information provider is slipping through our fingers, and we don’t seem to care. We say we want objective, non-biased, quality news sources, but in practice we lap up whatever dubious crap is spoon-fed to us by Facebook or Instagram. It’s the difference between telling our doctor what we intend to eat and what we actually eat when we get home to the leftover pizza and the pint of Häagen-Dazs in our fridge.

The difference between looking for and passively receiving information is key to understanding how our brain works. Let’s talk a little bit about “top-down” and “bottom-up” activation and the “priming” of our brain.

When our brain has a goal — like looking for COVID-19 information — it behaves significantly differently than when it is just bored and wanting to be entertained.

The goal sets a “top-down” intent. It’s like an executive order to the various bits and pieces of our brain to get their shit together and start working as a team. Suddenly the entire brain focuses on the task at hand, and things like reliability of information become much more important to us. If we’re going to go directly to an information source we trust, this is when we do it.

If the brain isn’t actively engaged in a goal, then information has to initiate a “bottom-up” activation. And that is an entirely different animal.

We never go to social media looking for a specific piece of news. That’s not how social media works. We go to our preferred social channels either out of sheer boredom or a need for social affirmation. We hope there’s something in the highly addictive, endlessly scrolling format that will catch our attention.

For a news piece to do that, it has to somehow find a “hook” in our brain.  Often, that “hook” is an existing belief. The parts of our brain that act as gatekeepers against unreliable information are bypassed because no one bothered to wake them up.

There is a further brain-related problem with relying on social media, and that’s the “priming” issue. This is where one stimulus sets a subconscious “lens” that will impact subsequent stimuli. Priming sets the brain on a track we’re not aware of, which makes it difficult to control.

Social media is the perfect priming platform. One post sets the stage for the next, even if they’re completely unrelated.

These are just two factors that make social media an inherently dangerous platform to rely on for being informed.

The third is that social media makes information digestion much too easy. Our brain barely needs to work at all. And if it does need to work, we quickly click back and scroll down to the next post. Because we’re looking to be entertained, not informed, the brain is reluctant to do any unnecessary heavy lifting.   

This is a big reason why we may know the news we get through social media channels is probably not good for us, but we gulp it down anyway, destroying our appetite for more trustworthy information sources.

These three things create a perfect cognitive storm for huge portions of the population to be continually and willingly misinformed. That’s not even factoring in all the other problems with social media that I mentioned at the outset of this column. We need to rethink this — soon!

Playing Fast and Loose with the Truth

A few months ago, I was having a conversation with someone and they said something that I was pretty sure was not true. I don’t know if it was a deliberate lie. It may have just been that this particular person was uninformed. But they said it with the full confidence that what they said was true. I pushed back a little and they instantly defended their position.

My first instinct was just to let it go. I typically don’t go out of my way to cause friction in social settings. Besides, it was an inconsequential thing. I didn’t really care about it. But I was feeling a little pissy at the time, so I fact-checked them by looking it up on my phone. And I was right. They had stated something that wasn’t true and then doubled down on it.

Like I said, it was inconsequential – a trivial conversation point. But what if it wasn’t? What if there was a lot riding on whether or not what they said was true? What if this person was in a position of power, like – oh, I don’t know – the President of the United States?

The role of truth in our social environment is currently in flux. I cannot remember a time when we have been more suspicious of what we see, read and hear on a daily basis. As I mentioned a couple of weeks ago, less than 40% of us trust what we hear on the news. And when that news comes through our social media feed, the level of distrust jumps to a staggering 80%.

Catching someone in a lie has significant social and cognitive implications. We humans like to start from a default position of trust. If we can do that, it eliminates a lot of social friction and cognitive effort. We only shift to distrust when we have to protect ourselves.

Our proclivity for trust is what has made global commerce and human advancement possible. But, unfortunately, it does leave us vulnerable. Collectively, we usually play by the same playbook I was initially going to use in my opening example. It’s just easier to go along with what people say, even if we doubt that it’s true. This is especially so if the untruth is delivered with confidence. We humans love confidence in others because it means we don’t have to work as hard. Confidence is a signal we use to decide to trust, and trust is always easier than distrust. The more confident the delivery, the less likely we are to question it.

It’s this natural human tendency that put the “con” in “con artist.” “Con” is short for confidence, and it originates with an individual named William Thompson, who plied the streets of New York in the 1840s. He would walk up to a total stranger who was obviously well off and greet them like a long-lost friend. After a few minutes of friendly conversation, during which the target would be desperately trying to place this individual, Thompson would ask for the loan of something of value. He would then set his hook with this: “Do you have confidence in me to loan me this [item] til tomorrow?” The success of this scam was totally dependent on an imbalance of confidence: extreme confidence on the part of the con artist and a lack of confidence on the part of the target.

It is ironic that in an era when it’s easier than ever to fact-check, we are seeing an increasing disregard for the truth. According to the Washington Post, Donald Trump passed a misinformation milestone on July 9, reaching 20,000 false or misleading claims since he became President. He surged past that particular milestone when he lied 62 times on that day alone. I don’t even think I talk 62 times per day.

This habit of playing fast and loose with the truth is not Trump’s alone. Unfortunately, egregious lying has been normalized in today’s world. We have now entered an era where full time fact checking is necessary. On July 7, NY Times columnist Thomas Friedman said we need a Biden-Trump debate, but only on two conditions: First, only if Trump releases his tax returns, and second, only if there is a non-partisan real-time fact-checking team keeping the debaters accountable.

We have accepted this as the new normal. But we shouldn’t. There is an unacceptable cost we’re paying by doing so. And that cost becomes apparent when we think about the consequence of lying on a personal basis.

If we catch an acquaintance in a deliberate lie, we put them in the untrustworthy column. We are forced into a default position of suspicion whenever we deal with them in the future. This puts a huge cognitive load on us. As I said before, it takes much more effort to not trust someone. It makes it exponentially harder to do business with them. It makes it more difficult to enjoy their company. It introduces friction into our relationship with them.

Even if the lie is not deliberate but stated with confidence, we label them as uninformed. Again, we trust them less.

Now multiply this effort by everyone. You quickly see where the model breaks down. Lying may give the liar a temporary advantage, but it’s akin to a self-limiting predator-prey model. If it went unchecked, soon the liars would only have other liars to deal with. It’s just not sustainable.

Truth exists for a reason. It’s the best social strategy for the long term. We should fight harder for it.

How We Forage for the News We Want

The Reuters Institute, based in the UK, just released a comprehensive study looking at how people around the world are finding their news. There is a lot here, so I’ll break it into pieces over a few columns and look at the most interesting aspects. Today, I’ll look at the 50,000-foot view, which can best be summarized as a dysfunctional relationship between our news sources and ourselves. And like most dysfunctional relationships, the culprit here is a lack of trust.

Before we dive in, we should spend some time looking at how the way we access news has changed over the last several years.

Over my lifetime, we have trended in two general directions – toward less cognitively demanding news channels and less destination-specific news sources. The most obvious shift has been away from print. According to Journalism.org and the Pew Research Center, circulation of U.S. daily newspapers peaked around 1990, at about 62 and a half million. That’s one subscription for every four people in the country at that time.

In 2018, it was projected that circulation had dropped more than 50%, to less than 30 million. That would be one subscription for every 10 people. We are no longer reading our news in print. And that may have a significant impact on our understanding of the news. I’ll return to this in another column, but for now, let’s just understand that our brain operates in a significantly different way when it’s reading rather than watching or listening.
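(A quick back-of-the-envelope check for the numerically inclined. The circulation figures are the ones cited above; the population figures are my own rough assumptions, not part of the Pew data.)

```python
# Rough per-capita check of the newspaper circulation figures cited above.
# Population figures are my own approximations, not part of the Pew data.
circulation_1990 = 62_500_000    # peak daily circulation, ~1990
circulation_2018 = 30_000_000    # projected daily circulation, 2018
population_1990 = 250_000_000    # approximate U.S. population, 1990
population_2018 = 327_000_000    # approximate U.S. population, 2018

print(round(population_1990 / circulation_1990, 1))  # 4.0 -> one subscription per ~4 people
print(round(population_2018 / circulation_2018, 1))  # 10.9 -> one per roughly 10 or 11 people
```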

Up to the end of the last century, we generally trusted news destinations. Whether it was a daily newspaper like the New York Times, a news magazine like Time or a nightly newscast such as any of the network news shows, each was a destination that offered one thing above all others – the news. And whether you agreed with them or not, each had an editorial process that governed what news was shared. We had a loyalty to our chosen news destinations that was built on trust.

Over the past two decades, this trust has broken down due to one primary factor – our continuing use of social media. And that has dramatically shifted how we get our news.

In the US, three out of every four people use online sources to get their news. One in two uses social media. Those aged 18 to 24 are more than twice as likely to rely on social media. In the UK, under-35s get more of their news from social media than from any other source.

Also, influencers have become a source of news, particularly amongst young people. In the US, a quarter of those 18 to 24 used Instagram as a source of news about COVID.

This means that most times, we’re getting our news through a social media lens. Let’s set aside for a moment the filtering and information veracity problems that introduces. Let’s just talk about intent for a moment.

I have talked extensively in the past about information foraging when it comes to search. When information is “patchy” and spread diversely, the brain has to make a quick, calculated guess about which patch is most likely to contain the information it’s looking for. With information foraging, the intent we have frames everything that comes after.
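To make the foraging metaphor concrete, here’s a toy sketch of the patch-choice idea: the forager sizes up each patch by expected information gain per unit of effort and heads for the best bet. The patches, payoffs and costs below are invented purely for illustration; this is my own sketch, not a model from the Reuters report.

```python
# Toy illustration of information foraging: pick the "patch" (source) with the
# best expected information gain per unit of effort. All numbers are invented.

patches = {
    # patch name: (expected useful items found, effort in minutes)
    "official health site": (8, 10),
    "news site": (5, 6),
    "social media feed": (2, 1),
}

def information_scent(patch):
    """Crude stand-in for 'scent': expected gain per unit of effort."""
    gain, effort = patches[patch]
    return gain / effort

best_patch = max(patches, key=information_scent)
print(best_patch)  # with these made-up numbers, the low-effort feed wins: "social media feed"
```

Change what counts as a gain (reliability instead of sheer volume, say) and the best patch changes with it. That is really the point about intent.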

In today’s digital world, information sources have disaggregated into profoundly patchy environments. We still go to news-first destinations like CNN or Fox News but we also get much of our information about the world through our social media feeds. What was interesting about the Reuters report was that it was started before the COVID pandemic, but the second part of the study was conducted during COVID. And it highlights a fascinating truth about our relationship with the news when it comes to trust.

The study shows that the majority of us don’t trust the news we get through social media, but most times, we’re okay with that. Less than 40% of people trust the news in general, and even when we pick a source, less than half of us trust that particular channel. Only 22% indicated they trust the news they see in social media. Yet half of us admit we use social media to get our news. The younger we are, the more reliant we are on social media for news. The fastest-growing sources for news amongst all age groups – but especially those under 30 – are Instagram, Snapchat and WhatsApp.

Here’s another troubling fact that fell out of the study. Social platforms, especially Instagram and Snapchat, are dominated by influencers. That means that much of our news comes to us by way of a celebrity influencer reposting it on their feed. This is a far cry from the editorial review process that used to act as a gatekeeper on our trusted news sources.

So why do we continue to use news sources we admit we don’t trust? I suspect it may have to do with something called the Meaning Maintenance Model. Proposed in 2006 by Heine, Proulx and Vohs, the model speculates that a primary driver for us is to maintain our beliefs about how the world works. This is related to the sensemaking loop (Klein, Moon and Hoffman) I’ve also talked about in the past. We make sense of the world by starting with an existing frame of what we believe to be true. If what we’re experiencing is significantly different from what we believe, we update our frame to align with the new evidence.

What the Meaning Maintenance Model suggests is that we will go to great lengths to avoid updating our frame. It’s much easier just to find supposed evidence that supports our current beliefs. So, if our intent is to get news that supports our existing world view, social media is the perfect source. It’s algorithmically filtered to match our current frame. Even if we believe the information is suspect, it still comforts us to have our beliefs confirmed. This works well for news about politics, societal concerns and other ideologically polarized topics.

We don’t like to admit this is the case. According to the Reuters study, 60% of us indicate we want news sources that are objective and not biased toward any particular point of view. But this doesn’t jibe with reality at all. As I wrote in a previous column, almost all mainstream news sources in the US appear to have a significant bias to the right or left. If we’re talking about news that comes through social media channels, that bias is doubled down on. In practice, we are quite happy foraging from news sources that are biased, as long as that bias matches our own.

But then something like COVID comes along. Suddenly, we all have skin in the game in a very real and immediate way. Our information foraging intent changes and our minimum threshold for the reliability of our news sources goes way up. The Reuters study found that when it comes to sourcing COVID information, the most trusted sources are official sites of health and scientific organizations. The least trusted sources are random strangers, social media and messaging apps.

It requires some reading between the lines, but the Reuters study paints a troubling picture of the state of journalism and our relationship with it. Where we get our information directly impacts what we believe. And what we believe determines what we do.

These are high stakes in an all-in game of survival.

Are We Killing Politeness?

One of the many casualties of our changing culture seems to be politeness. When the President of the United States is the poster child for rude behavior, it’s tough for politeness to survive. This is especially true in the no-holds-barred, digitally distanced world of social media.

I consider myself to be reasonably polite. Being so, I also expect this in others. Mild rudeness makes me anxious. Excessive rudeness makes me angry. This being the case, I am troubled by the apparent decline of civility. So today I wanted to take a look at politeness and why it might be slipping away from us.

First of all, we have to understand that politeness is not universal. What is considered polite in one culture is not in another.

Secondly, being polite is not the same as being friendly. Or empathetic. Or being respectful of others. Or being compassionate, according to this post from The Conversation. There is a question of degree and intent here. Being polite is a rather unique behavior that encompasses both desirable and less desirable qualities. And that raises the question: What is the purpose of politeness? Is a less-polite world a good or a bad thing?

First, let’s look at the origin of the word. It comes from the Latin “politus,” meaning “polished — made smooth.” Just in case you’re wondering, “politics” does not come from the same root. That comes from the Greek word for “citizen” — “polites.”

One last etymological nugget. The closest comparison to polite may be “nice,” which originates from the Latin “nescius,” meaning “ignorant”. Take that for what it’s worth.

This idea of politeness as a type of social “polish” really comes from Europe — and especially Britain. There, politeness was linked with class hierarchies. Being polite was a sign of good breeding — a dividing line between the high-born and the riffraff. This class-bound definition came along with the transference of the concept to North America.

Canada is typically considered one of the most polite nations in the world. As a Canadian who has traveled a fair amount, I would say that’s probably true.

But again, there are variations in the concept of politeness and how it applies to both Canadians and Americans.

When you consider the British definition of politeness, you begin to see how Americans and Canadians might respond differently to it. To understand that is to understand much of what makes up our respective characters.

As a Canadian doing much of my business in the U.S. for many years, I was always struck by the difference in approaches I found north and south of the 49th parallel. The Canadian businesses we met with were unfailingly polite, but seldom bought anything. Negotiating the prospect path in Canada was a long and often frustrating journey.

American businesses were much more likely to sign a contract. On the whole, I would also say they were friendlier in a more open and less-guarded way. I have to admit that in a business setting, I preferred the American approach.

According to anthropologists Penelope Brown and Stephen Levinson, who have extensively researched politeness, there is negative and positive politeness. Negative politeness is concerned with adhering to social norms, often by deferring to someone or something else.

This is Canadian politeness personified. Our entire history is one of deference to greater powers, first to our colonial masters — the British and French — and, more recently, from our proximity to the cultural and economic master that is the U.S.

For Canadians, deferral is survival. As former Prime Minister Pierre Trudeau once said about the U.S., “Living next to you is in some ways like sleeping with an elephant. No matter how friendly and even-tempered is the beast, if I can call it that, one is affected by every twitch and grunt.”

Negative politeness is a way to smooth out social friction, but there is good and bad here. Ideally it should establish a baseline of trust, respect and social capital. But ultimately, politeness is a consensus of compromise.  And that’s why Canadians are so good at it.  Negative politeness wants everything to be fair.

But then there is positive politeness, which is more American in tone and nature. This is a desire to help others, making it more closely linked to compassion. But in this noble motive there is also a unilateral defining of what is right and wrong. Positive politeness tries to make everything right, based on the protagonist’s definition of what that is.

The two sides of politeness actually come from different parts of the brain. Negative politeness comes from the part of the brain that governs aggression. It is all about applying brakes to our natural instincts. But positive politeness comes from the part of the brain that regulates social bonding and affiliation.

When you understand this, you understand the difference between Canadians and Americans in what we consider polite. For the former, the definition comes handed down from British, class-linked origins, and has morphed into a culture of compromise and deferral.

The American definition comes from many generations of being the de facto moral leaders of the free world.

We (Canadians) want to be nice. You (Americans) want to be right. The two are not mutually exclusive, but they are also not the same thing. Not by a long shot.

What Trump has done (with a certain kind of perverse genius) is play on this national baseline of compassion. He has wantonly discarded any vestiges of politeness and split the nation on what it means to be right.

But by eliminating politeness, he has also eliminated that governor of our behavior. Reactions about what is right and wrong are now immediate, rough and unfiltered.

The polish that politeness brings — that deferral of spoken judgement for even a brief moment in order to foster cooperation — is gone. We have no opportunity to consider other perspectives. We have no motive to cooperate. This is abundantly apparent on every social media platform.

In game theory, politeness resembles a highly successful strategy commonly called “tit for tat.” It starts by assuming a default position of fairness from the other party, continues to cooperate if this proves to be true, and escalates to retaliation if it’s not. But this tactic evolved in a world of face-to-face encounters. Somehow, it seems less needed in a divided world where rudeness and immediate judgement are the norm.
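For the curious, the strategy fits in a few lines of code. Here’s a minimal sketch of tit for tat in an iterated prisoner’s dilemma, using standard textbook payoffs; the opponent’s moves are invented, and none of this comes from the sources cited in this column.

```python
# Minimal tit for tat in an iterated prisoner's dilemma.
# Payoffs are standard textbook values; the opponent's moves are invented.

COOPERATE, DEFECT = "C", "D"

def tit_for_tat(opponent_history):
    # Be polite on the first move; afterwards, mirror the opponent's last move.
    return COOPERATE if not opponent_history else opponent_history[-1]

def play(opponent_moves):
    payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
    my_history, their_history, score = [], [], 0
    for their_move in opponent_moves:
        my_move = tit_for_tat(their_history)
        score += payoff[(my_move, their_move)]
        my_history.append(my_move)
        their_history.append(their_move)
    return my_history, score

# A polite opponent gets cooperation; a rude one gets one free shot, then retaliation.
print(play(["C", "C", "D", "D", "C"]))  # (['C', 'C', 'C', 'D', 'D'], 12)
```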

Still, I will cling to my notions of politeness. Yes, sometimes it seems to get in the way of definitive action. But on the whole, I would rather live in a world that’s a little nicer and a little more polite, even if that seems foolish to some of you.

Crisis? What Crisis?

You would think that a global pandemic would hold our attention for a while.

Nope.

We’re tired of it. We’re moving on.  We’re off to the next thing.

Granted, in this case the next thing deserves to be focused on. It is abysmal that it still exists, and it should be focused on – probably for the rest of our lives and beyond, until it ceases to be a thing. But it won’t be. Soon we’ll be talking about something else.

And that’s the point of this post – our collective inability to remain focused on anything without being distracted by the next breaking story in our news feed. How did we come to this?

I blame memes.

To a certain extent, our culture is the product of who we are, and who we are is a product of our culture. Each is shaped by the other, going forward in a constantly improvised pas de deux. Humans create the medium – which then becomes part of the environment we adapt to.

Books and the printed word changed who we were for over five centuries. Cinema has been helping to define us for about 125 years. And radio and television have been moulding us for the past century. Our creations have helped create who we are.

This has never been truer than with social media. Unlike other media, which took discrete chunks of our time and attention, social media is ubiquitous and pervasive. According to a recent survey, we spend on average 2 hours and 23 minutes per day on social media. That is about 13% of our waking hours. Social media has become intertwined with our lives to the point that we’ve had to start qualifying what happens where with labels like “IRL” (In Real Life).
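(For the record, here’s the arithmetic. The survey supplies the 2 hours and 23 minutes; the length of a “waking day” is my own assumption, so the percentage shifts a little depending on how much you sleep.)

```python
# How much of a waking day is 2 hours and 23 minutes?
# The 2:23 figure is from the survey; the waking-day lengths are my assumptions.
social_minutes = 2 * 60 + 23                     # 143 minutes per day
for waking_hours in (16, 18):
    share = social_minutes / (waking_hours * 60)
    print(waking_hours, round(share * 100, 1))   # 16h -> 14.9%, 18h -> 13.2%
```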

There is another difference between social media and what has come before it. Almost every previous entertainment medium that has demanded our attention has been built on the foundation of a long-form narrative arc. Interacting with each medium has been a process – a commitment to invest a certain amount of time to go on a journey with the storyteller. The construction of a story depends on patterns that we instantly recognize. Once we identify them, we are invested in discovering the outcome. We understand that our part of the bargain is to exchange our time and attention. The payoff is the joy that comes from making sense of a new world or situation, even if it is imaginary.

But social media depends on a different exchange. Rather than tapping into our inherent love of the structure of a story, it depends on something called variable intermittent rewards. Essentially, it’s the same hook that casinos use to keep people at a slot machine or table. Not only is it highly addictive, it also pushes us to continually scroll to the next thing. It completely bypasses the thinking part of our brains and connects directly to the reward center buried in our limbic system. Rather than ask for our time and attention, social media dangles a never-ending array of bright, shiny memes that ask nothing from us: no thinking, almost no attention and a few seconds of our time at most. For a lazy brain, this is the bargain of a lifetime.
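To see why that schedule is so sticky, here’s a tiny simulation of a variable intermittent reward: most scrolls pay off with nothing, but any one of them might be the jackpot, so you keep going. The 10% odds and the session length are invented for illustration; this is my own sketch, not data from any platform.

```python
# Tiny simulation of a variable intermittent reward schedule: each scroll has a
# small, unpredictable chance of a "hit" (something interesting). The odds and
# scroll count are invented, purely for illustration.
import random

def scroll_session(max_scrolls=50, hit_probability=0.10, seed=42):
    random.seed(seed)
    hits = []
    for scroll in range(1, max_scrolls + 1):
        if random.random() < hit_probability:
            hits.append(scroll)    # an unpredictable payoff -- the hook
    return hits

print(scroll_session())  # hits land on unpredictable scrolls; the schedule, not the content, keeps you going
```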

It’s probably not a coincidence that the media most dependent on advertising are also the media that avoid locking our attention on a single topic for an extended period. This makes social media the perfect match for interruptive ad forms. They are simply slotted into the never-ending scroll of memes.

Social media has only been around for a little over 2 decades. It has been a significant part of our lives for half that time. If even a little bit of what I suspect is happening is indeed taking place, that scares the hell out of me. It would mean that no other medium has changed us so much and so quickly.

That is something worth paying attention to. 

How Social Media is Rewiring our Morality

Just a few short months ago, I never dreamed that one of the many fault lines in our society would be who wore a face mask and who didn’t. But on one day last week, most of the stories on CNN.com were about just that topic.

For reasons I’ll explain at the end of this post, the debate has some interesting moral and sociological implications. But before we get to that, let’s address this question: What is morality anyway?

Who’s On First?

In the simplest form possible, there is one foundational evolutionary spectrum underlying what we consider our own morality: Are we more inclined to worry about ourselves or about others? Each of us plots our own morals somewhere on this spectrum.

At one end we have the individualist, the one who continually puts “me first.” Typically, the morals of those focused only on themselves concern individual rights, freedoms and beliefs specific to them. Concern for these rights does not extend to anyone considered outside their own “in” group.

As we move across the spectrum, we next find the familial moralist: Those who worry first about their own kin. Morality is always based on “family first.”

Next come those who are more altruistic, as long as that altruism is directed at those who share common ground with them. You could call this the “we first” group.

Finally, we have the true altruist, who believes in a type of universal altruism and that a rising tide truly lifts all boats.  

This concept of altruism was always a bit of a puzzle for early evolutionists. In sociological parlance, it’s called proactive prosociality — doing something nice for someone who is not closely related to you, without being asked. It seems at odds with the concept of the Selfish Gene, first introduced by evolutionary biologist Richard Dawkins in his book of the same name in 1976.

But as Dawkins has clarified over and over again since the publication of the book, selfish genes and prosociality are not mutually exclusive. They are, in fact, symbiotic.

Moral Collaboration

We have spent about 95% of our entire time as a species as hunter-gatherers. If we have evolved a mechanism of morality, it would make sense for it to be most functional in that type of environment.

Hunter-gatherer societies need to collaborate. This is where the seeds of reciprocal altruism can be found. A group of people who work together to ensure continued food supplies will outlive and out-reproduce a group of people who don’t.  From a selfish gene perspective, collaboration will beat stubborn individualism.

But this type of collaboration comes with an important caveat: It only applies to individuals that live together in the same communal group.

Social conformity acts as a manual override on our own moral beliefs. Even in situations where we may initially have a belief of what is right and wrong, most of us will end up going with what the crowd is doing.

It’s an evolutionary version of the wisdom of crowds. But our evolved social conformity safety net comes with an important caveat: it assumes that everyone in the group is  in the same physical location and dealing with the same challenge.  

There is also a threshold effect that determines how likely we are to conform. How we act in any given situation depends on a number of factors: how strong our existing beliefs are, the situation we’re in, and how the crowd is acting. This makes sense. Our conformity is inversely related to our level of perceived knowledge. The more we think we know, the less likely it is that we’ll conform to what the crowd is doing.

We should expect that a reasonably “rugged” evolutionary environment where survival is a continual struggle would tend to produce an optimal moral framework somewhere in the middle of familial and community altruism, where the group benefits from collaboration but does not let its guard down against outside threats.

But something interesting happens when the element of chronic struggle is removed, as it is in our culture. It appears that our morality tends to polarize to opposite ends of the spectrum.

Morality Rewired

What happens when our morality becomes our personal brand, part of who we believe we are? When that happens, our sense of morality migrates from the evolutionary core of our limbic brain to our cortex, the home of our personal brand. And our morals morph into a sort of tribal identity badge.

In this case, social media can short-circuit the evolutionary mechanisms of morality.

For example, there is a proven correlation between prosociality and the concept of “watching eyes.” We are more likely to be good people when we have an audience.

But social media twists the concept of audience and can nudge our behavior from the prosocial to the more insular and individualistic end of the spectrum.

The success of social conformity and the wisdom of crowds depends on a certain heterogeneity in the ideological makeup of the crowd. The filter bubble of social media strips this from our perceived audience, as I have written before. It reinforces our moral beliefs by surrounding us with an audience that also shares those beliefs. The confidence that comes from this tends to push us away from the middle ground of conformed morality toward outlier territory. Perhaps this is why we’re seeing the polarization of morality that is all too evident today.

As I mentioned at the beginning, there may never have been  a more observable indicator of our own brand of morality than the current face-mask debate.

In an article on Businessinsider.com, Daniel Ackerman compared it to the crusade against seat belts in the 1970s. Certainly when it comes to our perceived individual rights and not wanting to be told what to do, there are similarities. But there is one crucial difference: You wear seat belts to save your own life. You wear a face mask to save other lives.

We’ve been told repeatedly that the main purpose of face masks is to stop you from spreading the virus to others, not the other way around. That makes the decision of whether or not to wear a face mask the ultimate indicator of your openness to reciprocal altruism.

The cultural crucible in which our morality is formed has changed. Our own belief structure of right and wrong is becoming more inflexible. And I have to believe that social media may be the culprit.

A.I. and Our Current Rugged Landscape

In evolution, there’s something called the adaptive landscape. It’s a complex concept, but in the smallest nutshell possible, it refers to how fit species are for a particular environment. In a relatively static landscape, status quos tend to be maintained. It’s business as usual. 

But a rugged adaptive landscape — one beset by disruption and adversity — drives evolutionary change through speciation, the introduction of new and distinct species.
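To see the difference a rugged landscape makes, here’s a toy sketch: a simple hill-climber on a smooth landscape versus a rugged one. Both landscapes and the step rule are invented for illustration; on the smooth surface, small steps reliably find the single peak, while on the rugged one the climber tends to stall on a local peak until something shakes it loose.

```python
# Toy contrast between a smooth and a rugged fitness landscape using a simple
# hill-climber. Both landscapes are invented, purely for illustration.
import math
import random

def smooth(x):
    return -(x - 5.0) ** 2                                # one broad peak at x = 5

def rugged(x):
    return -(x - 5.0) ** 2 + 3.0 * math.sin(4.0 * x)      # the same slope, plus many local peaks

def hill_climb(fitness, x=0.0, steps=2000, step_size=0.05, seed=1):
    random.seed(seed)
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if fitness(candidate) > fitness(x):                # accept only uphill moves
            x = candidate
    return round(x, 2)

print(hill_climb(smooth))   # ends up near the global peak at x = 5
print(hill_climb(rugged))   # typically stalls on a nearby local peak, far short of the best one
```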

The concept is not unique to evolution. Adapting to adversity is a feature of all complex, dynamic systems. Our economy has its own version: economist Joseph Schumpeter called it the gale of creative destruction.

The same is true for cultural evolution. When shit gets real, the status quo crumbles like a sandcastle at high tide. When it comes to life today and everything we know about it, we are definitely in a rugged landscape. COVID-19 might be driving us to our new future faster than we ever suspected. The question is, what does that future look like?

Homo Deus

In his follow-up to his best-seller “Sapiens: A Brief History of Humankind,” author Yuval Noah Harari takes a shot at predicting just that. “Homo Deus: A Brief History of Tomorrow” looks at what our future might be. Written well before the pandemic (in 2015), the book deals frankly with the impending irrelevance of humanity.

The issue, according to Harari, is the decoupling of intelligence and consciousness. Once we break the link between the two, the human vessels that have traditionally carried intelligence become superfluous. 

In his book, Harari foresees two possible paths: techno-humanism and Dataism. 

Techno-humanism

In this version of our future, we humans remain essential, but not in our current form. Thanks to technology, we get an upgrade and become “super-human.”

Dataism

Alternatively, why do we need humans at all? Once intelligence becomes decoupled from human consciousness, will it simply decide that our corporeal forms are a charming but antiquated oddity and just start with a clean slate?

Our Current Landscape

Speaking of clean slates, many have been talking about the opportunity COVID-19 has presented to us to start anew. As I was writing this column, I received a press release from MIT promoting a new book, “Building the New Economy,” edited by Alex Pentland. I haven’t read it yet, but based on the first two lines in the release, it certainly seems to be following this type of thinking: “With each major crisis, be it war, pandemic, or major new technology, there has been a need to reinvent the relationships between individuals, businesses, and governments. Today’s pandemic, joined with the tsunami of data, crypto and AI technologies, is such a crisis.”

We are intrigued by the idea of using the technologies available to us to build a societal framework less susceptible to inevitable Black Swans. But is this just an invitation to pry open Pandora’s box and usher in the future Yuval Noah Harari is warning us about?

The Debate 

Harari isn’t the only one seeing the impending doom of the human race. Elon Musk has been warning us about it for years. In our race to embrace artificial intelligence, Musk sees the biggest threat to human existence we have ever faced.

“I am really quite close, I am very close, to the cutting edge in AI and it scares the hell out of me,” warns Musk. “It’s capable of vastly more than almost anyone knows and the rate of improvement is exponential.”

There are those who pooh-pooh Musk’s alarmism, calling it much ado about nothing. Noted Harvard cognitive psychologist and author Steven Pinker, whose rose-colored vision of humanity’s future reliably trends up and to the right, dismissed Musk’s warnings with this: “If Elon Musk was really serious about the AI threat, he’d stop building those self-driving cars, which are the first kind of advanced AI that we’re going to see.”

In turn, Musk puts Pinker’s Pollyanna perspective down to human hubris: “This tends to plague smart people. They define themselves by their intelligence and they don’t like the idea that a machine could be way smarter than them, so they discount the idea — which is fundamentally flawed.”

From Today Forward

This brings us back to our current adaptive landscape. It’s rugged. The peaks and valleys of our day-to-day reality are more rugged than they have ever been — at least in our lifetimes.

We need help. And when you’re dealing with a massive threat that involves probability modeling and statistical inference, more advanced artificial intelligence is a natural place to look. 

Would we trade more invasive monitoring of our own bio-status, and the aggregation of that data, for the prevention of more deaths? In a heartbeat.

Would we put our trust in algorithms that can instantly crunch vast amounts of data our own brains couldn’t possibly comprehend? We already have.

Will we even adopt connected devices constantly streaming the bits of data that define our existence to some corporate third party or government agency in return for a promise of better odds that we can extend that existence? Sign us up.

We are willingly tossing the keys to our future to the Googles, Apples, Amazons and Facebooks of the world. As much as the present may be frightening, we should consider the steps we’re taking carefully.

If we continue rushing down the path towards Yuval Noah Harari’s Dataism, we should be prepared for what we find there: “This cosmic data-processing system would be like God. It will be everywhere and will control everything, and humans are destined to merge into it.”

The Mother of all Mood Swings

How are you doing? 

Yes, you. 

I know how I’m doing — today, anyway. It varies day to day. It depends on the news. It depends on the weather. It depends on Trump’s Twitter stream.

Generally, I’m trying to process the abnormal with the tools I have. I don’t know precisely how you’re doing, but I suspect you’re going through your own processing with your own tools.

I do know one thing. The tools I have are pretty much the same tools you have, at least when we look at them in the broad strokes. It’s one of the surprising things about humans. We all go through some variation of the same process when we deal with life’s big events. 

Take grief and other traumatic life changes. We’re pretty predictable in how we deal with them. So predictable, in fact, that there’s a psychological model with its own acronym: DABDA. It’s known as the five stages of grief: denial, anger, bargaining, depression and acceptance. It was first introduced in 1969 by Swiss-American psychiatrist Elisabeth Kübler-Ross.

Noted American neurobiologist and author Robert Sapolsky marvels at the universality of our processing of grief in his book “The Trouble with Testosterone”: “Poems, paintings, symphonies by the most creative artists who ever lived, have been born out of mourning… We cry, we rage, we demand that the oceans’ waves stop, that the planets halt their movements in the sky, all because the earth will no longer be graced by the one who sang lullabies as no one else could; yet that, too, is reducible to DABDA. Why should grief be so stereotypical?”

But it’s not just bad stuff we process this way. If you look at how we process any big change, you’ll find there are pretty predictable stages we humans go through.

So why are we so predictable in how we deal with change? In general, these are all variations of the sensemaking cycle, which is how we parse the world around us. We start with a frame — an understanding of what we believe to be true — and we constantly compare this to new information we get from our environment. 

Because we are cognitively energy-efficient, we are reluctant to throw out old frames and adopt new ones, especially when those new ones are being forced upon us. It’s just the way we’re wired. 

But life change is usually a solo journey, and we rely on anchors to help us along the way. We rely on our psychoscapes, the cognitive environments in which our minds typically operate. Friends, families, favorite activities, social diversions: these are the things that we can rely on for an emotional boost, even if only temporarily.

But what if everyone is experiencing trauma at the same time? What if our normal psychoscape is no longer there? What then?

Then we enter the SNAFU zone.

SNAFU is an acronym coined in World War II:  “situation normal, all f*cked up.”  It was used to refer to a situation that is bad, but is also a normal state of affairs. 

We are talking a lot about the new normal. But here’s the thing: The new situation normal is going to be a shit show, guaranteed to be all f*cked up. And it’s going to be that way because everyone  — and I mean everyone — is going to be going through the Mother of all Mood Swings. 

First of all, although the stages of managing change may be somewhat universal, the path we take through them is anything but. Some will get stuck at the denial and anger stage and storm the state legislature with assault weapons demanding a haircut. Some are already at acceptance, trying to navigate through a world that is officially SNAFU. We are all processing the same catalyst of change, but we’re at different places in that process. 

Secondly, the psychological anchors we depend on may not be there for us. When we are going through collective stress, we tend to rely on community. We revert to our evolutionary roots of being natural herders. Without exception, the way humans have always dealt with massive waves of change is to come together with others. And this is where a pandemic that requires social distancing throws a king-sized wrench in the works. We can’t even get a hug to help us through a bad day.

As the levels of our collective stress climb, there are bound to be a lot of WTF moments. Nerves will fray and tempers will flare. We will be walking on eggshells. There will be little patience for perspectives that differ from our own. Societal divides will deepen and widen. The whole world will become moodier than a hormonal teenager. 

Finally, we have all of the above playing out in a media landscape that was already fractured to an unprecedented level going into this. All the many things that are FU in this particular SNAFU will be posted, tweeted, shared and reshared. There will be no escape from it. 

Unlike the hormonal teenager, we can’t send COVID-19 to its room.

The Showdown between Smart and Stupid

If you have been wondering how the hell Dr. Anthony Fauci or Dr. Deborah Birx continues to function in the environment they find themselves in, you have company. I too have had my WTF moments and have been pondering, “Is it just me, or has the entire world become dumber?”

In answer to this question, I don’t think the average IQ of the population has slipped, but it certainly seems so. Especially in the White House.

Now, I meant the above as a rhetorical question. There is evidence that we are – on average – getting smarter. It’s called the Flynn Effect. There is also evidence we’re getting dumber. It probably nets out to zero, or at least to an insignificant move in either direction. I suspect recent signs of stupidity are more a factor of availability bias, as I’ve talked about before. Thanks to our news feeds, there is ample evidence of “stupid is as stupid does.”

What is true is that dumb people have a voice they’ve never had before, thanks to all types of media, but most especially social media. The current populist political climate has also enshrined stupidity as an unfortunate side effect of democracy and free speech. Ignorance is running rampant across the heartlands of America and many other countries – including my own.

There are some frightening network effects that come from this. As stupidity gums up the gears of the governmental machinery that should be protecting us, we’re starting to see smart people making an end run around it. As the level of public discourse gets continually dumbed down, the really smart people are avoiding it altogether and quietly reinventing the world according to their own rules.

For example, according to the Brookings Institution, there has been an 86% turnover in Trump’s top advisors since he took office. Based on statistical probability alone, at least a few of these had to be smart people.

This is not surprising. Smart people tend to avoid other people in general. At least one study has found that they are happiest when they’re alone. And this is especially true when they’re surrounded by stupid people. All the smart people I know do not suffer fools gladly. So, what we’re seeing is a polarization of intelligence, with a growing divide between the smart and the stupid.

Unfortunately, this is also polarizing our attitudes towards science. When I was growing up in the Sixties, we revered science and respected smart people. And when I say “we” I mean the greater collective “we.” Maybe it was because science was giving us hope at the time. We were literally shooting for the Moon. But if you listen to scientists today, you are quickly swamped under a tsunami of scary-as-shit bad news. It’s painful to be smart. For the last decade or so, ignorance did appear to be bliss.

That brings us to COVID-19.

One thing the current pandemic has done is suddenly make the world very interested in things it never cared about before – like the science of epidemiology and the bureaucracy of pharmaceutical clinical trials. It has created a worldwide Venn diagram where the circles of stupidity and science are forced to overlap.

In this sudden focusing of the world’s attention on a single topic, it has also made us realize the price of stupidity. What was before an irritant is now deadly.

The danger here is that we will probably see an intellitocracy emerge. But we won’t realize it, because it will be hidden from most of us. And it will be hidden because smart people are going to get exasperated and avoid stupid people. We don’t want that to happen.

We need science – and smart people – in the public domain. We can’t afford to have them withdraw in order to save themselves from having to deal with stupidity. And more than anything, we mustn’t let science go from being publicly funded to privately funded because it’s the path of least resistance. We need our public domains fully staffed with smart people.

Intelligence will ultimately prevail over ignorance. In the arms race of evolution, stupid people are bringing a knife to a gun fight. It may not seem like that now, but eventually the smart will be the victors. This means that smart people are going to define what our lives and society look like. And we need to know what they’re thinking about. We need as much of that as possible happening in a public forum, not in a private research lab somewhere in Silicon Valley.

Here’s just one example of why we need to be paying attention to what smart people are thinking about. Author and social activist Naomi Klein – who has previously warned us about unbridled capitalism, unethical marketing and other apocalyptic trends – is now warning us about a potential coup against personal privacy that’s taking shape under the cover of the pandemic.

Klein’s latest piece in theintercept.com reveals how New York Governor Andrew Cuomo is assembling a super-smart SWAT team of billionaires including Bill Gates, Eric Schmidt and others to help him put a “high-tech dystopia” together as a new post-pandemic future:

“It has taken some time to gel, but something resembling a coherent Pandemic Shock Doctrine is beginning to emerge. Call it the “Screen New Deal.” Far more high-tech than anything we have seen during previous disasters, the future that is being rushed into being as the bodies still pile up treats our past weeks of physical isolation not as a painful necessity to save lives, but as a living laboratory for a permanent — and highly profitable — no-touch future.”

We are balanced on a precipice between smart and stupid. Smart will ultimately prevail. When it does, it shouldn’t come as a surprise to us. Ideally, we should have some say in the formation of our collective future.