Are We Killing Politeness?

One of the many casualties of our changing culture seems to be politeness. When the President of the United States is the poster child for rude behavior, it’s tough for politeness to survive. This is especially true in the no-holds-barred, digitally distanced world of social media.

I consider myself to be reasonably polite. Being so, I also expect this in others. Mild rudeness makes me anxious. Excessive rudeness makes me angry. This being the case, I am troubled by the apparent decline of civility. So today I wanted to take a look at politeness and why it might be slipping away from us.

First of all, we have to understand that politeness is not universal. What is considered polite in one culture is not in another.

Secondly, being polite is not the same as being friendly. Or empathetic. Or being respectful of others. Or being compassionate, according to this post from The Conversation. There is a question of degree and intent here. Being polite is a distinct behavior that encompasses both desirable and less desirable qualities. And that raises the question: What is the purpose of politeness? Is a less-polite world a good or a bad thing?

First, let’s look at the origin of the word. It comes from the Latin “politus,” meaning “polished — made smooth.” Just in case you’re wondering, “politics” does not come from the same root. That comes from the Greek word for “citizen” — “polites.”

One last etymological nugget. The closest comparison to polite may be “nice,” which originates from the Latin “nescius,” meaning “ignorant”. Take that for what it’s worth.

This idea of politeness as a type of social “polish” really comes from Europe — and especially Britain. There, politeness was linked with class hierarchies. Being polite was a sign of good breeding — a dividing line between the high-born and the riffraff. This class-bound definition came along when the concept was transplanted to North America.

Canada is typically considered one of the most polite nations in the world. As a Canadian who has traveled a fair amount, I would say that’s probably true.

But again, there are variations in the concept of politeness and how it applies to both Canadians and Americans.

When you consider the British definition of politeness, you begin to see how Americans and Canadians might respond differently to it. To understand that is to understand much of what makes up our respective characters.

As a Canadian doing much of my business in the U.S. for many years, I was always struck by the difference in approaches I found north and south of the 49th parallel. The Canadian businesses we met with were unfailingly polite, but seldom bought anything. Negotiating the prospect path in Canada was a long and often frustrating journey.

American businesses were much more likely to sign a contract. On the whole, I would also say they were friendlier in a more open and less-guarded way. I have to admit that in a business setting, I preferred the American approach.

According to anthropologists Penelope Brown and Stephen Levinson, who have extensively researched politeness, there is negative and positive politeness. Negative politeness is a concern with adhering to social norms, often by deferring to someone or something else.

This is Canadian politeness personified. Our entire history is one of deference to greater powers, first to our colonial masters — the British and French — and, more recently, to the cultural and economic master that is the U.S.

For Canadians, deferral is survival. As former Prime Minister Pierre Trudeau once said about the U.S., “Living next to you is in some ways like sleeping with an elephant. No matter how friendly and even-tempered is the beast, if I can call it that, one is affected by every twitch and grunt.”

Negative politeness is a way to smooth out social friction, but there is good and bad here. Ideally it should establish a baseline of trust, respect and social capital. But ultimately, politeness is a consensus of compromise.  And that’s why Canadians are so good at it.  Negative politeness wants everything to be fair.

But then there is positive politeness, which is more American in tone and nature. This is a desire to help others, making it more closely linked to compassion. But in this noble motive there is also a unilateral defining of what is right and wrong. Positive politeness tries to make everything right, based on the protagonist’s definition of what that is.

The two sides of politeness actually come from different parts of the brain. Negative politeness comes from the part of the brain that governs aggression. It is all about applying brakes to our natural instincts. But positive politeness comes from the part of the brain that regulates social bonding and affiliation.

When you understand this, you understand the difference between Canadians and Americans in what we consider polite. For the former, the definition is handed down from its British, class-linked origins and has morphed into a culture of compromise and deferral.

The American definition comes from many generations of being the de facto moral leaders of the free world.

We (Canadians) want to be nice. You (Americans) want to be right. The two are not mutually exclusive, but they are also not the same thing. Not by a long shot.

What Trump has done (with a certain kind of perverse genius) is play on this national baseline of compassion. He has wantonly discarded any vestiges of politeness and split the nation on what it means to be right.

But by eliminating politeness, he has also eliminated that governor of our behavior. Reactions about what is right and wrong are now immediate, rough and unfiltered.

The polish that politeness brings — that deferral of spoken judgement for even a brief moment in order to foster cooperation — is gone. We have no opportunity to consider other perspectives. We have no motive to cooperate. This is abundantly apparent on every social media platform.

In game theory, politeness resembles a highly successful strategy commonly called “tit for tat.” It starts by assuming a default position of fairness from the other party, continues to cooperate if that proves true, and escalates to retaliation if it doesn’t. But this tactic evolved in a world of face-to-face encounters. Somehow, it seems less needed in a divided world where rudeness and immediate judgement are the norm.
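If you want to see how simple that strategy really is, here is a minimal sketch of tit for tat in an iterated prisoner’s dilemma. The payoff values, the round count and the “always defect” opponent are illustrative assumptions on my part, not something drawn from Brown and Levinson or any particular game-theory study.

```python
# A minimal sketch of "tit for tat" in an iterated prisoner's dilemma.
# The payoff matrix and round count are illustrative assumptions only.

PAYOFFS = {  # (my move, their move) -> (my points, their points)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def tit_for_tat(opponent_history):
    """Start politely; afterwards, mirror the opponent's last move."""
    return "cooperate" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """A 'rude' baseline strategy for comparison."""
    return "defect"

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each side sees only the other's past moves
        move_b = strategy_b(history_a)
        pts_a, pts_b = PAYOFFS[(move_a, move_b)]
        score_a += pts_a
        score_b += pts_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    print(play(tit_for_tat, tit_for_tat))    # mutual politeness scores best overall
    print(play(tit_for_tat, always_defect))  # retaliation limits the damage
```

Run against itself, politeness (mutual cooperation) earns the best combined score; run against a persistent defector, the quick escalation to retaliation keeps the damage contained.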

Still, I will cling to my notions of politeness. Yes, sometimes it seems to get in the way of definitive action. But on the whole, I would rather live in a world that’s a little nicer and a little more polite, even if that seems foolish to some of you.

The Potential Woes of Working from Home

Many of you have now had a few months under your belt working virtually from home rather than going to the office. At least some of you are probably considering continuing to do so even after COVID recedes and the all clear is given to return to normal. A virtual workplace makes all kinds of rational sense – both for employees and employers. But there are irrational reasons why you might want to think twice before you fully embrace going virtual.

About a decade ago, my company also went with a hybrid virtual/physical workplace. As the CEO, I found a lot to like about it. It was a lot more economical than leasing more office space. It gave us the flexibility to recruit top talent in areas where we had no physical presence. And it seemed that technology was up to the task of providing the communication and workflow tools we needed to support our virtual members.

On the whole, our virtual employees also seemed to like it. It gave them more flexibility in their workday. It also made it less formal. If you wanted to work in pajamas and bunny slippers, so be it. And with a customer base spread across many time zones, it also made it easier to shift client calls to times that were mutually acceptable.

It seemed to be a win-win. For a while. Then we noticed that all was not wonderful in work-from-home land.

I can’t say productivity declined. We were always a results-based workplace, so as long as the work got done, we were happy. But we started to feel a shift in our previously strong corporate culture. Team-member complaints about seemingly minor things skyrocketed. We found less cohesion across teams. Finally – and most critically – it started to impact our relationships with our customers.

Right about the time all this was happening, we were acquired by a much bigger company. One of the dictates that was handed down from the new owners was that we establish physical offices and bring our virtual employees back to the mothership for the majority of their work-week. At the time, I wasn’t fully aware of the negative consequences of going virtual so I initially fought the decision. But to be honest, I was secretly happy. I knew something wasn’t quite right. I just wasn’t sure what it was. I suspected it might have been our new virtual team members.

The move back to a physical workplace was a tough one. Our virtual team members were very vocal about how this was a loss of their personal freedom. New HR fires were erupting daily and I spent much of my time fighting them. This, combined with the inevitable cultural consequences of being acquired, often made me shake my head in bewilderment. Life in our company was turning into a shit-show.

I wish I could say that after we all returned to the same workplace, we joined hands and sang a rousing chorus of Kumbaya. We didn’t. The damage had been done. Many of the disgruntled former virtual team members ended up moving on. The cultural core of the company remained with our original team members who had worked in the same office location for several years. I eventually completed my contract and went my own way.

I never fully determined what the culprit was. Was it our virtual team members? Or was it the fact that we embraced a virtual workplace without considering unintended consequences? I suspected it was a little of both.

Like I said, that was a decade ago. From a rational perspective, all the benefits of a virtual workplace seem even more enticing than they did then. But in the last 10 years, there has been research done on those irrational factors that can lead to the cracks in a corporate culture that we experienced.

Mahdi Roghanizad is an organizational behavior specialist from Ryerson University in Toronto. He has long looked at the limitations of computerized communication. And his research provides a little more clarity into our failed experiment with a virtual workplace.

Roghanizad has found that without real-life contact, the parts of our brain that provide us with the connections needed to build trust never turn on. In order to build a true relationship with another person, we need something called the Theory of Mind. According to Wikipedia, “Theory of mind is necessary to understand that others have beliefs, desires, intentions, and perspectives that are different from one’s own.”

But unless we’re physically face-to-face with another person, our brain doesn’t engage in this critical activity. “Eye contact is required to activate that theory of mind and when the eye contact is not there, the whole other signal information is not processed by our brain,” said Roghanizad. Even wearing a pair of sunglasses is enough to short circuit the process. Relegating contact to a periodic Zoom call guarantees that this empathetic part of our brains will never kick in.

But it’s not just being eye-ball to eye-ball. There are other non-verbal cues we rely on to connect with other people and create a Theory of Mind. Other research has shown the importance of pheromones and physical gestures like crossing your arms and leaning forward or back. This is why we subconsciously start to physically imitate people we’re talking to. The stronger the connection with someone, the more we imitate them.

This all comes back to the importance of bandwidth in the real world. A digital connection cannot possibly incorporate all the nuance of a face-to-face connection. And whether we realize it or not, we rely on that bandwidth to understand other people. From that understanding come the foundations of trusted relationships. And trusted relationships are the difference between a high-functioning work team and a dysfunctional one.

I wish I had known that ten years ago.

Crisis? What Crisis?

You would think that a global pandemic would hold our attention for a while.

Nope.

We’re tired of it. We’re moving on.  We’re off to the next thing.

Granted, in this case the next thing deserves to be focused on. It is abysmal that it still exists, and it should hold our focus – probably for the rest of our lives and beyond, until it ceases to be a thing. But it won’t. Soon we’ll be talking about something else.

And that’s the point of this post – our collective inability to remain focused on anything without being distracted by the next breaking story in our news feed. How did we come to this?

I blame memes.

To a certain extent, our culture is the product of who we are, and who we are is a product of our culture. Each is shaped by the other, going forward in a constantly improvised pas de deux. Humans create the medium – which then becomes part of the environment we adapt to.

Books and the printed word changed who we were for over five centuries. Cinema has been helping to define us for well over a century. And radio and television have been moulding us for the past hundred years. Our creations have helped create who we are.

This has never been truer than with social media. Unlike other media which took discrete chunks of our time and attention, social media is ubiquitous and pervasive. According to a recent survey, we spend on average 2 hours and 23 minutes per day on social media. That is about 13% of our waking hours.  Social media has become intertwined with our lives to the point that we had to start qualifying what happens where with labels like “IRL” (In Real Life).

There is another difference between social media and what has come before it. Almost every previous entertainment medium that has demanded our attention has been built on the foundation of a long form narrative arc. Interacting with each medium has been a process – a commitment to invest a certain amount of time to go on a journey with the storyteller. The construction of a story depends on patterns that are instantly recognized by us. Once we identify them, we are invested in discovering the outcome. We understand that our part of the bargain is to exchange our time and attention. The payoff is the joy that comes from us making sense of a new world or situation, even if it is imaginary.  

But social media depends on a different exchange. Rather than tapping into our inherent love of the structure of a story, it depends on something called variable intermittent rewards. Essentially, it’s the same hook that casinos use to keep people at a slot machine or table. Not only is it highly addictive, it also pushes us to continually scroll to the next thing. It completely bypasses the thinking part of our brains and connects directly to the reward center buried in our limbic system. Rather than ask for our time and attention, social media dangles a never-ending array of bright, shiny memes that ask nothing from us: no thinking, almost no attention and a few seconds of our time at most. For a lazy brain, this is the bargain of a lifetime.
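To make the contrast concrete, here is a minimal sketch of a variable intermittent reward schedule set against a predictable fixed one. The 20% hit rate, the scroll count and the idea of a “reward” are stand-ins I have assumed for illustration; they are not taken from any platform’s actual mechanics.

```python
# A minimal sketch of a variable intermittent reward schedule (the slot-machine
# hook) contrasted with a predictable fixed schedule. All numbers are
# illustrative assumptions only.
import random

def fixed_schedule(scroll_count):
    """A predictable payoff: something interesting every fifth item."""
    return scroll_count % 5 == 0

def variable_schedule(hit_probability=0.2):
    """An unpredictable payoff: each scroll *might* deliver something interesting."""
    return random.random() < hit_probability

def simulate(scrolls=20, seed=42):
    random.seed(seed)
    for i in range(1, scrolls + 1):
        fixed = "reward" if fixed_schedule(i) else "nothing"
        variable = "reward" if variable_schedule() else "nothing"
        print(f"scroll {i:2d}: fixed -> {fixed:7s} | variable -> {variable}")

if __name__ == "__main__":
    simulate()
```

Both schedules pay out at the same average rate, but only one of them is unpredictable, and it is the unpredictability, not the payout, that keeps a lazy brain scrolling “one more time.”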

It’s probably not a coincidence that the media most dependent on advertising are also the media that avoid locking our attention on a single topic for an extended period. This makes social media the perfect match for interruptive ad forms. They are simply slotted into the never-ending scroll of memes.

Social media has only been around for a little over 2 decades. It has been a significant part of our lives for half that time. If even a little bit of what I suspect is happening is indeed taking place, that scares the hell out of me. It would mean that no other medium has changed us so much and so quickly.

That is something worth paying attention to. 

How Social Media is Rewiring our Morality

Just a few short months ago, I never dreamed that one of the many fault lines in our society would be who wore a face mask and who didn’t. But on one day last week, most of the stories on CNN.com were about just that topic.

For reasons I’ll explain at the end of this post, the debate has some interesting moral and sociological implications. But before we get to that, let’s address this question: What is morality anyway?

Who’s On First?

In the simplest form possible, there is one foundational evolutionary spectrum underlying what we consider our own morality: Are we more inclined to worry about ourselves or to worry about others? Each of us plots our own morals somewhere on this spectrum.

At one end we have the individualist, the one who continually puts “me first.” Typically, the morals of those focused only on themselves concern individual rights, freedoms and beliefs specific to them. That concern does not extend to anyone considered outside their own “in” group.

As we move across the spectrum, we next find the familial moralist: Those who worry first about their own kin. Morality is always based on “family first.”

Next come those who are more altruistic, as long as that altruism is directed at those who share common ground with them. You could call this the “we first” group.

Finally, we have the true altruist, who believes in a type of universal altruism and that a rising tide truly lifts all boats.  

This concept of altruism has always been a bit of a puzzle for early evolutionists. In sociological parlance, it’s called proactive prosociality — doing something nice for someone who is not closely related to you without being asked. It seems at odds with the concept of the Selfish Gene, first introduced by evolutionary biologist Richard Dawkins in his book of the same name in 1976.

But as Dawkins has clarified over and over again since the publication of the book, selfish genes and prosociality are not mutually exclusive. They are, in fact, symbiotic.

Moral Collaboration

We have spent about 95% of our entire time as a species as hunter-gatherers. If we have evolved a mechanism of morality, it would make sense for it to be most functional in that type of environment.

Hunter-gatherer societies need to collaborate. This is where the seeds of reciprocal altruism can be found. A group of people who work together to ensure continued food supplies will outlive and out-reproduce a group of people who don’t.  From a selfish gene perspective, collaboration will beat stubborn individualism.

But this type of collaboration comes with an important caveat: It only applies to individuals that live together in the same communal group.

Social conformity acts as a manual override on our own moral beliefs. Even in situations where we may initially have a belief of what is right and wrong, most of us will end up going with what the crowd is doing.

It’s an evolutionary version of the wisdom of crowds. But our evolved social conformity safety net comes with an important caveat: it assumes that everyone in the group is  in the same physical location and dealing with the same challenge.  

There is also a threshold effect that determines how likely we are to conform. How we act in any given situation will depend on a number of factors: how strong our existing beliefs are, the situation we’re in, and how the crowd is acting. This makes sense. Our conformity is inversely related to our level of perceived knowledge. The more we think we know, the less likely it is that we’ll conform to what the crowd is doing.

We should expect that a reasonably “rugged” evolutionary environment where survival is a continual struggle would tend to produce an optimal moral framework somewhere in the middle of familial and community altruism, where the group benefits from collaboration but does not let its guard down against outside threats.

But something interesting happens when the element of chronic struggle is removed, as it is in our culture. It appears that our morality tends to polarize to opposite ends of the spectrum.

Morality Rewired

What happens when our morality becomes our personal brand, part of who we believe we are? When that happens, our sense of morality migrates from the evolutionary core of our limbic brain to our cortex, the home of our personal brand. And our morals morph into a sort of tribal identity badge.

In this case, social media can short-circuit the evolutionary mechanisms of morality.

For example, there is a demonstrated correlation between prosociality and the “watching eye” effect. We are more likely to be good people when we have an audience.

But social media twists the concept of audience and can nudge our behavior from the prosocial to the more insular and individualistic end of the spectrum.

The success of social conformity and the wisdom of crowds depends on a certain heterogeneity in the ideological makeup of the crowd. The filter bubble of social media strips this from our perceived audience, as I have written before. It reinforces our moral beliefs by surrounding us with an audience that shares those beliefs. The confidence that comes from this tends to push us away from the middle ground of conformed morality toward outlier territory. Perhaps this is why we’re seeing the polarization of morality that is all too evident today.

As I mentioned at the beginning, there may never have been  a more observable indicator of our own brand of morality than the current face-mask debate.

In an article on Businessinsider.com, Daniel Ackerman compared it to the crusade against seat belts in the 1970s. Certainly when it comes to our perceived individual rights and not wanting to be told what to do, there are similarities. But there is one crucial difference. You wear seat belts to save your own life. You wear a face mask to save other lives.

We’ve been told repeatedly that the main purpose of face masks is to stop you spreading the virus to others, not the other way around. That makes the decision of whether you wear a face mask or not the ultimate indicator of your openness to reciprocal altruism.

The cultural crucible in which our morality is formed has changed. Our own belief structure of right and wrong is becoming more inflexible. And I have to believe that social media may be the culprit.

A.I. and Our Current Rugged Landscape

In evolution, there’s something called the adaptive landscape. It’s a complex concept, but in the smallest nutshell possible, it refers to how fit species are for a particular environment. In a relatively static landscape, the status quo tends to be maintained. It’s business as usual.

But a rugged adaptive landscape — one beset by disruption and adversity — drives evolutionary change through speciation, the introduction of new and distinct species.

The concept is not unique to evolution. Adapting to adversity is a feature of all complex, dynamic systems. Our economy has its own version: what economist Joseph Schumpeter called the gale of creative destruction.

The same is true for cultural evolution. When shit gets real, the status quo crumbles like a sandcastle at high tide. When it comes to life today and everything we know about it, we are definitely in a rugged landscape. COVID-19 might be driving us to our new future faster than we ever suspected. The question is, what does that future look like?

Homo Deus

In his follow-up to his best-seller “Sapiens: A Brief History of Humankind,” author Yuval Noah Harari takes a shot at predicting just that. “Homo Deus: A Brief History of Tomorrow” looks at what our future might be. Written well before the pandemic (in 2015), the book deals frankly with the impending irrelevance of humanity.

The issue, according to Harari, is the decoupling of intelligence and consciousness. Once we break the link between the two, the human vessels that have traditionally carried intelligence become superfluous. 

In his book, Harari foresees two possible paths: techno-humanism and Dataism. 

Techno-humanism

In this version of our future, we humans remain essential, but not in our current form. Thanks to technology, we get an upgrade and become “super-human.”

Dataism

Alternatively, why do we need humans at all? Once intelligence becomes decoupled from human consciousness, will it simply decide that our corporeal forms are a charming but antiquated oddity and just start with a clean slate?

Our Current Landscape

Speaking of clean slates, many have been talking about the opportunity COVID-19 has presented to us to start anew. As I was writing this column, I received a press release from MIT promoting a new book, “Building the New Economy,” edited by Alex Pentland. I haven’t read it yet, but based on the first two lines in the release, it certainly seems to be following this type of thinking: “With each major crisis, be it war, pandemic, or major new technology, there has been a need to reinvent the relationships between individuals, businesses, and governments. Today’s pandemic, joined with the tsunami of data, crypto and AI technologies, is such a crisis.”

We are intrigued by the idea of using the technologies we have available to us to build a societal framework less susceptible to inevitable Black Swans. But is this just an invitation to pry open Pandora’s Box and allow the future Yuval Noah Harari is warning us about?

The Debate 

Harari isn’t the only one seeing the impending doom of the human race. Elon Musk has been warning us about it for years. In our race to embrace artificial intelligence, Musk sees the biggest threat to human existence we have ever faced.

“I am really quite close, I am very close, to the cutting edge in AI and it scares the hell out of me,” warns Musk. “It’s capable of vastly more than almost anyone knows and the rate of improvement is exponential.”

There are those who pooh-pooh Musk’s alarmism, calling it much ado about nothing. Noted Harvard cognitive psychologist and author Steven Pinker, whose rose-colored vision of humanity’s future reliably trends up and to the right, dismissed Musk’s warnings with this: “If Elon Musk was really serious about the AI threat, he’d stop building those self-driving cars, which are the first kind of advanced AI that we’re going to see.”

In turn, Musk puts Pinker’s Pollyanna perspective down to human hubris: “This tends to plague smart people. They define themselves by their intelligence and they don’t like the idea that a machine could be way smarter than them, so they discount the idea — which is fundamentally flawed.”

From Today Forward

This brings us back to our current adaptive landscape. It’s rugged. The peaks and valleys of our day-to-day reality are more rugged than they have ever been — at least in our lifetimes.

We need help. And when you’re dealing with a massive threat that involves probability modeling and statistical inference, more advanced artificial intelligence is a natural place to look. 

Would we trade more invasive monitoring of our own bio-status and aggregation of that data to prevent more deaths? In a heartbeat.

Would we put our trust in algorithms that can instantly crunch vast amounts of data our own brains couldn’t possibly comprehend? We already have.

Would we adopt connected devices constantly streaming the bits of data that define our existence to some corporate third party or government agency in return for a promise of better odds that we can extend that existence? Sign us up.

We are willingly tossing the keys to our future to the Googles, Apples, Amazons and Facebooks of the world. As much as the present may be frightening, we should consider the steps we’re taking carefully.

If we continue rushing down the path towards Yuval Noah Harari’s Dataism, we should be prepared for what we find there: “This cosmic data-processing system would be like God. It will be everywhere and will control everything, and humans are destined to merge into it.”

The Mother of all Mood Swings

How are you doing? 

Yes, you. 

I know how I’m doing — today, anyway. It varies day to day. It depends on the news. It depends on the weather. It depends on Trump’s Twitter stream.

Generally, I’m trying to process the abnormal with the tools I have. I don’t know precisely how you’re doing, but I suspect you’re going through your own processing with your own tools.

I do know one thing. The tools I have are pretty much the same tools you have, at least when we look at them in the broad strokes. It’s one of the surprising things about humans. We all go through some variation of the same process when we deal with life’s big events. 

Take grief and other traumatic life changes. We’re pretty predictable in how we deal with them. So predictable, in fact, that there’s a psychological model with its own acronym: DABDA. It’s known as the five stages of grief: denial, anger, bargaining, depression and acceptance. It was first introduced in 1969 by Swiss-American psychiatrist Elisabeth Kübler-Ross.

Noted American neurobiologist and author Robert Sapolsky marvels at the universality of our processing of grief in his book “The Trouble with Testosterone”: “Poems, paintings, symphonies by the most creative artists who ever lived, have been born out of mourning… We cry, we rage, we demand that the oceans’ waves stop, that the planets halt their movements in the sky, all because the earth will no longer be graced by the one who sang lullabies as no one else could; yet that, too, is reducible to DABDA. Why should grief be so stereotypical?”

But it’s not just bad stuff we process this way. If you look at how we process any big change, you’ll find there are pretty predictable stages we humans go through.

So why are we so predictable in how we deal with change? In general, these are all variations of the sensemaking cycle, which is how we parse the world around us. We start with a frame — an understanding of what we believe to be true — and we constantly compare this to new information we get from our environment. 

Because we are cognitively energy-efficient, we are reluctant to throw out old frames and adopt new ones, especially when those new ones are being forced upon us. It’s just the way we’re wired. 

But life change is usually a solo journey, and we rely on anchors to help us along the way: our psychoscapes, the cognitive environments in which our minds typically operate. Friends, families, favorite activities, social diversions: these are the things we can rely on for an emotional boost, even if only temporarily.

But what if everyone is experiencing trauma at the same time? What if our normal psychoscape is no longer there? What then?

Then we enter the SNAFU zone.

SNAFU is an acronym coined in World War II:  “situation normal, all f*cked up.”  It was used to refer to a situation that is bad, but is also a normal state of affairs. 

We are talking a lot about the new normal. But here’s the thing: The new situation normal is going to be a shit show, guaranteed to be all f*cked up. And it’s going to be that way because everyone  — and I mean everyone — is going to be going through the Mother of all Mood Swings. 

First of all, although the stages of managing change may be somewhat universal, the path we take through them is anything but. Some will get stuck at the denial and anger stage and storm the state legislature with assault weapons demanding a haircut. Some are already at acceptance, trying to navigate through a world that is officially SNAFU. We are all processing the same catalyst of change, but we’re at different places in that process. 

Secondly, the psychological anchors we depend on may not be there for us. When we are going through collective stress, we tend to rely on community. We revert to our evolutionary roots of being natural herders. Without exception, the way humans have always dealt with massive waves of change is to come together with others. And this is where a pandemic that requires social distancing throws a king-sized wrench in the works. We can’t even get a hug to help us through a bad day.

As the levels of our collective stress climb, there are bound to be a lot of WTF moments. Nerves will fray and tempers will flare. We will be walking on eggshells. There will be little patience for perspectives that differ from our own. Societal divides will deepen and widen. The whole world will become moodier than a hormonal teenager. 

Finally, we have all of the above playing out in a media landscape that was already fractured to an unprecedented level going into this. All the many things that are FU in this particular SNAFU will be posted, tweeted, shared and reshared. There will be no escape from it. 

Unlike the hormonal teenager, we can’t send COVID-19 to its room.

The Showdown between Smart and Stupid

If you have been wondering how the hell Dr. Anthony Fauci or Dr. Deborah Birx continues to function in the environment they find themselves in, you have company. I too have had my WTF moments and have been pondering, “Is it just me, or has the entire world become dumber?”

In answer to this question, I don’t think the average IQ of the population has slipped, but it certainly seems so. Especially in the White House.

Now, I meant the above as a rhetorical question. There is evidence that we are – on average – getting smarter. It’s called the Flynn Effect. There is also evidence we’re getting dumber. It probably nets out to zero, or at least to an insignificant move in either direction. I suspect recent signs of stupidity are more a factor of availability bias, as I’ve talked about before. Thanks to our news feeds, there is ample evidence of “Stupid is as stupid does.”

What is true is that dumb people have a voice they’ve never had before, thanks to all types of media, but most especially social media. The current populist political climate has also enshrined stupidity as an unfortunate side effect of democracy and free speech. Ignorance is running rampant across the heartlands of America and many other countries – including my own.

There are some frightening network effects that come from this. As stupidity gums up the gears of the governmental machinery that should be protecting us, we’re starting to see smart people making an end run around it. As the level of public discourse continually gets dumbed down, the really smart people are just avoiding it altogether and are quietly reinventing the world according to their own rules.

For example, according to the Brookings Institution, there has been an 86% turnover in Trump’s top advisors since he took office. Based on statistical probability alone, at least a few of these had to be smart people.

This is not surprising. Smart people tend to avoid other people in general. At least one study has found that they are happiest when they’re alone. And this is especially true when they’re surrounded by stupid people. All the smart people I know do not suffer fools gladly. So, what we’re seeing is a polarization of intelligence, with a growing divide between the smart and the stupid.

Unfortunately, this is also polarizing our attitudes towards science. When I was growing up in the Sixties, we revered science and respected smart people. And when I say “we” I mean the greater collective “we.” Maybe it was because science was giving us hope at the time. We were literally shooting for the Moon. But if you listen to scientists today, you are quickly swamped under a tsunami of scary-as-shit bad news. It’s painful to be smart. For the last decade or so, ignorance did appear to be bliss.

That brings us to COVID-19.

One thing the current pandemic has done is suddenly make the world very interested in things it never cared about before – like the science of epidemiology and the bureaucracy of pharmaceutical clinical trials. It has created a worldwide Venn diagram where the circles of stupidity and science are forced to overlap.

This sudden focusing of the world’s attention on a single topic has also made us realize the price of stupidity. What was before an irritant is now deadly.

The danger here is that we will probably find an intellitocracy emerge. But we won’t realize it, because it will be hidden from most of us. And it will be hidden because smart people are going to get exasperated and avoid stupid people. We don’t want that to happen.

We need science – and smart people – in the public domain. We can’t afford to have them withdraw in order to save themselves from having to deal with stupidity. And more than anything, we mustn’t let science go from being publicly funded to privately funded because it’s the path of least resistance. We need our public domains fully staffed with smart people.

Intelligence will ultimately prevail over ignorance. In the arms race of evolution, stupid people are bringing a knife to a gun fight. It may not seem like that now, but eventually the smart will be the victors. This means that smart people are going to define what our lives and society look like. And we need to know what they’re thinking about. We need as much of that as possible happening in a public forum, not in a private research lab somewhere in Silicon Valley.

Here’s just one example of why we need to be paying attention to what smart people are thinking about. Author and social activist Naomi Klein – who has previously warned us about unbridled capitalism, unethical marketing and other apocalyptic trends – is now warning us about a potential coup against personal privacy that’s taking shape under the cover of the pandemic.

Klein’s latest piece in theintercept.com reveals how New York Governor Andrew Cuomo is assembling a super-smart SWAT team of billionaires including Bill Gates, Eric Schmidt and others to help him put a “high-tech dystopia” together as a new post-pandemic future:

“It has taken some time to gel, but something resembling a coherent Pandemic Shock Doctrine is beginning to emerge. Call it the “Screen New Deal.” Far more high-tech than anything we have seen during previous disasters, the future that is being rushed into being as the bodies still pile up treats our past weeks of physical isolation not as a painful necessity to save lives, but as a living laboratory for a permanent — and highly profitable — no-touch future.”

We are balanced on a precipice between smart and stupid. Smart will ultimately prevail. When it does, it shouldn’t come as a surprise to us. Ideally, we should have some say in the formation of our collective future.

Our Complicated Relationship with Heroes

It’s not really surprising that we think more about heroes in times of adversity. Many of our most famous superheroes were born in the crucible of crisis: Batman, Superman, Wonder Woman and Captain America were all created during the Great Depression or the early years of World War II.

Today, we are again craving heroes. They are fabricated out of less fantastic stuff: taxi drivers who give patients free rides to the airport, nurses who staff the front lines of our hospitals, chefs who provide free food to essential workers and a centenarian (as of tomorrow) who is raising millions for his national health care system by walking around his garden every day.

These are ordinary people who are doing extraordinary things. They are being raised to the rank of hero thanks to the surging tides of social media.

Again, this isn’t surprising. We are still in the early stages of what, for most of us, will likely be the defining crisis of our lifetimes. We desperately need some good news.

In fact, everybody’s favorite paper salesman/CIA operative/husband of Mary Poppins — John Krasinski — has curated a weekly webcast collection of feel-good salutes to local heroes called “Some Good News.” As of the writing of this post, it had collectively racked up close to 50 million views.

Krasinski has himself become a hero by doing things like throwing a surprise virtual prom for all the grads who were deprived of theirs by the pandemic, or letting a group of ER nurses take the field at an eerily empty Fenway Park.

Having heroes should be a good thing. They should inspire us to be better people  — to become heroes ourselves. Right?

Well…

It’s complicated.

On the surface of it, hero worship is probably a good thing, especially if our heroes are doing things we all could do, if we were so inclined.  “If a 99.9-year-old man can raise millions for a national health service, there must be something I can do.”

On that very theme, the Heroic Imagination Project was formed to help us all be heroes. Headed up by famed psychologist Dr. Philip Zimbardo, HIP came out of his infamous Stanford Prison Experiment. “If,” reasoned Zimbardo, “we all have the capacity to be evil, given the right circumstances, we should also all have the capacity to be heroes, again under the right circumstances.”

But there are a few hurdles between us and heroism. One of them, ironically, comes part and parcel with the very idea of hero worship.

In an extensive analysis of how superheroes reflect the American mythology of their own times, Dublin writer Sally Rooney shows how a country uses its heroes to reassure itself of its own goodness: “The superhero makes sense in times of crisis. Reducing the vast complex of nationhood into the body of an individual means periods of geopolitical turmoil can be repackaged as moments of psychological stress. In the mirror of the superhero, America is reassured of its good qualities. Physical strength is good, as is the ability to make wisecracks under pressure. Masculinity is good, and women are okay as long as they can do very high kicks while making wisecracks. Once America is on the scene, order can be restored.”

So, we use heroes as a moral baseline to make us feel better about our collective selves. They can help us reaffirm our faith in our national ideologies. A picture of a nurse in scrubs silently staring down a protester demanding a haircut makes us feel that things are still OK in the heartland of the nation. It’s a reverse adaptation of the Lake Wobegon effect: “If this person represents the best of what we (as Americans) are, then the average can’t be all that bad.”

Unfortunately, this leads right into the second hurdle, the Bystander Effect: “If something happens that demands heroic action and there are a lot of people around, surely there’s a hero in the crowd that will step forward before I have to.” Being a hero demands a certain amount of sacrifice. As long as someone else is willing to make that sacrifice, we don’t have to — but we can still feel good about ourselves by giving it a like,  or, if we’re truly motivated, sharing it on our feed.

As the greatest real-time sociological experiment in our lifetime continues to play out, we might have yet another example of an unintended consequence brought on by social media. Based on our Facebook feeds, it appears that we have more heroes than ever. That’s great, but will it encourage us or keep us from stepping up and becoming heroes ourselves?

A New Definition of Social

I am an introvert. My wife is an extrovert. Both of us have been self-isolating for about five weeks now. I don’t know if our experiences are representative of introverts and extroverts as a group, but my sample size has – by necessity – been reduced to an “n” of 2. Our respective concepts of what it means to be social have been disruptively redefined, but in very different ways.

The Extro-Version

You’ve probably heard of Dunbar’s Number. It was proposed by anthropologist Robin Dunbar. It’s the “suggested cognitive limit to the number of people with whom one can maintain stable social relationships.” The number, according to Dunbar, is about 150. But that number is not an absolute. It’s a theoretical limit. Some of us can juggle way more social connections than others.

My wife’s EQ (emotional quotient) is off the charts. She has a need to stay emotionally connected to a staggering number of people. Even in normal times, she probably “checks in” with dozens of people every week. Before COVID-19, this was done face-to-face whenever possible.

Now, her empathetic heart feels an even greater need to make sure everyone is doing okay. But she has to do it through socially distanced channels. She uses text messaging a lot. But she also makes at least a few phone calls every day for those in her network who are not part of the SMS or social media universe.

She has begun using Zoom to coordinate virtual get-togethers of a number of her friends. Many in this circle are also extroverts. A fair number of them are – like my wife – Italian. You can hear them recharging their social batteries as the energy and volume escalates. It’s not cappuccino and biscotti but they are making do with what they’ve got.

Whatever the channel, it has been essential for my wife to maintain this continuum of connection.

The Intro-Version

There are memes circulating that paint the false picture that the time has finally come for us introverts. “I’ve been practicing for this my entire life,” says one. They consistently say that life in lockdown is much harder for extroverts than for introverts. They even hint that we should be in introvert heaven. They are wrong. I am not having the time of my life.

I’m not alone. Other introverts are having trouble adjusting to a social agenda being forced upon them by their self-isolated extrovert friends and colleagues. We introverts seldom get to write the rules of social acceptability, even in a global pandemic.

If you type “Are introverts more likely” into Google, it will suggest the following ways to complete that sentence: “to be depressed”, “to be single”, “to have anxiety”, “to be alcoholic”, and “to be psychopaths”. The world is not built for introverts.

To understand introversion versus extroversion is to understand social energy. Unlike my wife, for whom social interactions are a source of renewal, for me they are a depletion of energy. If I’m going to make the effort, it had better be worth my while. A non-introvert can’t understand that. It’s often interpreted as aloofness, boredom or just being rude. It’s none of these. It’s just our batteries being run down.

Speaking for myself, I don’t think most introverts are antisocial. We’re just “differently” social. We need connections the same as extroverts. But those connections are of a certain kind. It’s true that introverts are not good at small talk. But under the right circumstances, we do love to talk. Those circumstances are just more challenging in the current situation.

Take Zoom, for instance. My wife, the extrovert, and I, the introvert, have done some Zoom meetings side by side. I have noticed a distinct difference in how we Zoom. But before I talk about that, let me set up a comparison with a more typical example of an introvert’s version of hell: the dreaded neighborhood house party.

As an introvert in this scenario, I would be constantly reading body language and non-verbal cues to see if there was an opportunity to reluctantly crowbar my way into a conversation. I would only do so if the topic interested me. Even then, I would be subconsciously monitoring my audience to see if they looked bored. On the slightest sign of disinterest, I would awkwardly wind down the conversation and retreat to my corner.

It’s not that I don’t like to talk. But I much prefer sidebar one-on-one conversations. I don’t do well in environments where there is too much going on. In those scenarios, introverts tend to clam up and just listen.

Now, consider a Zoom “Happy Hour” with a number of other people. All of that non-verbal bandwidth we introverts rely on to pick and choose where we expend our limited social energy is gone. Although Zoom adds a video feed, it’s a very low-fidelity substitute for an in-the-flesh interaction.

With all this mental picking and choosing happening in the background, you can understand why introverts are slow to jump into the conversational queue and why, when we finally do, we find that someone else (probably an extrovert) has started talking first. I’m constantly being asked, “Did you say something, Gord?”, at which point everyone stops talking and looks at my little Zoom cubicle, waiting for me to talk. That, my friends, is an introvert’s nightmare.

Finally, I Get the Last Word

Interestingly, neither my wife nor I are using Facebook much for connection. She has joined a few Facebook groups, one of which is a fan club for our provincial health officer, Dr. Bonnie Henry. Dr. Henry has become the most beloved person in B.C.

And I’m doing what I always tell everyone else not to do: following my Facebook newsfeed and going into self-isolated paroxysms of rage about the Pan-dumb-ic and the battle between science and stupidity.

There is one social sacrifice that both my wife and I agree on. The thing we miss most is the ability to hug those we love.

Quant vs Qual in the Time of Crisis

Digesting reality is becoming more and more difficult. I often find myself gagging on it. Last Friday was a good example. I have been limiting my news intake for my own sanity, but Friday morning I went down the rabbit hole. Truth be told, I started doing some research for the post I was intending to write (which I will probably get to next week) and I was soon overwhelmed with what I was reading.

I’m beginning to suspect that we’re getting an extra dump of frightening news on Fridays as officials realize that it’s more difficult to enforce social distancing on weekends. Whether this is the case or not, I found my chest tightening from anxiety. My hands got shaky as I found myself clicking on frightening link after frightening link. Predictions scared the shit out of me. I was worried for my community and country. I was worried for myself. But most of all, I was worried for my kids, my wife, my dad, my in-laws and my family.

Fear and anxiety swamped my normally rational side. Intellect gave way to despair. That’s not a good mode for me. I have to run cool – I need to be rational to function. Emotions mentally shut me down.

So I retreated to the numbers. My single best source throughout this has been the posts from Tomas Pueyo – the VP of Growth at Course Hero. They are exhaustively researched statistical analyses and “what-if” models assembled by an ad-hoc team of rockstar quants. In his first post, on March 10 – “Coronavirus: Why You Must Act Now” – Pueyo and his team nailed it. If everyone had listened and followed his advice, we wouldn’t be where we are now. Similarly, his post on March 19 – “Coronavirus: The Hammer and The Dance” – gave a tough but rational prescription to follow. His latest – “Coronavirus: Out of Many, One” – drills down on a state-by-state analysis of COVID in the US.

I’m not going to blow smoke here. These are tough numbers to read. Even the best-case scenarios would have been impossible to imagine just a few weeks ago. But the worst-case scenarios are exponentially more frightening. And if you – like me – need to retreat to rationality in order to keep functioning, this is the best rationale I’ve found for dealing with COVID-19. It’s not what we want to hear, but it’s what we must listen to.

In my marketing life, I always encouraged a healthy mix of both quantitative and qualitative perspectives in trying to understand what is real. I’ve said in the past: “Quantitative is watching the dashboard while you drive. Qualitative is looking out the windshield.”

I often find that marketers tend to focus too much on the numbers and not enough on the people on the other side of those numbers. We were an industry deluged with data and it made us less human.

Ironically, I now find myself on the other side of that argument. We have to understand that even our most trustworthy media sources are going to be telling us the stories that have the most impact on us. Whether you turn to Fox or CNN as your news source, you will be getting soundbites out of context that are – by design – sensational in nature. They may differ in their editorial slants, but – right or left – we can’t consider them representational of reality. They are the outliers.

Being human, we can’t help but apply these to our current reality. It’s called availability bias. In the simplest terms possible, it means that the things that are most in our face become our understanding of any given situation.

In normal times, these individual examples can heighten our humanity and make us a little less numb. They remind us of the relevance of the individual experience – the importance of every life and the tragedy of even one person suffering.

“If only one man dies of hunger, that is a tragedy.
If millions die, that’s only a statistic.”

– Joseph Stalin

Normally, I would never dream of quoting Joe Stalin in a post. But these are not normal times. And the fact is, Stalin was right. When we start looking at statistics and mathematical modelling, our brain works differently. It forces us to use a more rational cognitive mechanism, one less likely to be influenced by emotion. And in responding to a crisis, this is exactly the type of reasoning required.

This is a time unlike anything any of us has experienced. In times like this, actions should be based on the most accurate and scientific information possible. We need the cold, hard logic of math as a way to not become swamped by the wave of our own emotions. In order to make really difficult decisions for the greater good, we need to distance ourselves from our own little bubbles of reality, especially when that reality is made up of non-representative examples streamed to us through media channels.