What Would Aaron Do?

I am a big Aaron Sorkin fan. And before you rain on my parade, I say that fully understanding that he epitomizes the liberal intellectual elitist, sanctimonious cabal that has helped cleave American culture in two. I get that. And I don’t care.

I get that his message is from the left side of the ideological divide. I get that he is preaching to the choir. And I get that I am part of the choir. Still, given the times, I felt that a little Sorkin sermon was just what I needed. So I started rewatching Sorkin’s HBO series “The Newsroom.”

If you aren’t part of this particular choir, let me bring you up to speed. The Newsroom in this case is at the fictional cable network ACN. One of the primary characters is lead anchor Will McAvoy (played by Jeff Daniels), who has built his audience by being noncontroversial and affable — the Jay Leno of journalism.

This brings us to the entrance of the second main character: MacKenzie McHale, played by Emily Mortimer. Exhausted from years as an embedded journalist covering multiple conflicts in Afghanistan, Pakistan and Iraq, she comes on board as McAvoy’s new executive producer (and also happens to be his ex-girlfriend).

In typical Sorkin fashion, she goads everyone to do better. She wants to reimagine the news by “reclaiming journalism as an honorable profession,” with “civility, respect, and a return to what’s important; the death of bitchiness; the death of gossip and voyeurism; speaking truth to stupid.”

I made it to episode 3 before becoming profoundly sad and world-weary. Sorkin’s sermon from 2012 — just eight years ago — did not age well. It certainly didn’t foreshadow what was to come.

Instead of trying to be better, the news business — especially cable news — has gone in exactly the opposite direction, heading straight for Aaron Sorkin’s worst-case scenario. This scenario formed part of a Will McAvoy speech in that third episode: “I’m a leader in an industry that miscalled election results, hyped up terror scares, ginned up controversy, and failed to report on tectonic shifts in our country — from the collapse of the financial system to the truths about how strong we are to the dangers we actually face.”

That pretty much sums up where we are. But even Sorkin couldn’t anticipate what horrors social media would throw into the mix. The reality is actually worse than his worst-case scenario. 

Sorkin’s appeal for me was that he always showed what “better” could be. That was certainly true in his breakthrough political hit “The West Wing.” 

He brought the same message to the jaded world of journalism in “The Newsroom.” He was saying, “Yes, we are flawed people working in a flawed system set in a flawed nation. But it can be better…. Our future is in our hands. And whatever that future may be, we will be held accountable for it when it happens.”

This message is not new. It was the blood and bones of Abraham Lincoln’s annual address to Congress on December 1, 1862, just one month before he issued the Emancipation Proclamation. Lincoln was preparing the nation to choose a path that was unprecedented and unimaginably difficult, but that would ultimately prove to be the more moral one: “It is not ‘can any of us imagine better?’ but, ‘can we all do better?’ The dogmas of the quiet past, are inadequate to the stormy present. The occasion is piled high with difficulty, and we must rise with the occasion.”

“The Newsroom” was Sorkin’s last involvement with a continuing TV series. He was working on his directorial debut, “Molly’s Game,” when Trump got elected.

Since then, he has adapted Harper Lee’s “To Kill a Mockingbird” for Broadway, with Jeff Daniels of “The Newsroom” as Atticus Finch.

Sorkin being Sorkin, he ran into a legal dispute with Lee’s estate when he updated the source material to be a little more open about the racial tension that underlies the story. Aaron Sorkin is not one to let sleeping dogmas lie. 

Aaron Sorkin also wrote a letter to his daughter and wife on the day after the 2016 election, a letter that perhaps says it all.

It began, “Well the world changed late last night in a way I couldn’t protect us from.”

He was saying that as a husband and father. But I think it was a message for us all — a message of frustration and sadness. He closed the letter by saying “I will not hand [my daughter] a country shaped by hateful and stupid men. Your tears last night woke me up, and I’ll never go to sleep on you again.”

Yes, Sorkin was preaching when he was scripting “The Newsroom.” But he was right. We should do better. 

In that spirit, I’ll continue to dissect the Reuters study on the current state of journalism I mentioned last week. And I’ll do this because I think we have to hold our information sources to “doing better.” We have to do a better job of supporting those journalists that are doing better. We have to be willing to reject the “dogmas of the quiet past.” 

One of those dogmas is news supported by advertising. The two are mutually incompatible. Ad-supported journalism is a popularity contest, with the end product a huge audience custom sliced, diced and delivered to advertisers — instead of a well-informed populace.

We have to do better than that.

How We Forage for the News We Want

The Reuters Institute, out of the UK, just released a comprehensive study looking at how people around the world are finding their news. There is a lot here, so I’ll break it into pieces over a few columns and look at the most interesting aspects. Today, I’ll look at the 50,000-foot view, which can best be summarized as a dysfunctional relationship between our news sources and ourselves. And like most dysfunctional relationships, the culprit here is a lack of trust.

Before we dive in, we should spend some time looking at how the way we access news has changed over the last several years.

Over my lifetime, we have trended in two general directions – toward less cognitively demanding news channels and less destination-specific news sources. The most obvious shift has been away from print. According to Journalism.org and the Pew Research Center, circulation of U.S. daily newspapers peaked around 1990, at about 62.5 million. That’s one subscription for every four people in the country at that time.

In 2018, it was projected that circulation had dropped by more than 50%, to less than 30 million. That would have been one subscription for every 10 people. We were no longer reading our news in print. And that may have a significant impact on our understanding of the news. I’ll return to this in another column, but for now, let’s just note that our brain operates in a significantly different way when it’s reading rather than watching or listening.
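For what it’s worth, the per-capita ratios check out with a bit of back-of-the-envelope arithmetic. A minimal sketch, assuming rough U.S. population figures of about 249 million in 1990 and 327 million in 2018 (my approximations, not numbers from the Pew data):

```python
# Back-of-the-envelope check of the circulation-per-person ratios cited above.
# Population figures are rough approximations I'm supplying, not part of the Pew data;
# 28.5 million stands in for "less than 30 million" in 2018.
figures = {
    1990: {"circulation": 62.5e6, "population": 249e6},
    2018: {"circulation": 28.5e6, "population": 327e6},
}

for year, d in figures.items():
    people_per_sub = d["population"] / d["circulation"]
    print(f"{year}: roughly one subscription per {people_per_sub:.0f} people")

# Prints roughly one per 4 people in 1990 and one per 11 in 2018,
# in the same ballpark as the "1 in 4" and "1 in 10" figures above.
```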

Up to the end of the last century, we generally trusted news destinations. Whether it was a daily newspaper like the New York Times, a news magazine like Time or a nightly newscast such as any of the network news shows, each was a destination that offered one thing above all others – the news. And whether you agreed with them or not, each had an editorial process that governed what news was shared. We had a loyalty to our chosen news destinations that was built on trust.

Over the past two decades, this trust has broken down due to one primary factor – our continuing use of social media. And that has dramatically shifted how we get our news.

In the US, three out of every four people use online sources to get their news. One in two use social media. Those aged 18 to 24 are more than twice as likely to rely on social media. In the UK, under-35s get more of their news from social media than from any other source.

Also, influencers have become a source of news, particularly amongst young people. In the US, a quarter of those 18 to 24 used Instagram as a source of news about COVID.

This means that most times, we’re getting our news through a social media lens. Let’s set aside for a moment the filtering and information veracity problems that introduces. Let’s just talk about intent for a moment.

I have talked extensively in the past about information foraging when it comes to search. When information is “patchy” and spread diversely, the brain has to make a quick, calculated guess about which patch is most likely to hold the information it’s looking for. With information foraging, the intent we have frames everything that comes after.
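To make that concrete: information foraging borrows its math from optimal foraging theory, where the forager estimates the rate of gain each patch offers (expected information divided by the time to reach and work the patch) and heads for the best rate. Here is a minimal sketch of that patch-choice calculation; the patches and all the numbers are hypothetical, purely for illustration:

```python
# Toy patch-choice calculation in the spirit of information foraging.
# Patches, gains and times are invented for illustration only.
patches = {
    "news site":    {"expected_gain": 8.0, "time_to_reach": 2.0, "time_to_forage": 6.0},
    "search query": {"expected_gain": 5.0, "time_to_reach": 1.0, "time_to_forage": 4.0},
    "social feed":  {"expected_gain": 3.0, "time_to_reach": 0.5, "time_to_forage": 2.0},
}

def rate_of_gain(patch):
    """Expected information gained per unit of time spent reaching and working a patch."""
    return patch["expected_gain"] / (patch["time_to_reach"] + patch["time_to_forage"])

for name, patch in patches.items():
    print(f"{name:12s} {rate_of_gain(patch):.2f} units of info per minute")

# The cheap, shallow patch can win on rate even though it yields the least in total,
# which is part of why the feed keeps pulling us back.
print("Forage here first:", max(patches, key=lambda name: rate_of_gain(patches[name])))
```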

In today’s digital world, information sources have disaggregated into profoundly patchy environments. We still go to news-first destinations like CNN or Fox News, but we also get much of our information about the world through our social media feeds. What was interesting about the Reuters report was that it was started before the COVID pandemic, but the second part of the study was conducted during COVID. And it highlights a fascinating truth about our relationship with the news when it comes to trust.

The study shows that the majority of us don’t trust the news we get through social media, but most times, we’re okay with that. Less than 40% of people trust the news in general, and even when we pick a source, less than half of us trust that particular channel. Only 22% indicated they trust the news they see in social media. Yet half of us admit we use social media to get our news. The younger we are, the more reliant we are on social media for news. The fastest-growing sources for news amongst all age groups – but especially those under 30 – are Instagram, Snapchat and WhatsApp.

Here’s another troubling fact that fell out of the study. Social platforms, especially Instagram and Snapchat, are dominated by influencers. That means that much of our news comes to us by way of a celebrity influencer reposting it on their feed. This is a far cry from the editorial review process that used to act as a gatekeeper on our trusted news sources.

So why do we continue to use news sources we admit we don’t trust? I suspect it may have to do with something called the Meaning Maintenance Model. Proposed in 2006 by Heine, Proulx and Vohs, the model speculates that a primary driver for us is to maintain our beliefs in how the world works. This is related to the sense making loop (Klein, Moon and Hoffman) I’ve also talked about in the past. We make sense of the world by first starting with the existing frame of what we believe to be true. If what we’re experiencing is significantly different from what we believe, we will update our frame to align with the new evidence.

What the Meaning Maintenance Model suggests is that we will go to great lengths to avoid updating our frame. It’s much easier just to find supposed evidence that supports our current beliefs. So, if our intent is to get news that supports our existing world view, social media is the perfect source. It’s algorithmically filtered to match our current frame. Even if we believe the information is suspect, it still comforts us to have our beliefs confirmed. This works well for news about politics, societal concerns and other ideologically polarized topics.

We don’t like to admit this is the case. According to the Reuters study, 60% of us indicate we want news sources that are objective and not biased toward any particular point of view. But this doesn’t jibe with reality at all. As I wrote in a previous column, almost all mainstream news sources in the US appear to have a significant bias to the right or left. If we’re talking about news that comes through social media channels, that bias is doubled down on. In practice, we are quite happy foraging from news sources that are biased, as long as that bias matches our own.

But then something like COVID comes along. Suddenly, we all have skin in the game in a very real and immediate way. Our information foraging intent changes and our minimum threshold for the reliability of our news sources goes way up. The Reuters study found that when it comes to sourcing COVID information, the most trusted sources are official sites of health and scientific organizations. The least trusted sources are random strangers, social media and messaging apps.

It requires some reading between the lines, but the Reuters study paints a troubling picture of the state of journalism and our relationship with it. Where we get our information directly impacts what we believe. And what we believe determines what we do.

These are high stakes in an all-in game of survival.

Hope’s Not Dead, It’s Just Been Handed Down

It’s been interesting writing this column in the last 4 months. In fact, it’s been interesting writing it for the last 4 years. And I use the word “interesting” as a euphemism. It’s been many things: gut-wrenching, frustrating, maddening and head-scratching. Many times – most times – the writing of this has made me profoundly sad and despairing of our future. It has made me question my own beliefs. But yes, in a macabre sense, it has been interesting.

I call myself a humanist. I believe in the essential goodness of humans, collectively and on the average. I believe we are the agents of our own fate. I believe there are ups and downs in our stewardship of our future, but over the longer term, we will trend in the right direction.

I still am trying to believe in these things. But I have to tell you, it’s getting really hard.

I’m sure it’s not just me. Over the years, this column – Media Insider – has morphed into the most freeform of MediaPost’s columns. The rotating stable of writers, including myself, really has carte blanche to write about whatever happens to be on our minds. That’s why I was drawn to it. I’m not actively involved in any aspect of the industry anymore, so I really can’t provide any relevant commentary on things like Search, Mobile, TV or the agency world. But I do have many opinions about many things. And this column seemed to be the best place to talk about them.

What really fascinates me is the intersection between human behavior and technology. And so most of my columns unpack some aspect of that intersection. In the beginning, it seemed that technology was dovetailing nicely with my belief in human goodness. Then things started to go off the rails. In the past four years, this derailment has accelerated. In the past four months, it’s been like watching a train wreck.

The writers of Media Insider have all done our best to chronicle what the f*ck is going on. Today I looked back at our collective work over the past 4 months. I couldn’t help thinking that it was like trying to write at the micro level about what happens when a table is upended in the middle of dinner. Yes, I can report that the pepper shaker is still next to the salt shaker. But the bigger story is that everything is skidding down the table to the abyss beyond the edge.

I suspect that where we are now can be directly traced back to the source of my naïve optimism some years ago. We were giddy about what technology could do, not just for marketing, but for everything about our world. But to use the language of COVID, we had been infected but were still asymptomatic. Inside our culture, the virus of unintended consequences was already at work, replicating itself.

My vague and clung-to hope is that this is just another downswing. And my hope comes from my kids. They are better people than I was at their age: more compassionate, more empathetic and more committed to their beliefs. They have rejected much of the cultural baggage of systemic inequality that I took for granted in my twenties. They are both determined to make a difference, each in their own way. In them, I again have hope for the future.

We love to lump people together into categories and slap labels on them. That is also true for my daughters’ generation. They are often called Generation Z.

Every generation has its angels and assholes. That is also true for Generation Z. But here’s the interesting thing about them: they’re really tough to label. Here’s an excerpt from a recent report on Generation Z from McKinsey:

“Our study based on the survey reveals four core Gen Z behaviors, all anchored in one element: this generation’s search for truth. Gen Zers value individual expression and avoid labels. They mobilize themselves for a variety of causes. They believe profoundly in the efficacy of dialogue to solve conflicts and improve the world. Finally, they make decisions and relate to institutions in a highly analytical and pragmatic way.”

The other interesting thing about this generation is that they grew up with the technology that seems to be upending the world of every previous generation. They seem – somehow – to have developed a natural immunity to the most harmful effects of social media. Maybe my hope that technology will ultimately make us better people wasn’t wrong, it just had to skip a couple of generations.

I know it’s dangerous to lionize or demonize any group – generational or otherwise – en masse. But after watching the world go to hell in a handbasket in the hands of those in charge for the last few years, I have no qualms about handing things over to my kids and others of their age.

And we should do it soon, while there is still a world to hand over.

Are We Killing Politeness?

One of the many casualties of our changing culture seems to be politeness. When the President of the United States is the poster child for rude behavior, it’s tough for politeness to survive. This is especially true in the no-holds-barred, digitally distanced world of social media.

I consider myself to be reasonably polite. Being so, I also expect this in others. Mild rudeness makes me anxious. Excessive rudeness makes me angry. This being the case, I am troubled by the apparent decline of civility. So today I wanted to take a look at politeness and why it might be slipping away from us.

First of all, we have to understand that politeness is not universal. What is considered polite in one culture is not in another.

Secondly, being polite is not the same as being friendly. Or empathetic. Or being respectful of others. Or being compassionate, according to this post from The Conversation. There is a question of degree and intent here. Being polite is a rather unique behavior that encompasses both desirable and less desirable qualities. And that raises the question: What is the purpose of politeness? Is a less-polite world a good or a bad thing?

First, let’s look at the origin of the word. It comes from the Latin “politus,” meaning “polished — made smooth.” Just in case you’re wondering, “politics” does not come from the same root. That comes from the Greek word for “citizen” — “polites.”

One last etymological nugget. The closest comparison to polite may be “nice,” which originates from the Latin “nescius,” meaning “ignorant”. Take that for what it’s worth.

This idea of politeness as a type of social “polish” really comes from Europe — and especially Britain. There, politeness was linked with class hierarchies. Being polite was a sign of good breeding — a dividing line between the high-born and the riffraff. This class-bound definition came along with the transference of the concept to North America.

Canada is typically considered one of the most polite nations in the world. As a Canadian who has traveled a fair amount, I would say that’s probably true.

But again, there are variations in the concept of politeness and how it applies to both Canadians and Americans.

When you consider the British definition of politeness, you begin to see how Americans and Canadians might respond differently to it. To understand that is to understand much of what makes up our respective characters.

As a Canadian doing much of my business in the U.S. for many years, I was always struck by the difference in approaches I found north and south of the 49th parallel. The Canadian businesses we met with were unfailingly polite, but seldom bought anything. Negotiating the prospect path in Canada was a long and often frustrating journey.

American businesses were much more likely to sign a contract. On the whole, I would also say they were friendlier in a more open and less-guarded way. I have to admit that in a business setting, I preferred the American approach.

According to anthropologists Penelope Brown and Stephen Levinson, who have extensively researched politeness, there are two kinds of politeness: negative and positive. Negative politeness is concerned with adhering to social norms, often by deferring to someone or something else.

This is Canadian politeness personified. Our entire history is one of deference to greater powers, first to our colonial masters — the British and French — and, more recently, from our proximity to the cultural and economic master that is the U.S.

For Canadians, deferral is survival. As former Prime Minister Pierre Trudeau once said about the U.S., “Living next to you is in some ways like sleeping with an elephant. No matter how friendly and even-tempered is the beast, if I can call it that, one is affected by every twitch and grunt.”

Negative politeness is a way to smooth out social friction, but there is good and bad here. Ideally it should establish a baseline of trust, respect and social capital. But ultimately, politeness is a consensus of compromise.  And that’s why Canadians are so good at it.  Negative politeness wants everything to be fair.

But then there is positive politeness, which is more American in tone and nature. This is a desire to help others, making it more closely linked to compassion. But in this noble motive there is also a unilateral defining of what is right and wrong. Positive politeness tries to make everything right, based on the protagonist’s definition of what that is.

The two sides of politeness actually come from different parts of the brain. Negative politeness comes from the part of the brain that governs aggression. It is all about applying brakes to our natural instincts. But positive politeness comes from the part of the brain that regulates social bonding and affiliation.

When you understand this, you understand the difference between Canadians and Americans in what we consider polite. For the former, our definition comes handed down from the British class-linked origins, and has morphed into a culture of compromise and deferral.

The American definition comes from many generations of being the de facto moral leaders of the free world.

We (Canadians) want to be nice. You (Americans) want to be right. The two are not mutually exclusive, but they are also not the same thing. Not by a long shot.

What Trump has done (with a certain kind of perverse genius) is play on this national baseline of compassion. He has wantonly discarded any vestiges of politeness and split the nation on what it means to be right.

But by eliminating politeness, he has also eliminated that governor on our behavior. Reactions about what is right and wrong are now immediate, rough and unfiltered.

The polish that politeness brings — that deferral of spoken judgement for even a brief moment in order to foster cooperation — is gone. We have no opportunity to consider other perspectives. We have no motive to cooperate. This is abundantly apparent on every social media platform.

In game theory, politeness resembles a highly successful strategy commonly called “tit for tat.” It starts by assuming a default position of fairness from the other party, continues to cooperate if this proves to be true, and escalates to retaliation if it’s not. But this tactic evolved in a world of face-to-face encounters. Somehow, it seems less needed in a divided world where rudeness and immediate judgement are the norm.
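In code, tit for tat is almost trivially simple. Here’s a minimal sketch of it in an iterated prisoner’s dilemma; the payoff values are the standard textbook ones, not anything drawn from this column:

```python
# A minimal tit-for-tat sketch in an iterated prisoner's dilemma.
# Payoff values are the standard textbook ones, supplied here for illustration.
COOPERATE, DEFECT = "C", "D"
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}  # (mine, theirs) -> my score

def tit_for_tat(their_history):
    """Start politely, then simply mirror the other player's last move."""
    return COOPERATE if not their_history else their_history[-1]

def always_defect(their_history):
    return DEFECT

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)   # each player sees only the other's past moves
        move_b = strategy_b(history_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual politeness pays: (30, 30)
print(play(tit_for_tat, always_defect))  # one free pass, then retaliation: (9, 14)
```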

Still, I will cling to my notions of politeness. Yes, sometimes it seems to get in the way of definitive action. But on the whole, I would rather live in a world that’s a little nicer and a little more polite, even if that seems foolish to some of you.

The Potential Woes of Working from Home

Many of you have now had a few months under your belt working virtually from home rather than going to the office. At least some of you are probably considering continuing to do so even after COVID recedes and the all clear is given to return to normal. A virtual workplace makes all kinds of rational sense – both for employees and employers. But there are irrational reasons why you might want to think twice before you fully embrace going virtual.

About a decade ago, my company also went with a hybrid virtual/physical workplace. As the CEO, I found a lot to like about it. It was a lot more economical than leasing more office space. It gave us the flexibility to recruit top talent in areas where we had no physical presence. And it seemed that technology was up to the task of providing the communication and workflow tools we needed to support our virtual members.

On the whole, our virtual employees also seemed to like it. It gave them more flexibility in their workday. It also made it less formal. If you wanted to work in pajamas and bunny slippers, so be it. And with a customer base spread across many time zones, it also made it easier to shift client calls to times that were mutually acceptable.

It seemed to be a win-win. For a while. Then we noticed that all was not wonderful in work-from-home land.

I can’t say productivity declined. We were always a results-based workplace, so as long as the work got done, we were happy. But we started to feel a shift in our previously strong corporate culture. Team-member complaints about seemingly minor things skyrocketed. There was less cohesion across teams. Finally – and most critically – it started to impact our relationships with our customers.

Right about the time all this was happening, we were acquired by a much bigger company. One of the dictates handed down from the new owners was that we establish physical offices and bring our virtual employees back to the mothership for the majority of their workweek. At the time, I wasn’t fully aware of the negative consequences of going virtual, so I initially fought the decision. But to be honest, I was secretly happy. I knew something wasn’t quite right. I just wasn’t sure what it was. I suspected it might have been our new virtual team members.

The move back to a physical workplace was a tough one. Our virtual team members were very vocal about how this was a loss of their personal freedom. New HR fires were erupting daily and I spent much of my time fighting them. This, combined with the inevitable cultural consequences of being acquired, often made me shake my head in bewilderment. Life in our company was turning into a shit-show.

I wish I could say that after we all returned to the same workplace, we joined hands and sang a rousing chorus of Kumbaya. We didn’t. The damage had been done. Many of the disgruntled former virtual team members ended up moving on. The cultural core of the company remained with our original team members who had worked in the same office location for several years. I eventually completed my contract and went my own way.

I never fully determined what the culprit was. Was it our virtual team members? Or was it the fact that we embraced a virtual workplace without considering the unintended consequences? I suspected it was a little of both.

Like I said, that was a decade ago. From a rational perspective, all the benefits of a virtual workplace seem even more enticing than they did then. But in the last 10 years, there has been research done on those irrational factors that can lead to the cracks in a corporate culture that we experienced.

Mahdi Roghanizad is an organizational behavior specialist from Ryerson University in Toronto. He has long looked at the limitations of computerized communication. And his research provides a little more clarity into our failed experiment with a virtual workplace.

Roghanizad has found that without real-life contact, the parts of our brain that provide us with the connections needed to build trust never turn on. In order to build a true relationship with another person, we need something called the Theory of Mind. According to Wikipedia, “Theory of mind is necessary to understand that others have beliefs, desires, intentions, and perspectives that are different from one’s own.”

But unless we’re physically face-to-face with another person, our brain doesn’t engage in this critical activity. “Eye contact is required to activate that theory of mind and when the eye contact is not there, the whole other signal information is not processed by our brain,” said Roghanizad. Even wearing a pair of sunglasses is enough to short circuit the process. Relegating contact to a periodic Zoom call guarantees that this empathetic part of our brains will never kick in.

But it’s not just about being eyeball to eyeball. There are other non-verbal cues we rely on to connect with other people and create a Theory of Mind. Other research has shown the importance of pheromones and physical gestures like crossing your arms and leaning forward or back. This is why we subconsciously start to physically imitate people we’re talking to. The stronger the connection with someone, the more we imitate them.

This all comes back to the importance of bandwidth in the real world. A digital connection cannot possibly incorporate all the nuance of a face-to-face connection. And whether we realize it or not, we rely on that bandwidth to understand other people. From that understanding comes the foundations of trusted relationships. And trusted relationships are the difference between a high-functioning work team and a dysfunctional one.

I wish I had known that ten years ago.

TV and My Generation

My Generation has been a dumpster fire of epic proportions. I am a baby boomer, born in 1961, at the tail end of the boom. And, according to Time magazine, we broke America.  We probably destroyed the planet. And, oh yeah, we’ve also screwed up the economy. I’d like to say it isn’t true, but I’m pretty sure it is. As a generation, we have an extensive rap sheet.

Statistically, baby boomers are one of the most politically polarized generations alive today. So, the vast chasm that exists between the right and the left may also be our fault. 

As I said, we’re a generational dumpster fire. 

A few columns back I said this: “We create the medium — which then becomes part of the environment we adapt to.”  I was referring to social media and its impact on today’s generations. 

But what about us? What about the generation that has wreaked all this havoc? If I am right and the media we make in turn makes us who we are, what the hell happened to our generation?

Television, that’s what. 

There have been innumerable treatises on how baby boomers got to be in the sorry state we’re in. Most blame the post-war affluence of America and the never-ending consumer orgy it sparked. 

But we were also the first generation to grow up in front of a television screen. Surely that must have had some impact. 

I suspect television was one of the factors that started driving the wedge between the right and left halves of our generation, leaving a middle ground that couldn’t stretch to span the gap. In fact, I think it may have been the prime suspect.

Let’s plot the trends of what was on TV against my most influential formative years, and — by extension — my generation. 

When I was 5 years old, in 1966, the most popular TV shows fell into two categories: westerns like “Bonanza” and “Gunsmoke,” or cornfed comedies like “The Andy Griffith Show,” “The Beverly Hillbillies,” “Green Acres” and “Petticoat Junction.” Social commentary and satire were virtually nonexistent on American prime-time TV. The values of America were tightly censored, wholesome and non-confrontational. The only person of color in the line-up was Bill Cosby on “I Spy.” Thanks to “Hogan’s Heroes,” even the Nazis were lovable doofuses. 

I suspect when certain people of my generation want to Make America Great Again, it is this America they’re talking about. It was a white, wholesome America that was seen through the universally rose-colored glasses given to us by the three networks. 

It was also completely fictional, ignoring inconveniences like the civil rights movement, Vietnam and rampant gender inequality. This America never existed. 

When we talk about the cultural environment my generation literally cut our teeth in, this is what we refer to. There was no moral ambiguity. It was clear who the good guys were, because they all wore white hats. 

This moral baseline was spoon-fed to us right when we were first making sense of our own realities. Unfortunately, it bore little to no resemblance to what was actually real.

The fact was, through the late ’60s, America was already increasingly polarized politically. Left and right were drifting apart. Even Bob Hope felt the earth splitting beneath his feet. In November, 1969, he asked all the elected leaders of the country, no matter their politics, to join him in a week of national unity. One of those leaders called it “a time of crisis, greater today perhaps than since the Civil War.” 

But rather than trying to heal the wounds, politicians capitalized on them, further splitting the country apart by affixing labels like Nixon’s “The Silent Majority.” 

Now, let’s move ahead to my teen years. From our mid-teens to our mid-twenties, we create our social identities. Our values and morals take on some complexity. The foundations for our lifelong belief structures are formed during these years. 

In 1976, when I was 15, the TV line-up had become a lot more controversial. We had many shows regularly tackling social commentary: “All in the Family,” “M*A*S*H,” “Sanford and Son,” “Welcome Back, Kotter,” “Barney Miller” and “Good Times.” Of course, we still had heaps of wholesome, thanks to “Happy Days,” “Marcus Welby, M.D.” and “The Waltons.”

Just when my generation was forming the values that would define us, our prime-time line-up was splitting left and right. You had the social moralizing of left-leaning showrunners like Norman Lear (“All in the Family”) and Larry Gelbart (“M*A*S*H”) vs. the God and Country values of “The Waltons” and “Little House on the Prairie.”

I don’t know what happened in your hometown, but in mine, we started to be identified by the shows we watched (or, often, what our parents let us watch). You had the “All in the Family” Group and “The Waltons” Group. In the middle, we could generally agree on “Charlie’s Angels” and “The Six Million Dollar Man.” The cracks in the ideologies of my generation were starting to show.

I suspect as time went forward, the two halves of my generation started looking to television with two different intents: either to inform ourselves of the world that is, warts and all — or to escape to a world that never was. As our programming choices expanded, those two halves got further and further apart, and the middle ground disappeared. 

There are other factors, I’m sure. But speaking for myself, I spent an unhealthy amount of time watching TV when I was young. It couldn’t help but partially form the person I am today. And if that is true for me, I suspect it is also true for the rest of my generation.

A World Flattened by Social Media

“Life is What Happens to You While You’re Busy Making Other Plans”

John Lennon

The magic of our lives is in the nuance, the unexpected and – sometimes – the mundane. It depends on bandwidth – a full spectrum of experience and stimuli that extends beyond the best attempts of our imagination to put boundaries around it. As Mr. Lennon knew, life is lived in a continual parade of moments that keeps marching past us, whether we’ve planned them or not.

Of course, our current ability to make life plans is not what it once was. In fact, most aspects of our former lives have gone into a forced hibernation. Suddenly, our calendars are completely clear and we have a lot of unexpected time on our hands. So many of us have been spending more of that time on social media. I don’t know about you, but I’m finding that a poor substitute for the real world.

I’ve noticed a few of my friends have recently posted that they’re taking a break from Facebook. That’s not unprecedented. But I think this time might be different. Speaking for myself, I have recently been experiencing a strange combination of anxiety and ennui when I spend any time on Facebook.

First, there are the various posts of political and moral outrage. I agree with almost all of them, in direction if not necessarily in degree.

And then there are the various posts of inspirational quotes and assorted pictures of loaves of bread, pets, gardens, favorite albums, our latest hobby, walks in the woods and kids doing adorable things. It is the assorted bric-a-brac of our new normal under COVID.

I like and/or agree with almost all these things. Facebook’s targeting algorithm has me pretty much pegged. But if the sum total of my Facebook feed defined the actual world I had to live in, I would get pretty bored with it in the first 15 minutes.

It would be like seeing the world only in blue and orange. I like blue. I’m okay with orange. But I don’t want to see the world in only those two colours. And that’s what Facebook does.

This is not a slight against Facebook. None of us (with the possible exception of Mark Zuckerberg) should expect it to be a substitute for the real world. But now that a lot of us have been restricted from experiencing big chunks of the real world and have substituted time with social media for it, we should realize the limitations of what it can provide.

Facebook and other social media platforms give us a world without nuance, without bandwidth, without serendipity and without context. Further, it is a world that has been algorithmically altered and filtered specifically for a data-defined avatar of who we really are. We’re not even getting the full bandwidth of what is on the platform. We’re getting what happens to squeeze past the content filters that act as our own personalized gatekeepers.

What the past three months have taught me is that when we rely on social media for experience, information or perspective, we have to take it for what it is. As a source of information, it is at best highly restricted and biased. As a source of social connection and experience, it is mercilessly flattened and stripped of all nuance. As a substitute for the real world, it comes up woefully short.

Perhaps the biggest restriction with social media is that everything we see is planned and premeditated, either by humans or an algorithm. The content that is posted is posted with clear intent. And the content we actually see has been targeted to fit within some data-driven pigeonhole that an algorithm has decided represents us. What we’re missing is exactly what John Lennon was referring to when he talked about what life is: the unplanned, the unexpected, the unintended.

We’re missing an entire spectrum of color beyond blue and orange.

Crisis? What Crisis?

You would think that a global pandemic would hold our attention for a while.

Nope.

We’re tired of it. We’re moving on.  We’re off to the next thing.

Granted, in this case the next thing deserves to be focused on. It is abysmal that it still exists, and it should hold our attention – probably for the rest of our lives and beyond, until it ceases to be a thing. But it won’t. Soon we’ll be talking about something else.

And that’s the point of this post – our collective inability to remain focused on anything without being distracted by the next breaking story in our news feed. How did we come to this?

I blame memes.

To a certain extent, our culture is the product of who we are, and who we are is a product of our culture. Each is shaped by the other, going forward in a constantly improvised pas de deux. Humans create the medium – which then becomes part of the environment we adapt to.

Books and the printed word changed who we were for over five centuries. Cinema has been helping to define us for almost 150 years. And radio and television have been moulding us for the past century. Our creations have helped create who we are.

This has never been truer than with social media. Unlike other media, which took discrete chunks of our time and attention, social media is ubiquitous and pervasive. According to a recent survey, we spend on average 2 hours and 23 minutes per day on social media. That is about 13% of our waking hours. Social media has become intertwined with our lives to the point that we had to start qualifying what happens where with labels like “IRL” (In Real Life).

There is another difference between social media and what has come before it. Almost every previous entertainment medium that has demanded our attention has been built on the foundation of a long form narrative arc. Interacting with each medium has been a process – a commitment to invest a certain amount of time to go on a journey with the storyteller. The construction of a story depends on patterns that are instantly recognized by us. Once we identify them, we are invested in discovering the outcome. We understand that our part of the bargain is to exchange our time and attention. The payoff is the joy that comes from us making sense of a new world or situation, even if it is imaginary.  

But social media depends on a different exchange. Rather than tapping into our inherent love of the structure of a story, it depends on something called variable intermittent rewards. Essentially, it’s the same hook that casinos use to keep people at a slot machine or table. Not only is it highly addictive, it also pushes us to continually scroll to the next thing. It completely bypasses the thinking part of our brains and connects directly to the reward center buried in our limbic system. Rather than ask for our time and attention, social media dangles a never-ending array of bright, shiny memes that ask nothing from us: no thinking, almost no attention and a few seconds of our time at most. For a lazy brain, this is the bargain of a lifetime.
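The slot-machine comparison is literal: what casinos and feeds share is a variable-ratio schedule, where the reward arrives after an unpredictable number of pulls or scrolls. A toy simulation of the difference, with numbers invented purely for illustration:

```python
import random

# Toy comparison of a fixed reward schedule vs. a variable ("intermittent") one.
# The 1-in-10 rate is invented for illustration; the point is the unpredictability.
random.seed(42)

def fixed_schedule(n_scrolls, every=10):
    """A reward lands predictably on every 10th scroll."""
    return [1 if (i + 1) % every == 0 else 0 for i in range(n_scrolls)]

def variable_schedule(n_scrolls, p=0.1):
    """Same average payout, but every scroll is a 10% gamble, the slot-machine hook."""
    return [1 if random.random() < p else 0 for _ in range(n_scrolls)]

for name, rewards in [("fixed", fixed_schedule(100)), ("variable", variable_schedule(100))]:
    gaps, last = [], -1
    for i, hit in enumerate(rewards):
        if hit:
            gaps.append(i - last)
            last = i
    print(f"{name:8s} total rewards={sum(rewards):3d}  gaps between hits={gaps}")

# Both schedules average about one reward per ten scrolls, but only the variable one
# keeps you guessing about when the next hit will come.
```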

It’s probably not a coincidence that the media most dependent on advertising are also the media that avoid locking our attention on a single topic for an extended period. This makes social media the perfect match for interruptive ad forms. They are simply slotted into the never-ending scroll of memes.

Social media has only been around for a little over 2 decades. It has been a significant part of our lives for half that time. If even a little bit of what I suspect is happening is indeed taking place, that scares the hell out of me. It would mean that no other medium has changed us so much and so quickly.

That is something worth paying attention to. 

How Social Media Is Rewiring Our Morality

Just a few short months ago, I never dreamed that one of the many fault lines in our society would be who wore a face mask and who didn’t. But on one day last week, most of the stories on CNN.com were about just that topic.

For reasons I’ll explain at the end of this post, the debate has some interesting moral and sociological implications. But before we get to that, let’s address this question: What is morality anyway?

Who’s On First?

In the simplest form possible, there is one foundational evolutionary spectrum underlying what we consider our own morality: are we more inclined to worry about ourselves or about others? Each of us plots our own morals somewhere on this spectrum.

At one end we have the individualist, the one who continually puts “me first.” Typically, the morals of those focused only on themselves concern individual rights, freedoms and beliefs specific to them. That concern does not extend to anyone considered outside their own “in” group.

As we move across the spectrum, we next find the familial moralist: those who worry first about their own kin. Their morality is based on “family first.”

Next come those who are more altruistic, as long as that altruism is directed at those who share common ground with them. You could call this the “we first” group.

Finally, we have the true altruist, who believes in a type of universal altruism and that a rising tide truly lifts all boats.  

This concept of altruism was always a bit of a puzzle for early evolutionists. In sociological parlance, it’s called proactive prosociality — doing something nice for someone who is not closely related to you, without being asked. It seems at odds with the concept of the Selfish Gene, first introduced by evolutionary biologist Richard Dawkins in his book of the same name in 1976.

But as Dawkins has clarified over and over again since the publication of the book, selfish genes and prosociality are not mutually exclusive. They are, in fact, symbiotic.

Moral Collaboration

We have spent about 95% of our entire time as a species as hunter-gatherers. If we have evolved a mechanism of morality, it would make sense for it to be most functional in that type of environment.

Hunter-gatherer societies need to collaborate. This is where the seeds of reciprocal altruism can be found. A group of people who work together to ensure continued food supplies will outlive and out-reproduce a group of people who don’t.  From a selfish gene perspective, collaboration will beat stubborn individualism.

But this type of collaboration comes with an important caveat: It only applies to individuals that live together in the same communal group.

Social conformity acts as a manual override on our own moral beliefs. Even in situations where we may initially have a belief of what is right and wrong, most of us will end up going with what the crowd is doing.

It’s an evolutionary version of the wisdom of crowds. But our evolved social conformity safety net comes with an important caveat: it assumes that everyone in the group is  in the same physical location and dealing with the same challenge.  

There is also a threshold effect that determines how likely we are to conform. How we act in any given situation will depend on a number of factors: how strong our existing beliefs are, the situation we’re in, and how the crowd is acting. This makes sense. Our conformity is inversely related to our level of perceived knowledge. The more we think we know, the less likely it is that we’ll conform to what the crowd is doing.

We should expect that a reasonably “rugged” evolutionary environment where survival is a continual struggle would tend to produce an optimal moral framework somewhere in the middle of familial and community altruism, where the group benefits from collaboration but does not let its guard down against outside threats.

But something interesting happens when the element of chronic struggle is removed, as it is in our culture. It appears that our morality tends to polarize to opposite ends of the spectrum.

Morality Rewired

What happens when our morality becomes our personal brand, part of who we believe we are? When that happens, our sense of morality migrates from the evolutionary core of our limbic brain to our cortex, the home of our personal brand. And our morals morph into a sort of tribal identity badge.

In this case, social media can short-circuit the evolutionary mechanisms of morality.

For example, there is a proven correlation between prosociality and the “watching eye” effect: we are more likely to be good people when we have an audience.

But social media twists the concept of audience and can nudge our behavior from the prosocial to the more insular and individualistic end of the spectrum.

The success of social conformity and the wisdom of crowds depends on a certain heterogeneity in the ideological makeup of the crowd. The filter bubble of social media strips this from our perceived audience, as I have written before. It reinforces our moral beliefs by surrounding us with an audience that shares those beliefs. The confidence that comes from this tends to push us away from the middle ground of conformed morality toward outlier territory. Perhaps this is why the polarization of morality is all too evident today.

As I mentioned at the beginning, there may never have been  a more observable indicator of our own brand of morality than the current face-mask debate.

In an article on Businessinsider.com, Daniel Ackerman compared it to the crusade against seat belts in the 1970s. Certainly when it comes to our perceived individual rights and not wanting to be told what to do, there are similarities. But there is one crucial difference. You wear a seat belt to save your own life. You wear a face mask to save other lives.

We’ve been told repeatedly that the main purpose of face masks is to stop you spreading the virus to others, not the other way around. That makes the decision of whether you wear a face mask or not the ultimate indicator of your openness to reciprocal altruism.

The cultural crucible in which our morality is formed has changed. Our own belief structure of right and wrong is becoming more inflexible. And I have to believe that social media may be the culprit.

A.I. and Our Current Rugged Landscape

In evolution, there’s something called the adaptive landscape. It’s a complex concept, but in the smallest nutshell possible, it refers to how fit species are for a particular environment. In a relatively static landscape, status quos tend to be maintained. It’s business as usual. 

But a rugged adaptive landscape — one beset by disruption and adversity — drives evolutionary change through speciation, the introduction of new and distinct species.
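One common way to picture why ruggedness matters (my own toy illustration, not anything from the evolutionary literature itself): on a smooth landscape, a simple hill-climber finds the same peak from almost any starting point, while on a rugged one, climbers strand on many different local peaks, which is roughly how ruggedness produces divergence instead of a single status quo.

```python
import math
import random

# Toy sketch of smooth vs. rugged fitness landscapes (an illustration of the metaphor,
# not a biological model). A local hill-climber converges to one peak on the smooth
# landscape but strands on many different local peaks on the rugged one.
def smooth(x):
    return -(x - 0.5) ** 2                              # one gentle peak at x = 0.5

def rugged(x):
    return -(x - 0.5) ** 2 + 0.05 * math.sin(40 * x)    # same overall trend, many small peaks

def hill_climb(fitness, x, step=0.005, tries=2000):
    for _ in range(tries):
        candidate = min(1.0, max(0.0, x + random.uniform(-step, step)))
        if fitness(candidate) > fitness(x):
            x = candidate
    return round(x, 1)   # rounding just groups climbers that settled on the same peak

random.seed(0)
for name, fitness in [("smooth", smooth), ("rugged", rugged)]:
    peaks = {hill_climb(fitness, random.random()) for _ in range(30)}
    print(f"{name:7s} landscape: 30 climbers ended up on {len(peaks)} distinct peak(s)")
```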

The concept is not unique to evolution. Adapting to adversity is a feature of all complex, dynamic systems. Our economy has its own version. Economist Joseph Schumpeter called it the gale of creative destruction.

The same is true for cultural evolution. When shit gets real, the status quo crumbles like a sandcastle at high tide. When it comes to life today and everything we know about it, we are definitely in a rugged landscape. COVID-19 might be driving us to our new future faster than we ever suspected. The question is, what does that future look like?

Homo Deus

In his follow-up to his best-seller “Sapiens: A Brief History of Humankind,” author Yuval Noah Harari takes a shot at predicting just that. “Homo Deus: A Brief History of Tomorrow” looks at what our future might be. Written well before the pandemic (in 2015), the book deals frankly with the impending irrelevance of humanity.

The issue, according to Harari, is the decoupling of intelligence and consciousness. Once we break the link between the two, the human vessels that have traditionally carried intelligence become superfluous. 

In his book, Harari foresees two possible paths: techno-humanism and Dataism. 

Techno-humanism

In this version of our future, we humans remain essential, but not in our current form. Thanks to technology, we get an upgrade and become “super-human.”

Dataism

Alternatively, why do we need humans at all? Once intelligence becomes decoupled from human consciousness, will it simply decide that our corporeal forms are a charming but antiquated oddity and just start with a clean slate?

Our Current Landscape

Speaking of clean slates, many have been talking about the opportunity COVID-19 has presented to us to start anew. As I was writing this column, I received a press release from MIT promoting a new book, “Building the New Economy,” edited by Alex Pentland. I haven’t read it yet, but based on the first two lines in the release, it certainly seems to be following this type of thinking: “With each major crisis, be it war, pandemic, or major new technology, there has been a need to reinvent the relationships between individuals, businesses, and governments. Today’s pandemic, joined with the tsunami of data, crypto and AI technologies, is such a crisis.”

We are intrigued by the idea of using the technologies available to us to build a societal framework less susceptible to the inevitable Black Swans. But is this just an invitation to pry open Pandora’s box and usher in the future Yuval Noah Harari is warning us about?

The Debate 

Harari isn’t the only one seeing the impending doom of the human race. Elon Musk has been warning us about it for years. In our race to embrace artificial intelligence, Musk sees the biggest threat to human existence we have ever faced.

“I am really quite close, I am very close, to the cutting edge in AI and it scares the hell out of me,” warns Musk. “It’s capable of vastly more than almost anyone knows and the rate of improvement is exponential.”

There are those who pooh-pooh Musk’s alarmism, calling it much ado about nothing. Noted Harvard cognitive psychologist and author Steven Pinker, whose rose-colored vision of humanity’s future reliably trends up and to the right, dismissed Musk’s warnings with this: “If Elon Musk was really serious about the AI threat, he’d stop building those self-driving cars, which are the first kind of advanced AI that we’re going to see.”

In turn, Musk puts Pinker’s Pollyanna perspective down to human hubris: “This tends to plague smart people. They define themselves by their intelligence and they don’t like the idea that a machine could be way smarter than them, so they discount the idea — which is fundamentally flawed.”

From Today Forward

This brings us back to our current adaptive landscape. It’s rugged. The peaks and valleys of our day-to-day reality are more rugged than they have ever been — at least in our lifetimes.

We need help. And when you’re dealing with a massive threat that involves probability modeling and statistical inference, more advanced artificial intelligence is a natural place to look. 

Would we accept more invasive monitoring of our own bio-status, and the aggregation of that data, if it prevented more deaths? In a heartbeat.

Would we put our trust in algorithms that can instantly crunch vast amounts of data our own brains couldn’t possibly comprehend? We already have.

Would we even adopt connected devices constantly streaming the bits of data that define our existence to some corporate third party or government agency, in return for a promise of better odds that we can extend that existence? Sign us up.

We are willingly tossing the keys to our future to the Googles, Apples, Amazons and Facebooks of the world. As much as the present may be frightening, we should consider the steps we’re taking carefully.

If we continue rushing down the path towards Yuval Noah Harari’s Dataism, we should be prepared for what we find there: “This cosmic data-processing system would be like God. It will be everywhere and will control everything, and humans are destined to merge into it.”