Why Technology May Not Save Us

We are a clever race. We're not as smart as we think we are, but we are pretty damn smart. We are the only species that has managed to forcibly shift the eternal cycles of nature for our own benefit. We have bent the world to our will. And look how that's turning out for us.

For the last 10,000 years, our cleverness has set us apart from all other species on earth. For the last 1,000 years, the pace of that cleverness has accelerated. In the last 100 years, it has been advancing at breakneck speed. Our tools and ingenuity have dramatically reshaped our lives. Our everyday is full of stuff we couldn't have imagined just a few short decades ago.

That's a trend that's hard to ignore. And because of that, we could be excused for thinking the same will be true going forward. When it comes to thinking about technology, we tend to do so from a glass-half-full perspective. It's worked for us in the past. It will work for us in the future. There is no problem so big that our own technological prowess cannot solve it.

But maybe it won’t. Maybe – just maybe – we’re dealing with another type of problem now to which technology is not well suited as a solution. And here are 3 reasons why.

The Unintended Consequences Problem

Technology solutions focus on the proximate rather than the distal – which is a fancy way of saying that technology always deals with the task at hand. Because they are technology, these solutions usually come from an engineer's perspective, and engineers don't do well with nuance. Complicated they can deal with. Complexity is another matter.

I wrote about this before when I wondered why tech companies tend to be confused by ethics. It's because ethics falls into a category known as wicked problems. Racial injustice is another wicked problem. So is climate change. All of these things are complex and messy. Their dependence on collective human behavior makes them so. Engineers don't like wicked problems, because they are by definition not concretely solvable. They are also hotbeds of unintended consequences.

In "Collapse," his 2005 exploration of failed societies past and present, Jared Diamond notes that when we look forward, we tend to cling to technology as a way to dodge impending doom. But, he adds, "underlying this expression of faith is the implicit assumption that, from tomorrow onwards, technology will function primarily to solve existing problems and will cease to create new problems."

And there’s the rub. For every proximate solution it provides, technology has a nasty habit of unleashing scads of unintended new problems. Internal combustion engines, mechanized agriculture and social media come to mind immediately as just three examples. The more complex the context of the problem, the more likely it is that the solution will come with unintended consequences.

The 90-Day Problem

Going hand in hand with the unintended consequences problem is the 90-day problem. This is a port-over from the corporate world, where management tends to focus on problems that can be solved in 90 days. This comes from a human desire to link cause and effect. It's why we have to-do lists. We like to get shit done.

Some of the problems we’re dealing with now – like climate change – won’t be solved in 90 days. They won’t be solved in 90 weeks or even 90 months. Being wicked problems, they will probably never be solved completely. If we’re very, very, very lucky and we start acting immediately and with unprecedented effort, we might be seeing some significant progress in 90 years.

This is the inconvenient truth of these problems. The consequences are impacting us today but the payoff for tackling them is – even if we do it correctly – sometime far in the future, possibly beyond the horizon of our own lifetimes. We humans don’t do well with those kinds of timelines.

The Alfred E. Neuman Problem

The final problem with relying on technology is that we think of it as a silver bullet. The alternative is a huge amount of personal sacrifice and effort with no guarantee of success. So, it's easier just to put our faith in technology and say, "What, Me Worry?" like Mad magazine mascot Alfred E. Neuman. It's much easier to shift the onus for surviving our own future to some nameless, faceless geek somewhere who's working their way towards their "Eureka" moment.

While that may be convenient and reassuring, it’s not very realistic. I believe the past few years – and certainly the past few months – have shown us that all of us have to make some very significant changes in our lives and be prepared to rethink what we thought our future might be. At the very least, it means voting for leadership committed to fixing problems rather than ignoring them in favor of the status quo.

I hope I’m wrong, but I don’t think technology is going to save our ass this time.

The Fickle Fate of Memes

“In the future, everyone will be world-famous for 15 minutes.”

Attributed to Andy Warhol

If your name is Karen, I'm sorry. The internet has not been kind to you over the past two years. You're probably at the point where you hesitate before you tell people your name. And it's not your fault that your name has become meme-famous for being synonymous with bitchy white privilege.

The odds are that you’re a nice person. I know several Karens and not one of them is a “Karen.” On the other hand, I do know a few “Karen”s (as my Facebook adventure from last week makes clear) and not one of them is named Karen.

But that’s the way memes roll. You’re not at the wheel. The trolling masses have claimed your fate and you just have to go along for the ride. That’s true for Karen, where there doesn’t seem to be an actual “Karen” to which the meme can be attributed. But it’s also true when the meme starts with an actual person – like Rebecca Black.

Remember Rebecca Black? No?  I’ll jog your memory –

Yesterday was Thursday, Thursday
Today it is Friday, Friday (partyin’)
We-we-we so excited
We so excited
We gonna have a ball today

Rebecca Black

Yes, that Rebecca Black – star of “Friday”, which for many years was the most hated video in YouTube history (it still ranks at number 15 according to Wikipedia).

Admit it, when you remembered Rebecca Black, you did not do so fondly. But you know nothing about Rebecca Black. Memes seldom come bundled with a back story. So here are a few facts about "Friday" you probably didn't know.

  • Black didn’t write the song. It was written by two LA music producers
  • Black was 13 at the time the video was shot
  • She had no input into the production or the heavy use of Autotune on her vocals
  • She didn’t see the video or hear the final version of the song before it was posted to YouTube

Although Black was put front and center into the onslaught of negativity the video produced, she had very little to do with the finished product. She was just a 13-year-old girl who was hoping to become a professional singer. And suddenly, she was one of the most hated and ridiculed people in the world. The trolls came out in force. And, unsurprisingly, they were merciless. But then mainstream media jumped on the bandwagon. Billboard and Time magazines, CNN, Stephen Colbert, Jimmy Fallon, Jimmy Kimmel and more all heaped ridicule on Black.

That's a lot for any 13-year-old to handle. To understand the impact a meme can have, take 11 minutes to watch Vice's video about Black. Black seems to have emerged from the experience as a pretty well-adjusted 22-year-old who is still hoping to turn the fame she got into a positive. She is – more than anything – just trying to regain control of her own story.

The fame Rebecca Black found may have turned out to be of the caustic kind, but at least she was looking for it. Ghyslain Raza never asked for it and never wanted it. He became a meme by accident.

Ghyslain who? Allow your memory to be jogged once again. You probably know Raza better as the Star Wars Kid.

In 2002, Ghyslain Raza was a shy 14-year-old from Quebec who liked to make videos. One of those videos was shot in the school AV room while Raza was "goofing around," wielding a makeshift light saber he made from a golf ball retriever. That video fell into the hands of a classmate, who – with all the restraint middle schoolers are known for – promptly posted it online. Soon, a torrent of cyberbullying was unleashed on Raza as views climbed into the tens of millions.

The online comments were hurtful enough. More than a few commenters suggested that Raza commit suicide. Some offered to help. But it was no better for Raza in his real life. He had to change schools when what few friends he had evaporated. At the new school, it got worse: "In the common room, students climbed onto tabletops to insult me."

Imagine for a moment being 14 and dealing with this. Hell, imagine it at the age you are now. Life would be hell. It certainly was for Raza. In an interview with a Canadian news magazine, he said, "No matter how hard I tried to ignore people telling me to commit suicide, I couldn't help but feel worthless, like my life wasn't worth living."

Both Black and Raza survived their ordeals. Aleksey Vayner wasn't so lucky. The over-the-top video resume he made in 2006, "Impossible is Nothing," also became a meme when it was posted online without his permission. Actor Michael Cera was one of the many who did a parody. Like Black and Raza, Vayner battled to get his life back. He lost that battle in 2013. He died from a heart attack that a relative has said was brought on by an overdose of medication.

In our culture, online seems to equal open season. Everyone – even celebrities who should know better – seems to think it's okay to parody, ridicule, bully or even threaten death. What we conveniently forget is that there is a very real person with very real feelings on the other side of the meme. No one deserves that kind of fame.

Even if their name is Karen.

The Day My Facebook Bubble Popped

I learned this past week just how ideologically homogeneous my Facebook bubble usually is. Politically, I lean left of center. Almost all the people in my bubble are the same.

Said bubble has been built from the people I have met in the past 40 years or so. Most of these people are in marketing, digital media or tech. I seldom see anything in my feed I don’t agree with — at least to some extent.

But before all that, I grew up in a small town in a very right-wing part of Alberta, Canada. Last summer, I went to my 40-year high-school reunion. Many of my fellow graduates stayed close to our hometown for those 40 years. Some are farmers. Many work in the oil and gas industry. Most of them would fall somewhere to the right of where I sit in my beliefs and political leanings.

At the reunion, we did what people do at such things — we reconnected. Which in today’s world meant we friended each other on Facebook. What I didn’t realize at the time is that I had started a sort of sociological experiment. I had poked a conservative pin into my liberal social media bubble.

Soon, I started to see posts that were definitely coming from outside my typical bubble. But most of them fell into the “we can agree to disagree” camp of political debate. My new Facebook friends and I might not see eye-to-eye on certain things, but hell — you are good people, I’m good people, we can all live together in this big ideological tent.

On May 1, 2020, things began to change. That was when Canadian Prime Minister Justin Trudeau announced that 1,500 models of “assault-style” weapons would be classified as prohibited, effective immediately. This came after Gabriel Wortman killed 22 people in Nova Scotia, making it Canada’s deadliest shooting spree. Now, suddenly posts I didn’t politically agree with were hitting a very sensitive raw nerve. Still, I kept my mouth shut. I believed arguing on Facebook was pointless.

Through everything that’s happened in the four months since (it seems like four decades), I have resisted commenting when I see posts I don’t agree with. I know how pointless it is. I realize that I am never going to change anyone’s mind through a comment on a Facebook post.

I understand this is just an expression of free speech, and we are all constitutionally entitled to exercise it. I stuck with the Facebook rule I imposed for myself — keep scrolling and bite your tongue. Don’t engage.

I broke that rule last week. One particular post did it. This post asked why, with a COVID-19 survival rate of almost 100%, we would even need a vaccine. I knew better, but I couldn't help it.

I engaged. It was limited engagement to begin with. I posted a quick comment suggesting that with 800,000 (and counting) already gone, saving hundreds of thousands of lives might be a pretty good reason. Right or left, I couldn’t fathom anyone arguing with that.

I was wrong. Oh my God, was I wrong. My little comment unleashed a social media shit storm. Anti-vaxxing screeds, mind-control plots via China, government conspiracies to artificially over-count the death toll and rants about the sheer stupidity of people wearing face masks proliferated in the comment string for the next five days. I watched the comment string grow in stunned disbelief. I had never seen this side of Facebook before.

Or had I? Perhaps the left-leaning posts I'm used to seeing are just as conspiratorial, but I don't realize it because I happen to agree with them. I hope not, but perspective does strange things to our grasp of the things we believe to be true. Are we all — right or left — just exercising our right to free speech through a new platform? And — if we are — who am I to object?

Free speech is held up by Mark Zuckerberg and others as hallowed ground in the social-media universe. In a speech last fall at Georgetown University, Zuckerberg said: “The ability to speak freely has been central in the fight for democracy worldwide.”

It's hard to argue with that. The ability to publicly disagree with the government or any other holder of power over you is much better than any alternative. And the drafters of the U.S. Bill of Rights agreed. Freedom of speech was enshrined in the First Amendment. But the authors of that amendment — perhaps presciently — never defined exactly what constituted free speech. Maybe they knew it would be a moving target.

Over the history of the First Amendment, it has been left to the courts to decide what the exceptions would be.

In general, the courts have tightened the definitions around one area — what types of expression constitute a "clear and present danger" to others. Currently, unless you're specifically asking someone to break the law in the very near future, you're protected under the First Amendment.

But is there a bigger picture here — one very specific to social media? Yes, legally in the U.S. (or Canada), you can post almost anything on Facebook.

Certainly, taking a stand against face masks and vaccines would qualify as free speech. But it’s not only the law that keeps society functioning. Most of the credit for that falls to social norms.

Social norms are the unwritten laws that govern much of our behavior. They are the “soft guard rails” of society that nudge us back on track when we veer off-course. They rely on us conforming to behaviors accepted by the majority.

If you agree with social norms, there is little nudging required. But if you happen to disagree with them, your willingness to follow them depends on how many others also disagree with them.

Famed sociologist Mark Granovetter showed in his "Threshold Models of Collective Behavior" that there can be tipping points in groups. If there are enough people who disagree with a social norm, it will create a cascade that can lead to a revolt against the norm.

Prior to social media, the thresholds for this type of behavior were quite high. Even if some of us were quick to act anti-socially, we were generally acting alone.

Most of us felt we needed a substantial number of like-minded people before we were willing to upend a social norm. And when our groups were determined geographically and comprised of ideologically diverse members, this was generally sufficient to keep things on track.

But your social-media feed dramatically lowers this threshold.

Suddenly, all you see are supporting posts of like-minded people. It seems that everyone agrees with you. Emboldened, you are more likely to go against social norms.
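If you want to see the mechanics of that threshold math, here is a rough sketch in Python. To be clear, this is my own toy illustration, not Granovetter's actual model: each simulated person breaks the norm once the share of norm-breakers they can see reaches their personal threshold, and the visible_share knob is my stand-in assumption for how narrow a feed is (a narrower feed over-represents the rule-breakers you already agree with).

import random

# Toy threshold-cascade sketch (illustrative only, not Granovetter's model).
# Each person defies the norm once the share of defectors they perceive
# reaches their personal threshold. A narrow feed (visible_share < 1.0)
# inflates that perceived share by a factor of 1 / visible_share.
def run_cascade(thresholds, visible_share=1.0):
    n = len(thresholds)
    defecting = [t == 0.0 for t in thresholds]  # the instigators start it
    changed = True
    while changed:
        changed = False
        perceived = sum(defecting) / (n * visible_share)
        for i, t in enumerate(thresholds):
            if not defecting[i] and perceived >= t:
                defecting[i] = True
                changed = True
    return sum(defecting) / n  # final fraction who break the norm

random.seed(42)
population = [0.0] * 10 + [random.random() for _ in range(990)]
print(run_cascade(population, visible_share=1.0))  # broad, diverse view
print(run_cascade(population, visible_share=0.1))  # narrow, like-minded feed

Run it a few times with different seeds and the pattern in this toy model holds: the narrower the slice of the world you see, the smaller the spark needed to set off a full revolt against the norm.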

The problem here is that social norms are generally there because they are in the best interests of the majority of the people in society. If you go against them, by refusing a vaccine or refusing to wear a face mask, and thereby allow a disease to spread, you endanger others. Perhaps it doesn't meet the legal definition of "imminent lawlessness," but it does present a "clear and present danger."

That’s a long explanation of why I broke my rule about arguing on Facebook.

Did I change anyone’s mind? No. But I did notice that the person who made the original post has changed their settings, so I don’t see the political ones anymore. I just see posts about grandkids and puppies.

Maybe it’s better that way.

Do We Still Need Cities?

In 2011, Harvard economist Edward Glaeser called the city “man’s greatest invention” in his book “Triumph of the City,” noting that “there is a near-perfect correlation between urbanization and prosperity across nations.”

Why is this so? It’s because historically we needed a critical mass of connection in order to accelerate human achievement.  Cities bring large numbers of people into closer, more frequent and productive contact than other places.  This direct, face-to-face contact is critical for facilitating the exchange of knowledge and ideas that lead to the next new venture business, medical discovery or social innovation.

This has been true throughout our history. While cities can be messy and crowded, they also spin off an amazing amount of ingenuity and creativity, driving us all forward.

But the very same things that make cities hotbeds of productive activity also make them human petri dishes in the midst of a pandemic.

Example: New York

If the advantages that Glaeser lists are true for cities in general, they're doubly true for New York, which just might be the greatest city in the world. Manhattan's population density is 66,940 people per square mile, the highest of any area in the U.S. It's also diverse, with 36% of its population foreign-born. It attracts talent in all types of fields from around the world.

Unfortunately, all these things also set New York up to be particularly hard hit by COVID-19. To date, according to Google's tracker, it has 236,000 confirmed cases of COVID-19 and a mortality rate of 10%. That case count would put it ahead of all but 18 countries in the world. What has made New York great has also made it tragically vulnerable to a pandemic.

New York is famous for its gritty resilience. But at least one New Yorker thinks this might be the last straw for the Big Apple. In an essay entitled “New York City is dead forever,” self-published and then reprinted by the New York Post, comedy club owner James Altucher talks about how everyone he knows is high-tailing it out of town for safer, less crowded destinations, leaving a ghost town in their wake.

He doesn’t believe they’re coming back. The connections that once relied on physical proximity can now be replicated by technology. Not perfectly, perhaps, but well enough. Certainly, well enough to tip the balance away from the compromises you have to be prepared to swallow to live in a city like New York: higher costs of living, exorbitant real estate, higher crime rates and the other grittier, less-glittery sides of living in a crowded, dense metropolis.


Example: Silicon Valley

So, perhaps tech is partly (or largely) to blame for the disruption to the interconnectedness of cities. But, ironically, thanks to COVID-19, the same thing is happening to the birthplace of tech: Silicon Valley and the Bay Area of Northern California.

Barb is a friend of mine who was born in Canada but has lived much of her life in Palo Alto, California — a stone's throw from the campus of Stanford University. She recently beat a temporary retreat back to her home and native land north of the 49th parallel. When her Palo Alto friends and neighbors asked why Canada seemed to be a safer place right now, she explained it like this,

“My county — Santa Clara — with a population of less than 2 million people, has had almost as many COVID cases in the last three weeks as the entire country of Canada.”

She’s been spending her time visiting her Canadian-based son and exploring the natural nooks and crannies of British Columbia while doing some birdwatching along the way.  COVID-19 is just one of the factors that has caused her to start seriously thinking about life choices she couldn’t have imagined just a few short years ago. As Barb said to me as we chatted, “I have a flight home booked — but as it gets closer to that date, it’s becoming harder and harder to think about going back.”  

These are just two examples of the reordering of what will become the new normal. Many of us have retreated in search of a little social distance from what our lives were. Increasingly, we are relying on tech to bridge the distances that we are imposing between ourselves and others. Breathing room — in its most literal sense — has become our most immediate priority.

This won’t change anytime soon. We can expect this move to continue for at least the next year. It could be — and I suspect it will be — much longer. Perhaps James Altucher is right. Could this pandemic – aided and abetted by tech – finally be what kills mankind’s greatest invention? As he writes in his essay,

“Everyone has choices now. You can live in the music capital of Nashville, you can live in the ‘next Silicon Valley’ of Austin. You can live in your hometown in the middle of wherever. And you can be just as productive, make the same salary, have higher quality of life with a cheaper cost.”

If Altucher is right, there’s another thing we need to think about. According to Glaeser, cities are not only great for driving forward innovation. They also put some much-needed distance between us and nature:

"We humans are a destructive species. We tend to destroy stuff when we're around it. And if you love nature, stay away from it."

As we look to escape one crisis, we might be diving headlong into the next.

Our Brain And Its Junk News Habit

Today, I'm going to return to the Reuters Digital News Report and look at the relationship between us, news and social media. But what I'm going to talk about is probably not what you think I'm going to talk about.

Forget all the many, many problems that come with relying on social media to be informed. Forget about filter bubbles and echo chambers. Forget about misleading or outright false stories. Forget about algorithmic targeting. Forget about the gaping vulnerabilities that leave social media open to nefarious manipulation. Forget all that (but just for the moment, because those are all horrible and very real problems that we need to focus on).

Today, I want to talk about one specific problem that comes when we get our news through social media. When we do that, our brains don’t work the way they should if we want to be well informed.

First, let's talk about the scope of the issue here. According to the Reuters study, more people in the U.S. — 72% — turn to online sources for news than to any other source. Television comes in second at 59%. If we single out social media, it comes in third at 48%. Trailing the pack is print media at just 20%.

[Chart: Reuters Digital News Report 2020 – sources of news in the U.S.]

If we plot this on a chart over the last seven years, print and social media basically swapped spots, with their respective lines crossing each other in 2014, one trending up and one trending down. In 2013, 47% of us turned to print as a primary news source and just 27% of us went to social media.

If we further look at those under 35, accessing news through social media jumps to the number-one spot by a fairly wide margin. And because they’re young, we’re not talking Facebook here. Those aged 18 to 24 are getting their news through Instagram, Snapchat and TikTok.

The point, if it’s not clear by now, is that many of us get our news through a social media channel — and the younger we are, the more that’s true. The paradox is that the vast majority of us — over 70% — don’t trust the news we see on our social media feeds. If we were to pick an information source we trusted, we would never go to social media.

This brings up an interesting juxtaposition in how we’re being informed about the world: almost all of us are getting our news through social media, but almost none of us are looking for it when we do.

According to the Reuters report, 72% of us (all ages, all markets) get our news through the "side door." This means news is delivered to us — primarily through social media and search — without us intentionally going directly to the source of the information. For those aged 18 to 24, "side door" access jumps to 84% and, of that, access through social media jumps to 38%.

Our loyalty to the brand and quality of an information provider is slipping between our fingers and we don’t seem to care. We say we want objective, non-biased, quality news sources, but in practice we lap up whatever dubious crap is spoon-fed to us by Facebook or Instagram. It’s the difference between telling our doctor what we intend to eat and what we actually eat when we get home to the leftover pizza and the pint of Häagen-Dazs in our fridge.

The difference between looking for and passively receiving information is key to understanding how our brain works. Let’s talk a little bit about “top-down” and “bottom-up” activation and the “priming” of our brain.

When our brain has a goal — like looking for COVID-19 information — it behaves significantly differently than when it is just bored and wanting to be entertained.

The goal sets a "top down" intent. It's like an executive order to the various bits and pieces of our brain to get their shit together and start working as a team. Suddenly the entire brain focuses on the task at hand, and things like reliability of information become much more important to us. If we're going to go directly to an information source we trust, this is when we'll do it.

If the brain isn’t actively engaged in a goal, then information has to initiate a “bottom-up” activation. And that is an entirely different animal.

We never go to social media looking for a specific piece of news. That’s not how social media works. We go to our preferred social channels either out of sheer boredom or a need for social affirmation. We hope there’s something in the highly addictive endlessly scrolling format that will catch our attention.

For a news piece to do that, it has to somehow find a “hook” in our brain.  Often, that “hook” is an existing belief. The parts of our brain that act as gatekeepers against unreliable information are bypassed because no one bothered to wake them up.

There is a further brain-related problem with relying on social media, and that’s the “priming” issue. This is where one stimulus sets a subconscious “lens” that will impact subsequent stimuli. Priming sets the brain on a track we’re not aware of, which makes it difficult to control.

Social media is the perfect priming platform. One post sets the stage for the next, even if they’re completely unrelated.

These are just two factors that make social media an inherently dangerous platform to rely on for being informed.

The third is that social media makes information digestion much too easy. Our brain barely needs to work at all. And if it does need to work, we quickly click back and scroll down to the next post. Because we’re looking to be entertained, not informed, the brain is reluctant to do any unnecessary heavy lifting.   

This is a big reason why we may know the news we get through social media channels is probably not good for us, but we gulp it down anyway, destroying our appetite for more trustworthy information sources.

These three things create a perfect cognitive storm for huge portions of the population to be continually and willingly misinformed. That’s not even factoring in all the other problems with social media that I mentioned at the outset of this column. We need to rethink this — soon!

Playing Fast and Loose with the Truth

A few months ago, I was having a conversation with someone and they said something that I was pretty sure was not true. I don’t know if it was a deliberate lie. It may have just been that this particular person was uninformed. But they said it with the full confidence that what they said was true. I pushed back a little and they instantly defended their position.

My first instinct was just to let it go. I typically don't go out of my way to cause friction in social settings. Besides, it was an inconsequential thing. I didn't really care about it. But I was feeling a little pissy at the time, so I fact-checked them by looking it up on my phone. And I was right. They had stated something that wasn't true and then doubled down on it.

Like I said, it was inconsequential – a trivial conversation point. But what if it wasn’t? What if there was a lot riding on whether or not what they said was true? What if this person was in a position of power, like – oh, I don’t know – the President of the United States?

The role of truth in our social environment is currently a thing in flux. I cannot remember a time when we have been more suspicious of what we see, read and hear on a daily basis. As I mentioned a couple of weeks ago, less than 40% of us trust what we hear on the news. And when that news comes through our social media feed, the level of distrust jumps to a staggering 80%.

Catching someone in a lie has significant social and cognitive implications. We humans like to start from a default position of trust. If we can do that, it eliminates a lot of social friction and cognitive effort. We only go to not trusting when we have to protect ourselves.

Our proclivity for trust is what has made global commerce and human advancement possible. But, unfortunately, it does leave us vulnerable. Collectively, we usually play by the same playbook I was initially going to use in my opening example. It's just easier to go along with what people say, even if we may doubt that it's true. This is especially so if the untruth is delivered with confidence. We humans love confidence in others because it means we don't have to work as hard. Confidence is a signal we use to decide to trust, and trust is always easier than distrust. The more confident the delivery, the less likely we are to question it.

It's this natural human tendency that put the "con" in "con artist." "Con" is short for confidence, and it originates with an individual named William Thompson, who plied the streets of New York in the 1840s. He would walk up to a total stranger who was obviously well off and greet them like a long-lost friend. After a few minutes of friendly conversation, during which the target would be desperately trying to place this individual, Thompson would ask for the loan of something of value. He would then set his hook with this: "Do you have confidence in me to loan me this [item] til tomorrow?" The success of this scam was totally dependent on an imbalance of confidence: extreme confidence on the part of the con artist and a lack of confidence on the part of the target.

It is ironic that in an era where it’s easier than ever to fact check, we are seeing increasing disregard for the truth. According to the Washington Post, Donald Trump passed a misinformation milestone on July 9, making 20,000 false or misleading claims since he became President. He surged past that particular post when he lied 62 times on that day alone. I don’t even think I talk 62 times per day.

This habit of playing fast and loose with the truth is not Trump’s alone. Unfortunately, egregious lying has been normalized in today’s world. We have now entered an era where full time fact checking is necessary. On July 7, NY Times columnist Thomas Friedman said we need a Biden-Trump debate, but only on two conditions: First, only if Trump releases his tax returns, and second, only if there is a non-partisan real-time fact-checking team keeping the debaters accountable.

We have accepted this as the new normal. But we shouldn’t. There is an unacceptable cost we’re paying by doing so. And that cost becomes apparent when we think about the consequence of lying on a personal basis.

If we catch an acquaintance in a deliberate lie, we put them in the untrustworthy column. We are forced into a default position of suspicion whenever we deal with them in the future. This puts a huge cognitive load on us. As I said before, it takes much more effort to not trust someone. It makes it exponentially harder to do business with them. It makes it more difficult to enjoy their company. It introduces friction into our relationship with them.

Even if the lie is not deliberate but stated with confidence, we label them as uninformed. Again, we trust them less.

Now multiply this effort by everyone. You quickly see where the model breaks down. Lying may give the liar a temporary advantage, but it’s akin to a self-limiting predator-prey model. If it went unchecked, soon the liars would only have other liars to deal with. It’s just not sustainable.

Truth exists for a reason. It’s the best social strategy for the long term. We should fight harder for it.

What Would Aaron Do?

I am a big Aaron Sorkin fan. And before you rain on my parade, I say that fully understanding that he epitomizes the liberal intellectual elitist, sanctimonious cabal that has helped cleave American culture in two. I get that. And I don’t care.

I get that his message is from the left side of the ideological divide. I get that he is preaching to the choir. And I get that I am part of the choir. Still, given the times, I felt that a little Sorkin sermon was just what I needed. So I started rewatching Sorkin’s HBO series “The Newsroom.”

If you aren't part of this particular choir, let me bring you up to speed. The newsroom in this case is at the fictional cable network ACN. One of the primary characters is lead anchor Will McAvoy (played by Jeff Daniels), who has built his audience by being noncontroversial and affable — the Jay Leno of journalism.

This brings us to the entrance of the second main character: MacKenzie McHale, played by Emily Mortimer. Exhausted from years as an embedded journalist covering multiple conflicts in Afghanistan, Pakistan and Iraq, she comes on board as McAvoy's new executive producer (and also happens to be his ex-girlfriend).

In typical Sorkin fashion, she goads everyone to do better. She wants to reimagine the news by “reclaiming journalism as an honorable profession,” with “civility, respect, and a return to what’s important; the death of bitchiness; the death of gossip and voyeurism; speaking truth to stupid.”

I made it to episode 3 before becoming profoundly sad and world-weary. Sorkin's sermon from 2012 — just eight years ago — did not age well. It certainly didn't foreshadow what was to come.

Instead of trying to be better, the news business — especially cable news — has gone in exactly the opposite direction, heading straight for Aaron Sorkin's worst-case scenario. This scenario formed part of a Will McAvoy speech in that third episode: "I'm a leader in an industry that miscalled election results, hyped up terror scares, ginned up controversy, and failed to report on tectonic shifts in our country — from the collapse of the financial system to the truths about how strong we are to the dangers we actually face."

That pretty much sums up where we are. But even Sorkin couldn’t anticipate what horrors social media would throw into the mix. The reality is actually worse than his worst-case scenario. 

Sorkin’s appeal for me was that he always showed what “better” could be. That was certainly true in his breakthrough political hit “The West Wing.” 

He brought the same message to the jaded world of journalism in "The Newsroom." He was saying, "Yes, we are flawed people working in a flawed system set in a flawed nation. But it can be better. ... Our future is in our hands. And whatever that future may be, we will be held accountable for it when it happens."

This message is not new. It was the blood and bones of Abraham Lincoln's annual message to Congress on December 1, 1862, just one month before he signed the Emancipation Proclamation. Lincoln was preparing the nation for the choice of a path which may have been unprecedented and unimaginably difficult, but would ultimately be proven to be the more moral one: "It is not 'can any of us imagine better?' but, 'can we all do better?' The dogmas of the quiet past are inadequate to the stormy present. The occasion is piled high with difficulty, and we must rise with the occasion."

"The Newsroom" was Sorkin's last involvement with a continuing TV series. He was working on his directorial movie debut, "Molly's Game," when Trump got elected.

Since then, he has adapted Harper Lee's "To Kill a Mockingbird" for Broadway, with "The Newsroom's" Jeff Daniels as Atticus Finch.

Sorkin being Sorkin, he ran into a legal dispute with Lee’s estate when he updated the source material to be a little more open about the racial tension that underlies the story. Aaron Sorkin is not one to let sleeping dogmas lie. 

Aaron Sorkin also wrote a letter to his daughter and wife on the day after the 2016 election, a letter that perhaps says it all.

It began, “Well the world changed late last night in a way I couldn’t protect us from.”

He was saying that as a husband and father. But I think it was a message for us all — a message of frustration and sadness. He closed the letter by saying “I will not hand [my daughter] a country shaped by hateful and stupid men. Your tears last night woke me up, and I’ll never go to sleep on you again.”

Yes, Sorkin was preaching when he was scripting “The Newsroom.” But he was right. We should do better. 

In that spirit, I’ll continue to dissect the Reuters study on the current state of journalism I mentioned last week. And I’ll do this because I think we have to hold our information sources to “doing better.” We have to do a better job of supporting those journalists that are doing better. We have to be willing to reject the “dogmas of the quiet past.” 

One of those dogmas is news supported by advertising. The two are mutually incompatible. Ad-supported journalism is a popularity contest, with the end product a huge audience custom sliced, diced and delivered to advertisers — instead of a well-informed populace.

We have to do better than that.

How We Forage for the News We Want

The Reuters Institute, out of the UK, has just released a comprehensive study looking at how people around the world are finding their news. There is a lot here, so I'll break it into pieces over a few columns and look at the most interesting aspects. Today, I'll look at the 50,000-foot view, which can best be summarized as a dysfunctional relationship between our news sources and ourselves. And like most dysfunctional relationships, the culprit here is a lack of trust.

Before we dive in, we should spend some time looking at how the way we access news has changed over the last several years.

Over my lifetime, we have trended in two general directions – toward less cognitively demanding news channels and less destination-specific news sources. The most obvious shift has been away from print. According to Journalism.org and the Pew Research Center, circulation of U.S. daily newspapers peaked around 1990, at about 62 and a half million. That's one subscription for every 4 people in the country at that time.

In 2018, it was projected that circulation had dropped more than 50%, to less than 30 million. That would have been one subscription for every 10 people. We were no longer reading our news in a non-digital format. And that may have significant impact on our understanding of the news. I’ll return to this in another column, but for now, let’s just understand that our brain operates in a significantly different way when it’s reading rather than watching or listening.

Up to the end of the last century, we generally trusted news destinations. Whether it be a daily newspaper like the New York Times, a news magazine like Time or a nightly newscast such as any of the network news shows, each was a destination that offered one thing above all others – the news. And whether you agreed with them or not, each had an editorial process that governed what news was shared. We had a loyalty to our chosen news destinations that was built on trust.

Over the past two decades, this trust has broken down due to one primary factor – our continuing use of social media. And that has dramatically shifted how we get our news.

In the US, three out of every four people use online sources to get their news. One in two use social media. Those aged 18 to 24 are more than twice as likely to rely on social media. In the UK, under-35s get more of their news from social media than any other source.

Also, influencers have become a source of news, particularly amongst young people. In the US, a quarter of those 18 to 24 used Instagram as a source of news about COVID.

This means that most times, we're getting our news through a social media lens. Let's set aside for a moment the filtering and information veracity problems this introduces. Let's just talk about intent.

I have talked extensively in the past about information foraging when it comes to search. When information is "patchy" and spread diversely, the brain has to make a quickly calculated guess about which patch is most likely to hold the information it's looking for. With information foraging, the intent we have frames everything that comes after.

In today’s digital world, information sources have disaggregated into profoundly patchy environments. We still go to news-first destinations like CNN or Fox News but we also get much of our information about the world through our social media feeds. What was interesting about the Reuters report was that it was started before the COVID pandemic, but the second part of the study was conducted during COVID. And it highlights a fascinating truth about our relationship with the news when it comes to trust.

The study shows that the majority of us don't trust the news we get through social media, but most times, we're okay with that. Less than 40% of people trust the news in general, and even when we pick a source, less than half of us trust that particular channel. Only 22% indicated they trust the news they see in social media. Yet half of us admit we use social media to get our news. The younger we are, the more reliant we are on social media for news. The fastest-growing sources for news amongst all age groups – but especially those under 30 – are Instagram, Snapchat and WhatsApp.

Here's another troubling fact that fell out of the study. Social platforms, especially Instagram and Snapchat, are dominated by influencers. That means that much of our news comes to us by way of a celebrity influencer reposting it on their feed. This is a far cry from the editorial review process that used to act as a gatekeeper on our trusted news sources.

So why do we continue to use news sources we admit we don't trust? I suspect it may have to do with something called the Meaning Maintenance Model. Proposed in 2006 by Heine, Proulx and Vohs, the model speculates that a primary driver for us is to maintain our beliefs about how the world works. This is related to the sensemaking loop (Klein, Moon and Hoffman) I've also talked about in the past. We make sense of the world by first starting with the existing frame of what we believe to be true. If what we're experiencing is significantly different from what we believe, we will update our frame to align with the new evidence.

What the Meaning Maintenance Model suggests is that we will go to great lengths to avoid updating our frame. It’s much easier just to find supposed evidence that supports our current beliefs. So, if our intent is to get news that supports our existing world view, social media is the perfect source. It’s algorithmically filtered to match our current frame. Even if we believe the information is suspect, it still comforts us to have our beliefs confirmed. This works well for news about politics, societal concerns and other ideologically polarized topics.

We don't like to admit this is the case. According to the Reuters study, 60% of us indicate we want news sources that are objective and not biased to any particular point of view. But this doesn't jibe with reality at all. As I wrote about in a previous column, almost all mainstream news sources in the US appear to have a significant bias to the right or left. If we're talking about news that comes through social media channels, that bias is doubled down on. In practice, we are quite happy foraging from news sources that are biased, as long as that bias matches our own.

But then something like COVID comes along. Suddenly, we all have skin in the game in a very real and immediate way. Our information foraging intent changes and our minimum threshold for the reliability of our news sources goes way up. The Reuters study found that when it comes to sourcing COVID information, the most trusted sources are official sites of health and scientific organizations. The least trusted sources are random strangers, social media and messaging apps.

It requires some reading between the lines, but the Reuters study paints a troubling picture of the state of journalism and our relationship with it. Where we get our information directly impacts what we believe. And what we believe determines what we do.

These are high stakes in an all-in game of survival.

Hope's Not Dead, It's Just Been Handed Down

It’s been interesting writing this column in the last 4 months. In fact, it’s been interesting writing it for the last 4 years. And I use the word “interesting” as a euphemism. It’s been many things: gut-wrenching, frustrating, maddening and head-scratching. Many times – most times – the writing of this has made me profoundly sad and despairing of our future. It has made me question my own beliefs. But yes, in a macabre sense, it has been interesting.

I call myself a humanist. I believe in the essential goodness of humans, collectively and on the average. I believe we are the agents of our own fate. I believe there are ups and downs in our stewardship of our future, but over the longer term, we will trend in the right direction.

I still am trying to believe in these things. But I have to tell you, it’s getting really hard.

I'm sure it's not just me. Over the years, this column – Media Insider – has morphed into the most freeform of MediaPost's columns. The rotating stable of writers, including myself, really has carte blanche to write about whatever happens to be on our minds. That's why I was drawn to it. I'm not actively involved in any aspect of the industry anymore, so I really can't provide any relevant commentary on things like Search, Mobile, TV or the agency world. But I do have many opinions about many things. And this column seemed to be the best place to talk about them.

What really fascinates me is the intersection between human behavior and technology. And so, most of my columns unpack some aspect of that intersection. In the beginning, it seemed that technology was dovetailing nicely with my belief in human goodness. Then things started to go off the track. In the past four years, this derailment has accelerated. In the past four months, it’s been like watching a train wreck.

The writers of Media Insider have all done our best to chronicle what the f*ck is going on. Today I looked back at our collective work over the past 4 months. I couldn’t help thinking that it was like trying to write at the micro level about what happens when a table is upended in the middle of dinner. Yes, I can report that the pepper shaker is still next to the salt shaker. But the bigger story is that everything is skidding down the table to the abyss beyond the edge.

I suspect that where we are now can be directly traced back to the source of my naïve optimism some years ago. We were giddy about what technology could do, not just for marketing, but for everything about our world. But to use the language of COVID, we had been infected but were still asymptomatic. Inside our culture, the virus of unintended consequences was already at work, replicating itself.

My vague and clung-to hope is that this is just another downswing. And my hope comes from my kids. They are better people than I was at their age: more compassionate, more empathetic and more committed to their beliefs. They have rejected much of the cultural baggage of systemic inequality that I took for granted in my twenties. They are both determined to make a difference, each in their own way. In them, I again have hope for the future.

We love to lump people together into categories and slap labels on them. That is also true for my daughters’ generation. They are often called Generation Z.

Every generation has its angels and assholes. That is also true for Generation Z. But here's the interesting thing about them. They're really tough to label. Here's an excerpt from a recent report on Generation Z from McKinsey:

“Our study based on the survey reveals four core Gen Z behaviors, all anchored in one element: this generation’s search for truth. Gen Zers value individual expression and avoid labels. They mobilize themselves for a variety of causes. They believe profoundly in the efficacy of dialogue to solve conflicts and improve the world. Finally, they make decisions and relate to institutions in a highly analytical and pragmatic way.”

The other interesting thing about this generation is that they grew up with the technology that seems to be upending the world of every previous generation. They seem – somehow – to have developed a natural immunity to the most harmful effects of social media. Maybe my hope that technology will ultimately make us better people wasn’t wrong, it just had to skip a couple of generations.

I know it's dangerous to lionize or demonize any group – generational or otherwise – en masse. But after watching the world go to hell in a handbasket in the hands of those in charge for the last few years, I have no qualms about handing things over to my kids and others of their age.

And we should do it soon, while there is still a world to hand over.

Are We Killing Politeness?

One of the many casualties of our changing culture seems to be politeness. When the President of the United States is the poster child for rude behavior, it’s tough for politeness to survive. This is especially true in the no-holds-barred, digitally distanced world of social media.

I consider myself to be reasonably polite. Being so, I also expect this in others. Mild rudeness makes me anxious. Excessive rudeness makes me angry. This being the case, I am troubled by the apparent decline of civility. So today I wanted to take a look at politeness and why it might be slipping away from us.

First of all, we have to understand that politeness is not universal. What is considered polite in one culture is not in another.

Secondly, being polite is not the same as being friendly. Or empathetic. Or being respectful of others. Or being compassionate, according to a post from The Conversation. There is a question of degree and intent here. Being polite is a rather unique behavior that encompasses both desirable and less desirable qualities. And that begs the question: What is the purpose of politeness? Is a less-polite world a good or a bad thing?

First, let's look at the origin of the word. It comes from the Latin "politus," meaning "polished — made smooth." Just in case you're wondering, "politics" does not come from the same root. That comes from the Greek word for "citizen" — "polites."

One last etymological nugget. The closest comparison to polite may be “nice,” which originates from the Latin “nescius,” meaning “ignorant”. Take that for what it’s worth.

This idea of politeness as a type of social “polish” really comes from Europe — and especially Britain. There, politeness was linked with class hierarchies. Being polite was a sign of good breeding — a dividing line between the high-born and the riffraff. This class-bound definition came along with the transference of the concept to North America.

Canada is typically considered one of the most polite nations in the world. As a Canadian who has traveled a fair amount, I would say that’s probably true.

But again, there are variations in the concept of politeness and how it applies to both Canadians and Americans.

When you consider the British definition of politeness, you begin to see how Americans and Canadians might respond differently to it. To understand that is to understand much of what makes up our respective characters.

As a Canadian doing much of my business in the U.S. for many years, I was always struck by the difference in approaches I found north and south of the 49th parallel. The Canadian businesses we met with were unfailingly polite, but seldom bought anything. Negotiating the prospect path in Canada was a long and often frustrating journey.

American businesses were much more likely to sign a contract. On the whole, I would also say they were friendlier in a more open and less-guarded way. I have to admit that in a business setting, I preferred the American approach.

According to anthropologists Penelope Brown and Stephen Levinson, who have extensively researched politeness, there is negative and positive politeness. Negative politeness is concerned with adhering to social norms, often by deferring to someone or something else.

This is Canadian politeness personified. Our entire history is one of deference to greater powers, first to our colonial masters — the British and French — and, more recently, from our proximity to the cultural and economic master that is the U.S.

For Canadians, deferral is survival. As former Prime Minister Pierre Trudeau once said about the U.S., “Living next to you is in some ways like sleeping with an elephant. No matter how friendly and even-tempered is the beast, if I can call it that, one is affected by every twitch and grunt.”

Negative politeness is a way to smooth out social friction, but there is good and bad here. Ideally it should establish a baseline of trust, respect and social capital. But ultimately, politeness is a consensus of compromise.  And that’s why Canadians are so good at it.  Negative politeness wants everything to be fair.

But then there is positive politeness, which is more American in tone and nature. This is a desire to help others, making it more closely linked to compassion. But in this noble motive there is also a unilateral defining of what is right and wrong. Positive politeness tries to make everything right, based on the protagonist’s definition of what that is.

The two sides of politeness actually come from different parts of the brain. Negative politeness comes from the part of the brain that governs aggression. It is all about applying brakes to our natural instincts. But positive politeness comes from the part of the brain that regulates social bonding and affiliation.

When you understand this, you understand the difference between Canadians and Americans in what we consider polite. For the former, our definition was handed down from British, class-linked origins and has morphed into a culture of compromise and deferral.

The American definition comes from many generations of being the de facto moral leaders of the free world.

We (Canadians) want to be nice. You (Americans) want to be right. The two are not mutually exclusive, but they are also not the same thing. Not by a long shot.

What Trump has done (with a certain kind of perverse genius) is play on this national baseline of compassion. He has wantonly discarded any vestiges of politeness and split the nation on what it means to be right.

But in eliminating politeness, he has also eliminated that governor of our behavior. Reactions about what is right and wrong are now immediate, rough and unfiltered.

The polish that politeness brings — that deferral of spoken judgement for even a brief moment in order to foster cooperation — is gone. We have no opportunity to consider other perspectives. We have no motive to cooperate. This is abundantly apparent on every social media platform.

In game theory, politeness is a highly successful strategy commonly called "tit for tat." It starts by assuming a default position of fairness from the other party, continues to cooperate if this proves to be true, and escalates to retaliation if it's not. But this tactic evolved in a world of face-to-face encounters. Somehow, it seems less needed in a divided world where rudeness and immediate judgement are the norm.
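For what it's worth, the whole strategy fits in a few lines of code. This is just my own sketch in Python, with made-up "polite" and "rude" labels, not anything lifted from the game-theory literature:

# Toy sketch of "tit for tat" (illustrative only): open politely, then
# simply mirror whatever the other party did on their previous move --
# keep cooperating if they were polite, retaliate if they were rude.
def tit_for_tat(their_history):
    if not their_history:
        return "polite"          # the default position of fairness
    return their_history[-1]     # mirror their last move

# Example: they open politely, lash out once, then return to civility.
their_moves = ["polite", "rude", "polite"]
my_moves = [tit_for_tat(their_moves[:i]) for i in range(len(their_moves) + 1)]
print(my_moves)  # ['polite', 'polite', 'rude', 'polite']

The punishment lasts exactly one round and forgiveness is automatic, which is part of why the strategy did so well in Robert Axelrod's famous tournaments.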

Still, I will cling to my notions of politeness. Yes, sometimes it seems to get in the way of definitive action. But on the whole, I would rather live in a world that’s a little nicer and a little more polite, even if that seems foolish to some of you.