Older? Sure. Wiser? Debatable.

I’ve always appreciated Mediapost Editor-in-Chief Joe Mandese’s take on things. It’s usually snarky, cynical and sarcastic, all things which are firmly in my wheelhouse. He also says things I may think but wouldn’t say for the sake of political politeness.

So when Joe gets a full head of steam up, as he did in a recent post entitled “Peak Idiocracy?”, I set aside some time to read it. I can vicariously fling aside my Canadian reticence and enjoy a generous helping of Mandesian snarkiness. In this case, the post was a recap of Mediapost’s 2023 Marketing Politics Conference – and the depths to which political advertising is sinking in order to appeal to younger demographics. Without stealing Joe’s thunder (please read the post if you haven’t), one example involved TikTok and mouth mash-up filters. After the panel where this case study surfaced, Joe posed a question to the panelists.

“If this is how we are electing our representative leaders, do you feel like we’ve reached peak idiocracy in the sense that we are using mouth filters and Harry Potter memes to get their messages across?”

As Joe said, it was an “old guy question.” More than that, it was a cynical, smart, sarcastic old guy question. But the fact remains, it was an old guy question. One of the panelists, DGA Digital Director Laura Carlson responded:

“I don’t think we should discount young voters’ intelligence. I think being able to have fun with the news and have fun with politics and enjoy TikTok and enjoy the platform while also engaging with issues you care about is something I wouldn’t look down on. And I think more of it is better.”

There’s something to this. Maybe a lot to this.

First, I think we have fundamentally different ideas of “messaging” from generation to generation. Our generation (technically I’m a Boomer, but the label Generation Jones is a better fit) grew up with the idea that information, whether on TV, in newspapers, in magazines or on radio, was delivered as a complete package. There was a scarcity of information, and this bundling of curated information was our only choice for being informed.

That’s not the case for a generation raised with the Internet and social media. Becoming aware and being informed are often decoupled. In an environment jammed with information of all types – good and bad – information foraging strategies have had to evolve. Now, you have to somehow pierce the information filters we have all put in place in order to spark awareness. If you are successful in doing that and can generate some curiosity, there are umpteen million sources just a few keystrokes away where you can become informed.

Still, we “old guys” (and “old gals” – for the sake of consistency, I’ll use the masculine label, but I mean it in the gender-neutral way) do have a valid perspective that shouldn’t be dismissed as us just being old and grumpy. We’ve been around long enough to see how actions and consequences are correlated. We’ve seen how seemingly trivial trends can have lasting impacts, both good and bad. There is experience here that can prove instructive.

But we also must appreciate that those a few generations behind us have built their own cognitive strategies to deal with information that are probably a better match for the media environment we live in today.

So let me pose a different question. If only one generation could vote, and if everyone’s future depended on that vote, which generation would you choose to give the ballots to? Pew Research did a generational breakdown on awareness of social issues and for me, the answer is clear. I would far rather put my future in the hands of Gen Z and Millennials than in the hands of my own generation. They are more socially aware, more compassionate, more committed to solving our many existential problems and more willing to hold our governments accountable.

So, yes, political advertising might be dumbed down to TikTok level for these younger voters, but they understand how the social media game is played. I think they are savvy enough to know that a TikTok mash-up is not something to build a political ideology on. They accept it for what it is: a brazen attempt to scream just a little louder than the competition for their attention, standing out from the cacophony of media intrusiveness that engulfs them. If it has to be silly to do that, so be it.

Sure, the generation of Joe Mandese and myself grew up with “real” journalism: the nightly news with Dan Rather and Tom Brokaw, 60 Minutes, The MacNeil/Lehrer Report, the New York Times, The Washington Post. We were weaned on political debates that dealt with real issues.

And for all that, our generation still put Trump in the White House. So much for the wisdom of “old guys.”

Good News and Bad News about Black Swans

First, the good news. According to a new study, we may be able to predict extreme catastrophic events such as earthquakes, tsunamis, massive wildfires and pandemics through machine learning and neural networks.

The problem with these “black swan” events (events that are very rare but have extreme consequences) is that there isn’t much existing data we can use to predict them. The technical term for these is “stochastic” events – they are random and are, by definition, very difficult to forecast.

Until now. According to the study’s lead author, George Karniadakis, the researchers may have found a way to give us a heads up by using machine learning to make the most out of the meagre data we do have. “The thrust is not to take every possible data and put it into the system, but to proactively look for events that will signify the rare events,” Karniadakis says. “We may not have many examples of the real event, but we may have those precursors. Through mathematics, we identify them, which together with real events will help us to train this data-hungry operator.”

This means that this science could potentially save thousands – or millions – of lives.

But – and now comes the bad news – we have to listen to it. And we have a horrible track record of doing that. Let’s take just one black swan – COVID-19. Remember that?

Justsecurity.org is an “online forum for the rigorous analysis of security, democracy, foreign policy, and rights.” In other words, it’s their job to minimize the impact of black swans. And they put together a timeline of the U.S. response to the COVID-19 pandemic. Now that we know the consequences, it’s a terrifying and maddening read. Without getting into the details, it was months before the U.S. federal government took substantive action against the pandemic, despite repeated alerts from healthcare officials and scientists. This put the U.S. behind pretty much the entire developed world in terms of minimizing the impact of the pandemic and saving lives. All the bells, whistles and sirens were screaming at full volume, but no one wanted to listen.

Why? Because there has been a systemic breakdown in what we call epistemic trust – trust in new information coming to us from what should be a trustworthy and relevant source.

I’ll look at this breakdown on two fronts – trust in government and trust in science. These two things should work together, but all too often they don’t. That was especially true in the Trump administration’s handling of the COVID-19 pandemic.

Let’s start with trust in government. Based on a recent study across 22 countries by the OECD, on average only about half the citizens trust their government. Trust is highest in countries like Finland, Norway and Luxembourg (where only 20 to 30% of the citizens don’t trust their government) and lowest in countries such as Colombia, Latvia and Austria (where over 60% of citizens have no trust in their government).

You might notice I didn’t mention the U.S. That’s because it wasn’t included in the study. But the Pew Research Center has been tracking trust in government since 1958, so let’s look at that.

The erosion of trust in the U.S. federal government started with Lyndon Johnson, then plummeted with Nixon and Watergate. Interestingly, although separated by ideology, Republicans and Democrats track similarly on erosion of trust from Nixon through George W. Bush, with the exception of the Reagan years. That parallel started to break down with Obama and polarized even further under Trump and Biden. The partisan lines now move in opposite directions, but the overall trend is still toward lower trust.

Now, let’s look at trust in science. While not as drastic as the decline of trust in government, Pew found that trust in science has also declined, especially in the last few years. Since 2020, the percentage of Americans who have no trust in science has almost doubled, from 12% in April 2020 to 22% in December 2021.

It’s not that the science got worse in those 20 months. It’s that we didn’t want to hear what the science was telling us. The thing about epistemic trust – our willingness to trust trustworthy information – is that it varies depending on what mood we’re in. The higher our stress level, the less likely we are to accept good information at face value, especially if what it’s trying to tell us will only increase our level of stress.

Inputting new information that disrupts our system of beliefs is hard work under any circumstances. It taxes the brain. And if our brain is already overtaxed, it protects itself by locking the doors and windows that new information may sneak through and doubling down on our existing beliefs. This is what psychologists call Confirmation Bias. We only accept new information if it matches what we already believe. This is doubly true if the new information is not something we really want to hear.

The only thing that may cause us to question our beliefs is a niggling doubt, caused by information that doesn’t fit with our beliefs. But we will go out of our way to find information that does conform to our beliefs so we can ignore the information that doesn’t fit, no matter how trustworthy its source.  The explosion of misinformation that has happened on the internet and through social media has made it easier than ever to stick with our beliefs and willfully ignore information that threatens those beliefs.

The other issue in the systemic breakdown of trust may not always be the message – it might be the messenger. If science is trying to warn us about a threatening black swan, that warning is generally going to be delivered in one of two ways: either through a government official or through the media. And that’s probably where we have our biggest problem. Again, referring to research done by Pew, Americans distrust journalists almost as much as government. Sixty percent of American adults had little to no trust in journalists, and a whopping 76% had little to no trust in elected officials.

To go back to my opening line, the good news is science can warn us about Black Swan events and save lives. The bad news is, we have to pay attention to those warnings.

Otherwise, it’s just a boy calling “wolf.”

My Many Problems with the Metaverse

I recently had dinner with a comedian who had just done his first gig in the Metaverse. It was in a new Meta-Comedy Club. He was excited and showed me a recording of the gig.

I have to admit, my inner geek thought it was very cool: disembodied hands clapping with avataresque names floating above, bursts of virtual confetti for the biggest laughs and even a virtual hook that instantly snagged meta-hecklers, banishing them to meta-purgatory until they promised to behave. The comedian said he wanted to record a comedy meta-album in the meta-club to release to his meta-followers.

It was all very meta.

As mentioned, my inner geek is intrigued by the Metaverse. But as a human who ponders our future (probably more than is healthy), I have grave concerns on a number of fronts. I have mentioned most of these individually in previous posts, but I thought it might be useful to round them up:

Removed from Reality

My first issue is that the Metaverse just isn’t real. It’s a manufactured reality. This is at the heart of all the other issues to come.

We might think we’re clever, and that we can manufacture a better world than the one that nature has given us, but my response to that would be Orgel’s Second Rule, courtesy of Francis Crick, co-discoverer of the structure of DNA: “Evolution is cleverer than you are.”

For millions of years, we have evolved to be a good fit in our natural environment. There are thousands of generations of trial and error baked into our DNA that make us effective in our reality. Most of that natural adaptation lies hidden from us, ticking away below the surface of both our bodies and brains, silently correcting course to keep us aligned and functioning well in our world.

But we, in our never-ending human hubris, somehow believe we can engineer an environment better than reality in less than a single generation. If we take Second Life as the first iteration of the metaverse, we’re barely two decades into the engineering of a meta-reality.

If I were placing bets on who is the better environmental designer for us – humans or evolution – my money would be on evolution, every time.

Whose Law Is It Anyway?

One of the biggest selling features of the Metaverse is that it frees us from the restrictions of geography. Physical distance has no meaning when we go meta.

But this also has issues. Societies need laws and our laws have evolved to be grounded within the boundaries of geographical jurisdictions. What happens when those geographical jurisdictions become meaningless? Right now, there are no laws specifically regulating the Metaverse. And even if there are laws in the future, in what jurisdiction would they be enforced?

This is a troubling loophole – and by hole I mean a massive gaping metaverse-sized void. You know who is attracted by a lack of laws? Those who have no regard for the law. If you don’t think that criminals are currently eyeing the metaverse looking for opportunity, I have a beautiful virtual time-share condo in the heart of meta-Boca Raton that I’d love to sell you.

Data Is the Matter of the Metaverse

Another “selling feature” for the metaverse is the ability to append metadata to our own experiences, enriching them with access to information and opportunities that would be impossible in the real world. In the metaverse, the world is at our fingertips – or in our virtual headset – as the case may be. We can stroll through worlds, real or imagined, and the sum of all our accumulated knowledge is just one user-prompt away.

But here’s the thing about this admittedly intriguing notion: it makes data a commodity and commodities are built to be exchanged based on market value. In order to get something of value, you have to exchange something of value. And for the builders of the metaverse, that value lies in your personal data. The last shreds of personal privacy protection will be gone, forever!

A For-Profit Reality

This brings us to my biggest problem with the Metaverse – the motivation for building it. It is being built not by philanthropists or philosophers, academics or even bureaucrats. The metaverse is being built by corporations, who have to hit quarterly profit projections. They are building it to make a buck, or, more correctly, several billion bucks.

These are the same people who have made social media addictive by taking the dirtiest secrets of Las Vegas casinos and using them to enslave us through our smartphones. They have toppled legitimate governments for the sake of advertising revenue. They have destroyed our concept of truth, bashed apart the soft guardrails of society and are currently dismantling democracy. There is no noble purpose for a corporation – their only purpose is profit.

Do you really want to put your future reality in those hands?

50 Shades of Greying

Here is what I know: Lisa LaFlamme – the main anchor of CTV News, one of Canada’s national nightly newscasts – was fired.

What I don’t know is why. There are multiple versions of why floating around. The one that seems to have served as a rallying point for those looking to support Ms. LaFlamme is that she was fired because she was getting old. During COVID she decided to let her hair go to its natural grey. That, according to the popular version, prompted network brass to pull the pin on her contract.

I suspect the real reason was not quite that cut and dried. The owners of the network, Bell Media, have been relentlessly trimming payrolls at their various news organizations over the past several years. I know of one such story through a personal connection. The way that scenario played out sounded very similar to what happened to Lisa LaFlamme – minus the accusations of ageism and gender double standards. In this case, it was largely a matter of dollars and cents. TV news is struggling financially. Long-time on-air talent have negotiated salaries over their careers that are no longer sustainable. Something had to give.

These are probably just the casualties of a dying industry. A hundred years ago it would have been blacksmiths and gas lamplighters being let go by the thousands. The difference is that the average blacksmith or lamplighter didn’t have a following of millions of people. They also didn’t have social media. And they certainly didn’t have corporate PR departments desperately searching for the latest social media “woke” bandwagon to vault upon.

What is interesting is how these things play out through various media channels. In Ms. LaFlamme’s case, it was a perfect storm that lambasted Bell Media (which owns the CTV Network). As the ageism rumours began to emerge, anti-ageism social media campaigns were run by Dove, Wendy’s and even Sports Illustrated. LaFlamme wasn’t mentioned by name in most of these, but the connection was clear. Going grey was something to be celebrated, not a cause for contract cancellation. Grey-flecked gravitas should be gender neutral. “Who the f*&k were these Millennial corporate pin-heads that couldn’t stand a little grey on the nightly news!”

It makes excellent fodder for the meme factory, but I suspect the reality wasn’t quite that simple. Ms. LaFlamme has never publicly revealed what she believes the actual reason for her dismissal was. She never mentioned ageism. She simply said she was “blindsided” by the news. The reasoning behind the parting of ways with Bell Media has largely been left to conjecture.

A few other things to note. LaFlamme received the news on June 29th but didn’t share it until six weeks later (August 15th), in a video posted to her own social media feed. Bell Media offered her the opportunity to have an on-air send-off, but she declined. Finally, she also declined several offers from Bell to continue with the network in other roles. She chose instead to deliver her parting shot in the war zone of social media.

To be fair to both sides, if we’re to catalog all the various rumors floating about, there are also those saying the decision was brought on – in part – by an allegedly toxic work environment in the news department that started at the top, with LaFlamme.

Now, if the reason for the termination actually was ageism, that’s abhorrent. Ms. LaFlamme is actually a few years younger than I am. I would hate to think that people of our age, who should be still at the height of their careers, would be discriminated against simply because of age.

The same is true if the reason was sexism. There should be no distinction between the appropriate age of a male or female national anchor.

But if it’s more complex, which I’m pretty sure it is, this shows how our world doesn’t really deal with complexity very well anymore. The consideration required to understand such situations doesn’t fit well within the attention constraints of social media. It’s a lot easier just to sub in a socially charged hot-button meme and wait for the inevitable opinion camps to form. Sure, they’ll be one-dimensional and about as thoughtful as a sledgehammer, but those types of posts are a much better bet to go viral.

Whatever happened in the CTV National Newsroom, it shows that business decisions in the media business will have to follow a very different playbook from this point forward. Bell Media fumbled the ball badly on this one and has been scrambling ever since to save face. It appears that Lisa LaFlamme – and her ragtag band of social media supporters – outplayed them at every turn.

By the way, LaFlamme just nabbed a temporary gig as a “special correspondent” for CityTV, a Bell Media competitor, covering the funeral of Queen Elizabeth II and the proclamation of King Charles III. She’s being consummately professional and comforting, garnering a ton of social media support as she eases Canada through the grieving process (our emotional tie to the Crown is another very complex relationship that would require several posts to unpack).

Well played, Lisa LaFlamme – well played.

Dealing with Daily Doom

“We are Doomed”

The tweet came yesterday from a celebrity I follow. And you know what? I didn’t even bother to look to find out in which particular way we were doomed. That’s probably because my social media feeds are filled with daily predictions of doom. The end being nigh has ceased to be news. It’s become routine. That is sad. But more than that, it’s dangerous.

This is why Joe Mandese and I have agreed to disagree about the role media can play in messaging around climate change, or – for that matter – any of the existential threats now facing us. Alarmist messaging could be the problem, not the solution.

Mandese ended his post with this:

“What the ad industry really needs to do is organize a massive global campaign to change the way people think, feel and behave about the climate — moving from a not-so-alarmist “change” to an “our house is on fire” crisis.”

Joe Mandese – Mediapost

But here’s the thing. Cranking up the crisis intensity on our messaging might have the opposite effect. It may paralyze us.

Something called “doom scrolling” is now very much a thing. And if you’re looking for doomsday scenarios, the best place to start is the r/collapse subreddit.

In a 30-second glimpse during the writing of this column, I discovered that democracy is dying, America is on the brink of civil war, Russia is turning off the tap on European oil supplies, we are being greenwashed into complacency, the Amazon Rainforest may never recover from its current environmental destruction and the “Doomsday” glacier is melting faster than expected. That was all above the fold. I didn’t even have to scroll for this buffet of all-you-can-eat disaster. These were just the appetizers.

There is a reason why social media feeds are full of doom. We are hardwired to pay close attention to threats. This makes apocalyptic prophesying very profitable for social media platforms. As British academic Julia Bell said in her 2020 book, Radical Attention,

“Behind the screen are impassive algorithms designed to ensure that the most outrageous information gets to our attention first. Because when we are enraged, we are engaged, and the longer we are engaged the more money the platform can make from us.”

Julia Bell – Radical Attention

But just what does a daily diet of doom do for our mental health? Does constantly making us aware of the impending end of our species goad us into action? Does it actually accomplish anything?

Not so much. In fact, it can do the opposite.

Mental health professionals are now treating a host of new climate change related conditions, including eco-grief, eco-anxiety and eco-depression. But, perhaps most alarmingly, they are now encountering something called eco-paralysis.

In an October 2020 Time.com piece on doom scrolling, psychologist Patrick Kennedy-Williams, who specializes in treating climate-related anxieties, was quoted: “There’s something inherently disenfranchising about someone’s ability to act on something if they’re exposed to it via social media, because it’s inherently global. There are not necessarily ways that they can interact with the issue.”

So, cranking up the intensity of the messaging on existential threats such as climate change may have the opposite effect, scaring us into doing nothing. This is because of something called the Yerkes-Dodson law.

(Image: the Yerkes-Dodson curve, by Yerkes and Dodson 1908, adapted in Diamond DM, et al. (2007), “The Temporal Dynamics Model of Emotional Memory Processing,” Neural Plasticity, doi:10.1155/2007/60803, CC0, via Wikimedia Commons.)

This “law”, described by psychologists Robert Yerkes and John Dodson in 1908, isn’t so much a law as a psychological model. It’s a typical bell curve. On the front end, our performance in responding to a situation increases along with our attention and interest in that situation. But the line does not go straight up. At some point it peaks, and then it goes downhill. Interest gives way to anxiety. The more anxious we become, the more our performance is impaired.

When we fret about the future, we are actually grieving the loss of our present. In this process, we must make our way through the 5 stages of grief introduced by psychiatrist Elisabeth Kübler-Ross in 1969 through her work with terminally ill patients. The stages are: Denial, Anger, Bargaining, Depression and Acceptance.

One would think that triggering awareness would help accelerate us through the stages. But there are a few key differences. In dealing with a diagnosis of terminal illness, there is typically one hammer-blow event when you become aware of the situation. From there, dealing with it begins. And – even when it begins – it’s not a linear journey. As anyone who has ever grieved will tell you, what stage you’re in depends on which day you’re asked. You can slip from Acceptance to Anger in a heartbeat.

With climate change, awareness doesn’t come just once. The messaging never ends. It’s a constant state of crisis, trapping us in a loop that cycles between denial, depression and despair.

An excellent post on Climateandmind.org on climate grief talks about this cycle and how we get trapped within it. Some of us get stuck in a stage and never move on. Even climate scientist and activist Susanne Moser admits to being trapped in something she calls Functional Denial,

“It’s that simultaneity of being fully aware and conscious and not denying the gravity of what we’re creating (with Climate Change), and also having to get up in the morning and provide for my family and fulfill my obligations in my work.”

Susanne Moser

It’s exactly this sense of frustration I voiced in my previous post. But the answer is not making me more aware. Like Moser, I’m fully aware of the gravity of the various threats we’re facing. It’s not attention I lack, it’s agency.

I think the time to hope for a more intense form of messaging to prod the deniers into acceptance is long past. If they haven’t changed their minds yet, they ain’t goin’ to!

I also believe the messaging we need won’t come through social media. There’s just too much froth and too much profit in that froth.

What we need – from media platforms we trust – is a frank appraisal of the worst-case scenario of our future. We need to accept that and move on to deal with what is to come. We need to encourage resilience and adaptability. We need hope that while what is to come is most certainly going to be catastrophic, it doesn’t have to be apocalyptic.

We need to know we can survive and start thinking about what that survival might look like.

Does Social Media “Dumb Down” the Wisdom of Crowds?

We assume that democracy is the gold standard of sustainable political social contracts. And it’s hard to argue against that. As Winston Churchill said, “democracy is the worst form of government – except for all the others that have been tried.”

Democracy may not be perfect, but it works. Or, at least, it seems to work better than all the other options. Essentially, democracy depends on probability – on being right more often than we’re wrong.

At the very heart of democracy is the principle of majority rule. And that is based on something called the jury theorem, put forward by the Marquis de Condorcet in his 1785 work, Essay on the Application of Analysis to the Probability of Majority Decisions. Essentially, it says that the probability of making the right decision increases when you pool the decisions of as many people as possible. This was the basis of James Surowiecki’s 2004 book, The Wisdom of Crowds.
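Condorcet’s claim is easy to check with a quick simulation (a hypothetical sketch, not part of the original essay, with made-up parameters): give each voter an independent 60% chance of being right, and majority accuracy climbs steadily with group size.

```python
import random

def majority_correct_rate(n_voters, p_correct, trials=5_000, seed=42):
    # Estimate how often a simple majority of independent voters,
    # each correct with probability p_correct, picks the right answer.
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        correct_votes = sum(rng.random() < p_correct for _ in range(n_voters))
        if correct_votes > n_voters / 2:
            wins += 1
    return wins / trials

# The group's accuracy rises with its size, just as Condorcet predicted.
for n in (1, 11, 101, 1001):
    print(n, round(majority_correct_rate(n, 0.6), 3))
```

With a 60% chance per voter, a lone voter is right about 60% of the time, while a crowd of 1,001 is right almost always – provided every vote is cast independently.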

But here’s the thing about the wisdom of crowds – it only applies when those individual decisions are reached independently. Once we start influencing each other’s decisions, that wisdom disappears. And that makes social psychologist Solomon Asch’s famous conformity experiments of 1951 a disturbingly significant fly in the ointment of democracy.

You’re probably all aware of the seminal study, but I’ll recap anyway. Asch gathered groups of people and showed them a reference line alongside a card with three comparison lines of obviously different lengths. Then he asked participants which of the three lines was closest in length to the reference line. The answer was obvious – even a toddler can get this test right pretty much every time.

But unknown to the test subject, all the rest of the participants were “stooges” – actors paid to sometimes give an obviously incorrect answer. And when this happened, Asch was amazed to find that the test subjects often went against the evidence of their own eyes just to conform with the group. When wrong answers were given, subjects conformed about a third of the time, 75% of the subjects conformed at least once, and only 25% consistently stuck to the evidence in front of them and gave the right answer.

The results baffled Asch. The most interesting question to him was why this was happening. Were people making a decision to go against their better judgment – choosing to go with the crowd rather than what they were seeing with their own eyes? Or was something happening below the level of consciousness? This was something Solomon Asch wondered about right until his death in 1996. Unfortunately, he never had the means to explore the question further.

But, in 2005, a group of researchers at Emory University, led by Gregory Berns, did have a way. Asch’s experiment was restaged, only this time participants were in an fMRI machine so Berns and his researchers could peek at what was actually happening in their brains. The results were staggering.

They found that conformity actually changes the way our brain works. It’s not that we change what we say to conform with what others are saying, despite what we see with our own eyes. What we see is changed by what others are saying.

If, Berns and his researchers reasoned, you were consciously making a decision to go against the evidence of your own eyes just to conform with the group, you should see activity in the frontal areas of the brain that are engaged in monitoring conflicts, planning and other higher-order mental activities.

But that isn’t what they found. In those participants that went along with obviously incorrect answers from the group, the parts of the brain that showed activity were only in the posterior parts of the brain – those that control spatial awareness and visual perception. There was no indication of an internal mental conflict. The brain was actually changing how it processed the information it was receiving from the eyes.

This is stunning. It means that conformity isn’t a conscious decision. Our desire to conform is wired so deeply in our brains, it actually changes how we perceive the world. We never have the chance to be objectively right, because we never realize we’re wrong.

But what about those who resisted conformity and stuck to the evidence they were seeing with their own eyes? Here again, the results were fascinating. The researchers found that in these cases, they saw a spike of activity in the right amygdala and right caudate nucleus – areas involved in the processing of strong emotions, including fear, anger and anxiety. Those who stuck to the evidence of their own eyes had to overcome emotional hurdles to do so. In the published paper, the authors called this the “pain of independence.”

This study highlights a massively important limitation in the social contract of democracy. As technology increasingly imposes social conformity on our culture, we lose the ability to collectively make the right decision. Essentially, it shows that this effect not only erases the wisdom of crowds, but actively works against it by exacting an emotional price for being an independent thinker.

Memories Made by Media

If you said the year 1967 to me, the memory that would pop into my head would be of Haight-Ashbury (ground zero for the counterculture movement), hippies and the summer of love. In fact, that same memory would effectively stand in for the period 1967 to 1969. In my mind, those three years were variations on the theme of Woodstock, the iconic music festival of 1969.

But none of those are my memories. I was alive, but my own memories of that time are indistinct and fuzzy. I was only 6 that year and lived in Alberta, some 1,300 miles from the intersection of Haight and Ashbury streets, so I have discarded my own memories as representative of the period. The ones I have were all created by images that came via media.

The Swapping of Memories

This is an example of the two types of memories we have – personal or “lived” memories and collective memories. Collective memories are the memories we get from outside, either from other people or, in my example, from media. As we age, there tends to be a flow back and forth between these two types of memories, with one type coloring the other.

One group of academics proposed an hourglass model as a working metaphor to understand this continuous exchange of memories – with some flowing one way and others flowing the other.  Often, we’re not even aware of which type of memory we’re recalling, personal or collective. Our memories are notoriously bad at reflecting reality.

What is true, however, is that our personal memories and our collective memories tend to get all mixed up. The lower our confidence in our personal memories, the more we tend to rely on collective memories. For periods before we were born, we rely solely on images we borrow.

Iconic Memories

What is true for all memories, ours or the ones we borrow from others, is that we put them through a process called “leveling and sharpening.” This is a type of memory consolidation where we throw out details that aren’t important to us (leveling) and exaggerate others to make the memory more interesting (sharpening).

Take my borrowed memories of 1967, for example. There was a lot more happening in the world than whatever was happening in San Francisco during the Summer of Love, but I haven’t retained any of it in my representative memory of that year. For example, there was a military coup in Greece, the first successful human heart transplant, the creation of the Corporation for Public Broadcasting, a series of deadly tornadoes in Chicago and Typhoon Emma left 140,000 people homeless in the Philippines. But none of that made it into my memory of 1967.

We could call the memories we do keep “iconic” – which simply means we choose symbols to represent a much bigger and more complex reality – like everything that happened in a 365-day stretch five and a half decades ago.

Mass Manufactured Memories

Something else happens when we swap our own personal memories for collective memories – we find much more commonality in our memories. The more removed we become from our own lived experiences, the more our memories become common property.

If I asked you to say the first thing that comes to mind about 2002, you would probably look back through your own personal memory store to see if there was anything there. Chances are it would be a significant event from your own life, and this would make it unique to you. If we had a group of 50 people in a room and I asked that question, I would probably end up with 50 different answers.

But if I asked that same group the first thing that comes to mind when I say the year 1967, we would find much more common ground. And that ground would probably be defined by how each of us identifies ourselves. Some of you might have the same iconic memory that I do – that of Haight-Ashbury and the Summer of Love. Others may have picked the Vietnam War as the iconic memory from that year. But I would venture to guess that in our group of 50, we would end up with only a handful of answers.

When Memories are Made of Media

I am taking this walk down Memory Lane because I want to highlight how much we rely on the media to supply our collective memories. This dependency is critical, because once media images are processed by us and become part of our collective memories, they hold tremendous sway over our beliefs. These memories become the foundation for how we make sense of the world.

This is true for all media, including social media. A study in 2018 (Birkner & Donk) found that “alternative realities” can be formed through social media to run counter to collective memories formed from mainstream media. Often, these collective memories formed through social media are polarized by nature and are adopted by outlier fringes to justify extreme beliefs and viewpoints. This shows that collective memories are not frozen in time but are malleable – continually being rewritten by different media platforms.

Like most things mediated by technology, collective memories are splintering into smaller and smaller groupings, just like the media that are instrumental in their formation.

Sensationalizing Scam Culture

We seem to be fascinated by bad behavior. Our popular culture is all agog with grifters and assholes. As TV Blog’s Adam Buckman wrote in March: “Two brand-new limited series premiering this week appear to be part of a growing trend in which some of recent history’s most notorious innovators and disruptors are getting the scripted-TV treatment.”

The two series Buckman was talking about were “Super Pumped: The Battle for Uber,” about Uber CEO Travis Kalanick, and “The Dropout,” about Theranos founder Elizabeth Holmes.

But those are just two examples from a bumper crop of shows about bad behavior. My streaming services are stuffed with stories of scammers. In addition to the two series Buckman mentioned, I just finished Shonda Rhimes’ Netflix series “Inventing Anna,” about Anna Sorokin, who posed as an heiress named Anna Delvey.

All these treatments tread a tight wire of moral judgement, where the examples are presented as antisocial, but in a wink-and-a-nod kind of way, where we not-so-secretly admire these behaviors. Much as the actions are harmful to the well-being of the collective “we,” they do appeal to the selfishness and ambition of “me.”

Most of the examples given are rags-to-riches-to-retribution stories (Holmes was an exception, with her upper-middle-class background). The sky-high ambitions of Kalanick, Holmes and Sorokin were all eventually brought back down to earth. Sorokin and Holmes both ended up in prison, and Kalanick was ousted from the company he founded.

But with the subtlest of twists, they didn’t have to end this way. They could have been the story of almost any corporate America hustler who triumphed. With a little more substance and a little less scam, you could swap Elizabeth Holmes for Steve Jobs. They even dressed the same.

Obviously, scamming seems to sell. These people fascinate us. Part of the appeal is no doubt due to a class-conflict narrative: the scrappy hustler climbing the social ranks by whatever means possible. We love to watch “one of us” pull the wool over the eyes of the social elite.

In the case of Anna Sorokin, Laura Craik dissects our fascination in a piece published in the UK’s Evening Standard:

“The reason people are so obsessed with Sorokin is simple: she had the balls to pull off on a grand scale what so many people try and fail to pull off on a small one. To use a phrase popular on social media, Sorokin succeeded in living her best life — right down to the clothes she wore in court, chosen by a stylist. Like Jay Gatsby, she was a deeply flawed embodiment of The American Dream: a person from humble beginnings who rose to achieve wealth and social status. Only her wealth was borrowed and her social status was conferred via a chimera of untruths.”

Laura Craik – UK Evening Standard

This type of behavior is nothing new. It’s always been a part of us. In 1513, a Florentine bureaucrat named Niccolo Machiavelli gave it a name — actually, his name. In writing “The Prince,” he condoned bad behavior as long as the end goal was to elevate oneself. In a Machiavellian world, it’s always open season on suckers: “One who deceives will always find those who allow themselves to be deceived.”

For the past five centuries, Machiavellianism has been synonymous with evil. It was a recognized character flaw, described as “a personality trait that denotes cunningness, the ability to be manipulative, and a drive to use whatever means necessary to gain power. Machiavellianism is one of the traits that forms the Dark Triad, along with narcissism and psychopathy.”

Now, however, that stigma seems to be disappearing. In a culture obsessed with success, Machiavellianism becomes a justifiable means to an end, so much so that we’ve given this culture its own hashtag: #scamculture: “A scam culture is one in which scamming has not only lost its stigma but is also valorized. We rebrand scamming as ‘hustle,’ or the willingness to commodify all social ties, and this is because the ‘legitimate’ economy and the political system simply do not work for millions of Americans.”

It’s a culture that’s very much at home in Silicon Valley. The tech world is steeped in Machiavellianism. Its tenets are accepted — even encouraged — business practices in the Valley. “Fake it til you make it” is tech’s modus operandi. The example of Niccolo Machiavelli has gone from being a cautionary tale to a how-to manual.

But these predatory practices come at a price. Doing business this way destroys trust. And trust is still, by far, the best strategy for our mutual benefit. In game theory, there’s a strategy called “tit for tat,” which according to Wikipedia “posits that a person is more successful if they cooperate with another person. Implementing a tit-for-tat strategy occurs when one agent cooperates with another agent in the very first interaction and then mimics their subsequent moves. This strategy is based on the concepts of retaliation and altruism.”

In countless game theory simulations, tit for tat has proven to be the most successful long-term strategy. It assumes a default position of trust, moving to retaliation only if required.
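For readers who want to see the mechanics, here is a minimal sketch of an iterated prisoner’s dilemma – my own illustration, using the standard payoff values from Axelrod’s famous tournaments – showing why tit for tat does so well:

```python
# Iterated prisoner's dilemma sketch. "C" = cooperate, "D" = defect.
# Payoffs are the standard Axelrod-tournament values.
PAYOFF = {  # (my_move, their_move) -> my score
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(my_history, their_history):
    """Cooperate on the first move, then mirror the opponent's last move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    """A pure predator: defect every round."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game, returning each player's total score."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Two trusting tit-for-tat players sustain cooperation...
print(play(tit_for_tat, tit_for_tat))    # (30, 30) over 10 rounds
# ...while a pure defector wins only the first round, then faces retaliation.
print(play(tit_for_tat, always_defect))  # (9, 14)
```

Note how tit for tat caps its losses: it is exploited exactly once, then mirrors the defection, so the predator’s edge never grows beyond that first sucker’s payoff.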

Our society needs trust to function properly. In a New York Times op-ed entitled “Why We Need to Address Scam Culture,” Tressie McMillan Cottom writes,  

“Scams weaken our trust in social institutions, but their going mainstream — divorced from empathy for the victims or stigma for the perpetrators — means that we have accepted scams as institutions themselves.”

Tressie McMillan Cottom – NY Times

The reason that trust is more effective than scamming is that predatory practices are self-limiting. You can only be a predator if you have enough prey. In a purely Machiavellian world, trust disappears — and there are no easy marks to prey upon.

Making Time for Quadrant Two

Several years ago, I read Stephen Covey’s “The 7 Habits of Highly Effective People.” It had a lasting impact on me. Through my life, I have found myself relearning those lessons over and over again.

One of them was the four quadrants of time management. How we spend our time in these quadrants determines how effective we are.

Imagine a box split into four quarters. In the upper left quarter, we’ll put a label: “Important and Urgent.” Next to it, in the upper right, we’ll put a label saying “Important But Not Urgent.” The label for the lower left is “Urgent But Not Important.” And the last quadrant — in the lower right — is labeled “Not Important nor Urgent.”

The upper left quadrant — “Important and Urgent” — is our firefighting quadrant. It’s the stuff that is critical and can’t be put off, the emergencies in our life.

We’ll skip over quadrant two — “Important But Not Urgent” — for a moment and come back to it.

In quadrant three — “Urgent But Not Important” — are the interruptions that other people bring to us. These are the times we should say, “That sounds like a you problem, not a me problem.”

Quadrant four is where we unwind and relax, occupying our minds with nothing at all in order to give our brains and body a chance to recharge. Bingeing Netflix, scrolling through Facebook or playing a game on our phones all fall into this quadrant.

And finally, let’s go back to quadrant two: “Important But Not Urgent.” This is the key quadrant. It’s here where long-term planning and strategy live. This is where we can see the big picture.

The secret of effective time management is finding ways to shift time spent from all the other quadrants into quadrant two. It’s managing and delegating emergencies from quadrant one, so we spend less time fire-fighting. It’s prioritizing our time above the emergencies of others, so we minimize interruptions in quadrant three. And it’s keeping just enough time in quadrant four to minimize stress and keep from being overwhelmed.
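If it helps to see the framework stripped to its bones, the quadrant logic reduces to two boolean flags. This is my own toy illustration (the task names are invented, not from Covey’s book):

```python
# Covey's four quadrants as a function of two flags: importance and urgency.
def quadrant(important, urgent):
    """Return the Covey quadrant number (1-4) for a task."""
    if important and urgent:
        return 1  # firefighting: crises and emergencies
    if important:
        return 2  # planning, strategy, the big picture
    if urgent:
        return 3  # other people's interruptions
    return 4      # downtime: Netflix, scrolling, games

# Hypothetical tasks: (name, important, urgent)
tasks = [
    ("server outage",                   True,  True),
    ("long-term privacy strategy",      True,  False),
    ("colleague's last-minute request", False, True),
    ("doomscrolling",                   False, False),
]

for name, important, urgent in tasks:
    print(f"Q{quadrant(important, urgent)}: {name}")
```

The point of the exercise is that only one branch — quadrant two — is where deliberate, long-horizon work happens; everything else is reaction or recovery.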

The lesson of the four quadrants came back to me when I was listening to an interview with Dr. Sandro Galea, epidemiologist and author of “The Contagion Next Time.” Dr. Galea was talking about how our health care system responded to the COVID pandemic. The entire system was suddenly forced into quadrant one. It was in crisis mode, trying desperately to keep from crashing. Galea reminded us that we were forced into this mode, despite there being hundreds of lengthy reports from previous pandemics — notably the SARS crisis — containing thousands of suggestions that could have helped to partially mitigate the impact of COVID.

Few of those suggestions were ever implemented. Our health care system, Galea noted, tends to continually lurch back and forth within quadrant one, veering from crisis to crisis. When a crisis is over, rather than go to quadrant two and make the changes necessary to avoid similar catastrophes in the future, we put the inevitable reports on a shelf where they’re ignored until it is — once again — too late.

For me, that paralleled a theme I have talked about often in the past — how we tend to avoid grappling with complexity. Quadrant two stuff is, inevitably, complex in nature. The quadrant is jammed with what we call wicked problems. In a previous column, I described these as, “complex, dynamic problems that defy black-and-white solutions. These are questions that can’t be answered by yes or no — the answer always seems to be maybe.  There is no linear path to solve them. You just keep going in loops, hopefully getting closer to an answer but never quite arriving at one. Usually, the optimal solution to a wicked problem is ‘good enough — for now.’”

That’s quadrant two in a nutshell. Quadrant-one problems must be triaged into a sort of false clarity. You have to deal with the critical stuff first. The nuances and complexity are, by necessity, ignored. That all gets pushed to quadrant two, where we say we will deal with it “someday.”

Of course, someday never comes. We either stay in quadrant one, are hijacked into quadrant three, or collapse through sheer burnout into quadrant four. The stuff that waits for us in quadrant two is just too daunting to even consider tackling.

This has direct implications for technology and every aspect of the online world. Our industry, because of its hyper-compressed timelines and the huge dollars at stake, seems firmly lodged in the urgency of quadrant one. Everything on our to-do list tends to be a fire we have to put out. And that’s true even if we only consider the things we intentionally plan for. When we factor in the unplanned emergencies, quadrant one is a time-sucking vortex that leaves nothing for any of the other quadrants.

But there is a seemingly infinite number of quadrant two things we should be thinking about. Take social media and privacy, for example. When an online platform has a massive data breach, that is a classic quadrant one catastrophe. It’s all hands on deck to deal with the crisis. But all the complex questions around what our privacy might look like in a data-inundated world fall into quadrant two. As such, they are things we don’t think much about. They’re important, but not urgent.

Quadrant two thinking is systemic thinking, long-term and far-reaching. It allows us to build the foundations that help to mitigate crises and minimize unintended consequences.

In a world that seems to rush from fire to fire, it is this type of thinking that could save our asses.

The News Cycle, Our Attention Span and that Oscar Slap

If your social media feed is like mine, it was burning up this Monday with the slap heard around the world. Was Will Smith displaying toxic masculinity? Was “it was a joke” a sufficient defence for Chris Rock’s staggering inability to read the room? Was Smith’s acceptance speech legendary or just really, really lame?

More than a few people just sighed and chalked it up as another scandal for the beleaguered awards show. As one friend posted on Facebook: “People smiling and applauding as if an assault never happened is probably Hollywood in a nutshell.”

Whatever your opinion, the world was fascinated by what happened. The slap trended number one on Twitter through Sunday night and Monday morning. On CNN, the top trending stories on Monday morning were all about the “slap.” You would have thought that there was nothing happening in the world that was more important than one person slapping another. Not the world teetering on the edge of a potential world war. Not a global economy that can’t seem to get itself in gear. Not a worldwide pandemic that just won’t go away and has just pushed Shanghai – a city of 30 million – back into a total lockdown.

And the spectre of an onrushing climatic disaster? Nary a peep in Monday’s news cycle.

We commonly acknowledge – when we do take the time to stop and think about it – that our news cycles have about the same attention span as a 4-year-old on Christmas morning. No matter what we have in our hands, there’s always something brighter and shinier waiting for us under the tree. We typically attribute this to the declining state of journalism. But we – the consumers of news – are the ones who continually ignore the stories that matter in favour of gossipy tidbits.

This is just the latest example of that. It is nothing more than human nature. But there is a troubling trend here that is being accelerated by the impact of social media. This is definitely something we should pay attention to.

The Confounding Nature of Complexity

Just last week, I talked about something psychologists call a locus of control. Essentially it is defined by the amount of control you feel you have over your life. In times of stress, unpredictability or upheaval, our own perceived span of control tends to narrow to the things we have confidence we can manage. Our ability to cope draws inward, essentially circling the wagons around the last vestiges of our capability to direct our own circumstances. 

I believe the same is true of our ability to focus attention. The more complex the world gets, the more we tend to focus on things we can easily wrap our minds around. It has been shown repeatedly that anxiety impairs the brain’s ability to focus on tasks: a study from Finland’s Åbo Akademi University found that anxiety eats away at our working memory, leaving us with a reduced capacity to integrate concepts and work things out. Complex, unpredictable situations naturally raise our level of anxiety, leading us to retreat to things we don’t have to work too hard to understand.

The irony here is that the more aware we are of complex and threatening news stories, the more we go right past them to things like the Smith-Rock story. It’s like catnip to a brain that’s trying to retreat from the real news because we can’t cope with it.

This isn’t necessarily the fault of journalism, it’s more a limitation of our own brains. On Monday morning, CNN offered plenty of coverage dealing with the new airstrikes in Ukraine, Biden’s inflammatory remarks about Putin, Trump’s attempts to block Congress from counting votes and the restriction of LGBTQ awareness in the classrooms of Florida. But none of those stories were trending. What was trending were three stories about Rock and Smith, one about the Oscar winners and another about a 1600-pound shark. That’s what we were collectively reading.

False Familiarity

It’s not just the complexity of the news that made the Rock/Smith story so compelling. Our built-in social instincts also made it irresistible.

Evolution has equipped us with highly attuned social antennae. Humans are herders, and when you travel in a herd, your ability to survive is highly dependent on picking up signals from your fellow herders. We have highly evolved instincts to help us determine who we can trust and who we should protect ourselves from. We are quick to judge others, and even quicker to gossip about behavior that steps over those invisible boundaries we call social norms.

For generations, these instincts were essential when we had to keep tabs on the people closest to us. But with the rise of celebrity culture in the last century, we now apply those same instincts to people we only think we know. We pass judgement on the faces we see on TV and in social media. We have a voracious appetite for gossip about the super-rich and the super-famous.

Those foibles may be ours and ours alone, but they’re not helped by the fact that certain celebrities – namely one Mr. Smith – feel compelled to share way too much about themselves with the public at large. Witness his long and tear-laden acceptance speech. Even though I have only a passing interest in the comings and goings of Will and Jada, I know more about their sex lives than I do about those of my closest friends. The social norm that restricts bedroom talk amongst our friends and family is not there with the celebrities we follow. We salivate over salacious details.

No Foul, No Harm?

That’s the one-two punch (sorry, I had to go there) that made the little Oscar ruckus such a hot news item. But what’s the harm? It’s just a momentary distraction from the never-ending shit-storm that defines our daily existence, right?

Not quite.

The more we continually take the path of least resistance in our pursuit of information, the harder it becomes for us to process the complex concepts that make up our reality. When that happens, we tend to attribute too much importance and meaning to these easily digestible nuggets of gossip. As we try to understand complex situations (which covers pretty much everything of importance in our world today), we start relying too much on cognitive shortcuts like availability bias and representativeness bias. In the first case, we apply whatever information we have at hand to every situation; in the second, we substitute stereotypes and easy labels for any attempt to understand the reality of an individual or group.

Ironically, it’s exactly this tendency toward cognitive laziness that was skewered in one of Sunday night’s nominated features, Adam McKay’s “Don’t Look Up.”

Of course, it was ignored. As Will Smith said, sometimes, “art imitates life.”