Strategies for Surviving the News

When I started this post, I was going to unpack some of the psychology behind the consumption of the news. I soon realized that the topic is far too big to realistically deal with within the confines of this post. So I narrowed my focus to a question that has been very top of mind for me lately: how do you stay informed without becoming a trembling psychotic mess? How do you arm yourself for informed action rather than being paralyzed into inaction by the recent fire hose of sheer WTF insanity that makes up the average news feed?

Pick Your Battles

There are few things more debilitating to humans than fretting about things we can’t do anything about. Research has found a strong correlation between depression and our locus of control – the term psychologists use for the extent to which we feel we can directly impact the events in our lives. There is actually a term for being so steeped in bad news that we start to see the world as far more dangerous than it really is. It’s called Mean World Syndrome.

If effecting change is your goal, decide what is realistically within your scope of control. Then focus your information gathering on those specific things. When it comes to informing yourself to become a better change agent, going deep rather than wide might be a better strategy.

Be Deliberate about Your Information Gathering

The second strategy goes hand in hand with the first. Make sure you’re in the right frame of mind to gather information. There are two ways the brain processes information: top-down and bottom-up. Top-down processing is cognition with purpose – you have set an intent and you’re working to achieve specific goals. Bottom-up is passively being exposed to random information and allowing your brain to be stimulated by it. The way you interpret the news will be greatly impacted by whether you’re processing it with a “top-down” intent or letting your brain parse it from the “bottom-up.”

By being more deliberate in gathering information with a specific intent in mind, you completely change how your brain will process the news. Your brain will instantly put the news in a context related to your goal rather than letting it rampage through your primordial anxiety circuits.

Understand the Difference between Signal and Noise

Based on the first two strategies, you’ve probably already guessed that I’m not a big fan of relying on social media as an information source. And you’re right. A brain doom scrolling through a social media feed is not a brain primed to objectively process the news.

Here is what I did. For the broad context, I picked two international information sources I trust to be objective: The New York Times and The Economist out of the U.K. I subscribed to both because I wanted sources that weren’t totally reliant on advertising as a revenue source (a toxic disease that is killing true journalism). For Americans, I would highly recommend picking at least one source outside the US to counteract the polarized echo chamber that typifies US journalism, especially that which is completely ad supported.

Depending on your objectives, include sources that are relevant to those objectives. If local change is your goal, make sure you are informed about your community. With those bases in place, even if you get sucked down a doom scrolling rabbit hole, at least you’ll have a better context to allow you to separate signal from noise.

Put the Screen Down

I realize that the majority of people (about 54% of US adults, according to Pew Research) will simply ignore all of the above and continue to be informed through their Facebook or X feeds. I can’t really change that.

But for the few of you out there who are concerned about the direction the world seems to be spinning and want to filter and curate your information sources to effect some real change, these strategies may be helpful.

For my part, I’m going to try to be much more deliberate in how I find and consume the news.  I’m also going to be more disciplined about simply ignoring the news when I’m not actively looking for it. Taking a walk in the woods or interacting with a real person are two things I’m going to try to do more.

Is This The Time to Say No?

I like to be agreeable. I’m not really into rocking boats or stirring things up. If there is a flow to be found, I will usually be found going with it.

But today, one day after Trump v2.0 became official, I’m wondering if I should change my tune and say “no” more often. Trump hasn’t even been president for 24 hours yet and already the world seems to be changing, and not in any way I’m comfortable with.

There has been a lot of talk about how Big Tech is embracing the wild and wacky world of misinformation in the new era of MAGA. Musk’s malevolent makeover of X has proven to be prescient rather than puerile. Mark Zuckerberg is following suit by sending Meta’s Fact Checkers packing. Jeff Bezos first blocked the Washington Post from endorsing Kamala Harris and then dialed back diversity, equity and inclusion at Amazon to be better aligned with Trumpian sensibilities.

All of these moves are driven purely by business motives. The Tech Broligarchs (the world’s most exclusive white male club) are greasing the wheels for maximum profitability over the next 4 years for their respective empires. They are tripping over each other rushing to scatter rose petals at Trump’s toes. When collectively those three are worth close to 1 trillion dollars – well – a dude has the right to protect his assets, doesn’t he?

No. I don’t think so. I’m not okay with any of this. As Big Tech primes the profitability pump by pandering to the new president, we are all going to pay a much bigger price. The erosion of social capital is going to be massive. And so, I feel the time has come to say when I don’t agree with something. And I don’t agree with this.

We all somehow believe that free markets will eventually lead us to the best moral choice. And nothing could be further from the truth. Nobel Prize-winning economist Milton Friedman was wrong when he argued that a corporation’s greatest responsibility is to its shareholders. This doctrine has guided the corporate world for half a century now, towing our western governments along in its wake. The enshrining of profits as more important than social responsibility has led us inevitably to where we are now, where the personal worth of a handful of tech billionaires is judged as more important than the sustainability and fairness of our own society.

Normally, we would rely on our governments to put in place legislation to protect us from the worst instincts of big business. But yesterday, with the second swearing in of Donald Trump as president, we saw that dynamic flipped on its head. For the next four years, the U.S. will have a sitting president that will be leading the way in the race to the bottom. Corporate America will be hard pressed to keep up.

So, if big business is not looking out for us, and our government is looking the other way, who should we turn to? The answer, sadly, is there is no one left but ourselves. If we don’t agree with something – if the world is going in a direction contrary to our own values – we have to say something. We also have to do something. We have to become a little more defiant.

That is the theme of the brand-new book “Defy” by organizational psychologist Dr. Sunita Sah. She says that we are typically hard-wired to comply rather than defy, “There are situations where you want to defy, but you go along with it. Maybe the costs are too great, the benefits too meager, or the situation is dangerous. We all have to do that at times, even our defiant heroes like Rosa Parks. How many times did she comply with the segregation laws? A lot, but there comes a moment when we decide now is the time to defy. It’s figuring out when that time is.”

For myself, that time has come.

My 1000th Post – and My 20 Year Journey

Note: This week marks the 1000th post I’ve written for MediaPost. For this blog, all of those posts are here, plus a number that I’ve written for other publications and exclusively for Out of My Gord. But the sentiments here apply to all those posts. If you’re wondering, I’ve written 1233 posts in total.

According to the MediaPost search tool, this is my 1000th post for this publication. There are a few duplicates in there, but I’m not going to quibble. No matter how you count them up, that’s a lot of posts.

My first post was written on August 19th, 2004. Back then I wrote exclusively for the emerging search industry. Google was only 6 years old. They had just gone public, with investors hoping to cash in on this new thing called paid search. Social media was even greener. Facebook was just months old and still restricted to college campuses. Something called Myspace had launched the year before.

In the 20 years I’ve written for MediaPost, I’ve bounced from masthead to masthead. My editorial bent evolved from being search-industry specific to eventually finding my sweet spot at the intersection of human behavior and technology.

It’s been a long and usually interesting journey. When I started, I was the parent of two young children who I dragged along to industry events, using the summer search conference in San Jose as an opportunity to take a family camping vacation. I am now a grandfather, and I haven’t been to a digital conference for almost 10 years (the last being the conferences I used to host and program for the good folks here at MediaPost).

When I started writing these posts, I was both a humanist and a technophile. I believed that people were inherently good, and that technology would be the tool we would use to be better. The Internet was just starting to figure out how to make money, but it was still idealistic enough that people like me believed it would be mostly a good thing. Google still had the phrase “Don’t be Evil” as part of its code of conduct.

Knowing this post was coming up, I’ve spent the past few months wondering what I’d write when the time came. I didn’t want it to be yet another look back at the past 20 years. What history I have included is there simply to provide context.

No, I wanted this to be about what this journey has been like for me. There is one thing about having an editorial deadline that forces you to come up with something to write about every week or two: it compels you to pay attention. It also forces you to think. The person I am now – what I believe and how I think about both people and technology – has been shaped in no small part by writing these 1000 posts over the past 20 years.

So, if I started as a humanist and technophile, what am I now, 20 years later? That is a very tough question to answer. I am much more pessimistic now. And this post has forced me to examine the causes of that pessimism.

I realized I am still a humanist. I still believe that if I’m face to face with a stranger, I’ll always place my bet on them helping me if I need it. I have faith that it will pay off more often than it won’t. If anything, we humans may be just a tiny little bit better than we were 20 years ago: a little more compassionate, a little more accepting, a little more kind.

So, if humans haven’t changed, what has? Why do I have less faith in the future than I did 20 years ago? Something has certainly changed. But what was it, I wondered?

Coincidentally, as I was thinking of this, I was also reading the late Philip Zimbardo’s book – The Lucifer Effect: Understanding How Good People Turn Evil. Zimbardo was the researcher who oversaw the Stanford Prison Experiment, where ordinary young men were randomly assigned roles as guards or inmates in a makeshift prison set up in a Stanford University basement. To make a long story short – ordinary people started doing such terrible things that they had to cut the experiment short after just 6 days.

Zimbardo reminded me that people are usually not dispositionally good or evil, but we can find ourselves in situations that push us in either direction. We all have the capacity for both. Our behavior depends on the environment we function in. To use an analogy Zimbardo himself used, it may not be the apples that are bad. It could be the barrel.

So I realized, it isn’t people who have changed in the last 20 years, but the environment we live in. And a big part of that environment is the media landscape we have built in those two decades. That landscape looks nothing like it did back in 2004.  With the help of technology, we have built an information landscape that doesn’t really play to the strengths of humanity. It almost always shows us the worst side of ourselves. Journalism has been replaced by punditry. Dialogue and debate have been pushed out of the way by demagoguery and divisiveness.

So yes, I’m more pessimistic now than I was when I started this journey 20 years ago. But there is a glimmer of hope here. If people had truly changed, there would not be a lot we could do about that. But if it’s the media landscape that’s changed, that’s a different story. Because we built it, we can also fix it.

It’s something I’ll be thinking about as I start a new year.

The Strange Social Media Surge for Luigi Mangione

Luigi Mangione is now famous. Just one week ago, we had never heard of him. But now, he has become so famous, I don’t even have to recount the reason for his fame.

But, to me, what’s more interesting than Mangione’s sudden fame is how we feel about him. According to the Network Contagion Research Institute, there is a lot of online support for Luigi Mangione. An online funding campaign has raised over $130,000 for his legal defense fund. The hashtags #FreeLuigi and #TeamLuigi and other pro-Luigi memes have taken over every social media channel. Amazon, Etsy and eBay are scrambling to keep Luigi-inspired merchandise out of their online stores. His X (formerly Twitter) account has ballooned from 1,500 followers to almost half a million.

It’s an odd reaction for someone who is accused of gunning down a prominent American businessman in cold blood.

The outpouring of support for Luigi Mangione is so consequential, it’s threatening to lay a very heavy thumb on the scales of justice. Public support is so strong that prosecutors worry it could lead to jury nullification. It may be impossible to find unbiased and impartial jurors who would find Mangione guilty, even if his guilt is proven beyond a reasonable doubt.

Now, I certainly don’t want to comment on Mr. Mangione’s guilt, innocence or whether he’s appropriate material from which to craft a folk hero. Nor do I want to talk about the topic of American healthcare and the corporate ethics of UnitedHealthcare or any other medical insurance provider. I won’t even dive into the admittedly juicy and ironic twist that our latest anti-capitalist hero of the common people is a young, white, male, good looking, wealthy and privately educated scion who probably leans right in his political beliefs.

No, I will leave all of that well enough alone. What I do want to talk about is how this has played out through social media and why it’s different than anything we’ve seen before.

We behave and post differently depending on what social platform we’re on at the time. In sociology and psychology, this is called “modality.”  How we act depends on what role we’re playing and what mode we’re in. The people we are, the things we do, the things we say and the way we behave are very different when we’re being a parent at home, an employee at work or a friend having a few drinks after work with our buddies. Each mode comes with different scripts and we usually know what is appropriate to say in each setting.

It was sociologist Erving Goffman who likened it to being on stage in his 1956 book, The Presentation of Self in Everyday Life. The roles we choose to play depend on the audience we’re playing to. We try to stay consistent with the expectations we think the audience has of us. Goffman said, “We are all just actors trying to control and manage our public image; we act based on how others might see us.”

Now, let’s take this to the world of social media. What we post depends on how it plays to the audience of the platform we’re on. We may have a TikTok persona, a Facebook persona and an X persona. But all of those are considered mainstream platforms, especially when compared to platforms like 4Chan, Parler or Reddit. If we’re on any of those platforms, we are probably taking on a very different role and reading from a different script.

Think of it this way. Posting something on Facebook is a little like getting up and announcing something at a townhall meeting that’s being held at your kid’s school. You assume that the audience will be somewhat heterogeneous in terms of tastes and ideologies, and you consider your comments accordingly.

But posting something on 4Chan is like the conversation that might happen with your four closest bros (4Chan’s own demographics admit its audience is 70% male) after way too many beers at a bar. Fear of stepping over the line is non-existent. Racial slurs, misogynistic comments and conspiracy theories abound in this setting.

The thing that’s different with the Mangione example is that comments we would only expect to see on the fringes of social media are showing up in the metaphorical Town Square of Facebook and Instagram (I no longer put X in this category, thanks to Mr. Musk’s flirting with the fringe). In the report from the Network Contagion Research Institute, the authors said, “While this phenomenon was once largely confined to niche online subcultures, we are now witnessing similar dynamics emerging on mainstream platforms, amplifying the risk of further escalation.”

As is stated in this report, the fear is that by moving discussions of this sort into a mainstream channel, we legitimize them. We have moved the frame of what’s acceptable to say (my oft-referenced example of Overton’s Window) into uncharted territory in a new and much more public arena. This could create an information cascade, which can encourage copycats and other criminal behavior.

This is a social phenomenon that will have implications for our future. The degrees of separation between the wild, wacky outer fringes of social media and the mainstream information sources that we use to view the world through are disappearing, one by one. With the Luigi Mangione example, we just realized how much things have changed.

Why Hate is Trending Up

There seems to be a lot of hate in the world lately. But hate is a hard thing to quantify. There are, however, a couple places that may put some hard numbers behind my hunch.

Google’s Ngram Viewer tracks the frequency of a word’s appearance in published books from 2022 all the way back to 1800. According to Ngram, the usage of “hate” has skyrocketed, beginning in the mid-1980s. In 2022, the last year you can search, the frequency of “hate” was three times higher than its historical baseline.

Ngram also allows you to search separately for usage in American English and British English. You’ll either be happy or dismayed to learn that hate knows no boundaries. The British hate almost as much as Americans do, with the same steep incline over the past four decades. However, Americans still have an edge on usage, with a frequency about 40% higher than those speaking the Queen’s English.

One difference between the two graphs was during the years of the First World War, when usage of “hate” in Britain spiked briefly. The U.S. didn’t have the same spike.

Another way to measure hate is provided by the Southern Poverty Law Center in Montgomery, Alabama, which has been publishing a “hate map” since 2000. The map tracks hate and antigovernment groups. In 2000, the first year of the map, the SPLC tracked 599 hate groups across the U.S. By 2023, that number had grown by almost 140 percent, to 1,430.

So – yeah – it looks like we all hate a little more than we used to. I’ve talked before about Overton’s Window, that construct that defines what it is acceptable to talk about in public. And based on both these quantitative measures, it looks like “hate” is trending up. A lot.

I’m not immune to trends. I don’t personally track such things, but I’m pretty sure the word “hate” has slipped from my lips more often in the past few years. But here’s the thing. It’s almost never used towards a person I know well. It’s certainly never used towards a person I’m in the same room with. It’s almost always used towards a faceless construct that represents a person or a group of people that I really don’t know very well. It’s not like I sit down and have a coffee with them every week. And there we have one of the common catalysts of hate – something called “dehumanization.”

Dehumanization is a mental backflip where we take a group and strip them of their human qualities, including intelligence, compassion, kindness or social awareness. We in our own “in group” make those in the “out group” less than human so it’s easier to hate them. They are “stupid”, “ignorant”, “evil” or “animals”.

But an interesting thing happens when we’re forced to sit face to face with a representative from this group and actually engage them in conversation so we can learn more about them. Suddenly, we see they’re not as stupid, evil or animalistic as we thought. Sure, we might not agree with them on everything, but we don’t hate them. And the reason for this is another thing that makes us human: a molecule called oxytocin.

Oxytocin has been called the “trust molecule” by neuroeconomist Paul Zak. It kicks off a neurochemical reaction that readies our brains to be empathetic and trusting. It is part of our evolved trust-sensing mechanism, orchestrating a delicate dance between our prefrontal cortex and other regions like the amygdala.

But to get the oxytocin flowing, you really need to be face-to-face with a person. You need to be communicating with your whole body, not just your eyes or ears. The way we actually communicate has been called the 7-38-55 rule, thanks to research done in the 1960s and ’70s by UCLA body language researcher Albert Mehrabian. He found that – at least when we’re communicating feelings and attitudes – 7% of the message is carried by the words themselves, 38% by tone of voice and 55% by body language.

It’s that 93% of communication that is critical in the building of trust. And it can only happen face to face. Unfortunately, our society has done a dramatic about-face away from communication that happens in a shared physical space towards communication that is mediated through electronic platforms. And that started to happen about 40 years ago.

Hmmm, I wonder if there’s a connection?

The World vs Big Tech

Around the world, governments have their legislative crosshairs trained on Big Tech. It’s happening in the US, the EU and here in my country, Canada. The majority of these actions are anti-trust suits. But Australia has just introduced a different type of legislation: a social media ban for those under 16. And that could change the game – and the conversation – completely for Big Tech.

There are more anti-trust actions in the queue in the US than at any time in the previous five decades. The fast and loose interpretation of anti-trust enforcement in the US is that monopolies are only attacked when they may cause significant harm to consumers through lack of competition. The US approach to anti-trust since the 1970s has typically followed the Chicago School of neoclassical economic theory, which places all its trust in the efficiency of markets and tells government to keep its damned hands off the economy. Given this, and given the pro-business slant of every US administration, both Republican and Democratic, since Reagan, it’s not surprising that we’ve seen relatively few anti-trust suits in the past 50 years.

But the rapid rise of monolithic Big Tech platforms has raised more discussion about anti-trust in the past decade than in the previous 5 decades. These platforms suck along the industries they spawn in their wake and leave little room for upstart competitors to survive long enough to gain significant market share.

Case in point: Google. 

The recent Canadian lawsuit has the Competition Bureau (our anti-trust watchdog) suing Google over anti-competitive practices in selling its online advertising services north of the 49th parallel. The Bureau is asking that Google sell off two of its ad-tech tools, pay penalties worth up to 3% of the platform’s global gross revenues and be prohibited from engaging in anti-competitive practices in the future.

According to a 3-year inquiry into Google’s Canadian business practices by the Bureau, Google controls 90% of all ad servers and 70% of advertising networks operating in the country. Mind you, Google started the online advertising industry in the relatively green fields of Canada back when I was still railing about the ignorance of Canadian advertisers when it came to digital marketing. No one else really had a chance. But Google made sure they never got one by wrapping its gigantic arms around the industry in an anti-competitive bear hug.

The recent Australian legislation is of a different category, however. Anti-trust suits are – by nature – not personal. They are all about business. But the Australian ban puts Big Tech in the same category as Big Tobacco, Big Alcohol and Big Pharma – alleging that they are selling an addictive product that causes physical or emotional harm to individuals. And the rest of the world is closely watching what Australia does. Canada is no exception.

The most pertinent question is how Australia will enforce the ban. Restricting social media access to those under 16 is not something to be considered lightly. It’s a huge technical, legal and logistical hurdle to get over. But if Australia can figure it out, it’s certain that other jurisdictions around the world will follow in their footsteps.

This legislation opens the door to more vigorous public discourse about the impact of social media on our society. Politicians don’t introduce legislation unless they feel that – by doing so – they will continue to get elected. And the key to being elected is one of two things: give the electorate what they want or protect them against what they fear. In Australia, recent polling indicates the ban is supported by 77% of the population. Even those opposing the ban aren’t doing so in defense of social media. They’re worried that the devil might be in the details and that the legislation is being pushed through too quickly.

These types of things tend to follow a similar narrative arc: fads and trends drive widespread adoption – evidence mounts about the negative impacts – industries either ignore or actively sabotage the sources of the evidence – and, with enough critical mass, government finally gets into the act by introducing protective legislation.

With tobacco in the US, that arc took a couple of decades, from the explosion of smoking after World War II to the U.S. Surgeon General’s 1964 report linking smoking and cancer. The first warning labels on cigarette packages appeared two years later, in 1966.

We may be on the cusp of a similar movement with social media. And, once again, it’s taken 20 years. Facebook was founded in 2004.

Time will tell. In the meantime, keep an eye on what’s happening Down Under.

Democracy Dies in the Middle

As I write this, I don’t know what the outcome of the election will be. But I do know this. There has never been a U.S. Presidential election campaign quite like this one. If you were scripting a Netflix series, you couldn’t have made up a timeline like this (and this is only a sampling):

January 26 – A jury ordered Donald Trump to pay E. Jean Carroll $83 million in additional emotional, reputation-related, and punitive damages. The original award was $5 million.

April 15 – Trial of New York vs Donald Trump begins. Trump is charged with 34 felony counts.

May 30 – Trump is found guilty on all 34 counts in his New York trial, making him the first U.S. president to be convicted of a felony.

June 27 – Biden and Trump hold their first campaign debate hosted by CNN. Biden’s performance is so bad, it’s met with calls for him to suspend his campaign.

July 1 – The U.S. Supreme Court delivers a 6–3 decision in Trump v. United States, ruling that Trump had absolute immunity for acts he committed as president within his core constitutional purview. This effectively puts further legal action against Trump on hold until after the election.

July 13 – Trump is shot in the ear in an assassination attempt at a campaign rally held in Butler, Pennsylvania. One bystander and the shooter are killed and two others are injured.

July 21 – Biden announces his withdrawal from the race, necessitating the start of an “emergency transition process” for the Democratic nomination. On the same day, Kamala Harris announces her candidacy for president.

September 6 – Former vice president Dick Cheney and former Congresswoman Liz Cheney announce their endorsements for Harris. That’s the former Republican Vice President and the former chair of the House Republican Conference, endorsing a Democrat.

September 15 – A shooting takes place at the Trump International Golf Club in West Palm Beach, Florida, while Donald Trump is golfing. Trump is unharmed in the incident and is evacuated by Secret Service personnel.

With all of that, it’s rather amazing that – according to a recent Pew Research Center report – Americans don’t seem to be any more interested in the campaign than in previous election years. Numbers of people closely following election news are running about the same as in 2020 and behind what they were in 2016.

This could be attributed in part to a certain ennui on the part of Democrats. In the spring, their level of interest was running behind Republicans. It was only when Joe Biden dropped out in July that Democrats started tuning in in greater numbers. As of September, they were following just as closely as Republicans.

I also find it interesting to see where Americans are turning for their election coverage. For those 50-plus, it is overwhelmingly television. News websites and apps come in a distant second.

But for those under 50, social media is the preferred source, with news websites and television tied for second place. This is particularly true for those under 30, half of whom turn to social media. The 30-to-49 cohort is the most media-diverse, with their sources pretty much evenly split between TV, websites and social media.

If we look at how political affiliation impacts where people turn to be informed, there are no great surprises. Democrats favour the three networks (CBS, NBC and ABC), with CNN just behind. Republicans turn first to Fox News, then the three networks, then conservative talk radio.

The thing to note here is that Republicans tend to stick to news platforms known for having a right-wing perspective, while Democrats are more open to what could arguably be considered more objective sources.

It is interesting to note that this flips a bit with younger Republicans, who are more open to mainstream media like the three networks or papers like the New York Times. Sixty percent of Republicans aged 18 – 29 cited the three networks as a source of election information, and 45% mentioned the New York Times.

But we also have to remember that all younger people, Republican or Democrat, are more apt to rely on social media to learn about the election. And there we have a problem. Recently, George Washington University political scientist Dave Karpf was interviewed on CBC Radio about how Big Tech is influencing this election. What was interesting about Karpf’s comments is how social media is now just as polarized as our society: X has become a cesspool of right-leaning misinformation, led by Trump supporter Elon Musk, while Facebook has tried to depoliticize its content after coming under repeated fire for influencing previous campaigns.

So, the two platforms that Karpf said were the most stable in past elections have effectively lost their status as common ground for messaging to the right and the left. Karpf explains, “Part of what we’re seeing with this election cycle is a gap where nothing has really filled into those voids and left campaigns wondering what they can do. They’re trying things out on TikTok, they’re trying things out wherever they can, but we lack that stability. It is, in a sense, the first post social media election.”

This creates a troubling gap. If those under the age of 30 turn first to social media to be informed, what are they finding there? Not much, according to Karpf. And what they are finding is terribly biased, to the point of lacking any real objectivity.

In 2017, the Washington Post added this line under its masthead: “Democracy Dies in Darkness”. But in this polarized mediascape, I think it’s more accurate to say “Democracy Dies in the Middle”. There’s a Right-Wing reality and a Left-Wing reality. The truth is somewhere in the middle. But it’s getting pretty hard to find it.

Not Everything is Political. Hurricanes, for Example.

During the two recent “once in a lifetime” hurricanes that happened to strike the southern US within two weeks of each other, people apparently thought they were a political plot and that meteorologists were in on the conspiracy.

Michigan meteorologist Katie Nickolaou received death threats through social media.

“I have had a bunch of people saying I created and steered the hurricane, there are people assuming we control the weather. I have had to point out that a hurricane has the energy of 10,000 nuclear bombs and we can’t hope to control that. But it’s taken a turn to more violent rhetoric, especially with people saying those who created Milton should be killed.”

Many weather scientists were simply stunned at the level of stupidity and misinformation hurled their way. After someone suggested that someone should “stop the breathing” of those that “made” the hurricanes, Nickolaou responded with this post, “Murdering meteorologists won’t stop hurricanes. I can’t believe I just had to type that.”

Washington, D.C.-based meteorologist Matthew Cappucci also received threats: “Seemingly overnight, ideas that once would have been ridiculed as very fringe, outlandish viewpoints are suddenly becoming mainstream, and it’s making my job much more difficult.”

Marjorie Taylor Greene, U.S. Representative for Georgia’s 14th congressional district, jumped forcefully into the fray by suggesting the conspiracy was politically motivated. She posted on X: “This is a map of hurricane affected areas with an overlay of electoral map by political party shows how hurricane devastation could affect the election.”

And just in case you’re giving her the benefit of the doubt by saying she might just be pointing out a correlation, not a cause, she doubled down with this post on X: “Yes they can control the weather, it’s ridiculous for anyone to lie and say it can’t be done.” 

You may say that when it comes to MTG, we must consider the source and sigh, “You can’t cure stupid.” But Marjorie Taylor Greene easily won a democratic election with almost 66% of the vote, which means the majority of people in her district believed in her enough to elect her as their representative. Her opponent, Marcus Flowers, is a 10-year veteran of the US Army who also served 20 years as a contractor or official for the State Department and Department of Defense. He’s no slouch. But in Georgia’s 14th Congressional district, two out of three voters decided a better choice would be the woman who believes that the Nazi Secret Police were called the Gazpacho.

I’ve talked about this before. Ad nauseam, actually. But this reaches a new level of stupidity…and stupidity on this scale is f*&king frightening. It is the biggest threat we as humans face.

That’s right, I said the “biggest” threat.  Bigger than climate change. Bigger than AI. Bigger than the new and very scary alliance emerging between Russia, Iran, North Korea and China. Bigger than the fact that Vladimir Putin, Donald Trump and Elon Musk seem to be planning a BFF pajama party in the very near future.

All of those things can be tackled if we choose to. But if we are functionally immobilized by choosing to be represented by stupidity, we are willfully ignoring our way to a point where these existential problems – and many others we’re not aware of yet – can no longer be dealt with.

Brian Cox, a professor of particle physics at the University of Manchester and host of science TV shows including Universe and The Planets, is also warning us about rampant stupidity. “We may laugh at people who think the Earth is flat or whatever, the darker side is that, if we become unmoored from fact, we have a very serious problem when we attempt to solve big challenges, such as AI regulation, climate or avoiding global war. These are things that require contact with reality.” 

At issue here is that people are choosing politics over science. And there is nothing that tethers politics to reality. Politics are built on beliefs. Science strives to be built on provable facts. If we choose politics over science, we are embracing wilful ignorance. And that will kill us.

Hurricanes offer us the best possible example of why that is so. Let’s say you, along with Marjorie Taylor Greene, believe that hurricanes are created by meteorologists and mad weather scientists. So, when those nasty meteorologists try to warn you that the storm of the century is headed directly towards you, you respond in one of two ways: you don’t believe them, or you get mad and condemn them as part of a conspiracy on social media. Neither of those things will save you. Only accepting science as a reliable prediction of the impending reality will give you the best chance of survival, because it allows you to take action.

Maybe we can’t cure stupid. But we’d better try, because it’s going to be the death of us.

The Political Brinkmanship of Spam

I am never a fan of spam, but especially not when there is an upcoming election. The level of spam I have been wading through seems to have doubled lately. We just had a provincial election here in British Columbia and all parties pulled out all the stops, which included, but was not limited to: email, social media posts, robotexts and robocalls.

In Canada and the US, political campaigns are not subject to phone and text spam control laws such as our Canadian Do Not Call List legislation. There seems to be a little more restriction on email spam. A report from Nationalsecuritynews.com this past May warned that Americans would be subjected to over 16 billion political robocalls. That is a ton of spam.

During this past campaign here in B.C., I noticed that I do not respond to all spam with equal abhorrence. Ironically, the spam channels with the loosest restrictions are the ones that frustrate me the most.

There are places – like email – where I expect spam. It’s part of the rules of engagement. But there are other places where spam sneaks through and seems a greater intrusion on me. In these channels, I tend to have a more visceral reaction to spam. I get both frustrated and angry when I have to respond to an unwanted text or phone call. But with email spam, I just filter and delete without feeling like I was duped.

Why don’t we deal with all spam – no matter the channel – the same way? Why do some forms of spam make us more irritated than others? It’s almost like we’ve developed an internal spam algorithm that dictates how irritated we get.

According to an article in Scientific American, the answer might be in how the brain marshals its own resources.

When it comes to capacity, the brain is remarkably protective. It usually defaults to the most efficient path. It likes to glide on autopilot, relying on instinct, habit and beliefs. All these things use much less cognitive energy than deliberate thinking. That’s probably why “mindfulness” is the most often quoted but least often used meme in the world today.

The resource we’re working with here is attention. Limited by the capacity of our working memory, attention is a spotlight we must use sparingly. Our working memory is only capable of handling a few discrete pieces of information at a time. Recent research suggests the limit may be around 3 to 5 “chunks” of information, and that research was done on young adults. Like most things with our brains, the capacity probably diminishes with age. Therefore, the brain is very stingy with attention. 

I think spam that somehow gets past our first line of defence – the feeling that we’re in control of filtering – makes us angry. We have been tricked into paying attention to something unexpected. It becomes a control issue. In an information environment where we feel we have more control, we probably have less of a visceral response to spam. This would be true for email, where a quick scan of the items in our inbox is probably enough to filter out the spam. The amount of attention that gets hijacked by spam is minimal.

But when spam launches a sneak attack and demands a swing of attention that is beyond our control, that’s a different matter. We operate with a different mental modality when we answer a phone or respond to a text. Unlike email, we expect those channels to be relatively spam-free, or at least they are until an election campaign comes around. We go in with our spam defences down and then our brain is tricked into spending energy to focus on spurious messaging.

How does the brain conserve energy? It uses emotions. We get irritated when something commandeers our attention. The more unexpected the diversion, the greater the irritation.  Conversely, there is the equivalent of junk food for the brain – input that requires almost no thought but turns on the dopamine tap and becomes addictive. Social media is notorious for this.

This battle for our attention has been escalating for the past two decades. As we try to protect ourselves from spam with more powerful filters, those who spread spam find new ways to get past those filters. The reason political messaging was exempt from spam control legislation is that democracies need a well-informed electorate, and during election campaigns political parties should be able to send out accurate information about their platforms and positions.

That was the theory, anyway.

Band Identities and Identity Bands

If one nation ever identified with one band, it would be Canada and The Tragically Hip. Up here in the Great White North, one can’t even mention the band without the word “iconic” spilling out. And, when iconic is defined as “a representative symbol or worthy of veneration” – well, as a Canadian, all I can say is – the label fits. I went on about why this was way back in 2016 when the Tragically Hip did their farewell concert in Kingston, Ontario. Just 14 months later, lead singer Gord Downie was gone, a victim, at far too young an age, of glioblastoma – a deadly form of brain cancer.

If you are at all curious about how a bond can build between a nation and a band, I would highly recommend diving into the new Prime Video docuseries, The Tragically Hip: No Dress Rehearsal. Directed by Gord’s brother, Mike Downie, it’s a 256-minute, 4-part love letter to a band. A who’s who of famous Canadian Hip fans, including Dan Aykroyd, Jay Baruchel, Will Arnett and even our Prime Minister, Justin Trudeau, go on about the incredible connection between the band and our nation.

But, like all love stories, there is bitter and sweet here. Over their 32-year history as Canada’s favorite band, there were rough patches. Mike Downie interviews the remaining 4 band members and pulls no punches when it comes to talking about one particularly tense time – from 2009 to 2014 – when the band was barely communicating with each other.

Most Canadians had no idea there was “Trouble at the Henhouse” (the name of the Hip’s 6th album). As George Stroumboulopoulos, a Canadian journalist who interviewed the band more than once, said, “There are a couple of things that you can’t tell the truth about in this country, and one of the things you can’t tell the truth about in the country is that the guys in the Tragically Hip probably didn’t get along as often as everybody said they did.”

As I watched the series, I couldn’t help but think about the strange nature of band identities and how they play out, both internally and externally. How and why do we find part of our identities in a rock band, and what happens on the inside when the band breaks up? That didn’t happen to the Hip, but that’s possibly because Downie received his terminal diagnosis in 2015 and he wanted to do one last tour.

In the Panther, the campus newspaper of California’s Chapman University, reporter Megan Forrester explores why bands break up. She points to a psychological theory as the possible culprit: “Psychology professor Samantha Gardner told The Panther that friction and an ultimate dissolution of a group happens due to social identity theory. This theory suggests that any group that people associate themselves with, whether that is an extracurricular club, volunteer organization or a band, helps boost their self-esteem and reduce uncertainty in one’s identity. 

“But once the values of the group change course, Gardner said that is when tensions rise. 

‘The group members may have thought, ‘I don’t think this identity of being a member of this group is really who I am or it’s not what I envisioned,’ Gardner said.”

The issue with bands is that the evolution of values and identities happens at different times for different members. We, as the public, find it hard to identify with 4 or 5 individuals equally. We naturally elevate one or – at the most – two members of the band to star status. This is typically the lead singer. That can be a tough pill to swallow for the rest of the band, who play just beyond the reach of the spotlight. That is, in part, what happened to the Tragically Hip. When you have a mesmerizing front man, it’s hard not to focus on him. Gord Downie was moving at a different speed than the rest of the Hip.

But an equally interesting thing is what happens to the fans of the band. Not only do the members get their identity from the band; if we follow a band, we also get part of our identity from that band. And when that band breaks up, we lose a piece of ourselves. We still haven’t forgiven Yoko Ono for breaking up the Beatles, and that supposedly happened (we should blame social identity theory rather than Yoko) over 50 years ago.

I think the Tragically Hip also knew Canada would never forgive them if they broke up. We needed to believe in 5 guys who were happy to be famous in Canada, who more than once flipped US-based stardom the bird (including getting high before their SNL debut) and who banded together to create great music for the world – but especially Canada – to enjoy. 

There’s nothing new about us common folks looking to the famous to help define ourselves. We’ve been doing that for centuries. But there is a difference when we look to get that identity from a group rather than an individual. Canada has lots of stars – singular – that we could identify with: Celine Dion, Drake, Justin Bieber. So why did 5 guys from Kingston, Ontario become the ones we chose as our identity badge? Why did we resist the urge to look for an individual star and choose the Tragically Hip instead?

I think part of it was what I wrote before: the Tragically Hip appealed to Canadians because they stayed in Canada and gained a very Canadian type of stardom. But I also think Canadians liked the idea of identifying with a group rather than an individual. That was a good fit for our shared values.

Let’s do a little back-of-the-napkin testing of that hypothesis. If Canadians looked to a band for identity, would a more individualistic culture – like the U.S. – be more likely to look for that identity in individuals?

Given U.S. domination of pretty much every type of culture, you would expect it to also dominate a list of the greatest bands of all time. But a little research on Google will tell you that of a typical Top 10 list of the Greatest Bands, about two-thirds are British. There are a few that are American, but they are typically named with the same formula: Lead Singer + the Name of the Band. For example: Prince and the Revolution, Joan Jett and the Blackhearts, Bruce Springsteen and the E Street Band. There are exceptions, but I was surprised how few really famous US-based bands have names that are not tied to a person or persons in the band (Nirvana and the Eagles are two that come to mind).

Let’s try another angle: as our culture becomes more individualistic – as it undoubtedly has over the last 3 decades – would our search for identity follow a similar trend? There again, the proof seems to be in our playlists. If you look for the greatest hits of the last 20 years, you will find very few bands in there. Maroon 5 seems to be the only band that creeps into the top 20.

Be that as it may, I recommend taking 256 minutes to learn what Canadians already know: The Tragically Hip kicked ass!