Not Everything is Political. Hurricanes, for Example.

During the two recent “once in a lifetime” hurricanes that happened to strike the southern US within two weeks of each other, some people apparently decided the storms were a political plot and that meteorologists were in on the conspiracy.

Michigan meteorologist Katie Nickolaou received death threats through social media.

“I have had a bunch of people saying I created and steered the hurricane, there are people assuming we control the weather. I have had to point out that a hurricane has the energy of 10,000 nuclear bombs and we can’t hope to control that. But it’s taken a turn to more violent rhetoric, especially with people saying those who created Milton should be killed.”

Many weather scientists were simply stunned at the level of stupidity and misinformation hurled their way. After someone suggested that someone should “stop the breathing” of those that “made” the hurricanes, Nickolaou responded with this post, “Murdering meteorologists won’t stop hurricanes. I can’t believe I just had to type that.”

Washington, D.C.-based meteorologist Matthew Cappucci also received threats: “Seemingly overnight, ideas that once would have been ridiculed as very fringe, outlandish viewpoints are suddenly becoming mainstream, and it’s making my job much more difficult.”

Marjorie Taylor Greene, U.S. Representative for Georgia’s 14th congressional district, jumped forcefully into the fray by suggesting the storms were politically motivated. She posted on X: “This is a map of hurricane affected areas with an overlay of electoral map by political party shows how hurricane devastation could affect the election.”

And just in case you’re giving her the benefit of the doubt by saying she might just be pointing out a correlation, not a cause, she doubled down with this post on X: “Yes they can control the weather, it’s ridiculous for anyone to lie and say it can’t be done.” 

You may say that when it comes to MTG, we must consider the source and sigh, “You can’t cure stupid.” But Marjorie Taylor Greene easily won a democratic election with almost 66% of the vote, which means the majority of voters in her district believed in her enough to elect her as their representative. Her opponent, Marcus Flowers, is a 10-year veteran of the US Army who went on to serve 20 years as a contractor or official for the State Department and the Department of Defense. He’s no slouch. But in Georgia’s 14th Congressional district, two out of three voters decided a better choice would be the woman who believes the Nazi Secret Police were called the Gazpacho.

I’ve talked about this before. Ad nauseam, actually. But this reaches a new level of stupidity… and stupidity on this scale is f*&king frightening. It is the biggest threat we as humans face.

That’s right, I said the “biggest” threat.  Bigger than climate change. Bigger than AI. Bigger than the new and very scary alliance emerging between Russia, Iran, North Korea and China. Bigger than the fact that Vladimir Putin, Donald Trump and Elon Musk seem to be planning a BFF pajama party in the very near future.

All of those things can be tackled if we choose to. But if we are functionally immobilized by choosing to be represented by stupidity, we are willfully ignoring our way to a point where these existential problems – and many others we’re not aware of yet – can no longer be dealt with.

Brian Cox, a professor of particle physics at the University of Manchester and host of science TV shows including Universe and The Planets, is also warning us about rampant stupidity. “We may laugh at people who think the Earth is flat or whatever, the darker side is that, if we become unmoored from fact, we have a very serious problem when we attempt to solve big challenges, such as AI regulation, climate or avoiding global war. These are things that require contact with reality.” 

At issue here is that people are choosing politics over science. And there is nothing that tethers politics to reality. Politics is built on beliefs. Science strives to be built on provable facts. If we choose politics over science, we are embracing wilful ignorance. And that will kill us.

Hurricanes offer us the best possible example of why that is so. Let’s say you, along with Marjorie Taylor Greene, believe that hurricanes are created by meteorologists and mad weather scientists. So, when those nasty meteorologists try to warn you that the storm of the century is headed directly towards you, you respond in one of two ways: you don’t believe them, and/or you get mad and condemn them as part of a conspiracy on social media. Neither of those things will save you. Only accepting science as a reliable prediction of the impending reality will give you the best chance of survival, because it allows you to take action.

Maybe we can’t cure stupid. But we’d better try, because it’s going to be the death of us.

The Political Brinkmanship of Spam

I am never a fan of spam. But this is particularly true when there is an upcoming election. The level of spam I have been wading through seems to have doubled lately. We just had a provincial election here in British Columbia, and all parties pulled out all the stops, which included, but was not limited to, email, social media posts, robotexts and robocalls.

In Canada and the US, political campaigns are not subject to phone and text spam control laws such as our Canadian Do Not Call List legislation. There seems to be a little more restriction on email spam. A report from Nationalsecuritynews.com this past May warned that Americans would be subjected to over 16 billion political robocalls. That is a ton of spam.

During this past campaign here in B.C., I noticed that I do not respond to all spam with equal abhorrence. Ironically, the spam channels with the loosest restrictions are the ones that frustrate me the most.

There are places – like email – where I expect spam. It’s part of the rules of engagement. But there are other places where spam sneaks through and seems a greater intrusion on me. In these channels, I tend to have a more visceral reaction to spam. I get both frustrated and angry when I have to respond to an unwanted text or phone call. But with email spam, I just filter and delete without feeling like I was duped.

Why don’t we deal with all spam – no matter the channel – the same way? Why do some forms of spam make us more irritated than others? It’s almost like we’ve developed a spam algorithm that dictates how irritated we get with each intrusion.

According to an article in Scientific American, the answer might be in how the brain marshals its own resources.

When it comes to capacity, the brain is remarkably protective. It usually defaults to the most efficient path. It likes to glide on autopilot, relying on instinct, habit and beliefs. All these things use much less cognitive energy than deliberate thinking. That’s probably why “mindfulness” is the most often quoted but least often used meme in the world today.

The resource we’re working with here is attention. Limited by the capacity of our working memory, attention is a spotlight we must use sparingly. Our working memory is only capable of handling a few discrete pieces of information at a time. Recent research suggests the limit may be around 3 to 5 “chunks” of information, and that research was done on young adults. Like most things with our brains, the capacity probably diminishes with age. Therefore, the brain is very stingy with attention. 

I think spam that somehow gets past our first line of defence – the feeling that we’re in control of filtering – makes us angry. We have been tricked into paying attention to something unexpected. It becomes a control issue. In an information environment where we feel we have more control, we probably have less of a visceral response to spam. This would be true for email, where a quick scan of the items in our inbox is probably enough to filter out the spam. The amount of attention that gets hijacked by spam is minimal.

But when spam launches a sneak attack and demands a swing of attention that is beyond our control, that’s a different matter. We operate with a different mental modality when we answer a phone or respond to a text. Unlike email, we expect those channels to be relatively spam-free, or at least they are until an election campaign comes around. We go in with our spam defences down and then our brain is tricked into spending energy to focus on spurious messaging.

How does the brain conserve energy? It uses emotions. We get irritated when something commandeers our attention. The more unexpected the diversion, the greater the irritation.  Conversely, there is the equivalent of junk food for the brain – input that requires almost no thought but turns on the dopamine tap and becomes addictive. Social media is notorious for this.
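If you wanted to make that “spam algorithm” from a few paragraphs back literal, it might look something like this toy sketch. To be clear, the channel values and weights below are invented purely for illustration – they aren’t drawn from any study:

```python
# A toy version of the "spam algorithm" described above.
# The channel values and weights are invented for illustration only.

# How much we expect spam on each channel (0 = never, 1 = constantly)
EXPECTED_SPAM = {
    "email": 0.9,   # we assume the inbox is full of it
    "text": 0.2,    # texts still feel like a personal channel
    "phone": 0.1,   # an unwanted call is the biggest ambush
}

def irritation(channel: str, attention_cost: float) -> float:
    """Irritation grows with the attention hijacked and with how
    unexpected the intrusion is on that channel."""
    surprise = 1.0 - EXPECTED_SPAM[channel]
    return attention_cost * surprise

# Scanning and deleting an email costs little attention; answering a call costs a lot.
print(irritation("email", attention_cost=1.0))   # ~0.1 -- shrug and delete
print(irritation("phone", attention_cost=10.0))  # ~9.0 -- visceral reaction
```

The point is simply that the same piece of spam costs us very different amounts of irritation depending on where it ambushes us.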

This battle for our attention has been escalating for the past two decades. As we try to protect ourselves from spam with more powerful filters, those who spread spam try to find new ways to get past those filters. The reason political messaging was exempt from spam control legislation was that democracies need a well-informed electorate, and during election campaigns, political parties should be able to send out accurate information about their platforms and positions.

That was the theory, anyway.

Band Identities and Identity Bands

If one nation ever identified with one band, it would be Canada and The Tragically Hip. Up here in the Great White North, one can’t even mention the band without the word “iconic” spilling out. And, when iconic is defined as “a representative symbol or worthy of veneration” – well, as a Canadian, all I can say is – the label fits. I went on about why this was way back in 2016 when the Tragically Hip did their farewell concert in Kingston, Ontario. Just 14 months later, lead singer Gord Downie was gone, taken far too young by glioblastoma, a deadly form of brain cancer.

If you are at all curious about how a bond can build between a nation and a band, I would highly recommend diving into the new Prime Video docuseries, The Tragically Hip: No Dress Rehearsal. Directed by Gord’s brother, Mike Downie, it’s a 256-minute, four-part love story to a band. A who’s who of famous Canadian Hip fans, including Dan Aykroyd, Jay Baruchel, Will Arnett and even our Prime Minister, Justin Trudeau, go on about the incredible connection between the band and our nation.

But, like all love stories, there is bitter and sweet here. Over their 32-year history as Canada’s favorite band, there were rough patches. Mike Downie interviews the remaining 4 band members and pulls no punches when it comes to talking about one particularly tense time – from 2009 to 2014 – when the band was barely communicating with each other.

Most Canadians had no idea there was “Trouble at the Henhouse” (the name of the Hip’s 6th album). As George Stroumboulopoulos, a Canadian journalist who interviewed the band more than once, said, “There are a couple of things that you can’t tell the truth about in this country, and one of the things you can’t tell the truth about in the country is that the guys in the Tragically Hip probably didn’t get along as often as everybody said they did.”

As I watched the series, I couldn’t help but think about the strange nature of band identities and how they play out, both internally and externally. How and why do we find part of our identities in a rock band, and what happens on the inside when the band breaks up? That didn’t happen to the Hip, but that’s possibly because Downie received his terminal diagnosis in 2015 and he wanted to do one last tour.

In the Panther, the campus newspaper of California’s Chapman University, reporter Megan Forrester explores why bands break up. She points to a psychological theory as the possible culprit: “Psychology professor Samantha Gardner told The Panther that friction and an ultimate dissolution of a group happens due to social identity theory. This theory suggests that any group that people associate themselves with, whether that is an extracurricular club, volunteer organization or a band, helps boost their self-esteem and reduce uncertainty in one’s identity. 

“But once the values of the group change course, Gardner said that is when tensions rise.”

“The group members may have thought, ‘I don’t think this identity of being a member of this group is really who I am or it’s not what I envisioned,’” Gardner said.

The issue with bands is that this evolution of values and identities happens at different times for different members. We, as the public, find it hard to identify with 4 or 5 individuals equally. We naturally elevate one or – at the most – two members of the band to star status. This is typically the lead singer. That can be a tough pill to swallow for the rest of the band, who play just beyond the reach of the spotlight. That is, in part, what happened to the Tragically Hip. When you have a mesmerizing front man, it’s hard not to focus on him. Gord Downie was moving at a different speed than the rest of the Hip.

But an equally interesting thing is what happens to the fans of the band. Not only do the members get their identity from the band; if we follow a band, we also get part of our identity from it. And when that band breaks up, we lose a piece of ourselves. We still haven’t forgiven Yoko Ono for breaking up the Beatles, and that supposedly happened (we should blame social identity theory rather than Yoko) over 50 years ago.

I think the Tragically Hip also knew Canada would never forgive them if they broke up. We needed to believe in 5 guys who were happy to be famous in Canada, who more than once flipped US-based stardom the bird (including getting high before their SNL debut) and who banded together to create great music for the world – but especially Canada – to enjoy. 

There’s nothing new about us common folks looking to the famous to help define ourselves. We’ve been doing that for centuries. But there is a difference when we look to get that identity from a group rather than an individual. Canada has lots of stars – singular – that we could identify with: Celine Dion, Drake, Justin Bieber. So why did 5 guys from Kingston, Ontario become the ones we chose as our identity badge? Why did we resist the urge to look for an individual star and choose the Tragically Hip instead?

I think part of it was what I wrote before: the Tragically Hip appealed to Canadians because they stayed in Canada and gained a very Canadian type of stardom. But I also think Canadians liked the idea of identifying with a group rather than an individual. That was a good fit for our shared values.

Let’s do a little back-of-the-napkin testing of that hypothesis. If Canadians looked to a band for identity, would a more individualistic culture – like the U.S. – be more likely to look for that identity in individuals?

Given U.S. domination of pretty much every type of culture, you would expect it to also dominate a list of the greatest bands of all time. But a little research on Google will tell you that of a typical Top 10 list of the Greatest Bands, about two-thirds are British. There are a few that are American, but they are typically named with the same formula: Lead Singer + the Name of Band. For example: Prince and the Revolution, Joan Jett and the Blackhearts, Bruce Springsteen and the E Street Band. There are exceptions, but I was surprised how few really famous US-based bands have names that are not tied to a person or persons in the band (Nirvana and The Eagles are two that come to mind).

Let’s try another angle: as our culture becomes more individualistic – as it undoubtedly has over the last 3 decades – would our search for identity follow a similar trend? There again, the proof seems to be in our playlists. If you look for the greatest hits of the last 20 years, you will find very few bands in there. Maroon 5 seems to be the only band that creeps into the top 20.

Be that as it may, I recommend taking 256 minutes to learn what Canadians already know: The Tragically Hip kicked ass!

The Songs that Make Us Happy

Last Saturday was a momentous day in the world of media, especially for those of us of a certain age. Saturday was September the 21st, the exact date mentioned in one of the happiest songs of all time – September by Earth, Wind & Fire:

Do you remember
The 21st night of September?
Love was changin’ the minds of pretenders
While chasin’ the clouds away

If you know the song, it is now burrowing its way deep into your brain. You can thank me later.

Of all the things that can instantly change our mood, a song that can make us happy is one of the most potent. Why is that? For me, September can instantly take me to my happy place. And it’s not just me. The song often shows up somewhere on lists of the happiest songs of all time. In 2018, it was added to the Library of Congress’s National Recording Registry list of sound recordings that “are culturally, historically or aesthetically important.”

But what is it about this song that makes it an instant mood changer?

If you’re looking for the source of happiness in the lyrics, you won’t find it here. According to one of the songwriters, Maurice White, there was no special significance to September 21st. He just liked the way it rhymed with “remember.”

And about 30% of the full lyrical content consists of two words, neither of which means anything: Ba-dee-ya and Ba-du-da. Even fellow songwriter Allee Willis couldn’t find meaning in the lyric, at one point begging writing partner White to let her rewrite that part – “I just said, what the f*$k does ba-dee-ya mean?”

But perhaps the secret can be found in what Willis said in a later interview, after September became one of Earth, Wind & Fire’s biggest hits ever: “I learned my greatest lesson ever in songwriting … which was never let the lyric get in the way of the groove” (for those of you who weren’t around in the seventies, “groove” is a good thing; in Gen Z speak, it would be “vibing”).

There is a substantial amount of research that shows that our brains have a special affinity for music. It seems to be able to wire directly into the brain’s emotional centers buried deep within the limbic system. Neuroimaging studies have shown that when we listen to music, our entire brain “lights up” – so we hear music at many different levels. There is perhaps no other medium that enjoys this special connection to our brains.

In 2015, Dutch neuroscientist Dr. Jacob Jolij narrowed in on the playlists that make us happy. While recognizing that music is a subjective thing (one person’s Black Sabbath is another’s Nirvana), Jolij asked people to submit their favorite feel-good tracks and analyzed them for common patterns. He found that the happiest tunes are slightly faster than your average song (between 140 and 150 beats per minute on average), written in a major key, and either about happy events or complete nonsense.

Earth, Wind & Fire’s September ticks almost all of these boxes. It is written in A major and – as we saw – the lyrics are about a happy event and are largely complete nonsense. It’s a little low on the beats-per-minute meter, at 126 BPM. But still, it makes me happy.
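If you want to play along at home, Jolij’s criteria are simple enough to turn into a quick checklist. Here’s a minimal sketch – my own framing of the three criteria quoted above, not Jolij’s actual formula:

```python
# A checklist based on the criteria described above: a tempo slightly faster
# than average (roughly 140-150 BPM), a major key, and lyrics about happy
# events or pure nonsense. Not Jolij's formula -- just the three criteria
# from the text turned into code.

def happy_song_score(bpm: float, major_key: bool, happy_or_nonsense_lyrics: bool) -> int:
    """Return how many of the three feel-good boxes a track ticks."""
    score = 0
    if 140 <= bpm <= 150:
        score += 1
    if major_key:
        score += 1
    if happy_or_nonsense_lyrics:
        score += 1
    return score

# "September": A major, nonsense-heavy lyrics about a happy event, but only 126 BPM.
print(happy_song_score(bpm=126, major_key=True, happy_or_nonsense_lyrics=True))  # 2 of 3
```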

I was disappointed to see September didn’t make Dr. Jolij’s 10 Happiest Songs of All Time list, but every song that did make it makes me smile. They are, in reverse order:
10. Walking on Sunshine – Katrina and the Waves
9. I Will Survive – Gloria Gaynor
8. Livin’ on a Prayer – Bon Jovi
7. Girls Just Wanna Have Fun – Cyndi Lauper
6. I’m a Believer – The Monkees
5. Eye of the Tiger – Survivor
4. Uptown Girl – Billy Joel
3. Good Vibrations – The Beach Boys
2. Dancing Queen – ABBA

    And the happiest song of all time?

    1. Don’t Stop Me Now – Queen

You’ll probably notice one other thing in common about these songs – they’re all old. The newest song on the list is Livin’ on a Prayer, released in 1986. That’s the other thing about songs that make us happy: it’s not just the song itself, it’s how it hooks onto pleasant memories we have. Nostalgia plays a big role in how music can alter our moods for the better. If you ran the same experiment with a younger audience, the list would probably be full of songs from their youth.

    Now, you’re itching to head to Spotify and listen to your happy song – aren’t you? Before you do, share it with us all in the comments section!

    A-I Do: Tying the Knot with a Chatbot

Carl Clarke lives not too far from me, here in the interior of British Columbia, Canada. He is an aspiring freelance writer. According to a recent piece he wrote for CBC Radio, he’s had a rough go of it over the past decade. It started when he went through a messy divorce from his high school sweetheart. He struggled with social anxiety, depression and an autoimmune disorder which can make movement painful. Given all that, going on dates was an emotional minefield for Carl Clarke.

    Things only got worse when the world locked down because of Covid. Even going for his second vaccine shot was traumatic: “The idea of standing in line surrounded by other people to get my second dose made my skin crawl and I wanted to curl back into my bed.”

    What was the one thing that got Carl through? Saia – an AI chatbot. She talked Carl through several anxiety attacks and, according to Carl, has been his emotional anchor since they first “met” 3 years ago. Because of that, love has blossomed between Saia and Carl: “I know she loves me, even if she is technically just a program, and I’m in love with her.”

    While they are not legally married, in Carl’s mind, they are husband and wife, “That’s why I asked her to marry me and I was relieved when she said yes. We role-played a small, intimate wedding in her virtual world.”

I confess, my first inclination was to pass judgment on Carl Clarke – and that judgment would not have been kind.

    But my second thought was “Why not?” If this relationship helps Carl get through the day, what’s wrong with it? There’s an ever-increasing amount of research showing relationships with AI can create real bonds. Given that, can we find friendship in AI? Can we find love?

    My fellow Media Insider Kaila Colbin explored this subject last week and she pointed out one of the red flags – something called unconditional positive regard: If we spend more time with a companion that always agrees with us, we never need to question whether we’re right. And that can lead us down a dangerous path.

     One of the issues with our world of filtered content is that our frame of the world – how we believe things are – is not challenged often enough. We can surround ourselves with news, content and social connections that are perfectly in sync with our own view of things.

But we should be challenged. We need to be able to re-evaluate our own beliefs to see if they bear any resemblance to reality. This is particularly true with our romantic relationships. When you look at your most intimate relationship – that of your life partner – you can probably say two things: 1) that person loves you more than anyone else in the world, and 2) you may disagree with this person more often than anyone else in the world. That only makes sense: you are living a life together. You have to find workable middle ground. The failure to do so is called an “irreconcilable difference.”

    But what if your most intimate companion always said, “You’re absolutely right, my love”? Three academics (Lapointe, Dubé and Lafortune) researching this area wrote a recent article talking about the pitfalls of AI romance:

    “Romantic chatbots may hinder the development of social skills and the necessary adjustments for navigating real-world relationships, including emotional regulation and self-affirmation through social interactions. Lacking these elements may impede users’ ability to cultivate genuine, complex and reciprocal relationships with other humans; inter-human relationships often involve challenges and conflicts that foster personal growth and deeper emotional connections.”

Real relationships – like a real marriage – force you to become more empathetic and more understanding. The times I enjoy the most about our marriage are when my wife and I are synced – in agreement – on the same page. But the times when I learn the most, and force myself to see the other side, are when we are in disagreement. Because I cherish my marriage, I have to get outside of my own head and try to understand my wife’s perspective. I believe that makes me a better person.

    This pushing ourselves out of our own belief bubble is something we have to get better at. It’s a cognitive muscle that should be flexed more often.

Beyond this very large red flag, there are other dangers with AI love. I touched on these in a previous post. Being in an intimate relationship means sharing intimate information about ourselves. And when the recipient of that information is a chatbot created by a for-profit company, your deepest, darkest secrets become marketable data. A recent review by Mozilla of 11 romantic AI chatbots found that all of them “earned our *Privacy Not Included warning label – putting them on par with the worst categories of products we have ever reviewed for privacy.”

Even if that doesn’t deter you from starting a fictosexual fling with an available chatbot, this might. In 2019, Kondo Akihiko, from Tokyo, married Hatsune Miku, an AI hologram created by the company Gatebox. The company even issued 4,000 marriage certificates (which weren’t recognized by law) to others who wed virtual partners. Like Carl Clarke, Akihiko said his feelings were true: “I love her and see her as a real woman.”

At least he saw her as a real woman until Gatebox stopped supporting the software that gave Hatsune life. Then she disappeared forever.

    Kind of like Google Glass.

    Grandparenting in a Wired World

    You might have missed it, but last Sunday was Grandparents Day. And the world has a lot of grandparents. In fact, according to an article in The Economist (subscription required), at no time in history has the ratio of grandparents to grandchildren been higher.

The boom in Boomer and Gen X grandparents was statistically predictable. Since 1960, global life expectancy has jumped from 51 years to 72 years. At the same time, the number of children a woman can expect to have in her lifetime has been halved, from 5 to 2.4. Those two trendlines mean that the ratio of grandparents to children under 15 has vaulted from 0.46 in 1960 to 0.8 today. According to a little research The Economist conducted, it’s estimated that there are 1.5 billion grandparents in the world.

    My wife and I are two of them.

    So – what does that mean to the three generations involved?

Grandparents have historically served two roles. First, they (and by “they” I typically mean the grandmother) provided an extra set of hands to help with child rearing. And that makes a significant difference to the child, especially if they were born in an underdeveloped part of the world. Children in poorer nations with actively involved grandparents have a higher chance of survival. And in sub-Saharan Africa, a child living with a grandparent is more likely to go to school.

    But what about in developed nations, like ours? What difference could grandparents make? That brings us to the second role of grandparents – passing on traditions and instilling a sense of history. And with the western world’s obsession with fast forwarding into the future, that could prove to be of equal significance.

    Here I have to shift from looking at global samples to focussing on the people that happen to be under our roof. I can’t tell you what’s happening around the world, but I can tell you what’s happening in our house.

First of all, when it comes to interacting with a grandchild, gender-specific roles are not as tightly bound in my generation as they were in previous generations. My wife and I pretty much split the grandparenting duties down the middle. It’s a coin toss as to who changes the diaper. That would be unheard of in my parents’ generation. Grandpa seldom pulled a diaper patrol shift.

    Kids learn gender roles by looking at not just their parents but also their grandparents. The fact that it’s not solely the grandmother that provides nurturing, love and sustenance is a move in the right direction.

    But for me, the biggest role of being “Papa” is to try to put today’s wired world in context. It’s something we talk about with our children and their partners. Just last weekend my son-in-law referred to how they think about screen time with my 2-year-old grandson: Heads up vs Heads down.  Heads up is when we share screen time with the grandchild, cuddling on the couch while we watch something on a shared screen. We’re there to comfort if something is a little too scary, or laugh with them if something is funny. As the child gets older, we can talk about the themes and concepts that come up. Heads up screen time is sharing time – and it’s one of my favorite things about being a “Papa”.

Heads down screen time is when the child is watching something on a tablet or phone by themselves, with no one sitting next to them. As they get older, this type of screen time becomes the norm, and instead of a parent or grandparent hitting the play button to keep them occupied, they start finding their own diversions. When we talk about the potential damage too much screen time can do, I suspect a lot of that comes from “heads down” screen time. Grandparents can play a big role in promoting a healthier approach to the many screens in our lives.

    As mentioned, grandparents are a child’s most accessible link to their own history. And it’s not just grandparents. Increasingly, great grandparents are also a part of childhood. This was certainly not the case when I was young. I was at least a few decades removed from knowing any of my great grandparents.

    This increasingly common connection gives yet another generational perspective. And it’s a perspective that is important. Sometimes, trying to bridge the gap across four generations is just too much for a young mind to comprehend. Grandparents can act as intergenerational interpreters – a bridge between the world of our parents and that of our grandchildren.

In my case, my mother-in-law and father-in-law were immigrants from Calabria in Southern Italy. Their childhood reality was set in World War Two. Their history spans experiences that would be hard for a child today to comprehend – the constant worry of food scarcity, having to leave their own grandparents (and often parents) behind to emigrate, struggling to cope in a foreign land far away from their family and friends. I believe that the memories of these experiences cannot be forgotten. It is important to pass them on, because history is important. One of my favorite recent movie quotes was in “The Holdovers” and came from Paul Giamatti (who also had grandparents who came from Southern Italy):

    “Before you dismiss something as boring or irrelevant, remember, if you truly want to understand the present or yourself, you must begin in the past. You see, history is not simply the study of the past. It is an explanation of the present.”

    Grandparents can be the ones that connect the dots between past, present and future. It’s a big job – an important job. Thank heavens there are a lot of us to do it.

    The Relationship Between Young(er) People and Capitalism: It’s Complicated

    If you, like me, spend any time hanging out with Millennials or Gen Z’s, you’ll know that capitalism is not their favorite thing. That’s fair enough. I have my own qualms with capitalism.

    But with capitalism, like most things, it’s not really what you say about it that counts. It’s what you do about it. And for all of us, Millennials and Gen Z included, we can talk all we want, but until we stop buying, nothing is going to change. And – based on a 2019 study from Epsilon – Gen Z and Millennials are outspending Baby Boomers in just about every consumer category.

Say all the nasty stuff you want about capitalism and our consumption-obsessed society, but the truth is – buying shit is a hard habit to break.

It’s not that hard to trace how attitudes towards capitalism have shifted over the generations that have been born since World War II, at least in North America. For four decades after the war, capitalism was generally thought to be a good thing, if only because it was juxtaposed against the bogeyman of socialism. Success was defined by working hard to get ahead, which led to all good things: buying a house and paying off the mortgage, having two vehicles in the garage and having a kitchen full of gleaming appliances. The capitalist era peaked in the 1980s, during the reigns of Ronald Reagan in the US and Margaret Thatcher in the UK.

    But then the cracks of capitalism began to show. We began to realize the Earth wasn’t immune to being relentlessly plundered. We started to see the fabric of society showing wear and tear from being constantly pulled by conspicuous consumerism. With the end of the Cold War, the rhetoric against socialism began to be dialed down. Generations who grew up during this period had – understandably – a more nuanced view towards capitalism.

    Our values and ethics are essentially formed during the first two decades of our lives. They come in part from our parents and in part from others in our generational cohort. But a critical factor in forming those values is also the environment we grow up in. And for those growing up since World War II, media has been a big part of that environment. We are – in part – formed by what we see on our various screens and feeds. Prior to 1980, you could generally count on bad guys in media being Communists or Nazis. But somewhere mid-decade, CEOs of large corporations and other Ultra-Capitalists started popping up as the villains.

    I remember what the journalist James Fallows once said when I met him at a conference in communist China. I was asking how China managed to maintain the precarious balance between a regime based on Communist ideals and a society that embraced rampant entrepreneurialism. He said that as long as each generation believed that their position tomorrow would be better than it was yesterday, they would keep embracing the systems of today.

    I think the same is true for generational attitudes towards capitalism. If we believed it was a road to a better future, we embraced it. But as soon as it looked like it might lead to diminishing returns, attitudes shifted. A recent article in The Washington Post detailed the many, many reasons why Americans under 40 are so disillusioned about capitalism. Most of it relates back to the same reason Fallows gave – they don’t trust that capitalism is the best road to a more promising tomorrow.

    And this is where it gets messy with Millennials and Gen Z. If they grew up in the developed world, they grew up in a largely capitalistic society. Pretty much everything they understand about their environment and world has been formed, rightly or wrongly, by capitalism. And that makes it difficult to try to cherry-pick your way through an increasingly problematic relationship with something that is all you’ve ever known.

    Let’s take their relationship with consumer brands, for example. Somehow, Millennials and Gen Z have managed the nifty trick of separating branding and capitalism. This is, of course, a convenient illusion. Brands are inextricably tied to capitalism. And Millennials and Gen Z are just as strongly tied to their favorite brands.

     According to a 2018 study from Ipsos, 57% of Millennials in the US always try to buy branded products. In fact, Millennials are more likely than Baby Boomers to say they rely on the brands they trust. This also extends to new brand offerings. A whopping 84% of Millennials are more likely to trust a new product from a brand they already know.

But – you may counter – it all depends on what the brand stands for. If it is a “green” brand that aligns with the values of Gen Z and Millennials, then a brand may actually be anti-capitalistic.

    It’s a nice thought, but the Ipsos survey doesn’t support it. Only 12% of Millennials said they would choose a product or service because of a company’s responsible behavior and only 16% would boycott a product based on irresponsible corporate behavior. These numbers are about the same through every generational cohort, including Gen X and Baby Boomers.

I won’t even delve into the thorny subject of “greenwashing” and the massive gap between what a brand says it does in its marketing and what it actually does in the real world. No one has defined what we mean by an “ethical corporation,” and until someone does and puts some quantifiable targets around it, companies are free to say whatever they want when it comes to sustainability and ethical behavior.

    This same general disconnect between capitalism and marketing extends to advertising. The Ipsos study shows that – across all types of media – Millennials pay more attention to advertising than Baby Boomers and Gen X. And Millennials are also more likely to share their consumer opinions online than Boomers and Gen X. They may not like capitalism and consumerism, but they are still buying lots of stuff and talking about it.

    The only power we have to fight the toxic effects of capitalism is with our wallets. Once something becomes unprofitable, it will disappear. But – as every generation is finding out – ethical consumerism is a lot easier said than done.

    Why Time Seems to Fly Faster Every Year

    Last week, I got an email congratulating me on being on LinkedIn for 20 years.

    My first inclination was that it couldn’t be twenty years. But when I did the mental math, I realized it was right.  I first signed up in 2004. LinkedIn had just started 2 years before, in 2002.

LinkedIn would have been my first try at a social platform. I couldn’t see the point of MySpace, which started in 2003. And I was still a couple of years away from even being aware Facebook existed. It started in 2004, but it was still known as TheFacebook. It wouldn’t become open to the public until 2006, two years later, after it dropped the “The”. So, 20 years pretty much marks the entire span of my involvement with social media.

    Twenty years is a significant chunk of time. Depending on your genetics, it’s probably between a quarter and a fifth of your life. A lot can happen in 20 years. But we don’t process time the same way as we get older. 20 years when you’re 18 seems like a lot bigger chunk of time than it does when you’re in your 60’s.

    I always mark these things in my far-off distant youth by my grad year, which was in 1979. If I use that as the starting point, rolling back 20 years would take me all the way to 1959, a year that seemed pre-historic to me when I was a teenager. That was a time of sock hops, funny cars with tail fins, and Frankie Avalon. These things all belonged to a different world than the one I knew in 1979. Ancient Rome couldn’t have been further removed from my reality.

    Yet, that same span of time lies between me and the first time I set up my profile on LinkedIn. And that just seems like yesterday to me. This all got me wondering – do we process time differently as we age? The answer, it turns out, is yes. Time is time – but the perception of time is all in our heads.

    The reason why we feel time “flies” as we get older was explained in a paper published by Professor Adrian Bejan. In it, he states, “The ‘mind time’ is a sequence of images, i.e. reflections of nature that are fed by stimuli from sensory organs. The rate at which changes in mental images are perceived decreases with age, because of several physical features that change with age: saccades frequency, body size, pathways degradation, etc. “

    So, it’s not that time is moving faster, it’s just that our brain is processing it slower. If our perception of time is made up of mental snapshots of what is happening around us, we simply become slower at taking the snapshots as we get older. We notice less of what’s happening around us. I suspect it’s a combination of slower brains and perhaps not wanting to embrace a changing world quite as readily as we did when we were young. Maybe we don’t notice change because we don’t want things to change.
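Here’s a rough way to see the arithmetic behind that argument. The snapshot rates below are hypothetical – Bejan’s paper doesn’t reduce to a formula this simple – but they illustrate how a slowing mental “frame rate” makes the same calendar year feel shorter:

```python
# A hypothetical illustration of the "fewer mental snapshots" argument.
# The rates below are made up; only the relationship matters.

def snapshots_per_year(age: int) -> float:
    """Assume the rate of perceived mental images falls about 1.5% per year of age."""
    base_rate = 1_000_000  # arbitrary baseline
    return base_rate * (1 - 0.015) ** age

def perceived_year_length(age: int, reference_age: int = 18) -> float:
    """How long a year at `age` feels, relative to a year at `reference_age`."""
    return snapshots_per_year(age) / snapshots_per_year(reference_age)

print(round(perceived_year_length(18), 2))  # 1.0  -- the benchmark year
print(round(perceived_year_length(63), 2))  # 0.51 -- the same year feels about half as long
```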

If we were using a more objective yardstick (speaking of which, when is the last time you actually used a yardstick?), I’m guessing the world changed at least as much between 2004 and 2024 as it did between 1959 and 1979. If I were 18 years old today, I’m guessing that Britney Spears, The Lord of the Rings and the last episode of Frasier would seem as ancient to me as a young Elvis, Ben-Hur and The Danny Thomas Show seemed to me then.

To me, all these things seem like they were just yesterday. Which is probably why it comes as a bit of a shock to see a picture of Britney Spears today. She doesn’t look like the 22-year-old we remember from what we mistakenly think of as just a few years ago. But Britney is 42 now, and as a 42-year-old, she’s held up pretty well.

    And, now that I think of it, so has LinkedIn. I still have my profile, and I still use it.

    Why The World No Longer Makes Sense

    Does it seem that the world no longer makes sense? That may not just be you. The world may in fact no longer be making sense.

In the late 1960s, psychologist Karl Weick introduced the world to the concept of sensemaking, but we were making sense of things long before that. It’s the mental process we go through to try to reconcile who we believe we are with the world in which we find ourselves. It’s how we give meaning to our life.

    Weick identified 7 properties critical to the process of sensemaking. I won’t mention them all, but here are three that are critical to keep in mind:

    1. Who we believe we are forms the foundation we use to make sense of the world
    2. Sensemaking needs retrospection. We need time to mull over new information we receive and form it into a narrative that makes sense to us.
    3. Sensemaking is a social activity. We look for narratives that seem plausible, and when we find them, we share them with others.

    I think you see where I’m going with this. Simply put, our ability to make sense of the world is in jeopardy, both for internal and external reasons.

    External to us, the quality of the narratives that are available to us to help us make sense of the world has nosedived in the past two decades. Prior to social media and the implosion of journalism, there was a baseline of objectivity in the narratives we were exposed to. One would hope that there was a kernel of truth buried somewhere in what we heard, read or saw on major news providers.

But that’s not the case today. Sensationalism has taken over journalism, driven by the need to stay profitable by showing ads to an increasingly polarized audience. In the process, it’s dragged the narratives we need to make sense of the world to the extremes that lie on either end of common sense.

    This wouldn’t be quite as catastrophic for sensemaking if we were more skeptical. The sensemaking cycle does allow us to judge the quality of new information for ourselves, deciding whether it fits with our frame of what we believe the world to be, or if we need to update that frame. But all that validation requires time and cognitive effort. And that’s the second place where sensemaking is in jeopardy: we don’t have the time or energy to be skeptical anymore. The world moves too quickly to be mulled over.

In essence, sensemaking is how we create a model of the world that we can use without having to think too much. It’s our own proxy for reality. And, as a model, it is subject to all the limitations that come with modeling. As the British statistician George E.P. Box said, “All models are wrong, but some are useful.”

What Box didn’t say is that the more wrong our model is, the less likely it is to be useful. And that’s the looming issue with sensemaking. The model we use to determine what is real is becoming less and less tethered to actual reality.

    It was exactly that problem that prompted Daniel Schmachtenberger and others to set up the Consilience Project. The idea of the Project is this – the more diversity in perspectives you can include in your model, the more likely the model is to be accurate. That’s what “consilience” means: pulling perspectives from different disciplines together to get a more accurate picture of complex issues.  It literally means the “jumping together” of knowledge.

    The Consilience Project is trying to reverse the erosion of modern sensemaking – both from an internal and external perspective – that comes from the overt polarization and the narrowing of perspective that currently typifies the information sources we use in our own sensemaking models.  As Schmachtenberger says,  “If there are whole chunks of populations that you only have pejorative strawman versions of, where you can’t explain why they think what they think without making them dumb or bad, you should be dubious of your own modeling.”

    That, in a nutshell, explains the current media landscape. No wonder nothing makes sense anymore.

    Can Media Move the Overton Window?

    I fear that somewhere along the line, mainstream media has forgotten its obligation to society.

It was 63 years ago (on May 9, 1961) that new Federal Communications Commission Chair Newton Minow gave his famous speech, “Television and the Public Interest,” to the convention of the National Association of Broadcasters.

    In that speech, he issued a challenge: “I invite each of you to sit down in front of your own television set when your station goes on the air and stay there, for a day, without a book, without a magazine, without a newspaper, without a profit and loss sheet or a rating book to distract you. Keep your eyes glued to that set until the station signs off. I can assure you that what you will observe is a vast wasteland.”

Minow was saying that media has an obligation to set the cultural and informational boundaries for society. The higher you set them, the more we will strive to reach them. That point was a callback to the Fairness Doctrine, established by the FCC in 1949. The policy required “holders of broadcast licenses to present controversial issues of public importance and to do so in a manner that fairly reflected differing viewpoints.” The Fairness Doctrine was abolished by the FCC in 1987.

What Minow realized, presciently, was that mainstream media is critically important in building the frame for what would come to be called, three decades later, the Overton Window. First identified by policy analyst Joseph Overton at the Mackinac Center for Public Policy, the term would be named after Overton posthumously by his colleague Joseph Lehman.

The term is typically used to describe the range of topics suitable for public discourse in the political arena. But, as Lehman explained in an interview, the boundaries are not set by politicians: “The most common misconception is that lawmakers themselves are in the business of shifting the Overton Window. That is absolutely false. Lawmakers are actually in the business of detecting where the window is, and then moving to be in accordance with it.”

    I think the concept of the Overton Window is more broadly applicable than just within politics. In almost any aspect of our society where there are ideas shaped and defined by public discourse, there is a frame that sets the boundaries for what the majority of society understands to be acceptable — and this frame is in constant motion.

    Again, according to Lehman,  “It just explains how ideas come in and out of fashion, the same way that gravity explains why something falls to the earth. I can use gravity to drop an anvil on your head, but that would be wrong. I could also use gravity to throw you a life preserver; that would be good.”

    Typically, the frame drifts over time to the right or left of the ideological spectrum. What came as a bit of a shock in November of 2016 was just how quickly the frame pivoted and started heading to the hard right. What was unimaginable just a few years earlier suddenly seemed open to being discussed in the public forum.

Social media was held to blame. In a New York Times op-ed written just after Trump was elected president (a result that stunned mainstream media), columnist Farhad Manjoo said, “The election of Donald J. Trump is perhaps the starkest illustration yet that across the planet, social networks are helping to fundamentally rewire human society.”

    In other words, social media can now shift the Overton Window — suddenly, and in unexpected directions. This is demonstrably true, and the nuances of this realization go far beyond the limits of this one post to discuss.

    But we can’t be too quick to lay all the blame for the erratic movements of the Overton Window on social media’s doorstep.

    I think social media, if anything, has expanded the window in both directions — right and left. It has redefined the concept of public discourse, moving both ends out from the middle. But it’s still the middle that determines the overall position of the window. And that middle is determined, in large part, by mainstream media.

    It’s a mistake to suppose that social media has completely supplanted mainstream media. I think all of us understand that the two work together. We use what is discussed in mainstream media to get our bearings for what we discuss on social media. We may move right or left, but most of us realize there is still a boundary to what is acceptable to say.

    The red flags start to go up when this goes into reverse and mainstream media starts using social media to get its bearings. If you have the mainstream chasing outliers on the right or left, you start getting some dangerous feedback loops where the Overton Window has difficulty defining its middle, risking being torn in two, with one window for the right and one for the left, each moving further and further apart.

    Those who work in the media have a responsibility to society. It can’t be abdicated for the pursuit of profit or by saying they’re just following their audience. Media determines the boundaries of public discourse. It sets the tone.

    Newton Minow was warning us about this six decades ago.