Meta’s Social Media Battle Plan

My fellow Media Insider Maarten Albarda called it “The Big Tobacco Moment for Social Media” in his post last week. Then, just yesterday, Steve Rosenbaum added that the K.G.M. v. Meta Platforms case “signals a shift that cuts directly through the core defense platforms have relied on for decades.”

It was a seismic decision, and I’m pretty sure the conference room doors at 1 Meta Way, Menlo Park, California are closed as sweaty lawyers and Meta staff roll out the whiteboards (or the Meta Quest virtual reality equivalent) and roll up their sleeves to assess the potential damage and draw up a battle plan. Let’s take a moment to speculate about what they may be talking about.

In at least one of those conference rooms, Meta’s legal team is assessing one line of defence, which I’ll call Project “Hail Mary,” tapping into the current pop-culture Zeitgeist. This involves an appeal of the $6 million decision. It’s not this case that’s worrying them. It’s the thousands waiting in the queue for the legal precedent to be set. The Meta legal team will be spending much of the foreseeable future in a courtroom. Even they know the chances of a successful appeal are slim.

The second line of defence is to quantify the impact on Meta’s bottom line if the appeal is not successful. So let’s unpack that, because it deals with the elephant in the room, touched on in both Steve’s and Maarten’s posts: Is this the beginning of a slippery slope that will lead to the dismantling of algorithmic ad targeting and the demise of the endless scroll for everyone, or just for legal minors?

If we follow the lead of Australia, the first country to implement a ban on social media, it will be just minors – those under 16. The legislation was passed late last year, and the ban officially took effect on December 10, 2025.

There are several countries around the world looking at implementing a similar ban, including Canada. Most are watching to see how Australia implements and polices its ban, as there are several thorny issues at play here. The countries seriously looking at it tend to share a similar legislative sentiment with Australia when it comes to consumer rights and privacy concerns. 

The U.S., under the current administration, is the least likely to implement federal restrictions on social media. Still, that is not keeping several states from introducing their own legislation. What the K.G.M. v. Meta decision does do is move the debate from the arena of federally controlled media to that of state-controlled online safety, privacy and mental health concerns. All will be watching the pending suits, which will likely fill up dockets in U.S. courts for the next few years at least.

Given the international aspect of this, it’s instructive to look at how Meta’s revenues break down by region.

The biggest share, 39%, is the U.S. and Canada, but 94% of that comes from the U.S. We’re a Meta rounding error up here.

The Asia-Pacific region is the second biggest market – with 26.8% of global revenues. While the user numbers are huge, the revenue per user is much smaller than in North America. Several countries in this market are considering some type of age-based restriction on social media usage – largely driven by the academic concerns of parents and educators in China, Japan and Korea.

Next is Europe, with 23.2% of Meta’s revenue pie. If there is any jurisdiction likely to follow Australia’s lead, it’s the E.U., which has consistently shown leadership in implementing privacy-protection legislation.

Finally, there is the rest of the world, which collectively accounts for about 11% of Meta revenues. When you consider this includes all of Africa, all South America and whatever else is left, you can appreciate that attitudes towards legislation will be all over the map, both literally and figuratively.

Still, let’s say that a significant chunk of Meta’s revenue – say about 30 to 40% – comes from regions likely to pass legislation similar to Australia’s. But that will almost certainly be directed only at minors younger than 16, who today make up less than 10% of Meta’s user base (between Instagram and Facebook). All those young people have gone to TikTok (where they make up 25% of the user base).

So, what Meta’s financial planners are probably talking about is the fact that – even in a worst-case legal scenario – we’re talking about 3 to 4% of their total user base that may be legislatively restricted in some form or another. If you’re in triage mode, that’s not severe enough to consider major surgery or amputation. Probably a band-aid will do the trick.
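For anyone who wants to check the back-of-the-envelope math, here it is as a few lines of Python. The inputs are the rough estimates from this post (the 30-40% regional revenue share and the sub-10% under-16 user share), not reported Meta figures:

```python
# Back-of-the-envelope worst-case exposure, using this post's ballpark
# figures. These inputs are illustrative assumptions, not reported Meta data.

revenue_share_restricting_regions = 0.35  # midpoint of the 30-40% range
under_16_share_of_users = 0.10            # upper bound on the under-16 share

# Rough logic: the exposed slice is the overlap of the two shares
restricted_share = revenue_share_restricting_regions * under_16_share_of_users

print(f"Potentially restricted: about {restricted_share:.1%} of the user base")
# lands at roughly 3.5%, inside the 3 to 4% range cited above
```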

The Most Canadian of Social Networks

It may be the most polite social network in the world. It’s Hey.Cafe – a Facebook alternative built by Canadians for Canadians.

I first heard about Hey.Cafe through a reel on Facebook (oh, the irony) from Tod Maffin, a former CBC radio host, author and podcaster. Prompted by the not-so-veiled threats coming from south of the border, Tod’s been on a “buy Canadian” campaign for several months now, and that has recently extended to Canadian alternatives to the big social media platforms. It was Tod who suggested to every Canadian listening (currently about 10,000,000 a week, according to Tod’s website) that we check out Hey.Cafe.

So, I did. It turned out that Anthony Lee, the creator of Hey.Cafe, lives about an hour down the highway from me, here in the heart of beautiful British Columbia. So I reached out and we had a chat – a nice, polite Canadian chat. Because that’s how we do things up here.

The first thing I learned, which was a surprise, is that Hey.Cafe is not new. In fact, it’s been around since 2001. That means there was a version of Hey.Cafe before there was ever a Facebook (which started in 2004). In addition to running a tech support company out of Penticton, BC, Anthony has been developing alternatives to the major social media platforms for the better part of three decades now: “Whenever I thought, ‘Oh, I think I have an idea,’ I’d make some changes, that kind of stuff. But it definitely wasn’t a sit down and work on it all day thing, unless I had some time free that I was just like, ‘Yeah, I’ll spend this week working on stuff.’”

Then I asked the obvious question, “Why now? Why is Hey.Cafe suddenly gaining attention?”

There is the “buy Canadian” thing, of course. But Anthony said it’s more than just Canadians being fed up with an American president and his bluster. We’re also fed up with social media founders who have their noses firmly pressed up against said President’s posterior simply because it’s good for business.

And let’s not even get into the simmering cesspool every major social media platform has become, driven by an ad-obsessed business model that monetizes eyeballs at the expense of ethics. Lee concurred, “It’s all about algorithm for them. They don’t care if it’s someone you follow or not. If, if it looks like it’s gonna make some attention, whether it be good or bad, they’re gonna push it in the feed.”

So, are Canadians kicking Hey.Cafe’s tires like a rink-side Zamboni? Yes, finally. Thanks to the plug from Tod Maffin, users shot up from about 5,000 to over 40,000 in two weeks. And it’s still growing. Because it’s still a side-of-the-desk project, Anthony had to cap new accounts at 250 an hour.

Now, those numbers are infinitesimal compared to any of the major platforms, but they do signal a willingness by Canadians to try something not tied to business practices we don’t agree with. At the same time, it does bring up the elephant in the room for anyone going up against Facebook or any of the big platforms – the curse of Metcalfe’s Law. Metcalfe’s Law – named after Ethernet pioneer Robert Metcalfe – says that the value of a network is proportional to the square of the number of connected users. Having a telephone isn’t much use if no one else has one. For networks, bigger = better. And Facebook is currently 75,000 times bigger than Hey.Cafe.
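Metcalfe’s Law is simple enough to sketch in a few lines of Python. The Facebook user count below is a round-number assumption chosen only to reproduce the 75,000x figure above, not an official statistic:

```python
# Metcalfe's Law: a network's value grows with the square of its user count.
# User counts here are illustrative round numbers, not official figures.

def metcalfe_value(users: int) -> int:
    """Relative network value under Metcalfe's Law (proportional to n^2)."""
    return users * users

hey_cafe_users = 40_000
facebook_users = 3_000_000_000  # assumed round number matching the 75,000x claim

size_ratio = facebook_users // hey_cafe_users
value_ratio = metcalfe_value(facebook_users) // metcalfe_value(hey_cafe_users)

print(f"Facebook is {size_ratio:,}x bigger in users,")
print(f"but {value_ratio:,}x more valuable under Metcalfe's Law")
```

The point of the squared term is the brutal part: being 75,000 times smaller in users means being billions of times smaller in theoretical network value, which is why challengers to entrenched networks have such a steep hill to climb.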

Given that, does Hey.Cafe stand a chance? I hope so. I supported it with a one-year subscription because I would love to see Anthony Lee’s side project survive and – hopefully – succeed. I did go on and post a few things. I even started a new “café” – Hey.Cafe’s version of a Facebook Group. So far, nothing much has happened there, but we’ll politely wait and see. Again, that’s how we do things up here.

What I did find, however, is a community that seems genuinely, politely happy to be there. And not all of them are Canadian. This was a post from a nurse newly arrived from the U.S.: “Newly landed nurse practitioner from Oregon via Boston (long story). Love the concept of no ads and AI. Now to find some other communities, Bernese Mountain Dogs and skiing!”

I did ask Anthony, given the audience MediaPost (where this post also runs) reaches, if there’s any message he’d like to pass on. For media buyers especially, he offered this, “Whether it be HeyCafe, Bluesky, Mastodon, (consider) using more services that aren’t the big three players. Use more stuff that puts you in the spotlight of communities that are all over the place.”

While Anthony would love for Hey.Cafe to be economically sustainable, maybe the take-away here is not so much about financial success. Maybe these are Canadians signalling a change in our attitude. It’s as if we’ve been in an abusive relationship with Facebook for years but have put up with it because it’s been too hard to leave. But, at some point in abusive relationships, there comes a red line which, once crossed, starts you planning your exit. It doesn’t happen immediately. It may not happen at all, but there is a significant mental shift where you become aware of how toxic the relationship really is and you start planning a life free from that toxicity.

For 40,000 Canadians and wannabe Canadians – at least – that switch may have happened.

How Seniors Get Sucked into Falling for Bad Information

It happened to me last Thursday. I was tired, I was jet-lagged and I was feeling like garbage. My defenses were down. So, before I realized it, I was spinning down a social media sewer spiral. My thumb took over, doom-scrolling through post after post offering very biased commentary on the current state of the world, each reinforcing just how awful things are. Little was offered in the way of factual backup, and I didn’t bother looking for it. My mood plummeted. I alternated between paranoia, outrage and depression. An hour flew by as my brain was hijacked by a feckless feed.

And I know better. I really do. Up in my prefrontal cortex, I knew I was being sucked into a vicious vortex of AI slop and troll baiting. Each time I scrolled down, I would tell myself, “Okay, this is the last one. After this, put the phone down.” And each time, my thumb would ignore me.

This is not news to any of us. Every one of you reading this knows about the addictive nature of social media. And you also know the pernicious impacts of AI-generated content spoon-fed to us by an algorithm whose sole purpose is to hog-tie our willpower and keep our eyes locked on the screen. I also suspect that you, like me, think that because we know all this, we have built up at least some immunity to the siren call of social media.

But I’m here to tell you that social media has gotten really, really good at being really, really awful for us. I didn’t notice it so much when I was on my game, busy doing other things and directing my attention with a fully functional executive brain. But the minute my guard slipped, the minute my cognitive capacity shifted down into a lower gear, I was sucked into the misinformational sh*thole that is social media.

Being a guy that likes to ask why, I did exactly that when the jet lag finally dissipated. Why did I, a person who should know better, fall into the crappy content trap?  “Maybe,” I said to myself reluctantly, “it’s a generational thing.” Maybe brains of a certain age are more susceptible to being cognitively hijacked and led astray.

A recent study from the University of Utah does lend some credence to that theory. Researchers found that adults older than 60 were more likely to share misinformation online than younger people. This was true for information about health, but a prior study showed an even higher tendency to swallow bad information when it came to politics.

Lead researcher Ben Lyons set out to find why those of us north of 60 are more likely to be led astray by online misinformation. Spoiler alert – it doesn’t have anything to do with our brains slowing down or lower information literacy rates. It appears that older people can sniff out bullshit just as well as younger people. But it turns out that if that information, no matter how dubious it is, matches our own beliefs and world view, we’ll happily share it even if it doesn’t pass the smell test.

Lyons called this congeniality bias. I’ve talked before about the sensemaking cycle. In it, new information is matched to our existing belief schema. If it’s a match, we usually accept it without a lot of qualification. If it isn’t, we can choose to reject it or we can reframe our beliefs based on the new information. The second option is a lot more work and, it seems, the older we get the less likely we are to do this heavy lifting. As we age, we get more fully locked into who we are and what we believe. We’ve spent a lot of years building our beliefs, and so we’re reluctant to stray from them.

Of course, like all things human, this tendency is neither a given nor universally applied. Some older people are naturally more skeptical, and some are more inflexible in their beliefs. Not surprisingly, Lyons found that those who lean right in their political affiliations tend to be more belief-bound.

But, as I discovered this past Thursday, these information filtering tendencies are dependent on our moods and cognitive capacity. I am a naturally skeptical person and like to think I’m usually pretty picky about my information sources. But this is true only when I’m on my game. The minute my brain down-shifted, I began accepting dubious information at face value simply because I happened to agree with it. I didn’t bother checking to make sure it was true.

It sounded true, and that was all that mattered.

What Authoritarianism Gets Wrong

Like the rest of the world, my attention and intentions got hijacked over the weekend by what is happening in Minneapolis. I did not intend to write this post, but I feel I must.

What is happening right now is – plain and simple – authoritarianism. Some – like Jonathan Rauch in The Atlantic – have used the word fascism. Whatever label you put on it, it has the same flawed logic behind it – the belief that might makes right. It’s the same calculus of cruelty and coercion that the schoolyard bully uses: I’m bigger than you, so do what I want you to do.

Here’s the problem with that formula. Resolve, resistance and resiliency aren’t things that can be consistently quantified. They are not static. The bewildering thing about humans when we’re faced with a crisis is this: the harder you push, the harder we’ll push back.

This is the reality of the red line. We accept adversity only up to a certain point. Past that point, individual concerns give way to the greater good. We join together into a coalition, dismantling the smaller walls that used to separate us, uniting to fight a greater enemy that threatens us all. Rather than being beaten down by adversity, we are raised up by it.

We have always done this. Journalist Sebastian Junger documents one example in his excellent book Tribe: On Homecoming and Belonging. During the London Blitz, Hitler believed he could bomb Londoners into submission. For 56 days he tried, dropping over 12,000 tonnes of bombs on the city, sure that it would break the will of Londoners. On one day alone, in September 1940, over 700 tonnes of high explosives and 86,000 incendiaries fell, killing 1,436 people. But the resolve of Londoners never wavered. In fact, it grew with adversity. They kept calm and carried on.

I’ve seen it firsthand in my own community. Our city, Kelowna, B.C., has been threatened with wildfires a number of times. In 2003, our city of 150,000 lost over 200 homes in one night and one third of the city was evacuated.

I have never seen this city come together like it did then. Neighbours helped neighbours. Those of us who weren’t evacuated opened our homes to those that were. In many cases, spare bedrooms and pull-out couches were occupied by total strangers. Crisis centers were swamped with offers of food, clothing, blankets and volunteer assistance.

This is how we’re wired. We band together in times of trouble. We are tribal creatures. As Junger found in his research, psychological health actually seems to improve in times of crisis. He cites a 1961 paper by American sociologist Charles Fritz, which opens with this question: “Why do large-scale disasters produce such mentally healthy conditions?” Junger writes, “Fritz’s theory was that modern society has gravely disrupted the social bonds that have always characterized the human experience, and that disasters thrust people back into a more ancient, organic way of relating. Disasters, he proposed, create a ‘community of sufferers’ that allows individuals to experience an immensely reassuring connection to others.”

Humans evolved to join together to overcome obstacles. Our modern world doesn’t often ask that of us. But right now, in Minneapolis, that’s exactly what’s happening as thousands of ordinary people are coordinating protection patrols to document authoritarianism. They are using the encrypted Signal platform to communicate and direct observers to emerging trouble areas. They have established their own protocols of behaviour. It is, in the words of Robert F. Worth, again writing in the Atlantic, “a meticulous urban choreography of civic protest.”

At least two Minnesotans have paid the ultimate price: their own lives.

This is the wrench that humans throw into the crushing cogs of authoritarian behaviour: the more you crack down on us, the stronger we will become as we join together to push back against you.

Of all people, Americans should know this. I can think of one more example that is particularly relevant. It happened 250 years ago, when American colonists joined together to protest against the authority of the British Crown.

We shouldn’t forget that.

The Cost of Not Being Curious

The world is having a pandemic-proportioned wave of Ostrichitis.

Now, maybe you haven’t heard of Ostrichitis. But I’m willing to bet you’re showing at least some of the symptoms:

  • Avoiding newscasts, especially those that feature objective and unbiased reporting
  • Quickly scrolling past any online news items in your feed that look like they may be uncomfortable to read
  • Dismissing out of hand information coming from unfamiliar sources

These are the signs of Ostrichitis – or the Ostrich Effect – and I have all of them. This is actually a psychological effect, more pointedly called willful ignorance, which I wrote about a few years ago. And from where I’m observing the world, we all seem to have it to one extent or another.

I don’t think this avoidance of information comes as a shock to anyone. The world is a crappy place right now. And we all seem to have gained comfort from adopting the folk wisdom that “no news is good news.” Processing bad news is hard work, and we just don’t have the cognitive resources to crunch through endless cycles of catastrophic news. If the bad news affirms our existing beliefs, it makes us even madder than we already were. If it runs counter to our beliefs, it forces us to spin up our sensemaking mechanisms and reframe our view of reality. Either way, there are way more fun things to do.

A recent study from the University of Chicago attempted to pinpoint when children start avoiding bad news. The research team found that while young children don’t tend to put boundaries around their curiosity, as they age they start avoiding information that challenges their beliefs or their own well-being. The threshold seems to be about six years old. Before that, children actively seek information of all kinds (as any parent barraged by never-ending “Whys” can tell you). After that, children start strategizing about the types of information they pay attention to.

Now, like everything about humans, curiosity tends to be an individual thing. Some of us are highly curious and some of us avoid seeking new information religiously. But even if we are a curious sort, we may pick and choose what we’re curious about. We may find “safe zones” where we let our curiosity out to play. If things look too menacing, we may protect ourselves by curbing our curiosity.

The unfortunate part of this is that curiosity, in all its forms, is almost always a good thing for humans (even if it can prove fatal to cats).

The more curious we are, the better tied we are to reality. The lens we use to parse the world is something called a sense-making loop. I’ve often referred to this in the past. It’s a processing loop that compares what we experience with what we believe, referred to as our “frame”. For the curious, this frame is often updated to match what we experience. For the incurious, the frame is held on to stubbornly, often by ignoring new information or bending information to conform to their beliefs. A curious brain is a brain primed to grow and adapt. An incurious brain is one that is stagnant and inflexible. That’s why the father of modern-day psychology, William James, called curiosity “the impulse towards better cognition.”

When we think about the world we want, curiosity is a key factor in defining it. Curiosity keeps us moving forward. The lack of curiosity locks us in place or even pushes us backwards, causing the world to regress to a more savage and brutal place. Writers of dystopian fiction knew this. That’s why authors including H.G. Wells, Aldous Huxley, Ray Bradbury and George Orwell all made a lack of curiosity a key part of their bleak future worlds. Our current lack of curiosity is driving our world in the same dangerous direction.

For all these reasons, it’s essential that we stay curious, even if it’s becoming increasingly uncomfortable.

Lilith Fair: A Quarter Century and A Different World Ago

Lilith Fair: Building a Mystery, a new documentary released on Hulu (CBC Gem in Canada), is much more than a chronicle of a music festival. It’s a very timely statement on both the strength and the fragility of community.

Lilith Fair was the festival launched in 1997 by Canadian singer-songwriter Sarah McLachlan. It was conceived as a feminine finger in the eye of a determinedly misogynistic music industry. At the end of the ’90s, despite a boom in talented female singer-songwriters (Tracy Chapman, Jewel, Paula Cole, Sheryl Crow, Natalie Merchant, Shawn Colvin, Lisa Loeb, Suzanne Vega and others too numerous to mention), radio stations wouldn’t play two songs by women back-to-back. Promoters also wouldn’t book two women on the same concert ticket. The feeling, based on nothing other than male intuition, was that it would be too much “femininity” for the audience to handle.

McLachlan, in her charmingly polite Canadian way, said “Fudge you!” and launched her own festival. The first one, in 1997, played almost 40 concerts over 51 days across North America. The line-up was exclusively female – 70 singers in all playing on three stages. Almost every concert sold out. Apparently, there was an audience for female talent. Lilith Fair would be repeated in 1998 and 1999, with both tours being smashing successes.

The world needed Lilith Fair in the late ’90s. It wasn’t only the music industry that was misogynistic and homophobic. It was our society. The women who played Lilith Fair found a community of support unlike anything they had ever experienced in their careers. Performers who had been feeling isolated for years suddenly found support and – more than anything – understanding.

It was women who made the rules and ran the Lilith Fair show. It was okay to perform when you were 8 months pregnant. It was okay to hold your baby onstage as you performed the group encore. It was okay to bring the whole family on tour and let the kids play backstage while you did your set. These were things that were – up until then – totally foreign in the music industry. It was the very definition of community – diverse people having something in common and joining together to deal from a position of strength.

But it didn’t happen overnight. It took a while – and a lot of bumping into each other backstage – for the community to gel. It also needed a catalyst, which turned out to be Amy Ray and Emily Saliers – officially known as the Indigo Girls. It was their out-going friendliness that initially broke the ice “because we were so gay and so puppy dog-like.”

This sense of community extended beyond the stage to the thousands who attended: men and women, old and young, straight and gay. It didn’t matter – Lilith Fair was a place where you would be accepted and understood. As documentary producer Dan Levy (of Schitt’s Creek fame) – who was 12 years old when he attended and had yet to come out – said, “Being there was one of the earliest memories I’ve had of safety.”

The unity and inclusiveness of Lilith Fair stood in stark contrast to another festival of the same era – Woodstock 99. There, toxic masculinity from acts like Limp Bizkit singer Fred Durst and Kid Rock swung the vibe of the event heavily towards anarchy and chaos rather than community.

But while Lilith Fair showed the importance of community, it also showed how fragile it could be. The festival became the butt of jokes on late-night television (including one particularly cringe-worthy one by Jay Leno about Paula Cole’s body hair) and a target for those who sought to diminish its accomplishments and importance. Finally, at the end of the 1999 tour, McLachlan had had enough. The last concert was played in the rain in Edmonton, Alberta on August 31st.

McLachlan did try to revive Lilith Fair in 2010, but it was a complete failure. Whatever lightning in a bottle she had captured the first time was gone. The world had passed it by. The documentary didn’t dwell on this other than offering a few reasons why this might be. Perhaps Lilith Fair wasn’t needed anymore. Maybe it had done its job. After all, women had mounted some of the top tours of that time, including those by Taylor Swift, Madonna, Pink and Lady Gaga.

Or maybe it had nothing to do with the industry. Maybe it had everything to do with us, the audience.

The world of 1999 was a very different place than the world of 2010. Community was in the midst of being redefined from those sharing a common physical location to those sharing a common ideology in online forums. And that type of community didn’t require a coming together. If anything, those types of communities kept us apart, staring at a screen – alone in our little siloes.

According to the American Time Use Survey, time spent socializing in person has been on a steady decline since 2000. This is especially true for those under the age of 25, the prime market for music festivals. When we did venture forth to see a concert, we were looking for spectacle, not community. The world was moving too fast for the slow, sweet magic that made Lilith Fair so special to coalesce.

At the end of the documentary, Sarah McLachlan made it clear that she’ll never attempt to bring Lilith Fair back to life. It was a phenomenon of that time. And that is sad – sad indeed.

When Did the Future Become So Scary?

The TWA hotel at JFK airport in New York gives one an acute case of temporal dissonance. It’s a step backwards in time to the “Golden Age of Travel” – the 1960s. But even though you’re transported back 60 years, it seems like you’re looking into the future. The original space – the TWA Flight Center – was designed in 1962 by Eero Saarinen. This was a time when America was in love with the idea of the future. Science and technology were going to be our saving grace. The future was going to be a utopian place filled with flying jet cars, benign robots and gleaming, sexy white curves everywhere.  The TWA Flight Center was dedicated to that future.

It was part of our love affair with science and technology during the 60s. Corporate America was falling over itself to bring the space-age fueled future to life as soon as possible. Disney first envisioned the community of tomorrow that would become Epcot. Global Expos had pavilions dedicated to what the future would bring. There were four World Fairs over 12 years, from 1958 to 1970, each celebrating a bright, shiny white future. There wouldn’t be another for 22 years.

This fascination with the future was mirrored in our entertainment. Star Trek (pilot in 1964, series start in 1966) invited all of us to boldly go where no man had gone before, namely a future set roughly three centuries from then. For those of us of a younger age, The Jetsons (original series from 1962 to 1963) indoctrinated an entire generation into this religion of future worship. Yes, tomorrow would be wonderful – just you wait and see!

That was then – this is now. And now is a helluva lot different.

Almost no one – especially in the entertainment industry – is envisioning the future as anything other than an apocalyptic hellhole. We’ve done an about-face and are grasping desperately for the past. The future went from being utopian to dystopian, seemingly in the blink of an eye. What happened?

It’s hard to nail down exactly when we went from eagerly awaiting the future to dreading it, but it appears to be sometime during the last two decades of the 20th Century. By the time the clock ticked over to the next millennium, our love affair was over. As Chuck Palahniuk, author of the 1999 novel Invisible Monsters, quipped, “When did the future go from being a promise to a threat?”

Our dread about the future might just be a fear of change. As the future we imagined in the 1960s started playing out in real time, perhaps we realized our vision was a little too simplistic. The future came with unintended consequences, including massive societal shifts. It’s like we collectively told ourselves, “Once burned, twice shy.” Maybe it was the uncertainty of the future that scared the bejeezus out of us.

But it could also be how we got our information about the impact of science and technology on our lives. I don’t think it’s a coincidence that our fear of the future coincided with the decline of journalism. Sensationalism and endless punditry replaced real reporting just about the time we started this about-face. When negative things happened, they were amplified. Fear was the natural result. We felt out of control, and we kept telling ourselves that things never used to be this way.

The sum total of all this was the spread of a recognized psychological affliction called Anticipatory Anxiety – the certainty that the future is going to bring bad things down upon us. This went from being a localized phenomenon (“my job interview tomorrow is not going to go well”) to a widespread angst (“the world is going to hell in a handbasket”). Call it Existential Anticipatory Anxiety.

Futurists are – by nature – optimists. They believe things will be better tomorrow than they are today. In the Sixties, we all leaned into the future. The opposite of this is something called Rosy Retrospection, and it often comes bundled with Anticipatory Anxiety. It is a known cognitive bias that comes with a selective memory of the past, tossing out the bad and keeping only the good parts of yesterday. It makes us yearn to return to the past, when everything was better.

That’s where we are today. It explains the worldwide swing to the right. MAGA is really a 4-letter encapsulation of Rosy Retrospection – Make America Great Again! Whether you believe that or not, it’s a message that is very much in sync with our current feelings about the future and the past.

As writer and right-leaning political commentator William F. Buckley said, “A conservative is someone who stands athwart history, yelling Stop!”

It’s Tough to Consume Conscientiously

It’s getting harder to be both a good person and a wise consumer.

My parents never had this problem when I was a kid. My dad was a Ford man. Although he hasn’t driven for 10 years, he still is. If you grew up in the country, your choices were simple – you needed a pickup truck. And in the 1960s and 70s, there were only three choices: Ford, GMC or Dodge. For dad, the choice was Ford – always.

Back then, brand relationships were pretty simple. We benefited from the bliss of ignorance. Did the Ford Motor Company do horrible things during that time? Absolutely. As just one example, they made a cost-benefit calculation and decided to keep the Pinto on the road even though they knew it tended to blow up when hit from the rear. There is a corporate memo saying – in black and white – that it would be cheaper to settle the legal claims of those who died than to fix the problem. The company was charged with reckless homicide. It doesn’t get less ethical than that.

But that didn’t matter to Dad. He either didn’t know or didn’t care. The Pinto Problem, along with the rest of the shady stuff done by the Ford Motor Company, including bribes, kickbacks and improper use of corporate funds by Henry Ford II, was not part of Dad’s consumer decision process. He still bought Ford. And he still considered himself a good person. The two things had little to do with each other.

Things are harder now for consumers. We definitely have more choices, and those choices are harder, because we know more. Even buying eggs becomes an ethical struggle. Do we save a few bucks, or do we make some chicken’s life a little less horrible?

Let me give you the latest example from my life. Next year, we are planning to take our grandchildren to a Disney theme park. If our family has a beloved brand, it would be Disney. The company has been part of my kids’ lives in one form or another since they were born, and we all want it to be part of their kids’ lives as well.

Without getting into the whole debate, I personally have some moral conflicts with some of Disney’s recent corporate decisions. I’m not alone. A Facebook group for those planning a visit to this particular park has recently seen posts from those agonizing over the same issue. Does taking the family to the park make us complicit in Disney’s actions that we may not agree with? Do we care enough to pull the plug on a long-planned park visit?

This gets to the crux of the issue facing consumers now – how do we balance our beliefs about what is wrong and right with our desire to consume? Which do we care more about? The answer, as it turns out, seems to almost always be to click the buy button as we hold our noses.

One way to make that easier is to tell ourselves that one less visit to a Disney park will make virtually no impact on the corporate bottom line. Depriving ourselves of a long-planned family experience will make no difference. And – individually – this is true. But it’s exactly this type of consumer apathy which, when aggregated, allows corporations to get away with being bad moral characters.

Even if we want to be more ethically deliberate in our consumer decisions, it’s hard to know where to draw the line. Where are we getting our information about corporate behavior from? Can it be trusted? Is this a case of one regrettable action, or is there a pattern of unethical conduct? These decisions are always complex, and coming to any decision that involves complexity is always tricky.

To go back to a simpler time, my grandmother had a saying that she applied liberally to any given situation, “What does all this have to do with the price of tea in China?” Maybe she knew what was coming.

The Credibility Crisis

We in the western world are getting used to playing fast and loose with the truth. There is so much that is false around us – in our politics, in our media, in our day-to-day conversations – that it’s just too exhausting to hold everything to a standard of truth. Even the skeptical amongst us no longer have the cognitive bandwidth to keep searching for credible proof.

This is by design. Somewhere in the past four decades, politicians and society’s power brokers have discovered that by pandering to beliefs rather than trading in facts, you can bend the truth to your will. Those who seek power and influence have struck paydirt in falsehoods.

In a cover story last summer in the Atlantic, journalist Anne Applebaum explains the method in the madness: “This tactic—the so-called fire hose of falsehoods—ultimately produces not outrage but nihilism. Given so many explanations, how can you know what actually happened? What if you just can’t know? If you don’t know what happened, you’re not likely to join a great movement for democracy, or to listen when anyone speaks about positive political change. Instead, you are not going to participate in any politics at all.”

As Applebaum points out, we have become a society of nihilists. We are too tired to look for evidence of meaning. There is simply too much garbage to shovel through to find it. We are pummeled by wave after wave of misinformation, struggling to keep our heads above the rising waters by clinging to the life preserver of our own beliefs. In the process, we run the risk of those beliefs becoming further and further disconnected from reality, whatever that might be. The cogs of our sensemaking machinery have become clogged with crap.

This reverses a consistent societal trend towards the truth that has been happening for the past several centuries. Since the Enlightenment of the 18th century, we have held reason and science as the compass points of our True North. These twin ideals were buttressed by our institutions, including our media outlets. Their goal was to spread knowledge. It is no coincidence that journalism flourished during the Enlightenment. Freedom of the press was constitutionally enshrined to ensure the media had both the right and the obligation to speak the truth.

That was then. This is now. In the U.S. institutions, including media, universities and even museums, are being overtly threatened if they don’t participate in the wilful obfuscation of objectivity that is coming from the White House. NPR and PBS, two of the most reliable news sources according to the Ad Fontes media bias chart, have been defunded by the federal government. Social media feeds are awash with AI slop. In a sea of misinformation, the truth becomes impossible to find. And – for our own sanity – we have had to learn to stop caring about that.

But here’s the thing about the truth. It gives us an unarguable common ground. It is consistent and independent from individual belief and perspective. As longtime senator Daniel Patrick Moynihan famously said, “Everyone is entitled to his own opinion, but not to his own facts.” 

When you trade in falsehoods, the ground is constantly shifting beneath your feet. The story keeps changing to match the current situation and the desired outcome. There are no bearings to navigate by. Everyone has their own compass, and they’re all pointing in different directions.

The path the world is currently going down is troubling in a number of ways, but perhaps the most troubling is that it simply isn’t sustainable. Sooner or later in this sea of deliberate chaos, credibility is going to be required to convince enough people to do something they may not want to do. And if you have consistently traded away your credibility by battling the truth, good luck getting anyone to believe you.

Face Time in the Real World is Important

For all the advances made in neuroscience, we still don’t fully understand how our brains respond to other people. What we do know is that it’s complex.

Join the Chorus

Recent studies, including this one from the University of Rochester, are showing that when we see someone we recognize, the brain responds with a chorus of neuronal activity. Neurons from different parts of the brain fire in unison, creating a congruent response that may simultaneously pull from memory, from emotion, from the rational regions of our prefrontal cortex and from other deep-seated areas of our brain. The firing of any one neuron may be relatively subtle, but together this chorus of neurons can create a powerful response to a person. This cognitive choir represents our total comprehension of an individual.

Non-Verbal Communication

“You’ll have your looks, your pretty face. And don’t underestimate the importance of body language!” – Ursula, The Little Mermaid

Given that we respond to people with different parts of the brain, it makes sense that we use parts of the brain we don’t consciously realize we’re using when communicating with someone else. In 1967, psychologist Albert Mehrabian attempted to pin this down with some actual numbers, publishing a paper in which he put forth what became known as Mehrabian’s Rule: 7% of communication is verbal, 38% is tone of voice and 55% is body language.

Like many oft-quoted rules, this one is typically misquoted. It’s not that words are unimportant when we communicate something. Words convey the message. But it’s the non-verbal part that determines how we interpret the message – and whether we trust it or not.

Folk wisdom has told us, “Your mouth is telling me one thing, but your eyes are telling me another.” In this case, folk wisdom is right. We evolved to respond to another person with our whole bodies, with our brains playing the part of conductor. Maybe the numbers don’t exactly add up to Mehrabian’s neat and tidy ratio, but the importance of non-verbal communication is undeniable. We intuitively pick up incredibly subtle hints: a slight tremor in the voice, a bead of sweat on the forehead, a slight turn down of one corner of the mouth, perhaps a foot tapping or a finger trembling, a split-second darting of the eye. All this is subconsciously monitored, fed to the brain and orchestrated into a judgment about a person and what they’re trying to tell us. This is how we evolved to judge whether we should build trust or lose it.

Face to Face vs Face to Screen

Now, we get to the question you knew was coming, “What happens when we have to make these decisions about someone else through a screen rather than face to face?”

Given that we don’t fully understand how the brain responds to people yet, it’s hard to say how much of our ability to judge whether we should convey trust or withhold it is impaired by screen-to-screen communication. My guess is that the impairment is significant, probably well over 50%. It’s difficult to test this in a laboratory setting, given that it generally requires some type of neuroimaging, such as an fMRI scanner. In order to present a stimulus for the brain to respond to when the subject is strapped in, a screen is really the only option. But common sense tells me – given the sophisticated and orchestrated nature of our brain’s social responses – that a lot is lost in translation from a real-world encounter to a screen recording.

New Faces vs Old Ones

If we think of how our brains respond to faces, we realize that in today’s world, a lot of our social judgements are increasingly made without face-to-face encounters. In a case where we know someone, we will pull forward a snapshot of our entire history with that person. The current communication is just another data point in a rich collection of interpersonal experience. One would think that would substantially increase our odds of making a valid judgement.

But what if we must make a judgement about someone we’ve never met, and have only seen through a screen – be it a TikTok post, an Instagram Reel, a YouTube video or a Facebook post? What if we have to decide whether to believe an influencer when making an important life decision? Are we willing to rely on a fraction of our brain’s capacity when deciding whether to place trust in someone we’ve never met?