The Ten Day Tech Detox

I should have gone cold turkey on tech. I really should have.

It would have been the perfect time – should have been the perfect time.

But I didn’t. As I spent 10 days on BC’s gorgeous Sunshine Coast with family, I also trundled along my assortment of connected gadgets.

But I will say it was a partially successful detox. I didn’t crack open the laptop as much as I usually do. I generally restricted use of my iPad to reading a book.

But my phone – it was my phone, always within reach, that tempted me with social media’s siren call.

In a podcast, Andrew Selepak, social media professor at the University of Florida, suggests that rather than doing a total detox that is probably doomed to fail, you use vacations as an opportunity to use tech as a tool rather than an addiction.

I will say that for most of the time, that’s what I did. As long as I was occupied with something I was fine. 

Boredom is the enemy. It’s boredom that catches you. And the sad thing was, I really shouldn’t have been bored. I was in one of the most beautiful places on earth. I had the company of people I loved. I saw humpback whales – up close – for Heaven’s sake. If ever there was a time to live in the moment, to embrace the here and now, this was it. 

The problem, I realized, is that we’re not really comfortable any more with empty spaces – whether they be in conversation, in our social life or in our schedule of activities. We feel guilt and anxiety when we’re not doing anything.

It was an interesting cycle. As I decompressed after many weeks of being very busy, the first few days were fine. “I need this,” I kept telling myself. It’s okay just to sit and read a book. It’s okay not to have every half-hour slot of the day meticulously planned to jam as much in as possible.

That lasted about 48 hours. Then I started feeling like I should be doing something. I was uncomfortable with the empty spaces.

The fact is, as I learned – boredom has always been part of the human experience. It’s a feature – not a bug. As I said, boredom represents the empty spaces that can be filled with creativity. Alicia Walf, a neuroscientist and senior lecturer in the Department of Cognitive Science at Rensselaer Polytechnic Institute, says it is critical for brain health to let yourself be bored from time to time.

“Being bored can help improve social connections. When we are not busy with other thoughts and activities, we focus inward as well as looking to reconnect with friends and family. 

Being bored can help foster creativity. The eureka moment when solving a complex problem when one stops thinking about it is called insight.

Additionally, being bored can improve overall brain health.  During exciting times, the brain releases a chemical called dopamine which is associated with feeling good.  When the brain has fallen into a predictable, monotonous pattern, many people feel bored, even depressed. This might be because we have lower levels of dopamine.”

That last bit, right there, is the clue to why our phones are particularly prone to being picked up in times of boredom. Actually, three things are at work here. The first is that our mobile devices let us carry an extended social network in our pockets. An article from Harvard explains it this way: “Thanks to the likes of Facebook, Snapchat, Instagram, and others, smartphones allow us to carry immense social environments in our pockets through every waking moment of our lives.”

As Walf said, boredom is our brain’s way of cueing us to seek social interaction. Traditionally, this meant getting the hell out of our cave – or cabin – or castle – and getting some face time with other humans.

But technology has short circuited that. Now, we get that social connection through the far less healthy substitution of a social media platform. And – in the most ironic twist – we get that social jolt not by interacting with the people we might happen to be with, but by each staring at a tiny little screen that we hold in our hand.

The second problem is that mobile devices are not designed to leave us alone, basking in our healthy boredom. They are constantly beeping, buzzing and vibrating to get our attention. 

The third problem is that – unlike a laptop or even a tablet – the mobile device is our device of choice when we are jonesing for a dopamine jolt. It’s our phones we reach for when we’re killing time in a lineup, riding the bus or waiting for someone in a coffee shop. This is why I had a hard time relegating my phone to being just a tool while I was away.

As a brief aside – even the term “killing time” shows how we are scared to death of being bored. That’s a North American saying – boredom is something to be hunted down and eradicated. You know what Italians call it? “Il dolce far niente” – the sweetness of doing nothing. Many are the people who try to experience life by taking endless photos and posting on various feeds, rather than just living it. 

The fact is, we need boredom. Boredom is good, but we are declaring war on it, replacing it with a destructive need to continually bathe our brains in the dopamine high that comes from checking our Facebook feed or the latest TikTok video.

At least one of the architects of this vicious cycle feels some remorse (also from the Harvard article). “I feel tremendous guilt,” admitted Chamath Palihapitiya, former Vice President of User Growth at Facebook, to an audience of Stanford students. He was responding to a question about his involvement in exploiting consumer behavior. “The short-term, dopamine-driven feedback loops that we have created are destroying how society works.”

That is why we have to put the phone down and watch the humpback whales. That, miei amici, is il dolce far niente!

Of Streaming, Satellites and Sunsets

I’ve been out of the loop for the last three weeks as I actually did life stuff. Today, looking to get back into the loop so I could write a column about media, I ran through several emails from MediaPost to see what y’all have been talking about in my absence.

Two caught my eye. The first was a Media Insider from Dave Morgan titled “Cross-Training for Cross-Platform TV.” Dave’s gist, paraphrasing heavily, is that to get a decent audience for high-engagement video ads, we’ll have to get comfortable with fishing in a whole bunch of smaller ponds rather than casting our net in a single ocean.

That checks out. As our entertainment choices and information sources keep multiplying, it’s natural that the big blocks of purchasable attention advertisers used to rely on are getting split into smaller and smaller chunks. This is certainly true for video-based media. In the column, Morgan said the next decade “will mean navigating a mix of linear and streaming TV channels and platforms to have any hope of efficiently reaching audiences at scale.”

Now, I don’t pretend to know anything about buying video ads – Mr. Morgan has certainly forgotten more than I’ll ever know – but I do know this. I recently caught up on a network series by watching it on demand on the network’s streaming platform. The ad execution was abysmal, to say the least. The creative, the delivery and the viewer experience were all excruciating to sit through. By the time I was done, I hated every brand that placed ads through the channel.

If I had to guess, I would say this inventory was treated like an advertising bargain bin – a last-minute throw-in for network advertisers that no one really thought or cared about. Some of the creative wasn’t even designed for the platform. The images didn’t render correctly on the screen (a tablet) I was watching on. Whatever this exposure cost these advertisers, it was completely wasted on this audience of one.

The other item was more of a WTF moment – a column by MediaPost staff writer Wayne Friedman. In the column – “Look Into The Night Sky – You Might See An Ad For Car Insurance” – Friedman tells of a recent study that “looked at the possibility of a ‘space advertising’ mission, where one could advertise in the twilight over a particular urban area or city.”

This would be done by launching a number of satellites into a stable orbit and letting them literally unfurl an advertising banner every night just after sunset.

Again, WTF. Do I want an ad popping up after a spectacular sunset telling me said sunset was brought to me by the MyPillow guy? No.

And knowing that advertisers can be a little obtuse sometimes, I’ll repeat – a little more emphatically – “F*&k NO!”

I had just a little taste of this last week when I happened to see a Starlink train head across the night sky above me. If you haven’t seen one, it’s a perfect row of SpaceX Starlink satellites in orbit that can be seen in just the right conditions. In my case, there were probably about 50 satellites in a row.

Was it cool? Sure. But it was also unsettling. The night sky is supposed to be messy and spectacular, not precisely lined up like a set of Christmas lights. It was disconcerting to see something so obviously man-made encroaching on nature’s firmament.

Look, advertisers, I get that it’s getting harder and harder to get our attention with your ads. That’s probably because we don’t want to give it to you, and – increasingly – we don’t have to. If that sounds harsh, it’s because you’ve burnt out any goodwill you might have had by sledgehammering us over the head with poorly executed, ham-fisted ads delivered ad nauseam without any concern for our experience on the receiving end. That will be true on any platform you choose to deliver those ads on.

So, to circle back to Dave Morgan’s message, if you’re going to do it, at least try to do it well.

And finally, just so we’re clear, stay the hell out of my sunset!

Risk, Reward and the Rebound Market

Twelve years ago, when looking at B2B purchases and buying behaviors, I talked about a risk/reward matrix. I put forward the thought that all purchases have an element of risk and reward in them. By understanding the balance between those two, we can also understand what a buyer is going through.

At the time, I was noting how many B2B purchases have low reward but high risk. This explains the often-arduous B2B buying process, involving RFPs, approved vendor lists, many levels of sign-off and a nasty track record of promising prospects suddenly disappearing out of a vendor’s lead pipeline. It was this mystifying marketplace that caused us to do a large research investigation into B2B buying and led to me writing the book, The Buyersphere Project: How Businesses Buy from Businesses in the Digital Marketplace.

When I wrote about the matrix right here on MediaPost back then, there were those who said I had oversimplified buying behavior – that even the addition of a third dimension would make the model more accurate and more useful. Better yet, do some stat crunching on real-time data, as suggested by Andre Szykier:

“Simple StatPlot or SPSS in the right hands is the best approach rather than simplistic model proposed in the article.”

Perhaps, but for me, this model still serves as a quick and easy way to start to understand buyer behavior. As British statistician George E. P. Box once said, “All models are wrong, but some are useful.”

Fast forward to the unusual times we now find ourselves in. As I have said before, as we emerge from a forced two-year hiatus from normal, it’s inevitable that our definitions of risk and reward in buying behaviors might have to be updated. I was reminded of this when I read last week’s commentary – “Cash-Strapped Consumers Seek Simple Pleasures” by Aaron Paquette. He starts by saying, “With inflation continuing to hover near 40-year highs, consumers seek out savings wherever they can find them — except for one surprising segment.”

Surprising? Not when I applied the matrix. It made perfect sense. Paquette goes on,

“Consumers will trade down for their commodities, but they pay up for their sugar, caffeine or cholesterol fix. They’re going without new clothes or furniture, and buying the cheapest pantry staples, to free scarce funds for a daily indulgence. Starbucks lattes aren’t bankrupting young adults — it’s their crushing student loans. And at a time when consumers face skyrocketing costs for energy, housing, education and medical care, they find that a $5 Big Mac, Frappuccino, or six pack of Coca-Cola is an easy way to “treat yo self.”

I have talked before about what we might expect as the market puts a global pandemic behind us. The concepts of balancing risk and reward are very much at the heart of our buying behaviors. Sociologist Nicholas Christakis explores this in his book Apollo’s Arrow. Right now, we’re in a delicate transition time. We want to reward ourselves but we’re still highly risk-averse. We’re going to make purchases that fall into the high-reward, low-risk quadrant of the matrix.

This is a likely precursor to what’s to come, when we move into reward seeking with a higher tolerance of risk. Christakis predicts this will come sometime in 2024: “What typically happens is people get less religious. They will relentlessly seek out social interactions in nightclubs and restaurants and sporting events and political rallies. There’ll be some sexual licentiousness. People will start spending their money after having saved it. They’ll be joie de vivre and a kind of risk-taking, a kind of efflorescence of the arts, I think.”

The consumer numbers shared by Paquette show we’re dipping our toes into the waters of hedonism. The party hasn’t started yet, but we are more than ready to indulge ourselves a little with a reward that doesn’t carry a lot of risk.

50 Shades of Greying

Here is what I know: Lisa LaFlamme – the main anchor of CTV News, one of Canada’s national nightly newscasts – was fired.

What I don’t know is why. There are multiple versions of why floating around. The one that seems to have served as a rallying point for those looking to support Ms. LaFlamme is that she was fired because she was getting old. During COVID she decided to let her hair go to its natural grey. That, according to the popular version, prompted network brass to pull the pin on her contract.

I suspect the real reason was not quite that cut and dried. The owners of the network, Bell Media, have been relentlessly trimming payrolls at their various news organizations over the past several years. I know of one such story through a personal connection. The way that scenario played out sounded very similar to what happened to Lisa LaFlamme – minus the accusations of ageism and gender double standards. In that case, it was largely a matter of dollars and cents. TV news is struggling financially. Long-time on-air talent have negotiated salaries over their careers that are no longer sustainable. Something had to give.

These are probably just casualties of a dying industry. A hundred years ago it would have been blacksmiths and gas lamplighters being let go by the thousands. The difference is that the average blacksmith or lamplighter didn’t have a following of millions of people. They also didn’t have social media. They certainly didn’t have corporate PR departments desperately searching for the latest social media “woke” bandwagon to vault upon.

What is interesting is how these things play out through various media channels. In Ms. LaFlamme’s case, it was a perfect storm that lambasted Bell Media (which owns the CTV network). As the ageism rumours began to emerge, anti-ageism social media campaigns were run by Dove, Wendy’s and even Sports Illustrated. LaFlamme wasn’t mentioned by name in most of these, but the connection was clear. Going grey was something to be celebrated, not a cause for contract cancellation. Grey-flecked gravitas should be gender-neutral. “Who the f*&k were these Millennial corporate pin-heads that couldn’t stand a little grey on the nightly news!”

It makes excellent fodder for the meme factory, but I suspect the reality wasn’t quite that simple. Ms. LaFlamme has never publicly revealed, from her point of view, the actual reason for her dismissal. She never mentioned ageism. She simply said she was “blindsided” by the news. The reasoning behind the parting of ways with Bell Media has largely been left up to conjecture.

A few other things to note. LaFlamme received the news on June 29th but didn’t share it until six weeks later (August 15th), in a video posted to her own social media feed. Bell Media offered her the opportunity to have an on-air send-off, but she declined. Finally, she also declined several offers from Bell to continue with the network in other roles. She chose instead to deliver her parting shot in the war zone of social media.

To be fair to both sides, if we’re to catalog all the various rumors floating about, there are also those saying that the decision was brought on – in part – by an allegedly toxic work environment in the news department that started at the top, with LaFlamme.

Now, if the reason for the termination actually was ageism, that’s abhorrent. Ms. LaFlamme is actually a few years younger than I am. I would hate to think that people of our age, who should be still at the height of their careers, would be discriminated against simply because of age.

The same is true if the reason was sexism. There should be no distinction between the appropriate age of a male or female national anchor.

But if it’s more complex, which I’m pretty sure it is, it shows how our world doesn’t really deal very well with complexity anymore. The consideration required to understand complex situations doesn’t fit well within the attention constraints of social media. It’s a lot easier just to sub in a socially charged hot-button meme and wait for the inevitable opinion camps to form. Sure, they’ll be one-dimensional and about as thoughtful as a sledgehammer, but those types of posts are a much better bet to go viral.

Whatever happened in the CTV National Newsroom, I do know that this shows that business decisions in the media business will have to follow a very different playbook from this point forward. Bell Media fumbled the ball badly on this one. They have been scrambling ever since to save face. It appears that Lisa LaFlamme – and her ragtag band of social media supporters – outplayed them at every turn.

By the way, LaFlamme just nabbed a temporary gig as a “special correspondent” for CityTV, Bell Media’s competitor, covering the funeral of Queen Elizabeth II and the proclamation of King Charles III.  She’s being consummately professional and comforting, garnering a ton of social media support as she eases Canada through the grieving process (our emotional tie to the Crown is another very complex relationship that would require several posts to unpack).  

Well played, Lisa LaFlamme – well played.

Dealing with Daily Doom

“We are Doomed”

The tweet came yesterday from a celebrity I follow. And you know what? I didn’t even bother to look to find out in which particular way we were doomed. That’s probably because my social media feeds are filled by daily predictions of doom. The end being nigh has ceased to be news. It’s become routine. That is sad. But more than that, it’s dangerous.

This is why Joe Mandese and I have agreed to disagree about the role media can play in messaging around climate change, or – for that matter – any of the existential threats now facing us. Alarmist messaging could be the problem, not the solution.

Mandese ended his post with this:

“What the ad industry really needs to do is organize a massive global campaign to change the way people think, feel and behave about the climate — moving from a not-so-alarmist “change” to an “our house is on fire” crisis.”

Joe Mandese – MediaPost

But here’s the thing. Cranking up the crisis intensity on our messaging might have the opposite effect. It may paralyze us.

Something called “doom scrolling” is now very much a thing. And if you’re looking for doomsday scenarios, the best place to start is the r/collapse subreddit.

In a 30-second glimpse during the writing of this column, I discovered that democracy is dying, America is on the brink of civil war, Russia is turning off the tap on European oil supplies, we are being greenwashed into complacency, the Amazon rainforest may never recover from its current environmental destruction and the “Doomsday” glacier is melting faster than expected. That was all above the fold. I didn’t even have to scroll for this buffet of all-you-can-eat disaster. These were just the appetizers.

There is a reason why social media feeds are full of doom. We are hardwired to pay close attention to threats. This makes apocalyptic prophesying very profitable for social media platforms. As British academic Julia Bell said in her 2020 book, Radical Attention:

“Behind the screen are impassive algorithms designed to ensure that the most outrageous information gets to our attention first. Because when we are enraged, we are engaged, and the longer we are engaged the more money the platform can make from us.”

Julia Bell – Radical Attention

But just what does a daily diet of doom do to our mental health? Does constantly being made aware of the impending end of our species goad us into action? Does it actually accomplish anything?

Not so much. In fact, it can do the opposite.

Mental health professionals are now treating a host of new climate change related conditions, including eco-grief, eco-anxiety and eco-depression. But, perhaps most alarmingly, they are now encountering something called eco-paralysis.

In an October 2020 Time.com piece on doom scrolling, psychologist Patrick Kennedy-Williams, who specializes in treating climate-related anxieties, was quoted: “There’s something inherently disenfranchising about someone’s ability to act on something if they’re exposed to it via social media, because it’s inherently global. There are not necessarily ways that they can interact with the issue.”

So, cranking up the intensity of the messaging on existential threats such as climate change may have the opposite effect, scaring us into doing nothing. This is because of something called the Yerkes-Dodson Law.

Image: Yerkes and Dodson (1908), via Diamond DM, et al. (2007), “The Temporal Dynamics Model of Emotional Memory Processing: A Synthesis on the Neurobiological Basis of Stress-Induced Amnesia, Flashbulb and Traumatic Memories, and the Yerkes-Dodson Law,” Neural Plasticity: 33. doi:10.1155/2007/60803. CC0, Wikimedia Commons.

This “Law”, discovered by psychologists Robert Yerkes and John Dodson in 1908, isn’t so much a law as a psychological model. It’s a typical bell curve. On the front end, we find that our performance in responding to a situation increases along with our attention and interest in that situation. But the line does not go straight up. At some point, it peaks and then goes downhill. Intent gives way to anxiety. The more anxious we become, the more our performance is impaired.

When we fret about the future, we are actually grieving the loss of our present. In this process, we must make our way through the 5 stages of grief introduced by psychiatrist Elisabeth Kübler-Ross in 1969 through her work with terminally ill patients. The stages are: Denial, Anger, Bargaining, Depression and Acceptance.

One would think that triggering awareness would help accelerate us through the stages. But there are a few key differences. In dealing with a diagnosis of terminal illness, there is typically one hammer-blow event when you become aware of the situation. From there, dealing with it begins. And – even when it begins – it’s not a linear journey. As anyone who has ever grieved will tell you, which stage you’re in depends on the day. You can slip from Acceptance to Anger in a heartbeat.

With climate change, awareness doesn’t come just once. The messaging never ends. It’s a constant cycle of crisis, trapping us in a loop that cycles between denial, depression and despair.

An excellent post on climate grief at Climateandmind.org talks about this cycle and how we get trapped within it. Some of us get stuck in a stage and never move on. Even climate scientist and activist Susanne Moser admits to being trapped in something she calls functional denial:

“It’s that simultaneity of being fully aware and conscious and not denying the gravity of what we’re creating (with Climate Change), and also having to get up in the morning and provide for my family and fulfill my obligations in my work.”

Susanne Moser

It’s exactly this sense of frustration I voiced in my previous post. But the answer is not making me more aware. Like Moser, I’m fully aware of the gravity of the various threats we’re facing. It’s not attention I lack, it’s agency.

I think the time to hope for a more intense form of messaging to prod the deniers into acceptance is long past. If they haven’t changed their minds yet, they ain’t goin’ to!

I also believe the messaging we need won’t come through social media. There’s just too much froth and too much profit in that froth.

What we need – from media platforms we trust – is a frank appraisal of the worst-case scenario of our future. We need to accept that and move on to deal with what is to come. We need to encourage resilience and adaptability. We need hope that while what is to come is most certainly going to be catastrophic, it doesn’t have to be apocalyptic.

We need to know we can survive and start thinking about what that survival might look like.

The Tricky Timing Of Being Amazed By The Future

When I was a kid, the future was a big deal. The cartoon The Jetsons was introduced in 1962. We were in the thick of the space race. Science was doing amazing things. What the future might look like was the theme of fairs and exhibits around the world, including my little corner of the world in Western Canada. I remember going to an exhibit about the Amazing World of Tomorrow at the Calgary Stampede when I was 7 or 8, so either in 1968 or 1969.

Walt Disney was also a big fan of the future. That’s why you have Tomorrowland at Disneyland in Anaheim, California, and Epcot at Walt Disney World near Kissimmee, Florida. Disney mused, “Tomorrow can be a wonderful age. Our scientists today are opening the doors of the Space Age to achievements that will benefit our children and generations to come. The Tomorrowland attractions have been designed to give you an opportunity to participate in adventures that are a living blueprint of our future.”

But the biggest problem with Tomorrowland is that the future kept becoming the present and – in doing so – it became no big deal. The first Tomorrowland opened in 1955 and the “future” it envisioned was 1986. From then forward, Disney has continually tried to keep Tomorrowland from becoming Yesterdayland. It was an example of just how short the shelf life of “Tomorrow” actually is.

For example, in 1957, the Monsanto House of the Future was introduced in California’s Tomorrowland. The things that amazed visitors then were microwave ovens and television remote controls. The amazement factor of these two things didn’t last very long. But even so, they lasted longer than the Viewliner – “the fastest miniature train in the world.” That Tomorrowland attraction lasted just one year.

Oh, and then there was the video phone.

In the 1950s and ’60s, we were fascinated by the idea of having a video call with someone. I remember seeing a videophone demonstrated at the fair I went to as a kid. It was probably the AT&T Picturephone, which was introduced at the 1964 New York World’s Fair. We were all suitably amazed.

But the Picturephone wasn’t really new. Bell Labs had been working on it since 1927. A large-screen videophone was shown in Charlie Chaplin’s 1936 film, Modern Times. Even with this decades-long runup, when AT&T tried to make it commercially viable in 1970, it was a dismal failure. This just shows how fragile the timing is when trying to bring the future to today. If it’s too soon, everyone is scared to adopt it. If it’s too late, it’s boring. More than anything, our appreciation of the future comes down to a matter of luck.

Here are a few more examples. Yesterday, I got a call on my mobile when I couldn’t get to my phone, so I answered it on my Apple Watch. My father-in-law happened to be with me. “You answered the phone on your watch? Now I’ve seen everything!” He was amazed, but for me it was commonplace. If we backtrack to 1946, when the comic strip character Dick Tracy introduced his wrist radio, it was almost unimaginably cool. Well, it was unimaginable to everyone but inventor Al Gross, who had actually built such a device. That’s where Tracy’s creator, Chester Gould, got the idea.

Or teleconferencing. Today, in our post-COVID world, Zoom meetings are the norm, even mundane. But the technology we take for granted today has been 150 years in the making. It was in the 1870s that inventors first came up with the idea of transmitting both an image and audio over a wire.

Like most things, the tricky timing of our relationship with the future is a product of how our brains work. We use our remembered past as the springboard to try to imagine the future. And our degree of amazement depends on how big the gap is between the two.

In the 1950s, H.M. (research patients were usually known only by their initials) was a patient who suffered from epilepsy. He underwent an experimental surgery that removed several parts of his brain, including much of his hippocampus, which is vital for memory. After that surgery, H.M. not only lost his past, but he also became unable to imagine the future. Since then, functional MRI studies have found that the same parts of the brain are involved both in retrieving memories and in imagining the future.

In both these instances, the brain creates a scene. If it’s in the past, we relive a memory, often with questionable fidelity to what actually happened. Our memories are notoriously creative, filling in gaps with things we just make up. And if it’s in the future, we prelive the scene, using what we know to build what the future might look like.

How amazing the future is to us depends on the gap between what we know and what we’re able to imagine. The bigger the gap that we’re able to manage, the more we’re amazed. But as the future becomes today, the gap narrows dramatically, and the amazement drops accordingly. Adoption of new technologies depends in part on being able to squeeze through this rapidly narrowing window. If the window is too big, we aren’t willing to take on the risks involved. If the window is too small, there’s not enough of an advantage for us to adopt the future technology.

Even with this challenge of timing, the future is relentless. It comes to us in wave after wave, passing from being amazing to boring. In the process, we sometimes have to look back to realize how far we’ve come.

I was thinking about that and about the 7-year-old boy I was, standing looking at the Picturephone at the Calgary Stampede in 1968. As amazing as it seemed to me at the time, how could I possibly imagine the world I live in today, a little over a half century later?

With Digital Friends Like These, Who Needs Enemies?

Recently, I received an email from Amazon that began:

“You’re amazing. Really, you’re awesome! Did that make you smile? Good. Alexa is here to compliment you. Just say, ‘Alexa, compliment me’”

“What,” I said to myself, “sorry-assed state is my life in that I need to depend on a little black electronic hockey puck to affirm my self-worth as a human being?”

I realize that the tone of the email likely had tongue at least partway implanted in cheek, but still, seriously – WTF, Alexa? (Which, incidentally, Alexa also has covered. Pose that question and Alexa responds, “I’m always interested in feedback.”)

My next thought was, maybe I think this is a joke, but there are probably people out there that need this. Maybe their lives are dangling by a thread and it’s Alexa’s soothing voice digitally pumping their tires that keeps them hanging on until tomorrow. And – if that’s true – should I be the one to scoff at it?

I dug a little further into the question, “Can we depend on technology for friendship, for understanding, even – for love?”

The answer, it turns out, is probably yes.

A few studies have shown that we will share more with a virtual therapist than a human one in a face-to-face setting. We feel heard without feeling judged.

In another study, patients with a virtual nurse ended up creating a strong relationship with it that included:

  • Using close forms of greeting and goodbye
  • Expressing happiness to see the nurse
  • Using compliments
  • Engaging in social chat
  • And expressing a desire to work together and speak with the nurse again

Yet another study found that robots can even build a stronger relationship with us by giving us a pat on the hand or touching our shoulder. We are social animals and don’t do well when we lose that sociability. If we go too long without being touched, we experience something called “skin hunger” and start feeling stressed, depressed and anxious. The use of these robots is being tested in seniors’ care facilities to help combat extreme loneliness.

In reading through these studies, I was amazed at how quickly respondents seemed to bond with their digital allies. We have highly evolved mechanisms that determine when and with whom to place trust. In many cases, these judgements are based on non-verbal cues: body language, micro-expressions, even how people smell. It surprised me that when our digital friends presented none of these, the bonds still developed. In fact, it seems they were deeper and stronger than ever!

Perhaps it’s the very lack of humanness that is the explanation. As in the case of the success of a virtual therapist, maybe these relationships work because we can leave the baggage of being human behind. Virtual assistants are there to serve us, not judge or threaten us. We let our guards down and are more willing to open up.

Also, I suspect that the building blocks of these relationships are put in place not by the rational, thinking part of our brains but the emotional, feeling part. It’s been shown that self-affirmation works by activating the reward centers of our brain, the ventral striatum and ventromedial prefrontal cortex. These are not pragmatic, cautious parts of our cognitive machinery. As I’ve said before, they’re all gas and no brakes. We don’t think a friendship with a robot is weird because we don’t think about it at all, we just feel better. And that’s enough.

AI companionship seems a benign – even beneficial – use of technology, but what might the unintended consequences be? Are we opening ourselves up to potential dangers by depending on AI for our social contact – especially when the lines are blurred between for-profit motives and the affirmation we become dependent on?

In therapeutic use cases of virtual relationships as outlined up to now, there is no “for-profit” motive. But Amazon, Apple, Facebook, Google and the other providers of consumer directed AI companionship are definitely in it for the money. Even more troubling, two of those – Facebook and Google – depend on advertising for their revenue. Much as this gang would love us to believe that they only have our best interests in mind – over $1.2 trillion in combined revenue says otherwise. I suspect they have put a carefully calculated price on digital friendship.

Perhaps it’s that – more than anything – that threw up the red flags when I got that email from Amazon. It sounded like it was coming from a friend, and that’s exactly what worries me.

Does Social Media “Dumb Down” the Wisdom of Crowds?

We assume that democracy is the gold standard of sustainable political social contracts. And it’s hard to argue against that. As Winston Churchill said, “democracy is the worst form of government – except for all the others that have been tried.”

Democracy may not be perfect, but it works. Or, at least, it seems to work better than all the other options. Essentially, democracy depends on probability – on being right more often than we’re wrong.

At the very heart of democracy is the principle of majority rule. And that is based on something called the jury theorem, put forward by the Marquis de Condorcet in his 1785 work, Essay on the Application of Analysis to the Probability of Majority Decisions. Essentially, it says that the probability of making the right decision increases when you aggregate the decisions of as many people as possible. This was the basis of James Surowiecki’s 2004 book, The Wisdom of Crowds.
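Condorcet’s result is easy to verify numerically. The sketch below (the function name is my own, for illustration) computes the probability that a simple majority of independent voters gets the right answer, assuming each voter is individually right with the same probability. As the theorem predicts, when individual accuracy is better than a coin flip, even slightly, larger groups become dramatically more reliable:

```python
from math import comb

def majority_correct(n: int, p: float) -> float:
    """Probability that a majority of n independent voters, each correct
    with probability p, reaches the right decision. Odd n avoids ties."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# With individual accuracy of just 60%, group accuracy climbs with size:
for n in (1, 11, 101, 1001):
    print(f"{n:>5} voters: {majority_correct(n, 0.6):.3f}")
```

The same arithmetic also shows the theorem’s dark side: if individual accuracy drops below 50%, larger crowds become more reliably wrong, which is why the independence condition discussed next matters so much.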

But here’s the thing about the wisdom of crowds – it only applies when those individual decisions are reached independently. Once we start influencing each other’s decision, that wisdom disappears. And that makes social psychologist Solomon Asch’s famous conformity experiments of 1951 a disturbingly significant fly in the ointment of democracy.

You’re probably all aware of the seminal study, but I’ll recap anyway. Asch gathered groups of people and showed them a card with three lines of obviously different lengths. Then he asked participants which line was the closest to the reference line. The answer was obvious – even a toddler can get this test right pretty much every time.

But unknown to the test subject, all the rest of the participants were “stooges” – actors paid to sometimes give an obviously incorrect answer. And when this happened, Asch was amazed to find that the test subjects often went against the evidence of their own eyes just to conform with the group. When wrong answers were given, subjects conformed on about a third of the trials, 75% of the subjects conformed at least once, and only 25% consistently stuck to the evidence in front of them and gave the right answer.

The results baffled Asch. The most interesting question to him was why this was happening. Were people making a decision to go against their better judgment – choosing to go with the crowd rather than what they were seeing with their own eyes? Or was something happening below the level of consciousness? This was something Solomon Asch wondered about right until his death in 1996. Unfortunately, he never had the means to explore the question further.

But, in 2005, a group of researchers at Emory University, led by Gregory Berns, did have a way. Here, Asch’s experiment was restaged, only this time participants were in an fMRI machine so Berns and his researchers could peek at what was actually happening in their brains. The results were staggering.

They found that conformity actually changes the way our brain works. It’s not that we change what we say to conform with what others are saying, despite what we see with our own eyes. What we see is changed by what others are saying.

If, Berns and his researchers reasoned, participants were consciously deciding to go against the evidence of their own eyes just to conform with the group, you should see activity in the frontal areas of the brain that are engaged in monitoring conflicts, planning and other higher-order mental activities.

But that isn’t what they found. In those participants who went along with obviously incorrect answers from the group, activity showed up only in the posterior parts of the brain – those that control spatial awareness and visual perception. There was no indication of an internal mental conflict. The brain was actually changing how it processed the information it was receiving from the eyes.

This is stunning. It means that conformity isn’t a conscious decision. Our desire to conform is wired so deeply in our brains, it actually changes how we perceive the world. We never have the chance to be objectively right, because we never realize we’re wrong.

But what about those who resisted conformity and stuck to the evidence they were seeing with their own eyes? Here again, the results were fascinating. The researchers found that in these cases, they saw a spike of activity in the right amygdala and right caudate nucleus – areas involved in the processing of strong emotions, including fear, anger and anxiety. Those who stuck to the evidence of their own eyes had to overcome emotional hurdles to do so. In the published paper, the authors called this the “pain of independence.”

This study highlights a massively important limitation in the social contract of democracy. As technology increasingly imposes social conformity on our culture, we lose the ability to collectively make the right decision. Essentially, it shows that this effect not only erases the wisdom of crowds, but actively works against it by exacting an emotional price for being an independent thinker.

Crisis? What Crisis?

Never let a good crisis go to waste.

— Winston Churchill, approximately 1944

Crisis? What crisis?

— Supertramp album, 1975

I’ll be honest. I was struggling to finish this column. It was actually heading for the digital dustbin when I happened on MediaPost Editor in Chief Joe Mandese’s excellent commentary, “It’s Time For A Change, And By That, I Mean A Crisis.”

Much as I respect Joe, whose heart and head are definitely in the right place, I think we may have to agree to disagree. He says,

“What the ad industry really needs to do is organize a massive global campaign to change the way people think, feel and behave about the climate — moving from a not-so-alarmist ‘change’ to an ‘our house is on fire’ crisis.”

Joe Mandese – MediaPost

But exactly how do you make people pay attention to an existential crisis? How do you communicate threat?

The problem may be that we can’t. It may simply not be possible.

That was crystallized in the scariest way possible recently on the U.K.’s GB News channel, where an anchor desperately tried to make light of the meteorologist’s dire predictions of potential fatalities ahead of an unprecedented heat wave in England.

Weather expert John Hammond issues a warning over the ‘extreme’ conditions expected next week – GB News – July 14, 2022

The Basics of Communication

There are typically four parts to any communication model: the sender, the message, the medium and the receiver. Joe’s post said the problem may be in the message — it hasn’t been urgent enough. I disagree. I think the problem is at the end of the chain, with the receiver. The message is already effective. It’s just not getting through.

In an online course on business communications, Lumen Learning lists a number of potential barriers to communication. I’d like to focus on three that were mentioned: filtering, bias and lack of trust.

The first one is the big one, but the last two contribute. And they all lie on the receiving end of the communication model, with the receiver, who just doesn’t want to receive the message.

The problem, most of all, is one of entitlement.

I’m not pointing fingers — unless I’m pointing at myself. I live a privileged lifestyle. I don’t think I’ve let the message, with all its implications, fully get through to me, because to accept that message is unimaginably depressing and scary. I fully admit I’m filtering, because I feel overwhelmed. Climate change has gone from being an inconvenient truth to something we’re determined to ignore, even if it kills us.

If I count all the people whose lifestyle I have some understanding of, it’s about a thousand people. I think an overwhelming majority of them get the massive implications of climate change. Of all those people, I can count on the fingers of one hand (maybe two) those who have truly made substantive changes in their lifestyle to really address climate change. That’s – at best – 0.5% to 1% of everyone I know.

 I’m not judging. I haven’t made the changes required myself. Not really.

I have done all 10 of the UN’s suggested 10 ways you can help fight the climate crisis, to one extent or another. But I can’t help feeling that even doing all 10 is like peeing on a forest fire. Given the high stakes we’re talking about here, I really don’t feel I’m making a meaningful difference. I haven’t sold either of my two vehicles, I haven’t stopped planning trips that involve air travel, or moved into a more energy-efficient house. I still eat red meat (although not as much as before).

The fact is, when a message is trying to tell us that our inevitable future means we’re going to have less than we have today, we will ignore that message. 

I get it. I truly do. I started and stopped this column several times because it depressed the hell out of me. But I am now determined to plow through to the end, so let’s talk about entitlement. We use this word a lot, especially lately. But what does it mean? It means we believe we have the right to the lifestyle we currently have.

But there’s no one to give us that right. Our lifestyle isn’t granted to us by anyone. If we live a good life, as I do, we like to think that it’s due to our hard work and wise choices – and that we’re therefore entitled to everything we have. But if we rationally pick apart our success, we find that plain old dumb luck plays a bigger role than we’d like to admit. In my case, I was born a white, anglo male in one of the richest countries in the world. I came out of the womb with advantages most of the world can only dream of.

Entitlement is actually the result of a cognitive bias – or rather, a bundle of cognitive biases that include loss aversion and the endowment effect. It’s a quirk in our mental wiring. It’s a mistaken belief – an illusion. I’m not owed the life I have. I have that life because of a convergence of lucky factors, and it appears my luck may be running out. There is no arbitrator of privilege that has granted North America the right to be the single biggest consumer of natural resources (per capita) in the world. But we seem prepared to gamble our planet away on this mistaken belief about our own entitlement.

In psychology, there’s something called the Psychological Entitlement Scale. It measures the strength of this cognitive bias. A recent study showed just how strongly this was correlated with our ability to ignore messaging that we didn’t want to hear because we felt it interfered with our “rights.” In this case, the message was about health guidelines during COVID-19. And we all know how that turned out. Even something as ridiculously simple as wearing a face mask whipped up a shitstorm of entitlement. 

This is not a problem of messaging. We are not going to be persuaded to do the right thing.  We are being asked to give up too much.

Climate change can only be addressed by two things: legislation and a mobilization of the market. We cannot be left with the option of doing nothing — or too little — any longer.

We must be forced to be better. We need more massive omnibus bills, like the recent Manchin-Schumer deal, that mobilize industry and incentivize better behavior. I only hope my own Canadian government follows suit soon.

Much as I wish Joe Mandese were right that by turning up the intensity of the messaging, we could persuade consumers to really move the needle on the climate threat, I don’t think this would work. It’s not that we don’t know about climate change. It’s that we can’t let ourselves care, because our entitlement won’t let us.

The Biases of Artificial Intelligence: Our Devils are in the Data

I believe that – over time – technology does move us forward. I further believe that, even with all the unintended consequences it brings, technology has made the world a better place to live in. I would rather step forward with my children and grandchildren (the first of which has just arrived) into a more advanced world than step backwards in the world of my grandparents, or my great grandparents. We now have a longer and better life, thanks in large part to technology. This, I’m sure, makes me a techno-optimist.

But my optimism is of a pragmatic sort. I’m fully aware that it is not a smooth path forward. There are bumps and potholes aplenty along the way. I accept that along with my optimism.

Technology, for example, does not play all that fairly. Techno-optimists tend to be white and mostly male. They usually come from rich countries, because technology helps rich countries far more than it helps poor ones. Technology plays by the same rules as trickle-down economics: a rising tide that will eventually raise all boats, just not at the same rate.

Take democracy, for instance. In June 2009, journalist Andrew Sullivan declared “The revolution will be Twittered!” after protests erupted in Iran. Techno-optimists and neo-liberals were quick to declare social media and the Internet as the saviour of democracy. But, even then, the optimism was premature – even misplaced.

In his book The Net Delusion: The Dark Side of Internet Freedom, journalist and social commentator Evgeny Morozov details how digital technologies have been just as effectively used by repressive regimes to squash democracy. The book was published in 2011. Just 5 years later, that same technology would take the U.S. on a path that came perilously close to dismantling democracy. As of right now, we’re still not sure how it will all work out. As Morozov reminds us, technology – in and of itself – is not an answer. It is a tool. Its impact will be determined by those that built the tool and, more importantly, those that use the tool.

Also, tools are not built out of the ether. They are necessarily products of the environment that spawned them. And this brings us to the systemic problems of artificial intelligence.

Search is something we all use every day. And we probably don’t think that Google (or other search engines) is biased, or even racist. But a recent study published in the journal Proceedings of the National Academy of Sciences shows that the algorithms behind search are built on top of the biases endemic in our society.

“There is increasing concern that algorithms used by modern AI systems produce discriminatory outputs, presumably because they are trained on data in which societal biases are embedded,” says Madalina Vlasceanu, a postdoctoral fellow in New York University’s psychology department and the paper’s lead author.

To assess possible gender bias in search results, the researchers examined whether words that should refer with equal probability to a man or a woman, such as “person,” “student,” or “human,” are more often assumed to be a man. They conducted Google image searches for “person” across 37 countries. The results showed that the proportion of male images yielded from these searches was higher in nations with greater gender inequality, revealing that algorithmic gender bias tracks with societal gender inequality.

In a 2020 opinion piece in the MIT Technology Review, researcher and AI activist Deborah Raji wrote:

“I’ve often been told, ‘The data does not lie.’ However, that has never been my experience. For me, the data nearly always lies. Google Image search results for ‘healthy skin’ show only light-skinned women, and a query on ‘Black girls’ still returns pornography. The CelebA face data set has labels of ‘big nose’ and ‘big lips’ that are disproportionately assigned to darker-skinned female faces like mine. ImageNet-trained models label me a ‘bad person,’ a ‘drug addict,’ or a ‘failure.’ Data sets for detecting skin cancer are missing samples of darker skin types.”

Deborah Raji, MIT Technology Review

These biases in search highlight the biases in a culture. Search brings back a representation of content that has been published online; a reflection of a society’s perceptions. In these cases, the devil is in the data. The search algorithm may not be inherently biased, but it does reflect the systemic biases of our culture. The more biased the culture, the more it will be reflected in technologies that comb through the data created by that culture. This is regrettable in something like image search results, but when these same biases show up in the facial recognition software used in the justice system, it can be catastrophic.

In an article in Penn Law’s Regulatory Review, the authors reported that, “In a 2019 National Institute of Standards and Technology report, researchers studied 189 facial recognition algorithms—‘a majority of the industry.’ They found that most facial recognition algorithms exhibit bias. According to the researchers, facial recognition technologies falsely identified Black and Asian faces 10 to 100 times more often than they did white faces. The technologies also falsely identified women more than they did men—making Black women particularly vulnerable to algorithmic bias. Algorithms using U.S. law enforcement images falsely identified Native Americans more often than people from other demographics.”

Most of these issues lie with how technology is used. But how about those that build the technology? Couldn’t they program the bias out of the system?

There we have a problem. The thing about societal bias is that it is typically recognized by its victims, not those who propagate it. And the culture of the tech industry is hardly gender-balanced or diverse. According to a report from the McKinsey Institute for Black Economic Mobility, if we followed the current trajectory, experts in tech believe it would take 95 years for Black workers to reach an equitable level of private sector paid employment.

Facebook, for example, barely moved one percentage point from 3% in 2014 to 3.8% in 2020 with respect to hiring Black tech workers but improved by 8% in those same six years when hiring women. Only 4.3% of the company’s workforce is Hispanic. This essential whiteness of tech extends to the field of AI as well.

Yes, I’m a techno-optimist, but I realize that optimism must be placed in the people who build and use the technology. And because of that, we must try harder. We must do better. Technology alone isn’t the answer for a better, fairer world.  We are.