The Ten-Day Tech Detox

I should have gone cold turkey on tech. I really should have.

It would have been the perfect time – should have been the perfect time.

But I didn’t. As I spent 10 days on BC’s gorgeous Sunshine Coast with family, I also trundled along my assortment of connected gadgets.

But I will say it was a partially successful detox. I didn’t crack open the laptop as much as I usually do. I generally restricted use of my iPad to reading a book.

But my phone – it was my phone, always within reach, that tempted me with social media’s siren call.

In a podcast, Andrew Selepak, a social media professor at the University of Florida, suggests that rather than attempting a total detox that is probably doomed to fail, you use vacations as an opportunity to treat tech as a tool rather than an addiction.

I will say that for most of the time, that’s what I did. As long as I was occupied with something, I was fine.

Boredom is the enemy. It’s boredom that catches you. And the sad thing was, I really shouldn’t have been bored. I was in one of the most beautiful places on earth. I had the company of people I loved. I saw humpback whales – up close – for Heaven’s sake. If ever there was a time to live in the moment, to embrace the here and now, this was it. 

The problem, I realized, is that we’re not really comfortable any more with empty spaces – whether they be in conversation, in our social life or in our schedule of activities. We feel guilt and anxiety when we’re not doing anything.

It was an interesting cycle. As I decompressed after many weeks of being very busy, the first few days were fine. “I need this,” I kept telling myself. It’s okay just to sit and read a book. It’s okay not to have every half-hour slot of the day meticulously planned to jam as much in as possible.

That lasted about 48 hours. Then I started feeling like I should be doing something. I was uncomfortable with the empty spaces.

The fact is, as I learned, boredom has always been part of the human experience. It’s a feature – not a bug. As I said, boredom creates the empty spaces that can be filled with creativity. Alicia Walf, a neuroscientist and senior lecturer in the Department of Cognitive Science at Rensselaer Polytechnic Institute, says it is critical for brain health to let yourself be bored from time to time.

“Being bored can help improve social connections. When we are not busy with other thoughts and activities, we focus inward as well as looking to reconnect with friends and family. 

Being bored can help foster creativity. The eureka moment when solving a complex problem when one stops thinking about it is called insight.

Additionally, being bored can improve overall brain health.  During exciting times, the brain releases a chemical called dopamine which is associated with feeling good.  When the brain has fallen into a predictable, monotonous pattern, many people feel bored, even depressed. This might be because we have lower levels of dopamine.”

That last bit, right there, is the clue to why we’re so prone to picking up our phones in times of boredom. Actually, three things are at work here. The first is that our mobile devices let us carry an extended social network in our pockets. An article from Harvard explains: “Thanks to the likes of Facebook, Snapchat, Instagram, and others, smartphones allow us to carry immense social environments in our pockets through every waking moment of our lives.”

As Walf said, boredom is our brain’s way of cueing us to seek social interaction. Traditionally, this meant getting the hell out of our cave – or cabin – or castle – and getting some face time with other humans.

But technology has short-circuited that. Now we get that social connection through the far less healthy substitute of a social media platform. And – in the most ironic twist – we get that social jolt not by interacting with the people we happen to be with, but by each staring at a tiny little screen that we hold in our hand.

The second problem is that mobile devices are not designed to leave us alone, basking in our healthy boredom. They are constantly beeping, buzzing and vibrating to get our attention. 

The third problem is that – unlike a laptop or even a tablet – our phones are the device of choice when we’re jonesing for a dopamine jolt. It’s our phones we reach for when we’re killing time in a lineup, riding the bus or waiting for someone in a coffee shop. This is why I had a hard time relegating my phone to being just a tool while I was away.

As a brief aside – even the term “killing time” shows how we are scared to death of being bored. That’s a North American saying – boredom is something to be hunted down and eradicated. You know what Italians call it? “Il dolce far niente” – the sweetness of doing nothing. Many are the people who try to experience life by taking endless photos and posting on various feeds, rather than just living it. 

The fact is, we need boredom. Boredom is good, but we are declaring war on it, replacing it with a destructive need to continually bathe our brains in the dopamine high that comes from checking our Facebook feed or the latest TikTok reel.

At least one of the architects of this vicious cycle feels some remorse (also from the Harvard article). “‘I feel tremendous guilt,’ admitted Chamath Palihapitiya, former Vice President of User Growth at Facebook, to an audience of Stanford students. He was responding to a question about his involvement in exploiting consumer behavior. ‘The short-term, dopamine-driven feedback loops that we have created are destroying how society works.’”

That is why we have to put the phone down and watch the humpback whales. That, miei amici, is il dolce far niente!

Risk, Reward and the Rebound Market

Twelve years ago, when looking at B2B purchases and buying behaviors, I talked about a risk/reward matrix. I put forward the thought that all purchases have an element of risk and reward in them. In understanding the balance between those two, we can also understand what a buyer is going through.

At the time, I was pointing out how many B2B purchases offer low reward but carry high risk. This explains the often-arduous B2B buying process, involving RFPs, approved vendor lists, many levels of sign-off and a nasty track record of promising prospects suddenly disappearing out of a vendor’s lead pipeline. It was this mystifying marketplace that caused us to do a large research investigation into B2B buying and led to me writing the book The Buyersphere Project: How Businesses Buy from Businesses in the Digital Marketplace.

When I wrote about the matrix right here on MediaPost back then, there were those who said I had oversimplified buying behavior – that even the addition of a third dimension would make the model more accurate and more useful. Better yet, do some stat crunching on real-time data, as suggested by Andre Szykier:

“Simple StatPlot or SPSS in the right hands is the best approach rather than simplistic model proposed in the article.”

Perhaps, but for me, this model still serves as a quick and easy way to start to understand buyer behavior. As British statistician George E. P. Box once said, “All models are wrong, but some are useful.”

Fast forward to the unusual times we now find ourselves in. As I have said before, as we emerge from a forced two-year hiatus from normal, it’s inevitable that our definitions of risk and reward in buying behaviors will have to be updated. I was reminded of this when I was reading last week’s commentary – “Cash-Strapped Consumers Seek Simple Pleasures” by Aaron Paquette. He starts by saying, “With inflation continuing to hover near 40-year highs, consumers seek out savings wherever they can find them — except for one surprising segment.”

Surprising? Not when I applied the matrix. It made perfect sense. Paquette goes on,

“Consumers will trade down for their commodities, but they pay up for their sugar, caffeine or cholesterol fix. They’re going without new clothes or furniture, and buying the cheapest pantry staples, to free scarce funds for a daily indulgence. Starbucks lattes aren’t bankrupting young adults — it’s their crushing student loans. And at a time when consumers face skyrocketing costs for energy, housing, education and medical care, they find that a $5 Big Mac, Frappuccino, or six pack of Coca-Cola is an easy way to ‘treat yo self.’”

I have talked before about what we might expect as the market puts a global pandemic behind us. The concepts of balancing risk and reward are very much at the heart of our buying behaviors. Sociologist Nicholas Christakis explores this in his book Apollo’s Arrow. Right now, we’re in a delicate transition time. We want to reward ourselves, but we’re still highly risk-averse. We’re going to make purchases that fall into the low-risk, high-reward quadrant of the matrix.

This is a likely precursor to what’s to come, when we move into reward-seeking with a higher tolerance for risk. Christakis predicts this will come sometime in 2024: “What typically happens is people get less religious. They will relentlessly seek out social interactions in nightclubs and restaurants and sporting events and political rallies. There’ll be some sexual licentiousness. People will start spending their money after having saved it. There’ll be joie de vivre and a kind of risk-taking, a kind of efflorescence of the arts, I think.”

The consumer numbers shared by Paquette show we’re dipping our toes into the waters of hedonism. The party hasn’t started yet, but we are more than ready to indulge ourselves a little with a reward that doesn’t carry a lot of risk.

50 Shades of Greying

Here is what I know: Lisa LaFlamme – the main anchor of CTV News, one of Canada’s national nightly newscasts – was fired.

What I don’t know is why. There are multiple versions of why floating around. The one that seems to have served as a rallying point for those looking to support Ms. LaFlamme is that she was fired because she was getting old. During COVID she decided to let her hair go to its natural grey. That, according to the popular version, prompted network brass to pull the pin on her contract.

I suspect the real reason was not quite that cut and dried. The owners of the network, Bell Media, have been relentlessly trimming payrolls at their various news organizations over the past several years. I know of one such story through a personal connection. The way that scenario played out sounded very similar to what happened to Lisa LaFlamme – minus the accusations of ageism and gender double standards. In that case, it was largely a matter of dollars and cents. TV news is struggling financially. Long-time on-air talent have negotiated salaries over their careers that are no longer sustainable. Something had to give.

These are probably just the casualties of a dying industry. A hundred years ago it would have been blacksmiths and gas lamplighters being let go by the thousands. The difference is that the average blacksmith or lamplighter didn’t have a following of millions of people. They also didn’t have social media. They certainly didn’t have corporate PR departments desperately searching for the latest social media “woke” bandwagon to vault upon.

What is interesting is how these things play out through various media channels. In Ms. LaFlamme’s case, it was a perfect storm that lambasted Bell Media (which owns the CTV network). As the ageism rumours began to emerge, anti-ageism social media campaigns were run by Dove, Wendy’s and even Sports Illustrated. LaFlamme wasn’t mentioned by name in most of these, but the connection was clear. Going grey was something to be celebrated, not a cause for contract cancellation. Grey-flecked gravitas should be gender-neutral. “Who the f*&k were these Millennial corporate pin-heads who couldn’t stand a little grey on the nightly news!”

It makes excellent fodder for the meme-factory, but I suspect the reality wasn’t quite that simple. Ms. LaFlamme has never publicly revealed the actual reason for her dismissal from her point of view. She never mentioned ageism. She simply said she was “blindsided” by the news. The reasoning behind the parting of ways with Bell Media has largely been left up to conjecture.

A few other things to note: LaFlamme received the news on June 29th but didn’t make it public until six weeks later (August 15th), in a video posted to her own social media feed. Bell Media offered her the opportunity to have an on-air send-off, but she declined. Finally, she declined several offers from Bell to continue with the network in other roles. She chose instead to deliver her parting shot in the war zone of social media.

To be fair to both sides, if we’re to catalog all the various rumors floating about, there are also those saying the decision was driven – in part – by an allegedly toxic work environment in the news department that started at the top, with LaFlamme.

Now, if the reason for the termination actually was ageism, that’s abhorrent. Ms. LaFlamme is actually a few years younger than I am. I would hate to think that people of our age, who should still be at the height of their careers, would be discriminated against simply because of age.

The same is true if the reason was sexism. There should be no distinction between the appropriate age of a male or female national anchor.

But if it’s more complex, which I’m pretty sure it is, it shows how our world doesn’t really deal very well with complexity anymore. The consideration required to understand complex situations doesn’t fit well within the attention constraints of social media. It’s a lot easier just to sub in a socially charged hot-button meme and wait for the inevitable opinion camps to form. Sure, they’ll be one-dimensional and about as thoughtful as a sledgehammer, but those types of posts are a much better bet to go viral.

Whatever happened in the CTV National Newsroom, I do know this shows that business decisions in the media industry will have to follow a very different playbook from this point forward. Bell Media fumbled the ball badly on this one. They have been scrambling ever since to save face. It appears that Lisa LaFlamme – and her ragtag band of social media supporters – outplayed them at every turn.

By the way, LaFlamme just nabbed a temporary gig as a “special correspondent” for CityTV, Bell Media’s competitor, covering the funeral of Queen Elizabeth II and the proclamation of King Charles III.  She’s being consummately professional and comforting, garnering a ton of social media support as she eases Canada through the grieving process (our emotional tie to the Crown is another very complex relationship that would require several posts to unpack).  

Well played, Lisa LaFlamme – well played.

Dealing with Daily Doom

“We are Doomed”

The tweet came yesterday from a celebrity I follow. And you know what? I didn’t even bother to find out in which particular way we were doomed. That’s probably because my social media feeds are filled with daily predictions of doom. The end being nigh has ceased to be news. It’s become routine. That is sad. But more than that, it’s dangerous.

This is why Joe Mandese and I have agreed to disagree about the role media can play in messaging around climate change – or, for that matter, any of the existential threats now facing us. Alarmist messaging could be the problem, not the solution.

Mandese ended his post with this:

“What the ad industry really needs to do is organize a massive global campaign to change the way people think, feel and behave about the climate — moving from a not-so-alarmist “change” to an “our house is on fire” crisis.”

Joe Mandese – MediaPost

But here’s the thing. Cranking up the crisis intensity on our messaging might have the opposite effect. It may paralyze us.

Something called “doom scrolling” is now very much a thing. And if you’re looking for doomsday scenarios, the best place to start is the r/collapse subreddit.

In a 30-second glimpse during the writing of this column, I discovered that democracy is dying, America is on the brink of civil war, Russia is turning off the tap on European oil supplies, we are being greenwashed into complacency, the Amazon rainforest may never recover from its current environmental destruction and the “Doomsday” glacier is melting faster than expected. That was all above the fold. I didn’t even have to scroll for this all-you-can-eat buffet of disaster. These were just the appetizers.

There is a reason why social media feeds are full of doom. We are hardwired to pay close attention to threats. This makes apocalyptic prophesying very profitable for social media platforms. As British academic Julia Bell said in her 2020 book, Radical Attention,

“Behind the screen are impassive algorithms designed to ensure that the most outrageous information gets to our attention first. Because when we are enraged, we are engaged, and the longer we are engaged the more money the platform can make from us.”

Julia Bell – Radical Attention

But just what does a daily diet of doom do for our mental health? Does constantly making us aware of the impending end of our species goad us into action? Does it actually accomplish anything?

Not so much. In fact, it can do the opposite.

Mental health professionals are now treating a host of new climate-related conditions, including eco-grief, eco-anxiety and eco-depression. But, perhaps most alarmingly, they are now encountering something called eco-paralysis.

In an October 2020 Time.com piece on doom scrolling, psychologist Patrick Kennedy-Williams, who specializes in treating climate-related anxieties, was quoted: “There’s something inherently disenfranchising about someone’s ability to act on something if they’re exposed to it via social media, because it’s inherently global. There are not necessarily ways that they can interact with the issue.”

So, cranking up the intensity of the messaging on existential threats such as climate change may have the opposite effect, scaring us into doing nothing. This is because of something called the Yerkes-Dodson Law.

[Figure: the Yerkes-Dodson curve. Image: Yerkes and Dodson (1908), via Diamond DM, et al. (2007), “The Temporal Dynamics Model of Emotional Memory Processing: A Synthesis on the Neurobiological Basis of Stress-Induced Amnesia, Flashbulb and Traumatic Memories, and the Yerkes-Dodson Law,” Neural Plasticity, doi:10.1155/2007/60803. Wikimedia Commons, CC0.]

This “law,” described by psychologists Robert Yerkes and John Dodson in 1908, isn’t so much a law as a psychological model. It’s a classic inverted-U curve. On the front end, our performance in responding to a situation increases along with our attention and interest in that situation. But the line does not go straight up. At some point, it peaks and then goes downhill. Interest gives way to anxiety. The more anxious we become, the more our performance is impaired.
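If you want to picture the shape, here is one stylized way to write it – my own sketch of the inverted-U, not Yerkes and Dodson’s original formulation, with purely illustrative symbols:

$$P(a) = P_{\max}\,\exp\!\left(-\frac{(a - a^{*})^{2}}{2\sigma^{2}}\right)$$

Here $a$ is arousal (attention, alarm), $a^{*}$ is the sweet spot where performance $P$ peaks, and $\sigma$ controls how quickly performance collapses once anxiety takes over. Below $a^{*}$, turning up the urgency helps; past it, every additional jolt of alarm costs us.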

When we fret about the future, we are actually grieving the loss of our present. In this process, we must make our way through the 5 stages of grief introduced by psychiatrist Elisabeth Kübler-Ross in 1969 through her work with terminally ill patients. The stages are: Denial, Anger, Bargaining, Depression and Acceptance.

One would think that triggering awareness would help accelerate us through the stages. But there are a few key differences. In dealing with a diagnosis of terminal illness, there is typically one hammer-blow event when you become aware of the situation. From there, the work of dealing with it begins. And – even when it begins – it’s not a linear journey. As anyone who has ever grieved will tell you, what stage you’re in depends on which day you’re asked. You can slip from Acceptance to Anger in a heartbeat.

With climate change, awareness doesn’t come just once. The messaging never ends. It’s a constant cycle of crisis, trapping us in a loop that cycles between denial, depression and despair.

An excellent post on Climateandmind.org about climate grief talks about this cycle and how we get trapped within it. Some of us get stuck in a stage and never move on. Even climate scientist and activist Susanne Moser admits to being trapped in something she calls Functional Denial,

“It’s that simultaneity of being fully aware and conscious and not denying the gravity of what we’re creating (with Climate Change), and also having to get up in the morning and provide for my family and fulfill my obligations in my work.”

Susanne Moser

It’s exactly this sense of frustration I voiced in my previous post. But the answer is not to make me more aware. Like Moser, I’m fully aware of the gravity of the various threats we’re facing. It’s not attention I lack, it’s agency.

I think the time to hope for a more intense form of messaging to prod the deniers into acceptance is long past. If they haven’t changed their minds yet, they ain’t goin’ to!

I also believe the messaging we need won’t come through social media. There’s just too much froth and too much profit in that froth.

What we need – from media platforms we trust – is a frank appraisal of the worst-case scenario of our future. We need to accept that and move on to deal with what is to come. We need to encourage resilience and adaptability. We need hope that while what is to come is most certainly going to be catastrophic, it doesn’t have to be apocalyptic.

We need to know we can survive and start thinking about what that survival might look like.

Does Social Media “Dumb Down” the Wisdom of Crowds?

We assume that democracy is the gold standard of sustainable political social contracts. And it’s hard to argue against that. As Winston Churchill said, “democracy is the worst form of government – except for all the others that have been tried.”

Democracy may not be perfect, but it works. Or, at least, it seems to work better than all the other options. Essentially, democracy depends on probability – on being right more often than we’re wrong.

At the very heart of democracy is the principle of majority rule. And that is based on something called the jury theorem, put forward by the Marquis de Condorcet in his 1785 work, Essay on the Application of Analysis to the Probability of Majority Decisions. Essentially, it says that if each voter is more likely than not to be right, the probability of the group reaching the right decision by majority vote increases as you add more voters. This was the basis of James Surowiecki’s 2004 book, The Wisdom of Crowds.

But here’s the thing about the wisdom of crowds – it only applies when those individual decisions are reached independently. Once we start influencing each other’s decisions, that wisdom disappears. And that makes social psychologist Solomon Asch’s famous conformity experiments of 1951 a disturbingly significant fly in the ointment of democracy.
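Before getting to Asch, a quick numerical sketch makes Condorcet’s point – and the independence caveat – concrete. This is my own toy simulation, not anything from Condorcet or Surowiecki, and the probabilities are arbitrary assumptions: each voter is right 60% of the time, and in the “conforming” case a voter usually just copies a single influencer.

```python
# Toy simulation of Condorcet's jury theorem (illustrative assumptions only):
# each voter is independently right with probability p, except that with
# probability rho a voter simply copies one "influencer" instead of judging.
import random

def majority_accuracy(n_voters, p=0.6, rho=0.0, trials=20_000):
    """Return the fraction of trials in which the majority vote is correct."""
    correct = 0
    for _ in range(trials):
        influencer_right = random.random() < p           # the voice everyone might copy
        votes = [
            influencer_right if random.random() < rho    # conform to the influencer
            else random.random() < p                     # independent judgment
            for _ in range(n_voters)
        ]
        if sum(votes) > n_voters / 2:                    # True votes = correct votes
            correct += 1
    return correct / trials

for n in (1, 11, 101):
    print(f"{n:>3} voters: independent {majority_accuracy(n):.2f}, "
          f"conforming {majority_accuracy(n, rho=0.8):.2f}")
```

With independent voters, the majority is right more and more often as the group grows – roughly 60% of the time for a single voter, climbing toward near-certainty by the time you have a hundred. Let those same voters copy a single influencer most of the time and the group is no wiser than that one voice. Which brings us back to Asch.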

You’re probably all aware of the seminal study, but I’ll recap anyway. Asch gathered groups of people and showed them two cards: one with a single reference line, and one with three comparison lines of obviously different lengths. Then he asked participants which of the three was closest in length to the reference line. The answer was obvious – even a toddler can get this test right pretty much every time.

But unknown to the test subject, all the rest of the participants were “stooges” – actors paid to sometimes give an obviously incorrect answer. And when this happened, Asch was amazed to find that the test subjects often went against the evidence of their own eyes just to conform with the group. On the trials where wrong answers were given, subjects conformed about a third of the time; 75% of subjects conformed at least once, and only 25% consistently stuck to the evidence in front of them and gave the right answer.

The results baffled Asch. The most interesting question to him was why this was happening. Were people making a decision to go against their better judgment – choosing to go with the crowd rather than what they were seeing with their own eyes? Or was something happening below the level of consciousness? This was something Solomon Asch wondered about right until his death in 1996. Unfortunately, he never had the means to explore the question further.

But, in 2005, a group of researchers at Emory University, led by Gregory Berns, did have a way. Asch’s experiment was restaged, only this time participants were in an fMRI machine so Berns and his researchers could peek at what was actually happening in their brains. The results were staggering.

They found that conformity actually changes the way our brain works. It’s not that we change what we say to conform with what others are saying, despite what we see with our own eyes. What we see is changed by what others are saying.

If, Berns and his researchers reasoned, you were consciously making a decision to go against the evidence of your own eyes just to conform with the group, you should see activity in the frontal areas of the brain that are engaged in monitoring conflicts, planning and other higher-order mental activities.

But that isn’t what they found. In the participants who went along with obviously incorrect answers from the group, activity showed up only in the posterior parts of the brain – those that control spatial awareness and visual perception. There was no indication of an internal mental conflict. The brain was actually changing how it processed the information it was receiving from the eyes.

This is stunning. It means that conformity isn’t a conscious decision. Our desire to conform is wired so deeply in our brains, it actually changes how we perceive the world. We never have the chance to be objectively right, because we never realize we’re wrong.

But what about those who resisted conformity and stuck to the evidence they were seeing with their own eyes? Here again, the results were fascinating. The researchers found that in these cases, there was a spike of activity in the right amygdala and right caudate nucleus – areas involved in the processing of strong emotions, including fear, anger and anxiety. Those who stuck to the evidence of their own eyes had to overcome emotional hurdles to do so. In the published paper, the authors called this the “pain of independence.”

This study highlights a massively important limitation in the social contract of democracy. As technology increasingly imposes social conformity on our culture, we lose the ability to collectively make the right decision. Essentially, it shows that this effect not only erases the wisdom of crowds, but actively works against it by exacting an emotional price for being an independent thinker.

Minority Report Might Be Here — 30 Years Early

“Sometimes, in order to see the light, you have to risk the dark.”

Iris Hineman – 2002’s Minority Report

I don’t usually look to Hollywood for deep philosophical reflection, but today I’m making an exception. Steven Spielberg’s 2002 film Minority Report is balanced on some fascinating ground, ethically speaking. For me, it brought up a rather interesting question: could you get a clear enough picture of someone’s mental state through their social media feed to predict pathological behavior? And – even if you could – should you?

If you’re not familiar with the movie, here is the background to this question. In the year 2054, three individuals possess a psychic ability to see events in the future, primarily premeditated murders. These individuals are known as Precognitives, or Precogs. Their predictions are used to set up a PreCrime Division in Washington, DC, where suspects are arrested before they can commit the crime.

Our Social Media Persona

A persona is a social façade – a mask we don that portrays a role we play in our lives. For many of us that now includes the digital stage of social media. Here too we have created a persona, where we share the aspects of ourselves that we feel we need to put out there on our social media platform of choice.

What may surprise us, however, is that even though we supposedly have control over what we share, even that reveals a surprising amount about who we are – both intentionally and unintentionally. And, if those clues are troubling, does our society have a responsibility – or the right – to proactively reach out?

In a commentary published in the American Journal of Psychiatry, Dr. Shawn McNeil said of social media,

“Scientists should be able to harness the predictive potential of these technologies in identifying those most vulnerable. We should seek to understand the significance of a patient’s interaction with social media when taking a thorough history. Future research should focus on the development of advanced algorithms that can efficiently identify the highest-risk individuals.”

Dr. Shawn McNeil

Along this theme, a 2017 study (Liu & Campbell) found that where we fall on the so-called “Big Five” personality traits – neuroticism, extraversion, openness, agreeableness and conscientiousness – as well as the “Big Two” metatraits – plasticity and stability – can be a pretty accurate predictor of how we use social media.

But what if we flip this around?  If we just look at a person’s social media feed, could we tell what their personality traits and metatraits are with a reasonable degree of accuracy? Could we, for instance, assess their mental stability and pick up the warning signs that they might be on the verge of doing something destructive, either to themselves or to someone else? Following this logic, could we spot a potential crime before it happens?

Pathological Predictions

Police are already using social media to track suspects and find criminals. But this is typically applied after the crime has occurred. For instance, police departments regularly scan social media using facial recognition technology to track down suspects. They comb a suspect’s social media feeds to establish whereabouts and gather evidence. Of course, you can only scan social content that people are willing to share. But when these platforms are as ubiquitous as they are, it’s constantly astounding that people share as much as they do, even when they’re on the run from the law.

There are certainly ethical questions about mining social media content for law enforcement purposes. For example, facial recognition algorithms tend to have flaws when it comes to false positives with those of darker complexion, leading to racial profiling concerns. But at least this activity tries to stick with the spirit of the tenet that our justice system is built on: you are innocent until proven guilty.

There must be a temptation, however, to go down the same path as Minority Report and try to pre-empt crime – by identifying a “Precrime”.

Take a school shooting, for example. In the May 31 issue of Fortune, senior technology journalist Jeremy Kahn asked this question: “Could A.I. prevent another school shooting?” In the article, Kahn referenced a study in which a team at the Cincinnati Children’s Hospital Medical Center used artificial intelligence software to analyze transcripts of teens who went through a preliminary interview with psychiatrists. The goal was to see how closely the algorithm’s assessment of whether a subject had a propensity to commit violence matched the more extensive assessments done by trained psychiatrists. They found that the assessments matched about 91% of the time.

I’ll restate that so the point hits home: An A.I. algorithm that scanned a preliminary assessment could match much more extensive assessments done by expert professionals 9 out of 10 times –  even without access to the extensive records and patient histories that the psychiatrists had at their disposal.

Let’s go one step further and connect those two dots: If social media content could be used to identify potentially pathological behaviors, and if an AI could then scan that content to predict whether those behaviors could lead to criminal activities, what do we do with that?

It puts us squarely on a very slippery slope, but we have to acknowledge that we are getting very close to a point where technology forces us to ask a question we’ve never been able to ask before: “If we could – with a reasonable degree of success – prevent violent crimes that haven’t happened yet, should we?”

Memories Made by Media

If you said the year 1967 to me, the memory that would pop into my head would be of Haight-Ashbury (ground zero for the counterculture movement), hippies and the Summer of Love. In fact, that same memory would effectively stand in for the period from 1967 to 1969. In my mind, those three years were variations on the theme of Woodstock, the iconic music festival of 1969.

But none of those are my memories. I was alive, but my own memories of that time are indistinct and fuzzy. I was only six that year and lived in Alberta, some 1,300 miles from the intersection of Haight and Ashbury Streets, so I have discarded my own personal memories as representative of the era. The ones I have were all created by images that came via media.

The Swapping of Memories

This is an example of the two types of memories we have – personal or “lived” memories and collective memories. Collective memories are the memories we get from outside, either from other people or, in my example, from media. As we age, there tends to be a flow back and forth between these two types of memories, with one type coloring the other.

One group of academics proposed an hourglass model as a working metaphor to understand this continuous exchange of memories – with some flowing one way and others flowing the other.  Often, we’re not even aware of which type of memory we’re recalling, personal or collective. Our memories are notoriously bad at reflecting reality.

What is true, however, is that our personal memories and our collective memories tend to get all mixed up. The lower our confidence in our personal memories, the more we tend to rely on collective memories. For periods before we were born, we rely solely on images we borrow.

Iconic Memories

What is true for all memories, ours or the ones we borrow from others, is we put them through a process called “leveling and sharpening.” This is a type of memory consolidation where we throw out some of the detail that is not important to us – this is leveling – and exaggerate other details to make it more interesting – i.e. sharpening.

Take my borrowed memories of 1967, for example. There was a lot more happening in the world than whatever was happening in San Francisco during the Summer of Love, but I haven’t retained any of it in my representative memory of that year. For example, there was a military coup in Greece, the first successful human heart transplant, the creation of the Corporation for Public Broadcasting, a series of deadly tornadoes in Chicago, and Typhoon Emma, which left 140,000 people homeless in the Philippines. But none of that made it into my memory of 1967.

We could call the memories we do keep “iconic” – which simply means we choose symbols to represent a much bigger and more complex reality – like everything that happened in a 365-day stretch five and a half decades ago.

Mass Manufactured Memories

Something else happens when we swap our own personal memories for collective memories – we find much more commonality in our memories. The more removed we become from our own lived experiences, the more our memories become common property.

If I asked you to say the first thing that comes to mind about 2002, you would probably look back through your own personal memory store to see if there was anything there. Chances are it would be a significant event from your own life, and this would make it unique to you. If we had a group of 50 people in a room and I asked that question, I would probably end up with 50 different answers.

But if I asked that same group what first comes to mind when I say the year 1967, we would find much more common ground. And that ground would probably be defined by how each of us identifies ourselves. Some of you might have the same iconic memory that I do – Haight-Ashbury and the Summer of Love. Others may have picked the Vietnam War as the iconic memory from that year. But I would venture to guess that in our group of 50, we would end up with only a handful of answers.

When Memories are Made of Media

I am taking this walk down Memory Lane because I want to highlight how much we rely on the media to supply our collective memories. This dependency is critical, because once media images are processed by us and become part of our collective memories, they hold tremendous sway over our beliefs. These memories become the foundation for how we make sense of the world.

This is true for all media, including social media. A study in 2018 (Birkner & Donk) found that “alternative realities” can be formed through social media to run counter to collective memories formed from mainstream media. Often, these collective memories formed through social media are polarized by nature and are adopted by outlier fringes to justify extreme beliefs and viewpoints. This shows that collective memories are not frozen in time but are malleable – continually being rewritten by different media platforms.

Like most things mediated by technology, collective memories are splintering into smaller and smaller groupings, just like the media that are instrumental in their formation.

Sarcastic Much?

“Sarcasm is the lowest form of wit, but the highest form of intelligence.”

Oscar Wilde

I fear the death of sarcasm is nigh. The alarm bells started going off when I saw a tweet from John Cleese that referenced a bit from “The Daily Show.” In it, Trevor Noah used sarcasm to run circles around the logic of Supreme Court Justice Brett Kavanaugh, who had opined that Roe v. Wade should be overturned, essentially booting the question down to the state level to decide.

Against my better judgement, I started scrolling through the comments on the thread — and, within the first couple, found that many of those commenting had completely missed Noah’s point. They didn’t pick up on the sarcasm — at all. In fact, to say they missed the point is like saying Columbus “missed” India. They weren’t even in the same ocean. Perhaps not the same planet.

Sarcasm is my mother tongue. I am fluent in it, and I’m very comfortable with it. I tend to get nervous in overly sincere environments.

I find sarcasm requires almost a type of meta-cognition, where you have to be able to mentally separate the speaker’s intention from what they’re saying. If you can hold the two apart in your head, you can truly appreciate the art of sarcasm. It’s this finely balanced and recurrent series of contradictions — with tongue firmly placed in cheek — that makes sarcasm so potentially powerful. As used by Trevor Noah, it allows us to air out politically charged issues and consider them at a mental level at least one step removed from our emotional gut reactions.

As Oscar Wilde knew — judging by his quote at the beginning of the post — sarcasm can be a nasty form of humor, but it does require some brain work. It’s a bit of a mental puzzle, forcing us to twist an issue in our heads like a cognitive Rubik’s Cube, looking at it from different angles. Because of this, it’s not for everyone. Some people are just too earnest (again, with a nod to Mr. Wilde) to appreciate sarcasm.

The British excel at sarcasm. John Cleese is a high priest of sarcasm. That’s why I follow him on Twitter. Wilde, of course, turned sarcasm into art. But as Ricky Gervais (who has his own black belt in sarcasm) explains in this piece for Time, sarcasm — and, to be more expansive, all types of irony — have been built into the British psyche over many centuries. This isn’t necessarily true for Americans. 

“There’s a received wisdom in the U.K. that Americans don’t get irony. This is of course not true. But what is true is that they don’t use it all the time. It shows up in the smarter comedies but Americans don’t use it as much socially as Brits. We use it as liberally as prepositions in everyday speech. We tease our friends. We use sarcasm as a shield and a weapon. We avoid sincerity until it’s absolutely necessary. We mercilessly take the piss out of people we like or dislike basically. And ourselves. This is very important. Our brashness and swagger is laden with equal portions of self-deprecation. This is our license to hand it out.”

Ricky Gervais – Time, November 9, 2011

That was written just over a decade ago. I believe it’s even more true today. If you choose to use sarcasm in our age of fake news and social media, you do so at your peril. Here are three reasons why:

First, as Gervais points out, sarcasm doesn’t play equally across all cultures. Americans — as one example — tend to be more sincere and, as such, take many things meant as sarcastic at face value. Sarcasm might hit home with a percentage of a U.S. audience, but it will go over a lot of American heads. It’s probably not a coincidence that many of those heads might be wearing MAGA hats.

Also, sarcasm can be fatally hamstrung by our TL;DR rush to scroll to the next thing. Sarcasm typically saves its payoff until the end. It intentionally creates a cognitive gap, and you have to be willing to stay with it to realize that someone is, in the words of Gervais, taking the “piss out of you.” Bail too early and you might never recognize it as sarcasm. I suspect more than a few of those who watched Trevor Noah’s piece didn’t stick through to the end before posting a comment.

Finally, and perhaps most importantly, social media tends to strip sarcasm of its context, leaving it hanging out there to be misinterpreted. If you are a regular watcher of “The Daily Show with Trevor Noah,” or “Last Week Tonight with John Oliver,” or even “Late Night with Seth Meyers” (who is one American that’s a master of sarcasm), you realize that sarcasm is part and parcel of it all. But when you repost any bit from any of these shows to social media, moving it beyond its typical audience, you have also removed all the warning signs that say “warning: sarcastic content ahead.” You are leaving the audience to their own devices to “get it.” And that almost never turns out well on social media.

You may say that this is all for the good. The world doesn’t really need more sarcasm. An academic study found that sarcastic messages can be more hurtful to the recipient than a sincere message. Sarcasm can cut deep, and because of this, it can lead to more interpersonal conflict.

But there’s another side to sarcasm. That same study also found that sarcasm can require us to be more creative. The mental mechanisms you use to understand sarcasm are the very same ones we need to use to be more thoughtful about important issues. It de-weaponizes these issues by using humor, while it also forces us to look at them in new ways.

Personally, I believe our world needs more Trevor Noahs, John Olivers and Seth Meyers. Sarcasm, used well, can make us a little smarter, a little more open-minded, and — believe it or not — a little more compassionate.

Using Science for Selling: Sometimes Yes, Sometimes No

A recent study out of Ohio State University seems like one of those that the world really didn’t need. The researchers were exploring whether introducing science into marketing would help sell chocolate chip cookies.

And to those of us who make a living in marketing, this is one of those things that might make us say, “Duh, you needed research to tell us that? Of course you don’t use science to sell chocolate chip cookies!”

But bear with me, because if we keep asking why enough, we can come up with some answers that might surprise us.

So, what did the researchers learn? I quote,

“Specifically, since hedonic attributes are associated with warmth, the coldness associated with science is conceptually disfluent with the anticipated warmth of hedonic products and attributes, reducing product valuation.”

Ohio State Study

In other words – much simpler and fewer in number – science doesn’t help sell cookies. And it’s because our brains think differently about some things than others.

For example, a study published in the journal Computers in Human Behavior (Casado-Aranda, Sanchez-Fernandez and Garcia) found that when we’re exposed to “hedonic” ads – ads that appeal to pleasurable sensations – the parts of our brain that retrieve memories kick in. This isn’t true when we see utilitarian ads. Predictably, we approach those ads as a problem to be solved and engage the parts of our brain that control working memory and the ability to focus our attention.

Essentially, these two advertising approaches take two different paths in our awareness, one takes the “thinking” path and one takes the “feeling” path. Or, as Nobel Laureate Daniel Kahneman would say, one takes the “thinking slow” path and one takes the “thinking fast” path.

Yet another study begins to show why this may be so. Let’s go back to chocolate chip cookies for a moment. When you smell a fresh baked cookie, it’s not just the sensory appeal “in the moment” that makes the cookie irresistible. It’s also the memories it brings back for you. We know that how things smell is a particularly effective way to trigger this connection with the past. Certain smells – like that of cookies just out of the oven – can be the shortest path between today and some childhood memory. These are called associative memories. And they’re a big part of “feeling” something rather than just “thinking” about it.

At the University of California, Irvine, neuroscientists discovered a very specific type of neuron in our memory centers that oversees the creation of new associative memories. They’re called “fan cells,” and it seems these neurons are responsible for creating the link between new input and those emotion-inducing memories that we may have tucked away from our past. And – critically – it seems that dopamine is the key to linking the two. When our brain “smells” a potential reward, it kicks these fan cells into gear and bathes itself in the “warm fuzzies.” Lead researcher Kei Igarashi said,

“We never expected that dopamine is involved in the memory circuit. However, when the evidence accumulated, it gradually became clear that dopamine is involved. These experiments were like a detective story for us, and we are excited about the results.”

Kei Igarashi – University of California – Irvine

Not surprisingly – as our first study found – introducing science into this whole process can be a bit of a buzz kill. It would be like inviting Bill Nye the Science Guy to teach you about quantum physics during your Saturday morning cuddle time.

All of this probably seems overwhelmingly academic to you. Selling something like chocolate chip cookies shouldn’t take three different scientific studies and strapping several people inside an fMRI machine to explain. We should be able to rely on our guts, and our guts know that science has no place in a campaign built on an emotional appeal.

But there is a point to all this. Different marketing approaches are handled by different parts of the brain, and knowing that allows us to reinforce our marketing intuition with a better understanding of why we humans do the things we do.

Utilitarian appeals activate the parts of the brain that are front and center, the data crunching, evaluating and rational parts of our cognitive machinery.

Hedonic appeals probe the subterranean depths of our brains, unpacking memories and prodding emotions below the threshold of conscious awareness. We respond viscerally – which literally means “from our guts.”

If we’re talking about selling chocolate chip cookies, we have moved about as far towards the hedonic end of the scale as we can. At the other end we would find something like motor oil – where scientific messaging such as “advanced formulation” or “proven engine protection” would be more persuasive. But almost all other products fall somewhere in between. They are a mix of hedonic and utilitarian factors. And we haven’t even factored in the most significant of all consumer considerations – risk and how to avoid it. Think how complex things would get in our brains if we were buying a new car!

Buying chocolate chip cookies might seem like a no-brainer – because – well – it almost is. Beyond dosing our neural pathways with dopamine, our brains barely kick in when considering whether to grab a bag of Chips Ahoy on our next trip to the store. In fact, the last thing you want your brain to do when you’re craving chewy chocolate is to kick in. Then you would start considering things like caloric intake and how you should be cutting down on processed sugar. Chocolate chip cookies might be a no-brainer, but almost nothing else in the consumer world is that simple.

Marketing is relying more and more on data. But data is typically restricted to answering “who”, “what”, “when” and “where” questions. It’s studies like the ones I shared here that start to pick apart the “why” of marketing.

And when things get complex, asking “why” is exactly what we need to do.

Making Time for Quadrant Two

Several years ago, I read Stephen Covey’s “The 7 Habits of Highly Effective People.” It had a lasting impact on me. Through my life, I have found myself relearning those lessons over and over again.

One of them was the four quadrants of time management. How we spend our time in these quadrants determines how effective we are.

Imagine a box split into four quarters. In the upper left quadrant, we’ll put a label: “Important and Urgent.” Next to it, in the upper right, we’ll put a label saying “Important But Not Urgent.” The label for the lower left is “Urgent but Not Important.” And the last quadrant — in the lower right — is labeled “Neither Important nor Urgent.”

The upper left quadrant — “Important and Urgent” — is our firefighting quadrant. It’s the stuff that is critical and can’t be put off, the emergencies in our life.

We’ll skip over quadrant two — “Important But Not Urgent” — for a moment and come back to it.

In quadrant three — “Urgent But Not Important” — are the interruptions that other people bring to us. These are the times we should say, “That sounds like a you problem, not a me problem.”

Quadrant four is where we unwind and relax, occupying our minds with nothing at all in order to give our brains and body a chance to recharge. Bingeing Netflix, scrolling through Facebook or playing a game on our phones all fall into this quadrant.

And finally, let’s go back to quadrant two: “Important But Not Urgent.” This is the key quadrant. It’s here where long-term planning and strategy live. This is where we can see the big picture.

The secret of effective time management is finding ways to shift time spent from all the other quadrants into quadrant two. It’s managing and delegating emergencies from quadrant one, so we spend less time fire-fighting. It’s prioritizing our time above the emergencies of others, so we minimize interruptions in quadrant three. And it’s keeping just enough time in quadrant four to minimize stress and keep from being overwhelmed.

The lesson of the four quadrants came back to me when I was listening to an interview with Dr. Sandro Galea, epidemiologist and author of “The Contagion Next Time.” Dr. Galea was talking about how our health care system responded to the COVID pandemic. The entire system was suddenly forced into quadrant one. It was in crisis mode, trying desperately to keep from crashing. Galea reminded us that we were forced into this mode despite there being hundreds of lengthy reports from previous pandemics — notably the SARS crisis — containing thousands of suggestions that could have helped to partially mitigate the impact of COVID.

Few of those suggestions were ever implemented. Our health care system, Galea noted, tends to continually lurch back and forth within quadrant one, veering from crisis to crisis. When a crisis is over, rather than go to quadrant two and make the changes necessary to avoid similar catastrophes in the future, we put the inevitable reports on a shelf where they’re ignored until it is — once again — too late.

For me, that paralleled a theme I have talked about often in the past — how we tend to avoid grappling with complexity. Quadrant two stuff is, inevitably, complex in nature. The quadrant is jammed with what we call wicked problems. In a previous column, I described these as, “complex, dynamic problems that defy black-and-white solutions. These are questions that can’t be answered by yes or no — the answer always seems to be maybe.  There is no linear path to solve them. You just keep going in loops, hopefully getting closer to an answer but never quite arriving at one. Usually, the optimal solution to a wicked problem is ‘good enough — for now.’”

That’s quadrant two in a nutshell. Quadrant-one problems must be triaged into a sort of false clarity. You have to deal with the critical stuff first. The nuances and complexity are, by necessity, ignored. That all gets pushed to quadrant two, where we say we will deal with it “someday.”

Of course, someday never comes. We either stay in quadrant one, are hijacked into quadrant three, or collapse through sheer burn-out into quadrant four. The stuff that waits for us in quadrant two is just too daunting to even consider tackling.

This has direct implications for technology and every aspect of the online world. Our industry, because of its hyper-compressed timelines and the huge dollars at stake, seems firmly lodged in the urgency of quadrant one. Everything on our to-do list tends to be a fire we have to put out. And that’s true even if we only consider the things we intentionally plan for. When we factor in the unplanned emergencies, quadrant one is a time-sucking vortex that leaves nothing for any of the other quadrants.

But there is a seemingly infinite number of quadrant two things we should be thinking about. Take social media and privacy, for example. When an online platform has a massive data breach, that is a classic quadrant one catastrophe. It’s all hands on deck to deal with the crisis. But all the complex questions around what our privacy might look like in a data-inundated world fall into quadrant two. As such, they are things we don’t think much about. They’re important, but not urgent.

Quadrant two thinking is systemic thinking, long-term and far-reaching. It allows us to build the foundations that help to mitigate crises and minimize unintended consequences.

In a world that seems to rush from fire to fire, it is this type of thinking that could save our asses.