Why Is Everything Now ‘Unprecedented’?

Just once, I would like to get through one day without hearing the word “unprecedented.” And I wonder, is that just the media trying to get a click, or is the world truly that terrible?

Take the Olympics. In my lifetime, I’ve never seen an Olympics like this one. Empty stands. Athletes having to leave within 48 hours of their last event. Opening and closing ceremonies unlike anything we have ever seen. It’s, well — unprecedented.

The weather is unprecedented. What is happening in politics is unprecedented. The pandemic is unprecedented, at least in our lifetimes. I don’t know about you, but I feel like I’m watching a blockbuster where the world will eventually end — but we just haven’t got to that part of the movie yet. I feel the palpable sensation of teetering on the edge of a precipice. And I’m pretty sure it’s happened before.

Take the lead-ups to the two world wars, for example. If you plot a timeline of the events that led to either July 28, 1914 or Sept. 1, 1939, there is a noticeable acceleration of momentum. At first, the points on the timeline are spread apart, giving the world a chance to once again catch its collective breath. But as we get closer and closer to those dates circled in red, things pick up. There are cascades of events that eventually lead to the crisis point. Are we in the middle of such a cascade?

Part of this might just be network knock-on effects that happen in complex environments. But I also wonder if we’ve just become a little shell-shocked, nudged into a numb acceptance of things we once would have found intolerable.

Author and geographer Jared Diamond calls this “creeping normality.” In his book “Collapse: How Societies Choose to Fail or Succeed,” he used the example of the deforestation and environmental degradation that happened on Easter Island — and how, despite the impending doom, the natives still decided to chop down the last tree: “I suspect, though, that the disaster happened not with a bang but with a whimper. After all, there are those hundreds of abandoned statues to consider. The forest the islanders depended on for rollers and rope didn’t simply disappear one day—it vanished slowly, over decades.”

Creeping normality continually and imperceptibly nudges us from the unacceptable to the acceptable and we don’t even notice it’s happening. It’s a cognitive bias that keeps us from seeing reality for what it is. Creeping normality is what happens when our view of the world comes through an Overton Window.

I have mentioned the concept of the Overton Window before. The term was introduced by political analyst Joseph Lehman, who named it after his colleague, Joseph Overton. It was initially coined to show that the range of political policies the public finds acceptable will shift over time. What was once considered unthinkable can eventually become acceptable or even popular, given the shifting sensitivities of the public. As an example, the antics of Donald Trump would once have been considered unacceptable in any public venue — but as our reality shifted, we saw them become mainstream behavior from an American president.

I suspect that the media does the same thing with our perception of the world in general. The news media demands the exceptional. We don’t click on “ordinary.” So it consistently shifts our Overton Window of what we pay attention to, moving us toward the outrageous. Things that once would have caused riots are now greeted with a yawn. This is combined with the unrelenting pace of the news cycle. What was outrageous yesterday slips from view, replaced by what is outrageous today.

And while I’m talking about outrageous, let’s look at the root of that term. The whole point of something being outrageous is to prompt us into being outraged — or moved enough to take action. And, if our sensitivity to outrage is constantly being numbed, we are no longer moved enough to act.

When we become insensitive to things that are unprecedented, we’re in a bad place. Our trust in information is gone. We seek information that comforts us that the world is not as bad as we think it is. And we ignore the red flags we should be paying attention to.

If you look at the lead-ups to both world wars, you see this same pattern. Things that happened regularly in 1914 or 1939, just before the outbreak of war, would have been unimaginable just a few years earlier. The momentum of mayhem picked up as the world raced to follow a rapidly moving Overton Window. Soon, before we knew it, all hell broke loose and the world was left with only one alternative: going to war.

An Overton Window can just happen, or it can be intentionally planned. Politicians from the fringes, especially the right, have latched on to the Window, taking something intended to be an analysis and turning it into a strategy. They now routinely float “policy balloons” that they know are on the fringe, hoping to trigger a move in our Window to either the right or left. Over time, they can use this strategy to introduce legislation that would once have been vehemently rejected.

The danger in all this is the embedding of complacency. Ultimately, our willingness to take action against threats is all that keeps our society functioning. Whether it’s our health, our politics or our planet, we have to be moved to action before it’s too late.

When the last tree falls on Easter Island, we don’t want to be the ones with the axe in our hands.

A Hybrid Work Approach To Creativity

Last week I introduced the concept of burstiness, meaning the bursts of creativity that can happen when a group is on a roll.

Burstiness requires trust: a connection in the group that creates psychological safety. But I would go one step further. It also requires respect — an intuitive acknowledgement of the value of contribution from everyone in the group. It’s a type of recursive high that builds on itself, as each contribution sparks something else from the group. It’s like the room has caught fire and, as the burstiness continues, everyone tries to add to the flames.

We’ve used jazz as an example of burstiness. But there are other great examples, like theater improv. Research has found that the brain actually changes how it acts when it’s engaged in these activities, according to a Psychology Today article.

A 2008 fMRI study found that different parts of the brain lit up when musicians improvised rather than just playing scales. The brain shifted into a different gear. The dorsolateral prefrontal cortex decreased in activity, and the medial prefrontal cortex increased. This is a fascinating finding, because the dorsolateral prefrontal cortex is the part of the brain where we look at ourselves critically, and the medial prefrontal cortex is linked with language and creativity. A follow-up study was done on improv actors, and the findings were remarkably similar.

This modality of the brain is important to understand. If we can create the conditions that lead to creativity, magic can happen.

Also, this is a team sport. Creativity is almost never exclusively a solo pursuit.

In 1995, Alfonso Montuori and Ronald Purser wrote an essay deconstructing the myth of the lone genius. In it, they showed that creativity almost always relies on social interaction. There is a system of creativity, an ecology that creates the conditions necessary for inspiration.

We love the story of the eccentric solitary genius toiling away in a loft somewhere, but it almost never happens that way. Da Vinci and Michelangelo had “schools” of apprentices that helped turn out their masterpieces. Mozart was a pretty social guy whose creativity fed off interactions with his court patrons and other composers of the era.

But we also have to understand that a little creative magic can go a long way. You don’t have to be 100% creative all the time. In a corporate setting, creativity is a spark. Then there is a lot of non-creative work required to fan it into a flame.

Given this, perhaps the advent of hybrid virtual-traditional workplace models might be a suitable fit for encouraging inspiration — if we use them correctly and don’t try to force-fit our intentions into the wrong workplace framework.

A virtual work-from-home environment is great for efficiency and getting stuff done. Our boss isn’t hovering over our cubicle asking us if we “have a second” to discuss whatever happens to be on his mind at this particular moment. We’re not wasting hours in tedious, unproductive meetings or on a workplace commute.

On the flip side, if creativity is our goal, there is no substitute for being “in the room where it happens.” A full bandwidth of human interaction is required for the psychological safety we need to take creative risks. These creative summits need to be in person and carefully constructed to provide the conditions needed for creativity. Interdisciplinary and diverse teams who know and trust each other implicitly need to be physically brought together for “improv” sessions. The rules of engagement should be clearly understood.

And unless bosses can participate fully “in kind” (a great example of this is Trevor Noah in the “Daily Show” example I mentioned last week from Adam Grant’s “Worklife” podcast), they should stay the hell out of the room.

Be ruthless about limiting attendance for creative sessions to just those who bring something to the table and have already built a psychological “safe space” with each other through face-to-face connections. Just one wrong person in the room can short-circuit the entire exercise.

This hybrid model doesn’t allow for the serendipity of creativity — that chance interaction in the lunchroom or the offhand comment that is the first domino to fall in an inspirational chain reaction. It also puts a constrained timeline on creativity, forcing it into specific squares on a calendar. But at least it recognizes the unique prerequisites of creativity and addresses them in an honest manner.

One last thought on creativity. Again, we go back to Anita Williams Woolley, the Carnegie Mellon professor who first identified “burstiness.” In a 2018 study with Christopher Riedl, she shows that even with a remote workplace, “bursty” communications can lead to more innovative teams.

“People often think that constant communication is most effective, but actually, we find that bursts of rapid communication, followed by longer periods of silence, are telltale signs of successful teams,” she notes.

This communication template mimics the hybrid model I mentioned before. It compartmentalizes our work activities, adopting communication styles that best suit the different modalities required: the effectiveness of collaboration and innovation, and the efficiency of getting the work done. Woolley suggests using a synchronous form of communication for the “bursts” — perhaps even the old-fashioned phone. And then leave everybody alone for a period of radio silence and let them get their work done.
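
That rhythm can actually be measured. Here is a minimal sketch in Python of one common yardstick from the complex-systems literature, the Goh-Barabási burstiness parameter (my choice of measure for illustration, not necessarily the one Riedl and Woolley used), applied to invented message timestamps:

```python
# A sketch of the Goh-Barabasi burstiness parameter, B = (s - m)/(s + m),
# where m and s are the mean and standard deviation of the gaps between
# messages. B nears +1 for bursty traffic (rapid volleys separated by long
# silences), sits near 0 for random traffic, and hits -1 for a perfectly
# steady drip. All timestamps below are invented for illustration.
from statistics import mean, stdev

def burstiness(timestamps):
    """Return B for a chronologically sorted list of timestamps (minutes)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mu, sigma = mean(gaps), stdev(gaps)
    return (sigma - mu) / (sigma + mu)

# Two rapid exchanges separated by hours of silence...
bursty_team = [0, 1, 2, 3, 4, 240, 241, 242, 243, 480]
# ...versus a message every 50 minutes, all day long.
steady_team = list(range(0, 500, 50))

print(f"bursty team: B = {burstiness(bursty_team):+.2f}")  # roughly +0.3
print(f"steady team: B = {burstiness(steady_team):+.2f}")  # -1.00
```

By a measure like this, the “telltale sign” Woolley describes is simply a strongly positive B: tight volleys of rapid, synchronous communication, then silence while the work gets done.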

Why Our Brains Struggle With The Threat Of Data Privacy

It seems contradictory: we don’t want to share our personal data but, according to a recent study reported on by MediaPost’s Laurie Sullivan, we want the brands we trust to know us when we come shopping.

But it’s not — really. It ties in with the way we’ve always thought.

Again, we just have to understand that we really don’t understand how the data ecosystem works — at least, not on an instant and intuitive level. Our brains have no evolved mechanisms to deal with new concepts like data privacy. So we have borrowed other parts of the brain that do exist. Evolutionary biologists call this “exaptation.”

For example, the way we deal with brands seems to be the same way we deal with people — and we have tons of experience doing that. Some people we trust. Most people we don’t. For the people we trust, we have no problem sharing something of ourselves. In fact, it’s exactly that sharing that nurtures relationships and helps them grow.

It’s different with people we don’t trust. Not only do we not share with them, we work to avoid them, putting physical distance between us and them. We’d cross to the other side of the street to avoid bumping into them.

In a world that was ordered and regulated by proximity, this worked remarkably well. Keeping our enemies at arm’s length generally kept us safe from harm.

Now, of course, distance doesn’t mean the same thing it used to. We now maneuver in a world of data, where proximity and distance have little impact. But our brains don’t know that.

As I said, the brain doesn’t really know how digital data ecosystems work, so it does its best to substitute concepts it has evolved to handle those it doesn’t understand at an intuitive level.

The proxy for distance the brain seems to use is task focus. If we’re trying to do something, everything related to that thing is “near” and everything not relevant to it is “far.” But this is an imperfect proxy at best and an outright misleading one at worst.

For example, we will allow our data to be collected in order to complete the task. The task is “near.” In most cases, the data we share has little to do with the task we’re trying to accomplish. It is labelled by the brain as “far” and therefore poses no immediate threat.

It’s a bait-and-switch tactic that data harvesters have perfected. Our trust-warning systems are not engaged because there are no proximate signs to trigger them. Any potential breaches of trust happen well after the fact – if they happen at all. Most times, we’re simply not aware of where our data goes or what happens to it. All we know is that allowing that data to be collected takes us one step closer to accomplishing our task.

That’s what sometimes happens when we borrow one evolved trait to deal with a new situation:  The fit is not always perfect. Some aspects work, others don’t.

And that is exactly what is happening when we try to deal with the continual erosion of online trust. In the moment, our brain is trying to apply the same mechanisms it uses to assess trust in a physical world. What we don’t realize is that we’re missing the warning signs our brains have evolved to intuitively look for.

We also drag this evolved luggage with us when we’re dealing with our favorite brands. One of the reasons you trust your closest friends is that they know you inside and out. This intimacy is a product of a physical world. It comes from sharing the same space with people.

In the virtual world, we expect the brands we know and love to have this same knowledge of us. It frustrates us when we are treated like a stranger. Think of how you would react if the people you love the most gave you the same treatment.

This jury-rigging of our personal relationship machinery to do double duty for the way we deal with brands may sound far-fetched, but marketing brands have only been around for a few hundred years. That is just not enough time for us to evolve new mechanisms to deal with them.

Yes, the rational, “slow loop” part of our brains can understand brands, but the “fast loop” has no “brand” or “data privacy” modules. It has no choice but to use the functional parts it does have.

As I mentioned in a previous post, there are multiple studies that indicate that it’s these parts of our brain that fire instantly, setting the stage for all the rationalization that will follow. And, as our own neuro-imaging study showed, it seems that the brain treats brands the same way it treats people.

I’ve been watching this intersection between technology and human behaviour for a long time now. More often than not, I see this tendency of the brain to make split-second decisions in environments where it just doesn’t have the proper equipment to make those decisions. When we stop to think about these things, we believe we understand them. And we do, but we had to stop to think. In the vast majority of cases, that’s just not how the brain works.

Media: The Midpoint of the Stories that Connect Us

I’m in the mood for navel gazing: looking inward.

Take the concept of “media,” for instance. Based on the masthead above this post, it’s what this site — and this editorial section — is all about. I’m supposed to be on the “inside” when it comes to media.

But media is also “inside” — quite literally. The word means “middle layer,” so it’s something in between.

There is a nuance here that’s important. Based on the very definition of the word, it’s something equidistant from both ends. And that introduces a concept we in media must think about: We have to meet our audience halfway. We cannot take a unilateral view of our function.

When we talk about media, we have to understand what gets passed through this “middle layer.” Is it information? Well, then we have to decide what information is. Again, the etymology of the word “inform” shows us that informing someone is to “give form to their mind.” But that mind isn’t a blank slate or a lump of clay to be molded as we want. There is already “form” there. And if, through media, we are meeting them halfway, we have to know something about what that form may be.

We come back to this: Media is the midpoint between what we, the tellers, believe, and what we want our audience to believe. We are looking for the shortest distance between those two points. And, as self-help author Patti Digh wrote, “The shortest distance between two people is a story.”

We understand the world through stories — so media has become the platform for the telling of stories. Stories assume a common bond between the teller and the listener. That puts media squarely in the middle ground that defines its purpose, the point halfway between us. When we are on the receiving end of a story, our medium of choice is the one closest to us, in terms of our beliefs and our world narrative. These media are built on common ideological ground.

And, if we look at a recent study that helps us understand how the brain builds models of the things around us, we begin to understand the complexity that lies within a story.

This study from the Max Planck Institute for Human Cognitive and Brain Sciences shows that our brains are constantly categorizing the world around us. And if we’re asked to recognize something, our brains have a hierarchy of concepts that they will activate, depending on the situation. The higher you go in the hierarchy, the more parts of your brain are activated.

For example, if I asked you to imagine a phone ringing, the same auditory centers in your brain that activate when you actually hear the phone would kick into gear and give you a quick and dirty cognitive representation of the sound. But if I asked you to describe what your phone does for you in your life, many more parts of your brain would activate, and you would step up the hierarchy into increasingly abstract concepts that define your phone’s place in your own world. That is where we find the “story” of our phone.

As psychologist Robert Epstein says in this essay, we do not process a story like a computer. It is not data that we crunch and analyze. Rather, it’s another type of pattern match, between new information and what we already believe to be true.

As I’ve said many times, we have to understand why there is such a wide gap in how we all interpret the world. And the reason can be found in how we process what we take in through our senses.

The immediate sensory interpretation is essentially a quick and dirty pattern match. There would be no evolutionary purpose to store more information than is necessary to quickly categorize something. And the fidelity of that match is just accurate enough to do the job — nothing more.

For example, if I asked you to draw a can of Coca-Cola from memory, how accurate do you think it would be? The answer, proven over and over again, is that it probably wouldn’t look much like the “real thing.”

That’s coming from one sense, but the rest of your senses are just as faulty. You think you know how Coke smells and tastes and feels as you drink it, but these are low fidelity tags that act in a split second to help us recognize the world around us. They don’t have to be exact representations because that would take too much processing power.

But what’s really important to us is our “story” of Coke. That was clearly shown in one of my favorite neuromarketing studies, done at Baylor College of Medicine by Read Montague.

He and his team reenacted the famous Pepsi Challenge — a blind taste test pitting Coke against Pepsi. But this time, they scanned the participants’ brains while they were drinking. The researchers found that when Coke drinkers didn’t know what they were drinking, only certain areas of their brains activated, and it didn’t really matter if they were drinking Coke or Pepsi.

But when they knew they were drinking Coke, suddenly many more parts of the brain started lighting up, including the prefrontal cortex, the part of the brain that is usually involved in creating our own personal narratives to help us understand our place in the world.

And while the actual can of Coke doesn’t change from person to person, our Story of Coke can be as individual to us as our own fingerprints.

We in the media are in the business of telling stories. This post is a story. Everything we do is a story. Sometimes they successfully connect with others, and sometimes they don’t. But in order to make effective use of the media we choose as a platform, we must remember we can only take a story halfway. On the other end there is our audience, each of whom has their own narratives that define them. Media is the middle ground where those two things connect.

The Split-Second Timing of Brand Trust

Two weeks ago, I talked about how brand trust can erode so quickly and cause so many issues. I intimated that advertising and branding have become decoupled — and advertising might even erode brand trust, leading to a lasting deficit.

Now I think that may be a little too simplistic. Brand trust is a holistic thing — the sum total of many moving parts. Taking advertising in isolation is misleading. Will one social media ad for a brand lead to broken trust? Probably not. But there may be a cumulative effect that we need to be aware of.

Looking more closely at the Edelman Trust Barometer study, a very interesting picture emerges. Essentially, the study shows there is a trust crisis. Edelman calls it information bankruptcy.

The slide in trust is probably not surprising. It’s hard to be trusting when you’re afraid, and if there’s one thing the Edelman Barometer shows, it’s that we are globally fearful. Our collective hearts are in our mouths. And when this happens, we are hardwired to respond by lowering our trust and raising our defenses.

But our traditional sources for trusted information — government and media — have also abdicated their responsibilities to provide it. They have instead stoked our fears and leveraged our divides for their own gains. NGOs have suffered the same fate. So, if you can’t trust the news, your leaders or even your local charity, who can you trust?

Apparently, you can trust a corporation. Edelman shows that businesses are now the most trusted organizations in North America. Media, especially social media, is the least trusted institution. I find this profoundly troubling, but I’ll put that aside for a future post. For now, let’s just accept it at face value.

As I said in that previous column, we want to trust brands more than ever. But we don’t trust advertising. This creates a dilemma for the marketer.

This all brings to mind a study I was involved with a little over 10 years ago. Working with Simon Fraser University, we wanted to know how the brain responded to trusted brands. The initial results were fascinating — but unfortunately, we never got the chance to do the follow-up study we intended.

This was an ERP study (event-related potential), where we looked at how the brain responded when we showed brand images as a stimulus. ERP studies are useful to better understand the immediate response of the brain to something — the fast loop I talk so much about — before the slow loop has a chance to kick in and rationalize things.

We know now that what happens in this fast loop really sets the stage for what comes after. It essentially makes up the mind, and then the slow loop adds rational justification for what has already been decided.

What we found was interesting: The way we respond to our favorite brands is very similar to the way we respond to pictures of our favorite people. The first hint of this occurred in just 150 milliseconds, about one-sixth of a second. The next reinforcement was found at 400 milliseconds. In that time, less than half a second in total, our minds were made up. In fact, the mind was basically made up in about the same time it takes to blink an eye.  Everything that followed was just window dressing.

This is the power of trust. It takes a split second for our brains to recognize a situation where it can let its guard down. This sets in motion a chain of neurological events that primes the brain for cooperation and relationship-building. It primes the oxytocin pump and gets it flowing. And this all happens just that quickly.

On the other side, if a brand isn’t trusted, a very different chain of events occurs just as quickly. The brain starts arming itself for protection. Our amygdala starts gearing up. We become suspicious and anxious.

This platform of brand trust — or lack of it — is built up over time. It is part of our sense-making machinery. Our accumulating experience with the brand either adds to our trust or takes it away.

But we must also realize that if we have strong feelings about a brand, one way or the other, it then becomes a belief. And once this happens, the brain works hard to keep that belief in place. It becomes virtually impossible at that point to change minds. This is largely because of the split-second reactions our study uncovered.

This sets very high stakes for marketers today. More than ever, we want to trust brands. But we also search for evidence that this trust is warranted in a very different way. Brand building is the accumulation of experience over all touch points. Each of those touch points has its own trust profile. Personal experience and word of mouth from those we know are the highest. Advertising on social media is one of the lowest.

The marketer’s goal should be to leverage trust-building for the brand in the most effective way possible. Do it correctly, through the right channels, and you have built trust that’s triggered in an eye blink. Screw it up, and you may never get a second chance.

Social Media Reflects Rights Vs. Obligations Split

Last week MediaPost writer (and my own editor here on Media Insider) Phyllis Fine asked this question in a post: “Can Social Media Ease the Path to Herd Immunity?” The question is not only timely, but also indicative of the peculiar nature of social media that could be stated thus: for every point of view expressed, there is an equal — and opposite — point of view. Fine’s post quotes a study from the Institute of Biomedical Ethics and History of Medicine at the University of Zurich, which reveals, “Anti-vaccination supporters find fertile ground in particular on Facebook and Twitter.”

Here’s the thing about social media. No matter what the message might be, there will be multiple interpretations of it. Often, the most extreme interpretations will be diametrically opposed to each other. It’s stunning how the very same content can illustrate the vast ideological divides that separate us.

I’ve realized that the only explanation for this is that our brains must work differently. We’re not even talking apples and oranges here. This is more like ostrich eggs and vacuum cleaners.

This is not my own revelation. There’s a lot of science behind it. An article in Scientific American catalogs some of the differences between conservative and liberal brains. Even the actual structure is different. According to the article: “The volume of gray matter, or neural cell bodies, making up the anterior cingulate cortex, an area that helps detect errors and resolve conflicts, tends to be larger in liberals. And the amygdala, which is important for regulating emotions and evaluating threats, is larger in conservatives.”

We have to understand that a right-leaning brain operates very differently than a left-leaning brain. Recent neuro-imaging studies have shown that when the two consider the very same piece of information, totally different sections of their respective brains light up. They process information differently.

In a previous post about this topic, I quoted biologist and author Robert Sapolsky as saying, “Liberals are more likely to process information systematically, recognize differences in argument quality, and to be persuaded explicitly by scientific evidence, whereas conservatives are more likely to process information heuristically, attend to message-irrelevant cues such as source similarity, and to be persuaded implicitly through evaluative conditioning. Conservatives are also more likely than liberals to rely on stereotypical cues and assume consensus with like-minded others.”

Or, to sum it up in plain language: “Conservatives start gut and stay gut; liberals go from gut to head.”

This has never been clearer than in the past year. Typically, the information being processed by a conservative brain would have little overlap with the information being processed by a liberal brain. Each would care and think about different things.

But COVID-19 has forced the two circles of this particular Venn diagram together, creating a bigger overlap in the middle. We are all focused on information about the pandemic. And this has created a unique opportunity to more directly compare the cognitive habits of liberals versus conservatives.

Perhaps the biggest difference is in the way each group defines morality. At the risk of a vast oversimplification, the right tends to focus on individual rights, especially those they feel they’re personally at risk of losing. The left thinks more in terms of societal obligations: What do we need to do — or not do — for the greater good of us all?  To paraphrase John F. Kennedy, conservatives ask what their country can do for them; liberals ask what they can do for their country.

This theory is part of Jonathan Haidt’s Moral Foundations Theory. What Haidt, working with others, has found is that both the right and left have morals, but they are defined differently. This “moral pluralism” means that two people can look at the same social media post but take two entirely different messages from it. And both will insist their interpretation is the correct one. Liberals can see a post about getting a vaccine as an appeal to their concern for the collective well-being of their community. Conservatives see it as an attack on their personal rights.

So when we ask a question like “Can social media ease the path to herd immunity?” we run into the problem of message interpretation. For some, it will be preaching to the choir. For others, it will have the same effect as a red cape in front of a bull.

It’s interesting that the vaccine question is being road-blocked by this divide between rights and obligations. It shows just how far apart the two sides are. With a vaccine, at least both sides have skin in the game. Getting a vaccine can save your life, no matter how you vote. Wearing a face mask is a different matter.

In my lifetime, I have never seen a more overt signalling of ideological leanings than whether you choose to wear a face mask or not. When we talk about rights vs obligations, this is the ultimate acid test. If I insist on wearing a mask, as I do, I’m not wearing it for me, I’m wearing it for you. It’s part of my obligation to my community. But if you refuse to wear a mask, it’s pretty obvious who you’re focused on.

The thing that worries me the most about this moral dualism is that a moral fixation on individual rights is not sustainable. It’s assuming that our society is a zero-sum game. In order for me to win, you must lose. If we focus instead on our obligations, we approach society with an abundance mentality. As we contribute, we all benefit.

At least, that’s how my brain sees it.

Picking Apart the Concept of Viral Videos

In case you’re wondering, the most popular video on YouTube is the toxic brain worm “Baby Shark Dance.” It has over 8.2 billion views.

And from that one example, we tend to measure everything that comes after.  Digital has screwed up our idea of what it means to go viral. We’re not happy unless we get into the hyper-inflated numbers typical of social media influencers. Maybe not Baby Shark numbers, but definitely in the millions.

But does that mean that something that doesn’t hit these numbers is a failure? An old stat I found said that over half of YouTube videos have fewer than 500 views. I couldn’t find a more recent tally, but I suspect that’s still true.

And, if it is, my immediate thought is that those videos must suck. They weren’t worth sharing. They didn’t have what it takes to go viral. They are forever stuck in the long, long tail of YouTube wannabes.

But is going viral all it’s cracked up to be?

Let’s do a little back-of-an-envelope comparison. A week and a half ago, I launched a video that has since gotten about 1,500 views. A few days ago, a YouTuber named MrBeast launched a video titled, “I Spent 50 Hours Buried Alive.” In less than 24 hours, it racked up over 30 million views. Compared to that, one might say my launch was a failure. But was it?  It depends on what your goals for a video are. And it also depends on the structure of social networks.

Social networks are built of tightly knit clusters. Within a cluster, people are connected by strong ties. They have a lot in common. But the clusters themselves are often connected only by weak ties. These bonds stretch across groups that have less in common. Understanding this structure is important in understanding how a video might spread through a network.
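
To make that concrete, here is a toy simulation in Python. It is a sketch under assumed numbers: the cluster sizes, tie counts and sharing probabilities below are mine, invented for illustration, not taken from any study mentioned here.

```python
# A toy model of spread through a clustered network: everyone has a handful
# of strong ties inside their own cluster, and only some people have a single
# weak tie bridging to another cluster. All parameters are illustrative.
import random

def simulate_spread(n_clusters=6, size=50, k_strong=8,
                    weak_tie_prob=0.1, p_share=0.3, seed=1):
    """Seed one viewer and return (total views, clusters reached)."""
    random.seed(seed)
    people = [(c, i) for c in range(n_clusters) for i in range(size)]
    contacts = {p: set() for p in people}
    for c, i in people:
        # Strong ties: several contacts inside your own cluster.
        for j in random.sample(range(size), k_strong):
            if j != i:
                contacts[(c, i)].add((c, j))
        # Weak ties: occasionally, one contact in some other cluster.
        if random.random() < weak_tie_prob:
            other = random.choice([x for x in range(n_clusters) if x != c])
            contacts[(c, i)].add((other, random.randrange(size)))
    viewers, frontier = {(0, 0)}, [(0, 0)]
    while frontier:
        person = frontier.pop()
        for friend in contacts[person]:
            # Content tuned to its home cluster travels strong ties easily,
            # but rarely survives the jump across a weak tie.
            p = p_share if friend[0] == person[0] else p_share / 10
            if friend not in viewers and random.random() < p:
                viewers.add(friend)
                frontier.append(friend)
    return len(viewers), len({c for c, _ in viewers})

views, reached = simulate_spread()
print(f"{views} views, {reached} of 6 clusters reached")
```

With weak ties this sparse and this lossy, most runs saturate the home cluster and then stall; nudge weak_tie_prob or the cross-cluster share rate upward and the same content starts to “go wide.”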

Depending on your video’s content, it may never move beyond one cluster. It may not have the characteristics necessary to get passed along the weak ties that connect separate clusters. This was something I explored many years ago when I looked at how rumors spread through social networks. In that post, I talked about a study by Frenzen and Nakamoto that looked at some of the variables required to make a rumor spread between groups.

Some of the same dynamics hold true when we look at viral videos. If you’ve had fewer than 500 views, as apparently over 50% of YouTube videos have, chances are you got stuck in a cluster. But this might not be a bad thing. Sometimes going deep is better than going wide.

My video, for example, is definitely aimed at one particular audience, people of Italian descent in the region where I live. According to the latest government census, the total possible “target” for my video is probably less than 10,000 people. And, if this is the case, I’ve already reached 15% of my audience. That’s not a mind-blowing success record, but it’s a start.

My goal for the video was to ignite an interest in my audience to learn more about their own heritage. And it seems to be working. I’ve never seen more interest in people wanting to learn about their own ancestors in particular, or the story of Italians in the Okanagan region of British Columbia in general.

My goal was never to just get a like or even a share, although that would be nice. My goal was to move people enough to act. I wanted to go deep, not wide.

To go “deep,” you have to fully leverage those “strong ties.” What is the stuff those ties are made of? What is the common ground within the cluster? The things that make people watch all 13-and-a-half minutes of a video about Italian immigrants are the very same things that will keep it stuck within that particular cluster. As long as it stays there, it will be interesting and relevant. But it won’t jump across a weak tie, because there is no common ground to act as a launching pad.

If the goal is to go “wide” and set a network effect in motion, then you have to play to the lowest common denominator: those universal emotions that we all share, which can be ignited just long enough to capture a quick view and a social share. According to this post about how to go viral, they are: status, identity protection, being helpful, safety, order, novelty, validation and voyeurism.

Another way to think of it is this: Do you want your content to trigger “fast” thinking or “slow” thinking? Again, I use Nobel laureate Daniel Kahneman’s cognitive analogy about how the brain works at two levels: fast and slow. If you want your content to “go wide,” you want to trigger the “fast” circuits of the brain. If you want your content to “go deep,” you’re looking to activate the “slow” circuits. It doesn’t mean that “deep” content can’t be emotionally charged. The opposite is often true. But these are emotions that require some cognitive focus and mindfulness, not a hair-trigger reaction. And, if you’re successful, that makes them all the more powerful. These are emotions that serve their inherent purpose. They move us to action.

I think this whole idea of going “viral” suffers from the same hyper-inflation of expectations that seems to affect everything that goes digital. We are naturally comparative and competitive animals, and a world obsessed with going viral tends to focus us on quantity rather than quality. We can’t help looking at trending YouTube videos and hoping that our video will get launched into the social sharing stratosphere.

But that doesn’t mean a video that stays stuck with a few hundred views didn’t do its job. Maybe the reason the numbers are low is that the video is doing exactly what it was intended to do.

COVID And The Chasm Crossing

For most of us, it’s been a year living with the pandemic. I was curious what my topic was a year ago this week. It was the brand crisis at a certain Mexican brewing giant whose flagship brand was suddenly and unceremoniously linked with a global pandemic. Of course, we didn’t know then just how “global” it would be.

Ahhh — the innocence of early 2020.

The past year will likely be an historic inflection point in many societal trend lines. We’re not sure at this point how things will change, but we’re pretty sure they will change. You can’t take what has essentially been a 12-month anomaly in everything we know as normal, plunk it down on every corner of the globe and expect everything just to bounce back to where it was.

If I could vault 10 years in the future and then look back at today, I suspect I would be talking about how our relationship with technology changed due to the pandemic. Yes, we’re all sick of Zoom. We long for the old days of actually seeing another face in the staff lunchroom. And we realize that bingeing “Emily in Paris” on Netflix comes up abysmally short of the actual experience of stepping in dog shit as we stroll along the Seine.

C’est la vie.

But that’s my point. For the past 12 months, these watered-down digital substitutes have been our lives. We were given no choice. And some of it hasn’t sucked. As I wrote last week, there are times when a digital connection may actually be preferable to a physical one.

There is now a whole generation of employees who are considering their work-life balance in the light of being able to work from home for at least part of the time. Meetings the world over are being reimagined, thanks to the attractive cost/benefit ratio of being able to attend virtually. And, for me, I may have permanently swapped spin classes in the gym for riding my bike trainer in my basement. It took me a while to get used to it, but now that I have, I think it will stick.

Getting people to try something new — especially when it’s technology — is a tricky process. There are a zillion places on the uphill slope of the adoption curve where we can get mired and give up. But, as I said, that hasn’t been an option for us in the past 12 months. We had to stick it out. And now that we have, we realize we like much of what we were forced to adopt. All we’re asking for is the freedom to pick and choose what we keep and what we toss away.

I suspect many of us will be a lot more open to using technology now that we have experienced the tradeoffs it entails between effectiveness and efficiency. We will make more room in our lives for a purely utilitarian use of technology, stripped of the pros and cons of “bright shiny object” syndrome.

Technology typically gets trapped at both the dread and pseudo-religious devotion ends of the Everett Rogers Adoption Curve. Either you love it, or you hate it. Those who love it form the market that drives the development of our technology, leaving those who hate it further and further behind.

As such, the market for technology tends to skew to the “gee whiz” end of the market, catering to those who buy new technology just because it’s new and cool. This bias has embedded an acceptance of planned obsolescence that just seems to go hand-in-hand with the marketing of technology. 

My previous post about technology leaving seniors behind is an example of this. Even if seniors start out as early adopters, the perpetual chase of the bright shiny object that typifies the tech market can leave them behind.

But COVID-19 changed all that. It suddenly forced all of us toward the hump that lies in the middle of the adoption curve. It has left the world no choice but to cross the “chasm” that Geoffrey Moore wrote about 30 years ago in his book “Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers.” He explained that the chasm was between “visionaries (early adopters) and pragmatists (early majority),” according to Wikipedia.

This has some interesting market implications. After I wrote my post, a few readers reached out saying they were working on solutions that addressed the need of seniors to stay connected with a device that is easier for them to use and is not subject to the need for constant updating and relearning. Granted, neither of them was from Apple or Google, but at least someone was thinking about it.

As the pandemic forced the practical market for technology to expand, bringing customers who had everyday needs for their technology, it created more market opportunities. Those opportunities create pockets of profit that allow for the development of tools for segments of the market that used to be ignored.

It remains to be seen if this market expansion continues after the world returns to a more physically based definition of normal. I suspect it will.

This market evolution may also open up new business model opportunities — where we’re actually willing to pay for online services and platforms that used to be propped up by selling advertising. This move alone would take technology a massive step forward in ethical terms. We wouldn’t have this weird moral dichotomy where marketers are grieving the loss of data (as fellow Media Insider Ted McConnell does in this post) because tech is finally stepping up and protecting our personal privacy.

Perhaps — I hope — the silver lining in the past year is that we will look at technology more as it should be: a tool that’s used to make our lives more fulfilling.

The Crazy World of Our Media Obsessions

Are you watching the news less? Me too. Now that the grownups are back in charge, I’m spending much less time checking my news feed.

Whatever you might say about the last four years, it certainly was good for the news business. It was one long endless loop of driving past a horrific traffic accident. Try as we might, we just couldn’t avoid looking.

But according to Internet analysis tool Alexa.com, that may be over. I ran some traffic rank reports for major news portals and they all look the same: a ramp-up over the past 90 days to the beginning of February, and then a precipitous drop off a cliff.

While all the top portals have a similar pattern, it’s most obvious on Foxnews.com.

It was as if someone said, “Show’s over folks. There’s nothing to see here. Move along.” And after we all exhaled, we did!

Not surprisingly, we watch the news more when something terrible is happening. It’s an evolved hardwired response called negativity bias.

Good news is nice. But bad news can kill you. So it’s not surprising that bad news tends to catch our attention.

But this was more than that. We were fixated on Trump. If it were just our bias toward bad news, we would eventually have gotten tired of it.

That’s exactly what happened with the news on COVID-19. We worked through the initial uncertainty and fear, where we were looking for more information, and at some point moved on to the subsequent psychological stages of boredom and anger. As we did that, we threw up our hands and said, “Enough already!”

But when it comes to Donald Trump, there was something else happening.

It’s been said that Trump might have been the best instinctive communicator to ever take up residence in the White House. We might not agree with what he said, but we certainly were listening.

And while we — and by we, I mean me — think we would love to put him behind us, I believe it behooves us to take a peek under the hood of this particular obsession. Because if we fell for it once, we could do it again.

How the F*$k did this guy dominate our every waking, news-consuming moment for the past four years?

We may find a clue in Bob Woodward’s book on Trump, “Rage.” He explains that he was looking for a “reflector” — a person who knew Trump intimately and could provide some relatively objective insight into his character.

Woodward found a rather unlikely candidate for his reflector: Trump’s son-in-law, Jared Kushner.

I know, I know — “Kushner?” Just bear with me.

In Woodward’s book, Kushner says there were four things you needed to read and “absorb” to understand how Trump’s mind works.

The first was an op-ed piece in The Wall Street Journal by Peggy Noonan called “Over Trump, We’re as Divided as Ever.” It is not complimentary to Trump. But it does begin to provide a possible answer to our ongoing fixation. Noonan explains: “He’s crazy…and it’s kind of working.”

The second was the Cheshire Cat in “Alice in Wonderland.” Kushner paraphrased: “If you don’t know where you’re going, any path will get you there.” In other words, in Trump’s world, it’s not direction that matters, it’s velocity.

The third was Chris Whipple’s book, “The Gatekeepers: How the White House Chiefs of Staff Define Every Presidency.” The insight here is that no matter how clueless Trump was about how to do his job, he still felt he knew more than his chiefs of staff.

Finally, the fourth was “Win Bigly: Persuasion in a World Where Facts Don’t Matter,” by Scott Adams. That’s right — Scott Adams, the same guy who created the “Dilbert” comic strip. Adams calls Trump’s approach “Intentional Wrongness Persuasion.”

Remember, this is coming from Kushner, a guy who says he worships Trump. This is not apologetic. It’s explanatory — a manual on how to communicate in today’s world. Kushner is embracing Trump’s instinctive, scorched-earth approach to keeping our attention focused on him.

It’s — as Peggy Noonan realized — leaning into the “crazy.”  

Trump represented the ultimate political tribal badge. All you needed to do was read one story on Trump, and you knew exactly where you belonged. You knew it in your core, in your bones, without any shred of ambiguity or doubt. There were few things I was as sure of in this world as where I stood on Donald J. Trump.

And maybe that was somehow satisfying to me.

There was something about standing on one side or the other of the divide created by Trump that was tribal in nature.

It was probably the clearest ideological signal about what was good and what was bad that we’ve seen for some time, perhaps since World War II or the ’60s — two events that happened before most of our lifetimes.

Trump’s genius was that he somehow made both halves of the world believe they were the good guys.

In 2018, Peggy Noonan said that “Crazy won’t go the distance.” I’d like to believe that’s so, but I’m not so sure. There are certainly others who are borrowing a page from Trump’s playbook. Right-wing Republicans Marjorie Taylor Greene and Lauren Boebert are both doing “crazy” extraordinarily well. The fact that almost none of you had to Google them to know who they are proves this.

Whether we’re loving to love, or loving to hate, we are all fixated by crazy.

The problem here is that our media ecosystem has changed. “Crazy” used to be filtered out. But somewhere along the line, news outlets discovered that “crazy” is great for their bottom lines.

As former CBS Chairman and CEO Leslie Moonves said when Trump became the Republican presidential front-runner back in 2016, “It may not be good for America, but it’s damned good for CBS.”

Crazy draws eyeballs like, well, like crazy. It certainly generates more user views than “normal” or “competent.”

In our current media environment — densely intertwined with the wild world of social media — we have no crazy filters. All we have now are crazy amplifiers.

And the platforms that allow this all try to crowd on the same shaky piece of moral high ground.

According to them, it’s not their job to filter out crazy. It’s anti-free speech. It’s un-American. We should be smart enough to recognize crazy when we see it.

Hmmm. Well, we know that’s not working.

Connected Technologies Are Leaving Our Seniors Behind

One of my pandemic projects has been editing a video series of oral history interviews we did with local seniors in my community. Last week, I finished the first video in the series. The original plan, pre-pandemic, was to unveil the video as a special event at a local theater, with the participants attending. Obviously, given our current reality, we had to change our plans.

We, like the rest of the world, moved our event online. As I started working through the logistics of this, I quickly realized something: Our seniors are on the other side of a wide and rapidly growing chasm. Yes, our society is digitally connected in ways we never were before, but those connections are not designed for the elderly. In fact, if you were looking for something that seems to be deliberately designed to disadvantage a segment of our population, it would be hard to find a better example than Internet connection and the elderly.

I have to admit, for much of the past year, I have been pretty focused on what I have sacrificed because of the pandemic. But I am still a pretty connected person. I can Zoom and have a virtual visit with my friends. If I wonder how my daughters are doing, I can instantly text them. If I miss their faces, I can FaceTime them. 

I have taken on the projects I’ve been able to do thanks to the privilege of being wired into the virtual world. I can even go on a virtual bike ride with my friends through the streets of London, courtesy of Zwift.

Yes, I have given up things, but I have also been able find digital substitutes for many of those things. I’m not going to say it’s been perfect, but it’s certainly been passable.

My stepdad, who is turning 86, has been able to do none of those things. He is in a long-term care home in Alberta, Canada. His only daily social connections consist of brief interactions with staff during mealtime and when they check his blood sugar levels and give him his medication. All the activities that used to give him a chance to socialize are gone. Imagine life for him, where his sum total of connection is probably less than 30 minutes a day. And, on most days, none of that connecting is done with the people he loves.

Up until last week, family couldn’t even visit him. He was locked down due to an outbreak at his home. For my dad, there were no virtual substitutes available. He is not wired in any way for digital connection. If anyone has paid the social price of this pandemic, it’s been my dad and people like the seniors I interviewed, for whom I was desperately trying to find a way for them just to watch a 13-minute video that they had starred in.

A recent study by mobile technology manufacturer Ericsson looked specifically at the relationship between technology and seniors during the pandemic. The study focused on what the company termed the “young-old” seniors, those aged 65-74. They didn’t deal with “middle-old” (aged 75-85) or “oldest-old” (86 plus) because — well, probably because Ericsson couldn’t find enough who were connected to act as a representative sample.

But they did find that even the “young old” were falling behind in their ability to stay connected thanks to COVID-19. These are people who have owned smartphones for at least a decade, many of whom had to use computers and technology in their jobs. Up until a year ago, they were closing the technology gap with younger generations. Then, last March, they started to fall behind.

They were still using the internet, but younger people were using it even more. And, as they got older, they were finding it increasingly daunting to adopt new platforms and technology. They didn’t have the same access to “family tech support” of children or grandchildren to help get them over the learning curve. They were sticking to the things they knew how to do as the rest of the world surged forward and started living their lives in a digital landscape.

But this was not the group that was part of my video project. My experience had been with the “middle old” and “oldest old.” Half fell into the “middle old” group and half fell into the “oldest old” group. Of the eight seniors I was dealing with, only two had email addresses. If the “young old” are being left behind by technology, these people were never in the race to begin with. As the world was forced to reset to an online reality, these people were never given the option. They were stranded in a world suddenly disconnected from everything they knew and loved.

Predictably, the Ericsson study proposes smartphones as the solution for many of the problems of the pandemic, giving seniors more connection, more confidence and more capabilities. If only they got connected, the study says, life will be better.

But that’s not a solution with legs. It won’t go the distance. And to understand why, we just have to look at the two age cohorts the study didn’t focus on, the “middle old” and the “oldest old.”

Perhaps the hardest hit have been the “oldest old,” who have sacrificed both physical and digital connection, as this Journals of Gerontology article notes. Four from my group lived in long-term care facilities. Many of these were locked down at some point due to local outbreaks within the facility. Suddenly, that family support they required to connect with their family and friends was no longer available. The technological tools that we take for granted — which we were able to slot in to take the place of things we were losing — were unimaginable to them. They were effectively sentenced to solitary confinement.

A recent study from Germany found that only 3% of those living in long-term care facilities used an internet-connected device. A lot of the time, cognitive declines, even when they’re mild, can make trying to use technology an exercise in frustration.

When my dad went into his long-term care home, my sister and I gave him one of our old phones so he could stay connected. We set everything up and did receive a few experimental texts from him. But soon, it just became too confusing and frustrating for him to use without our constant help. He played solitaire on it for a while, then it ended up in a drawer somewhere. We didn’t push the issue. It just wasn’t the right fit.

But it’s not just my dad who struggled with technology. Even if an aging population starts out as reasonably proficient users, it can be overwhelming to keep up with new hardware, new operating systems and new security requirements. I’m not even “young old” yet, and I’ve worked with technology all my life. I owned a digital marketing company, for heaven’s sake. And even for me, it sometimes seems like a full-time job staying on top of the constant stream of updates and new things to learn and troubleshoot. As connected technology leaps forward, it does not seem unduly concerned that it’s leaving the most vulnerable segment of our population behind.

COVID-19 has pushed us into a virtual world where connection is not just a luxury, but a condition of survival. We need to connect to live. That is especially true for our seniors, who have had all the connections they relied on taken from them. We can’t leave them behind. Connected technology can no longer ignore them.

This is one gap we need to build a bridge over.