Facebook Friends Do Not Equal Real Friends

Last week, an acquaintance of mine posted on Facebook that he had just run his first 10K. He also included a photo of the face of his Apple Watch showing his time.

Inevitably, snarkiness ensued in the comments section.

There were a few genuine messages of congratulations, but there was more virtual alpha headbutting along the lines of “that’s the best you could do?”

Finally, the original poster did a follow-up post saying (and I paraphrase liberally), “Hey, relax! I’m not looking for attaboys or coaching advice. I just wanted to let you know I ran 10 km, and I’m kinda proud of myself. It was important to me.”

This points out something we don’t often realize about our virtual social networks: They just don’t operate in the same way they do in the real world. And there are reasons why they don’t.

In the 1990s, British anthropologist Robin Dunbar was spending a lot of time hanging around monkeys. And he noticed something: They groom each other. A lot. In fact, they spend a huge chunk of their day grooming each other.

Why?

Intrigued, he started correlating brain size with social behavior. He found that primates in particular have some pretty impressive social coordination machinery locked up there in their noggins. Humans, for instance, seem to be able to juggle about 150 reasonably active social connections. This is now called Dunbar’s Number, which has become a pseudoscience trope — an intellectual tidbit we throw out to sound erudite.

Proof that we really don’t understand Dunbar’s original insight lies in what has happened to his number in the social media age. For example, according to Brandwatch, the average number of Facebook friends is 338. That’s more than twice Dunbar’s Number. And so, predictably, there are those who say Dunbar’s Number is no longer valid. We can now handle much bigger friend networks thanks to social media.

But we can’t. And my example at the top of this post shows that.

Maintaining a friendship requires cognitive effort. There is a big difference between a Facebook “friend” and a true friend. True friends will pick lice out of your fur — or they would, if they were monkeys. Facebook Friends feel they’re entitled to belittle your 10K run. See the difference?

Let’s go back to Robin Dunbar’s original thesis. Dunbar actually mentioned many numbers (all are approximations):

— Five “intimate” friends. This is your support group — the people who know you best.

— 15 “sympathetic” friends whom you can confide in.

— 50 “close” friends. You may not see them all the time, but if you were having a milestone birthday party, they’d be on your guest list.

— Now we have the 150 “friends.” If you ran into them on the street, you’d probably suggest a cup of coffee (or, in my case, a beer) for a chance to catch up.

— The next circle out is 500 “acquaintances.” You probably know just the briefest of back stories about them — like how you know them.

— Finally, we have 1,500 as our cognitive limit. On a good day, we may remember their names if we see them.

Here’s a quick and clever thought exercise to sort your network into one of these groups (this courtesy of my daughter Lauren — I give credit where credit is due). Imagine someone walks up to you and asks, “How are you doing?”

How you answer this question will depend on which group the questioner falls into. The biggest group of 1,500 probably won’t ask. They don’t care. The group of 500 acquaintances will get a standard “Fine” in response. There will be no follow-up. The 150 will get a little more — a few details of a big event or life development if relevant. The 50 “close friends” will get slightly more honesty. Perhaps you’ll be willing to guardedly open up some sensitive areas. The 15 “sympathetic friends” are a safe zone. You’ll feel like you can open up completely. And the five “intimate friends” don’t have to ask. They know how you’re doing.

I’ve talked before about strong ties and weak ties in our social networks. Strong ties are built through shared experiences and understanding. You really have to know someone to have a strong tie with them. They are the ties that bind the first two of Dunbar’s circles.

As we move to the third circle, the “close friends,” we’re moving into the transition zone between strong ties and weak ties. From there on, it’s all weak ties. If you need a job or a recommendation of a good plumber, you’d reach out. Otherwise, you have little in common.

The stronger the tie, the more effort it takes to maintain it. These are the cognitive limits that Dunbar was talking about. You have to remember all those back stories, those things they love and hate, what motivates them, what makes them sad. It takes time to learn all those things. And it takes a frequency of connection to keep up with them as they change. We are not static creatures — as has been shown especially in the last year.

This is the problem with social media. When we post something, we generally don’t post just for our intimate friends, or our sympathetic friends. We post it across our whole network, bound with both strong and weak ties. We have lost the common social understanding that keeps us sane in the real world.

For my 10K runner, those in their closest circles would have responded appropriately. But most of those who did comment, the ones who had no strong ties to the poster, didn’t know the 10K was a big deal.

Facebook does have some tools for limiting posts to selected groups, but almost none of us use them or maintain them. We don’t have the time.

This is where Dunbar’s insight on our social capabilities breaks down when it comes to social media. In the real world, multiple factors — including physical proximity, shared circumstances and time spent with each other — naturally keep our network sorted into the right categories.

But these factors don’t apply in social media. We broadcast out to all circles at once. And those circles, in turn, feel entitled by the false intimacy of social media to respond without the context needed to do so appropriately.

Our current circumstances are exacerbating this problem. In normal times, we might not be posting as much as we currently are on social media. But for many of us, Facebook might be all we’ve got. We just have to realize that if we’re depending on it for social affirmation, this virtual world doesn’t play by the same rules as the physical one.

Social Media Reflects Rights Vs. Obligations Split

Last week MediaPost writer (and my own editor here on Media Insider) Phyllis Fine asked this question in a post: “Can Social Media Ease the Path to Herd Immunity?” The question is not only timely, but also indicative of the peculiar nature of social media that could be stated thus: for every point of view expressed, there is an equal — and opposite — point of view. Fine’s post quotes a study from the Institute of Biomedical Ethics and History of Medicine at the University of Zurich, which reveals, “Anti-vaccination supporters find fertile ground in particular on Facebook and Twitter.”

Here’s the thing about social media. No matter what the message might be, there will be multiple interpretations of it. Often, the most extreme interpretations will be diametrically opposed to each other. It’s stunning how the very same content can illustrate the vast ideological divides that separate us.

I’ve realized that the only explanation for this is that our brains must work differently. We’re not even talking apples and oranges here. This is more like ostrich eggs and vacuum cleaners.

This is not my own revelation. There’s a lot of science behind it. An article in Scientific American catalogs some of the differences between conservative and liberal brains. Even the actual structure is different. According to the article: “The volume of gray matter, or neural cell bodies, making up the anterior cingulate cortex, an area that helps detect errors and resolve conflicts, tends to be larger in liberals. And the amygdala, which is important for regulating emotions and evaluating threats, is larger in conservatives.”

We have to understand that a right-leaning brain operates very differently than a left-leaning brain. Recent neuro-imaging studies have shown that when the two consider the very same piece of information, totally different sections of their respective brains light up. They process information differently.

In a previous post about this topic, I quoted biologist and author Robert Sapolsky as saying, “Liberals are more likely to process information systematically, recognize differences in argument quality, and to be persuaded explicitly by scientific evidence, whereas conservatives are more likely to process information heuristically, attend to message-irrelevant cues such as source similarity, and to be persuaded implicitly through evaluative conditioning. Conservatives are also more likely than liberals to rely on stereotypical cues and assume consensus with like-minded others.”

Or, to sum it up in plain language: “Conservatives start gut and stay gut; liberals go from gut to head.”

This has never been clearer than in the past year. Typically, the information being processed by a conservative brain would have little overlap with the information being processed by a liberal brain. Each would care and think about different things.

But COVID-19 has forced the two circles of this particular Venn diagram together, creating a bigger overlap in the middle. We are all focused on information about the pandemic. And this has created a unique opportunity to more directly compare the cognitive habits of liberals versus conservatives.

Perhaps the biggest difference is in the way each group defines morality. At the risk of a vast oversimplification, the right tends to focus on individual rights, especially those they feel they’re personally at risk of losing. The left thinks more in terms of societal obligations: What do we need to do — or not do — for the greater good of us all? To paraphrase John F. Kennedy, conservatives ask what their country can do for them; liberals ask what they can do for their country.

This theory is part of Jonathan Haidt’s Moral Foundations Theory. What Haidt, working with others, has found is that both the right and left have morals, but they define them differently. This “moral pluralism” means that two people can look at the same social media post but take two entirely different messages from it. And both will insist their interpretation is the correct one. Liberals can see a post about getting a vaccine as an appeal to their concern for the collective well-being of their community. Conservatives see it as an attack on their personal rights.

So when we ask a question like “Can social media ease the path to herd immunity?” we run into the problem of message interpretation. For some, it will be preaching to the choir. For others, it will have the same effect as a red cape in front of a bull.

It’s interesting that the vaccine question is being road-blocked by this divide between rights and obligations. It shows just how far the two sides are apart. With a vaccine, at least both sides have skin in the game. Getting a vaccine can save your life, no matter how you vote. Wearing a face mask is a different matter.

In my lifetime, I have never seen a more overt signalling of ideological leanings than whether you choose to wear a face mask or not. When we talk about rights vs obligations, this is the ultimate acid test. If I insist on wearing a mask, as I do, I’m not wearing it for me, I’m wearing it for you. It’s part of my obligation to my community. But if you refuse to wear a mask, it’s pretty obvious who you’re focused on.

The thing that worries me the most about this moral dualism is that a moral fixation on individual rights is not sustainable. It’s assuming that our society is a zero-sum game. In order for me to win, you must lose. If we focus instead on our obligations, we approach society with an abundance mentality. As we contribute, we all benefit.

At least, that’s how my brain sees it.

Picking Apart the Concept of Viral Videos

In case you’re wondering, the most popular video on YouTube is the toxic brain worm Baby Shark Dance. It has over 8.2 billion views.

And from that one example, we tend to measure everything that comes after. Digital has screwed up our idea of what it means to go viral. We’re not happy unless we get into the hyper-inflated numbers typical of social media influencers. Maybe not Baby Shark numbers, but definitely in the millions.

But does that mean that something that doesn’t hit these numbers is a failure? An old stat I found said that over half of YouTube videos have less than 500 views. I couldn’t find a more recent tally, but I suspect that’s still true.

And, if it is, my immediate thought is that those videos must suck. They weren’t worth sharing. They didn’t have what it takes to go viral. They are forever stuck in the long, long tail of YouTube wannabes.

But is going viral all it’s cracked up to be?

Let’s do a little back-of-an-envelope comparison. A week and a half ago, I launched a video that has since gotten about 1,500 views. A few days ago, a YouTuber named MrBeast launched a video titled, “I Spent 50 Hours Buried Alive.” In less than 24 hours, it racked up over 30 million views. Compared to that, one might say my launch was a failure. But was it? It depends on what your goals for a video are. And it also depends on the structure of social networks.

Social networks are built of nodes. Within the node, people are connected by strong ties. They have a lot in common. But nodes are often connected by weak ties. These bonds stretch across groups that have less in common. Understanding this structure is important in understanding how a video might spread through a network.

Depending on your video’s content, it may never move beyond one node. It may not have the characteristics necessary to get passed along the ties that connect separate nodes. This was something I explored many years ago when I looked at how rumors spread through social networks. In that post, I talked about a study by Frenzen and Nakamoto that looked at some of the variables required to make a rumor spread between nodes.

Some of the same dynamics hold true when we look at viral videos. If you’ve had less than 500 views, as apparently over 50% of YouTube videos do, chances are you got stuck in a node. But this might not be a bad thing. Sometimes going deep is better than going wide.

My video, for example, is definitely aimed at one particular audience, people of Italian descent in the region where I live. According to the latest government census, the total possible “target” for my video is probably less than 10,000 people. And, if this is the case, I’ve already reached 15% of my audience. That’s not a mind-blowing success record, but it’s a start.

My goal for the video was to ignite an interest in my audience to learn more about their own heritage. And it seems to be working. I’ve never seen more interest in people wanting to learn about their own ancestors in particular, or the story of Italians in the Okanagan region of British Columbia in general.

My goal was never to just get a like or even a share, although that would be nice. My goal was to move people enough to act. I wanted to go deep, not wide.

To go “deep,” you have to fully leverage those “strong ties.” What is the stuff those ties are made of? What is the common ground within the node? The things that make people watch all 13-and-a-half minutes of a video about Italian immigrants are the very same things that will keep it stuck within that particular node. As long as it stays there, it will be interesting and relevant. But it won’t jump across a weak tie, because there is no common ground to act as a launching pad.

If the goal is to go “wide” and set a network effect in motion, then you have to play to the lowest common denominator: those universal emotions that we all share, which can be ignited just long enough to capture a quick view and a social share. According to this post about how to go viral, they are: status, identity protection, being helpful, safety, order, novelty, validation and voyeurism.

Another way to think of it is this: Do you want your content to trigger “fast” thinking or “slow” thinking? Again, I use Nobel laureate Daniel Kahneman’s cognitive analogy about how the brain works at two levels: fast and slow. If you want your content to “go wide,” you want to trigger the “fast” circuits of the brain. If you want your content to “go deep,” you’re looking to activate the “slow” circuits. It doesn’t mean that “deep” content can’t be emotionally charged. The opposite is often true. But these are emotions that require some cognitive focus and mindfulness, not a hair-trigger reaction. And, if you’re successful, that makes them all the more powerful. These are emotions that serve their inherent purpose. They move us to action.

I think this whole idea of going “viral” suffers from the same hyper-inflation of expectations that seems to affect everything that goes digital. We are naturally comparative and competitive animals, and the world that’s gone viral tends to focus us on quantity rather than quality. We can’t help looking at trending YouTube videos and hoping that our video will get launched into the social sharing stratosphere.

But that doesn’t mean a video that stays stuck with a few hundred views didn’t do its job. Maybe the reason the numbers are low is that the video is doing exactly what it was intended to do.

Splitting Ethical Hairs in an Online Ecosystem

In looking for a topic for today’s post, I thought it might be interesting to look at the Lincoln Project. My thought was that it would be an interesting case study in how to use social media effectively.

But what I found is that the Lincoln Project is currently imploding due to scandal. And you know what? I wasn’t surprised. Disappointed? Yes. Surprised? No.

While we on the left of the political spectrum may applaud what the Lincoln Project was doing, let’s make no mistake about the tactics used. It was the social media version of Nixon’s Dirty Tricks. The whole purpose was to bait Trump into engaging in a social media brawl. This was political mudslinging, as practiced by veteran warriors. The Lincoln Project was comfortable with getting down and dirty.

Effective? Yes. Ethical? Borderline.

But what it did highlight is the sordid but powerful force of social media influence. And it’s not surprising that those with questionable ethics, as some of the Lincoln Project leaders have proven to be, were attracted to it.

Social media is the single biggest and most effective influencer on human behavior ever invented. And that should scare the hell out of us, because it’s an ecosystem in which sociopaths will thrive.

A definition of Antisocial Personality Disorder (the condition from which sociopaths suffer) states, “People with ASPD may also use ‘mind games’ to control friends, family members, co-workers, and even strangers. They may also be perceived as charismatic or charming.”

All you have to do is substitute “social media” for “mind games,” and you’ll get my point. Social media is sociopathy writ large.

That’s why we — meaning marketers — have to be very careful what we wish for. Since Google cracked down on personally identifiable information, following in the footsteps of Apple, there has been a great hue and cry from the ad-tech community about the unfairness of it all. Some of that hue and cry has issued forth here at MediaPost, like Ted McConnell’s post a few weeks ago, “Data Winter is Coming.”

And it is data that’s at the center of all this. Social media continually pumps personal data into the online ecosystem. And it’s this data that is the essential life force of the ecosystem. Ad tech sucks up that data as a raw resource and uses it for ad delivery across multiple channels. That’s the whole point of the personal identifiers that Apple and Google are cracking down on.

I suppose one could draw an artificial boundary between social media and ad targeting in other channels, but that would be splitting hairs. It’s all part of the same ecosystem. Marketers want the data, no matter where it comes from, and they want it tied to an individual to make targeting their campaigns more effective.

By building and defending an ecosystem that enables sociopathic predators, we are contributing to the problem. McConnell and I are on opposite sides of the debate here. While I don’t disagree with some of his technical points about the efficacy of Google and Apple’s moves to protect privacy, there is a much bigger question here for marketers: Should we protect user privacy, even if it makes our jobs harder?

There has always been a moral ambiguity with marketers that I find troubling. To be honest, it’s why I finally left this industry. I was tired of the “yes, but” justification that ignored all the awful things that were happening for the sake of a handful of examples that showed the industry in a better light.

And let’s just be honest about this for a second: using personally identifiable data to build a more effective machine to influence people is an awful thing. Can it be used for good? Yes. Will it be? Not if the sociopaths have anything to say about it. It’s why the current rogues’ gallery of awful people are all scrambling to carve out as big a piece of the online ecosystem as they can.

Let’s look at nature as an example. In biology, a complex balance has evolved between predators and prey. If predators are too successful, they will eliminate their prey and will subsequently starve. So a self-limiting cycle emerges to keep everything in balance. But if the limits are removed on predators, the balance is lost. The predators are free to gorge themselves.

When it comes to our society, social media has removed those limits. The supply of “prey” is never-ending.

It’s like we’re building a hen house, inviting a fox inside and then feigning surprise when the shit hits the fan. What the hell did we expect?

COVID And The Chasm Crossing

For most of us, it’s been a year living with the pandemic. I was curious what my topic was a year ago this week. It was the brand crisis at a certain Mexican brewing giant when its flagship brand was suddenly and unceremoniously linked with a global pandemic. Of course, we didn’t know back then just how “global” it would be.

Ahhh — the innocence of early 2020.

The past year will likely be an historic inflection point in many societal trend lines. We’re not sure at this point how things will change, but we’re pretty sure they will change. You can’t take what has essentially been a 12-month anomaly in everything we know as normal, plunk it down on every corner of the globe and expect everything just to bounce back to where it was.

If I could vault 10 years in the future and then look back at today, I suspect I would be talking about how our relationship with technology changed due to the pandemic. Yes, we’re all sick of Zoom. We long for the old days of actually seeing another face in the staff lunchroom. And we realize that bingeing “Emily in Paris” on Netflix comes up abysmally short of the actual experience of stepping in dog shit as we stroll along the Seine.

C’est la vie.

But that’s my point. For the past 12 months, these watered-down digital substitutes have been our lives. We were given no choice. And some of it hasn’t sucked. As I wrote last week, there are times when a digital connection may actually be preferable to a physical one.

There is now a whole generation of employees who are considering their work-life balance in the light of being able to work from home for at least part of the time. Meetings the world over are being reimagined, thanks to the attractive cost/benefit ratio of being able to attend virtually. And, for me, I may have permanently swapped riding my bike trainer in my basement for spin classes in the gym. It took me a while to get used to it, but now that I have, I think it will stick.

Getting people to try something new — especially when it’s technology — is a tricky process. There are a zillion places on the uphill slope of the adoption curve where we can get mired and give up. But, as I said, that hasn’t been an option for us in the past 12 months. We had to stick it out. And now that we have, we realize we like much of what we were forced to adopt. All we’re asking for is the freedom to pick and choose what we keep and what we toss away.

I suspect many of us will be a lot more open to using technology now that we have experienced the tradeoffs it entails between effectiveness and efficiency. We will make more room in our lives for a purely utilitarian use of technology, stripped of the pros and cons of “bright shiny object” syndrome.

Technology typically gets trapped at both the dread and pseudo-religious devotion ends of the Everett Rogers Adoption Curve. Either you love it, or you hate it. Those who love it form the market that drives the development of our technology, leaving those who hate it further and further behind.

As such, the market for technology tends to skew to the “gee whiz” end of the market, catering to those who buy new technology just because it’s new and cool. This bias has embedded an acceptance of planned obsolescence that just seems to go hand-in-hand with the marketing of technology. 

My previous post about technology leaving seniors behind is an example of this. Even if seniors start out as early adopters, the perpetual chase of the bright shiny object that typifies the tech market can leave them behind.

But COVID-19 changed all that. It suddenly forced all of us toward the hump that lies in the middle of the adoption curve. It has left the world no choice but to cross the “chasm” that Geoffrey Moore wrote about 30 years ago in his book “Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers.” He explained that the chasm was between “visionaries (early adopters) and pragmatists (early majority),” according to Wikipedia.

This has some interesting market implications. After I wrote my post, a few readers reached out saying they were working on solutions that address the need of seniors to stay connected with a device that is easier for them to use and not subject to constant updating and relearning. Granted, none of them were from Apple or Google, but at least someone was thinking about it.

As the pandemic forced the practical market for technology to expand, bringing customers who had everyday needs for their technology, it created more market opportunities. Those opportunities create pockets of profit that allow for the development of tools for segments of the market that used to be ignored.

It remains to be seen if this market expansion continues after the world returns to a more physically based definition of normal. I suspect it will.

This market evolution may also open up new business model opportunities — where we’re actually willing to pay for online services and platforms that used to be propped up by selling advertising. This move alone would take technology a massive step forward in ethical terms. We wouldn’t have this weird moral dichotomy where marketers are grieving the loss of data (as fellow Media Insider Ted McConnell does in this post) because tech is finally stepping up and protecting our personal privacy.

Perhaps — I hope — the silver lining in the past year is that we will look at technology more as it should be: a tool that’s used to make our lives more fulfilling.

To Be There – Or Not To Be There

According to Eventbrite, hybrid events are the hottest thing for 2021. So I started thinking, what would that possibly look like, as a planner or a participant?

The interesting thing about hybrid events is that they force us to really think about how we experience things. What process do we go through when we let the outside world in? What do we lose if we do that virtually? What do we gain, if anything? And, more importantly, how do we connect with other people during those experiences?

These are questions we didn’t think much about even a year ago. But today, in a reality that’s trying to straddle both the physical and virtual worlds, they are highly relevant to how we’ll live our lives in the future.

The Italian Cooking Lesson

First, let’s try a little thought experiment.

In our town, the local Italian Club — in which both my wife and I are involved — offered cooking lessons before we were all locked down. Groups of eight to 12 people would get together with an exuberant Italian chef in a large commercial kitchen, and together they would make an authentic dish like gnocchi or ravioli. There was a little vino, a little Italian culture and a lot of laughter. These classes were a tremendous hit.

That all ended last March. But we hope to start offering them again in late 2021 or 2022. And, if we do, would it make sense to offer them as a “hybrid” event, where you can participate in person or pick up a box of preselected ingredients and follow along in your own kitchen?

As an event organizer, this would be tempting. You can still charge the full price for physical attendance where you’re restricted to 12 people, but you could create an additional revenue stream by introducing a virtual option that could involve as many people as possible. Even at a lower registration fee, it would still dramatically increase revenue at a relatively small incremental cost. It would be “molto” profitable.

But now consider this as an attendee. Would you sign up for a virtual event like that? If you had no other option to experience it, maybe. But what if you could actually be there in person? Then what? Would you feel relegated to a second-class experience by being isolated in your own kitchen, without many of the sensory benefits that go along with the physical experience?

The Psychology of Zoom Fatigue

When I thought about our cooking lesson example, I was feeling less than enthused. And I wondered why.

It turns out that there’s some actual brain science behind my digital ennui. In an article in the Psychiatric Times, Jena Lee, MD, takes us on a “Neuropsychological Exploration of Zoom Fatigue.”

A decade ago, I was writing a lot about how we balance risk and reward. I believe that a lot of our behaviors can be explained by how we calculate the dynamic tension between those two things. It turns out that it may also be at the root of how we feel about virtual events. Dr. Lee explains,

“A core psychological component of fatigue is a rewards-costs trade-off that happens in our minds unconsciously. Basically, at every level of behavior, a trade-off is made between the likely rewards versus costs of engaging in a certain activity.”

Let’s take our Italian cooking class again. Let’s imagine we’re there in person. For our brain, this would hit all the right “reward” buttons that come with being physically “in the moment.” Subconsciously, our brains would reward us by releasing oxytocin and dopamine along with other “pleasure” neurochemicals that would make the experience highly enjoyable for us. The cost/reward calculation would be heavily weighted toward “reward.”

But that’s not the case with the virtual event. Yes, it might still be considered “rewarding,” but on an entirely different — and lesser — scale than the in-person experience. In addition, we would incur the added costs of figuring out the technology required, logging into the lesson and trying to follow along. Our risk/reward calculator just might decide the tradeoffs weren’t worth it.

Without me even knowing it, this was the calculation that was going on in my head that left me less than enthused.

But there is a flip side to this.

Reducing the Risk Virtually

Last fall, a new study from Oracle in the U.K. was published with the headline, “82% of People Believe Robots Can Support Their Mental Health Better than Humans.”

Something about that just didn’t seem right to me. How could this be? Again, we had the choice between virtual and physical connection, and this time the odds were overwhelmingly in favor of the virtual option.

But when I thought about it in terms of risk and reward, it suddenly made sense. Talking about our own mental health is a high-risk activity. It’s sad to say, but opening up to your manager about job-related stress could get you a sympathetic ear, or it could get you fired. We are taking baby steps towards destigmatizing mental health issues, but we’re at the beginning of a very long journey.

In this case, the risk/reward calculation is flipped completely around. Virtual connections, which rely on limited bandwidth — and therefore limited vulnerability on our part — seem like a much lower risk alternative than pouring our hearts out in person. This is especially true if we can remain anonymous.

It’s All About Human Hardware

The idea of virtual/physical hybrids with expanded revenue streams will be very attractive to marketers and event organizers. There will be many jumping on this bandwagon. But, like all the new opportunities that technology brings us, it has to interface with a system that has been around for hundreds of thousands of years — otherwise known as our brain.

The Crazy World of Our Media Obsessions

Are you watching the news less? Me too. Now that the grownups are back in charge, I’m spending much less time checking my news feed.

Whatever you might say about the last four years, it certainly was good for the news business. It was one long endless loop of driving past a horrific traffic accident. Try as we might, we just couldn’t avoid looking.

But according to Internet analysis tool Alexa.com, that may be over. I ran some traffic rank reports for major news portals and they all look the same: a ramp-up over the past 90 days to the beginning of February, and then a precipitous drop.

While all the top portals have a similar pattern, it’s most obvious on Foxnews.com.

It was as if someone said, “Show’s over folks. There’s nothing to see here. Move along.” And after we all exhaled, we did!

Not surprisingly, we watch the news more when something terrible is happening. It’s an evolved, hardwired response called negativity bias.

Good news is nice. But bad news can kill you. So it’s not surprising that bad news tends to catch our attention.

But this was more than that. We were fixated on Trump. If it were just our bias toward bad news, we would still eventually have gotten tired of it.

That’s exactly what happened with the news on COVID-19. We worked through the initial uncertainty and fear, where we were looking for more information, and at some point moved on to the subsequent psychological stages of boredom and anger. As we did that, we threw up our hands and said, “Enough already!”

But when it comes to Donald Trump, there was something else happening.

It’s been said that Trump might have been the best instinctive communicator to ever take up residence in the White House. We might not agree with what he said, but we certainly were listening.

And while we — and by we, I mean me — think we would love to put him behind us, I believe it behooves us to take a peek under the hood of this particular obsession. Because if we fell for it once, we could do it again.

How the F*$k did this guy dominate our every waking, news-consuming moment for the past four years?

We may find a clue in Bob Woodward’s book on Trump, Rage. He explains that he was looking for a “reflector” — a person who knew Trump intimately and could provide some relatively objective insight into his character.

Woodward found a rather unlikely candidate for his reflector: Trump’s son-in-law, Jared Kushner.

I know, I know — “Kushner?” Just bear with me.

In Woodward’s book, Kushner says there were four things you needed to read and “absorb” to understand how Trump’s mind works.

The first was an op-ed piece in The Wall Street Journal by Peggy Noonan called “Over Trump, We’re as Divided as Ever.” It is not complimentary to Trump. But it does begin to provide a possible answer to our ongoing fixation. Noonan explains: “He’s crazy…and it’s kind of working.”

The second was the Cheshire Cat in Alice in Wonderland. Kushner paraphrased: “If you don’t know where you’re going, any path will get you there.” In other words, in Trump’s world, it’s not direction that matters, it’s velocity.

The third was Chris Whipple’s book, The Gatekeepers: How the White House Chiefs of Staff Define Every Presidency. The insight here is that no matter how clueless Trump was about how to do his job, he still felt he knew more than his chiefs of staff.

Finally, the fourth was Win Bigly: Persuasion in a World Where Facts Don’t Matter, by Scott Adams. That’s right — Scott Adams, the same guy who created the “Dilbert” comic strip. Adams calls Trump’s approach “Intentional Wrongness Persuasion.”

Remember, this is coming from Kushner, a guy who says he worships Trump. This is not apologetic. It’s explanatory — a manual on how to communicate in today’s world. Kushner is embracing Trump’s instinctive, scorched-earth approach to keeping our attention focused on him.

It’s — as Peggy Noonan realized — leaning into the “crazy.”  

Trump represented the ultimate political tribal badge. All you needed to do was read one story on Trump, and you knew exactly where you belonged. You knew it in your core, in your bones, without any shred of ambiguity or doubt. There were few things I was as sure of in this world as where I stood on Donald J. Trump.

And maybe that was somehow satisfying to me.

There was something about standing on one side or the other of the divide created by Trump that was tribal in nature.

It was probably the clearest ideological signal about what was good and what was bad that we’ve seen for some time, perhaps since World War II or the ’60s — two events that happened before most of us were born.

Trump’s genius was that he somehow made both halves of the world believe they were the good guys.

In 2018, Peggy Noonan said that “Crazy won’t go the distance.” I’d like to believe that’s so, but I’m not so sure. There are certainly others who are borrowing a page from Trump’s playbook. Right-wing Republicans Marjorie Taylor Greene and Lauren Boebert are both doing “crazy” extraordinarily well. The fact that almost none of you had to Google them to know who they are proves this.

Whether we’re loving to love, or loving to hate, we are all fixated on crazy.

The problem here is that our media ecosystem has changed. “Crazy” used to be filtered out. But somewhere along the line, news outlets discovered that “crazy” is great for their bottom lines.

As former CBS Chairman and CEO Leslie Moonves said when Trump became the Republican presidential front-runner back in 2016, “It may not be good for America, but it’s damned good for CBS.”

Crazy draws eyeballs like, well, like crazy. It certainly generates more user views than “normal” or “competent.”

In our current media environment — densely intertwined with the wild world of social media — we have no crazy filters. All we have now are crazy amplifiers.

And the platforms that allow this all try to crowd on the same shaky piece of moral high ground.

According to them, it’s not their job to filter out crazy. It’s anti-free speech. It’s un-American. We should be smart enough to recognize crazy when we see it.

Hmmm. Well, we know that’s not working.

Connected Technologies are Leaving Our Seniors Behind

One of my pandemic projects has been editing a video series of oral history interviews we did with local seniors in my community. Last week, I finished the first video in the series. The original plan, pre-pandemic, was to unveil the video as a special event at a local theater, with the participants attending. Obviously, given our current reality, we had to change our plans.

We, like the rest of the world, moved our event online. As I started working through the logistics of this, I quickly realized something: Our seniors are on the other side of a wide and rapidly growing chasm. Yes, our society is digitally connected in ways we never were before, but those connections are not designed for the elderly. In fact, if you were looking for something that seems deliberately designed to disadvantage a segment of our population, it would be hard to find a better example than internet connectivity and the elderly.

I have to admit, for much of the past year, I have been pretty focused on what I have sacrificed because of the pandemic. But I am still a pretty connected person. I can Zoom and have a virtual visit with my friends. If I wonder how my daughters are doing, I can instantly text them. If I miss their faces, I can FaceTime them. 

I have taken on the projects I’ve been able to do thanks to the privilege of being wired into the virtual world. I can even go on a virtual bike ride with my friends through the streets of London, courtesy of Zwift.

Yes, I have given up things, but I have also been able to find digital substitutes for many of those things. I’m not going to say it’s been perfect, but it’s certainly been passable.

My stepdad, who is turning 86, has been able to do none of those things. He is in a long-term care home in Alberta, Canada. His only daily social connections consist of brief interactions with staff during mealtime and when they check his blood sugar levels and give him his medication. All the activities that used to give him a chance to socialize are gone. Imagine life for him, where his sum total of connection is probably less than 30 minutes a day. And, on most days, none of that connecting is done with the people he loves.

Up until last week, family couldn’t even visit him. He was locked down due to an outbreak at his home. For my dad, there were no virtual substitutes available. He is not wired in any way for digital connection. If anyone has paid the social price of this pandemic, it’s been my dad and people like the seniors I interviewed, for whom I was desperately trying to find a way for them just to watch a 13-minute video that they had starred in.

A recent study by mobile technology manufacturer Ericsson looked specifically at the relationship between technology and seniors during the pandemic. The study focused on what the company termed the “young-old” seniors, those aged 65-74. They didn’t deal with “middle-old” (aged 75-85) or “oldest-old” (86 plus) because — well, probably because Ericsson couldn’t find enough who were connected to act as a representative sample.

But they did find that even the “young old” were falling behind in their ability to stay connected thanks to COVID-19. These are people who have owned smartphones for at least a decade, many of whom had to use computers and technology in their jobs. Up until a year ago, they were closing the technology gap with younger generations. Then, last March, they started to fall behind.

They were still using the internet, but younger people were using it even more. And, as they got older, they were finding it increasingly daunting to adopt new platforms and technology. They didn’t have the same access to “family tech support” of children or grandchildren to help get them over the learning curve. They were sticking to the things they knew how to do as the rest of the world surged forward and started living their lives in a digital landscape.

But this was not the group that was part of my video project. My experience had been with the “middle old” and “oldest old.” Half fell into the “middle old” group and half fell into the “oldest old” group. Of the eight seniors I was dealing with, only two had emails. If the “young old” are being left behind by technology, these people were never in the race to begin with. As the world was forced to reset to an online reality, these people were never given the option. They were stranded in a world suddenly disconnected from everything they knew and loved.

Predictably, the Ericsson study proposes smartphones as the solution for many of the problems of the pandemic, giving seniors more connection, more confidence and more capabilities. If only they got connected, the study says, life would be better.

But that’s not a solution with legs. It won’t go the distance. And to understand why, we just have to look at the two age cohorts the study didn’t focus on, the “middle old” and the “oldest old.”

Perhaps the hardest hit have been the “oldest old,” who have sacrificed both physical and digital connection, as this Journals of Gerontology article notes. Four from my group lived in long-term care facilities. Many of these were locked down at some point due to local outbreaks within the facility. Suddenly, the family support they required to connect with their family and friends was no longer available. The technological tools that we take for granted — which we were able to slot in to take the place of things we were losing — were unimaginable to them. They were effectively sentenced to solitary confinement.

A recent study from Germany found that only 3% of those living in long-term care facilities used an internet-connected device. A lot of the time, cognitive declines, even when they’re mild, can make trying to use technology an exercise in frustration.

When my dad went into his long-term care home, my sister and I gave him one of our old phones so he could stay connected. We set everything up and did receive a few experimental texts from him. But soon, it just became too confusing and frustrating for him to use without our constant help. He played solitaire on it for a while, then it ended up in a drawer somewhere. We didn’t push the issue. It just wasn’t the right fit.

But it’s not just my dad who struggled with technology. Even if an aging population starts out as reasonably proficient users, it can be overwhelming to keep up with new hardware, new operating systems and new security requirements. I’m not even “young old” yet, and I’ve worked with technology all my life. I owned a digital marketing company, for heaven’s sake. And even for me, it sometimes seems like a full-time job staying on top of the constant stream of updates and new things to learn and troubleshoot. As connected technology leaps forward, it does not seem unduly concerned that it’s leaving the most vulnerable segment of our population behind.

COVID-19 has pushed us into a virtual world where connection is not just a luxury, but a condition of survival. We need to connect to live. That is especially true for our seniors, who have had all the connections they relied on taken from them. We can’t leave them behind. Connected technology can no longer ignore them.

This is one gap we need to build a bridge over.

The Ebbs and Flows of Consumerism in a Post-Pandemic World

As MediaPost’s Joe Mandese reported last Friday, advertising was, quite literally, almost decimated worldwide in 2020. If you look at the forecasts of the top agency holding companies, ad spends were trimmed by an average of 6.1%. It’s not quite one dollar in 10, but it’s close.

These same companies are forecasting a relative bounceback in 2021, starting slow and accelerating quarter by quarter through the year — but that still leaves the 2021 spend forecast back at 2018 levels.

And as we know, everything about 2021 is still very much in flux. If the year 2021 were a pack of cards, almost every one of them would be wild.

This — according to physician, epidemiologist and sociologist Nicholas Christakis — is not surprising.

Christakis is one of my favorite observers of network effects in society. His background in epidemiological science gives him a unique lens to look at how things spread through the networks of our world, real and virtual. It also makes him the perfect person to comment on what we might expect as we stagger out of our current crisis.

In his latest book, “Apollo’s Arrow,” he looks back to look forward to what we might expect — because, as he points out, we’ve been here before.

While the scope and impact of this one is unusual, such health crises are nothing new. Dozens of epidemics and a few pandemics have happened in my lifetime alone, according to this Wikipedia chart.

This post goes live on Groundhog Day, perhaps the most appropriate of all days for it to run. Today, however, we already know what the outcome will be. The groundhog will see its shadow and there will be six more months (at least) of pandemic to deal with. And we will spend that time living and reliving the same day in the same way with the same routine.

Christakis expects this phase to last through the rest of this year, until the vaccines are widely distributed, and we start to reach herd immunity.

During this time, we will still have to psychologically “hunker down” like the aforementioned groundhog, something we have been struggling with. “As a society we have been very immature,” said Christakis. “Immature, and typical as well, we could have done better.”

This phase will be marked by a general conservatism that will go in lockstep with fear and anxiety, a reluctance to spend and a trend toward risk aversion and religion.

Add to this the fact that we will still be dealing with widespread denialism and anger, which will lead to a worsening vicious circle of loss and crisis. The ideological cracks in our society have gone from annoying to deadly.

Advertising will have to somehow negotiate these choppy waters of increased rage and reduced consumerism.

Then, predicts Christakis, starting some time in 2022, we will enter an adjustment period where we will test and rethink the fundamental aspects of our lives. We will be learning to live with COVID-19, which will be less lethal but still very much present.

We will likely still wear masks and practice social distancing. Many of us will continue to work from home. Local flare-ups will still necessitate intermittent school and business closures. We will be reluctant to be inside with more than 20 or 30 people at a time. It’s unlikely that most of us will feel comfortable getting on a plane or embarking on a cruise ship. This period, according to Christakis, will last for a couple years.

Again, advertising will have to try to thread this psychological needle between fear and hope. It will be a fractured landscape on which to build a marketing strategy. Any pretense of marketing to the masses, a concept long in decline, will now be truly gone. The market will be rife with confusing signals and mixed motivations. It will be incumbent on advertisers to become very, very good at “reading the room.”

Finally, starting in 2024, we will put the pandemic behind us. Now, says Christakis, four years of pent-up demand will suddenly burst through the dam of our delayed self-gratification. We will likely follow the same path taken a century ago, when we were coming out of a war and another pandemic, in the period we call the “Roaring Twenties.”

Christakis explained: “What typically happens is people get less religious. They will relentlessly seek out social interactions in nightclubs and restaurants and sporting events and political rallies. There’ll be some sexual licentiousness. People will start spending their money after having saved it. There’ll be joie de vivre and a kind of risk-taking, a kind of efflorescence of the arts, I think.”

Of course, this burst of buying will be built on the foundation of what came before. The world will likely be very different from its pre-pandemic version. It will be hard for marketers to project demand in a straight line from what they know, because the experiences they’ve been using as their baseline are no longer valid. Some things may remain the same, but some will be changed forever.

COVID-19 will have pried many of the gaps in our society further apart — most notably those of income inequality and ideological difference. A lingering sense of nationalism and protectionism born from dealing with a global emergency could still be in place.

Advertising has always played an interesting role in our lives. It both motivates and mirrors us.

But the reflection it shows is like a funhouse mirror: It distorts some aspects of our culture and ignores others. It creates demand and hides inconvenient truths. It professes to be noble, while it stokes the embers of our ignobility. It amplifies the duality of our human nature.

Interesting times lie ahead. It remains to be seen how that is reflected in the advertising we create and consume.

The Academics of Bullsh*t

“One of the most salient features of our culture is that there is so much bullshit. Everyone knows this. Each of us contributes his share. But we tend to take the situation for granted.”

— from “On Bullshit,” an essay by philosopher Harry Frankfurt.

Would it surprise you to know that I have found not one, but two academic studies on organizational bullshit? And I mean that non-euphemistically. The word “bullshit” is actually in the title of both studies. I B.S. you not.

In fact, organizational bullshit has become a legitimate field of study. Academics are being paid to dig into it — so to speak. There are likely bullshit grants, bullshit labs, bullshit theories, bullshit paradigms and bullshit courses. There are definitely bullshit professors. There is even an OBPS — the Organizational Bullshit Perception Scale — a way to academically measure bullshit in a company.

Many years ago, when I was in the twilight of my time with the search agency I had founded, I had had enough of the bullshit I was being buried under, shoveled there by the company that had acquired us. I was drowning in it. So I vented right here, on MediaPost. I dared you to imagine what it would be like to actually do business without bullshit getting in the way.

My words fell on deaf ears. Bullshit has proliferated since that time. It has been enshrined up and down our social, business and governmental hierarchies, becoming part of our “new” organizational normal. It has picked up new labels, like “fake news” and “alternative facts.” It has proven more dangerous than I could have ever imagined. And it is this dangerous because we ignore it, and ignoring it legitimizes it.

Harry Frankfurt defined the concept and set it apart from lying. Liars know the truth and are trying to hide it. Bullshitters don’t care if what they say is true or false. They only care if their listener is persuaded. That’s as good a working definition of the last four years as any I’ve heard.

But at least one study indicates bullshit may have a social modality — acceptable in some contexts, but corrosive in others. Marketing, for example, is highlighted by the authors as an industry built on a foundation of bullshit:

“advertising and public relations agencies and consultants are likely to be ‘full of it,’ and in some cases even make the production of bullshit an important pillar of their business.”

In these studies, researchers speculate that bullshit might actually serve a purpose in organizations. It may allow for strategic motivation before there is an actual strategy in place. This brand of bullshit is otherwise known as “blue-sky thinking” or “out-of-the-box thinking.”

But if this is true, there is a very narrow window indeed where this type of bullshit could be considered beneficial. The minute there are facts to deal with, they should be dealt with. But the problem is that the facts never quite measure up to the vision of the bullshit. Once you open the door to allowing bullshit, it becomes self-perpetuating.

I grew up in the country. I know how hard it is to get rid of bullshit.

The previous example is what I would call strategic bullshit — a way to “grease the wheels” and get the corporate machine moving. But it often leads directly to operational bullshit — which is toxic to an organization, serving to “gum up the gears” and prevent anything real and meaningful from happening. This was the type of bullshit that was burying me back in 2013 when I wrote that first column. It’s also the type of bullshit that is paralyzing us today.

According to the academic research into bullshit, when we’re faced with it, we have four ways to respond: exit, voice, loyalty or neglect. Exit means we try to escape from the bullshit. Loyalty means we wallow in it, spreading it wider and thicker. Neglect means we just ignore it. And voice means we stand up to the bullshit and confront it. I’m guessing you’ve already found yourself in one of those four categories.

Here’s the thing. As marketers and communicators, we have to face the cold, ugly truth of our ongoing relationship with bullshit. We all have to deal with it. It’s the nature of our industry.

But how do we deal with it? Most times, in most situations, it’s just easier to escape or ignore it. Sometimes it may serve our purpose to jump on the bullshit bandwagon and spread it. But given the overwhelming evidence of where bullshit has led us in the recent past, we all should be finding our voice to call bullshit on bullshit.