It’s Not Whether We Like Advertising – It’s Whether We Accept Advertising

Last week, I said we didn’t like advertising. That – admittedly – was a blanket statement.

In response, MediaPost reader Kevin VanGundy said:

“I’ve been in advertising for 39 years and I think the premise that people don’t like advertising is wrong. People don’t like bad advertising.”

I think there’s truth in both statements. The problem here is the verb I chose to use: “like.” The future of advertising hangs not on what we like, but on what we accept. “Like” is an afterthought. By the time we decide whether we like something or not, we’ve already been exposed to it. It’s whether we open the door to that exposure that will determine the future of advertising. So let’s dig a little deeper there, shall we?

First, seeing as we started with a blanket statement, let’s spend a little time unpacking this idea of “liking” advertising. As Mr. VanGundy agreed, we don’t like bad advertising. The problem is that most advertising is bad, in that it’s not really that relevant to us “in the moment.” Even with the best programmatic algorithms currently being used, the vast majority of the targeted advertising presented to me is off the mark. It’s irrelevant, it’s interruptive and that makes it irritating.

Let’s explore how the brain responds to this. Our brains love to categorize and label, based on our past experience. It’s the only way we can sort through and process the tsunami of input we get presented with on a daily basis. So, just like my opening sentence, the brain makes blanket statements. It doesn’t deal with nuance very well, at least in the subconscious processing of stimuli. It quickly categorizes into big generic buckets and sorts the input, discarding most of it as unworthy of attention and picking the few items of interest out of the mix. In this way, our past experience predicts our future behavior, in terms of what we pay attention to. And if we broadly categorize advertising as irritating, this will lessen the amount of attention we’re willing to pay.

As a thought experiment to support my point, imagine you click on a news story in the Google results and, when you arrive at the article page, you get a pop-up informing you that your ad-blocker is on. You’re given two options: whitelist the page so you receive advertising, or keep your ad-blocker on and read the page anyway. I’m betting you would keep your ad-blocker on. You were given a choice, and that choice included the option to avoid advertising – which you took, because advertising annoys you.

To further understand why the exchange that forms the foundation of advertising is crumbling, we have to understand that much of the attention-focused activity in the brain is governed by a heuristic algorithm that is constantly calculating trade-offs between resources and reward. It manages our cognitive resources by predicting what would have to be invested versus what the potential reward might be. This subconscious algorithm tends to be focused on the task at hand. Anything that gets in the way of the contemplated task is an uncalculated investment of resources. And the algorithm is governed by our past experience and broad categorizations. If you have categorized advertising as “bad,” the brain will quickly cut that category out of consideration. The investment of attention is not warranted given the expected reward. If you do happen to be served a “good” ad that manages to make it into consideration – an exception to our general categorization of advertising as annoying – that categorization can change, but the odds are stacked against it. It’s on just that low-probability occurrence that the entire ad industry is built.
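To make that trade-off concrete, here’s a minimal sketch of the kind of expected-value calculation being described. Everything in it is invented for the illustration – the category labels, weights and attention threshold are my own placeholders, not anything the brain literally computes.

```python
# A toy illustration of the attention trade-off described above. Every name,
# weight and threshold here is made up for the example -- a sketch of the idea,
# not a model of anything the brain actually does.

def worth_attending(stimulus, category_reward, effort_cost=0.15):
    """Attend only if the predicted reward, discounted by how this kind of
    stimulus has been categorized in the past, outweighs the cognitive effort."""
    expected_reward = category_reward.get(stimulus["category"], 0.05)
    return expected_reward * stimulus["relevance"] > effort_cost

# Past experience sorts input into big generic buckets, not nuances.
category_reward = {
    "task_content": 0.9,   # directly related to the task at hand
    "advertising": 0.1,    # broadly categorized as irritating
}

stimuli = [
    {"category": "task_content", "relevance": 0.8},   # what we came for
    {"category": "advertising", "relevance": 0.3},    # the typical off-target ad
    {"category": "advertising", "relevance": 0.95},   # the rare "good" ad
]

for s in stimuli:
    verdict = "attend" if worth_attending(s, category_reward) else "ignore"
    print(s["category"], s["relevance"], "->", verdict)
```

Note that even the rare “good” ad loses in this sketch: the low reward attached to the whole category – the blanket statement – dominates the calculation. That’s the sense in which the odds are stacked against it.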

Finally, let’s look at that probability. In the past, the probability was high enough to warrant the investment of ad dollars. The probability was higher because our choices were fewer. Often, we only had one path to get to what we sought, and that path led through an ad. The brain had no other available options. That’s no longer the case. Let’s go back to our ad-blocker example.

Let’s say the pop-up didn’t give us a choice – we had to whitelist to see the article. The resource-reward algorithm kicks into action: What are the odds we could find the information – ad-free – elsewhere? How important is the information to us? Will we ever want to come back to this site to read another article? Perhaps we give in and whitelist. Or perhaps we just abandon the site with a sour taste in our mouth. The latter was happening more and more, which is why we see fewer news sites offering the whitelist-or-nothing option now. The probability of our market seeing an ad is dropping because they have more ad-free alternatives. Or at least, they think they do.

And it’s this thought – precisely this thought – that is eroding the foundation of advertising, whether we like it or not.

 

Why Marketing Is Increasingly Polarizing Everything

 

Trump. Kanye. Kaepernick. Miracle Whip.

What do these things all have in common? They’re polarizing. Just the mention of them probably stirs up strong feelings in you, one way or the other.

Wait. Miracle Whip?

Yep. Whether you love or hate Miracle Whip is perhaps the defining debate of our decade.

Okay, maybe not. But it turns out that Miracle Whip – which I always thought of as the condiment counterpart to vanilla – is a polarized brand, according to an article in the Harvard Business Review.  And far from being aghast at the thought, Kraft Foods, the maker of Miracle Whip, embraced the polarization with gusto. They embedded it in their marketing.

I have to ask – when did it become a bad thing to be vanilla? I happen to like vanilla. But I always order something else. And there’s the rub. Vanilla is almost never our first choice, because we don’t like to be perceived as boring.

Boring is the kiss of death for marketing. So even Miracle Whip, which is literally “boring” in a jar, is trying to “whip” up some controversy. Our country is being split down the middle and driven to either side – shoved to the margins of outlier territory. Outrageous is not only acceptable. It’s become desirable. And marketing is partly to blame.

We marketers are enamored with this idea of “viralness.” We want advertising to be amplified through our target customer’s social networks. Boring never gets retweeted or shared. We need to be jolted out of those information filters we have set on high alert. That’s why polarization works. By moving to extremes, brands catch our attention. And as they move to extremes, they drag us along with them. Increasingly, the brands we choose as our own identifying badges are moving away from any type of common middle ground. Advertising is creating a nation of ideological tribes separated by an ever-increasing divide.

The problem is that polarization works. Look at Nike. As Sarah Mahoney recently documented in a MediaPost article, the Colin Kaepernick campaign turned in some impressive numbers for Nike. Research from Kantar Millward Brown found these ads were particularly effective in piercing our ennui. The surprising part is that they did it on both sides of the divide. Based on Kantar’s Link evaluation, the ad scored in the top 15% of ads on something called “Power Contribution.” According to Kantar, that’s the ad’s “potential to impact long-term equity.” If we strip away the “market-speak” from this, that basically means the Kaepernick ads make Nike an excellent tribal badge to rally around.

If you’re a marketer, it’s hard to argue with those numbers. And is it really important if half the world loves a brand and the other half hates it? I suspect it is. The problem comes when we look at exactly what Kantar’s Link evaluation measures: the intensity of feeling you have towards a brand. The more intense the feeling, the less rational you are. And if the object of your affection lies in outlier territory, those emotions can become highly confrontational towards those on the other side of the divide. Suddenly, opinions become morals, and there is no faster path to hate than embracing a polarized perspective on morality. The more that emotionally charged marketing pushes us towards the edges, the harder it is to respect opinions opposed to our own. This embracing of polarization in unimportant areas – like which running shoes you choose to wear – increases polarization in other areas where it’s much more dangerous. Like politics.

As if we haven’t seen enough evidence of this lately, polarized politics can cripple a country. In a recent interview on NPR, Georgia State political science professor Jennifer McCoy listed three possible outcomes of polarization. First, the country can enter polarization gridlock, where nothing can get done because there is a complete lack of trust between opposing parties. Second, a polarization pendulum can occur, where power swings back and forth between the two sides and most of the political energy is expended undoing the initiatives of the previous government. Often there is little logic to this, other than the fact that the initiatives were started by “them” and not “us.” Finally, one side can find a way to stay in power and then actively work to diminish and vanquish the other side by dismantling democratic platforms.

Today, as you vote, you’ll see ample evidence of the polarization of America. You’ll also see that at least two of the three outcomes of polarization are already playing out. We marketers just have to remember that while we love it when a polarized brand goes viral, there may be another one of those unintended consequences lurking in the background.

 

 

The Rank and File: Life and Work in a Quantified World

No one likes to be a number – unless, of course, that number is one. Then it’s okay.

Rankings started to be crucial to me back in 1996, when I jumped into the world of search engine optimization. Suddenly, the ten blue links on a search results page took on critical importance. The most important, naturally, was the first result. It turned on the tap for a flow of business many local organizations could only dream of. My company once got a California Mustang parts retailer the number one ranking for “Ford Mustang parts.” The official site of Ford – Ford.com – was number two. The California business did very well for a few years. We probably made them rich. Then Google came along and the party was over. We soon found that as quickly as that tap could be turned on, it could also be turned off. We and our clients rode the stormy waters of multiple Google updates. We called it the Google dance.

Now that I’m in my second life (third? fourth?) as a tourism operator, I’m playing that ranking game again. This time it’s with TripAdvisor. You would not believe how important a top ranking in your category is here. Again, your flow of business can be totally at the mercy of how well you rank.

The problem with TripAdvisor is not so much the algorithm in the background or the criteria used for ranking. The problem is the delta between riches and rags. If you drop below the proverbial fold on TripAdvisor, your tourism business can shrivel up and die. One bad review could be the difference. I feel like I’m dancing the Google dance all over again.

But at least TripAdvisor is what I would call a proximate ranking site. The source of the rankings is closely connected to the core nature of the industry. Tourism is all about experiences. And TripAdvisor is a platform for experience reviews. There is some wiggle room there for gaming the system, but the unintended consequences are kept to a minimum. If you’re in the business of providing good experiences, you should do well in TripAdvisor. And if you pay attention to the feedback on TripAdvisor, your business should improve. This is a circle that is mostly virtuous.

Such is not always the case. Take teaching, for example. Ratemyprofessors.com is a ranking site for teachers and professors, based on feedback from students. If you read through the reviews, it soon becomes obvious that funny, relatable, good-looking profs fare better than those who are less socially gifted. It has become a popularity contest for academics. Certainly, some of those things may factor into the effectiveness of a good educator, but there is a universe of other criteria that are given short shrift on the site. Teaching is a subtle and complex profession. Is a popular prof necessarily a good prof? If too much reliance is placed on ratings like those found on Ratemyprofessors.com, will the need to be popular push some of those other, less-rankable attributes to the background?

But let’s step back even a bit further. Along with the need to quantify everything there is also a demand for transparency. Let’s step into another classroom, this time in your local elementary school. The current push is to document what’s happening in the classroom and share it on a special portal that parents have access to. In theory, this sounds great. Increasing collaboration and streamlining the communication triangle between teachers, students and parents should be a major step forward. But it’s here where unintended consequences can run the education process off the rails. Helicopter parents are the most frequent visitors to the portal. They also dominate these new communication channels that are now open to their children’s teachers. And – if you know a helicopter parent – you know these are people who have no problem picking up the phone and calling the school administrator or even the local school board to complain about a teacher. Suddenly, teachers feel they’re constantly under the microscope. They alter their teaching style and course content to appeal to the types of parents that are constantly monitoring them.

Even worse, the teacher finds themselves constantly interrupting their own lesson in order to document what’s going on to keep these parents satisfied. What appears to be happening in the classroom through social media becomes more important than what’s actually happening in the classroom. If you’ve ever tried to actively present to a group and also document what’s happening at the same time, you know how impossible this can be. Pity then the poor teacher of your children.

There is a quote that is often (incorrectly) attributed to management guru Peter Drucker: “If you can’t measure it, you can’t manage it.” The reality is a lot more nuanced. As we’re finding out, what you’re actually measuring matters a lot. It may be leading you in completely the wrong direction.

Our Trust Issues with Advertising-Based Revenue Models

Facebook’s in the soup again. They’re getting their hands slapped for tracking our location. And I have to ask, why is anyone surprised they’re tracking our location? I’ve said this before, but I’ll say it again. What is good for us is not good for Facebook’s revenue model. And vice versa. Social platforms should never be driven by advertising. Period. Advertising requires targeting. And when you combine prospect targeting and the digital residue of our online activities, bad things are bound to happen. It’s inevitable, and it’s going to get worse. Facebook’s future earnings absolutely dictate that they have to try to get us to spend more time on their platform and they have to be more invasive about tracking what we do with that time. Their walled data garden and their reluctance to give us a peek at what’s happening inside should be massive red flags.

Our social activities are already starting to fragment across multiple platforms – and multiple accounts within each of those platforms. We are socially complex people, and it’s naïve to think that all that complexity could be contained within any one ecosystem – even one as sprawling as Facebook’s. In our real lives – you know, the life you lead when you’re not staring at your phone – our social activities are as varied as our moods, our activities, our environment and the people we are currently sharing that environment with. Being social is not a single aspect of our lives. It is the connecting tissue of all that we are. It binds all the things we do into a tapestry of experience. It reflects who we are, and our identities are shaped by it. Even when we’re alone, as I am while writing this column, we are being social. I am communicating with each of you, and the things I am communicating are shaped by my own social experiences.

My point here is that being social is not something we turn on and off. We don’t go somewhere to be social. We are social. To reduce social complexity and try to contain it within an online ecosystem is a fool’s errand. Trying to support it with advertising just makes it worse. A revenue model based on advertising is self-limiting. It has always been a path of least resistance, which is why it’s so commonly used. It places no financial hurdles on the path to adoption. We have never had to pay money to use Facebook, or Instagram, or Snapchat. But we do pay with our privacy. And eventually, after the inevitable security breaches, we also lose our trust. That lack of trust limits the effectiveness of any social medium.

Of course, it’s not just social media that suffers from the trust issues that come with advertising-based revenue. This advertising-driven path has worked up to now because trust was never really an issue. We took comfort in our perceived anonymity in the eyes of the marketer. We were part of a faceless, nameless mass market that traded attention for access to information and entertainment. Advertising works well with mass. As I mentioned, there are no obstacles to adoption. It was the easiest way to assemble the biggest possible audience. But we now market one to one. And as the ones on the receiving end, we are now increasingly seeking functionality. That is a fundamentally different precept. When we seek to do things, rather than passively consume content, we can no longer remain anonymous. We make choices, we go places, we buy stuff, we do things. In doing this, we leave indelible footprints that are easy to track and aggregate.

Our online and offline lives have now melded to the point where we need – and expect – something more than a collection of platforms offering fragmented functionality. What we need is a highly personalized OS, a foundational operating system that is intimately designed just for us and connects the dots of functionality. This is already happening in bits and pieces through the data we surrender when we participate in the online world. But that data lives in thousands of different walled gardens, including the social platforms we use. Then that data is used to target advertising to us. And we hate advertising. It’s a fundamentally flawed contract that we will – given a viable alternative – opt out of. We don’t trust the social platforms we use, and we’re right not to. If we had any idea of the depth and degree of personal information they have about us, we would be aghast. I have said before that we are willing to trade privacy for functionality, and I still believe this. But once our trust has been broken, we are less willing to surrender that private data, which is essential to the continued profitability of an ad-supported platform.

We need to own our own data. This isn’t so much to protect our privacy as it is to build a new trust contract that will allow that data to be used more effectively for our own purposes, and not those of a corporation whose only motive is to increase its own profit. We need to remove the limits imposed by a flawed functionality offering based on serving us ads that we don’t want. If we’re looking for the true disruptor in advertising, that’s it in a nutshell.

 

Life After Google: The Great Social Experiment

Google is fascinating. And not because of an algorithm, or technology, or its balance sheet.

Google is fascinating at a human level. It is perhaps the greatest corporate-based social experiment conducted so far this century.

If we are talking about the Silicon Valley ethos, Google is the iconic example. There were other companies that started down the road of turning their corporate culture into a secular religion before Google, but it was this company that crystallized it. It is the company caricaturized in pop culture, whether it be the thinly disguised Hooli of the series Silicon Valley, a dystopian The Circle in the forgettable movie of the same name, or even straight up Google besieged by Owen Wilson and Vince Vaughn in The Internship.

By now, Facebook has arguably picked up the mantle of the iconic Silicon Valley culture, but that’s also what makes Google interesting. Google – and its carefully crafted hyper-drive working culture – has now been around for two decades. Many have passed through the crucible of all that is Google and have emerged on the other side. Today I offer three interesting examples of Life after Google – three people who are making their social statements in three very different ways. Let’s call them the Evangelist, the Novelist and the Algorithmist.

The Evangelist – Tristan Harris

I’ve talked about Tristan before in this column. He is the driving force behind Time Well Spent and The Center for Humane Technology. Harris has been called the “closest thing Silicon Valley has to a conscience” by The Atlantic Magazine. He was sucked into the Google vortex when Google acquired his company – Apture – in 2011. Harris then spent some time as Google’s Design Ethicist before leaving Google in 2016 to work full-time on “reforming the attention economy.”

Tristan shares my fear that technology may be playing nasty tricks with our minds below the waterline of consciousness. He focuses on the nexus of that influence: the handful of companies that steer our thoughts without us even being aware of it. In Tristan’s crusade, the prime suspect is Facebook, but Google also shares the blame. And the crime is the theft of our attention. This asset, which we believe is under our control, is actually being consciously diverted by someone other than ourselves. Design engineers are all too aware of our psychological hot buttons and push them mercilessly because, in this new economy, attention equals profitability. For that reason, Harris’s message would risk being disingenuous coming from the Googleplex. As I’ve said before, it’s hard to believe that Google or Facebook would lead the “Time Well Spent” charge when doing so would directly impact their bottom line.

The Novelist – Jessica Powell

Jessica Powell was the head of communications at Google. And while she says her new novel – “The Big Disruption” – is not a Google tell-all, the subject matter is definitely aimed right at her former employer. The name of the company in the novel has been changed to Anahata, but anyone who has ever visited the Google campus would recognize her descriptions instantly. The Big Disruption is satire, and as such it’s been exaggerated for effect. But hidden amongst the wild caricatures are spot-on revelations about Silicon Valley. The novel – available at Medium.com – is billed as a “totally fictional but essentially true Silicon Valley story.” And a review in the New York Times states, “while the events in her satire are purposefully and hilariously over the top, her diagnosis of Silicon Valley’s cultural stagnancy is so spot on that it’s barely contestable.”

The Algorithmist – Max Hawkins

So, if we’re talking about surrendering control of our lives to technology, we have to talk about Max Hawkins. His life is determined by an algorithm. Actually, his life is determined by a few algorithms.

Hawkins was profiled in the NPR podcast Invisibilia. He left his job as a Google software engineer three years ago. Max has created a number of programs that randomly direct his life. For example, he created a program that scraped Facebook’s events API and sent him to events at random. Suggestions about where and what he eats are also randomly generated by an algorithm. Even his new tattoo was determined by an algorithm that scraped Google Images for line-art suggestions. According to an interview on Medium, his latest project involves a machine that scans books for verb-object pairs – such as “hire a babysitter” or even “kill a deer” – and randomly presents them to him to act on.
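The mechanics behind a program like that are simple enough to sketch. What follows is a hypothetical stand-in, not Hawkins’ actual code: the events are hard-coded placeholders rather than anything pulled from Facebook’s events API, but the surrender-to-randomness logic is the whole idea.

```python
import random

# A hypothetical stand-in for the kind of "randomize my life" program described
# above. Hawkins' version pulled real events from Facebook's events API; these
# entries are placeholders invented for the illustration.

events = [
    {"name": "Beginner salsa class", "city": "Oakland"},
    {"name": "Community garden workday", "city": "Berkeley"},
    {"name": "Amateur astronomy meetup", "city": "San Jose"},
]

def pick_my_evening(candidate_events):
    """Surrender the decision to the algorithm: choose one event uniformly at
    random and treat the result as non-negotiable."""
    choice = random.choice(candidate_events)
    return f"Tonight you are going to '{choice['name']}' in {choice['city']}."

print(pick_my_evening(events))
```

The point isn’t the code, which is trivial; it’s the commitment to act on whatever comes back.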

The interesting thing about Hawkins’ experiment is the authority he gives to his algorithms. He feels comfortable doing this because of the randomness of the process. He speculates about what this might mean for the world at large: “It’d be interesting to imagine what would happen if all power was distributed randomly. A randomized socialism where the computer decides that you’re rich for a couple of months and you get to see what it’s like to wield power and after that you’re poor for a while. There’s a certain fairness to that.”

For these three, life at Google must have shaped what came after.

Avoiding the Truth: Dodging Reality through Social Media

“It’s very hard to imagine that the revival of Flat Earth theories could have happened without the internet.”

Keith Kahn-Harris – Sociologist and author of “Denial: The Unspeakable Truth” – in an interview on CBC Radio

On November 9, 2017, 400 people got together in Raleigh, North Carolina. They all believe the earth is flat. This November 15th and 16th, they will do it again in Denver, Colorado. If you are so inclined, you could even join other flat earthers for a cruise in 2019. The Flat Earth Society is a real thing. They have their own website. And – of course – they have their own Facebook page (actually, there seem to be a few pages; apparently, there are Flat Earth factions).

Perhaps the most troubling thing is this: it isn’t a joke. These people really believe the earth is flat.

How can this happen in 2018? For the answer, we have to look inwards – and backwards – to discover a troubling fact about ourselves. We’re predisposed to believe stuff that isn’t true. And, as Mr. Kahn-Harris points out, this can become dangerous when we add an obsessively large dose of time spent online, particularly with social media.

It makes sense that there was an evolutionary advantage for a group of people who lived in the same area and dealt with the same environmental challenges to have the same basic understanding of things. These commonly held beliefs allowed group learnings to be passed down to the individual: eating those red berries would make you sick, wandering alone on the savannah was not a good idea, coveting thy neighbor’s wife might get you stabbed in the middle of the night. Our beliefs often saved our ass.

Because of this, it was in our interest to protect our beliefs. They formed part of our “fast” reasoning loop, not requiring the brain to kick in and do any processing. Cognitive scientists refer to this as “fluency.” Our brains have evolved to be lazy. If they don’t have to work, they don’t. And in the adaptive environment we evolved in – for reasons already stated – this cognitive shortcut generally worked to our benefit. Ask anyone who has had to surrender a long-held belief. It’s tough to do. Overturning a belief requires a lot of cognitive horsepower. It’s far easier to protect it with a scaffolding of supporting “facts” – no matter how shaky that scaffolding may be.

Enter the Internet. And the usual suspect? Social media.

As I said last week, the truth is often hard to handle – especially if it runs headlong into our beliefs. I don’t want to believe in climate change, because the consequences of that truth are mind-numbingly frightening. But I find I’m forced to. I also don’t believe the earth is flat. For me, in both cases, the evidence is undeniable. That’s me, however. There are plenty of people who don’t believe climate change is real and – according to the Facebook Official Flat Earth Discussion group – there are at least 107,372 people who believe the earth is flat. The same evidence is also available to them. Why are we different?

When it comes to our belief structure, we all have different mindsets, plotted on a spectrum of credulity. I’m what you may call a scientific skeptic. I tend not to believe something is true unless I see empirical evidence supporting it. There are others who tend to believe in things at a much lower threshold. And this tendency is often found across multiple domains. The mindset that embraces creationism, for example, has been shown to also embrace conspiracy theories.

In the pre-digital world, our beliefs were a feature, not a bug. When we shared a physical space with others, we also relied on a shared “mind-space” that served us well. Common beliefs created a more cohesive social herd and were typically proven out over time against the reality of our environment. Beneficial beliefs were passed along and would become more popular, while non-beneficial beliefs were culled from the pack. It was the cognitive equivalent of Adam Smith’s “Invisible hand.” We created a belief marketplace.

Beliefs are moderated socially. The more unpopular our own personal beliefs, the more pressure there is to abandon them. There is a tipping-point mechanism at work here. Again, in a physically defined social group, those whose mindsets tend to look for objective proof will be the first to abandon a belief that is obviously untrue. From that point forward, social contagion can be a more effective factor in helping the new perspective spread through a population than the actual evidence. “What is true?” is not as important as “what does my neighbor believe to be true?”
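To see why a tipping point emerges, it helps to play with a toy threshold model of contagion, loosely in the spirit of Granovetter’s classic work on collective behavior. Everything below is illustrative – the group size and thresholds are numbers I’ve picked for the example, not data from any study.

```python
# A toy threshold model of belief change, loosely after Granovetter (1978).
# Each person abandons the old belief once the share of the group that has
# already abandoned it reaches their personal threshold. The evidence itself
# only moves the most skeptical person; everyone else follows the crowd.

def cascade(thresholds):
    ordered = sorted(thresholds)
    abandoned = 1  # the lone evidence-driven skeptic switches first
    while True:
        share = abandoned / len(ordered)
        newly = sum(1 for t in ordered[abandoned:] if t <= share)
        if newly == 0:
            return abandoned
        abandoned += newly

n = 100
evenly_spread = [i / n for i in range(n)]                    # thresholds 0.00 to 0.99
slightly_tougher = [0.02 if t == 0.01 else t for t in evenly_spread]

print(cascade(evenly_spread))     # 100 -- the new view sweeps the whole group
print(cascade(slightly_tougher))  # 1   -- same evidence, but the cascade stalls
```

The second run is the telling one: the evidence is identical, but a tiny shift in how many people need to see their neighbors switch first is enough to stop the contagion cold. What spreads isn’t the truth; it’s the perception of what everyone else believes.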

This is where social media comes in. On Facebook, a community is defined in the mind, not in any particular physical space. Proximity becomes irrelevant. Online, we can always find others who believe in the same things we do. A Flat Earther can find comfort by going on a cruise with hundreds of other Flat Earthers and saying that 107,372 people can’t be wrong. They can even point to “scientific” evidence proving their case. For example, if the earth weren’t flat, a jetliner would have to continually point its nose down to keep from flying off into space (granted, this argument conveniently ignores gravity and all types of other physics, but why quibble).

Social media provides a progressive banquet of options for dealing with unpleasant truths. Probably the most benign of these is something I wrote about a few weeks back – slacktivism. At least slacktivists acknowledge the truth. From there, you can progress to a filtering of facts (only acknowledging the truths you can handle), wilful ignorance (purposely avoiding the truth), denialism (rejecting the truth) and full-out fantasizing (manufacturing an alternate set of facts). Examples of all of these abound on social media.

In fact, the only thing that seems hard to find on Facebook is the bare, unfiltered, unaltered truth. And that’s probably because we’re not looking for it.

 

Why We No Longer Want to Know What’s True

“Truth isn’t truth” – Rudy Giuliani – August 19, 2018

Even without Giuliani’s bizarre statement, we’re developing a weird relationship with the truth. It’s becoming even more inconvenient. It’s certainly becoming more worrisome. I was chatting with a psychiatrist the other day who counsels seniors. I asked him if he was noticing more general anxiety in that generation – a feeling of helplessness with how the world seems to be going to hell in a handbasket. I asked him that because I am less optimistic about the future than I ever have been in my life. I wanted to know if that was unusual. He said it wasn’t – I had plenty of company.

You can pick the truth that is most unsettling. Personally, I lose sleep over climate change, the rise of populist politics and the resurgence of xenophobia. I have to limit the amount of news I consume in any day, because it sends me into a depressive state. I feel helpless. And as much as I’m limiting my intake because of my own mental health, I can’t help thinking that this is a dangerous path I’m heading down.

After doing a little research, I have found that things like PTSD (President Trump Stress Disorder) and TAD (Trump Anxiety Disorder) are real things. They’re recognized by the American Psychological Association. After a ten-year decline, anxiety levels in the US spiked dramatically after November 2016. Clinical psychologist Jennifer Panning, who coined the term TAD, says “the symptoms include feeling a loss of control and helplessness, and fretting about what’s happening in the country and spending excessive time on social media.”

But it’s not just the current political climate that’s causing anxiety. It’s also the climate itself. Enter “ecoanxiety.” Again, the APA, in a recent paper, nails a remarkably accurate diagnosis of how I’m feeling: “Gradual, long-term changes in climate can also surface a number of different emotions, including fear, anger, feelings of powerlessness, or exhaustion.”

“You can’t handle the truth” – Colonel Nathan R. Jessep (from the movie “A Few Good Men”)

So – when the truth scares the hell out of you – what do you do? We can find a few clues in the quotes above. One is this idea of a loss of control. The other is spending excessive time on social media. My belief is that the latter exacerbates the former.

In a sense, Rudy Giuliani is right. Truth isn’t truth, at least, not on the receiving end. We all interpret truth within the context of our own perceived reality. This in no way condones the manipulation of truth upstream from when it reaches us. We need to trust that our information sources are providing us the closest thing possible to a verifiable and objective view of truth.  But we have to accept the fact that for each of us, truth will ultimately be filtered through our own beliefs and understanding of what is real. Part of our own perceived reality is how in control we feel of the current situation. And this is where we begin to see the creeping levels of anxiety.

In 1954, psychologist Julian Rotter introduced the idea of a “locus of control” – the degree of control we believe we have over our own lives. For some of us, our locus is tipped to the internal side. We believe we are firmly at the wheel of our own lives. Others have an external locus, believing that life is left to forces beyond our control. But like most concepts in psychology, the locus of control is not a matter of black and white. It is a spectrum of varying shades of gray. And anxiety can arise when our view of reality seems to be beyond our own locus of control.

The word locus itself comes from the Latin for “place” or “location.” Typically, our control is exercised over those things that are physically close to us. And up until 150 years ago, that worked well. We had little awareness of things beyond our own little world, so we didn’t need to worry about them. But electronic media changed that. Suddenly, we were aware of wars, pestilence, poverty, famines and natural disasters from around the world. This made us part of Marshall McLuhan’s “Global Village.” The circle of our “locus of awareness” suddenly had to accommodate the entire world, but our “locus of control” just couldn’t keep pace.

Even with this expansion of awareness, one could still say that truth remained relatively true. There was an editorial process of checks and balances that verified the veracity of the information we were presented. It certainly wasn’t perfect, but we could place some confidence in the truth of what we read, saw and heard.

And then came social media. Social media creates a nasty feedback loop when it comes to the truth. Once again, Dr. Panning typified these new anxieties as “fretting about what’s happening in the country and spending excessive time on social media.” The algorithmic targeting of social media platforms means that you’re getting a filtered version of the truth. Facebook knows exactly what you’re most anxious about and feeds you a steady diet of content tailored specifically to those anxieties. We have the comfort of seeing posts from members of our network who seem to fear the same things we do and share the same beliefs. But the more time we spend seeking this comfort, the more we’re exposed to those anxiety-inducing triggers, and the further we drift from the truth. It creates a downward spiral that leads to these new types of environmental anxiety we are seeing. And to deal with those anxieties we’re developing new strategies for handling the truth – or, at least, our version of the truth. That’s where I’ll pick up next week.