The Split-Second Timing of Brand Trust

Two weeks ago, I talked about how brand trust can erode so quickly and cause so many issues. I intimated that advertising and branding have become decoupled — and advertising might even erode brand trust, leading to a lasting deficit.

Now I think that may be a little too simplistic. Brand trust is a holistic thing — the sum total of many moving parts. Taking advertising in isolation is misleading. Will one social media ad for a brand lead to broken trust? Probably not. But there may be a cumulative effect that we need to be aware of.

Looking more closely at the Edelman Trust Barometer study, a very interesting picture emerges. Essentially, the study shows there is a trust crisis. Edelman calls it information bankruptcy.

The slide in trust is probably not surprising. It’s hard to be trusting when you’re afraid, and if there’s one thing the Edelman Barometer shows, it’s that we are globally fearful. Our collective hearts are in our mouths. And when this happens, we are hardwired to respond by lowering our trust and raising our defenses.

But our traditional sources for trusted information — government and media — have also abdicated their responsibilities to provide it. They have instead stoked our fears and leveraged our divides for their own gains. NGOs have suffered the same fate. So, if you can’t trust the news, your leaders or even your local charity, who can you trust?

Apparently, you can trust a corporation. Edelman shows that businesses are now the most trusted organizations in North America. Media, especially social media, is the least trusted institution. I find this profoundly troubling, but I’ll put that aside for a future post. For now, let’s just accept it at face value.

As I said in that previous column, we want to trust brands more than ever. But we don’t trust advertising. This creates a dilemma for the marketer.

This all brings to mind a study I was involved with a little over 10 years ago. Working with Simon Fraser University, we wanted to know how the brain responded to trusted brands. The initial results were fascinating — but unfortunately, we never got the chance to do the follow-up study we intended.

This was an ERP study (event-related potential), where we looked at how the brain responded when we showed brand images as a stimulus. ERP studies are useful to better understand the immediate response of the brain to something — the fast loop I talk so much about — before the slow loop has a chance to kick in and rationalize things.

We know now that what happens in this fast loop really sets the stage for what comes after. It essentially makes up the mind, and then the slow loop adds rational justification for what has already been decided.

What we found was interesting: The way we respond to our favorite brands is very similar to the way we respond to pictures of our favorite people. The first hint of this occurred in just 150 milliseconds, less than one-sixth of a second. The next reinforcement was found at 400 milliseconds. In that time, less than half a second in total, our minds were made up. In fact, the mind was basically made up in about the same time it takes to blink an eye. Everything that followed was just window dressing.

This is the power of trust. It takes a split second for our brains to recognize a situation where it can let its guard down. This sets in motion a chain of neurological events that primes the brain for cooperation and relationship-building. It primes the oxytocin pump and gets it flowing. And this all happens just that quickly.

On the other side, if a brand isn’t trusted, a very different chain of events occurs just as quickly. The brain starts arming itself for protection. Our amygdala starts gearing up. We become suspicious and anxious.

This platform of brand trust — or lack of it — is built up over time. It is part of our sense-making machinery. Our accumulating experience with the brand either adds to our trust or takes it away.

But we must also realize that if we have strong feelings about a brand, one way or the other, it then becomes a belief. And once this happens, the brain works hard to keep that belief in place. It becomes virtually impossible at that point to change minds. This is largely because of the split-second reactions our study uncovered.

This sets very high stakes for marketers today. More than ever, we want to trust brands. But we also search for evidence that this trust is warranted in a very different way. Brand building is the accumulation of experience over all touch points. Each of those touch points has its own trust profile. Personal experience and word of mouth from those we know are the highest. Advertising on social media is one of the lowest.
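To make that idea concrete, here's a minimal sketch in code. The channel weights, the event values and the asymmetry between gains and losses are all my own illustrative assumptions, not figures from Edelman or anyone else.

```python
# Trust as a cumulative, channel-weighted score. All weights and
# values below are illustrative assumptions.

TRUST_WEIGHT = {             # hypothetical trust profile per touch point
    "word_of_mouth": 1.0,    # highest: people we know
    "personal_experience": 0.9,
    "news_coverage": 0.4,
    "social_media_ad": 0.1,  # lowest: the least-trusted channel
}

def update_trust(score: float, channel: str, experience: float) -> float:
    """Nudge the running trust score by one brand experience.

    `experience` is +1 for a positive encounter, -1 for a negative one.
    Negative experiences are weighted more heavily, reflecting the idea
    that trust is lost faster than it is gained.
    """
    penalty = 2.0 if experience < 0 else 1.0   # assumed asymmetry
    return score + TRUST_WEIGHT[channel] * penalty * experience

score = 0.0
for channel, outcome in [
    ("social_media_ad", +1),      # one ad barely moves the needle...
    ("word_of_mouth", +1),
    ("personal_experience", -1),  # ...one bad experience costs dearly
]:
    score = update_trust(score, channel, outcome)
    print(f"{channel:20s} -> trust {score:+.2f}")
```

The point of the toy model isn't the numbers; it's that the same event moves the needle very differently depending on the trust profile of the channel it arrives through.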

The marketer’s goal should be to leverage trust-building for the brand in the most effective way possible. Do it correctly, through the right channels, and you have built trust that’s triggered in an eye blink. Screw it up, and you may never get a second chance.

Social Media Reflects Rights Vs. Obligations Split

Last week, MediaPost writer (and my own editor here on Media Insider) Phyllis Fine asked this question in a post: “Can Social Media Ease the Path to Herd Immunity?” The question is not only timely, but also indicative of the peculiar nature of social media, which could be stated thus: for every point of view expressed, there is an equal — and opposite — point of view. Fine’s post quotes a study from the Institute of Biomedical Ethics and History of Medicine at the University of Zurich, which reveals, “Anti-vaccination supporters find fertile ground in particular on Facebook and Twitter.”

Here’s the thing about social media. No matter what the message might be, there will be multiple interpretations of it. Often, the most extreme interpretations will be diametrically opposed to each other. It’s stunning how the very same content can illustrate the vast ideological divides that separate us.

I’ve realized that the only explanation for this is that our brains must work differently. We’re not even talking apples and oranges here. This is more like ostrich eggs and vacuum cleaners.

This is not my own revelation. There’s a lot of science behind it. An article in Scientific American catalogs some of the differences between conservative and liberal brains. Even the actual structure is different. According to the article: “The volume of gray matter, or neural cell bodies, making up the anterior cingulate cortex, an area that helps detect errors and resolve conflicts, tends to be larger in liberals. And the amygdala, which is important for regulating emotions and evaluating threats, is larger in conservatives.”

We have to understand that a right-leaning brain operates very differently than a left-leaning brain. Recent neuroimaging studies have shown that when the two consider the very same piece of information, totally different sections of their respective brains light up. They process information differently.

In a previous post about this topic, I quoted biologist and author Robert Sapolsky as saying, “Liberals are more likely to process information systematically, recognize differences in argument quality, and to be persuaded explicitly by scientific evidence, whereas conservatives are more likely to process information heuristically, attend to message-irrelevant cues such as source similarity, and to be persuaded implicitly through evaluative conditioning. Conservatives are also more likely than liberals to rely on stereotypical cues and assume consensus with like-minded others.”

Or, to sum it up in plain language: “Conservatives start gut and stay gut; liberals go from gut to head.”

This has never been clearer than in the past year. Typically, the information being processed by a conservative brain would have little overlap with the information being processed by a liberal brain. Each would care and think about different things.

But COVID-19 has forced the two circles of this particular Venn diagram together, creating a bigger overlap in the middle. We are all focused on information about the pandemic. And this has created a unique opportunity to more directly compare the cognitive habits of liberals versus conservatives.

Perhaps the biggest difference is in the way each group defines morality. At the risk of a vast oversimplification, the right tends to focus on individual rights, especially those they feel they personally are at risk of losing. The left thinks more in terms of societal obligations: What do we need to do — or not do — for the greater good of us all? To paraphrase John F. Kennedy, conservatives ask what their country can do for them; liberals ask what they can do for their country.

This thinking is part of Jonathan Haidt’s Moral Foundations Theory. What Haidt, working with others, has found is that both the right and left have morals, but they are defined differently. This “moral pluralism” means that two people can look at the same social media post but take two entirely different messages from it. And both will insist their interpretation is the correct one. Liberals can see a post about getting a vaccine as an appeal to their concern for the collective well-being of their community. Conservatives see it as an attack on their personal rights.

So when we ask a question like “Can social media ease the path to herd immunity?” we run into the problem of message interpretation. For some, it will be preaching to the choir. For others, it will have the same effect as a red cape in front of a bull.

It’s interesting that the vaccine question is being roadblocked by this divide between rights and obligations. It shows just how far apart the two sides are. With a vaccine, at least both sides have skin in the game. Getting a vaccine can save your life, no matter how you vote. Wearing a face mask is a different matter.

In my lifetime, I have never seen a more overt signaling of ideological leanings than whether you choose to wear a face mask or not. When we talk about rights vs. obligations, this is the ultimate acid test. If I insist on wearing a mask, as I do, I’m not wearing it for me; I’m wearing it for you. It’s part of my obligation to my community. But if you refuse to wear a mask, it’s pretty obvious who you’re focused on.

The thing that worries me the most about this moral dualism is that a moral fixation on individual rights is not sustainable. It’s assuming that our society is a zero-sum game. In order for me to win, you must lose. If we focus instead on our obligations, we approach society with an abundance mentality. As we contribute, we all benefit.

At least, that’s how my brain sees it.

To Be There – Or Not To Be There

According to Eventbrite, hybrid events are the hottest thing for 2021. So I started thinking, what would that possibly look like, as a planner or a participant?

The interesting thing about hybrid events is that they force us to really think about how we experience things. What process do we go through when we let the outside world in? What do we lose if we do that virtually? What do we gain, if anything? And, more importantly, how do we connect with other people during those experiences?

These are questions we didn’t think much about even a year ago. But today, in a reality that’s trying to straddle both the physical and virtual worlds, they are highly relevant to how we’ll live our lives in the future.

The Italian Cooking Lesson

First, let’s try a little thought experiment.

In our town, the local Italian Club — in which both my wife and I are involved — offered cooking lessons before we were all locked down. Groups of eight to 12 people would get together with an exuberant Italian chef in a large commercial kitchen, and together they would make an authentic dish like gnocchi or ravioli. There was a little vino, a little Italian culture and a lot of laughter. These classes were a tremendous hit.

That all ended last March. But we hope to start offering them again late in 2021 or 2022. And, if we do, would it make sense to offer them as a “hybrid” event, where you can participate in person or pick up a box of preselected ingredients and follow along in your own kitchen?

As an event organizer, this would be tempting. You can still charge the full price for physical attendance, where you’re restricted to 12 people, but you could create an additional revenue stream by introducing a virtual option with no real cap on how many people could join. Even at a lower registration fee, it would still dramatically increase revenue at a relatively small incremental cost. It would be “molto” profitable.

But now consider this as an attendee. Would you sign up for a virtual event like that? If you had no other option to experience it, maybe. But what if you could actually be there in person? Then what? Would you feel relegated to a second-class experience by being isolated in your own kitchen, without many of the sensory benefits that go along with the physical experience?

The Psychology of Zoom Fatigue

When I thought about our cooking lesson example, I was feeling less than enthused. And I wondered why.

It turns out that there’s some actual brain science behind my digital ennui. In an article in the Psychiatric Times, Jena Lee, MD, takes us on a “Neuropsychological Exploration of Zoom Fatigue.”

A decade ago, I was writing a lot about how we balance risk and reward. I believe that a lot of our behaviors can be explained by how we calculate the dynamic tension between those two things. It turns out that it may also be at the root of how we feel about virtual events. Dr. Lee explains,

“A core psychological component of fatigue is a rewards-costs trade-off that happens in our minds unconsciously. Basically, at every level of behavior, a trade-off is made between the likely rewards versus costs of engaging in a certain activity.”

Let’s take our Italian cooking class again. Let’s imagine we’re there in person. For our brain, this would hit all the right “reward” buttons that come with being physically “in the moment.” Subconsciously, our brains would reward us by releasing oxytocin and dopamine along with other “pleasure” neurochemicals that would make the experience highly enjoyable for us. The cost/reward calculation would be heavily weighted toward “reward.”

But that’s not the case with the virtual event. Yes, it might still be considered “rewarding,” but on an entirely different — and lesser — scale than the same “in-person” experience. On top of that, we would have the added costs of figuring out the technology required, logging into the lesson and trying to follow along. Our risk/reward calculator just might decide the trade-offs weren’t worth it.
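To put some toy numbers on Dr. Lee's trade-off, here's a minimal sketch. Every value is invented for illustration; the point is only the shape of the calculation.

```python
# Toy rewards-vs-costs calculation for the cooking class.
# All values are invented for illustration.

def net_value(rewards: dict, costs: dict) -> float:
    """Net subjective value: total rewards minus total costs."""
    return sum(rewards.values()) - sum(costs.values())

in_person = net_value(
    rewards={"social_presence": 8, "sensory_experience": 7, "novelty": 5},
    costs={"travel": 3, "class_fee": 4},
)
virtual = net_value(
    rewards={"social_presence": 2, "sensory_experience": 2, "novelty": 5},
    costs={"tech_setup": 4, "class_fee": 2, "following_along": 3},
)
print(f"in-person: {in_person:+.0f}, virtual: {virtual:+.0f}")
# in-person: +13, virtual: +0
```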

Without me even knowing it, this was the calculation that was going on in my head that left me less than enthused.

But there is a flip side to this.

Reducing the Risk Virtually

Last fall, a new study from Oracle in the U.K. was published with the headline, “82% of People Believe Robots Can Support Their Mental Health Better than Humans.”

Something about that just didn’t seem right to me. How could this be? Again, we had the choice between virtual and physical connection, and this time the odds were overwhelmingly in favor of the virtual option.

But when I thought about it in terms of risk and reward, it suddenly made sense. Talking about our own mental health is a high-risk activity. It’s sad to say, but opening up to your manager about job-related stress could get you a sympathetic ear, or it could get you fired. We are taking baby steps towards destigmatizing mental health issues, but we’re at the beginning of a very long journey.

In this case, the risk/reward calculation is flipped completely around. Virtual connections, which rely on limited bandwidth — and therefore limited vulnerability on our part — seem like a much lower risk alternative than pouring our hearts out in person. This is especially true if we can remain anonymous.

It’s All About Human Hardware

The idea of virtual/physical hybrids with expanded revenue streams will be very attractive to marketers and event organizers. There will be many jumping on this bandwagon. But, like all the new opportunities that technology brings us, it has to interface with a system that has been around for hundreds of thousands of years — otherwise known as our brain.

Our Disappearing Attention Spans

Last week, MediaPost Editor in Chief Joe Mandese mused about our declining attention spans. He wrote,

“while in the past, the most common addictive analogy might have been opiates — as in an insatiable desire to want more — these days [consumers] seem more like speed freaks looking for the next fix.”

Mandese cited a couple of recent studies, saying that more than half of mobile users tend to abandon any website that takes longer than three seconds to load. That

“has huge implications for the entire media ecosystem — even TV and video — because consumers increasingly are accessing all forms of content and commerce via their mobile devices.”

The question that begs to be asked here is, “Is a short attention span a bad thing?” The famous comparison is that we are now more easily distracted than a goldfish. But does a shorter attention span negatively impact us, or is it just our brain changing to be a better fit with our environment?

Academics have been debating the impact of technology on our ability to cognitively process things for some time. Journalist Nicholas Carr sounded the warning in his 2010 book, “The Shallows,” where he wrote, 

“(Our brains are) very malleable, they adapt at the cellular level to whatever we happen to be doing. And so the more time we spend surfing, and skimming, and scanning … the more adept we become at that mode of thinking.”

Certainly, Carr is right about the plasticity of our brains. It’s one of the most advantageous features about them. But is our digital environment forever pushing our brains to the shallow end of the pool? Well, it depends. Context is important. One of the biggest factors in determining how we process the information we’re seeing is the device where we’re seeing it.

Back in 2010, Microsoft did a large-scale ethnographic study on how people searched for information on different devices. The researchers found those behaviors differed greatly depending on the platform being used and the intent of the searcher. They found three main categories of search behaviors (see the sketch after this list):

  • Missions are looking for one specific answer (for example, an address or phone number) and often happen on a mobile device.
  • Excavations are widespread searches that need to combine different types of information (for example, researching an upcoming trip or major purchase). They are usually launched on a desktop.
  • Finally, there are Explorations: searching for novelty, often to pass the time. These can happen on all types of devices and can often progress through different devices as the exploration evolves. The initial search may be launched on a mobile device, but as the user gets deeper into the exploration, she may switch to a desktop.
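As a rough illustration of how tightly these modes bind intent to device, here's a toy classifier. The keywords, thresholds and field names are my own assumptions, not anything from the Microsoft study.

```python
# Toy classifier for the three search modes above. Keywords and
# thresholds are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class SearchSession:
    query: str
    device: str          # "mobile" or "desktop"
    pages_visited: int   # how widely the searcher ranged

def classify(session: SearchSession) -> str:
    """Map a session onto Mission / Excavation / Exploration."""
    lookup_terms = ("address", "phone", "hours", "directions")
    if any(t in session.query for t in lookup_terms) and session.pages_visited <= 2:
        return "Mission"      # one specific answer, often on mobile
    if session.device == "desktop" and session.pages_visited > 5:
        return "Excavation"   # widespread, multi-source research
    return "Exploration"      # novelty-seeking, any device

print(classify(SearchSession("pizzeria phone number", "mobile", 1)))      # Mission
print(classify(SearchSession("hotels in lisbon reviews", "desktop", 9)))  # Excavation
print(classify(SearchSession("weird history facts", "tablet", 4)))        # Exploration
```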

The important thing about this research was that it showed our information-seeking behaviors are very tied to intent, which in turn determines the device used. So, at a surface level, we shouldn’t be too quick to extrapolate behaviors seen on mobile devices with certain intents to other platforms or other intents. We’re very good at matching a search strategy to the strengths and weaknesses of the device we’re using.

But at a deeper level, if Carr is right (and I believe he is) that our constant split-second scanning of information to find items of interest is making permanent changes in our brains, what are the implications?

For such a fundamentally important question, there is only a small (but rapidly growing) body of academic research that has tried to answer it. To add to the murkiness, many of the studies contradict each other. The best summary I could find of academia’s quest to determine whether “the Internet is making us stupid” was a 2015 article in the academic journal The Neuroscientist.

The authors sum up by essentially saying both “yes” — and “no.” We are getting better at quickly filtering through reams of information. We are spending fewer cognitive resources memorizing things we know we can easily find online, which theoretically leaves those resources free for other purposes. Finally, I’ll steer away from commenting on multitasking in this post, because the academic jury is still very much out on that one.

But the authors also say that 

“we are shifting towards a shallow mode of learning characterized by quick scanning, reduced contemplation and memory consolidation.”

The fact is, we are spending more and more of our time scanning and clicking. There are inherent benefits to us in learning how to do that faster and more efficiently. The human brain is built to adapt and become better at the things we do all the time. But there is a price to be paid. The brain will also become less capable of doing the things we don’t do as much anymore. As the authors said, this includes actually taking the time to think.

So, in answer to the question “Is the Internet making us stupid?,” I would say no. We are just becoming smart in a different way.

But I would also say the Internet is making us less thoughtful. And that brings up a rather worrying prospect.

As I’ve said many times before, the brain thinks both fast and slow. The fast loop is brutally efficient. It is built to get stuff done in a split second, without having to think about it. Because of this, the fast loop has to be driven by what we already know or think we know. Our “fast” behaviors are necessarily bounded by the beliefs we already hold. It’s this fast loop that’s in control when we’re scanning and clicking our way through our digital environments.

But it’s the slow loop that allows us to extend our thoughts beyond our beliefs. This is where we’ll find our “open minds,” if we have such a thing. Here, we can challenge our beliefs and, if presented with enough evidence to the contrary, willingly break them down and rebuild them to update our understanding of the world. In the sense-making loop, this is called reframing.

The more time we spend “thinking fast” at the expense of “thinking slow,” the more we will become prisoners to our existing beliefs. We will be less able to consolidate and consider information that lies beyond those boundaries. We will spend more time “parsing” and less time “pondering.” As we do so, our brains will shift and change accordingly.

Ironically, our minds will change in such a way to make it exceedingly difficult to change our minds.

The Potential Woes of Working from Home

Many of you now have a few months under your belt working virtually from home rather than going to the office. At least some of you are probably considering continuing to do so even after COVID recedes and the all-clear is given to return to normal. A virtual workplace makes all kinds of rational sense – both for employees and employers. But there are irrational reasons why you might want to think twice before you fully embrace going virtual.

About a decade ago, my company also went with a hybrid virtual/physical workplace. As the CEO, there was a lot I liked about it. It was a lot more economical than leasing more office space. It gave us the flexibility to recruit top talent in areas where we had no physical presence. And it seemed that technology was up to the task of providing the communication and work-flow tools we needed to support our virtual members.

On the whole, our virtual employees also seemed to like it. It gave them more flexibility in their workday. It also made it less formal. If you wanted to work in pajamas and bunny slippers, so be it. And with a customer base spread across many time zones, it also made it easier to shift client calls to times that were mutually acceptable.

It seemed to be a win-win. For a while. Then we noticed that all was not wonderful in work-from-home land.

I can’t say productivity declined. We were always a results-based workplace, so as long as the work got done, we were happy. But we started to feel a shift in our previously strong corporate culture. Team-member complaints about seemingly minor things skyrocketed. There was less cohesion across teams. Finally – and most critically – it started to impact our relationships with our customers.

Right about the time all this was happening, we were acquired by a much bigger company. One of the dictates handed down from the new owners was that we establish physical offices and bring our virtual employees back to the mothership for the majority of their workweek. At the time, I wasn’t fully aware of the negative consequences of going virtual, so I initially fought the decision. But to be honest, I was secretly happy. I knew something wasn’t quite right. I just wasn’t sure what it was. I suspected it might have been our new virtual team members.

The move back to a physical workplace was a tough one. Our virtual team members were very vocal about how this was a loss of their personal freedom. New HR fires were erupting daily and I spent much of my time fighting them. This, combined with the inevitable cultural consequences of being acquired, often made me shake my head in bewilderment. Life in our company was turning into a shit-show.

I wish I could say that after we all returned to the same workplace, we joined hands and sang a rousing chorus of Kumbaya. We didn’t. The damage had been done. Many of the disgruntled former virtual team members ended up moving on. The cultural core of the company remained with our original team members who had worked in the same office location for several years. I eventually completed my contract and went my own way.

I never fully determined what the culprit was. Was it our virtual team members? Or was it the fact that we embraced a virtual workplace without considering the unintended consequences? I suspected it was a little of both.

Like I said, that was a decade ago. From a rational perspective, all the benefits of a virtual workplace seem even more enticing than they did then. But in the last 10 years, there has been research done on those irrational factors that can lead to the cracks in a corporate culture that we experienced.

Mahdi Roghanizad is an organizational behavior specialist from Ryerson University in Toronto. He has long looked at the limitations of computerized communication. And his research provides a little more clarity into our failed experiment with a virtual workplace.

Roghanizad has found that without real-life contact, the parts of our brain that provide us with the connections needed to build trust never turn on. In order to build a true relationship with another person, we need something called the Theory of Mind. According to Wikipedia, “Theory of mind is necessary to understand that others have beliefs, desires, intentions, and perspectives that are different from one’s own.”

But unless we’re physically face-to-face with another person, our brain doesn’t engage in this critical activity. “Eye contact is required to activate that theory of mind and when the eye contact is not there, the whole other signal information is not processed by our brain,” said Roghanizad. Even wearing a pair of sunglasses is enough to short circuit the process. Relegating contact to a periodic Zoom call guarantees that this empathetic part of our brains will never kick in.

But it’s not just being eye-ball to eye-ball. There are other non-verbal cues we rely on to connect with other people and create a Theory of Mind. Other research has shown the importance of pheromones and physical gestures like crossing your arms and leaning forward or back. This is why we subconsciously start to physically imitate people we’re talking to. The stronger the connection with someone, the more we imitate them.

This all comes back to the importance of bandwidth in the real world. A digital connection cannot possibly incorporate all the nuance of a face-to-face connection. And whether we realize it or not, we rely on that bandwidth to understand other people. From that understanding comes the foundations of trusted relationships. And trusted relationships are the difference between a high-functioning work team and a dysfunctional one.

I wish I had known that ten years ago.

Deconstructing the Google/Facebook Duopoly

We’ve all heard about it. The Google/Facebook Duopoly. This was what I was going to write about last week before I got sidetracked. I’m back on track now (or, at least, somewhat back on track). So let’s start by understanding what a duopoly is…

…a situation in which two suppliers dominate the market for a commodity or service.

And this, from Wikipedia…

… In practice, the term is also used where two firms have dominant control over a market.

So, to have a duopoly, you need two things: domination and control. First, let’s deal with the domination question. In 2017, Google and Facebook together took a very healthy 59% slice of all digital ad revenues in the US: Google captured 38.6% of the total, and Facebook captured 20%. That certainly seems dominant. But if online marketing is the market, that is a very large basket with a lot of different items thrown in. So, let’s do a broad categorization to help deconstruct this a bit.

Typically, when I try to understand marketing, I like to start with humans – or, more specifically, with what that lump of grey matter we call a brain is doing. And if we’re talking about marketing, we’re talking about attention – how our brains are engaging with our environment. That is an interesting way to divide up the market we’re talking about, because it neatly bisects the attentional market, with Google on one side and Facebook on the other.

Google dominates the top-down, intent-driven, attentionally focused market. If you’re part of this market, you have something in mind and you’re trying to find it. If we use search as a proxy for this attentional state (and it’s the best proxy I can think of), we see just how dominant Google is. It owns this market to a huge degree. According to Statista, Google held about 87% of the total worldwide search market as of April 2018. The key metric here is success. Google needs to be the best way to fulfill those searches. And if market share is any indication, it is.

Facebook, apparently, dominates the bottom-up awareness market. These are people killing time online, not actively looking with commercial intent. This is more of an awareness play, where attention has to be diverted to an advertising message. Therefore, time spent becomes the key factor. You need to be in front of the right eyeballs, so you need a lot of eyeballs and a way to target the right ones.

Here is where things get interesting. If we look at share of consumer time, Google dominates as well. But there is a huge caveat, which I’ll get to in a second. According to a report this spring by Pivotal Research, Google owns just under 28% of all the time we spend consuming digital content. Facebook has just over a 16% share of this market. So why do we have a duopoly and not a monopoly? It’s because of that caveat – a whopping slice of Google’s “time spent” dominance comes from YouTube. And YouTube has an entirely different attentional profile – one that’s much harder to present advertising against. When you’re watching a video on YouTube, your attention is “locked” on the video. Disrupting that attention erodes the user experience. So Google has had a tough time monetizing YouTube.

According to Seeking Alpha, Google’s search ad business will account for 68% of its total revenue of $77 billion this year. That’s over $52 billion sitting in that “top-down,” attentionally focused bucket. YouTube, which is very much in the “bottom-up,” disruptive bucket, accounts for $12 billion in advertising revenues. Certainly nothing to sneeze at, but not on the same scale as Google’s search business. Facebook’s revenue, at about $36 billion, is generated by this same “bottom-up” market, but with a different attentional profile. The Facebook user is not as “locked in” as they are on YouTube. With the right targeting tools, something Facebook has excelled at, you have a decent chance of gaining their attention long enough for them to notice your ad.
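If you want to check that arithmetic, it takes only a few lines:

```python
# Back-of-envelope check on the revenue figures cited above.
total_google_revenue = 77.0   # $ billions (the Seeking Alpha figure)
search_share = 0.68           # share attributed to search ads

search_ads = total_google_revenue * search_share
print(f"Google search ads (top-down): ${search_ads:.1f}B")  # ~$52.4B

youtube_ads = 12.0            # $ billions, bottom-up bucket
facebook_ads = 36.0           # $ billions, also bottom-up
bottom_up = youtube_ads + facebook_ads
print(f"Bottom-up total:              ${bottom_up:.1f}B")   # $48.0B
```

In other words, by these cited figures, Google's top-down search business alone is bigger than the entire bottom-up bucket that YouTube and Facebook are fighting over.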

Control

If we look at the second part of the definition of a duopoly – that of control – we see some potential chinks in the armor of both Google and Facebook. Typically, market control came in the form of physical constraints on the competition. But in this new type of market, the control can only be in the minds of the users. The barriers to competitive entry are all defined in mental terms.

In Google’s case, they have a single line of defense: they have to be an unbreakable habit. Habits are mental scripts that depend on two things – obvious environmental cues that trigger the habitual behavior, and acceptable outcomes once the script completes. So, to maintain the habit, Google has to ensure that whatever environment you might be in when searching online for something, Google is just a click or two away. Additionally, it has to meet a certain threshold of success. Habits are tough to break, but those two dependencies (the cues and the outcomes) are also the two areas of vulnerability in Google’s dominance.

Facebook is a little different. They need to be addictive. This is a habit taken to the extreme. Addictions depend on pushing certain reward buttons in the brain that lead to an unhealthy behavioral script that becomes obsessive. The more addicted you are to Facebook and its properties, the more successful they will be in their dominance of the market. You can see the inherent contradiction here. Despite Facebook’s protests to the contrary, with their current revenue model they can only succeed at the expense of our mental health.

I find these things troubling. When you have two for-profit organizations fighting to dominate a market that is defined in our own minds, you have the potential for a lot of unhealthy corporate decisions.

Is Busy the New Alpha?

Imagine you’ve just been introduced into a new social situation. Your brain immediately starts creating a social hierarchy. That’s what we do. We try to identify the power players. The process by which we do this is interesting. The first thing we do is look for obvious cues. In a new job, that would be titles and positions. Then, the process becomes very Bayesian – we form a base understanding of the hierarchy almost immediately and then constantly update it as we gain more knowledge. We watch power struggles and update our hierarchy based on the winners and losers. We start assigning values to the people in this particular social network and, more importantly, start assessing our place in the network and our odds for ascending in the hierarchy.
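To make the Bayesian flavor of that loop concrete, here's a toy sketch. An Elo-style rating update isn't strictly Bayesian, but it captures the same rhythm: hold a prior, predict the winner, and revise in proportion to your surprise. The K factor and the starting ratings are arbitrary assumptions.

```python
# An Elo-style update standing in for the 'watch a power struggle,
# revise the hierarchy' loop. K and the starting ratings are assumptions.

def expected_win(rating_a: float, rating_b: float) -> float:
    """Predicted probability that A prevails over B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def observe_struggle(rating_a: float, rating_b: float,
                     a_won: bool, k: float = 32.0):
    """Update both parties' standing after one observed contest."""
    p_a = expected_win(rating_a, rating_b)
    outcome = 1.0 if a_won else 0.0
    delta = k * (outcome - p_a)     # bigger surprise, bigger revision
    return rating_a + delta, rating_b - delta

alice, bob = 1500.0, 1500.0         # flat prior: no cues yet
alice, bob = observe_struggle(alice, bob, a_won=True)
print(round(alice), round(bob))     # 1516 1484: a hierarchy starts to form
```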

All of that probably makes sense to you as you read it. There’s nothing really earth-shaking or counterintuitive. But what is interesting is that the cues we use to assign standings are context-dependent. They can also change over time. What’s more, they can vary from person to person or generation to generation.

In other words, like most things, our understanding of social hierarchy is in the midst of disruption.

An understanding of hierarchy appears to be hardwired into us. A recent study found that humans can determine social standing and the accumulation of power pretty much as soon as they can walk. Toddlers as young as 17 months could identify the alphas in a group. One of the authors of the study, University of Washington psychology professor Jessica Sommerville, said that even the very young can “see that someone who is more dominant gets more stuff.” That certainly squares with our understanding of how the world works. “More stuff” has been how we’ve determined social status for hundreds of years. In sociology, it’s called conspicuous consumption, a term coined by sociologist Thorstein Veblen. And it’s a signaling strategy that has evolved in humans over our recorded history. The more stuff we had, and the less we had to do to get that stuff, the more status we had. Just over a hundred years ago, Veblen called this the Leisure Class.

But today that appears to be changing. A recent study seems to indicate that we now associate busyness with status. Here, it’s time – not stuff – that is the scarce commodity. Social status signaling is more apt to involve complaining about how we never go on a vacation than about our “summer on the continent”.

At least, this seems to be true in the U.S. The researchers also ran their study in Italy and there the situation was reversed. Italians still love their lives of leisure. The U.S. is the only developed country in the world without a single legally required paid vacation day or holiday. In Italy, every employee is entitled to at least 32 paid days off per year.

In our world of marketing – which is acutely aware of social signaling – this could create some interesting shifts in messaging. I think we’re already seeing this. Campaigns aimed at busy people seem to equate scarcity of time with success. The one thing missing in all this social scrambling – whether it be conspicuous consumption or working yourself to death – might be happiness. Last year, a study out of the University of British Columbia found a strong link between happiness and valuing your time more than your money.

Maybe those Italians are on to something.

Our Brain on Reviews

There’s an interesting new study, just published, about how our brain mathematically handles online reviews, and I want to talk about it today. But before I get to that, I want to talk a bit about foraging.

The story of how science discovered our foraging behaviors serves as a mini lesson in how humans tick. The economists of the 1940s and ’50s built the world of microeconomics on the foundation that humans were perfectly rational – we were homo economicus. When making personal economic choices in a world of limited resources, we maximized utility. The economists of the time assumed this was a uniquely human property, bequeathed to us by virtue of the reasoning power of our superior brains.

In the ’60s, behavioral ecologists knocked our egos down a peg or two. It wasn’t just humans that could do this. Foxes could do it. Starlings could do it. Pretty much any species had the same ability to seemingly make optimal choices when faced with scarcity. It was how animals kept from starving to death. This was the birth of foraging theory. This wasn’t some Homo sapiens-exclusive behavior directed from the heights of rationality downwards. It was an evolved behavior built from the ground up. It’s just that humans had learned how to apply it to our abstract notion of economic utility.

Three decades later, two researchers at Xerox’s Palo Alto Research Center found another twist. Not only had our ability to forage evolved all the way up our extensive family tree, but we seemed to borrow this strategy and apply it to entirely new situations. Peter Pirolli and Stuart Card found that when humans navigate content in online environments, the exact same patterns could be found. We foraged for information. Those same calculations determined whether we would stay in an information “patch” or move on to more promising territory.

This seemed to indicate three surprising discoveries about our behavior:

  • Much of what we think is rational behavior is actually driven by instincts that have evolved over millions of years
  • We borrow strategies from one context and apply them in another. We use the same basic instincts to find the FAQ section of a website that we used to find sustenance on the savannah.
  • Our brains seem to use Bayesian logic to continuously calculate and update a model of the world. We rely on this model to survive in our environment, whatever and wherever that environment might be.

So that brings us to the study I mentioned at the beginning of this column. If we take the above into consideration, it should come as no surprise that our brain uses similar evolutionary strategies to process things like online reviews. But the way it does it is fascinating.

The amazing thing about the brain is how it seamlessly integrates and subconsciously synthesizes information and activity from different regions. For example, in foraging, the brain integrates information from the regions responsible for wayfinding – knowing our place in the world – with signals from the dorsal anterior cingulate cortex, an area responsible for reward monitoring and executive control. Essentially, the brain is constantly updating an algorithm about whether the effort required to travel to a new “patch” will be balanced by the reward we’ll find when we get there. You don’t consciously marshal the cognitive resources required to do this. The brain does it automatically. What’s more, the brain uses many of the same resources and the same algorithm whether we’re considering going to McDonald’s for a large order of fries or deciding what online destination would be the best bet for researching our upcoming trip to Portugal.
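Here's a minimal sketch of that stay-or-go calculation, in the spirit of the patch-leaving rule from foraging theory. The numbers are illustrative assumptions, not data from any study.

```python
# Patch-leaving rule: stay while the current 'patch' (a webpage, a berry
# bush) yields more per unit time than the environment's average rate.
# All numbers below are illustrative.

def should_leave(gain_in_patch: float, time_in_patch: float,
                 avg_env_rate: float) -> bool:
    """Leave when the patch's current rate of return drops below the
    long-run rate we could expect by moving on."""
    current_rate = gain_in_patch / time_in_patch
    return current_rate < avg_env_rate

# A page that gave us 6 useful snippets in the first minute: stay.
print(should_leave(gain_in_patch=6, time_in_patch=1, avg_env_rate=3))  # False
# Only 2 more snippets in the next 4 minutes of digging: move on.
print(should_leave(gain_in_patch=2, time_in_patch=4, avg_env_rate=3))  # True
```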

In evaluating online reviews, we have a different challenge: how reliable are the reviews? The context may be new – our ancestors didn’t have TripAdvisor or Airbnb ratings for choosing the right cave to sleep in tonight – but the problem isn’t. What criteria should we use when we decide to integrate social information into our decision-making process? If Thorlak the bear hunter tells me there’s a great cave a half-day’s march to the south, should I trust him? Experience has taught us a few handy rules of thumb when evaluating sources of social information: the reliability of the source and the consensus of crowds. Has Thorlak ever lied to us before? Do others in the tribe agree with him? These are hardwired social heuristics. We apply them instantly and instinctively to new sources of information that come from our social network. We’ve been doing it for thousands of years. So it should come as no surprise that we borrow these strategies when dealing with online reviews.

In a neuro-scanning study from University College London, researchers found that reliability plays a significant role in how our brains treat social information. Once again, a well-evolved capability of the brain is recruited to help us in a new situation. The dorsomedial prefrontal cortex is the area of the brain that keeps track of our social connections. This “social monitoring” ability worked in concert with the ventromedial prefrontal cortex, an area that processes value estimates.

The researchers found that this part of our brain works like a Bayesian computer when considering incoming information. First, we establish a “prior” that represents a model of what we believe to be true. Then we subject this prior to possible statistical updating based on new information – in this case, online reviews. If our confidence in this “prior” is high and the incoming information is weak, we tend to stick with our initial belief. But if our confidence is low and the incoming information is strong – i.e., a lot of positive reviews – then the brain overrides the prior and establishes a new belief, based primarily on the new information.
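A precision-weighted average is one simple way to caricature that arbitration in code. Here, confidence is expressed as precision (the inverse of variance), and every value is invented for illustration.

```python
# Precision-weighted belief update: the posterior leans toward whichever
# side (prior or new evidence) we are more confident in.

def update_belief(prior_mean: float, prior_precision: float,
                  evidence_mean: float, evidence_precision: float) -> float:
    """Combine a prior belief with new evidence, each weighted by
    how much confidence (precision) we place in it."""
    total = prior_precision + evidence_precision
    return (prior_precision * prior_mean +
            evidence_precision * evidence_mean) / total

# Strong prior ("I'm sure this hotel is mediocre"), weak reviews: barely moves.
print(update_belief(2.5, 10.0, 4.5, 1.0))   # ~2.68
# Weak prior, a flood of strong positive reviews: belief largely rebuilt.
print(update_belief(2.5, 1.0, 4.5, 10.0))   # ~4.32
```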

While this seems like common sense, the mechanisms at play are interesting. The brain effortlessly pattern matches new types of information and recruits the region that is most likely to have evolved to successfully interpret that information. In this case, the brain had decided that online reviews are most like information that comes from social sources. It combines the interpretation of this data with an algorithmic function that assigns value to the new information and calculates a new model – a new understanding of what we believe to be true. And it does all this “under the hood” – sitting just below the level of conscious thought.

Flow and the Machine

“In the future, either you’re going to be telling a machine what to do, or the machine is going to be telling you.”

Christopher Penn – VP of Marketing Technology, Shift Communications.

I often talk about the fallibility of the human brain – those irrational, cognitive biases that can cause us to miss the reality that’s right in front of our face. But there’s another side to the human brain – the intuitive, almost mystical machinations that happen when we’re on a cognitive roll, balancing gloriously on the edge between consciousness and subconsciousness. Malcolm Gladwell took a glancing shot at this in his mega-bestseller “Blink.” But I would recommend going right to the master of “flow” – Mihaly Csikszentmihalyi (pronounced, if you’re interested, me-hi Chick-sent-me-hi). The Hungarian psychologist coined the term “flow,” referring to a highly engaged mental state where we’re completely absorbed with the work at hand. Csikszentmihalyi calls it the “psychology of optimal experience.”

It turns out there’s a pretty complicated neuroscience behind flow. In a blog post, gamer Adam Sinicki describes a state where the brain finds an ideal balance between instinctive behavior and total focus on one task. The state is called transient hypofrontality. It can sometimes be brought on by physical exercise. It’s why some people can think better while walking, or even jogging. The brain juggles the resources required, and this can force a stepping down of the prefrontal cortex, the part of the brain that causes us to question ourselves. This part of the brain is required in unfamiliar circumstances, but in a situation where we’ve thoroughly rehearsed the actions required, it’s actually better if it takes a break. This allows other – more intuitive – parts of the brain to come to the fore. And that may be the secret of “flow.” It may also be the one thing that machines can’t replicate – yet.

The Rational Machine

If we were to compare the computer to a part of the brain, it would probably be the prefrontal cortex (PFC). When we talk about cognitive computing, what we’re really talking about is building a machine that can mimic – or exceed – the capabilities of the PFC. This is the home of our “executive function” – complex decision making, planning, rationalization and our own sense of self. It’s probably not a coincidence that the part of the brain we rely on to reason through complex challenges, like designing artificial intelligence, would build a machine in its own image. And in this instance, we’re damned close to surpassing ourselves. The PFC is an impressive chunk of neurobiology in its flexibility and power, but speedy it’s not. In fact, we’ve found that if we happen to make a mistake, the brain slows almost to a standstill. It shakes our confidence and kills any “flow” that might be happening in its tracks. This is what happens to athletes when they choke. With artificial intelligence, we are probably on the cusp of creating machines that can do most of what the PFC can do, only faster, more reliably and with the ability to process much more information.

But there’s a lot more to the brain than just the PFC. And it’s in this ethereal intersection between reason and intuition where the essence of being human might be hiding.

The Future of Flow

What if we could harness “flow” at will? What if we worked in partnership with a machine that could crunch data in real time and present us with the inputs required to continue our flow-fueled exploration, without the fear of making a mistake? It’s not so much a machine telling us what to do – or the reverse – as it is a partnership between human intuition and machine-based rationalization. It’s analogous to driving a modern car, where the intelligent safety and navigation features backstop our ability to drive.

Of course, it may just be a matter of time before machines best us in this area as well. Perhaps machines already have mastered flow because they don’t have to worry about the consequences of making a mistake. But it seems to me that if humans have a future, it’s not going to be in our ability to crunch data and rationalize. We’ll have to find something a little more magical to stake our claim with.

Why Our Brains are Blocking Ads

On MediaPost alone in the last three months, there have been 172 articles written that have included the words “ad blockers” or “ad blocking.” That’s not really surprising, given that MediaPost covers the advertising biz and ad blocking is killing that particular biz, to the tune of an estimated loss of $41 billion in 2016. eMarketer estimates 70 million Americans, or one out of every four people online, use ad blockers.

Paul Verna, an eMarketer senior analyst, said, “Ad blocking is a detriment to the entire advertising ecosystem, affecting mostly publishers, but also marketers, agencies and others whose businesses depend on ad revenue.” The UK’s Culture Secretary, John Whittingdale, went even further, saying that ad blocking is a “modern-day protection racket.”

Here’s the problem with all this finger pointing. If you’re looking for a culprit to blame, don’t look at the technology or the companies deploying that technology. New technologies don’t cause us to change our behaviors – they enable behaviors that weren’t an option before. To get to the bottom of the growth of ad blocking, we have to go to the common denominator – the people those ads are aimed at. More specifically, we have to look at what’s happening in the brains of those people.

In the past, the majority of our interaction with advertising was done while our brain was idling, with no specific task in mind. I refer to this as bottom-up environmental scanning. Essentially, we were looking for something to capture our attention: a TV show, a book, a magazine article, a newspaper column. We were open to being engaged by stimuli from our environment (in other words, being activated from the “bottom up”).

In this mode, the brain is in a very accepting state. We match signals from our environment with concepts and beliefs we hold in our mind. We’re relatively open to input, and if the mental association is a positive or intriguing one, we’re willing to spend some time to engage.

We also have to consider the effect of priming in this state. Priming sets a subconscious framework for the brain that then affects any subsequent mental processing. The traditional prime that was in place when we were exposed to advertising was a fairly benign one: we were looking to be entertained or informed, the advertising content was often delivered wrapped in a content package we had an affinity for (our favorite show, a preferred newspaper, etc.), and advertising came in discrete chunks that our brain had been trained to identify and process accordingly.

All this means that in traditional exposures to ads, our brain was probably in the most accepting state possible. We were looking for something interesting, we were primed to be in a positive frame of mind, and our brains could easily handle the contextual switches required to consider an ad and its message.

We also have to remember that we had a relatively static ad consumption environment that usually matched our expectations of how ads would be delivered. We expected commercial breaks in TV shows. We didn’t expect ads in the middle of a movie or book, two formats that required extended focusing of attention and didn’t lend themselves to mental contextual task switches. Each task switch brings with it a refocusing of attention and a brief burst of heightened awareness as our brain is forced to reassess its environment. These are fine in some environments – not in others.

Now, let’s look at the difference in cognitive contexts that accompany the delivery of most digital ads. First of all, when we’re online on our desktop or engaged with a mobile device, it’s generally in what I’ll call a “top-down foraging” mode. We’re looking for something specific and we have intent in mind. This means there’s already a task lodged in our working memory (hence “top down”), and our attentional spotlight is on and focused on that task. This creates a very different environment for ad consumption.

When we’re in foraging mode, we suddenly are driven by an instinct that is as old as the human race (actually, much older than that): Optimal Foraging Theory. In this mode, we are constantly filtering the stimuli of our environment to see what is relevant to our intent. It’s this filtering that causes attentional blindness to non-relevant factors – whether they be advertising banners or people dressed up like gorillas. This filtering happens on a subconscious basis and the brain uses a primal engine to drive it – the promise of reward or the frustration of failure. When it comes to foraging – for food or for information – frustration is a feature, not a bug.

Our brains have a two-loop learning process. It starts with a prediction – what psychologists and economists call “expected utility.” We mentally place bets on possible outcomes and go with the one that promises the best reward. If we’re right, the reward system of the brain gives us a shot of dopamine. Things are good. But if we bet wrong, a different part of the brain kicks in: the right anterior insula, the adjacent right ventral prefrontal cortex and the anterior cingulate cortex. Those are the centers of the brain that regulate pain. Nature is not subtle about these things – especially when the survival of the species depends on it. If we find what we’re looking for, we get a natural high. If we don’t, it actually causes us pain – but not in a physical way. We know it as frustration. Its purpose is to encourage us not to make the same mistake twice.
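For the computationally inclined, here's a toy version of that two-loop process, written as a classic prediction-error update. The learning rate and the values are my own assumptions.

```python
# Reward-prediction-error update: a hit nudges the expectation up
# (the dopamine shot); a miss produces a negative error, the
# computational face of frustration. The learning rate is an assumption.

def update_expectation(expected: float, actual: float,
                       lr: float = 0.3) -> float:
    error = actual - expected    # positive: reward; negative: frustration
    return expected + lr * error

expectation = 0.8                # our bet: this page will have what we want
for actual in (0.0, 0.0, 0.0):   # three irrelevant-ad experiences in a row
    expectation = update_expectation(expectation, actual)
    print(f"expectation now {expectation:.2f}")
# 0.56, 0.39, 0.27: a few painful misses and we stop looking at ads at all
```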

The reason we’re blocking ads is that in the context those ads are being delivered, irrelevant ads are – quite literally – painful. Even relevant ads have a very high threshold to get over. Ad blocking has little to do with technology or “protection rackets” or predatory business practices. It has to do with the hardwiring of our brains. So if the media or the ad industry want to blame something or someone, let’s start there.