Is Busy the New Alpha?

Imagine you’ve just been dropped into a new social situation. Your brain immediately starts creating a social hierarchy. That’s what we do. We try to identify the power players. The process by which we do this is interesting. The first thing we do is look for obvious cues. In a new job, that would be titles and positions. Then the process becomes very Bayesian – we form a base understanding of the hierarchy almost immediately and then constantly update it as we gain more knowledge. We watch power struggles and update our hierarchy based on the winners and losers. We start assigning values to the people in this particular social network and, more importantly, start assessing our place in the network and our odds of ascending in the hierarchy.

All of that probably makes sense to you as you read it. There’s nothing really earth-shaking or counterintuitive. But what is interesting is that the cues we use to assign standing are context-dependent. They can also change over time. What’s more, they can vary from person to person or generation to generation.

In other words, like most things, our understanding of social hierarchy is in the midst of disruption.

An understanding of hierarchy appears to be hardwired into us. A recent study found that humans can determine social standing and the accumulation of power pretty much as soon as they can walk. Toddlers as young as 17 months could identify the alphas in a group. One of the authors of the study, University of Washington psychology professor Jessica Sommerville, said that even the very young can “see that someone who is more dominant gets more stuff.” That certainly squares with our understanding of how the world works. “More stuff” has been how we’ve determined social status for hundreds of years. In sociology, it’s called conspicuous consumption, a term coined by sociologist Thorstein Veblen. And it’s a signaling strategy that has evolved in humans over our recorded history. The more stuff we had, and the less we had to do to get that stuff, the more status we had. Just over a hundred years ago, Veblen called this the Leisure Class.

But today that appears to be changing. A recent study seems to indicate that we now associate busyness with status. Here, it’s time – not stuff – that is the scarce commodity. Social status signaling is more apt to involve complaining about how we never go on a vacation than about our “summer on the continent”.

At least, this seems to be true in the U.S. The researchers also ran their study in Italy and there the situation was reversed. Italians still love their lives of leisure. The U.S. is the only developed country in the world without a single legally required paid vacation day or holiday. In Italy, every employee is entitled to at least 32 paid days off per year.

In our world of marketing – which is acutely aware of social signaling – this could create some interesting shifts in messaging. I think we’re already seeing this. Campaigns aimed at busy people seem to equate scarcity of time with success. The one thing missing in all this social scrambling – whether it be conspicuous consumption or working yourself to death – might be happiness. Last year, a study out of the University of British Columbia found a strong link between happiness and valuing time more than money.

Maybe those Italians are on to something.

Curmudgeon, Chicken Little or Cognoscenti?

Apparently I’m old and out of step. Curmudgeonly, even. And this is from people of my own generation. My previous column about the potential shallowness encouraged by social media drew a few comments that indicated I was just being a grumpy old man. One was from an old industry friend – Brett Tabke:

“The rest of the article is like out of the 70’s in that it is devoid of the reality that is the uber-me generation. The selfie is only a reflection of their inward focus.”

The other was from Monica Emrich, whom I’ve never had the pleasure of meeting:

“ ’Social Media Is Barely Skin-Deep.’ ho hum. History shows: when new medium hits, civilization as we know it is over.”

These comments seem to be telling me, “Relax. You just don’t understand because you’re too old. Everything will be great.” And if that’s true, I’d be okay with that. I’m more than willing to be proven a doddering old fool if it means technology is ushering us into a new era of human greatness.

But what if this time is different? What if Monica’s facetious comment actually nailed it? Maybe civilization as we know it will be over. The important part of this is “as we know it.” Every technological disruption unleashes a wave of creative destruction that pushes civilization in a new direction. We seem to blindly assume it will always go in the right direction. And it is true that technology has generally elevated the human race. But not uniformly – and not consistently. What if this shift is different? What if we become less than what we were? It can happen. Brexit, xenophobia, Trump, populism – all these things are surfing on the tides of new technology.

Here’s the problem. There are some aspects of technology that we’ve never had to deal with before – at least, not at this scale. One of these aspects (others will no doubt be the topic of a future Media Insider) is that technology is now immersive and ubiquitous. It creates an alternate reality for us, and it has done so in a few short decades. Why is this dangerous? It’s dangerous because evolution has not equipped us to deal with this new reality. In the past, when there has been a shift in our physical reality, it has taken place over several generations. Natural selection had the time to reshape the human genome to survive and eventually thrive in the new reality. Along the way, we acquired checks and balances that would allow us to deal with the potentially negative impacts of the environment.

But our new reality is different. It has happened in the space of a single generation. There is no way we could have acquired natural defenses against it. We are operating in an environment we have never been tested for. The consequences are yet to be discovered.

Now, your response might be to say, “Yes, evolution doesn’t move this quickly, but our brains can. They are elastic and malleable.” This is true, but there’s a big “but” hidden in this approach. Our brains rewire themselves to better match their environment. This is one of the things humans excel at. But this rewiring happens on top of a primitive platform with some built-in limitations. The assumption is that a better match with our environment provides a better chance for survival of the species.

But what if technology is throwing us a curve ball in this case? No matter what environment we have adapted to, there has been one constant: the history of humans depends on our success in living together. We have evolved to be social animals, but that evolution is predicated on the assumption that our socializing would take place face-to-face. Technology is artificially decoupling our social interactions from the very definition of society that we have evolved to handle. A recent Wharton interview with Eden Collinsworth sounds the same alarm bells.

“The frontal lobes, which are the part of the brain that puts things in perspective and allows you to be empathetic, are constantly evolving. But it is less likely to evolve and develop those skills if you are in front of a screen. In other words, those skills come into play when you have a face-to-face interaction with someone. You can observe facial gestures. You can hear the intonation of a voice. You’re more likely to behave moderately in that exchange, unless it’s a just a knock-down, drag-out fight.”

Collinsworth’s premise – which is covered in her new book, Behaving Badly – is that this artificial reality is changing our concepts of morality and ethics. She reminds us the two are interlinked, but they are not the same thing. Morality is our own personal code of conduct. Ethics are a shared code that society depends on to instill a general sense of fairness. Collinsworth believes both are largely learned from the context of our culture. And she worries that a culture that is decoupled from the physical reality we have evolved to operate in may have dire consequences.

The fact is that if our morality and ethics are intended to keep us socially more cohesive, this works best in a face-to-face context. In an extreme example of this, Lt. Col. Dave Grossman, a former paratrooper and professor of psychology at West Point, showed how our resistance to killing another human in combat is inversely related to our physical distance from them. The closer we are to them, the more resistant we are to the idea of killing them. This makes sense in an evolutionary environment where all combat was hand-to-hand. But today, the killer could be in a drone flight control center thousands of miles from his or her intended target.

This evolved constraint on unethical behavior – the social check and balance of being physically close to the people we’re engaging with – is important. And while the two examples I’ve cited – the self-absorbed behavior on social networks and the moral landscape of a drone strike operator – may seem magnitudes apart in terms of culpability, the underlying neural machinery is related. What we believe is right and wrong is determined by a moral compass set to the bearings of our environment. The fundamental workings of that compass assumed we would be face-to-face with the people we have to deal with. But thanks to technology, that’s no longer the case.

Maybe Brett and Monica are right. Maybe I’m just being alarmist. But if not, we’d better start paying more attention. Because civilization “as we know it” may be ending.

 

Social Media is Barely Skin Deep

Here’s a troubling fact. According to a study from the Georgia Institute of Technology, half of all selfies taken have one purpose: to show how good the subject looks. They are intended to show the world how attractive we are: our makeup, our clothes, our shoes, our lips, our hair. The category accounts for more selfies than all other categories combined. More than selfies taken with people or pets we love, more than us doing the things we love, more than being in the places we love, more than eating the food we love. It appears that the one thing we love the most is ourselves. The selfies have spoken.

In this study, the authors reference a 1956 work from sociologist Erving Goffman – The Presentation of Self in Everyday Life. Goffman took Shakespeare’s line – “All the world’s a stage, and all the men and women merely players” – quite literally. His theory was that we are all playing the part of the person we want to be perceived as. Our lives are divided into two parts: the “front,” when we’re on stage and playing our part, and the “back,” when we prepare for our role. The roles we play depend on the context we’re in.

 

Goffman’s theory introduces an interesting variable into consideration. The way we play these roles and the importance we place on them will vary with the individual. For some of us, it will be all about the role and less about the actual person who inhabits that role. These people are obsessed with how they are perceived by others. They’re the ones snapping selfies to show the world just how marvelous they look.

Others care little about what the world thinks of them. They are internally centered and are focused on living their lives, rather than acting their way through their lives for the entertainment of – and validation from – others. Between the two extremes is the ubiquitous bell curve of normal distribution. Most of us live somewhere on that curve.

Goffman’s theory was created specifically to provide insight into face-to-face encounters. Technology has again thrown a gigantic wrinkle into things – and that wrinkle may explain why we keep taking those narcissistic selfies.

Humans are pretty damned good at judging authenticity in a face-to-face setting. We pick up subtle cues across a wide swath of interpersonal communication channels: vocal intonations, body language, eye contact, micro-expressions. Together, these inputs give us a pretty accurate “bullshit detector.” If someone comes across as an inauthentic “phony,” most of us will just roll our eyes and start avoiding the person. In face-to-face encounters there is a social feedback mechanism that keeps the “actors” amongst us at least somewhat honest in order to remain part of the social network that forms their audience.

But social media platforms provide the ideal incubator for inauthentic presentations of our personas. There are three factors in particular that allow shallow “actors” to flourish – even to the point of going viral.

False Intimacy and Social Distance

In his blog on Psychology Today, counselor Michael Formica talks about two of these factors – social distance and false intimacy. I’ve talked about false intimacy before in another context – the “labelability” of celebrities. Social media removes the transactional costs of retaining a relationship. This has the unfortunate side effect of screwing up the brain’s natural defenses against inauthentic relationships. When we’re physically close to a person, there are no filters for the bad stuff. We get it all. Our brains have evolved to do a cost/benefit analysis of each relationship we have and decide whether it’s worth the effort to maintain it. This works well when we depend on physically proximate relationships for our own well-being.

But social media introduces a whole new context for maintaining social relationships. When the transactional costs are reduced to a scanning of a newsfeed and hitting the “Like” button, the brain says “What the hell, let’s add them to our mental friends list. It’s not costing me anything.” In evolutionary terms, intimacy is the highest status we can give to a relationship and it typically only comes with a thorough understanding of the good and the bad involved in that relationship by being close to the person – both physically and figuratively. With zero relational friction, we’re more apt to afford intimacy, whether or not it’s been earned.

The Illusion of Acceptance

The previous two factors perfectly set the “stage” for false personas to flourish, but it’s the third factor that allows them to go viral. Every actor craves acceptance from his or her audience. Social exclusion is the worst fate imaginable for them. In a face-to-face world, our mental cost/benefit algorithm quickly weeds out false relationships that are not worth the investment of our social resources. But that’s not true online. If it costs us nothing, we may be rolling our eyes – safely removed behind our screen – as we’re also hitting the “Like” button. And shallow people are quite content with shallow forms of acceptance. A Facebook like is more than sufficient to encourage them to continue their act. To make it even more seductive, social acceptance is now measurable – there are hard numbers assigned to popularity.

This is pure cat-nip to the socially needy. Their need to craft a popular – but entirely inauthentic – persona goes into overdrive. Their lives are not lived so much as manufactured to create a veneer just thick enough to capture a quick click of approval. Increasingly, they retreat to an online world that follows the script they’ve written for themselves.

Suddenly it makes sense why we keep taking all those selfies of ourselves. When all the world’s a stage, you need a good head shot.

Our Brain on Reviews

An interesting new study was just published about how our brain mathematically handles online reviews, and I want to talk about it today. But before I get to that, I want to talk about foraging a bit.

The story of how science discovered our foraging behaviors serves as a mini lesson in how humans tick. The economists of the 1940s and ’50s built the world of microeconomics on the foundation that humans were perfectly rational – we were Homo economicus. When making personal economic choices in a world of limited resources, we maximized utility. The economists of the time assumed this was a uniquely human property, bequeathed to us by virtue of the reasoning power of our superior brains.

In the ’60s, behavioral ecologists knocked our egos down a peg or two. It wasn’t just humans who could do this. Foxes could do it. Starlings could do it. Pretty much any species had the same ability to seemingly make optimal choices when faced with scarcity. It was how animals kept from starving to death. This was the birth of foraging theory. This wasn’t some exclusively human behavior directed from the heights of rationality downwards. It was an evolved behavior that was built from the ground up. It’s just that humans had learned how to apply it to our abstract notion of economic utility.

Three decades later, two researchers at Xerox’s Palo Alto Research Center found another twist. Not only had our ability to forage evolved all the way up through our extensive family tree, but we seemed to borrow this strategy and apply it to entirely new situations. Peter Pirolli and Stuart Card found that when humans navigate content in online environments, the exact same patterns could be found. We forage for information, and the same calculations determine whether we stay in an information “patch” or move on to more promising territory.

This seemed to indicate three surprising discoveries about our behavior:

  • Much of what we think is rational behavior is actually driven by instincts that have evolved over millions of years
  • We borrow strategies from one context and apply them in another. We use the same basic instincts to find the FAQ section of a website that we used to find sustenance on the savannah.
  • Our brains seem to use Bayesian logic to continuously calculate and update a model of the world. We rely on this model to survive in our environment, whatever and wherever that environment might be.

So that brings us to the study I mentioned at the beginning of this column. If we take the above into consideration, it should come as no surprise that our brain uses similar evolutionary strategies to process things like online reviews. But the way it does it is fascinating.

The amazing thing about the brain is how it seamlessly integrates and subconsciously synthesizes information and activity from different regions. For example, in foraging, the brain integrates information from the regions responsible for wayfinding – knowing our place in the world – with signals from the dorsal anterior cingulate cortex, an area responsible for reward monitoring and executive control. Essentially, the brain is constantly updating an algorithm about whether the effort required to travel to a new “patch” will be balanced by the reward we’ll find when we get there. You don’t consciously marshal the cognitive resources required to do this. The brain does it automatically. What’s more, the brain uses many of the same resources and algorithms whether we’re considering going to McDonald’s for a large order of fries or deciding what online destination would be the best bet for researching our upcoming trip to Portugal.
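If you like to think in code, that stay-or-go calculation can be sketched in a few lines. This is a deliberately crude Python illustration of my own – not the model used in the research – and every name and number in it is invented:

    # A toy stay-or-leave calculation in the spirit of foraging theory.
    # Illustrative simplification only; not the researchers' model.

    def should_move_on(current_reward_rate, expected_rate_elsewhere,
                       travel_cost, time_remaining):
        """Return True if leaving the current 'patch' looks worthwhile.

        Staying earns the current (usually diminishing) reward rate.
        Leaving pays a one-time travel cost, then earns the expected
        rate at the new patch for the time that remains.
        """
        value_of_staying = current_reward_rate * time_remaining
        value_of_leaving = expected_rate_elsewhere * time_remaining - travel_cost
        return value_of_leaving > value_of_staying

    # The same arithmetic applies to fries and to travel research:
    print(should_move_on(current_reward_rate=2.0, expected_rate_elsewhere=3.5,
                         travel_cost=5.0, time_remaining=10))  # True - worth moving

The arithmetic isn’t the point; the point is that the brain runs this kind of cost/benefit comparison constantly, and entirely below the level of conscious thought.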

In evaluating online reviews, we have a different challenge: how reliable are the reviews? The context may be new – our ancestors didn’t have TripAdvisor or Airbnb ratings for choosing the right cave to sleep in tonight – but the problem isn’t. What criteria should we use when we decide to integrate social information into our decision-making process? If Thorlak the bear hunter tells me there’s a great cave a half-day’s march to the south, should I trust him? Experience has taught us a few handy rules of thumb when evaluating sources of social information: the reliability of the source and the consensus of the crowd. Has Thorlak ever lied to us before? Do others in the tribe agree with him? These are hardwired social heuristics. We apply them instantly and instinctively to new sources of information that come from our social network. We’ve been doing it for thousands of years. So it should come as no surprise that we borrow these strategies when dealing with online reviews.

In a neuro-scanning study from University College London, researchers found that reliability plays a significant role in how our brains treat social information. Once again, a well-evolved capability of the brain is recruited to help us in a new situation. The dorsomedial prefrontal cortex is the area of the brain that keeps track of our social connections. This “social monitoring” ability worked in concert with the ventromedial prefrontal cortex, an area that processes value estimates.

The researchers found that this part of our brain works like a Bayesian computer when considering incoming information. First we establish a “prior” that represents a model of what we believe to be true. Then we subject this prior to possible statistical updating based on new information – in this case, online reviews. If our confidence is high in this “prior” and the incoming information is weak, we tend to stick with our initial belief. But if our confidence is low and the incoming information is strong – i.e. a lot of positive reviews – then the brain overrides the prior and establishes a new belief, based primarily on the new information.
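For the code-minded, here’s a minimal sketch of that kind of confidence-weighted updating. Again, this is my own Python illustration of the general idea, not the researchers’ actual model, and the star ratings and weights are invented:

    # A confidence-weighted Bayesian update of a belief - say, about a hotel's
    # quality. Illustrative sketch only; not the model from the study.

    def update_belief(prior_mean, prior_confidence, review_mean, review_confidence):
        """Combine an existing belief with new review evidence.

        Each estimate is weighted by its confidence. A strong prior plus
        weak evidence leaves the belief largely unchanged; a weak prior
        plus strong evidence lets the reviews take over.
        """
        total_confidence = prior_confidence + review_confidence
        posterior_mean = (prior_confidence * prior_mean +
                          review_confidence * review_mean) / total_confidence
        return posterior_mean, total_confidence

    # Weak prior (2 stars) meets a flood of glowing reviews (4.5 stars):
    print(update_belief(2.0, 1.0, 4.5, 20.0))   # belief jumps toward 4.5
    # Strong prior (2 stars) meets a handful of reviews:
    print(update_belief(2.0, 20.0, 4.5, 1.0))   # belief barely moves

Substitute the number and consistency of reviews for “confidence” and you have, roughly, the updating behavior the researchers describe.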

While this seems like common sense, the mechanisms at play are interesting. The brain effortlessly pattern-matches new types of information and recruits the region that is most likely to have evolved to successfully interpret that information. In this case, the brain has decided that online reviews are most like information that comes from social sources. It combines the interpretation of this data with an algorithmic function that assigns value to the new information and calculates a new model – a new understanding of what we believe to be true. And it does all this “under the hood” – sitting just below the level of conscious thought.

How I Cleared a Room Full of Marketing Techies

Was it me?

Was it something I said?

I don’t think so. I think it was just that I was talking about B2B.

Let me explain.

Last week, I was in San Francisco speaking at a marketing technology conference. My session, in which I was a co-presenter, was going to be about psychographic profiling and A.I. in B2B marketing. It was supposed to start immediately after another session on “cognitive marketing.” During this prior session, I decided to stand at the back of the room so I didn’t take up a seat.

That proved to be a mistake. During the session, which was in one of three tracks running at the time, the medium-sized room filled to standing-room-only capacity. The presenter talked about how machine learning – delivered via IBM’s Watson, Google’s DeepMind or Amazon’s cloud AI solutions – is going to change marketing and, along with it, the job of a human marketer.

I found it interesting. The audience seemed to think so as well. The presenter wrapped up, the moderator got up to thank him and introduce me as the next presenter – and about 60% of the room stood as one and headed for the exit, creating a solid human wall between me and the stage. It took me – the fish – about 5 minutes of proverbially and physically swimming upstream before I could get to the stage. It wasn’t the smoothest of transitions.

I tend to take these things personally. But I honestly don’t think it was me. I think it was the fact that “B2B” was in the title of my presentation. I have found that as soon as you slap that label on anything, marketers tend to swarm in the opposite direction. If there is a B2B track at a general marketing show, you can bet your authentic Adam West Batman action figure (not that I would have any such thing) that it’s tucked away in some far-off corner of the conference center, down three flights of escalators, where you turn right and head towards the parking garage. My experience at this past show was analogous to the lot of B2B marketing in general. Whenever we start talking about it, people start heading for the door.

I don’t get it.

It’s not a question of budget. Even in terms of marketing dollars, a lot of budget gets allocated to B2B. An Outsell report for 2016 pegged the total US B2B marketing spend at about $151 billion. That compares respectably with the total consumer ad spend of $192 billion, according to eMarketer.

And it’s definitely not a question of market size. It’s very difficult to size the entire B2B market, but there’s no doubt that it’s huge. A Forrester report estimates that $8 trillion was sold in the US B2B retail space in 2014. That’s almost half of the US gross domestic product that year. And a huge swath of the business is happening online. The worldwide B2B eCommerce market is projected to be $6.7 trillion by 2020. That’s twice as big as the projected online B2C market ($3.2 trillion).

So what gives? B2B is showing us the money. Why are we not showing it any love? Just digging up the background research for this column proved to be painful. Consumer spend and marketing dollar numbers come gushing off the page of even a half-assed Google search. But B2B stats? Cue the crickets.

I have come to the conclusion that it’s just lack of attention, which probably comes from a lack of sex appeal. B2B is like the debate club in high school. While everyone goes gaga during school assemblies over the cheerleading squad and the football team, the people who will one day rule the world quietly gather after class with Mr. Tilman in the biology lab to plot their debate strategy for next week’s match up against J.R. Matheson Senior High. It goes without saying that parents will be the only ones who actually show up. And even some of them will probably have to stay home to cut the grass.

Those debaters will probably all grow up to be B2B marketers.

It may also be that B2B marketing is hard. Like – juggling Rubik’s Cubes while simultaneously solving them – hard. At least, it’s hard if you dare to go past the “get a lead and hound them mercilessly until they either move to another country or give in and buy something to get you off their back” school of marketing. If you try to do something as silly as predicting purchase behaviors, you have the problem of compound complexity. We have been trying for some time, with limited success, to predict a single consumer’s behavior. In B2B, you have to predict what might happen when you assemble a team of potential buyers – each with their own agenda, emotions and varying degrees of input – and ask them to come to a consensus on an organizational buying decision.

That can make your brain hurt. It’s a wicked problem to the power of 5.4 (the average number of buyers involved in a B2B buying decision, according to CEB’s research). It’s the Inconvenient Truth of Marketing.

That, I keep telling myself, is why everyone was rushing for the door the minute I started walking to the stage. I shouldn’t take it personally.

Shopping is Dead. Long Live Shopping!

Last week, a delivery truck pulled up in my driveway. As the rear door rolled up, I saw the truck was full of Amazon parcels, including one for me. Between the four of us who live in our house, we have at least one online purchase delivered each week. When compared to the total retail spending we do, perhaps that’s not all that significant, but it’s a heck of a lot more than we used to spend.

We are a microcosm of a much bigger behavioral trend. A recent MediaPost article by Jack Loechner reported that online retail grew by 15.6 percent last year and now represents 11.7 percent of total retail sales. An IRI report shows similar trends in consumer packaged goods. In 2015, ecommerce represented just 1.5% of all consumer packaged goods sales, but IRI projects that to climb to 10% by 2022. In fueling that increase, Amazon is not only leading the pack but dominating it to an awe-inspiring extent. Between 2010 and last year, Amazon’s sales in North America quintupled from $16 billion to $80 billion. Hence all those packages in the back of the aforementioned truck.

Now, maybe all this still represents “small potatoes” in the total world of retail, but I think we’re getting close to an inflection point. We are fundamentally changing how we think of shopping, and once we let that demon out of the box (or bubble-wrapped envelope), there is no stuffing it back.

In the nascent days of online shopping, way back in 2001, an academic study looked at the experience of shopping online. The authors, Childers, Carr, Peck and Carson, divided the experience into two aspects: hedonic and utilitarian. I’ll deal with both in that order.

First of all, the hedonic side of shopping – the touchy-feely joy of buying stuff. It’s mainly the hedonic aspects that purportedly hold up the shaky foundations of all those bricks and mortar stores. And I wonder – is that a generational thing? People of my generation and older still seem to like a little retail therapy now and again. But for my daughters, the act of physically shopping is generally a pain in the ass. If they can get what they want online, they’ll do so with the click of a OneClick button. They’ll visit a mall only if they have to.

In an article early this year in The Atlantic, Derek Thompson detailed the decimation of traditional retail. Mall visits declined 50 percent between 2010 and 2013, according to the real-estate research firm Cushman & Wakefield, and they’ve kept falling every year since. Retailers are declaring bankruptcy at alarming rates. Thompson points the finger at online shopping, but adds a little more context. Maybe the reason bricks and mortar retail is bleeding so badly is that it represents an experience that is no longer appealing. A quote from that article raises an interesting point:

“… ‘What experience will reliably deliver the most popular Instagram post?’ – really drive the behavior of people ages 13 and up. This is a big deal for malls, says Barbara Byrne Denham, a senior economist at Reis, a real-estate analytics firm.”

Malls were designed to provide an experience – to the point of ludicrous overkill in mega-malls like Canada’s West Edmonton Mall or Minnesota’s Mall of America. But increasingly, those aren’t the experiences we’re looking for. We’re still hedonistic, but our hedonism has developed different tastes. Things like travel and dining out with friends are booming, especially with younger generations. As Denham points out, our social barometers are not determined so much by what we have as by what we’re doing and whom we’re doing it with. Social proof of such things is just one quick post away.

Now let’s deal with the utilitarian aspects of shopping. According to a recent Harris Poll, the three most popular categories for online shopping are:

  1. Clothing and Shoes
  2. Beauty and Personal Care Products
  3. Food Items

Personally, when I look at the things I’ve recently ordered online, they include:

  • A barbecue
  • Storage shelves
  • Water filters for my refrigerator
  • A pair of sports headphones
  • Cycling accessories

I ordered these things online because (respectively):

  • They were heavy and I didn’t want the hassle of dragging them home from the store; and/or,
  • They probably wouldn’t have what I was looking for at any stores in my area.

But even if we look beyond these two very good reasons to buy online, “etail” is just that much easier. It’s generally cheaper, faster and more convenient. We have a long, long tail of things to look for, the advantage of objective reviews to help filter our buying and an average shopping trip duration of just a few minutes – start to finish – as opposed to a few hours or half a day. Finally, we don’t have to contend with assholes in the parking lot.

Online already wins on almost every aspect and the delta of “surprise and delight” is just going to keep getting bigger. Mobile devices untether buying from the desktop, so we can do it any place, any time. Voice commands can save our tender fingertips from unnecessary typing and clicking. Storefronts continue to get better as online retailers run bushels of UX tests to continually tweak the buying journey.

But what’s that you say? “There are just some things you have to see and touch before you buy”? Perhaps, although I personally remain unconvinced about the need for tactile feedback when shopping. People are buying cars online, and if ever there was a candidate for hedonism, it’s an automobile. But let’s say you’re right. I already wrote about how Amazon is changing the bricks and mortar retail game. But Derek Thompson casts his crystal-ball gaze even further into the future when he speculates on what autonomous vehicles might do for retail:

“Once autonomous vehicles are cheap, safe, and plentiful, retail and logistics companies could buy up millions, seeing that cars can be stores and streets are the ultimate real estate. In fact, self-driving cars could make shopping space nearly obsolete in some areas.”

Maybe you should buy some shares in Amazon, if you haven’t already. P.S. You can buy them online.

 

The Status Quo Bias – Why Every B2B Vendor Has to Understand It

It’s probably the biggest hurdle any B2B vendor has to get over. It’s called the status quo bias, and it’s deadly in any high-risk purchase scenario. According to Wikipedia, the bias occurs when the current baseline (or status quo) is taken as a reference point, and any change from that baseline is perceived as a loss. In other words: if it ain’t broke, don’t fix it. We believe that simply because something exists, it must have merit. The burden of proof then falls on the vendor to overcome this level of complacency.

The status quo bias is actually a bundle of other common biases – the endowment effect, loss aversion, the existence bias, the mere exposure effect and other psychological factors – that tend to continually jam the cogs of B2B commerce. Why B2B? The status quo bias is common in any scenario where risk is high and reward is low, but B2B is particularly subject to it because these are group buying decisions. And, as I’ll soon explain, groups tend to default to the status quo with irritating regularity. The new book from CEB (recently acquired by Gartner) – The Challenger Customer – is all about the status quo bias.

So why is the bias particularly common with groups? Think of the dynamics at play here. Generally speaking, most people have some level of status quo bias. Some will have it more than others, depending on their level of risk tolerance. But let’s look at what happens when we lump all those people together in a group and force them to come to a consensus. Generally, you’re going to have one or two people in the group who are driving for change. Typically, these will be the ones who have the most to gain and a risk tolerance threshold that allows the deal to go forward. On the other end of the spectrum, you have people with low risk tolerance and nothing to gain. They may even stand to lose if the deal goes forward (think of the IT people who have to implement a new technology). In between, you have the moderates, for whom the gain factor and risk tolerance net out to close to zero. Given that those who have something to gain will say yes and those who have nothing to gain will say no, it’s this middle group that will decide whether the deal lives or dies.

Without the status quo bias, the deal might have a 50/50 chance. But the bias stacks the deck toward negative outcomes for the vendor. Even if it tips the balance just a little bit towards “no,” that’s all that’s required to stop a deal dead in its tracks. The more disruptive the deal, the greater the status quo bias. Let’s remember – this is B2B. There are no emotional rewards that can introduce a counteracting bias. It’s been shown in at least one study (Baker, Laury and Williams, 2008) that groups tend to be more risk-averse than the individuals who make them up. When the group starts discussing and – inevitably – disagreeing, it’s typically easier to do nothing.
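To make that dynamic concrete, here’s a toy sketch in Python. It is entirely my own illustration – the roles, weights and numbers are invented, and this is not CEB’s model: each stakeholder weighs personal gain against perceived risk, the status quo bias adds a small tilt toward “no,” and the deal needs most of the group on side to go anywhere.

    # Toy model of a buying committee. Illustrative only; all numbers invented.
    STATUS_QUO_BIAS = 0.1   # a small, built-in preference for doing nothing

    def votes_yes(gain, risk_tolerance, deal_risk):
        """A stakeholder says yes only if perceived gain beats perceived risk
        plus the tilt toward the status quo."""
        perceived_risk = deal_risk * (1 - risk_tolerance) + STATUS_QUO_BIAS
        return gain > perceived_risk

    committee = [
        {"role": "champion",   "gain": 0.9, "risk_tolerance": 0.8},  # drives the deal
        {"role": "moderate A", "gain": 0.3, "risk_tolerance": 0.5},
        {"role": "moderate B", "gain": 0.4, "risk_tolerance": 0.5},
        {"role": "IT",         "gain": 0.1, "risk_tolerance": 0.2},  # nothing to gain
    ]

    deal_risk = 0.5
    yes_votes = sum(votes_yes(p["gain"], p["risk_tolerance"], deal_risk) for p in committee)
    print("Deal proceeds" if yes_votes > len(committee) / 2 else "Status quo wins")

With the bias in place, one of the moderates flips to “no” and the status quo wins; set STATUS_QUO_BIAS to zero and the very same committee approves the deal. That tiny tilt in the middle of the group is the whole ballgame.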

So how do we stickhandle past this bias? The common approach is to divide and conquer – identifying the players and tailoring messages to speak directly to each of them. The counterintuitive finding of the CEB Challenger Customer research was that dividing and conquering is absolutely the wrong thing to do. It actually lessens the possibility of making a sale. While this sounds like it’s just plain wrong, it makes sense if we shift our perspective from the selling side to the buying side.

With our vendor goggles on, we believe that if we tailor messaging to appeal to every individual’s own value proposition, we can build consensus and drive the deal forward. And that would be true if every member of the buying committee were acting rationally. But as we soon see when we put on the buying goggles, they’re not. Their irrational biases are firmly stacked up on the “do nothing” side of the ledger. And by tailoring messaging in different directions, we’re actually just giving them more things to disagree about. We’re creating dysfunction rather than eliminating it. Disagreements almost always default back to the status quo, because it’s the least risky option. The group may not agree about much, but they can agree that the incumbent solution creates the least disruption.

So what do you do? Well, I won’t steal CEB’s thunder here, because The Challenger Customer is absolutely worth a read if you’re a B2B vendor. The authors, Brent Adamson, Matthew Dixon, Pat Spenner and Nick Toman, lay out a step-by-step strategy for getting around the status quo bias. The trick is to create a common psychological frame within which everyone can agree that doing nothing is the riskiest alternative. But biases are notoriously sticky things. Setting up a commonly understood frame requires a deep understanding of the group dynamics at play. The one thing I really appreciate about CEB’s approach is that it’s psychologically sound. They make no assumptions about buyer rationality. They know that emotions ultimately drive all human behavior, and B2B purchases are no exception.