Together We Lie

Humans are social animals. We’ve become this way because – evolutionarily speaking – we do better as a group than individually. But there’s a caveat here. If you get a group of usually honest people together, they’re more likely to lie. Why is this?

Martin Kocher and his colleagues from LMU in Munich set up a study where participants had to watch a video of a single roll of a die and then report the number that came up. Depending on what they reported, there was a payoff. The researchers tested both individuals and small groups, the latter given the chance to chat anonymously with one another before reporting. The result?

“Our findings are unequivocal: People are less likely to lie if they decide on their own.”

Even individuals who answered honestly independently started lying when they got in a group.

The researchers called this a “dishonesty shift.” They blame it on a shifting weight placed on the norm of honesty. Norms are those patterns we have that guide us in our behaviors and beliefs. But those norms may be different individually than they are when we’re part of a group.

“Feedback is the decisive factor. Group-based decision-making involves an exchange of views that may alter the relative weight assigned to the relevant norm.”

Let’s look at how this may play out. Individually, we may default to honesty. We do so because we’re unsure of the consequences of not being honest. But when we get in a group, we start talking to others and it’s easier to rationalize not being honest – “Well, if everyone’s going to lie, I might as well too.”

Why is this important? Because marketing is done in groups, by groups, to groups. The dynamics of group-based ethics are important for us to understand. It could help to explain the most egregious breaches of ethics we see becoming more and more commonplace, either in corporations or in governments.

Four of the seminal studies in psychology and sociology shed further light on why groups tend to shift towards dishonesty. Let’s look at them individually.

In 1955, Solomon Asch showed that even if individually we believe something to be incorrect, if enough people around us voice a different opinion, we’ll go with the group consensus rather than risk being the odd person out. In his famous study, he would surround a subject with “plants” who, when shown a card with three black lines of obviously differing lengths on it, would insist that two unequal lines were the same length. The subjects were then asked their opinion. Three-quarters of them went along with the group at least once rather than risk disagreement. As Asch said in his paper – quoting sociologist Gabriel Tarde – “Social man is a somnambulist.” We have about as much independent will as your average sleepwalker.

Now, let’s continue with Stanley Milgram’s Obedience to Authority study, perhaps the most controversial and frightening of the group. When confronted with an authoritative demeanor, a white coat and a clipboard, 65% of the subjects meekly followed directions and delivered what they believed could be lethal levels of electrical shock to a hapless individual. The results were so disheartening that we’ve been trying to debunk them ever since. But a follow-up study by Stanford psychology professor Philip Zimbardo – where subjects were arbitrarily assigned roles as guards and inmates in a mock prison scenario – was even more shocking. We’re more likely to become monsters and abandon our personal ethics when we’re in a group than when we act alone. Whether it’s obedience to authority – as Milgram was trying to prove – or social conformity taken to the extreme, we tend to do very bad things when we’re in bad company.

But how do we slip so far, so quickly, from our own personal ethical baseline? Here’s where the last study I’ll cite can shed a little light. Sociologist Mark Granovetter – famous for his Strength of Weak Ties study – also looked at the viral spreading of behaviors in groups. I’ve talked about this in a previous column, but here’s the short version: if we have a choice between two options with accompanying social consequences, which one we pick may be driven by social conformity. If we see enough other people around us choosing the more disruptive option (i.e., starting a riot), we may follow suit. Even though we all have different thresholds – which we do – the nature of a crowd is such that those with the lowest thresholds pick the disruptive option first, setting off a bandwagon effect that eventually tips the entire group over its collective threshold.
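Granovetter’s threshold mechanism is simple enough to sketch in a few lines of code. This is an illustrative toy model, not the original paper’s formulation: each person joins once the number of people already participating meets their personal threshold.

```python
# Toy sketch of Granovetter's threshold model (illustrative numbers only).

def cascade(thresholds):
    """Each person joins once the number already participating meets or
    exceeds their personal threshold. Returns how many end up
    participating when the process settles."""
    joined = 0
    while True:
        now = sum(1 for t in thresholds if t <= joined)
        if now == joined:   # no one new joined; the crowd has settled
            return joined
        joined = now

# One instigator (threshold 0) plus evenly spaced thresholds: everyone riots.
print(cascade([0, 1, 2, 3, 4]))   # 5

# Remove the threshold-1 person and the cascade stalls at the instigator.
print(cascade([0, 2, 3, 4, 5]))   # 1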

These were all studied in isolation, because that’s how science works: we study variables in isolation. But it’s when factors combine that we get the complexity that typifies the real world – and the real marketplace. And that’s where predictability goes out the window. The group dynamics in play can create behavioral patterns that make no sense to the average person with an average degree of morality. But it’s happened before, it’s happening now, and it’s sure to happen again.

I, Robot…

Note: No Artificial Intelligence was involved in the creation of this column.

In 1942, science fiction writer Isaac Asimov introduced the Three Laws of Robotics in his short story collection, I, Robot:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov presented the laws as coming from the Handbook of Robotics, 56th Edition, 2058 A.D. What was once an unimaginably distant future is now knocking with increasing intensity on the door of the present. And Elon Musk, for one, is worried: “AI is a fundamental risk to the existence of human civilization.” Musk believes that, Laws of Robotics or no, we won’t be able to control this genie once it gets out of its bottle.

Right now, the genie looks pretty benign. In the past year, the Washington Post has used robot reporters to write over 850 stories. The Post believes this is a win/win with their human reporters, because the robot, named Heliograf, can:

  • Cover stories that wouldn’t have been covered due to lack of human resources
  • Do the factual heavy lifting for human reporters
  • Alert humans to possible news stories in big data sets

So, should we fear or cheer robots? I think the Post’s experiment highlights two areas that AI excels at, and indicates how we might play nice with machines.

For AI to work effectively, the dots have to be pretty well sketched out. When they are, AI can be tireless in scouting out relevant facts and data where humans would tend to get bored easily. But humans are still much better at connecting those dots, especially when no obvious connection is apparent. We do it through something called intuition. It’s at least one area where we can still blow machines away.

Machines are also good at detecting patterns in overwhelming amounts of data. Humans tend to overfit: we make the data fit our narratives. We’ll come back to this point in a minute, but for now, let’s go back to intuition. It’s still the trump card we humans hold. In 2008, Wired editor Chris Anderson prematurely (and, many believe, incorrectly) declared the Scientific Method dead, thanks to the massive data sets we now have available:

“We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.”

Anderson gets it partly right, but he also unfairly gives intuition short shrift. This is not a zero-sum game. Intuition and A.I. can and should play nicely together. As I mentioned a few weeks ago, human intuition was found to boost the effectiveness of an optimization algorithm by 25%.

Evolutionary biologist Richard Dawkins recently came to the defense of intuition in Science, saying:

“Science proceeds by intuitive leaps of the imagination – building an idea of what might be true, and then testing it”

The very human problem comes when we let our imaginations run away from the facts, bending science to fit our hypotheses:

“It is important that scientists should not be so wedded to that intuition that they omit the very important testing stage.”

There is a kind of reciprocation here – an oscillation between phases. Humans are great at some stages – the ones that require intuition and imagination – and machines are better at others – where a cold and dispassionate analysis of the facts is required. Like most things in nature that pulse with a natural rhythm, the whole gains from the opposing forces at work here. It is a symphony with a beat and a counterbeat.

That’s why, for the immediate future anyway, machines should bend not to our will, but to our imagination.

Addicted to Tech

A few columns ago, I mentioned one of the aspects that is troubling me about technology – the shallowness of social media. I had mentioned at the time that there were other aspects that were equally troubling. Here’s one:

Technology is addictive – and it’s addictive by design.

Let’s begin by looking at the definition of addiction:

Persistent compulsive use of a substance known by the user to be harmful

So, let’s break it down. I don’t think you can quibble with the persistent, compulsive use part. When’s the last time you had your iPhone in your hand? We can simply swap out “substance” for “device” or “technology.” That leaves us with the last qualifier – “known by the user to be harmful” – and there are two parts to this: is it harmful, and does the user know it’s harmful?

First, let’s look at the neurobiology of addiction. What causes us to use something persistently and compulsively? Here, dopamine is the culprit. Our reward center uses dopamine and the pleasurable sensation it produces as a positive reinforcement to cause us to pursue activities which over many hundreds of generations have proven to be evolutionarily advantageous. But Dr. Gary Small, from the UCLA Brain Research Institute, warns us that this time could be different:

“The same neural pathways in the brain that reinforce dependence on substances can reinforce compulsive technology behaviors that are just as addictive and potentially destructive.”

We like to think of big tobacco as the most evil of all evil empires – guilty of promoting addiction to a harmful substance – but is there much separating it from the purveyors of tech – Facebook or Google, for instance? According to Tristan Harris, there may be a very slippery slope between the two. I’ve written about Tristan before. He’s the former Google product manager who launched the Time Well Spent non-profit, devoted to stopping “tech companies from hijacking our minds.” Harris points the finger squarely at the big Internet companies for creating platforms that are intentionally designed to soak up as much of our time as possible. There’s empirical evidence to back up Harris’s accusations. Researchers at Michigan State University and two universities in the Netherlands found that merely seeing the Facebook logo can trigger a conditioned response in a social media user that starts the dopamine cycle spinning. We start jonesing for a social media fix.

So, what if our smart phones and social media platforms seduce us into using them compulsively? What’s the harm, as long as it’s not hurting us? That’s the second part of the addiction equation – is whatever we’re using harmful? After all, it’s not like tobacco, where it was proven to cause lung cancer.

Ah, but that’s the thing, isn’t it? We were smoking cigarettes for almost a hundred years before we finally found out they were bad for us. Sometimes it takes a while for the harmful effects of an addiction to appear. The same could be true for our tech habit.

Tech addiction plays out at many different levels of cognition. This could potentially be much more sinister than just the simple waste of time that Tristan Harris is worried about. There’s mounting evidence that overuse of tech could dramatically alter our ability to socialize effectively with other humans. The debate, which I’ve talked about before, comes when we substitute screen-to-screen interaction for face-to-face. The supporters say that this is simply another type of social bonding – one that comes with additional benefits. The naysayers worry that we’re just not built to communicate through screens, and that – sooner or later – there will be a price to be paid for our obsessive use of digital platforms.

Dr. Jean Twenge, professor of psychology at San Diego State University, researches generational differences in behavior. It’s here where the full impact of the introduction of a disruptive environmental factor can be found. She found a seismic shift in behaviors between Millennials and the generation that followed them. It was a profound difference in how these generations viewed the world and where they spent their time. And it started in 2012 – the year when the proportion of Americans who owned a smartphone surpassed 50 percent. She sums up her concern in unequivocal terms:

“The twin rise of the smartphone and social media has caused an earthquake of a magnitude we’ve not seen in a very long time, if ever. There is compelling evidence that the devices we’ve placed in young people’s hands are having profound effects on their lives—and making them seriously unhappy.”

Not only are we less happy, we may be becoming less smart. As we become more reliant on technology, we do something called cognitive off-loading. We rely on Google rather than our memories to retrieve facts. We trust our GPS more than our own wayfinding strategies to get us home. Cognitive off-loading is a way to move beyond the limits of our own minds, but there may be an unacceptable trade-off here. Brains are like muscles – if we stop using them, they begin to atrophy.

Let’s go back to that original definition and the three qualifying criteria:

  • Persistent, compulsive use
  • Harmful
  • We know it’s harmful

In the case of tech, let’s not wait a hundred years to put check marks after all of these.

The Assisted Reality of the New Marketer

Last week, MediaPost’s Laurie Sullivan warned us that the future of analytical number crunchers is not particularly rosy in the world of marketing. With cognitive technologies like IBM’s Watson coming on strong in more and more places, analytic skills are not that hot a commodity any more. Ironically, when it comes to marketing, the majority of companies have no plans to incorporate cognitive technologies in the near future. According to a report from IBM and Oxford Economics, only 24% of organizations surveyed have a plan to incorporate them in their own operations.

Another study, from Forrester, explored AI marketing readiness in the retail and e-commerce sectors. There, the state of readiness is a little better: in these typically forward-thinking sectors, 72% are implementing AI marketing tech in the next year, but only 45% of those companies would consider themselves as excelling in at least two of three dimensions of readiness.

If those numbers seem contradictory, we should understand what the difference between cognitive technology and artificial intelligence is. You’ll notice that IBM refers to Watson as “cognitive computing.” As Rob High, IBM’s CTO for Watson put it, “What it’s really about is involvement of a human in the loop,” and he described Watson as “augmented intelligence” rather than artificial intelligence.

That “human in the loop” is a critical difference between the two technologies. Whether we like it or not, machines are inevitable in the world of marketing, so we’d better start thinking about how to play nice with them.

I remember first seeing a video from the IBM Amplify summit at a MediaPost event last year. Although the presentation was a little stilted, the promise was intriguing. It showed a marketer musing about a potential campaign and throwing “what ifs” at Watson, which responded almost instantly with quantified answers. The premise of the video was to show how smart Watson was. But here’s a “what if” to consider: what if the real key was the hypotheticals the human seemed to be pulling out of the blue? That doesn’t seem that impressive to us – certainly not as impressive as Watson’s corralling and crunching of relevant numbers in the blink of an eye. Musing is what we do. But this is just one example of something called Moravec’s Paradox.

Moravec’s Paradox, as stated by AI pioneer Marvin Minsky, is this: “In general, we’re least aware of what our minds do best. We’re more aware of simple processes that don’t work well than of complex ones that work flawlessly.” In other words, what we find difficult are the tasks that machines are well suited for, and the things we’re not even aware of are the things machines find notoriously hard to do. Things like intuition. And empathy. If we’re looking at the future of the human marketer, we’re probably looking at those two things.

In his book, Humans Are Underrated, Geoff Colvin writes,

“Rather than ask what computers can’t do, it’s much more useful to ask what people are compelled to do—those things that a million years of evolution cause us to value and seek from other humans, maybe for a good reason, maybe for no reason, but it’s the way we are.”

We should be ensuring that both humans and machines are doing what they do best, essentially erasing Moravec’s Paradox. Humans focus on intuition and empathy and machines do the heavy lifting on the analyzing and number crunching. The optimal balance – at this point anyway – is a little bit of both.

In Descartes’ Error, neurologist Antonio Damasio showed that without human intuition and emotion – together with the corresponding physical cues he called somatic markers – we could rationalize ourselves into a never-ending spiral without ever coming to a conclusion. We need to be human to function effectively.

Researchers at MIT have even tried to build this into an algorithm. In 1954, Herbert Simon introduced a concept called bounded rationality. It may seem like this puts limits on the cognitive power of humans, but as programmers like to say, bounded rationality is a feature, not a bug. The researchers at MIT found that in an optimization challenge, such as finding the optimal routing strategy for an airline, humans have the advantage of being able to impose intuitive limits on the number of options considered. For example, a human can say, “Planes should visit each city at most once,” and thereby dramatically limit the number crunching required. When these intuitive strategies were converted to machine language and introduced into automated algorithms, those algorithms got 10 to 15% smarter.
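The arithmetic behind that kind of pruning is easy to see. Here’s an illustrative sketch – the six-city count is mine, and this is not the MIT team’s actual setup – showing how the “each city at most once” rule collapses the candidate space before any clever optimization even starts:

```python
from itertools import permutations

cities = ["A", "B", "C", "D", "E", "F"]
n = len(cities)

# Naive search space: any sequence of n stops, repeat visits allowed.
unconstrained = n ** n                         # 6^6 = 46,656 candidate routes

# Human rule: "planes should visit each city at most once."
constrained = len(list(permutations(cities)))  # 6! = 720 candidate routes

print(unconstrained // constrained)  # 64 - the rule cuts the space ~65-fold
```

With ten cities the gap widens to 10^10 versus 10!, roughly a 2,700-fold reduction, which is why one intuitive sentence from a human can be worth so much number crunching.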

When it comes right down to it, the essence of marketing is simply a conversation between two people. All the rest: the targeting, the automation, the segmentation, the media strategy – this is all just to add “mass” to marketing. And that’s all the stuff that machines are great at. For us humans, our future seems to rely on our past – and on our ability to connect with other humans.

Curmudgeon, Chicken Little or Cognoscenti?

Apparently I’m old and out of step. Curmudgeonly, even. And this is from people of my own generation. My previous column about the potential shallowness encouraged by social media drew a few comments that indicated I was just being a grumpy old man. One was from an old industry friend – Brett Tabke:

“The rest of the article is like out of the 70’s in that it is devoid of the reality that is the uber-me generation. The selfie is only a reflection of their inward focus.”

The other was from Monica Emrich, whom I’ve never had the pleasure of meeting:

“‘Social Media Is Barely Skin-Deep.’ ho hum. History shows: when new medium hits, civilization as we know it is over.”

These comments seem to be telling me, “Relax. You just don’t understand because you’re too old. Everything will be great.” And if that’s true, I’d be okay with it. I’m more than willing to be proven a doddering old fool if it means technology is ushering us into a new era of human greatness.

But what if this time is different? What if Monica’s facetious comment actually nailed it? Maybe civilization as we know it will be over. The important part of this is “as we know it.” Every technological disruption unleashes a wave of creative destruction that pushes civilization in a new direction. We seem to blindly assume it will always go in the right direction. And it is true that technology has generally elevated the human race. But not uniformly – and not consistently. What if this shift is different? What if we become less than what we were? It can happen. Brexit – Xenophobia – Trump – Populism, all these things are surfing on the tides of new technology.

Here’s the problem. There are some aspects of technology that we’ve never had to deal with before – at least, not at this scale. One of these aspects (others will no doubt be the topic of a future Media Insider) is that technology is now immersive and ubiquitous. It creates an alternate reality for us, and it has done it in a few short decades. Why is this dangerous? It’s dangerous because evolution has not equipped us to deal with this new reality. In the past, when there has been a shift in our physical reality, it has taken place over many generations. Natural selection had the time to reshape the human genome to survive and eventually thrive in the new reality. Along the way, we acquired checks and balances that allowed us to deal with the potentially negative impacts of the environment.

But our new reality is different. It has happened in the space of a single generation. There is no way we could have acquired natural defenses against it. We are operating in an environment we have never been tested for. The consequences are yet to be discovered.

Now, your response might be to say, “Yes, evolution doesn’t move this quickly, but our brains can. They are elastic and malleable.” This is true, but there’s a big “but” hidden in this approach. Our brains rewire to better match their environment. This is one of the things humans excel at. But this rewiring happens on top of a primitive platform with some built-in limitations. The assumption is that a better match with our environment provides a better chance for survival of the species.

But what if technology is throwing us a curve ball in this case? No matter what the environment we have adapted to, there has been one constant: The history of humans depends on our success in living together. We have evolved to be social animals but that evolution is predicated on the assumption that our socializing would take place face-to-face. Technology is artificially decoupling our social interactions from the very definition of society that we have evolved to be able to handle. A recent Wharton interview with Eden Collinsworth sounds the same alarm bells.

“The frontal lobes, which are the part of the brain that puts things in perspective and allows you to be empathetic, are constantly evolving. But it is less likely to evolve and develop those skills if you are in front of a screen. In other words, those skills come into play when you have a face-to-face interaction with someone. You can observe facial gestures. You can hear the intonation of a voice. You’re more likely to behave moderately in that exchange, unless it’s a just a knock-down, drag-out fight.”

Collinsworth’s premise – which is covered in her new book, Behaving Badly – is that this artificial reality is changing our concepts of morality and ethics. She reminds us the two are interlinked, but they are not the same thing. Morality is our own personal code of conduct. Ethics are a shared code that society depends on to instill a general sense of fairness. Collinsworth believes both are largely learned from the context of our culture. And she worries that a culture that is decoupled from the physical reality we have evolved to operate in may have dire consequences.

The fact is that if our morality and ethics are intended to keep us socially more cohesive, this works best in a face-to-face context. In an extreme example of this, Lt. Col. Dave Grossman, a former paratrooper and professor of psychology at West Point, showed how our resistance to killing another human in combat is inversely related to our physical distance from them. The closer we are to them, the more resistant we are to the idea of killing them. This makes sense in an evolutionary environment where all combat was hand-to-hand. But today, the killer could be in a drone flight control center thousands of miles from his or her intended target.

This evolved constraint on unethical behavior – the social check and balance of being physically close to the people we’re engaging with – is important. And while the two examples I’ve cited – the self-absorbed behavior on social networks and the moral landscape of a drone strike operator – may seem magnitudes apart in terms of culpability, the underlying neural machinery is related. What we believe is right and wrong is determined by a moral compass set to the bearings of our environment. The fundamental workings of that compass assumed we would be face-to-face with the people we have to deal with. But thanks to technology, that’s no longer the case.

Maybe Brett and Monica are right. Maybe I’m just being alarmist. But if not, we’d better start paying more attention. Because civilization “as we know it” may be ending.

Our Brain on Reviews

An interesting new study was just published about how our brains mathematically handle online reviews, and I want to talk about it today. But before I get to that, I want to talk about foraging a bit.

The story of how science discovered our foraging behaviors serves as a mini lesson in how humans tick. The economists of the 1940s and ’50s built the world of micro-economics on the foundation that humans were perfectly rational – we were homo economicus. When making personal economic choices in a world of limited resources, we maximized utility. The economists of the time assumed this was a uniquely human property, bequeathed on us by virtue of the reasoning power of our superior brains.

In the ’60s, behavioral ecologists knocked our egos down a peg or two. It wasn’t just humans that could do this. Foxes could do it. Starlings could do it. Pretty much any species had the same ability to make seemingly optimal choices when faced with scarcity. It was how animals kept from starving to death. This was the birth of foraging theory. It wasn’t some homo-sapien-exclusive behavior directed from the heights of rationality downwards. It was an evolved behavior built from the ground up. It’s just that humans had learned how to apply it to our abstract notion of economic utility.

Three decades later, two researchers at Xerox’s Palo Alto Research Center found another twist. Not only had our ability to forage evolved all the way through our extensive family tree, but we also seemed to borrow this strategy and apply it to entirely new situations. Peter Pirolli and Stuart Card found that when humans navigate content in online environments, the exact same patterns emerge. We forage for information. The same calculations determine whether we will stay in an information “patch” or move on to more promising territory.

This seemed to indicate three surprising discoveries about our behavior:

  • Much of what we think is rational behavior is actually driven by instincts that have evolved over millions of years
  • We borrow strategies from one context and apply them in another. We use the same basic instincts to find the FAQ section of a website that we used to find sustenance on the savannah.
  • Our brains seem to use Bayesian logic to continuously calculate and update a model of the world. We rely on this model to survive in our environment, whatever and wherever that environment might be.

So that brings us to the study I mentioned at the beginning of this column. If we take the above into consideration, it should come as no surprise that our brain uses similar evolutionary strategies to process things like online reviews. But the way it does it is fascinating.

The amazing thing about the brain is how it seamlessly integrates and subconsciously synthesizes information and activity from different regions. In foraging, for example, the brain integrates information from the regions responsible for wayfinding – knowing our place in the world – with signals from the dorsal anterior cingulate cortex – an area responsible for reward monitoring and executive control. Essentially, the brain is constantly updating an algorithm about whether the effort required to travel to a new “patch” will be balanced by the reward we’ll find when we get there. You don’t consciously marshal the cognitive resources required to do this. The brain does it automatically. What’s more, the brain uses many of the same resources and the same algorithm whether we’re considering going to McDonald’s for a large order of fries or deciding what online destination would be the best bet for researching our upcoming trip to Portugal.
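The patch-leaving calculation can be caricatured in a couple of lines. This is a loose sketch of the marginal-value idea from foraging theory, not a claim about the brain’s actual implementation: stay while the current patch’s rate of return beats what the wider environment offers once travel cost is factored in.

```python
def should_leave(patch_rate, environment_rate, travel_cost):
    """Stay-or-go rule of thumb: leave the current patch when its rate
    of return drops below the environment's average minus the cost of
    traveling to a new patch. Units are arbitrary 'reward per minute'."""
    return patch_rate < environment_rate - travel_cost

# A depleting patch: worth leaving once its yield sinks far enough.
print(should_leave(2.0, 5.0, 1.0))  # True  - better patches justify the trip
print(should_leave(4.5, 5.0, 1.0))  # False - travel cost makes staying smarter
```

Whether the "patch" is a berry bush, a McDonald's, or a travel website, the same comparison of current yield against discounted alternatives decides when we move on.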

In evaluating online reviews, we have a different challenge: how reliable are the reviews? The context may be new – our ancestors didn’t have TripAdvisor or Airbnb ratings for choosing the right cave to sleep in tonight – but the problem isn’t. What criteria should we use when we decide to integrate social information into our decision-making process? If Thorlak the bear hunter tells me there’s a great cave a half-day’s march to the south, should I trust him? Experience has taught us a few handy rules of thumb for evaluating sources of social information: the reliability of the source and the consensus of the crowd. Has Thorlak ever lied to us before? Do others in the tribe agree with him? These are hardwired social heuristics. We apply them instantly and instinctively to new sources of information that come from our social network. We’ve been doing it for thousands of years. So it should come as no surprise that we borrow these strategies when dealing with online reviews.

In a neuro-scanning study from University College London, researchers found that reliability plays a significant role in how our brains treat social information. Once again, a well-evolved capability of the brain is recruited to help us in a new situation. The dorsomedial prefrontal cortex is the area of the brain that keeps track of our social connections. This “social monitoring” ability worked in concert with the ventromedial prefrontal cortex, an area that processes value estimates.

The researchers found that this part of our brain works like a Bayesian computer when considering incoming information. First we establish a “prior” that represents a model of what we believe to be true. Then we subject this prior to possible statistical updating based on new information – in this case, online reviews. If our confidence is high in this “prior” and the incoming information is weak, we tend to stick with our initial belief. But if our confidence is low and the incoming information is strong – i.e. a lot of positive reviews – then the brain overrides the prior and establishes a new belief, based primarily on the new information.

While this seems like common sense, the mechanisms at play are interesting. The brain effortlessly pattern-matches new types of information and recruits the region most likely to have evolved to successfully interpret that information. In this case, the brain has decided that online reviews are most like information that comes from social sources. It combines the interpretation of this data with an algorithmic function that assigns value to the new information and calculates a new model – a new understanding of what we believe to be true. And it does all this “under the hood” – just below the level of conscious thought.

Flow and the Machine

“In the future, either you’re going to be telling a machine what to do, or the machine is going to be telling you.”

Christopher Penn – VP of Marketing Technology, Shift Communications.

I often talk about the fallibility of the human brain – those irrational, cognitive biases that can cause us to miss the reality that’s right in front of our face. But there’s another side to the human brain – the intuitive, almost mystical machinations that happen when we’re on a cognitive roll, balancing gloriously on the edge between consciousness and subconsciousness. Malcolm Gladwell took a glancing shot at this in his mega-bestseller Blink. But I would recommend going right to the master of “flow” – Mihaly Csikszentmihalyi (pronounced, if you’re interested, me-hi Chick-sent-me-hi). The Hungarian psychologist coined the term “flow” – referring to a highly engaged mental state where we’re completely absorbed in the work at hand. Csikszentmihalyi calls it the “psychology of optimal experience.”

It turns out there’s a pretty complicated neuroscience behind flow. In a blog post, gamer Adam Sinicki describes a state where the brain finds an ideal balance between instinctive behavior and total focus on one task. The state is called transient hypofrontality, and it can sometimes be brought on by physical exercise – it’s why some people can think better while walking, or even jogging. The brain juggles the resources required, and this can force a stepping down of the prefrontal cortex, the part of the brain that causes us to question ourselves. This part of the brain is essential in unfamiliar circumstances, but in a situation where we’ve thoroughly rehearsed the actions required, it’s actually better if it takes a break. This allows other – more intuitive – parts of the brain to come to the fore. And that may be the secret of “flow.” It may also be the one thing that machines can’t replicate – yet.

The Rational Machine

If we were to compare the computer to a part of the brain, it would probably be the prefrontal cortex (PFC). When we talk about cognitive computing, what we’re really talking about is building a machine that can mimic – or exceed – the capabilities of the PFC. This is the home of our “executive function” – complex decision making, planning, rationalization and our own sense of self. It’s probably no coincidence that the part of our brain we rely on to reason through complex challenges – like designing artificial intelligence – would build a machine in its own image. And in this instance, we’re damned close to surpassing ourselves. The PFC is an impressive chunk of neurobiology in its flexibility and power, but speedy it’s not. In fact, we’ve found that if we happen to make a mistake, the brain slows almost to a standstill. It shakes our confidence and kills any “flow” that might be happening in its tracks. This is what happens to athletes when they choke. With artificial intelligence, we are probably on the cusp of creating machines that can do most of what the PFC can do, only faster, more reliably and with the ability to process much more information.

But there’s a lot more to the brain than just the PFC. And it’s in this ethereal intersection between rationality and intuition that the essence of being human might be hiding.

The Future of Flow

What if we could harness “flow” at will, working in partnership with a machine that can crunch data in real time and present us with the inputs required to continue our flow-fueled exploration without the fear of making a mistake? It’s not so much a machine telling us what to do – or the reverse – as it is a partnership between human intuition and machine-based rationalization. It’s analogous to driving a modern car, where the intelligent safety and navigation features backstop our ability to drive.

Of course, it may just be a matter of time before machines best us in this area as well. Perhaps machines already have mastered flow because they don’t have to worry about the consequences of making a mistake. But it seems to me that if humans have a future, it’s not going to be in our ability to crunch data and rationalize. We’ll have to find something a little more magical to stake our claim with.


The Status Quo Bias – Why Every B2B Vendor has to Understand It

It’s probably the biggest hurdle any B2B vendor has to get over. It’s called the Status Quo Bias, and it’s deadly in any high-risk purchase scenario. According to Wikipedia, the bias occurs when the current baseline (or status quo) is taken as a reference point, and any change from that baseline is perceived as a loss. In other words: if it ain’t broke, don’t fix it. We believe that simply because something exists, it must have merit. The burden of proof then falls on the vendor to overcome this complacency.

The Status Quo Bias is actually a bundle of other common biases, including the Endowment Effect, the Loss Aversion Bias, the Existence Bias, the Mere Exposure Effect and other psychological factors that tend to continually jam the cogs of B2B commerce. Why B2B? The Status Quo Bias is common in any scenario where risk is high and reward is low, but B2B in particular is subject to it because these are group buying decisions. And, as I’ll soon explain, groups tend to default to the Status Quo Bias with irritating regularity. The new book from CEB (recently acquired by Gartner) – The Challenger Customer – is all about the Status Quo Bias.

So why is the bias particularly common in groups? Think of the dynamics at play here. Generally speaking, most people have some level of the Status Quo Bias; some have it more than others, depending on their level of risk tolerance. But let’s look at what happens when we lump all those people together in a group and force them to come to a consensus. Generally, you’re going to have one or two people in the group who are driving for change. Typically, these will be the ones who have the most to gain and whose risk tolerance allows the deal to go forward. On the other end of the spectrum, you have people with low risk tolerance and nothing to gain. They may even stand to lose if the deal goes forward (think of the IT people who have to implement a new technology). In between, you have the moderates, whose potential gain and risk tolerance net out to close to zero. Given that those who have something to gain will say yes and those who have nothing to gain will say no, it’s this middle group that decides whether the deal lives or dies.

Without the Status Quo Bias, the deal might have a 50/50 chance. But the bias stacks the deck towards negative outcomes for the vendor. Even if it tips the balance just a little bit towards “no” – that’s all that’s required to stop a deal dead in its tracks. The more disruptive the deal, the greater the Status Quo Bias. Let’s remember – this is B2B. There are no emotional rewards that can introduce a counteracting bias. It’s been shown in at least one study (Baker, Laury and Williams, 2008) that groups tend to be more risk averse than the individuals who make them up. When groups start discussing and – inevitably – disagreeing, it’s typically easier to do nothing.
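
This netting-out of gain against risk tolerance can be captured in a toy simulation. The numbers and the voting rule are entirely my own illustration, not from the CEB research; the status quo bias is modeled as a flat penalty added to everyone's perceived risk:

```python
# Each member votes "yes" if their expected gain exceeds their risk
# threshold plus the shared status quo penalty.

def member_votes_yes(gain, risk_tolerance, status_quo_penalty=0.0):
    return gain > (risk_tolerance + status_quo_penalty)

committee = [
    # (expected gain, risk threshold)
    (0.9, 0.30),  # champion: lots to gain, risk tolerant
    (0.5, 0.45),  # moderates: gain and risk nearly net out
    (0.5, 0.48),
    (0.1, 0.40),  # IT: little to gain, stands to lose
]

def deal_passes(penalty):
    """Majority vote decides whether the deal goes forward."""
    yes = sum(member_votes_yes(g, r, penalty) for g, r in committee)
    return yes > len(committee) / 2

print(deal_passes(penalty=0.0))  # True  -- the moderates tip it forward
print(deal_passes(penalty=0.1))  # False -- a small shift kills the deal
```

Note that only the moderates' votes flip: the champion and the IT skeptic vote the same way either way, which is exactly why the middle group decides the outcome.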

So, how do we stickhandle past this bias? The common approach is to divide and conquer – identifying the players and tailoring messages to speak directly to each of them. The counterintuitive finding of the CEB Challenger Customer research was that dividing and conquering is absolutely the wrong thing to do. It actually lessens the possibility of making a sale. While this sounds just plain wrong, it makes sense if we shift our perspective from the selling side to the buying side.

With our vendor goggles on, we believe that if we tailor messaging to appeal to every individual’s own value proposition, that would be a way to build consensus and drive the deal forward. And that would be true, if every member of our buying committee were acting rationally. But as we soon see when we put on the buying goggles, they’re not. Their irrational biases are firmly stacked up on the “do nothing” side of the ledger. And by tailoring messaging in different directions, we’re actually just giving them more things to disagree about. We’re creating dysfunction rather than eliminating it. Disagreements almost always default back to the status quo, because it’s the least risky option. The group may not agree about much, but they can agree that the incumbent solution creates the least disruption.

So what do you do? Well, I won’t steal the CEB’s thunder here, because The Challenger Customer is absolutely worth a read if you’re a B2B vendor. The authors, Brent Adamson, Matthew Dixon, Pat Spenner and Nick Toman, lay out a step-by-step strategy for getting around the Status Quo Bias. The trick is to create a common psychological frame where everyone can agree that doing nothing is the riskiest alternative. But biases are notoriously sticky things. Setting up a commonly understood frame requires a deep understanding of the group dynamics at play. The one thing I really appreciate about CEB’s approach is that it’s “psychologically sound.” They make no assumptions about buyer rationality. They know that emotions ultimately drive all human behavior, and B2B purchases are no exception.

We’re Becoming Intellectually “Obese”

Humans are defined by scarcity. All our evolutionary adaptations tend to be built to ensure survival in harsh environments. This can sometimes backfire on us in times of abundance.

For example, humans are great at foraging. We have built-in algorithms that tell us which patches are most promising and when we should give up on the patch we’re in and move to another patch.
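
One of those built-in rules is well enough understood to write down: the marginal value theorem from foraging theory says to leave the current patch when its instantaneous yield drops below the average yield of the environment as a whole. A minimal sketch, with an assumed diminishing-returns curve and made-up numbers:

```python
# Foraging in a patch has diminishing returns: the longer you stay,
# the less each additional minute yields.

def patch_yield(minutes_in_patch):
    """Current yield rate (food per minute) after time spent in a patch."""
    return 10.0 / (1 + minutes_in_patch)

def should_leave(minutes_in_patch, average_environment_rate):
    """Leave when the patch's current rate falls below the environment average."""
    return patch_yield(minutes_in_patch) < average_environment_rate

print(should_leave(0, 4.0))  # False -- a fresh patch yields 10/min, so stay
print(should_leave(3, 4.0))  # True  -- the rate has fallen to 2.5/min, so move on
```

Swap "food per minute" for "useful information per click" and the same rule describes when we abandon a web page or search result for the next one.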

We’re also good at borrowing strategies that evolution designed for one purpose and applying them for another purpose. This is called exaptation. For example, we’ve exapted our food foraging strategies and applied them to searching for information in an online environment. We use these skills when we look at a website, conduct an online search or scan our email inbox. But as we forage for information – or food – we have to remember, this same strategy assumes scarcity, not abundance.

Take food for example. Nutritionally we have been hardwired by evolution to prefer high fat, high calorie foods. That’s because this wiring took place in an environment of scarcity, where you didn’t know where your next meal was coming from. High fat, high calorie and high salt foods were all “jackpots” if food was scarce. Eating these foods could mean the difference between life and death. So our brains evolved to send us a reward signal when we ate these foods. Subsequently, we naturally started to forage for these things.

This was all good when our home was the African savannah. Not so good when it’s Redondo Beach, there’s a fast food joint on every corner and the local Wal-Mart’s shelves are filled to overflowing with highly processed pre-made meals. We have “refined” food production to continually push our evolutionary buttons, gorging ourselves to the point of obesity. Foraging isn’t a problem here. Limiting ourselves is.

So, evolution has made humans good at foraging when things are scarce, but not so good at filtering in an environment of abundance. I suspect the same thing that happened with food is today happening with information.

Just like we are predisposed to look for food that is high in fats, salt and calories, we are drawn to information that:

  1. Leads to us having sex
  2. Leads to us having more than our neighbors
  3. Leads to us improving our position in the social hierarchy

All those things make sense in an evolutionary environment where there’s not enough to go around. But, in a society of abundance, they can cause big problems.

Just like food, for most of our history information was in short supply. We had to make decisions based on too little information, rather than too much. So most of our cognitive biases were developed to allow us to function in a setting where knowledge was in short supply and decisions had to be made quickly. In such an environment, these heuristic shortcuts would usually end up working in our favor, giving us a higher probability of survival.

These evolutionary biases become dangerous as our information environment becomes more abundant. We weren’t built to rationally seek out and judiciously evaluate information. We were built to make decisions based on little or no knowledge. There is an override switch we can use if we wish, but it’s important to know that just like we’re inherently drawn to crappy food, we’re also subconsciously drawn to crappy information.

Whether or not you agree with the mainstream news sources, the fact is that there was a thoughtful editorial process intended to improve the quality of the information we were given. Entire teams of people were employed to spend their days rationally thinking about gathering, presenting and validating the information that would be passed along to the public. In Nobel laureate Daniel Kahneman’s terminology, they were “thinking slow” about it. And because the transactional costs of getting that information to us were so high, there was a relatively strong signal-to-noise ratio.

That is no longer the case. Transactional costs have dropped to the point that it costs almost nothing to get information to us. This allows information providers to completely bypass any editorial loop and get it in front of us. Foraging for that information is not the problem. Filtering it is. As we forage through potential information “patches” – whether they be on Google, Facebook or Twitter – we tend to “think fast” – clicking on the links that are most tantalizing.

I would never have dreamed that having too much information could be a bad thing. But most of the cautionary columns I’ve written over the last few years seem to have the same root cause – we’re becoming intellectually “obese.” We’ve developed an insatiable appetite for fast, fried, sugar-frosted information.

 

Damn You Technology…

Quit batting your seductive visual sensors at me. You know I can’t resist. But I often wonder what I’m giving up when I give in to your temptations. That’s why I was interested in reading Tom Goodwin’s take on the major theme at SXSW – the Battle for Humanity. He broke this down into three sub-themes. I agree with them. In fact, I’ve written on all of them in the past. They were:

Data Trading – We’re creating a market for data. But when you’re the one that generated that data, who should own it?

Shift to No Screens – an increasing number of connected devices will change our concept of what it means to be online.

Content Tunnel Vision – As the content we see is increasingly filtered based on our preferences, what does that do for our perception of what is real?

But while we’re talking about our imminent surrender to the machines, I feel there are some other themes that also merit some discussion. Let’s limit it to two today.

A New Definition of Connection and Community


Robert Sapolsky

A few weeks ago I read a fascinating article by neuroendocrinologist and author Robert Sapolsky. In it, he posits that understanding Capgras Syndrome is the key to understanding the Facebook society. Capgras, first identified by French psychiatrist Joseph Capgras, is a disorder where we can recognize a person’s face but can’t retrieve the accompanying feelings of familiarity. Those afflicted can identify the face of a loved one but swear it’s actually an identical imposter. Recognition of a person and retrieval of the emotions attached to that person are handled by two different parts of the brain. When the connection is broken, Capgras Syndrome is the result.

This bifurcation of how we identify people is interesting. There is the yin and yang of cognition and emotion. The fusiform gyrus cognitively “parses” the face and then the brain retrieves the emotions and memories that are associated with it. To a normally functioning brain, it seems seamless and connected, but because two different regions (or, in the case of emotion, a network of regions) are involved, they can neurologically evolve independently of each other. And in the age of Facebook, that could mean a significant shift in the way we recognize connections and create “cognitive communities.” Sapolsky elaborates:

Through history, Capgras syndrome has been a cultural mirror of a dissociative mind, where thoughts of recognition and feelings of intimacy have been sundered. It is still that mirror. Today we think that what is false and artificial in the world around us is substantive and meaningful. It’s not that loved ones and friends are mistaken for simulations, but that simulations are mistaken for them.

As I said in a column a few months back, we are substituting surface cues for familiarity. We are rushing into intimacy without all the messy, time-consuming process of understanding and shared experience that generally accompanies it.

Brains do love to take shortcuts. They’re not big on heavy lifting. Here’s another example of that…

Free Will is Replaced with An Algorithm


Yuval Harari

In a conversation with historian Yuval Harari, author of the best seller Sapiens, Derek Thompson from the Atlantic explored “The Post Human World.” One of the topics they discussed was the End of Individualism.

Humans (or, at least, most humans) have believed our decisions come from a mystical soul – a transcendental something that lives above our base biology and is in control of our will. Wrapped up in this is the concept of us as an individual and our importance in the world as free thinking agents.

In the past few decades, there has been a growing realization that our notion of “free will” is just the result of a cascade of biochemical processes. There is nothing magical here; there is just a chain of synaptic switches being thrown. And that being the case – if a computer can process things faster than our brains, should we simply delegate our thinking to a machine?

In many ways, this is already happening. We trust Google Maps or our GPS device more than we trust our ability to find our own way. We trust Google Search more than our own memory. We’re on the verge of trusting our wearable fitness trackers more than our own body’s feedback. And in all these cases, our trust in tech is justified. These things are right more often than we are. But when it comes to humans vs. machines, this is a slippery slope that we’re already well down. Harari speculates what might be at the bottom:

What really happens is that the self disintegrates. It’s not that you understand your true self better, but you come to realize there is no true self. There is just a complicated connection of biochemical connections, without a core. There is no authentic voice that lives inside you.

When I lie awake worrying about technology, these are the types of things I think about. The big question is: is humanity an outmoded model? The fact is that we evolved to be successful in a certain environment. But here’s the irony: we were so successful that we changed that environment into one where the tools we’ve created, not their creators, are the most successful adaptation. We may have made ourselves obsolete. And that’s why really smart humans, like Bill Gates, Elon Musk and Stephen Hawking, are so worried about artificial intelligence.

“It would take off on its own, and re-design itself at an ever increasing rate,” said Hawking in a recent interview with BBC. “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Worried about a machine taking your job? That may be the least of your worries.