Our Brain on Reviews

There’s an interesting new study, just published, about how our brain mathematically handles online reviews that I want to talk about today. But before I get to that, let’s talk about foraging a bit.

The story of how science discovered our foraging behaviors serves as a mini lesson in how humans tick. The economists of the 1940s and ’50s discovered the world of microeconomics, built on the foundation that humans were perfectly rational – we were Homo economicus. When making personal economic choices in a world of limited resources, we maximized utility. The economists of the time assumed this was a uniquely human property, bequeathed to us by virtue of the reasoning power of our superior brains.

In the ’60s, behavioral ecologists knocked our egos down a peg or two. It wasn’t just humans that could do this. Foxes could do it. Starlings could do it. Pretty much any species had the same ability to seemingly make optimal choices when faced with scarcity. It was how animals kept from starving to death. This was the birth of foraging theory. This wasn’t some Homo sapiens-exclusive behavior directed from the heights of rationality downwards. It was an evolved behavior built from the ground up. It’s just that humans had learned how to apply it to our abstract notion of economic utility.

Three decades later, two researchers at Xerox’s Palo Alto Research Center found another twist. Not only had our ability to forage evolved all the way through our extensive family tree, but we also seemed to borrow this strategy and apply it to entirely new situations. Peter Pirolli and Stuart Card found that when humans navigate content in online environments, the exact same patterns could be found. We foraged for information. Those same calculations determined whether we would stay in an information “patch” or move on to more promising territory.

This seemed to indicate three surprising discoveries about our behavior:

  • Much of what we think is rational behavior is actually driven by instincts that have evolved over millions of years
  • We borrow strategies from one context and apply them in another. We use the same basic instincts to find the FAQ section of a website that we used to find sustenance on the savannah.
  • Our brains seem to use Bayesian logic to continuously calculate and update a model of the world. We rely on this model to survive in our environment, whatever and wherever that environment might be.

So that brings us to the study I mentioned at the beginning of this column. If we take the above into consideration, it should come as no surprise that our brain uses similar evolutionary strategies to process things like online reviews. But the way it does it is fascinating.

The amazing thing about the brain is how it seamlessly integrates and subconsciously synthesizes information and activity from different regions. For example, in foraging, the brain integrates information from the regions responsible for wayfinding – knowing our place in the world – with signals from the dorsal anterior cingulate cortex – an area responsible for reward monitoring and executive control. Essentially, the brain is constantly updating an algorithm about whether the effort required to travel to a new “patch” will be balanced by the reward we’ll find when we get there. You don’t consciously marshal the cognitive resources required to do this. The brain does it automatically. What’s more – the brain uses many of the same resources and algorithm whether we’re considering going to McDonald’s for a large order of fries or deciding what online destination would be the best bet for researching our upcoming trip to Portugal.
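The stay-or-go arithmetic described above is usually formalized in foraging theory as the marginal value theorem: leave the current patch once its instantaneous rate of return falls below the average rate the wider environment offers. Here’s a minimal sketch of that rule in Python; the yield figures and rates are illustrative assumptions, not data from any study:

```python
def should_leave_patch(gain_rate_now, avg_env_rate, travel_cost_rate=0.0):
    """Marginal value theorem rule of thumb: abandon the current patch
    once its instantaneous return drops below what the environment
    yields on average, net of the cost of traveling to a new patch."""
    return gain_rate_now < (avg_env_rate - travel_cost_rate)

# A patch with diminishing returns: each minute of foraging yields less.
patch_yield = [10, 7, 5, 3, 2, 1]   # illustrative items found per minute
avg_env_rate = 4                    # illustrative average across all patches

minutes_stayed = 0
for rate in patch_yield:
    if should_leave_patch(rate, avg_env_rate):
        break
    minutes_stayed += 1
```

Whether the “patch” is a berry bush or a search results page, the same comparison decides when moving on beats staying put.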

In evaluating online reviews, we have a different challenge: how reliable are the reviews? The context may be new – our ancestors didn’t have TripAdvisor or Airbnb ratings for choosing the right cave to sleep in tonight – but the problem isn’t. What criteria should we use when we decide to integrate social information into our decision-making process? If Thorlak the bear hunter tells me there’s a great cave a half-day’s march to the south, should I trust him? Experience has taught us a few handy rules of thumb for evaluating sources of social information: the reliability of the source and the consensus of the crowd. Has Thorlak ever lied to us before? Do others in the tribe agree with him? These are hardwired social heuristics. We apply them instantly and instinctively to new sources of information that come from our social network. We’ve been doing it for thousands of years. So it should come as no surprise that we borrow these strategies when dealing with online reviews.

In a neuro-scanning study from University College London, researchers found that reliability plays a significant role in how our brains treat social information. Once again, a well-evolved capability of the brain is recruited to help us in a new situation. The dorsomedial prefrontal cortex is the area of the brain that keeps track of our social connections. This “social monitoring” ability of the brain worked in concert with the ventromedial prefrontal cortex, an area that processes value estimates.

The researchers found that this part of our brain works like a Bayesian computer when considering incoming information. First we establish a “prior” that represents a model of what we believe to be true. Then we subject this prior to possible statistical updating based on new information – in this case, online reviews. If our confidence is high in this “prior” and the incoming information is weak, we tend to stick with our initial belief. But if our confidence is low and the incoming information is strong – i.e. a lot of positive reviews – then the brain overrides the prior and establishes a new belief, based primarily on the new information.
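That prior-versus-evidence tug-of-war can be sketched as a Beta-Binomial update, where the prior’s strength is counted as pseudo-observations. This is a simplified illustration of the Bayesian logic the researchers describe, not their actual model; all the numbers are made up:

```python
def updated_belief(prior_mean, prior_strength, positive, negative):
    """Beta-Binomial update: treat the prior as `prior_strength`
    pseudo-reviews, then fold in the real ones. A strong prior barely
    moves; a weak prior gets overridden by the incoming reviews."""
    a = prior_mean * prior_strength + positive          # "successes"
    b = (1 - prior_mean) * prior_strength + negative    # "failures"
    return a / (a + b)  # posterior probability the place is good

# High-confidence prior, weak evidence: we stick with our belief.
confident = updated_belief(prior_mean=0.9, prior_strength=50, positive=1, negative=2)

# Low-confidence prior, a flood of positive reviews: the reviews win.
uncertain = updated_belief(prior_mean=0.5, prior_strength=2, positive=40, negative=5)
```

In the first case the posterior barely budges from the 0.9 prior; in the second, a 50/50 hunch is pulled up to roughly the same level by the reviews alone.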

While this seems like common sense, the mechanisms at play are interesting. The brain effortlessly pattern-matches new types of information and recruits the region most likely to have evolved to successfully interpret that information. In this case, the brain has decided that online reviews are most like information that comes from social sources. It combines the interpretation of this data with an algorithmic function that assigns value to the new information and calculates a new model – a new understanding of what we believe to be true. And it does all this “under the hood” – sitting just below the level of conscious thought.

Flow and the Machine

“In the future, either you’re going to be telling a machine what to do, or the machine is going to be telling you.”

Christopher Penn – VP of Marketing Technology, Shift Communications.

I often talk about the fallibility of the human brain – those irrational, cognitive biases that can cause us to miss the reality that’s right in front of our face. But there’s another side to the human brain – the intuitive, almost mystical machinations that happen when we’re on a cognitive roll, balancing gloriously on the edge between consciousness and subconsciousness. Malcolm Gladwell took a glancing shot at this in his mega-bestseller Blink. But I would recommend going right to the master of “flow” – Mihaly Csikszentmihalyi (pronounced, if you’re interested, me-hi Chick-sent-me-hi). The Hungarian psychologist coined the term “flow,” referring to a highly engaged mental state where we’re completely absorbed in the work at hand. Csikszentmihalyi calls it the “psychology of optimal experience.”

It turns out there’s a pretty complicated neuroscience behind flow. In a blog post, gamer Adam Sinicki describes a state where the brain finds an ideal balance between instinctive behavior and total focus on one task. The state is called transient hypofrontality. It can sometimes be brought on by physical exercise, which is why some people can think better while walking, or even jogging. The brain juggles the resources required, and this can force a stepping down of the prefrontal cortex, the part of the brain that causes us to question ourselves. This part of the brain is needed in unfamiliar circumstances, but in a situation where we’ve thoroughly rehearsed the actions required, it’s actually better if it takes a break. This allows other – more intuitive – parts of the brain to come to the fore. And that may be the secret of “flow.” It may also be the one thing that machines can’t replicate – yet.

The Rational Machine

If we were to compare the computer to a part of the brain, it would probably be the prefrontal cortex (PFC). When we talk about cognitive computing, what we’re really talking about is building a machine that can mimic – or exceed – the capabilities of the PFC. This is the home of our “executive function” – complex decision making, planning, rationalization and our own sense of self. It’s probably not a coincidence that the part of our brain we rely on to reason through complex challenges like designing artificial intelligence would build a machine in its own image. And in this instance, we’re damned close to surpassing ourselves. The PFC is an impressive chunk of neurobiology in its flexibility and power, but speedy it’s not. In fact, we’ve found that if we happen to make a mistake, the brain slows almost to a standstill. It shakes our confidence and kills any “flow” that might be happening in its tracks. This is what happens to athletes when they choke. With artificial intelligence, we are probably on the cusp of creating machines that can do most of what the PFC can do, only faster, more reliably and with the ability to process much more information.

But there’s a lot more to the brain than just the PFC. And it’s this ethereal intersection between rationality and intuition where the essence of being human might be hiding.

The Future of Flow

What if we could harness “flow” at will? What if we worked in partnership with a machine that could crunch data in real time and present us with the inputs required to continue our flow-fueled exploration without the fear of making a mistake? It’s not so much a machine telling us what to do – or the reverse – as it is a partnership between human intuition and machine-based rationalization. It’s analogous to driving a modern car, where intelligent safety and navigation features backstop our ability to drive.

Of course, it may just be a matter of time before machines best us in this area as well. Perhaps machines already have mastered flow because they don’t have to worry about the consequences of making a mistake. But it seems to me that if humans have a future, it’s not going to be in our ability to crunch data and rationalize. We’ll have to find something a little more magical to stake our claim with.

Why Our Brains are Blocking Ads

On Mediapost alone in the last three months, 172 articles have included the words “ad blockers” or “ad blocking.” That’s not really surprising, given that Mediapost covers the advertising biz and ad blocking is killing that particular biz, to the tune of an estimated $41 billion loss in 2016. eMarketer estimates that 70 million Americans, or one out of every four people online, use ad blockers.

Paul Verna, an eMarketer senior analyst, said, “Ad blocking is a detriment to the entire advertising ecosystem, affecting mostly publishers, but also marketers, agencies and others whose businesses depend on ad revenue.” The UK’s Culture Secretary, John Whittingdale, went even further, saying that ad blocking is a “modern-day protection racket.”

Here’s the problem with all this finger pointing. If you’re looking for a culprit to blame, don’t look at the technology or the companies deploying that technology. New technologies don’t cause us to change our behaviors – they enable behaviors that weren’t an option before. To get to the bottom of the growth of ad blocking, we have to go to the common denominator – the people those ads are aimed at. More specifically, we have to look at what’s happening in the brains of those people.

In the past, the majority of our interaction with advertising happened while our brain was idling, with no specific task in mind. I refer to this as bottom-up environmental scanning. Essentially, we’re looking for something to capture our attention: a TV show, a book, a magazine article, a newspaper column. We were open to being engaged by stimuli from our environment (in other words, being activated from the “bottom up”).

In this mode, the brain is in a very accepting state. We match signals from our environment with concepts and beliefs we hold in our mind. We’re relatively open to input and if the mental association is a positive or intriguing one – we’re willing to spend some time to engage.

We also have to consider the effect of priming in this state. Priming sets a subconscious framework for the brain that then affects any subsequent mental processing. The traditional prime that was in place when we were exposed to advertising was a fairly benign one: we were looking to be entertained or informed; the advertising content was often delivered wrapped in a content package we had an affinity for (our favorite show, a preferred newspaper, etc.); and advertising was delivered in discrete chunks that our brain had been trained to identify and process accordingly.

All this means that in traditional exposures to ads, our brain was probably in the most accepting state possible. We were looking for something interesting, we were primed to be in a positive frame of mind, and our brains could easily handle the contextual switches required to consider an ad and its message.

We also have to remember that we had a relatively static ad consumption environment that usually matched our expectations of how ads would be delivered. We expected commercial breaks in TV shows. We didn’t expect ads in the middle of a movie or book, two formats that required extended focusing of attention and didn’t lend themselves to mental contextual task switches. Each task switch brings with it a refocusing of attention and a brief burst of heightened awareness as our brain is forced to reassess its environment. These are fine in some environments – not in others.

Now, let’s look at the difference in cognitive contexts that accompany the delivery of most digital ads. First of all, when we’re online on our desktop or engaged with a mobile device, it’s generally in what I’ll call a “top-down foraging” mode. We’re looking for something specific and we have intent in mind. This means there’s already a task lodged in our working memory (hence “top down”) and our attentional spotlight is on and focused on that task. This creates a very different environment for ad consumption.

When we’re in foraging mode, we are suddenly driven by an instinct that is as old as the human race (actually, much older than that) – the one described by optimal foraging theory. In this mode, we are constantly filtering the stimuli of our environment to see what is relevant to our intent. It’s this filtering that causes inattentional blindness to non-relevant factors – whether they be advertising banners or people dressed up like gorillas. This filtering happens on a subconscious basis, and the brain uses a primal engine to drive it – the promise of reward or the frustration of failure. When it comes to foraging – for food or for information – frustration is a feature, not a bug.

Our brains have a two-loop learning process. It starts with a prediction – what psychologists and economists call “expected utility.” We mentally place bets on possible outcomes and go with the one that promises the best reward. If we’re right, the reward system of the brain gives us a shot of dopamine. Things are good. But if we bet wrong, different parts of the brain kick in: the right anterior insula, the adjacent right ventral prefrontal cortex and the anterior cingulate cortex. Those are the centers of the brain that regulate pain. Nature is not subtle about these things – especially when the survival of the species depends on it. If we find what we’re looking for, we get a natural high. If we don’t, it actually causes us pain – but not in a physical way. We know it as frustration. Its purpose is to encourage us not to make the same mistake twice.
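This bet-then-update loop is what reinforcement-learning researchers call a reward prediction error: the gap between expected and actual reward nudges the next prediction up or down. A minimal Rescorla-Wagner-style sketch, where the learning rate and outcomes are illustrative assumptions rather than anything measured:

```python
def update_expectation(expected, actual, learning_rate=0.3):
    """Move the expectation a fraction of the way toward the outcome.
    A positive error (the dopamine hit) raises it; a negative error
    (the sting of frustration) lowers it, steering us away from
    repeating the same mistake."""
    prediction_error = actual - expected
    return expected + learning_rate * prediction_error

# A site that keeps failing us: expectation decays with each miss,
# until we stop betting on that "patch" at all.
expectation = 1.0
for outcome in [0.0, 0.0, 0.0]:   # three fruitless searches in a row
    expectation = update_expectation(expectation, outcome)
```

Three straight misses drag the expectation from 1.0 down to about 0.34 – which is the computational version of deciding an ad-cluttered page isn’t worth a fourth visit.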

The reason we’re blocking ads is that in the context those ads are being delivered, irrelevant ads are – quite literally – painful. Even relevant ads have a very high threshold to get over. Ad blocking has little to do with technology or “protection rackets” or predatory business practices. It has to do with the hardwiring of our brains. So if the media or the ad industry want to blame something or someone, let’s start there.

What Would a “Time Well Spent” World Look Like?

I’m worried about us. And it’s not just because we seem bent on death by ultra-conservative parochialism and xenophobia. I’m worried because I believe we’re spending all our time doing the wrong things. We’re fiddling while Rome burns.

Technology is our new drug of choice and we’re hooked. We’re fascinated by the trivial. We’re dumping huge gobs of time down the drain playing virtual games, updating social statuses, clicking on clickbait and watching videos of epic wardrobe malfunctions. Humans should be better than this.

It’s okay to spend some time doing nothing. The brain needs some downtime. But something, somewhere has gone seriously wrong. We are now spending the majority of our lives doing useless things. TV used to be the biggest time suck, but in 2015, for the first time ever, the boob tube was overtaken by time spent with mobile apps. According to a survey conducted by Flurry, in the second quarter of 2015 we spent about 2.8 hours per day watching TV. And we spent 3.3 hours on mobile apps. That’s a grand total of 6.1 hours per day, or one-third of the time we spend awake. Yes, both things can happen at the same time, so there is undoubtedly overlap, but still – that’s a scary-assed statistic!

And it’s getting worse. In a previous Flurry poll conducted in 2013, we spent a total of 298 hours between TV and mobile apps versus 366 hours in 2015. That’s a 22.8% increase in just two years. We’re spending way more time doing nothing. And those totals don’t even include things like time spent in front of a gaming console. For kids, tack on an average of another 10 hours per week and you can double that for hard-core male gamers. Our addiction to gaming has even led to death in extreme cases.

Even in the wildest stretches of imagination, this can’t qualify as “time well spent.”

We’re treading on very dangerous and very thin ice here. And we no longer have history to learn from. It’s the first time we’ve ever encountered this. Technology is now only one small degree of separation from plugging directly into the pleasure center of our brains. And science has shown that a good shot of self-administered dopamine can supersede everything – water, food, sex. True, these experiments were conducted on rats – primarily because it’s considered unethical to go too far in replicating them with humans – but are you willing to risk the entire future of mankind on the bet that we’re really that much smarter than rats?

My fear is that technology is becoming a slightly more sophisticated lever we push to get that dopamine rush. And developers know exactly what they’re doing. They are making that lever as addictive as possible. They are pushing us towards the brink of death by technological lobotomization. They’re lulling us into a false sense of security by offering us the distraction of viral videos, infinitely scrolling social notification feeds and mobile game apps. It’s the intellectual equivalent of fast food – quite literally “brain candy.”

Here the hypocrisy of for-profit interest becomes evident. The corporate response typically rests on individual freedom of choice and the consumer’s ability to exercise will power. “We are just giving them what they’re asking for,” touts the stereotypical PR flack. But if you have an entire industry with reams of developers and researchers all aiming to hook you on their addictive product and your only defense is the same faulty neurological defense system that has already fallen victim to fast food, porn, big tobacco, the alcohol industry and the $350 billion illegal drug trade, where would you be placing your bets?

Technology should be our greatest achievement. It should make us better, not turn us into a bunch of lazy screen-addicted louts. And it certainly could be this way. What would it mean if technology helped us spend our time well? This is the hope behind the Time Well Spent Manifesto. Tristan Harris, a design ethicist and product philosopher at Google, is one of the co-directors. Here is an excerpt from the manifesto:

We believe in a new kind of design, that lets us connect without getting sucked in. And disconnect, without missing something important.

And we believe in a new kind of economy that’s built to help us spend time well, where products compete to help us live by our values.

I believe in the Manifesto. I believe we’re being willingly led down a scary and potentially ruinous path. Worst of all, I believe there is nothing we can – or will – do about it. Problems like this are seldom solved by foresight and good intentions. Things only change after we drive off the cliff.

The problem is that most of us never see it coming. And we never see it coming because we’re too busy watching a video of masturbating monkeys on YouTube.

We’re Informed. But Are We Thoughtful?

I’m a bit of a jerk when I write. I lock myself behind closed doors in my home office. In the summer, I retreat to the most remote reaches of the back yard. The reason? I don’t want to be interrupted with human contact. If I am interrupted, I stare daggers through the interrupter and answer in short, clipped sentences. The house has to be silent. If conditions are less than ideal, my irritation is palpable. My family knows this. The warning signal is “Dad is writing.” This can be roughly translated as “Dad is currently an asshole.” The more I try to be thoughtful, the bigger the ass I am.

I suspect Henry David Thoreau was the same. He went even further than my own backyard exile, camping out alone for two years in a cabin he built on Ralph Waldo Emerson’s land at Walden Pond. He said things like,

“I never found a companion that was so companionable as solitude.”

But Thoreau was also a pretty thoughtful guy, who advised us that,

“As a single footstep will not make a path on the earth, so a single thought will not make a pathway in the mind. To make a deep physical path, we walk again and again. To make a deep mental path, we must think over and over the kind of thoughts we wish to dominate our lives.”

But, I ask, how can we be thoughtful when we are constantly distracted by information? Our mental lives are full of single footsteps. Even if we intend to cover the same path more than once, there are a thousand beeps, alerts, messages, prompts, pokes and flags that are beckoning us to start down a new path, in a different direction. We probably cover more ground, but I suspect we barely disturb the fallen leaves on the paths we take.

I happen to do all my reading on a tablet. I do this for three reasons: first, I always have my entire library with me, and I usually have four books on the go at the same time (currently 1491, Reclaiming Conversation, Flash Boys and 50 Places to Bike Before You Die); second, I like to read before I go to sleep, and I don’t need to keep a light on that keeps my wife awake; and third, I like to highlight passages and make notes. But there’s a trade-off I’ve had to make. I don’t read as thoughtfully as I used to. I can’t “escape” with a book anymore. I am often tempted to check email, play a quick game of 2048 or search for something on Google. Maybe the fact that my attention is always divided amongst four books is part of the problem. Or maybe it’s that I’m more attention-deficit than I used to be.

There is a big difference between being informed and being thoughtful. And our connected world definitely puts the bias on the importance of information. Being connected is all about being informed. But being thoughtful requires us to remove distraction. It’s the deep paths that Thoreau was referring to. And it requires a very different mindset. The brain is a single-purpose engine. We can either be informed or be thoughtful. We can’t be both at the same time.

At the University of California, San Francisco, Mattias Karlsson and Loren Frank found that rats need two very different types of cognitive activity when mastering a maze. First, when they explore a maze, certain parts of their brain are active as they’re being “informed” about their new environment. But they don’t master the maze unless they’re allowed downtime to consolidate the information into new persistent memories. Different parts of the brain are engaged, including the hippocampus. They need time to be thoughtful and create a “deep path.”

In this instance, we’re not all that different from rats. In his research, MIT’s Alex “Sandy” Pentland found that effective teams tend to cycle through two very different phases. First, they explore, gathering new information. Then, just like the thoughtful rats, they engage as a group, taking that information, digesting it and synthesizing it for future execution. Pentland found that while both are necessary, they don’t exist at the same time:

“Exploration and engagement, while both good, don’t easily coexist, because they require that the energy of team members be put to two different uses. Energy is a finite resource.”

Ironically, research is increasingly showing that our previous definitions of cognitive activity may have been off the mark. We always assumed that “mind-wandering” or “daydreaming” was a non-productive activity. But we’re finding out that it’s an essential part of being thoughtful. We’re actually not “wandering.” It’s just the brain’s way of synthesizing and consolidating information. We’re wearing deeper paths in the byways of our mind. But a constant flow of new information, delivered through digital channels, keeps us from synthesizing the information we already have. Our brain is too busy being informed to be able to make the switch to thoughtfulness. We don’t have enough cognitive energy to do both.

What price might we pay for being “informed” at the expense of being “thoughtful?” It appears that it might be significant. Technology distraction in the classroom could lower grades by close to 20 percent. And you don’t even have to be the one using the device. Just having an open screen in the vicinity might distract you enough to drop your report card from a “B” to a “C.”

Having read this, you now have two choices. You could click off to the next bit of information. Or, you could stare into space for a few minutes and be lost in your thoughts.

Choose wisely.

Our Brain on Books

Here’s another neuroscanning study out of Emory University showing the power of a story.

Lead researcher Gregory Berns and his team wanted to “understand how stories get into your brain, and what they do to it.” Their findings seem to indicate that stories – in this case a historical fiction novel about Pompeii – caused a number of changes in the participants’ brains, at least in the short term. Over time, some of these changes decayed, but more research is required to determine how long-lasting the changes are.

One would expect reading to alter related parts of the brain, and this was true in the Emory study. The left temporal cortex, a section of the brain that handles language reception and interpretation, showed signs of heightened connectivity for a period of time after reading the novel. This is almost like the residual effect of exercise on a muscle, which responds favorably to usage.

What was interesting, however, was that the team also saw increased connectivity in the areas of the brain that control representations of sensation for the body. This relates to Antonio Damasio’s “embodied semantics” theory, in which reading metaphors – especially those relating specifically to tactile images – activates the same parts of the brain that control the corresponding physical activity. The Emory study (and Damasio’s work) seems to show that if you read a novel that depicts physical activity, such as running through the streets of Pompeii as Vesuvius erupts, your brain is firing the same neurons it would if you were actually doing it!

There are a number of interesting aspects to consider here, but what struck me is the multi-prong impact a story has on us. Let’s run through them:

Narratives have been shown to be tremendously influential frameworks for learning and updating our sense of the world, including our own belief networks. Books have been a tremendously effective agent for meme transference and propagation. The structure of a story allows us to grasp concepts quickly, but it also reinforces those concepts because it engages our brain in a way that a simple recital of facts could not. We relate to protagonists and see the world through their eyes. All our socially tuned, empathetic abilities kick into action when we read a story, helping to embed new information more fully. Reading a story helps shape our world view.

Reading exercises the language centers of our brain, heightening the neural connectivity and improving the effectiveness. Neurologists call this “shadow activity” – a concept similar to muscle memory.

Reading about physical activity fires the same neurons that we would use to do the actual activity. So, if you read an action thriller, even though you’re lying flat on a sofa, your brain thinks you’re the one racing a motorcycle through the streets of Istanbul and battling your arch nemesis on the rooftops of Rome. While it might not do much to improve muscle tone, it does begin to create neural pathways. It’s the same concept of visualization used by Olympic athletes.

For Future Consideration

As we learn more about the underlying neural activity of story reading, I wonder how we can use this to benefit ourselves. The biggest question I have is: if a story in written form can impact us at all the aforementioned levels, what would more sense-engaged media like television or video games do? If reading about a physical activity tricks the brain into firing the corresponding sensory-controlling neurons, what would happen if we were simulating that activity on a motion-controlled gaming system like Microsoft’s Xbox? My guess would be that the sensory motor connections would be much more active (because we’re physically active). Unfortunately, research in the area of embodied semantics is still at an early stage, so many of these questions have yet to be answered.

However, if our stories are conveyed through a more engaging sensory experience, with full visuals and sound, do we lose some opportunity for abstract analysis? The parts of our brain we use to read depend on relatively slow processing loops. I believe much of the power of reading lies in the requirements it places on our imagination to fill in the sensory blanks. When we read about a scene in Pompeii, we have to create the visuals, the soundtrack and the tactile responses. Does all this required rendering more fully engage our sense-making capabilities, giving us more time to interpret and absorb?

The Insula and The Accumbens: Driving Online Behavior

First published December 16, 2010 in Mediapost’s Search Insider

One of the more controversial applications of new neurological scanning technologies has been marketers’ quest for the mythical “buy button” in our brains. So far, no magical nook or cranny in our cranium has given marketers the ability to foist whatever crap they want on us, but a couple of parts of the brain have emerged as leading contenders for influencing buying behavior.

The Nucleus Accumbens: The Gas Pedal

The nucleus accumbens has been identified as the reward center of the brain. Although this is an oversimplification, it definitely plays a central role in our reward circuit. Neuroscanning studies show that the nucleus accumbens “lights up” when people think about things that have a reward attached: investments with big returns, buying a sports car or participating in favorite activities. Dopamine is released and the brain benefits from a natural high. Emotions are the drivers of human behavior — they move us to action (the word “emotion” comes from the Latin movere, meaning “to move”). The reward circuit of the brain uses emotions to drive us towards rewards, an evolutionary pathway that improves our odds of passing along our genes.

In consumer behaviors, there are certain purchase decisions that fire the nucleus accumbens. Anything that promises some sort of emotional reward can trigger our reward circuits. We start envisioning what possession would be like: the taste of a meal, the thrill of a new car, the joy of a new home, the indulgence of a new pair of shoes. There is strong positive emotional engagement in these types of purchases.

The Anterior Insula: The Brake

But if our brain was only driven by reward, we would never say no. There needs to be some governing factor on the nucleus accumbens. Again, neuroscanning has identified a small section of the brain called the anterior insula as one of the structures serving this role.

If the nucleus accumbens could be called the reward center, the anterior insula could be called the angst center of our brains. The insula is a key part of our emotional braking system. Through the release of noradrenaline and other neurochemicals, it creates the gnawing anxiety that causes us to slow down and tread carefully. In extreme cases, it can even evoke disgust. If the nucleus accumbens drives impulse purchasing, it’s the anterior insula that triggers buyer’s remorse.

The Balance Between the Two 

Again, at the risk of oversimplification, these two counteracting forces drive much of our consumer behavior. You can look at any purchase as the net result of the balance between them: a balancing of risk and reward or, in the academic jargon, prevention and promotion. High-reward, low-risk purchases will show a significantly different consumer behavior pattern than low-reward, high-risk purchases. Think about the difference between buying life insurance and a new pair of shoes. And because they have significantly different behavior profiles, the online interactions that result from these purchases will look quite different as well. In the next column, I’ll look at the four different purchase profiles (High Risk/High Reward, High Risk/Low Reward, Low Risk/High Reward and Low Risk/Low Reward) and at how the online maps might look in each scenario.
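As a preview, the four profiles amount to a simple 2×2 lookup on perceived risk and reward. The scales, threshold and example scores below are illustrative placeholders of my own, not figures from any research:

```python
def purchase_profile(risk, reward, threshold=0.5):
    """Map a purchase's perceived risk and reward (each on a 0-1
    scale) onto one of the four quadrants discussed above."""
    risk_label = "High Risk" if risk >= threshold else "Low Risk"
    reward_label = "High Reward" if reward >= threshold else "Low Reward"
    return f"{risk_label}/{reward_label}"

# The column's two examples, scored with made-up numbers:
shoes = purchase_profile(risk=0.1, reward=0.9)       # a new pair of shoes
insurance = purchase_profile(risk=0.8, reward=0.3)   # life insurance
```

The shoes land in the quadrant where the nucleus accumbens does most of the driving; the insurance lands where the anterior insula takes over.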