The Insula and The Accumbens: Driving Online Behavior

First published December 16, 2010 in Mediapost’s Search Insider

One of the more controversial applications of new neurological scanning technologies has been a quest by marketers for the mythical “buy button” in our brains. So far, no magical nook or cranny in our cranium has given marketers the ability to foist whatever crap they want on us, but a couple of parts of the brain have emerged as leading contenders for influencing buying behavior.

The Nucleus Accumbens: The Gas Pedal

The nucleus accumbens has been identified as the reward center of the brain. Although this is an oversimplification, it definitely plays a central role in our reward circuit. Neuroscanning studies show that the nucleus accumbens “lights up” when people think about things that have a reward attached: investments with big returns, buying a sports car or participating in favorite activities. Dopamine is released and the brain benefits from a natural high. Emotions are the drivers of human behavior — they move us to action (the word “emotion” itself comes from the Latin movere, meaning “to move”). The reward circuit of the brain uses emotions to drive us towards rewards, an evolutionary pathway that improves our odds of passing along our genes.

In consumer behaviors, there are certain purchase decisions that fire the nucleus accumbens. Anything that promises some sort of emotional reward can trigger our reward circuits. We start envisioning what possession would be like: the taste of a meal, the thrill of a new car, the joy of a new home, the indulgence of a new pair of shoes. There is strong positive emotional engagement in these types of purchases.

The Anterior Insula: The Brake

But if our brain were driven only by reward, we would never say no. There needs to be some governing factor on the nucleus accumbens. Again, neuroscanning has identified a small section of the brain called the anterior insula as one of the structures serving this role.

If the nucleus accumbens could be called the reward center, the anterior insula could be called the Angst Center of our brains. The insula is a key part of our emotional braking system.  Through the release of noradrenaline and other neurochemicals, it creates the gnawing anxiety that causes us to slow down and tread carefully. In extreme cases, it can even evoke disgust. If the nucleus accumbens drives impulse purchasing, it’s the anterior insula that triggers buyer’s remorse.

The Balance Between the Two 

Again, at the risk of oversimplification, these two counteracting forces drive much of our consumer behavior. You can look at any purchase as the net result of the balance between them: a balancing of risk and reward, or in the academic jargon, prevention and promotion. High-reward, low-risk purchases will show a significantly different consumer behavior pattern than low-reward, high-risk purchases. Think about the difference between buying life insurance and a new pair of shoes. And because they have significantly different behavior profiles, the online interactions that result from these purchases will look quite different as well. In the next column, I’ll walk through the four purchase profiles (High Risk/High Reward, High Risk/Low Reward, Low Risk/High Reward and Low Risk/Low Reward) and consider what the online maps might look like in each scenario.
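For readers who think in code, here’s a toy sketch of that balance (my own illustration, with made-up signals and a made-up threshold, not anything measured in a scanner): treat a purchase as the net of a promotion signal from the accumbens and a prevention signal from the insula, and the four profiles fall out of the combination.

```python
# A toy model of the accumbens/insula balance. The reward (promotion) and
# risk (prevention) signals and the 0.5 threshold are illustrative
# assumptions, not measured values.

def purchase_profile(reward: float, risk: float, threshold: float = 0.5) -> str:
    """Map normalized reward/risk signals in [0, 1] onto one of four profiles."""
    risk_level = "High Risk" if risk >= threshold else "Low Risk"
    reward_level = "High Reward" if reward >= threshold else "Low Reward"
    return f"{risk_level}/{reward_level}"

def net_motivation(reward: float, risk: float) -> float:
    """Positive values push toward purchase; negative values pump the brake."""
    return round(reward - risk, 2)

print(purchase_profile(0.9, 0.2), net_motivation(0.9, 0.2))  # a new pair of shoes
print(purchase_profile(0.1, 0.8), net_motivation(0.1, 0.8))  # life insurance
```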

Is the Internet Making Us Stupid – or a New Kind of Smart?

First published September 9, 2010 in Mediapost’s Search Insider

As I mentioned a few weeks back, I’m reading Nicholas Carr’s book “The Shallows.” His basic premise is that our current environment, with its deluge of available information typically broken into bite-sized pieces served up online, is “dumbing down” our brains. We no longer read, we scan. We forgo the intellectual heavy lifting of prolonged reading for the more immediate gratification of information foraging. We’re becoming a society of attention-deficit dolts.

It’s a grim picture, and Carr does a good job of backing up his premise. I’ve written about many of these issues in the past. And I don’t dispute the trends that Carr chronicles (at length). But is Carr correct in saying that online is dulling our intellectual capabilities, or is it just creating a different type of intelligence?

While I’m at it, I suspect this new type of intelligence is much more aligned with our native abilities than the “book smarts” that have ruled the day for the last five centuries. I’m an avid reader (ironically, I’ve been reading Carr’s book on an iPad) and I’m the first to say that I would be devastated if reading goes the way of the dodo.  But are we projecting our view of what’s “right” on a future where the environment (and rules) have changed?

A Timeline of Intellect

If you expand your perspective of human intellectualism to the entire history of man, you find that the past 500 years have been an anomaly. Prior to the invention of the printing press (and the subsequent blossoming of intellectualism) our brains were there for one purpose: to keep us alive. The brain accomplished this critical objective in three ways:

Responding to Danger in Our Environments

Reading is an artificial human activity. We have to train our brains to do it. But scanning our surroundings to notice things that don’t fit is as natural to us as sleeping and eating. We have sophisticated, multi-layered mechanisms to help us recognize anomalies in our environment (which often signal potential danger).  I believe we have “exapted” these same mechanisms and use them every day to digest information presented online.

This idea goes back to something I have said repeatedly: Technology doesn’t change behavior, it enables behavior to change. Change comes from us pursuing the most efficient route for our brains. When technology opens up an option that wasn’t previously available, and the brain finds this a more natural path to take, it will take it. It may seem that the brain is changing, but in actuality it’s returning to its evolutionary “baseline.”

If the brain has the option of scanning, using highly efficient inherent mechanisms created through evolution over thousands of generations, or reading, using jury-rigged, inefficient neural pathways that we’ve been forced to build from scratch over our lifetimes, the brain will take the easiest path. The fact is, we could never scan a book. But we can scan a Web site.

Making The Right Choices

Another highly honed ability of the brain is to make advantageous choices. We can consider alternatives using a combination of gut instincts (more than you know) and rational deliberation (less than you think) and, more often than not, make the right choice. This ability works in lockstep with the previous one, scanning our environment.

Reading a book offers no choices. It’s a linear experience that moves in one direction, dictated by the writer, not the reader. But browsing a Web site is an experience littered with choices. Every link is a new choice, made by the visitor. This is why we (at my company) have continually found that a linear presentation of information (for example, a Flash movie) is a far less successful user experience than a Web site where the user can choose from logical and intuitive navigation options.

Carr is right when he says this is distracting, taking away from the focused intellectual effort that typifies reading. But I counter with the view that scanning and making choices is more naturally human than focused reading.

Establishing Beneficial Social Networks

Finally, humans are herd animals. We naturally create intricate social networks and hierarchies, because it’s the best way of ensuring that our DNA gets passed along from generation to generation. When it comes to gene propagation, there is definitely safety in numbers.

Reading is a solitary pursuit. Frankly, that’s one of the things avid readers treasure most about a good book, the “me” time that it brings with it. That’s all well and good, but bonding and communication are key drivers of human behavior. Unlike a book, online experiences offer you the option of solitary entertainment or engaged social connection. Again, it’s a closer fit with our human nature.

From a personal perspective, I tend to agree with most of Carr’s arguments. They are a closer fit with what I value in terms of intellectual “worth.” But I wonder if we fall into a trap of narrowed perspective when we pass judgment on what’s right and what’s not based on what we’ve known, rather than on what’s likely to be.

At the end of the day, humans will always be human.

Wired for Information: A Brain Built to Google

First published August 26, 2010 in Mediapost’s Search Insider

In my last Search Insider, I took you on a neurological tour that gave us a glimpse into how our brains are built to read. Today, let’s dig deeper into how our brains guide us through an online hunt for information.

Brain Scans and Searching

First, a recap. In Nicholas Carr’s book, “The Shallows: What the Internet is Doing to Our Brains,” I focused on one passage — and one concept — in particular. It’s likely that our brains have built a short cut for reading. The normal translation from a printed word to a concept usually requires multiple mental steps. But because we read so much, and run across some words frequently, it’s probable that our brains have built short cuts to help us recognize those words simply by their shape in mere milliseconds, instantly connecting us with the relevant concept. So, let’s hold that thought for a moment.

The Semel Institute at UCLA recently did a neuroscanning study that monitored what parts of the brain lit up during the act of using a search engine online. What the institute found was that when we become comfortable with the act of searching, our brains become more active. Specifically, the prefrontal cortex, the language centers and the visual cortex all “light up” during the act of searching, as well as some sub-cortical areas.

It’s the latter of these that indicates the brain may be using “pre-wired” short cuts to directly connect words and concepts. It’s these sub-cortical areas, including the basal ganglia and the hippocampus, where we keep our neural “short cuts.”  They form the auto-pilot of the brain.

Our Brain’s “Waldo” Search Party

Now, let’s look at another study that may give us another piece of the puzzle in helping us understand how our brain orchestrates the act of searching online.

Dr. Robert Desimone at the McGovern Institute for Brain Research at MIT found that when we look for something specific, we “picture” it in our mind’s eye. This internal visualization in effect “wakes up” our brain and creates a synchronized alarm circuit: a group of neurons that hold the image so that we can instantly recognize it, even in complex surroundings. Think of a “Where’s Waldo” puzzle. Our brain creates a mental image of Waldo, activating a “search party” of Waldo neurons that synchronize their activities, sharpening our ability to pick out Waldo in the picture. The synchronization of neural activity allows these neurons to zero in on one aspect of the picture, in effect making it stand out from the surrounding detail.

Pirolli’s Information Foraging

One last academic reference, and then we’ll bring the pieces together. Peter Pirolli, from Xerox’s PARC, believes we “forage” for information, using the same inherent mechanisms we would use to search for food. So, we hunt for the “scent” of our quarry, but in this case, rather than the smell of food, it’s more likely that we lodge the concept of our objective in our heads. And depending on what that concept is, our brains recruit the relevant neurons to help us pick out the right “scent” quickly from its surroundings.  If our quarry is something visual, like a person or thing, we probably picture it. But if our brain believes we’ll be hunting in a text-heavy environment, we would probably picture the word instead. This is the way the brain primes us for information foraging.
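Pirolli formalized this in his information foraging models. As a very loose sketch (my own simplification, not PARC’s actual math), you can think of information scent as the overlap between the concept lodged in our heads and the cues a link presents:

```python
# A crude information-scent score: how strongly does a link's text match
# the goal concept we're holding in our heads? Pirolli's real models are
# far more sophisticated; this is only an illustrative sketch.

def information_scent(goal_terms: set[str], link_text: str) -> float:
    """Fraction of the goal concept's terms present in a link's cues."""
    cues = set(link_text.lower().split())
    if not goal_terms:
        return 0.0
    return len(goal_terms & cues) / len(goal_terms)

goal = {"cheap", "flights", "paris"}
links = [
    "Cheap flights to Paris - book now",
    "Paris travel guide and history",
    "Luxury hotels in Rome",
]

# Forage toward the strongest scent first, much as we scan a results page.
for text in sorted(links, key=lambda t: -information_scent(goal, t)):
    print(f"{information_scent(goal, text):.2f}  {text}")
```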

The Googling Brain

This starts to paint a fascinating and complex picture of what our brain might be doing as we use a search engine. First, our brain determines our quarry and starts sending “top down” directives so we can very quickly identify it.  Our visual cortex helps us by literally painting a picture of what we might be looking for. If it’s a word, our brain becomes sensitized to the shape of the word, helping us recognize it instantly without the heavy lifting of lingual interpretation.

Thus primed, we start to scan the search results. This is not reading; this is scanning our environment in mere milliseconds, looking for the scent that may lead the way to our prey. If you’ve ever watched a real-time eye-tracking session with a search engine, this is exactly the behavior you’d see.

When we bring all the pieces together, we realize how instantaneous, primal and intuitive this online foraging is. The slow and rational brain only enters the picture as an afterthought.

Googling is done by instinct. Our eyes and brain are connected by a short cut in which decisions are made subconsciously and within milliseconds. This is the forum in which online success is made or missed.

How Our Brains are Wired to Read

First published August 19, 2010 in Mediapost’s Search Insider

How do we read? How do we take the arbitrary, human-made code that is the written word and translate it into thoughts and images that mean something to our brain, an organ that had its basic wiring designed thousands of generations before the appearance of the first written word? What is going on in your skull right now as your eyes scan the black squiggly lines that make up this column?

The Reading Short Cut

I’m currently reading Nicholas Carr’s “The Shallows: What the Internet is Doing to Our Brains,” a follow-up to Carr’s article in The Atlantic, “Is Google Making Us Stupid?” The concept Carr explores is fascinating to me: the impact of constant online usage on how the neural circuits of our brain are wired.

But there was one quote in particular, from Maryanne Wolf’s book, “Proust and the Squid: The Story and Science of the Reading Brain,” that literally leapt off the page for me: “The accomplished reader, Maryanne Wolf explains, develops specialized brain regions geared to the rapid deciphering of text. The areas are wired ‘to represent the important visual, phonological and semantic information and to retrieve this information at lightning speed.’ The visual cortex, for example, develops ‘a veritable collage’ of neuron assemblies dedicated to recognizing, in a matter of milliseconds, ‘visual images of letters, letter patterns and words.’”

For everyone reading this column today, that is one of the most relevant passages you may ever scan your eyes across. It’s vitally important to digital marketers and designers of online experiences. Humans who read a lot develop the ability to recognize word patterns instantly, without going through the tedious neural heavy lifting of translating the pattern through the language centers of the brain. A quick neurological tour is in order here.

How the Brain Reads

The brain has a habit of developing multiple paths to the same end goal. Many functions that our brain controls tend to have dual routes: a quick and dirty one that rips through the brain at lightning speed and a slower, more rational one. It’s the neural reality behind Malcolm Gladwell’s “Blink.” This dual speed processing is a tremendously efficient way of coping with our environment. The same mechanism, according to Wolf, has been adapted to our interpretation of the written word.

Humans have an evolved capacity for language. Noam Chomsky, Steven Pinker and others have shown convincingly that we come out of the box with inherent capabilities to communicate with each other. But those abilities, housed in the language centers of the brain (Wernicke’s and Broca’s areas, if you’re interested), are limited to oral language. Written language hasn’t been around nearly long enough for evolution’s relatively slow timeline to have had much of an impact. That’s why we learn to speak naturally just by hanging around other humans, but only those with a formalized and structured education learn to read and write. We have to take the native machinery of the brain and force it to adapt to the required task by creating new neural paths.

Instantly Recognizable…

So, when we read a page of text, there’s a fairly complex and laborious process going on in our noggins. Our visual cortex scans the abstract code that is written language, feeds it to the language centers for translation, and then sends it to our prefrontal cortex and our long-term memory to be rendered into concepts that mean something to us. The word “horse” doesn’t really mean the large, hairy, four-legged mammal that we’re familiar with until it goes through this mental processing.

But, like anything that humans do often, we tend to create short cuts through repetition. It’s important to note that this isn’t evolution at work, it’s neuroplasticity. The ability to read and write is built in each human from scratch. The brain naturally tries to achieve maximum efficiency by taking things we do repeatedly and building little synaptic short cuts. Humans who read a lot become wired to recognize certain words just by their shape and appearance, without needing to run the full processing cycle. Your name is a good example. How often have you been reading a newspaper or book and run across your last name? Does it seem to “leap off the page?” That was your brain triggering one of its little short cuts.
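If a software analogy helps (and it is only an analogy, mine rather than Wolf’s), the brain is effectively memoizing: the first encounter with a word runs the full, expensive decoding pipeline, while frequent words are served from a cache keyed on their visual shape.

```python
# A loose computational analogy for the reading short cut: memoization.
# The full pipeline (visual decoding -> language centers -> concept) runs
# once; frequently seen words are thereafter recognized from a cached
# shape -> concept mapping. Purely illustrative.

from functools import lru_cache

def full_linguistic_decode(word_shape: str) -> str:
    """Stand-in for the slow, multi-step translation of squiggles to meaning."""
    print(f"(slow decode of '{word_shape}')")
    return f"concept:{word_shape}"

@lru_cache(maxsize=None)
def recognize(word_shape: str) -> str:
    # First sighting pays the full decoding cost; repeats hit the cache.
    return full_linguistic_decode(word_shape)

recognize("horse")  # slow path: prints the decode message
recognize("horse")  # short cut: served from cache, nothing printed
```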

So, what does this mean for online interactions, particularly with a search engine? In next week’s column, I’ll revisit a fascinating brain scanning study that was done by UCLA and take a peek at what might be happening under the hood when we launch a Web search.


Human Irrationality Online

Last week, I talked about the work of Daniel Kahneman, Amos Tversky, Herbert Simon and George Akerlof, key figures in helping define the foundations of consumer behavior, both rational and irrational, that dictate the realities of the marketplace. Today, I want to talk about how these emotional and cognitive biases and limitations play out online, but first, a quick recap is in order:

Prospect Theory – The role of psychological framing and emotional biases in determining human behavior in risky economic decisions. For example, how we’re more sensitive about loss than we are about gain.

Bounded Rationality – How we cannot endlessly weigh every alternative in search of optimal behavior, but instead rely on “gut instincts” to help sort through the available options.

Information Asymmetry – Why the marketplace has traditionally been unbalanced, with the seller almost always having more information about the product than the buyer.

This is Nothing New…

As I said last week, these are all hardwired human conditions that have been present for hundreds of generations, even though it’s only been recently that we’ve learned enough about human behavior to recognize them. And it’s these inherent tendencies that have changed the marketplace since the introduction of the Internet. The huge volume of information available online allowed us to shift the balance of the marketplace to be more equitably distributed between sellers and buyers. Let’s explore how each of these occurrences drove the behavioral change, which was enabled, not caused, by the introduction of the Net.

We understand that risk is present in almost all consumer transactions. This fact brings Prospect Theory into the picture. We will unconsciously employ our emotional biases to deal with the risk inherent in each purchase: the greater the risk, the greater the degree of bias.

The Risk/Reward Balance

Consumer motivation relies on us mentally balancing risk and reward. The balance between these opposing forces will dictate how we deal with risk mitigation. If there is a high reward — for example, buying our mid-life crisis sports car or taking our dream vacation — our emotional biases will be tilted towards maximizing this reward. Consumer research is really more about wish fulfillment than it is about risk mitigation.

But if there is little or no reward, our research takes a much different path. Think about how we approach the purchase of life insurance, for example. There is no inherent reward here, just risk — or rather, mitigation of risk. And insurance salespeople mercilessly exploit the emotional bias of loss by getting you to picture your family’s future without you in it.

Informed Does Not Always Equal Rational

This risk/reward balance will dictate what our online research will look like. And this is where Akerlof’s Information Asymmetry comes in. One of the ways we mitigate risk is by educating ourselves about our purchase. We look up consumer ratings, read reviews and pore over feature sheets.

Today, consumers are much more informed than they were a generation ago. But all that information does not necessarily mean we will make a more logical decision. We humans tend to look at information to support our emotional biases, rather than refute them. So, the balancing of information asymmetry is still done through the lens of our emotional and psychological frames, as shown by Kahneman and Tversky. We have access to information online, but each of us may walk away with different messages, depending on the lens we’re seeing that information through.

All This Information, All These Choices…

And that, finally, brings us to Simon’s concept of Bounded Rationality. We have more information than ever to sift through. As I said a few columns back, we can employ different strategies to make decisions. Some of us embrace bounded rationality, or satisficing, which makes us more decisive. It’s important to note here that trusting our gut to make these satisficing calls means we may be trusting emotion rather than logic. Others try to optimize each decision, weighing all the variables. While this is perhaps a more rational approach, it can tax our cognitive limits, leading to frustration and often abandonment of the optimal path, resulting in a decision that ends up being a “gut” call anyway.

Our need to access information to mitigate risk has led to the changes in consumer behavior. The Internet enabled this. It wasn’t technology that changed our behavior; it was just that technology opened the door to allow us to pursue our hardwired tendencies.

The Four Horsemen of the Consumer Behavior Apocalypse

First published March 25, 2010 in Mediapost’s Search Insider

Right out of the gate, let’s assume that we all agree consumer behavior is in the throes of its biggest shift in history. And the cause is generally attributed to the Internet.

While I don’t disagree with this assessment, I believe there may be some misattribution when it comes to cause and effect. Did the Internet cause our consumer behavior to change? Or did it enable it to change? The distinction may seem like mere semantics, but there’s a fundamental difference here.

“Cause” implies that an outside force, namely the Internet, pushed us in a new direction that was different from the one we would have pursued had this new force not come along. “Enable” is a different beast, the opening of a previously locked door that allows us to pursue a new path of our own volition. I believe the latter to be true. I believe we weren’t pushed anywhere. We went there of our own free will.

Free Will? Or Hardwired Human Behavior?

But, even in my last statement, language again gets us in a sticky place. “Will” assumes it was a conscious and willful decision. I’m not sure this is the case. I suspect there were subconscious, hardwired behaviors that had a natural affinity for the new opportunities presented by the online marketplace.

For most of our recorded history, we have assumed that rational consideration and conscious will form the basis of human thought. If we did seem programmed to respond automatically to certain cues, this was the result of being conditioned by our environment, the classic Skinner black-box approach. But when we were on top of our game, we were carefully considering pros and cons, making consciously deliberated decisions. These were the forces that drove our society and our behaviors. This theory formed the basis of economics (Adam Smith’s invisible hand), Cartesian logic, and most market research.

But in the last few decades, this view of rationality riding triumphant over human foibles has been brought into question. In particular, there were three concepts put forward by four academics that caused us to question what drove our behaviors. These folks uncovered deeper, subconscious routines and influences that lay buried beneath the strata of rational thought. And it’s these subconscious behaviors that I believe found the new online opportunities so enticing. Let’s spend a little time today looking at these four thinkers and the new paradigms they asked us to consider.

Amos Tversky and Daniel Kahneman – Prospect Theory

Adam Smith’s invisible hand, driven by the wisdom of the market, has been presumed to be the ultimate economic governing factor. The assumption was that each of us, individually making rational economic decisions, would ultimately decide winners and losers, and capitalism would stay alive and well.

But Tversky and Kahneman, in their paper on Prospect Theory, showed that the invisible hand might not always be guided by a decisive and logical mind. We all have significant hardwired cognitive biases that often cause us to make illogical economic choices. For example, if I offered you $1,000, no questions asked, or a chance to win $2,500 on a coin toss, you’d probably take the sure bet, even though mathematically, the expected payoff is higher with the coin toss.
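The arithmetic is worth spelling out, because prospect theory also explains the “irrational” choice. Here’s a quick sketch using the standard value and probability-weighting functions, with the parameter estimates from Tversky and Kahneman’s later (1992) work (alpha of roughly 0.88, gamma of roughly 0.61); the code is my illustration, not theirs:

```python
# Expected value says take the coin toss; prospect theory explains why
# most of us don't. Parameters are Tversky & Kahneman's 1992 estimates.

def value(x: float, alpha: float = 0.88) -> float:
    """Subjective value of a gain: concave, so $2,500 doesn't feel 2.5x $1,000."""
    return x ** alpha

def weight(p: float, gamma: float = 0.61) -> float:
    """Decision weight: moderate probabilities like 50% are underweighted."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# Expected value: the gamble wins on paper.
print(0.5 * 2500)                   # 1250.0 > 1000

# Prospect theory: the sure thing wins in the head.
sure_thing = value(1000)            # ~436.5
gamble = weight(0.5) * value(2500)  # ~0.42 * ~977.5 = ~411
print(sure_thing > gamble)          # True -> take the $1,000
```

On expected value alone, the gamble looks better; it takes both the concave value curve and the underweighting of a 50/50 chance to reproduce the pull of the sure thing.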

Prospect Theory shot some holes in the previous theory of Expected Utility, a model in which we carefully weighed the pros and cons of a potential purchase based on expected return. Emotional framing and risk avoidance played a much bigger role than we suspected, handicapping our logic and often guiding us down non-rational paths. Tversky and Kahneman effectively founded the new discipline of Behavioral Economics and changed our thinking in the process.

Herbert Simon – Bounded Rationality

Simon’s concept of Bounded Rationality preceded Kahneman and Tversky’s theory, but it dovetails with it very nicely. Even if we are rationally engaged in a decision, Simon argued, we can’t possibly optimize it, especially in complex scenarios. There are simply too many factors to consider. So, we take “gut feeling” short cuts, which Simon called “satisficing,” a combination of satisfy and suffice. We short-list our consideration set by using beliefs and instincts.

Making the satisficing short list is the goal of any brand campaign. At some point, logical weighing of pros and cons has to give way to calls based primarily on instinct. And, as Kahneman and Tversky showed, those instinctive calls may well be based on irrational emotional biases.

George Akerlof – Information Asymmetry

The last piece, and the one that really drove the online consumer revolution, is George Akerlof’s Information Asymmetry theory. Traditionally, there has been an imbalance of information between buyers and sellers, to the seller’s advantage. The seller always knew more about what they were selling than the buyer did. This made purchasing inherently risky.

With an absence of information, consumers created strong beliefs about brands as a way to guide their future buying decisions. Brand loyalty, whether rational or not, filled the void left by a lack of information. Manufacturers and retailers carefully controlled what information did enter the marketplace, pushing the positives and carefully suppressing the negatives.

These three concepts, intertwined, defined the psychological make-up of the market prior to the introduction of the Internet. In my next column, I’ll explore what happened when these behavioral powder kegs were exposed to the fanned flames of the digital marketplace.

The Psychology of Entertainment: The Genotype of Art and the Phenotype of Entertainment

In the last post, I started down this road and today I’d like to explore further, because I think the question is a fundamentally important one – why do humans have entertainment anyway? What is it about us that connects with it?

Our Brains House a Stone-Age Mind

There is much about our behaviors and culture that does not align completely with the directives of evolution. It’s easy to see the evolutionary advantage of the opposable thumb or language. It’s much harder to see the advantages that saturated fat, iPods and American Idol give us. As I started to say in the last post, that’s the difference between a genotype and a phenotype. Our genetic blueprint gives us a starting point, a blueprint that cranks out who we are. But, unfortunately for us, there are a number of “gotchas” coded into our genomes. And that’s because the vast majority of the coding was done hundreds of thousands of years ago for an environment quite different from the one we currently inhabit. Take, for example, a taste for high-calorie foods. This makes sense if you live in an environment where food is scarce and, when you do find it, it might have to sustain you for a day or so. It doesn’t make much sense when there’s a McDonald’s around every corner. The genotype for efficient food foraging, necessary for survival 100,000 years ago, leads to today’s phenotype of an epidemic of obesity. As evolutionary psychologists Leda Cosmides and John Tooby say, our brains house a stone-age mind.

This clash between phenotypes and genotypes leads to many of the questions that arise when we apply evolutionary theory to humans. The primary calculation in evolution is a cost/benefit one. How much do we have to invest in something and what is the return we get from it, in terms of reproductive success? For example, why do humans have art? The reproductive purpose of a bow and arrow or a cooking pot seems to be easy to determine. Both ensure survival long enough to have offspring. The evolutionary advantage of a canoe also makes sense – it provides access to previously unobtainable resources, including, presumably, the opposite sex. Canoes enabled prehistoric precursors to the Frat house road trip. But why did we spend hours and hours decorating our weapons, or cooking utensils, or transportation vehicles? What evolutionary purpose does ornamentation have? Art is universally common, one of the criteria for evolved behaviors. The answer, or at least part of it, lies in another human truism – the guy with the guitar always gets the girl. Or, to use Darwin’s label, the Peacock Principle.

Hey, Nice Tail Feathers!

In a previous post, I talked about how admiration plays a big part in entertainment. We’re hardwired to admire talent. Why? Because social status accrues to those with talent. Also, it appears that talented people are more attractive to the opposite sex. This is driven by sexual selection, which reinforces the behavioral trait over evolutionary time. Let’s use the peacock as an example. Somewhere, sometime, a male peacock, through a genetic mutation, was endowed with slightly larger tail feathers. And, for some reason, female peacocks found this to be a desirable trait in selecting a mate. The result? The male peacock with the bigger tail feathers got more action. This started an evolutionary snowball that today accounts for the bizarre display of evolutionary energy we see in male peacocks.

Does this account for art in humans? Were artists given special status in our society, allowing their genes an easier path into the next generation? Well, there’s certainly evidence that points in this direction. But Ellen Dissanayake believes there’s more to it than Darwin’s Peacock Principle.

Art: Making Special

Dissanayake believes there are two other factors that explain the presence of art in our culture, and both have to do with how we adapt to our environments. The first question Dissanayake asked was “what is art?” The answer was “making special.” Art, she believes, comes from our need to take the ordinary and set it apart as something to be cherished and honored. And often, these cherished items were integral to the ceremonies we conduct as part of our culture. If you strip art away from ceremony, or ceremony away from art, each half suffers significantly from the separation.

The second question Dissanayake asked was: Why do humans create art? What is the evolutionary “return on investment?” The answer comes in two revelations. When we chose something to “make special,” it wasn’t any old thing that we applied this special treatment to. These favored objects or themes were, not coincidentally, the things that most led to an evolutionary advantage: weapons, cooking utensils, hunting and foraging, sexual reproduction, vigorous health – the things that propelled our genes forward into future generations. The Darwinian logic here is obvious – by elevating these things to a higher status, we focused more attention on them. Our culture enshrined the very same things that provided evolutionary advantage.

Dissanayake’s second revelation revives a recurring theme in human history. We seek to control our environments. Art soothes us in the most uncontrollable parts of our lives. And it’s here where the connection between ceremony and art is at its most basic. The ceremonies in our lives, across all cultures, come at the times of greatest transition: birth, marriage, war, sickness and death. It’s here where we gain some small measure of comfort in the control we can exert over our ceremonies, and as part of those ceremonies, we create art. As I mentioned before, a sense of control, the solving of an incongruity, is also the psychological basis of humor. We seek to control the uncontrollable, through our mythologies, our culture and our beliefs. This illusion of control over the uncontrollable has a direct evolutionary benefit. It allows us to get on with our lives rather than obsess about things we have no control over.

Through these two observations, Dissanayake was able to connect the dots between art and an evolutionary payoff. She believes an appreciation of art is part of the human genome, an evolutionary endowment that drives our aesthetic sense. There are universal and recurring themes in the things we find aesthetically pleasing that go beyond something explainable by cultural influence. When it comes to art, just as Noam Chomsky and Steven Pinker believe with language, there is no “blank slate.”

What’s the Evolutionary Purpose of Entertainment?

But what about entertainment? If art starts in the genotype and extends through the phenotype, is the same true for entertainment? Does entertainment serve an evolutionary purpose?

When we talk entertainment, the line between genotypes and phenotypes gets much harder to detect. There is very little I could find that would parallel Dissanayake’s exploration of the evolutionary purpose of art when it comes to entertainment. The fact is, historically humans don’t do very well when we get too much leisure time on our hands.

Most of our genetically driven behaviors and traits are built to ensure survival, as they should be. Propagation of genes requires survival, at least to childbearing years. When humans thrive, to the point of having excess time on our hands, those survival mechanisms start working against us. We become fat and lazy, literally. Genes drive us to get the maximum return for the minimum effort. This works well when every hour of the day is devoted to doing the things you need to do to survive. It doesn’t work so well when we can cover the basics of survival in a few hours a day.

Leisure time is a relatively new phenomenon for humans. With a few notable exceptions, we haven’t had a lot of time to be entertained in. The exceptions provide a stark warning for what can happen. Leisure time exploded in ancient Rome as slave labor suddenly allowed the citizens of Rome to stop working for a living. The same was true in ancient Greece and Egypt. This fostered a dramatic increase in artistic output, but it also led to a gradual erosion of social capital, leading to complacency and ennui. Eventually, these cultures rotted from the inside.

Let’s look at the causal chain of behavior here. Leisure time allows talented artists in our culture to “make special” more often. We have a hardwired appreciation of this art, so we admire those who create it. This gives them greater status and social benefits, which makes us admire them more, but also envy them. We are built to emulate success, but in this case, there is no identifiable path to take. We may admire the benefits, but we haven’t been granted the ability to follow in their footsteps. A cult of celebrity starts to emerge. Once it starts, this cultural snowball picks up speed, leading to ever higher status for celebrities and greater admiration and envy from those watching. Greed emerges, along with a sense of entitlement. Our values skew from survival to conspicuous consumption, driven by genes that are still trying to maximize returns from an ever-increasing pile of consumable resources. The phenotype of this genetically driven consumption treadmill is not a pretty sight.

Entertainment Seems to Live in the Phenotype, Not the Genotype

Try as I might, I could not find an evolutionary payoff for entertainment, which leads me to believe it’s a phenotypical phenomenon, not a genotypical one. At its most benign, entertainment is a manifestation of our inherent need for art and ceremony. At that level, entertainment seems to live closest to the gene. But it doesn’t stay there long. Fuelled by our social hierarchical instincts, entertainment seems to rapidly sink to the lowest common denominator. It rapidly steps from art to raw sensory gratification. It’s much easier to absorb entertainment through the more primitive parts of our brain than to employ the effort required to intellectualize it.

To be honest, I’m still grappling with this concept, as you can no doubt tell from this post. There’s a big concept here and one of the joys and frustrations of blogging is that you never have the time to properly explore a concept before having to post it. For me, my blog serves as an intellectual grist mill, albeit a relatively inefficient one. I’ve got to go now and figure out where this goes from here.

Maximizers vs. Satisficers: Why It’s Tough to Decide

First published February 18, 2010 in Mediapost’s Search Insider

In last week’s column, I introduced the study from Wesleyan University about how decisiveness played out for a group of 54 university students as they chose their courses. The students’ eye movements were tracked as they looked at a course comparison matrix.

Weighing all the Options vs Saying No

In the previous column, I talked about two different strategies: the compensatory one, where we weigh all the options, and the non-compensatory one, where we start eliminating candidates based on the criterion most important to us. Indecisive people tend to start with the compensatory strategy and decisive people go right for the linear approach.  I also talked about Barry Schwartz’s theory (in his book “The Paradox of Choice”) that indecisiveness can lead to a lot of anxiety and stress.

The biggest factor for indecisive people seems to be a fear of lost opportunity. They hate to turn away from any option for fear that something truly valuable lies down that path. Again, this is territory well explored in Tversky and Kahneman’s famous Prospect Theory.

The Curse of the Maximizer

Part of the problem is perfectionism, identified by Schwartz as a strong corollary to anxiety caused by impending decisions. The Wesleyan research cites previous work that shows indecisive people tend to want a lot more information at hand before making any decisions. And, once they’ve gone to the trouble to gather that information, they feel compelled to use it. Not only do they use it, they try to use it all at once.

The Wesleyan eye tracking showed that the more indecisive participants went back and forth between the five different course attributes fairly evenly, apparently trying to weigh them all at the same time.  Not only that, they spent more time staring at the blank parts of the page. This indicated that they were trying to crunch the data, literally staring into space.  The maximizing approach to decision-making places a high cognitive load on the brain. The brain has to juggle a lot more information to try to come to an optimal decision.

Decisive people embrace the promise of “good enough,” known as satisficing. They are less afraid to eliminate options for consideration because the remaining choices are adequate (the word satisficing is a portmanteau of “satisfy” and “suffice”) to meet their requirements. They are quicker to turn away from lost opportunity. For them, decision-making is much easier. Rather than trying to juggle multiple attributes, they go sequentially down the list, starting with the attribute that is most important to them.

In the case of this study, this became clear in looking at the spread of fixations amongst the five attributes: time of the class, the instructor, the workload, the person’s own goals and the level of interest. For decisive people, the most important thing was the time of class. This makes sense. If you don’t have the time available, why even consider what the course has to offer? If the time didn’t work, the decisive group eliminated it from consideration. They then moved on to the instructor, the next most important criterion. And so on down the list.
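Here’s a rough sketch of that sequential elimination in code (the five attributes come from the study; the course data, thresholds and pass/fail tests are my own invention):

```python
# Non-compensatory (satisficing) selection: eliminate on one attribute at a
# time, in order of importance, rather than weighing everything at once.
# Course data and acceptability tests are invented for illustration.

courses = [
    {"name": "PSYC 201", "time_ok": True,  "instructor": 4, "workload": 3, "fits_goals": 4},
    {"name": "ECON 101", "time_ok": False, "instructor": 5, "workload": 2, "fits_goals": 5},
    {"name": "HIST 310", "time_ok": True,  "instructor": 2, "workload": 4, "fits_goals": 3},
    {"name": "BIOL 220", "time_ok": True,  "instructor": 4, "workload": 5, "fits_goals": 2},
]

# Criteria in order of importance, each a simple pass/fail test.
criteria = [
    ("time of class", lambda c: c["time_ok"]),
    ("instructor",    lambda c: c["instructor"] >= 3),
    ("workload",      lambda c: c["workload"] <= 4),
    ("own goals",     lambda c: c["fits_goals"] >= 3),
]

remaining = courses
for label, acceptable in criteria:
    remaining = [c for c in remaining if acceptable(c)]
    print(f"after '{label}': {[c['name'] for c in remaining]}")
# ECON 101 is cut first (wrong time) despite the best instructor and goal
# fit -- information a compensatory decision-maker would have weighed in.
```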

Tick…Tick…Tick…

Another interesting finding was that even though indecisive people start by trying to weigh all the options to look for the optimal solution, if the clock is ticking, they often become overwhelmed by the decision and shift to a non-compensatory strategy by starting to eliminate candidates for consideration. The difference is that for the indecisive maximizers, this feels like surrender, or, at best, a compromise. For the decisive satisficers, it’s simply the way they operate. If the indecisive people are given the choice between delaying the decision and being forced to eliminate promising alternatives, they’ll choose to delay.

This sets up a fascinating question for search engine behavior: do satisficers search differently than maximizers? I suspect so. We’ll dive deeper into this question next week.

Decisiveness and Search: Two Different Strategies

First published February 11, 2010 in Mediapost’s Search Insider

In “The Paradox of Choice,” author Barry Schwartz speculates that we all might be happier if we had fewer options in life. Our consumer-based society continually pumps out more and more options, forcing us into making more and more decisions. Schwartz convincingly draws a parallel between decisiveness and happiness. The less time we spend making decisions, the more we’ll be satisfied with our lives, he says.

A new study out of Wesleyan University explores the actual cognitive mechanisms of decisiveness. This has direct implications for search marketers, because every time we use a search engine, we’re forced to make decisions. In fact, every online interaction is a branching tree of decisions. The study provides new insight into the decision-making process we use as we guide ourselves through the online landscape.

Study Set-Up

The researchers at Wesleyan used a scenario familiar to their sample of 54 students: they had to pick courses for the upcoming semester. Course options were set up on a matrix that allowed students to evaluate their options on a few different criteria: time of the course, instructor quality, relevance, amount of work required and interest in topic. There were no “no-brainer” options. In each alternative, trade-offs were required.

The researchers also introduced a variable into the mix: the opportunity to delay final course selection.

Finally, they asked the students to use the course grid to help make their selections while using an eye-tracker to capture exactly what they looked at on the grid. After the task was completed, participants were asked to grade themselves on a standard decisiveness scale.

Decisive vs. Indecisive Strategies

Building on previous academic work on decisiveness, the researchers found that individuals tend to use two different strategies when making decisions.  The compensatory strategy tries to weigh all the decision attributes together, literally creating an evaluation formula in the decision-maker’s mind.  If there are five different decision criteria, all are considered at the same time and are weighted by the importance of each to the individual.
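That evaluation formula can be written down directly: a weighted sum across all the attributes, scored simultaneously, where strength on one attribute can offset weakness on another (hence “compensatory”). A minimal sketch with invented weights and ratings:

```python
# Compensatory strategy: every attribute is considered at once, weighted by
# its importance to the individual. Weights and ratings are invented for
# illustration, not taken from the Wesleyan study.

weights = {"time": 0.35, "instructor": 0.25, "workload": 0.15,
           "own_goals": 0.15, "interest": 0.10}

courses = {
    "PSYC 201": {"time": 5, "instructor": 4, "workload": 3, "own_goals": 4, "interest": 5},
    "ECON 101": {"time": 1, "instructor": 5, "workload": 4, "own_goals": 5, "interest": 4},
}

def overall_score(ratings: dict) -> float:
    """Weighted sum of all attribute ratings, considered together."""
    return sum(weights[attr] * r for attr, r in ratings.items())

for name, ratings in courses.items():
    print(name, round(overall_score(ratings), 2))
# A terrible class time can be partially compensated by a great instructor --
# exactly the trade-off a non-compensatory strategy never entertains.
```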

In a purely rational world, this would seem to be the optimal strategy, but as Schwartz pointed out, we are not rational decision-making machines. In their Nobel Prize-winning work on Prospect Theory, Amos Tversky and Daniel Kahneman (and more recently, Dan Ariely) showed that we use irrational risk-triggered biases in our decision-making. These throw some significant wrenches into the workings of our decisiveness. Emotions get involved and we start feeling anxious. Decisions, even about things that will bring eventual rewards, start to cause us stress.

The other decision strategy is a non-compensatory, linear strategy. This is the foundation of Herbert Simon’s famous “satisficing” approach. Here, alternatives are quickly cut down by a sequential consideration of criteria, beginning with the one most important to the decision-maker. In the study scenario of picking courses, many looked first at the time the class would be taught, reasoning that if the time didn’t work for them, there was little point in considering the other things the course might offer. This quickly narrowed the consideration set. From there, they moved on to the next most important criterion. This sequential approach is relatively ruthless in eliminating candidates for consideration.

This study, along with others, found that indecisive decision-makers tend to start with a compensatory strategy, while decisive people start short-listing immediately with a non-compensatory strategy. In the next column, we’ll see how this difference in strategies was clearly shown in the eye tracking results. I’ll also explore how indecisive individuals are often forced to abandon one strategy for the other, which can cause significant stress.

The Psychology of Entertainment: Will Video Games Become Too Real for Us to Handle?

In yesterday’s post, I explored our psychological attraction to violent action thrillers. Today, let’s go one step further. What is the attraction of violent video games? And how might this attraction deepen and even become pathologically dangerous as the technology behind the games improves? It’s a question we’re speeding towards, so we should stop to consider it.

In TV and film, violent action triggers a chemical reaction in the brain that we find stimulating and pleasing. As cortisol and dopamine get released, we experience a natural high. Strong evidence points to a connection between sensation seeking (triggering the high) and addictive tendencies.

The Veil of Non-Reality

There is a “veil of non-reality” that moderates this reaction, however. The high we get from violent entertainment comes from the limbic structures of the brain, triggered by the amygdala and other sub-cortical neural modules. This is the primal part of the brain that ensures survival in threatening situations, which means that responses are fast but not deliberate. The higher, cortical parts of the brain ride overtop of these responses like a governor, toning them down and modulating the overactive danger response mechanisms. If our brains didn’t do this, we’d quickly burn ourselves out. Cortisol is a great stimulant when it’s needed, but a steady diet of it turns us into a quivering pile of anxiety-ridden stress.

When we watch entertainment, this modulating part of the brain quickly realizes that what we’re watching isn’t real and puts its foot on the brake of the brain’s natural desire to pump out cortisol, dopamine and other neurochemicals. It’s the “voice of reason” that spoils the fun of the limbic brain. Even though cars are exploding left and right and people are dropping like flies, the fact that we’re watching all this on a two-dimensional screen helps us keep everything in perspective, preventing our brain from running away with itself. This is the veil of “non-reality” that keeps us from being fooled into thinking it’s all real.

The Imagined Reality of Entertainment

But let’s stop for a moment and think about how we’re consuming entertainment. In the past decade, screens have got bigger and bigger. It’s no coincidence that we get a bigger high from watching violence on the big screen than from watching it on a 20-inch home TV. The “veil of non-reality” starts to slip a little bit. It seems more real to us. Also, we feed off the responses of others in the theater. We are social animals, and this is especially true in threatening situations, even if they are simulations in the name of entertainment. We pick up our social cues from the herd.

It’s not just the size of the screen that’s changing, however. Technology is continually trying to make our entertainment experiences more real. Recent advances in 3D technology have not only made James Cameron even wealthier, they also deliver a stronger sensory jolt. Watching Avatar in 3D is a sensory explosion. The veil of “non-reality” slips a little further.

But improvements in graphic technology can only go so far in fooling the brain. Much as our eyes might be deceived, we’re still sitting passively in a chair. Our interpretation of the world not only relies on input from the senses, it also relies on our own sense of “body” – Antonio Damasio’s somatic markers.

The Satisfaction of Control

This is where video games are quickly approaching a potential crisis point in sensory overload. Even the best Hollywood thriller requires us to sit passively and consume the experience. We have no control over the plot, the dialogue or the characters’ actions. We can only engage in the experience to a certain level. In fact, much of the appeal of a Hollywood thriller comes from this gap between what’s happening on the screen and what’s happening in our own minds. We can imagine possible outcomes, or perhaps the director gives us knowledge the protagonist doesn’t have. We experience suspense as we see if the protagonist takes the same actions we would. We silently scream “Get out of the house!” to the teenage babysitter when we know the psychopathic killer is upstairs.

But video games erase this limitation. With a video game, we’re suddenly in control. Control is a powerfully seductive condition for humans. We naturally try to control as many elements of our environment as possible. And when we can exert control over something, we’re rewarded by our brains with a natural hit of dopamine. That’s why completing a puzzle or solving a riddle is so inherently satisfying. These are tiny exertions of control. In a video game, we are the authors of the script. It is we who decide how we react to dangerous situations. Suddenly we are not a passive audience; we are the actors. This is cognitive engagement at a whole different level. Suddenly the appeal of sensory stimulation is combined with the rewards we get from exercising control over novel situations. That’s a powerful one-two punch for our brains. And the veil of “non-reality” slips a little further.

Virtual Reality

The negative impacts of video games have been studied, but again, like TV, studies have been largely centred around one question: does the playing of video games lead to increased aggression and violence in children? And, like TV, the answer seems to be a qualified yes. For those already prone to violence, the playing of video games seems to reinforce these attitudes. But it’s also been argued that the playing of video games provides a cathartic release for violent tendencies.

Less research has been conducted on the cognitive impact of video games, and it’s here where the bigger problem might lie. A few studies have shown the playing of video games could be addictive. A Japanese study found that excessive video game playing during adolescence seems to alter the way brains develop, impairing the ability to focus attention for long periods of time. In fact, a number of studies have shown links between exposure to excessive sensory stimulation through electronic media and the incidence of ADHD and other attention deficit disorders. It’s this longer term altering of how our brains work that may represent the bigger danger in video games.

Video games combine violent scenarios, which we know provide sensory jolts to the brain, with the seduction of control. What has limited the addictive appeal of video games to this point are two things: how realistic the scenarios are perceived to be and the way we interact with the games. And, in both these areas, technology is moving forward very quickly.

Video game graphics have come a long way, but they still lack the photo-realism of a typical Hollywood movie. However, the distance between the two is lessening every day. How far away are we from a video game experience that matches the realism of Hollywood? Huge advances in computer graphics and sheer processing power are bringing the two closer and closer together. The day is not far away when our experience in a video game will feel like we’ve been dropped in the middle of a movie. And, with 3D and virtual reality technology, even the physical separation of a screen will soon disappear. The imaginary world will surround us in a highly realistic way. What will that do to the “veil of non-reality”?

The other area where video games have improved dramatically is in the way we control them. The control pad with various triggers and buttons was an artificial way to interact with the video game world. A spin-jump-kick combination was triggered by pushing down a few buttons while we sat in a chair. This helped our brain maintain its distance from the imagined reality. But Nintendo’s Wii changed how we interact with video games. Sophisticated sensors now translate our own body motions into corresponding digital commands for the game. Even our bodies are fooled into believing we’re actually playing golf or participating in a boxing match. Interestingly, Nintendo made the choice to make the graphics on the Wii less realistic, perhaps trying to maintain a “veil of non-reality.”

The Wii opens the door to a much more realistic way of controlling video games. Now our own body movements control the virtual character. Suddenly, our body is providing reinforcing feedback to our brain that this might just be real. When you combine this with photo-realistic visual input and audio input, one could forgive our brains for not being able to determine what is real and what isn’t.

Entertainment Overload?

If technology continues down the path it’s on, the virtual reality of a video game may become indistinguishable from the true reality of our lives. If the “veil of non-reality” permanently slips, we have a huge potential problem: our lives pale in comparison to the sensory possibilities of a virtual world. We may get addicted to sensation as the brain is fooled into giving us stronger and stronger hits of cortisol, dopamine, adrenaline and other natural narcotics. When the veil slips away forever, our brains may simply not be equipped to handle the new virtual reality.