As We May Remember

First published January 12, 2012 in Mediapost’s Search Insider

In his famous Atlantic Monthly essay “As We May Think,” published in July 1945, Vannevar Bush forecast a mechanized extension to our memory that he called a “memex”:

Consider a future device for individual use, which is a sort of mechanized private file and library. It needs a name, and to coin one at random, “memex” will do. A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory.

Last week, I asked you to ponder what our memories might become now that Google puts vast heaps of information just one click away. And ponder you did:

I have to ask, WHY do you state, “This throws a massive technological wrench into the machinery of our own memories,” inferring something negative??? Might this be a totally LIBERATING situation? – Rick Short, Indium Corporation

Perhaps, much like using dictionaries in grade school helped us to learn and remember new information, Google is doing the same? Each time we “google” and learn something new aren’t we actually adding to our knowledge base in some way? – Lester Bryant III

Finally, I ran across this. Our old friend Daniel Wegner (transactive memory) and colleagues Betsy Sparrow of Columbia University and Jenny Liu actually did research on this very topic this past year. It appears from the study that our brains are already adapting to having Internet search as a memory crutch. Participants were less likely to remember information they looked up online when they knew they could access it again at any time. Also, if they looked up information that they knew they could remember, they were less likely to remember where they found it. But if they judged the information difficult to remember, they were more likely to remember where they found it, so they could navigate there again.

The beautiful thing about our capacity to remember things is that it’s highly elastic. It’s not restricted to one type of information. It will naturally adapt to new challenges and requirements. As many rightly commented on last week’s column, the advent of Google may introduce an entirely new application of memory — one that unleashes our capabilities rather than restricts them. Let me give you an example.

If I had written last week’s column in 1987, before the age of Internet search, I would have been very hesitant to use the references I did: the Transactive Memory Hypothesis of Daniel Wegner, and the scene from “Annie Hall.” That’s because I couldn’t remember them that well. I knew (or thought I knew) what the general gist was, but I had to search them out to reacquaint myself with the specific details of each. I used Google in both cases, but I was already pretty sure that Wikipedia would have a good overview of transactive memory and that YouTube would have the clip in question. Sure enough, both those destinations topped the results that Google brought back. So, my search for transactive memory utilized my own transactive memorizations. The same was true, by the way, for my reference to Vannevar Bush at the opening of this column.

By knowing what type of information I was likely to find, and where I was likely to find it, I could check the references to ensure they were relevant and summarize what I quickly researched in order to make my point. All I had to do was remember high-level summations of concepts, rather than the level of detail required to use them in a meaningful manner.

One of my favorite concepts is the idea of consilience – literally, the “jumping together” of knowledge. I believe one of the greatest gifts of the digitization of information is the driving of consilience. We can now “graze” across multiple disciplines without having to dive too deep in any one, and pull together something useful — and occasionally amazing. Deep dives are now possible “on demand.” Might our memories adapt to become consilience orchestrators, able to quickly sift through the sum of our experience and gather together relevant scraps of memory to form the framework of new thoughts and approaches?

I hope so, because I find this potential quite amazing.

Is Google Replacing Memory?

First published on January 5, 2012 in Mediapost’s Search Insider

“How old is Tony Bennett anyway?”

We were sitting in a condo on a ski hill with friends, counting down to the new year, when the ageless Mr. Bennett appeared on TV. One of us wondered aloud about just how many new years he has personally ushered in.

In days gone by, the question would have just hung there. It would probably have filled up a few minutes of conversation. If someone felt strongly about the topic, it might even have started an argument. But, at the end of it all, there would be no definitive answer — just opinions.

This was the way of the world. We were restricted to the knowledge we could each jam in our noggin. And if our opinion conflicted with another’s, all we could do was argue.

In “Annie Hall,” Woody Allen set up the scenario perfectly. He and Diane Keaton are in a movie line. Behind them, an intellectual blowhard is in mid-stream pontification on everything from Fellini’s movie-making to the media theories of Marshall McLuhan. Finally, Allen can take it no more and asks the camera, “What do you do with a guy like this?” The “guy” takes exception and explains to Allen that he teaches a course on McLuhan at Columbia. But Allen has the last laugh — literally. He pulls the real Marshall McLuhan out from behind an in-lobby display, and McLuhan proceeds to intellectually eviscerate the Columbia professor.

“If only life was actually like this,” Allen sighs to the camera.

Well, now, some 35 years later, it may be. While we may not have Marshall McLuhan in our back pocket, we do have Google. And for many questions, Google is the final arbiter. Opinions quickly give way to facts (or, at least, information presented as fact online). No longer do we have to wonder how old Tony Bennett really is. Now, we can quickly check the answer.

If you stop to think about this, it has massive implications.

In 1985, Daniel Wegner proposed something along these lines when he introduced the hypothetical concept of transactive memory. An extension of “group mind,” transactive memory posits a type of meta-memory, where our own capacity to remember things is enhanced in a group by knowing who in that group knows more than we do about any given topic.

In its simplest form, transactive memory is my knowing that my wife tends to remember birthdays and anniversaries — but I remember when to pay our utility bills. It’s not that I can’t remember birthdays and my wife can’t remember to pay bills; it’s just that we don’t have to go to the extra effort if we know our partner has it covered.

If Wegner’s hypothesis is correct (and it certainly passes my own smell test), then transactive memory has been around for a long time. In fact, many believe that the acquisition of language, which allowed for the development of transactive memory and other aids to survival in our ancestral tribes, was probably responsible for the “Great Leap Forward” in our own evolution.

But with ubiquitous access to online knowledge, transactive memory takes on a whole new spin. Now, not only don’t we have to remember as much as we used to, we don’t even have to remember who else might have the answer. For much of what we need to know, it’s as simple as searching for it on our smartphone. Our search engine of choice does the heavy lifting for us.

This throws a massive technological wrench into the machinery of our own memories. Much of what it was originally intended for may no longer be required. And this raises the question, “If we no longer have to remember stuff we can just look up online, what will we use our memory for?”

Something to ponder at the beginning of a new year.

Oh, and in case you’re wondering, Anthony Dominick Benedetto was born Aug. 3, 1926, making him 85.

Can Websites Make Us Forgetful?

First published December 15, 2011 in Mediapost’s Search Insider

Ever open the door to the fridge and then forget what you were looking for?

Or ever head to your bedroom and then, upon entering it, forget why you went there in the first place?

Me too. And it turns out we’re not alone. New research from the University of Notre Dame’s Gabriel Radvansky indicates this sudden “threshold” amnesia is actually pretty common. Walking from one room to another triggers an “event boundary” in the mind, which seems to act as a cue for the brain to file away short-term memories and move on to the next task at hand. If your task causes you to cross one of these event boundaries and you don’t keep your working memory actively engaged through deliberate focusing of attention, it could be difficult to remember what it was that motivated you in the first place.

Ever since I read the original article, I’ve wondered if the same thing applies to navigating websites. If we click a link to move from one page to another, I am pretty sure the brain could well send out a “flush” signal that clears the slate of working memory. I think we cross these event boundaries all the time online.

Let’s unpack this idea a bit, because if my suspicions prove to be correct, it opens up some very pertinent points when we think of online experiences.  Working memory is directed by active attention. It is held in place by a top-down directive from the brain. So, as long as we’re focused on memorizing a discrete bit of information (for example, a phone number) we’ll be able to keep it in our working memory. But when we shift our attention to something else, the working memory slate is wiped clean. The spotlight of attention determines what is retained in working memory and what is discarded.

Radvansky’s research indicates that moving from one room to another may act as a subconscious environmental cue that the things retained in working memory (i.e. our intent for going to the new room in the first place) can be flushed if we’re not consciously focusing our attention on it. It’s a sort of mental “palate cleansing” to ready the brain for new challenges. Radvansky discovered that it wasn’t distance or time that caused things to be forgotten. It was passing through a doorway. Others could travel exactly the same distance but remain in the same room and not forget what their original intention was. But as soon as a doorway was introduced, the rate of forgetting increased significantly.

Interestingly, one of the variations of Radvansky’s research used virtual environments, and the results were the same. So, if a virtual representation of a doorway triggered a boundary, would moving from one page of a website to another do the same?

I think there are some distinctions here to keep in mind. If you go to a page with intent and you’re following navigational links to get closer to that intent, it’s probably pretty safe to assume that there is some “top-down” focus on that intent. As long as you keep following the “intent” path, you should be able to keep it in focus as you move from page to page. But what if you get distracted by a link on a page and follow that? In that case, your attention has switched and moving to another page may trigger the same “event boundary” dump of working memory. In that case, you may have to retrace your steps to pick up the original thread of intent.

I just finished benchmarking the user experience across several different sites for a client and found that consistent navigation is pretty rare on many sites, especially B2B ones. If you did happen to forget your original intent as you navigated a few clicks deep into a website, backtracking could prove to be a challenge.

I also suspect that’s why a consistent look and feel as you move from page to page could be important. It may serve to lessen the “event boundary” effect, because there are similarities in the environment.

In any case, Dr. Radvansky’s research opens the door (couldn’t resist) to some very interesting speculations. I do know that in the 10 B2B websites I visited during the benchmarking exercise, the experience ranged from mildly frustrating to excruciatingly painful.

In the worst of these cases, a little amnesia might actually be a blessing.

Is the Internet Making Us Stupid – or a New Kind of Smart?

First published September 9, 2010 in Mediapost’s Search Insider

As I mentioned a few weeks back, I’m reading Nicholas Carr’s book “The Shallows.” His basic premise is that our current environment, with its deluge of available information typically broken into bite-sized pieces served up online, is “dumbing down” our brains.  We no longer read, we scan. We forego the intellectual heavy lifting of prolonged reading for the more immediate gratification of information foraging. We’re becoming a society of attention-deficit dolts.

It’s a grim picture, and Carr does a good job of backing up his premise. I’ve written about many of these issues in the past. And I don’t dispute the trends that Carr chronicles (at length). But is Carr correct in saying that the online world is dulling our intellectual capabilities, or is it just creating a different type of intelligence?

While I’m at it, I suspect this new type of intelligence is much more aligned with our native abilities than the “book smarts” that have ruled the day for the last five centuries. I’m an avid reader (ironically, I’ve been reading Carr’s book on an iPad) and I’m the first to say that I would be devastated if reading went the way of the dodo. But are we projecting our view of what’s “right” onto a future where the environment (and the rules) have changed?

A Timeline of Intellect

If you expand your perspective of human intellectualism to the entire history of man, you find that the past 500 years have been an anomaly. Prior to the invention of the printing press (and the subsequent blossoming of intellectualism) our brains were there for one purpose: to keep us alive. The brain accomplished this critical objective in one of three ways:

Responding to Danger in Our Environments

Reading is an artificial human activity. We have to train our brains to do it. But scanning our surroundings to notice things that don’t fit is as natural to us as sleeping and eating. We have sophisticated, multi-layered mechanisms to help us recognize anomalies in our environment (which often signal potential danger).  I believe we have “exapted” these same mechanisms and use them every day to digest information presented online.

This idea goes back to something I have said repeatedly: Technology doesn’t change behavior, it enables behavior to change. Change comes from us pursuing the most efficient route for our brains. When technology opens up an option that wasn’t previously available, and the brain finds this a more natural path to take, it will take it. It may seem that the brain is changing, but in actuality it’s returning to its evolutionary “baseline.”

If the brain has the option of scanning, using highly efficient inherent mechanisms that have been created through evolution over thousands of generations, or reading, using jury-rigged, inefficient neural pathways that we’ve been forced to build from scratch throughout our lives, the brain will take the easiest path. The fact is, we can’t scan a book. But we can scan a Web site.

Making The Right Choices

Another highly honed ability of the brain is to make advantageous choices. We can consider alternatives using a combination of gut instincts (more than you know) and rational deliberation (less than you think) and, more often than not, make the right choice. This ability goes in lockstep with the previous one, scanning our environment.

Reading a book offers no choices. It’s a linear experience, forced to go in one direction. It’s an experience dictated by the writer, not the reader. But browsing a Web site is an experience littered with choices. Every link is a new choice, made by the visitor. This is why we (at my company) have continually found that a linear presentation of information (for example, a Flash movie) is a far less successful user experience than a Web site where the user can choose from logical and intuitive navigation options.

Carr is right when he says this is distracting, taking away from the focused intellectual effort that typifies reading. But I counter with the view that scanning and making choices is more naturally human than focused reading.

Establishing Beneficial Social Networks

Finally, humans are herd animals. We naturally create intricate social networks and hierarchies, because it’s the best way of ensuring that our DNA gets passed along from generation to generation. When it comes to gene propagation, there is definitely safety in numbers.

Reading is a solitary pursuit. Frankly, that’s one of the things avid readers treasure most about a good book: the “me” time that it brings with it. That’s all well and good, but bonding and communication are key drivers of human behavior. Unlike a book, online experiences offer you the option of solitary entertainment or engaged social connection. Again, it’s a closer fit with our human nature.

From a personal perspective, I tend to agree with most of Carr’s arguments. They are a closer fit with what I value in terms of intellectual “worth.” But I wonder if we fall into a trap of narrowed perspective when we pass judgment on what’s right and what’s not based on what we’ve known, rather than on what’s likely to be.

At the end of the day, humans will always be human.

The Psychology of Entertainment: Will Video Games Become Too Real for Us to Handle?

In yesterday’s post, I explored our psychological attraction to violent action thrillers. Today, let’s go one step further. What is the attraction of violent video games? And how might this attraction deepen and even become pathologically dangerous as the technology behind the games improves? It’s a question we’re speeding towards, so we should stop to consider it.

In TV and film, violent action triggers a chemical reaction in the brain that we find stimulating and pleasing. As cortisol and dopamine get released, we experience a natural high. Strong evidence points to a connection between sensation seeking (triggering the high) and addictive tendencies.

The Veil of Non-Reality

There is a “veil of non-reality” that moderates this reaction, however. The high we get from violent entertainment comes from the limbic structures of the brain, triggered by the amygdala and other sub-cortical neural modules. This is the primal part of the brain that ensures survival in threatening situations, which means that responses are fast but not deliberate. The higher, cortical parts of the brain ride overtop of these responses like a governor, toning down the responses and modulating the overactive danger response mechanisms. If our brains didn’t do this, we’d quickly burn ourselves out. Cortisol is a great stimulant when it’s needed, but a steady diet of it turns us into a quivering pile of anxiety-ridden stress.

When we watch entertainment, this modulating part of the brain quickly realizes that what we’re watching isn’t real and puts its foot on the brake of the brain’s natural desire to pump out cortisol, dopamine and other neuro-chemicals. It’s the “voice of reason” that spoils the fun of the limbic brain. Even though cars are exploding left and right and people are dropping like flies, the fact that we’re watching all this on a two-dimensional screen helps us keep everything in perspective, preventing our brain from running away with itself. This is the veil of “non-reality” that keeps us from being fooled into thinking this is all real.

The Imagined Reality of Entertainment

But let’s stop for a moment and think about how we’re consuming entertainment. In the past decade, screens have got bigger and bigger. It’s no coincidence that we get a bigger high from watching violence on the big screen than from watching it on a 20-inch home TV. The “veil of non-reality” starts to slip a little bit. It seems more real to us. Also, we feed off the responses of others in the theater. We are social animals, and this is especially true in threatening situations, even if they are simulations in the name of entertainment. We pick up our social cues from the herd.

It’s not just the size of the screen that’s changing, however. Technology is continually trying to make our entertainment experiences more real. Recent advances in 3D technology have not only made James Cameron even wealthier, they also deliver a stronger sensory jolt. Watching Avatar in 3D is a sensory explosion. The veil of “non-reality” slips a little further.

But improvements in graphic technology can only go so far in fooling the brain. Much as our eyes might be deceived, we’re still sitting passively in a chair. Our interpretation of the world not only relies on input from the senses, it also relies on our own sense of “body” – Antonio Damasio’s somatic markers.

The Satisfaction of Control

This is where video games are quickly approaching a potential crisis point in sensory overload. Even the best Hollywood thriller requires us to sit passively and consume the experience. We have no control over the plot, the dialogue or the characters’ actions. We can only engage in the experience to a certain level. In fact, much of the appeal of a Hollywood thriller comes from this gap between what’s happening on the screen and what’s happening in our own minds. We can imagine possible outcomes, or perhaps the director gives us knowledge the protagonist doesn’t have. We experience suspense as we see if the protagonist takes the same actions we would. We silently scream “Get out of the house!” to the teenage babysitter when we know the psychopathic killer is upstairs.

But video games erase this limitation. With a video game, we’re suddenly in control. Control is a powerfully seductive condition for humans. We naturally try to control as many elements of our environment as possible. And when we can exert control over something, we’re rewarded by our brains with a natural hit of dopamine. That’s why completing a puzzle or solving a riddle is so inherently satisfying. These are tiny exertions of control. In a video game, we are the authors of the script. It is we who decide how we react to dangerous situations. Suddenly we are not a passive audience; we are the actors. This is cognitive engagement at a whole different level. Suddenly the appeal of sensory stimulation is combined with the rewards we get from exercising control over novel situations. That’s a powerful one-two punch for our brains. And the veil of “non-reality” slips a little further.

Virtual Reality

The negative impacts of video games have been studied, but again, like TV, studies have been largely centred around one question: does the playing of video games lead to increased aggression and violence in children? And, like TV, the answer seems to be a qualified yes. For those already prone to violence, the playing of video games seems to reinforce these attitudes. But it’s also been argued that the playing of video games provides a cathartic release for violent tendencies.

Less research has been conducted on the cognitive impact of video games, and it’s here where the bigger problem might lie. A few studies have shown that playing video games can be addictive. A Japanese study found that excessive video game playing during adolescence seems to alter the way brains develop, impairing the ability to focus attention for long periods of time. In fact, a number of studies have shown links between exposure to excessive sensory stimulation through electronic media and the incidence of ADHD and other attention deficit disorders. It’s this longer-term altering of how our brains work that may represent the bigger danger in video games.

Video games combine violent scenarios, which we know provide sensory jolts to the brain, with the seduction of control. Two things have limited the addictive appeal of video games to this point: how realistic the scenarios are perceived to be, and the way we interact with the games. And, in both these areas, technology is moving forward very quickly.

Video game graphics have come a long way, but they still lack the photo-realism of a typical Hollywood movie. However, the distance between the two is lessening every day. How far away are we from a video game experience that matches the realism of Hollywood? Huge advances in computer graphics and sheer processing power are bringing the two closer and closer together. The day is not far away when our experience in a video game will feel like we’ve been dropped in the middle of a movie. And, with 3D and virtual reality technology, even the physical separation of a screen will soon disappear. The imaginary world will surround us in a highly realistic way. What will that do to the “veil of non-reality”?

The other area where video games have improved dramatically is in the way we control them. The control pad with various triggers and buttons was an artificial way to interact with the video game world. A spin-jump-kick combination was triggered by pushing down a few buttons while we sat in a chair. This helped our brain maintain its distance from the imagined reality. But Nintendo’s Wii changed how we interact with video games. Sophisticated sensors now translate our own body motions into corresponding digital commands for the game. Even our bodies are fooled into believing we’re actually playing golf or participating in a boxing match. Interestingly, Nintendo made the choice to make the graphics on the Wii less realistic, perhaps trying to maintain a “veil of non-reality.”

The Wii opens the door to a much more realistic way of controlling video games. Now our own body movements control the virtual character. Suddenly, our body is providing reinforcing feedback to our brain that this might just be real. When you combine this with photo-realistic visual input and audio input, one could forgive our brains for not being able to determine what is real and what isn’t.

Entertainment Overload?

If technology continues down the path it’s on, the virtual reality of a video game may become indistinguishable from the true reality of our lives. If the “veil of non-reality” permanently slips, we have a huge potential problem: our lives pale in comparison to the sensory possibilities of a virtual world. We may get addicted to sensation as the brain is fooled into giving us stronger and stronger hits of cortisol, dopamine, adrenaline and other natural narcotics. When the “veil of non-reality” slips away forever, our brains may not be equipped to handle the new virtual reality.

Nicotine and Memory: Things Seemed Better with Smoke

“My God,” you think, as you swirl your drink in front of you, “I could use a smoke right now.” The urge is all the stronger because of all those memories of past times with friends and a cigarette. Your life just seemed more fun when you were smoking. Was life more exciting before you kicked the habit? It sure seems so.

It’s not all your imagination. A recent study at Baylor College of Medicine says nicotine actually tricks the brain into linking cigarettes and the environment you’re in when you smoke them. The brain is wired to reward you with a shot of dopamine when you do things that ultimately help you live longer. The problem is that this mechanism was built to reward us in an environment where scarcity was the norm. So, we get a reward when we eat, for example. Move this forward into our age of excess and the result is rampant obesity.

This mechanism also fires when we’re in an environment that typically prompts these rewarding releases of dopamine. We’re driven to spend more time there. If we typically get rewarded in one location (e.g., great dinners at our parents’ house) and not another, we develop a subconscious affinity for the rewarding environment.

So, what do cigarettes do to this hard-wired reward mechanism? They short-circuit it in a couple of ways. Nicotine not only hijacks the dopamine reward system, but it also alters the way our memories are laid down, drawing us back to environments where we smoke. Nicotine supercharges the hippocampus, a part of the brain that lays down new memories. The Baylor study, which was done on mice, found that mice “on nicotine” recorded twice the neuronal activity of the control group. Nicotine tricks the brain into believing that smoking is a beneficial activity and into laying down memories that reinforce this belief. It’s a double whammy for those trying to kick the habit.

Digging Googlized Brains: Front Page Stuff!

In my Just Behave column last week, I looked at the recent UCLA fMRI study on brain activity during online searching. I also looped this back to Nicholas Carr’s article from the summer, “Is Google Making Us Stupid?”, and a few of my other posts on how cognition plays out when we search and on potential neural remapping. All pretty geeky stuff, right?

Well, it seems that putting the words “Google” and “brain” in the same title hit a nerve with readers. Somehow I made the front page of Digg (my first time) and Danny Sullivan fired me an email saying the story had 18,000 views in one day, making it one of the most read Search Engine Land articles ever. I know I find this stuff fascinating, but it’s good to know others do as well. Here was one of the Digg comments:

First off, this is the most interesting article I’ve seen on the front page of Digg in a good while. It doesn’t say that Jesus doesn’t exist nor does it compare Jesus to Obama. It’s about a revolutionary scientific study and it made it to the front page of Digg. WOW!

The column seems to have found its way onto a ton of blogs, but just in case you didn’t see it in any of your other feeds, I thought I’d do a quick post. Feel free to continue to Digg it. I have to admit, now that I’ve made the front page once, it’s getting a little addictive!

Picking and Choosing What We Pay Attention To

First published October 9, 2008 in Mediapost’s Search Insider

In a single day, you will be assaulted by hundreds of thousands of discrete bits of information. I’m writing this from a hotel room on the corner of 43rd and 8th in New York. Just a simple three-block walk down 8th Avenue will present me with hundreds of bits of information: signs, posters, flyers, labels, brochures. By the time I go to sleep this evening, I will be exposed to over 3,000 advertising messages. Every second of our lives, we are immersed in a world of detail and distraction, all vying for our attention. Even the metaphors we use, such as “paying attention,” show that we consider attention a valuable commodity to be allocated wisely.

 

Lining Up for the Prefrontal Cortex

Couple this with the single-mindedness of the prefrontal cortex, home of our working memory. There, we work on one task at a time. We are creatures driven by a constant stack of goals and objectives. We pull our big goals out, one at a time, often break them into sub-goals and tasks, and then pursue these with the selective engagement of the prefrontal cortex. The more demanding the task, the more we have to shut out the deluge of detail screaming for our attention.

Our minds have an amazingly effective filter that continually scans our environment, subconsciously monitoring all this detail, and then moving it into our attentive focus if our sub-cortical alarm system determines we should give it conscious attention. So, as we daydream our way through our lives, we don’t unconsciously plow through pedestrians as they step in front of us. We’re jolted into conscious awareness, working memory is called into emergency duty until the crisis is dealt with, and then, post-crisis, we have to try to pick up the thread of what we were doing before. This example shows that working memory is not a multi-tasker. It’s impossible to continue to mentally balance your checkbook while you’re trying to avoid smashing into the skateboarding teen who just careened off the sidewalk. Only one task at a time, thank you.

You Looked, but Did You See?

The power of our ability to focus and filter out extraneous detail is a constant source of amazement for me. We’ve done several engagement studies where we captured several seconds of physical interaction with an ad on a web page (tracked through an eye tracker), then had participants swear there was no ad there. They looked at the ad, but their mind was somewhere else, quite literally. The extreme example of this can be found in an amusing experiment done by University of Illinois cognitive psychologist Daniel J. Simons, now enjoying viral fame through YouTube. Go ahead and check it out before you read any further if you haven’t already seen it. (Count the number of times the white team passes the ball.)

This selective perception is the door through which we choose to let the world into our consciousness (did you see the gorilla in the video? If not, go back and try again). And it’s a door that advertisers have been trying to pry open for the past 200 years at least. We are almost never focused on advertising, so, in order for it to be effective, it has to convince us to divert our attention from what we’re currently doing. The strategies behind this diversion have become increasingly sophisticated. Advertising can play to our primal cues. A sexy woman is almost always guaranteed to divert a man’s attention. Advertising can throw a roadblock in front of our conscious objectives, forcing us to pass through it. TV ads work this way, literally bringing our stream of thought to a screeching halt and promising to pick it up again “right after these messages.” The hope is that there is enough engagement momentum for us to keep focused on the 30-second blurb for some product guaranteed to get our floors/teeth/shirts whiter.

Advertising’s Attempted Break-In

The point is, advertising almost never enjoys the advantage of having working memory actively engaged in trying to understand its message. Every variation has to use subterfuge, emotion or sheer force to try to hammer its way into our consciousness. This need has led the industry to search for a metric that attempts to measure the degree to which our working memory is on the job. In the industry, we call it engagement. The ARF defined engagement as “turning on a prospect to a brand idea enhanced by the surrounding media context.” Really, engagement is better described as smashing through the selective perception filter.

In a recent study, the ARF acknowledged the importance of emotion as a powerful way to sneak past the guardhouse and into working memory. Perhaps more importantly, the study shows the power of emotion to ensure memories make it from short-term to long-term memory: “Emotion underlies engagement which affects memory of experience, thinking about the experience, and subsequent behavior. Emotion is not a peripheral phenomenon but involves people completely. Emotions have motivational properties, to the extent that people seek to maximize the experience of positive emotions and to minimize the experience of negative emotions. Emotion is fundamental to engagement. Emotion directs attention to the causally significant aspects of the experience, serves to encode and classify the ‘unusual’ (unexpected or novel) in memory, and promotes persisting rehearsal of the event-memory. In this way, thinking/feeling/memory articulates the experience to guide future behaviors.”

With this insight into the marketing mindset, honed by decades of hammering away at our prefrontal cortex, it’s little wonder the marketing community has struggled with where search fits in the mix. Search plays by totally different neural rules. And that means its value as a branding tool also has to play by those same rules. I’ll look at that next week.

False Memories: Was that Bugs Bunny or Just My Imagination?

First published September 11, 2008 in Mediapost’s Search Insider

I’ve talked about how powerful our mental brand beliefs can be, even to the point of altering the physical taste of Coke. But where do these brand beliefs come from? How do they get embedded in the first place?

A Place for Every Memory, and Every Memory in its Place

Some of the most interesting studies done recently have been in the area of false memories. It appears that we have different memory “modules,” optimized for certain kinds of memory. We have declarative memory, where we store facts. We can call these memories back under conscious will and discuss them. Then we have implicit memory, or procedural memory, that helps us with our day-to-day tasks without conscious intervention. Remembering how to tie your shoes or which keys to hit on a keyboard are procedural memories.

Declarative memory is further divided into semantic and episodic memory. In theory, semantic memory is where we store meaning, understandings and concept-based knowledge. It’s our database of tags and relationships that help us make sense of the real world. Episodic memory is our storehouse of personal experiences. But the division between the two is not always so clear or water-tight.

The Making of a Brand Memory

Let’s look at our building of a brand belief. We have personal experiences with a brand, either good or bad, that should be stored in episodic memory. Then we have our understandings of the brand, based on information provided, that should build a representation of that brand in semantic memory. This is where advertising’s influence should be stored.

But the divisions are not perfect. Some things slip from one bucket to the other. Many of our inherent evolutionary mechanisms were not built to handle some of the complexities of modern life. For instance, the emotional onslaught of modern advertising might slip over from semantic to episodic memory. There will also be impacts that reside at the implicit rather than the explicit level. Memory is not a neatly divided storage container. Rather, it’s like grabbing a bunch of ingredients out of various cupboards and throwing them together into a soup pot. It can be difficult knowing what came from where when it’s all mixed together.

This is what happens with false memories. Often, they’re external stories or information that we internalize, creating an imaginary happening that we mistakenly believe is an episode from our lives. Advertising has the power to plant images in our mind that get mixed up with our personal experiences, becoming part of our brand belief. These memories are all the more powerful because we swear they actually happened to us.

That Wascally Wabbit!

University of Washington researcher Elizabeth Loftus and her research partner Jacquie Pickrell have done hundreds of studies on the creation of false memories. In one, under the guise of evaluating a bogus advertising campaign, they showed participants a picture of Bugs Bunny in front of Disneyland, and then had them do other tasks. Later in the study, the participants were asked to remember a trip to Disneyland. Thirty percent of them remembered shaking Bugs Bunny’s hand when they visited the Magic Kingdom, which would be a neat trick, considering that Bugs is a Warner Bros. character and would not be welcome on Disney turf.

We all tend to elaborate on our personal experiences to make them more interesting. We “sharpen” our stories, downplaying the trivial and embellishing (and sometimes completely fabricating) the key points to impress others. When we do this, we will draw from any sources handy, including things we’ve seen or heard in the past that we’ve never personally experienced. To go back to last week’s Coke example, our fond memories of Coke might just as likely come from a Madison Avenue copywriter as from our own childhood. We idealize and color in the details so our conversations can be more interesting. It goes back to the human need to curry social favor by gossiping. When you have this natural human tendency fueled by billions of dollars of advertising, it’s often difficult to know where our lives end and our fantasies take over.

This mix of personal experience and implanted images explains part of where our brand beliefs come from. Next week, I’ll look at the power of word of mouth and the opinions of others.

For Coke, Brand Love is Blind

First published August 28, 2008 in Mediapost’s Search Insider

In 2003, Read Montague had a “why” question that was nagging at him. If Pepsi was chosen by the majority of people in a blind taste test, why did Coke have the lion’s share of the cola market? It didn’t make sense. If Pepsi tasted better, why wasn’t it the market leader?

Fortunately, Read wasn’t just any cola consumer idly pondering the mysteries of brown sugared water. He had at his disposal a rather innovative methodology to explore his “why” question. Dr. Read Montague was the director of the Neuroimaging Lab at Baylor College of Medicine, and he just happened to have a spare multi-million-dollar fMRI machine kicking around. fMRI machines allow us to see which parts of the brain “light up” when we undertake certain activities. Although fMRI scanning’s roots are in medicine, lately the technology has been applied with much fanfare to the world of market research. Montague is one of the pioneers of this area, due in part to the 2003 Coke/Pepsi study, which went by the deceptively uninteresting title “Neural Correlates of Behavioral Preference for Culturally Familiar Drinks.” (Note: Montague has since picked up a knack for catchier titles. His recent book is “Why Choose This Book? How We Make Decisions.”)

Believing in Brands

In my last two columns, I talked about how our emotions and beliefs are inseparably wrapped up in many brand relationships. The strongest brands evoke a visceral response, beyond the reach of reason, coloring our entire engagement and relationship with them. It doesn’t matter if these brands are better than their competitors. The important thing is that we believe they are better, and these beliefs are reinforced by emotional cues.

This certainly seemed to be the case with Coke and Pepsi. The market split was beyond reason. In fact, the irrationality of the split caused Coca-Cola to make the biggest marketing blunder in history in 1985. A brief recap of marketing history is in order here, because it highlights one of the challenges with market research: namely, that there’s a huge gulf between what we say and what we do, thanks to the mysterious depths of our sub-cortical mind. It also sheds light on the strength of our brand beliefs.

Coke’s Crisis

Through the ’70s and ’80s, Coke’s market-share lead over Pepsi was eroding to the point where, by the mid-’80s, Coke’s lead was only a few points over its rival. This was due in no small part to the success of the Pepsi Challenge advertising campaign, in which the majority of cola drinkers indicated they preferred the taste of Pepsi in blind taste tests. This wasn’t just a marketing ploy. Coke did their own blind taste tests and the results were the same. If people didn’t know what they were drinking, they preferred Pepsi. It was panic time in Atlanta.

Enter New Coke. It was a lighter, sweeter drink that was possibly the most thoroughly tested consumer product in history. Coke was preparing to kill the golden goose, and it wasn’t a decision they were taking lightly. If they were changing the secret recipe, they were making damned sure they were right before they rolled it out to market. So they tested, and tested, and tested again. Coke meticulously did their homework, according to all the standard market research metrics. The results were consistent and overwhelming. In the tests, people loved New Coke. Not only did it blow the original Coke formulation away, it also trounced Pepsi. They asked people if they liked New Coke. Yes! Would you buy New Coke? Yes! Would this become your new favorite soft drink? Yes, Yes and Yes! Feeling exceptionally confident, Coke bit the bullet and rolled out New Coke. And the results, as they say, are now history.

Classic Coke’s Comeback

On April 23, 1985, Coke shocked the world by announcing the new formulation and ceasing production of the original formula. And, at first, it appeared the move was a success. In many markets, people bought New Coke at the same levels they had bought original Coke. They kept saying they preferred the taste. But there was one critical market that New Coke had to win over, and that wasn’t going to be easy. In the Southeast, the home of Coke, people weren’t so easy to convince. There, ardent Coke fans were mounting a counteroffensive. By May, the “Old Coke” backlash had spread to other parts of the U.S. and was picking up steam. Soon, a black market emerged when deprived Coke drinkers started bringing in the original Coke from overseas markets where the old formulation was still being bottled. By July, the Old Coke counteroffensive was so strong that the company capitulated and reintroduced the original formulation as Coke Classic. Within months, Coke Classic was outselling both New Coke and Pepsi and began racking up the highest sales increases for Coke in decades, rebuilding Coke’s lead in the market.

Although it eventually worked out in their favor, Coke executives were puzzled by the whole episode. President Don Keough admitted in a press conference, “There is a twist to this story which will please every humanist and will probably keep Harvard professors puzzled for years. The simple fact is that all the time and money and skill poured into consumer research on the new Coca-Cola could not measure or reveal the deep and abiding emotional attachment to original Coca-Cola felt by so many people.”

Keough was amazingly prescient in this statement, although he had the university wrong. Almost two decades later, it would be a professor at Baylor, not Harvard, who would dig further into the puzzle. Next column, we’ll see what one of the very first neuromarketing studies uncovered when Montague replicated the Pepsi Challenge in an fMRI machine.