The “Savanna” Hypothesis of Online Design

First published December 6, 2012 in Mediapost’s Search Insider

I’m currently reading a fascinating paper titled “Evolved Responses to Landscapes” by Gordon Orians and Judith Heerwagen that was written back in 1992. The objective was to see if humans have an evolved preference for an ideal habitat. The researchers called their hunch the Savanna Hypothesis, noting that because Homo sapiens spent much of our evolutionary history on the plains of tropical Africa, we should have a natural affinity for this type of landscape.

Your typical savanna features some cover from vegetation and trees, but not so much that natural predators could advance unnoticed. The environment should offer enough lushness to indicate the presence of ample food and water. It should allow for easy mobility. And it should be visually intriguing, encouraging us to venture out and explore our habitat.

Here’s a quote from the paper: “Landscapes that aid and encourage exploration, wayfinding and information processing should be more favored than landscapes that impede these needs.”

The researchers, after showing participants hundreds of pictures of different landscapes, found significant support for their hypothesis. Most of us have a preference for landscapes that resemble our evolutionary origin. And the younger we are, the more predictable the preference. With age, we tend to adapt to where we live and develop a preference for it.

In reading this study, I couldn’t help but equate it to Pirolli and Card’s Information Foraging Theory. The two PARC researchers said that the strategies we use to hunt for information in a hyperlinked digital format (such as a webpage) seem to correspond to evolved optimal foraging strategies used by many species, including humans back in our hunting and foraging days. If, as Pirolli and Card theorized, we borrow inherent strategies for foraging and adapt them for new purposes, like looking for information, why wouldn’t we also apply evolved environmental preferences to new experiences, like the design of a Web page?

Consider the description of an ideal habitat quoted above. We want to be able to quickly determine our navigation options, with just a teaser of things still to explore. We want open space, so we can quickly survey our options, but we also want the promise of abundant rewards, either in the form of food and sustenance — or, in the online case, information and utility. After all, what is a website but another environment to navigate?

I find the idea of creating a home page design that incorporates a liberal dose of intrigue and promise particularly compelling. In a physical space, such an invitation may take the form of a road or pathway curving behind some trees or over a gentle rise. Who can resist such an invitation to explore just a little further?

Why should we take the same approach with a home page or landing page? Orians and Heerwagen explain that we tend to “way-find” through new environments in three distinct stages: First, we quickly scan the environment to decide if it’s even worth exploring. Do we stay here or move on to another, more hospitable location? This very quick scan really frames all the interactions to take place after it. After this “go/no-go” scan, we then start surveying the environment to gather information and find the most promising path to take. The final phase — true engagement with our surroundings — is when we decide to stay put and get some things done.

Coincidentally (or not?), I have found users take a very similar approach to evaluating a webpage. We’ve even entrenched this behavior into a usability best practice we call the “3 Scan Rule.” The first scan is to determine the promise of the page. Is it visually appealing? Is it relevant? Is it user-friendly? All these questions should be answerable in one second or less. In fact, a study at Carleton University found that we can reliably judge the aesthetic appeal of a website in as short a span as 50 milliseconds. That’s less time than it takes to blink your eye.

The second scan is to determine the best path. This typically involves exploring the primary navigation options, scanning graphics and headings and quickly looking at bullet lists to determine how “rich” the page is. Is it relevant to our intent? Does it look like there’s sufficient content for us to invest our time? Are there compelling navigation options that offer us more? This scan should take no more than 10 seconds.

Finally, there’s the in-depth scan. It’s here where we more deeply engage with the content. This can take anywhere from several seconds to several minutes.

At this point, the connection between the inherently pleasing characteristics of the African savanna and a well-designed website is no more than a hypothesis on my part. But I have to admit: I find the concept intriguing, like a half-obscured pathway disappearing over a swell on the horizon, waiting to be explored.

The Death of the Purchase Funnel

First published June 21, 2012 in Mediapost’s Search Insider

A recent series of three posts on the Harvard Business Review blog by Karen Freeman, Patrick Spenner and Anna Bird explored some of the myths about how consumers make decisions. I think each of these has direct implications for search marketers, so over the next three weeks I want to explore them one at a time.

The first, titled “What Do Consumers Really Want? Simplicity,” talks about the breakdown of the purchase funnel. The HBR bloggers contend the funnel, which has been around for well over a hundred years, no longer applies to consumer behaviors. I concur, and said as much in my book, “The BuyerSphere Project.”

We differ a little on the reason for the demise, however. The HBR team attributes it to cognitive overload on the part of the consumer: we’re simply bombarded by too much information on the purchase path to fit it all into the nice, simple, rational filtering process captured in St. Elmo Lewis’s elegant funnel-shaped model. The accompanying research, a survey of 7,000 consumers, shows decision simplicity was the number-one thing people wanted when making a purchase.

I agree that information overload is part of it, but I also believe that two other factors have led to the end of the purchase funnel. First, the purchase funnel assumes a rational filtering of options based on careful consideration of a consumer’s requirements. I don’t think this was ever the case. Emotions drive our decisions, and more often than not, rationality is applied after the fact to justify our choices. Prior to the Internet, emotion was tough to distinguish from rationality, as buyers didn’t have much control over the content they accessed during the consideration process. They were limited to whatever the marketer pushed out at them. So, whether driven by emotion or logic, they tended to go down the same path and display many of the same behaviors. Given the pervasive belief in humans as rational animals at the time, it was not surprising that a logic-driven model emerged.

The other factor, as I alluded to, was that the Internet shifted the balance of power during the purchase process. Suddenly, we could choose which paths we took during the consideration process. We weren’t all forced down the same path, according to some arbitrary notion of a funnel-shaped model.

What became clear, when consumers could choose their own path, was that the simplicity of the funnel model bore little relation to the actual paths consumers took. And those paths were driven by emotion. People bounced all around, depending on what they were looking to buy. They could go all the way to a shopping cart, then suddenly abandon it and go back to a destination that would be considered “upper funnel” and start all over again. From the outside looking in, this resembled a bowl of spaghetti much more than it did a funnel.

So, we have a trio of suspects in the death of the purchase funnel: cognitive overload, emotion trumping logic, and consumers gaining more control over their consideration path. All lead to an interesting concept to consider: laying an online path that anticipates the emotional needs of the buyer, and yet keeps the information presented from overwhelming them. For example, marketing has traditionally taken a “turf war” approach to persuading a prospect: “as long as they’re on our turf, we do everything possible to close the sale.”

But this doesn’t really match up with the three trends we’re talking about. What online consumers are looking for, according to the HBR research, is a safe online zone that will make their decision easier. Rather than going from site to site, collecting information and filtering out overt marketing hyperbole, what consumers want is a single information source they can trust. They want to be able to lower their “anti-BS” shields, because being a rational, cynical shopper takes a lot of time and effort.

Today, it’s extremely rare to find that trustworthy information on a site you can actually purchase from, but it’s starting to happen in some high-activity categories, where independent portals facilitate this simplified approach to shopping. Travel comes to mind.

But let’s consider what would happen if a brand’s website took this approach. Rather than bombard a prospect with exaggerated sales pitches, putting them on the defensive, what if a more neutral, objective experience was provided?  After all, why shouldn’t the decision path be built on your own turf, giving you a home field advantage?

A Look at the Future through Google Glasses?

First published June 7, 2012 in Mediapost’s Search Insider

“A wealth of information creates a poverty of attention.” — Herbert Simon

Last week, I explored the dark recesses of the hyper-secret Google X project.  Two X Projects in particular seem poised to change our world in very fundamental ways: Google’s Project Glass and the “Web of Things.”

Let’s start with Project Glass. In a video entitled “One Day…,” the future seen through the rose-colored hue of Google Glasses seems utopian, to say the least. In the video, we step into the starring role, strolling through our lives while our connected Google Glasses feed us a steady stream of information and communication — a real-time connection between our physical world and the virtual one.

In theory, this seems amazing. Who wouldn’t want to have the world’s sum total of information available instantly, just a flick of the eye away?

Couple this with the “Web of Things,” another project said to be in the Google X portfolio.  In the Web of Things, everything is connected digitally. Wearable technology, smart appliances, instantly findable objects — our world becomes a completely inventoried, categorized and communicative environment.

Information architecture expert Peter Morville explored this in his book “Ambient Findability.” But he cautions that things may not be as rosy as you might think after drinking the Google X Kool-Aid. This excerpt is from a post he wrote on Ambient Findability: “As information becomes increasingly disembodied and pervasive, we run the risk of losing our sense of wonder at the richness of human communication.”

And this brings us back to the Herbert Simon quote — knowing and thinking are not the same thing. Our brains were not built on the assumption that all the information we need is instantly accessible. And, if that does become the case through advances in technology, it’s not at all clear what the impact on our ability to think might be. Nicholas Carr, for one, believes that the Internet may have the long-term effect of actually making us less intelligent. And there’s empirical evidence he might be right.

In his book “Thinking, Fast and Slow,” Nobel laureate Daniel Kahneman says that while we have the ability to make intuitive decisions in milliseconds (Malcolm Gladwell explored this in “Blink”), humans also have a nasty habit of using these “fast” mental shortcuts too often, relying on gut calls that are often wrong (or, at the very least, biased) when we should be using the more effortful “slow” and rational capabilities that tend to live in the frontal part of our brain. We rely on beliefs, instincts and habits, at the expense of thinking. Call it informational instant gratification.

Kahneman recounts a seminal study in psychology, where four-year-old children were given a choice: they could have one Oreo immediately, or wait 15 minutes (in a room with the offered Oreo in front of them, with no other distractions) and have two Oreos. About half of the children managed to wait the 15 minutes. But it was the follow-up study, where the researchers followed what happened to the children 10 to 15 years later, that yielded the fascinating finding:

“A large gap had opened between those who had resisted temptation and those who had not. The resisters had higher measures of executive control in cognitive tasks, and especially the ability to reallocate their attention effectively. As young adults, they were less likely to take drugs. A significant difference in intellectual aptitude emerged: the children who had shown more self-control as four year olds had substantially higher scores on tests of intelligence.”

If this is true for Oreos, might it also be true for information? If we become a society that expects to have all things at our fingertips, will we lose the “executive control” required to actually think about things? Wouldn’t it be ironic if Google, in fulfilling its mission to “organize the world’s information,” inadvertently transgressed against its other mission, “don’t be evil,” by making us all attention-deficit, intellectually diminished, morally bankrupt dough heads?

The “Field of Dreams” Dilemma

First published May 3, 2012 in Mediapost’s Search Insider

There’s a chicken and an egg paradox in mobile marketing. Many mobile sites sit moldering in the online wilderness, attracting few to no visitors. The same could be said for many elaborate online customer portals, social media outposts or online communities. Somebody went to the trouble to build them, but no one came. Why?

Well, it could be because no one thinks to go to the trouble to look for them, just as no one expects to find a ball diamond in the middle of an Iowa cornfield. It wasn’t until the ghosts of eight Chicago White Sox players, banned for life from playing the game they loved, started playing on the “Field of Dreams” that anyone bothered to drive to Ray Kinsella’s farm.  There was suddenly a reason to go.

The problem with many out-of-the-way online destinations is that there is no good reason to go. Because of this, we make two assumptions:

- If there is no good reason for a destination to exist, then the destination probably doesn’t exist. Or,

- If it does exist, it will be a waste of time and energy to visit.

If we jump to either of these two conclusions, we don’t bother looking for the destination. We won’t make the investment required to explore and evaluate. You see, there is a built-in mechanism that makes a “Build it and they will come” strategy a risky bet.

This built-in mechanism comes from behavioral ecology and is called the “marginal value theorem.” It was first identified by Eric Charnov in 1976 and has since been borrowed to explain behaviors in online information foraging by Peter Pirolli, amongst others. The idea behind it is simple: We will only invest the time and effort to find a new “patch” of online information if we think it’s better than “patches” we already know exist and are easy to navigate to.  In other words, we’re pretty lazy and won’t make any unnecessary trips.

This cost/benefit calculation is done largely at a subconscious level and will dictate our online behaviors. It’s not that we make a conscious decision not to look for new mobile sites or social destinations. But unbeknownst to us, our brain is already passing value judgments that will tend to keep us going down well-worn paths. So, if we are looking for information or functionality that we would be unlikely to find in a mobile site or app, but we know of a website that has just what we’re looking for and time is not urgent, we’ll wait until we’re in front of our regular computer to do the research. We automatically disqualify the mobile opportunity because our “marginal value” threshold has not been met.
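The patch-leaving logic Charnov formalized can be sketched in a few lines of Python. This is an illustrative toy, not code from Charnov’s paper or Pirolli’s work: the exponential gain curve, the parameter names and the values chosen are all my own assumptions. It shows the core rule — leave a patch of diminishing returns when the instantaneous gain rate falls to the average rate for the whole trip, travel time included.

```python
import math

def optimal_leave_time(patch_reward, decay, travel_time, dt=0.001, t_max=50.0):
    """Numerically find when to leave a diminishing-returns patch.

    Marginal value theorem rule: leave when the instantaneous gain
    rate g'(t) drops to the overall average rate, gain / (travel + t).
    The gain curve g(t) = R * (1 - exp(-t/decay)) is an illustrative
    assumption, chosen only because it flattens out over time.
    """
    t = dt
    while t < t_max:
        gain = patch_reward * (1 - math.exp(-t / decay))        # g(t)
        marginal = (patch_reward / decay) * math.exp(-t / decay)  # g'(t)
        average = gain / (travel_time + t)
        if marginal <= average:
            return t
        t += dt
    return t_max

# A "patch" that's cheap to reach vs. one that costs more travel time.
near = optimal_leave_time(patch_reward=10, decay=2.0, travel_time=1.0)
far = optimal_leave_time(patch_reward=10, decay=2.0, travel_time=5.0)
print(near < far)  # True: a costlier trip justifies a longer stay
```

The online analogy falls out of the parameters: when known destinations are “near” (familiar, easy to navigate to), the threshold for abandoning them in favor of an unproven mobile site or social outpost is rarely met.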

The same is true for social sites. If we believe that there is a compelling reason to seek out a Facebook page (promotional offers, information not available elsewhere) then we’ll go to the trouble to track it down. Otherwise, we’ll stick to destinations we know.

I believe the marginal value theorem plays an important role in defining the scope of our online worlds. We only explore new territory when we feel our needs won’t be met by destinations we already know and are comfortable with.  And if we rule out entire categories of content or functionality as being unlikely to adapt well to a mobile or social environment (B2B research in complex sales scenarios being one example) then we won’t go to the trouble to look for them.

I should finish off by saying that this is a moving target. Once there is enough critical mass in new online territory to reset visitor expectations, you’ve increased the “richness” of the patch to the point where the “marginal value” conditions are met and the brain decides it’s worth a small investment of time and energy.

In other words, if Shoeless Joe Jackson, Chick Gandil, Eddie Cicotte, Lefty Williams, Happy Felsch, Swede Risberg, Buck Weaver and Fred McMullin all start playing baseball in a cornfield, then it’s probably worth hopping on the tractor and headin’ over to the Kinsella place!

Search and the Age of “Usefulness”

First published April 19, 2012 in Mediapost’s Search Insider

There has been a lot of digital ink spilled over the recent changes to Google’s algorithm and what it means for the SEO industry. This is not the first time the death knell has been rung for SEO. It seems to have more lives than your average barnyard cat. But there’s no doubt that Google’s recent changes throw a rather large wrench into the industry as a whole. In my view, that’s a good thing.

First of all, from the perspective of the user, Google’s changes mark an evolution of search: from a tool used to find information to one we use to do the things we want to do. It’s moving from using relevance as the sole measure of success to incorporating usefulness.

The algorithm is changing to keep pace with the changes in the Web as a whole. No longer is it just the world’s biggest repository of text-based information; it’s now a living, interactive, functional network of apps, data and information, extending our capabilities through a variety of connected devices.

Google had to introduce these back-end changes. Not to do so would have guaranteed the company would have soon become irrelevant in the online world.

As Google succeeds in consistently interpreting more and more signals of user intent, it can become more confident in presenting a differentiated user experience. It can serve a different type of results set to a query that’s obviously initiated by someone looking for information than it does to the user who’s looking to do something online.

We’ve been talking about the death of the monolithic set of search results for years now. In truth, it never died; it just faded away, pixel by pixel. The change has been gradual, but for the first time in several years of observing search, I can truthfully say that my search experience (whether on Google, Bing or the other competitors) looks significantly different today than it did three years ago.

As search changes, so do the expectations of users. And that affects the “use case” of search. In its previous incarnation, we accepted that search was one of a number of necessary intermediate steps between our intent and our ultimate action. If we wanted to do something, we accepted the fact that we would search for information, find the information, evaluate the information and then, eventually, take the information and do something with it. The limitations of the Web forced us to take several steps to get us where we wanted to go.

But now, as we can do more of what we want to online, the steps are being eliminated. Information and functionality are often seamlessly integrated in a single destination. So we have less patience with seemingly superfluous steps between us and our destination. That includes search.

Soon, we will no longer be content with considering the search results page as a sort of index to online content. We will want the functionality we know exists served to us via the shortest possible path. We see this beginning as answers to common information requests are pushed to the top of the search results page.

What this does, in terms of user experience, is make the transition from search page to destination more critical than ever. As long as search was a reference index, the user expected to bounce back and forth between potential destinations, deciding which was the best match. But as search gets better at unearthing useful destinations, our “post-click” expectations will rise accordingly.  Whatever lies on the other side of that search click better be good. The changes in Google’s algorithm are the first step (of several yet to come) to ensure that it is.

What this does for SEO specialists is to suddenly push them toward considering a much bigger picture than they previously had to worry about. They have to think in terms of a search user’s unique intent and expectations. They have to understand the importance of the transition from a search page to a landing page, and the functionality that page has to offer. And, most of all, they have to counsel their clients on the increasing importance of “usefulness” — and how potential customers will use online to seek and connect to that usefulness. If the SEO community can transition to that role, there will always be a need for them.

The SEO industry and the Google search quality team have been playing a game of cat and mouse for several years now. It’s been more “hacking” than “marketing” as SEO practitioners prod for loopholes in the Google algorithm. All too often, a top ranking was the end goal, with no thought to what that actually meant for true connections with prospects.

In my mind, if that changes, it’s perhaps the best thing to ever happen in the SEO business.

The Facebook Personality Test

First published February 2, 2012 in Mediapost’s Search Insider

I’ve always believed that you could learn everything you needed to know about a person by asking them who their favorite Beatle was. To back up the efficacy of this bulletproof psychological profiling tool, there are several online Beatle personality tests.  I mean really, if you can’t build an online quiz from it, how valid can a psychological tool be? I, personally, am primarily a John Lennon, with George Harrison undertones. But for the test to work, you actually have to know the Beatles on a fairly intimate level, and their status as a cultural baseline is regrettably eroding.

Now, you could use a more standard but much less interesting approach: say, a Myers-Briggs personality sorter, or the “colors” test. I seem to bounce back and forth between an “INFJ” and an “INTJ.”

But a recent paper by Ashwini Nadkarni and Stefan Hofmann (both from Boston University) in the journal Personality and Individual Differences offered a more timely way to sort out the extroverts from the introverts (and the neurotics from the narcissists). It seems our usage of Facebook may provide a remarkably accurate glimpse into who we are.

For example, in their review of previous studies, Nadkarni and Hofmann found that people with neurotic tendencies like Facebook’s Wall, while those less neurotic prefer photos.

Several columns back I bemoaned the fact that the more we use social networking, the less social we seem to become. It appears that wasn’t just my perception. A 2009 study by E.S. Orr et al discovered that shy people love Facebook and spend way more time on it than non-shy people.  Ironically, for all the time they spend Facebooking, their friend networks are much smaller than their more gregarious but less-Facebook-engaged counterparts.

Narcissists also spend a higher-than-average amount of time on Facebook — over an hour a day. They use the social site to promote themselves through profiles and photos. Conversely, multiple studies have shown that many Facebook fans use it to pump up low self-esteem. Through self-promotion and validation through virtual connections, they’ve found a kinder, gentler and more accepting world than the one that lies outside their bedroom door.

Studies have found that the less intense and demanding connections formed online can actually help more socially awkward Facebook users expose more of their personalities than they can in a typical social environment. Some are more themselves on Facebook than they are in the real world. It’s not really a matter of creating a new persona, but rather of exposing the one you’ve always possessed but felt too fragile to put out there in your day-to-day interactions.

Finally, what does it say about you if you use Facebook only sparingly or not at all? Are you hopelessly disconnected? Not at all. The more individualistic you are, the more goal-oriented you are and the more disciplined you are, the less you tend to use Facebook. Ironically, if this matches your personality type and you do use Facebook at all, you probably have a very healthy network of friends. I don’t know where I fall on the scale, but I probably spend less than an hour a month on Facebook — and for some reason, I seem to have a network of close to 400 friends.

Maybe it’s my irresistible INFJ/John Lennon-like qualities. I hope that doesn’t sound too narcissistic.

As We May Remember

First published January 12, 2012 in Mediapost’s Search Insider

In his famous Atlantic Monthly essay “As We May Think,” published in July 1945, Vannevar Bush forecast a mechanized extension to our memory that he called a “memex”:

Consider a future device for individual use, which is a sort of mechanized private file and library. It needs a name, and to coin one at random, “memex” will do. A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory.

Last week, I asked you to ponder what our memories might become now that Google puts vast heaps of information just one click away. And ponder you did:

I have to ask, WHY do you state, “This throws a massive technological wrench into the machinery of our own memories,” inferring something negative??? Might this be a totally LIBERATING situation? – Rick Short, Indium Corporation

Perhaps, much like using dictionaries in grade school helped us to learn and remember new information, Google is doing the same? Each time we “google” and learn something new aren’t we actually adding to our knowledge base in some way? – Lester Bryant III

Finally, I ran across this. Our old friend Daniel Wegner (transactive memory) and colleagues Betsy Sparrow and Jenny Liu from Columbia University actually did research on this very topic this past year. It appears from the study that our brains are already adapting to having Internet search as a memory crutch. Participants were less likely to remember information they looked up online when they knew they could access it again at any time. Also, if they looked up information that they knew they could remember, they were less likely to remember where they found it. But if the information was determined to be difficult to remember, the participants were more likely to remember where they found it, so they could navigate there again.

The beautiful thing about our capacity to remember things is that it’s highly elastic. It’s not restricted to one type of information. It will naturally adapt to new challenges and requirements. As many rightly commented on last week’s column, the advent of Google may introduce an entirely new application of memory — one that unleashes our capabilities rather than restricts them. Let me give you an example.

If I had written last week’s column in 1987, before the age of Internet search, I would have been very hesitant to use the references I did: the Transactive Memory Hypothesis of Daniel Wegner, and the scene from “Annie Hall.” That’s because I couldn’t remember them that well. I knew (or thought I knew) what the general gist was, but I had to search them out to reacquaint myself with the specific details of each. I used Google in both cases, but I was already pretty sure that Wikipedia would have a good overview of transactive memory and that YouTube would have the clip in question. Sure enough, both those destinations topped the results that Google brought back. So, my search for transactive memory utilized my own transactive memorizations. The same was true, by the way, for my reference to Vannevar Bush at the opening of this column.

By knowing what type of information I was likely to find, and where I was likely to find it, I could check the references to ensure they were relevant and summarize what I quickly researched in order to make my point. All I had to do was remember high-level summations of concepts, rather than the level of detail required to use them in a meaningful manner.

One of my favorite concepts is the idea of consilience, literally the “jumping together” of knowledge. I believe one of the greatest gifts of the digitization of information is the driving of consilience. We can now “graze” across multiple disciplines without having to dive too deep in any one, and pull together something useful — and occasionally amazing. Deep dives are now possible “on demand.” Might our memories adapt to become consilience orchestrators, able to quickly sift through the sum of our experience and gather together relevant scraps of memory to form the framework of new thoughts and approaches?

I hope so, because I find this potential quite amazing.

Is Google Replacing Memory?

First published on January 5, 2012 in Mediapost’s Search Insider

“How old is Tony Bennett anyway?”

We were sitting in a condo on a ski hill with friends, counting down to the new year, when the ageless Mr. Bennett appeared on TV. One of us wondered aloud about just how many new years he has personally ushered in.

In days gone by, the question would have just hung there. It would probably have  filled up a few minutes of conversation. If someone felt strongly about the topic, it might even have started an argument. But, at the end of it all, there would be no definitive answer — just opinions.

This was the way of the world. We were restricted to the knowledge we could each jam in our noggin. And if our opinion conflicted with another’s, all we could do was argue.

In “Annie Hall,” Woody Allen set up the scenario perfectly. He and Diane Keaton are in a movie line. Behind them, an intellectual blowhard is in mid-stream pontification on everything from Fellini’s movie-making to the media theories of Marshall McLuhan. Finally, Allen can take it no more and asks the camera, “What do you do with a guy like this?” The “guy” takes exception and explains to Allen that he teaches a course on McLuhan at Columbia. But Allen has the last laugh — literally. He pulls the real Marshall McLuhan out from behind an in-lobby display, and McLuhan proceeds to intellectually eviscerate the Columbia professor.

“If only life was actually like this,” Allen sighs to the camera.

Well, now, some 35 years later, it may be. While we may not have Marshall McLuhan in our back pocket, we do have Google. And for many questions, Google is the final arbiter. Opinions quickly give way to facts (or, at least, information presented as fact online). No longer do we have to wonder how old Tony Bennett really is. Now, we can quickly check the answer.

If you stop to think about this, it has massive implications.

In 1985, Daniel Wegner proposed something along these lines when he introduced the hypothetical concept of transactive memory. An extension of “group mind,” transactive memory posits a type of meta-memory, where our own capacity to remember things is enhanced in a group by knowing who in that group knows more than we do about any given topic.

In its simplest form, transactive memory is my knowing that my wife tends to remember birthdays and anniversaries — but I remember when to pay our utility bills. It’s not that I can’t remember birthdays and my wife can’t remember to pay bills, it’s just that we don’t have to go to the extra effort if we know our partner has it covered.

If Wegner’s hypothesis is correct (and it certainly passes my own smell test) then transactive memory has been around for a long time. In fact, many believe that the acquisition of language, which allowed for the development of transactive memory and other aids to survival in our ancestral tribes, was probably responsible for the “Great Leap Forward” in our own evolution.

But with ubiquitous access to online knowledge, transactive memory takes on a whole new spin. Now, not only do we no longer have to remember as much as we used to, we don’t even have to remember who else might have the answer. For much of what we need to know, it’s as simple as searching for it on our smartphones. Our search engine of choice does the heavy lifting for us.

This throws a massive technological wrench into the machinery of our own memories. Much of what memory was originally intended for may no longer be required. And this raises the question, “If we no longer have to remember stuff we can just look up online, what will we use our memory for?”

Something to ponder at the beginning of a new year.

Oh, and in case you’re wondering, Anthony Dominick Benedetto was born Aug. 3, 1926, making him 85.

Can Websites Make Us Forgetful?

First published December 15, 2011 in Mediapost’s Search Insider

Ever open the door to the fridge and then forget what you were looking for?

Or ever head to your bedroom and then, upon entering it, forget why you went there in the first place?

Me too. And it turns out we’re not alone. New research from the University of Notre Dame’s Gabriel Radvansky indicates this sudden “threshold” amnesia is actually pretty common. Walking from one room to another triggers an “event boundary” in the mind, which seems to act as a cue for the brain to file away short-term memories and move on to the next task at hand. If your task causes you to cross one of these event boundaries and you don’t keep your working memory actively engaged through deliberate focusing of attention, it could be difficult to remember what it was that motivated you in the first place.

Ever since I read the original article, I’ve wondered whether the same thing applies to navigating websites. If we click a link to move from one page to another, I suspect the brain could well send out a “flush” signal that clears the slate of working memory. I think we cross these event boundaries all the time online.

Let’s unpack this idea a bit, because if my suspicions prove to be correct, it opens up some very pertinent points when we think of online experiences. Working memory is directed by active attention. It is held in place by a top-down directive from the brain. So, as long as we’re focused on memorizing a discrete bit of information (for example, a phone number), we’ll be able to keep it in our working memory. But when we shift our attention to something else, the working memory slate is wiped clean. The spotlight of attention determines what is retained in working memory and what is discarded.

Radvansky’s research indicates that moving from one room to another may act as a subconscious environmental cue that the things retained in working memory (i.e. our intent for going to the new room in the first place) can be flushed if we’re not consciously focusing our attention on it. It’s a sort of mental “palate cleansing” to ready the brain for new challenges. Radvansky discovered that it wasn’t distance or time that caused things to be forgotten. It was passing through a doorway. Others could travel exactly the same distance but remain in the same room and not forget what their original intention was. But as soon as a doorway was introduced, the rate of forgetting increased significantly.

Interestingly, one of the variations of Radvansky’s research used virtual environments, and the results were the same. So, if a virtual representation of a doorway triggered a boundary, would moving from one page of a website to another do the same?

I think there are some distinctions here to keep in mind. If you go to a page with intent and you’re following navigational links to get closer to that intent, it’s probably pretty safe to assume that there is some “top-down” focus on that intent. As long as you keep following the “intent” path, you should be able to keep it in focus as you move from page to page. But what if you get distracted by a link on a page and follow that? In that case, your attention has switched and moving to another page may trigger the same “event boundary” dump of working memory. In that case, you may have to retrace your steps to pick up the original thread of intent.

I just finished benchmarking the user experience across several different sites for a client and found that consistent navigation is surprisingly rare, especially on B2B sites. If you did happen to forget your original intent as you navigated a few clicks deep into a website, backtracking could prove to be a challenge.

I also suspect that’s why a consistent look and feel as you move from page to page could be important. It may serve to lessen the “event boundary” effect, because there are similarities in the environment.

In any case, Dr. Radvansky’s research opens the door (couldn’t resist) to some very interesting speculations. I do know that in the 10 B2B websites I visited during the benchmarking exercise, the experience ranged from mildly frustrating to excruciatingly painful.

In the worst of these cases, a little amnesia might actually be a blessing.

The ZMOT Continued: More from Jim Lecinski

First published July 28, 2011 in Mediapost’s Search Insider

Last week, I started my conversation with Jim Lecinski, author of the new ebook from Google: “ZMOT, Winning the Zero Moment of Truth.” Yesterday, fellow Search Insider Aaron Goldman gave us his take on ZMOT. Today, I’ll wrap up by exploring with Jim the challenge that the ZMOT presents to organizations and some of the tips for success he covers in the book.

First of all, if we’re talking about what happens between stimulus and transaction, search has to play a big part in the activities of the consumer. Lecinski agreed, but was quick to point out that the online ZMOT extends well beyond search.

Jim Lecinski: Yes, Google or a search engine is a good place to look. But sometimes it’s a video, because I want to see [something] in use…Then [there’s] your social network. I might say, “Saw an ad for Bobby Flay’s new restaurant in Las Vegas. Anybody tried it?” That’s in between seeing the stimulus, but before… making a reservation or walking in the door.

We see consumers using… a broad set of things. In fact, 10.7 sources on average are what people are using to make these decisions between stimulus and shelf.

A few columns back, I shared the pinball model of marketing, where marketers have to be aware of the multiple touchpoints a buyer can pass through, potentially heading off in a new and unexpected direction at each point. This muddies the marketing waters to a significant degree, but it really lies at the heart of the ZMOT concept:

Lecinski: It is not intended to say, “Here’s how you can take control,” but you need to know what those touchpoints are. We quote the great marketer Woody Allen: “Eighty percent of success in life is just showing up.”

So if you’re in the makeup business, people are still seeing your ads in Cosmo and Modern Bride and Elle magazine, and they know where to buy your makeup. But if Makeupalley is now that place between stimulus and shelf where people are researching, learning, reading, reviewing, making decisions about your $5 makeup, you need to show up there.

Herein lies an inherent challenge for the organization looking to win the ZMOT: whose job is that? Our corporate org chart reflects marketplace realities that are at least a generation out of date. The ZMOT is virgin territory, which typically means it lies outside of one person’s job description. Even more challenging, it typically cuts across several departments.

Lecinski: We offer seven recommendations in the book, and the first one is “Who’s in charge?” If you and I were to go ask our marketer clients, “Okay, stimulus — the ad campaigns. Who’s in charge of that? Give me a name,” they could do that, right? “Here’s our VP of National Advertising.”

Shelf — if I say, “Who’s in charge of winning at the shelf?” “Oh. Well, that’s our VP of Sales” or “Shopper Marketing.” And if I say, “Product delivery,” it’s “Well, that’s our VP of Product Development” or “R&D” or whatever. So there’s someone in charge of those classic three moments. Obviously the brand manager’s job is to coordinate those. But when I say, “Who’s in charge of winning the ZMOT?” Well, usually I get blank stares back.

If you’re intent on winning the ZMOT, the first thing you have to do is make it somebody’s job. But you can’t stop there. Here are Jim’s other suggestions:

The second thing is, you need to identify what are those zero moments of truth in your category… Start to catalogue what those are and then you can start to say, “Alright. This is a place where we need to start to show up.”

The next is to ask, “Do we show up and answer the questions that people are asking?”

Then we talk about being fast and being alert, because up to now, stimulus has been characterized as an ad you control. But sometimes it’s not. Sometimes it’s a study that’s released by an interest group. Sometimes it’s a product recall that you don’t control. Sometimes it’s a competitor’s move. Sometimes it’s Colbert on his show poking a little fun at Miracle Whip from Kraft. That wasn’t in your annual plan, but now there’s a ZMOT because, guess what happens — everybody types in “Colbert Miracle Whip video.” Are you there, and what do people see? Because that’s how they’re going to start making up their mind before they get to Shoppers Drug Mart to pick up their Miracle Whip.

Winning the ZMOT is not a cakewalk. But it lies at the crux of the new marketing reality. We’ve begun to incorporate the ZMOT into the analysis we do for clients. If you don’t, you’re leaving a huge gap between the stimulus and shelf — and literally anything could happen in that gap.