How Our Brain Decides How Long We Look at Something

This week, I’ve talked about how our attention focusing mechanism moves the spotlight of foveal attention around different environments: a Where’s Waldo picture, a webpage, a website with advertising and a search engine results page. I want to wrap up the week with another study, this one looking at the role of brain waves in regulating how we shift the spotlight of attention from one subject to another.

Eye Spy

If you do eye tracking research, you soon learn to distinguish fixations and saccades. Fixations occur when we let our foveal attention linger on an element, even if only for a fraction of a second. Saccades are the movements our eyes make from one fixation to the next. These movements take mere milliseconds. Below I show an example of a single session “gaze plot” – the recording of how one individual’s eyes took in an ad (the image is from Tobii, the maker of the eye tracking equipment we use). The dots represent fixations, as measured in milliseconds. The bigger the dot, the longer the eye stayed there. The lines connecting the dots are saccades.
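As a technical aside, the fixation/saccade split in a gaze plot is something the tracking software computes from raw gaze samples. Below is a minimal sketch of one common approach, a dispersion-threshold filter; the thresholds and sample data are illustrative, not the values Tobii’s software actually uses.

```python
def classify_fixations(samples, max_dispersion=25, min_duration=3):
    """samples: list of (x, y) gaze points recorded at a fixed sampling rate.
    Returns (start, end) index pairs for fixations; samples outside any
    returned window belong to saccades."""
    fixations, i = [], 0
    while i + min_duration <= len(samples):
        j = i + min_duration
        xs, ys = zip(*samples[i:j])
        # Dispersion = horizontal spread + vertical spread of the window
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= max_dispersion:
            # Grow the window while the eye stays within the threshold
            while j < len(samples):
                xs, ys = zip(*samples[i:j + 1])
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                    break
                j += 1
            fixations.append((i, j - 1))
            i = j
        else:
            i += 1  # still mid-saccade; slide forward one sample
    return fixations

# Five samples near (100, 100), then five near (400, 300):
# two fixations joined by one saccade.
gaze = [(100, 100), (102, 101), (99, 98), (101, 103), (100, 100),
        (400, 300), (402, 301), (401, 299), (399, 300), (400, 302)]
print(classify_fixations(gaze))  # [(0, 4), (5, 9)]
```

In a real recording, the dot size in the gaze plot corresponds to the duration of each returned window.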

When you look at a scene like the one shown here, the question becomes: how do you move from one element to another? It’s not like you consciously think, “Okay, I’ve spent enough time looking at the logo; perhaps it’s time to move to the headline of the ad, or the rather attractive bosom in the upper right corner (I suspect the participant was male).” The movements happen subconsciously. Your eyes move to digest the content of the picture of their own accord, guided by what appears to be interesting in your overall scan of the picture and by your attention focusing mechanisms.

Keeping Our Eyes Running on Time

Knowing that the eye tends to move from spot to spot subconsciously, Dr. Earl Miller at MIT decided to look more closely at the timing of these shifts of attention and what might cause them. He found that our brains appear to have a built-in timer that moves our eyes around a scene. Our foveal focus shifts about 25 times a second, and this shift seems to be regulated by our brain waves. Our brain cycles between high activity phases and low activity phases, activity that can be recorded through EEG scanning. Neuroscientists have known that these waves seem to be involved in the focusing of attention and the functions of working memory, but Miller’s study showed a conclusive link between these wave cycles and the refocusing of visual attention. It appears our brains have a built-in metronome that dictates how we engage with visual stimuli. The faster the cycles, the faster we “think.”

But it’s not as if we let our eyes dash around the page every 1/25 of a second. Our eyes linger in certain spots and jump quickly over others. Somewhere, something is dictating how long the eye stays in one spot. As our brain waves tick out the measures of attention, something in our brains decides where to invest those measures and how many should be invested.

The Information Scent Clock is Ticking

Here, I take a huge philosophical leap and tie together two empirical bodies of knowledge with nothing scientifically concrete, that I’m aware of, to connect them. Let’s imagine for a second that Miller’s timing of eye movements might play some role in Eric Charnov’s Marginal Value Theorem, which in turn plays a part in Peter Pirolli’s Information Foraging Theory.

Eric Charnov discovered that animals seem to have an innate and highly accurate sense of when to leave one source of food and move on to another, based on a calculation of the energy that would have to be expended versus the calories that would be gained in return. Obviously, organisms that are highly efficient at surviving would flourish in nature, passing on their genes, while less efficient candidates would die out. Charnov’s marginal value calculation would be a relatively complex one if we sat down to work it out on paper (Charnov did exactly that, with some impressive charts and formulas), but I’m guessing the birds Charnov was studying didn’t take this approach. The calculations required are done by instinct, not differential calculus.
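For the curious, the heart of that marginal value calculation can be sketched in a few lines of code. This is a toy model with made-up numbers, not Charnov’s actual formulation: a patch yields diminishing returns, and the forager’s best move is to stay just long enough to maximize its overall rate of gain, travel time between patches included.

```python
def patch_gain(t):
    """Cumulative calories after t seconds in a patch.
    Diminishing returns: each second yields less than the one before."""
    return 100 * (1 - 0.9 ** t)

def optimal_stay(travel_time, max_t=200):
    """Stay time that maximizes the overall rate: gain / (travel + stay)."""
    best_t, best_rate = 0, 0.0
    for t in range(1, max_t + 1):
        rate = patch_gain(t) / (travel_time + t)
        if rate > best_rate:
            best_t, best_rate = t, rate
    return best_t

# Charnov's prediction: the farther apart the patches, the longer
# it pays to stay in each one before moving on.
assert optimal_stay(travel_time=5) < optimal_stay(travel_time=50)
```

The birds, of course, arrive at the same answer without the for loop.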

So, if birds can do it, how do humans fare? Well, we do pretty well when it comes to food. In fact, we’re so good at seeking high calorie foods that it’s coming back to bite us. We have highly evolved tastes for high fat, high sugar, calorie rich foods. In the 20th century, this built-in market preference caused food manufacturers to pump out these foods by the truckload. Now, well over a third of the population is considered obese. Evolution sometimes plays nasty tricks on us, but I digress.

Pirolli took Charnov’s marginal value theorem and applied it to how we gather information in an online environment. Do we use the same instinctive calculations to determine how long to spend on a website looking for the information we’re seeking? Is our brain doing subconscious calculations the entire time we’re browsing online, telling us to either click deeper on a site or give up and go back to Google? I suspect the answer is yes. And, if that’s the case, are the brain waves that dictate how and where we spend our attention part of this calculation, a mental hourglass that somehow factors into Charnov’s theorem? If so, it behooves us to ensure our websites convey a sense of information scent as soon as possible. The second someone lands on our site, the clock is already ticking. Each tick that goes by without visitors finding something relevant devalues our patch, according to Charnov’s theorem.

How Our Brains “Google”

So far this week, I’ve covered how our brains find Waldo, scan a webpage and engage with online advertising. Today, I’m looking at how our brains help find the best result on a search engine.

Searching by Habit

First, let’s accept the fact that most of us have now had a fair amount of experience searching for things on the internet, to the point that we’ve made Google a verb. What’s more important, from a neural perspective, is that searching is now driven by habit. And that has some significant implications for how our brains work.

Habits form when we do the same thing over and over again. In order for that to happen, we need what’s called a stable environment. Whatever we’re doing, habits only form when the path each time is similar enough that we don’t have to think about each individual junction and intersection. If you drive the same way home from work each day, your brain will start navigating by habit. If you take a different route every single day, you’ll be required to think through each and every trip. Parts of the brain called the basal ganglia seem to be essential in recording these habitual scripts, acting as a sort of control mechanism that tells the brain when it’s okay to run on autopilot and when it needs to wake up and pay attention. Ann Graybiel from MIT has done extensive work exploring habitual behaviors and the role of the basal ganglia.

The Stability of the Search Page

A search results page, at least for now, provides such a stable environment. Earlier this week, I looked at how our brain navigates webpages. Even though each website is unique, there are some elements that are stable enough to allow for habitual conditioned routines to form. The main logo or brand identifier is usually in the upper left. The navigation bar typically runs horizontally below the logo. A secondary navigation bar is typically found running down the left side. The right side is usually reserved for a feature sidebar or, in the case of a portal, advertising. Given these commonalities, there is enough stability in most websites’ designs that we navigate for the first few seconds on autopilot.

Compared to a website, a search engine results page is rigidly structured, providing the ideal stable environment for habits to form. This has meant a surprising degree of uniformity in people’s search behaviors. My company, Enquiro, has been looking at search behavior for almost a decade now, and we’ve found that it’s remained remarkably consistent. We start in the upper left, break off a “chunk” of 3 to 5 results and scan it in an “F” shaped pattern. The following excerpts from The BuyerSphere Project give a more detailed walk-through of the process.

1 – First, we orient ourselves to the page. This is something we do by habit, based on where we expect to see the most relevant result. We use a visual anchor point, typically the blue border that runs above the search results, and use this to start our scanning in the upper left, a conditioned response we’ve called the Google Effect. Google has taught us that the highest relevance is in the upper left corner.

2 – Then, we begin searching for information scent. This is a term from information foraging theory, which we’ve covered in our eye tracking white papers. In this particular case, we’ve asked our participants to look for thin, light laptops for their sales team. Notice how the eye tracking hot spots are over the words that offer the greatest “scent”, based on the intention of the user. Typically, this search for scent is a scanning of the first few words of the title of the top 3 or 4 listings.

3 – Now the evaluation begins. Based on the initial scan of the beginnings of titles from the top 3 or 4 listings, users begin to compare the degree of relevance of some alternatives, typically by comparing two at a time. We tend to “chunk” the results page into sections of 3 or 4 listings at a time to compare, as this has been shown to be a typical limit of working memory when considering search listing alternatives.

4 – It’s this scanning pattern, roughly in the shape of an “F”, that creates the distinct scan pattern we first called the “Golden Triangle” in our first eye tracking study. Users generally scan vertically first, creating the upright of the “F”, then horizontally when they pick up a relevant visual cue, creating the arms of the “F”. Scanning tends to be top heavy, with more horizontal scanning on top entries, which over time creates the triangle shape.


5 – Often, especially if the results are relevant, this initial scan of the first 3 or 4 listings will result in a click. If two or more listings in the initial set look to be relevant, the user will click through to both and compare the information scent on the landing pages. This back and forth clicking is referred to as “pogo sticking”. It’s this initial set of results that represents the prime real estate on the page.

6 – If the initial set doesn’t result in a successful click through, the user continues to “chunk” the page for further consideration. The next chunk could be the next set of organic results, or the ads on the right hand side of the page. There, the same F-shaped scan patterns will be repeated. By the way, there’s one thing to note about the right hand ads. Users tend to glance at the first ad and make a quick evaluation of its relevance. If the first ad doesn’t appear relevant, the user will often not scan any further, passing judgement on the usefulness and relevance of all the ads on the right side based on their impression of the ad on top.
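The “chunking” in steps 3 and 6 above is simple enough to express in code. A hypothetical sketch, assuming the working memory limit of 3 to 4 listings described in the walk-through:

```python
def chunk_results(listings, chunk_size=4):
    """Split an ordered results page into working-memory sized sets;
    the scanner evaluates one chunk fully before moving to the next."""
    return [listings[i:i + chunk_size]
            for i in range(0, len(listings), chunk_size)]

page = ["result %d" % n for n in range(1, 11)]  # ten organic listings
chunks = chunk_results(page)
print(len(chunks))  # 3 chunks: 4 + 4 + 2 listings
print(chunks[0])    # the "prime real estate" set scanned first
```

Only if no listing in the current chunk passes the relevance comparison does the scanner move to the next one, which is why the first chunk is so valuable.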

So, that explains how habits dictate our scanning pattern. What I want to talk more about today is how our attention focusing mechanism might impact our search for information scent on the page.

The Role of the Query in Information Scent

Remember the role of our neuronal chorus, firing in unison, in drawing our attention to potential targets in our total field of vision. Now, text based web pages don’t exactly offer a varied buffet of stimuli, but I suspect the role of key words in the text of listings might serve to help focus our attention.

In a previous post, I mentioned that words are basically abstract visual representations of ideas or concepts. The shape of the letters in a familiar word can draw our attention. It tends to “pop out” at us from the rest of the words on the page. I suspect this “pop out” effect could be the result of Dr. Desimone’s neural synchrony patterns. We may have groups of neurons tuned to pick certain words out of the sea of text we see on a search page.

The Query as a Picture

This treating of a word as a picture rather than text has interesting implications for the work our brain has to do. The interpretation of text actually calls a significant number of neural mechanisms into play. It’s fairly intensive processing. We have to visually interpret the letters, run them through the language centres of our brain and translate them into a concept; only then can we capture the meaning of the word. It happens quickly, but not nearly as quickly as the brain can absorb a picture. Pictures don’t have to be interpreted. Our understanding of a picture requires fewer mental “middle men” in our brain, so it takes a shorter path. Perhaps that’s why one picture is worth a thousand words.

But in the case of logos and very well-known words, we may be able to skip some of the language processing we would normally have to do. The shape of the word might be so familiar that we treat it more like an icon or picture than a word. For example, if you see your name in print, it tends to immediately jump out at you. I suspect the shape of the word is so familiar that our brain processes it through a quicker path than a typical word. We process it as a picture rather than language.

Now, if this is the case, the most obvious candidate for this “express processing” behavior would be the actual query we use. And we have a “picture” of what the word looks like already in our minds, because we just typed it into the query box. This would mean that this word would pop out of the rest of the text quicker than other text. And, through eye tracking, there are very strong indications that this is exactly what’s happening. The query used almost invariably attracts foveal attention quicker than anything else. The search engines have learned to reinforce this “pop out” effect by using hit bolding to put the query words in bold type whenever they appear in the results set.
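As an illustration, hit bolding is a simple transformation. Here’s a rough sketch of the idea, assuming exact, case-insensitive term matching (the real engines also handle stemming and phrase variants):

```python
import re

def hit_bold(snippet, query):
    """Wrap each query term in <b> tags wherever it appears in a snippet,
    reinforcing the "pop out" effect described above."""
    for term in query.split():
        # \b keeps us from bolding fragments inside longer words
        snippet = re.sub(r"\b(%s)\b" % re.escape(term),
                         r"<b>\1</b>", snippet, flags=re.IGNORECASE)
    return snippet

print(hit_bold("Light laptops for your sales team", "light laptops"))
# <b>Light</b> <b>laptops</b> for your sales team
```

The bolded terms give the already-primed query shape even more visual weight on the page.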

Do Other Words Act as Scent Pictures?

If this is true of the query, are there other words that trigger the same pop out effect? I suspect so. We’ve seen that certain words attract more than their fair share of attention, depending on the intent of the user. Well-known brands typically attract foveal attention. So do prices and salient product features. Remember, we don’t read search listings, we scan them. We focus on a few key words and, if there is a strong enough match of information scent to our intent, we click on the listing.

The Intrusion of Graphics

Until recently, the average search page was devoid of graphics. But all the engines are now introducing richer visuals into many results sets. A few years ago we did some eye tracking to see what the impact might be. The impact, as we found out, was that the introduction of a graphic significantly changed the conditioned scan patterns I described earlier in the post.

This seems to be a perfect illustration of Desimone’s attention focusing mechanism at work. If we’re searching for Harry Potter, or in the case of the example heat map shown below, an iPhone, we likely have a visual image already in mind. If a relevant image appears on the page, it hits our attention alarms with full force. First of all, it stands out from the text that surrounds it. Secondly, our pre-tuned neurons immediately pick it out in our peripheral vision as something worthy of foveal focus because it matches the picture we have in our mind. And thirdly, our brain interprets the relevancy of the image much faster than it can the surrounding text. It’s an easier path for the attention mechanisms of our brain to go down, and our brains follow the same rules as my sister-in-law: no unnecessary trips.

The result? The F-shaped scan pattern, which is the most efficient scan pattern for an ordered set of text results, suddenly becomes an E-shaped pattern. The center of the E is on the image, which immediately draws our attention. We scan the title beside it to confirm relevancy, and then we have a choice to make: do we scan the section above or below? Again, our peripheral vision helps make this decision by scanning for information scent above and below the image. Words that “pop out” could lure us up or down. Typically, we expect greater relevancy higher on the page, so we would move up more often than down.

Tomorrow, I’ll wrap up my series of posts on how our brains control what grabs our attention by looking at another study that indicates we might have a built-in timer that governs our attention span, and we’ll revisit the concept of the information patch, looking at how long we decide to spend “in the patch.”

How Our Brain Scans a Webpage

Yesterday, I explained how our brain finds “Waldo.” To briefly recap the post:

  • We have two neural mechanisms for seeing things we might want to pay attention to: a peripheral scanning system that takes in a wide field of vision and a focused (foveal) system that allows us to drill down to details
  • We have neurons that are specialists in different areas: i.e. picking out colors, shapes and disruptions in patterns
  • We use these recruited neuronal swat teams to identify something we’re looking for in our “mind’s eye” (the visual cortex) prior to searching for it in our environment
  • These swat teams focus our attention on our intended targets by synchronizing their firing patterns (like a mental Flash Mob) which allows them to rise above the noise of the other things fighting for our attention.

Today, let’s look at the potential implications of this in our domain, specifically interactions with websites.

But First: A Word about Information Scent

I’ve talked before about Pirolli’s Information Foraging Theory (and in another post from this blog). Briefly, it states that we employ the same strategies we use to find food when we’re looking for information online. That’s because, just like food, information tends to come in patches online, and we have to make decisions about the promise of the patch to determine whether we should stay there or find a new patch. There’s another study I’ve yet to share (it will be coming in a post later this week) that indicates our brain might have a built-in timer that controls how much time we spend in a patch and when we decide to move on.

The important point for this post is that we have a mental image of the information we seek. We picture our “prey” in our mind before looking for it. And, if that prey can be imagined visually, this will begin to recruit our swat team of neurons to help guide us to the part of the page where we might see it. Just like we have a mental picture of Waldo (from yesterday’s post) that helps us pick him out of a crowd, we have a mental picture of whatever we’re looking for.

Pirolli talks about information scent. These are the clues on a page that signal the information we seek lies beyond a link or button. Now, consider what we’ve learned about how the brain chooses what we pay attention to. If a visual representation of information is relevant, it acts as a powerful presentation of information scent. The brain processes images much faster than text (which has to be translated by the brain). We would have our neuronal swat team already primed for the picture, singing in unison to draw the spotlight of our attention towards it.

Neurons Storming Your Webpage

First, let me share some of the common behaviors we’ve seen through eye tracking on people visiting websites (in an example from The BuyerSphere Project). I’ll try to interpret what’s happening in the brain:

The heat map shows the eye activity on a mocked up home page. Remember, eye tracking only captures foveal attention, not peripheral, so we’re seeing activity after our brain has already focused the spotlight of attention. For example, notice how the big picture has almost no eye tracking “heat” on it. Most of the time, we don’t have to focus our fovea on a picture to understand what’s in it (the detail rich Waldo pictures would be the exception). Our peripheral vision is more than adequate to interpret most pictures. But consider what happens when the picture matches the target in our “mind’s eye”. The neurons draw our eye to it.

One thing to think about: words shown in text are pictures too. I’ll be coming back to this theme a couple of times – but a word is nothing more than a picture that represents a concept. For example, the Sun logo in the upper left (1) is nothing more than a picture that our brain associates with the company Sun Microsystems. To interpret this word, the brain first has to interpret the shape of the word. That means there are neurons that recognize straight edges, others that recognize curved edges and others that look for the overall “shape” of the word. Words too can act as information targets that we picture mentally before seeing them in front of us. For example, let’s imagine that we’re a developer. The word “DEVELOPER” (2) has a shape that is recognizable to us because we’ve seen it so often: the straight strokes of the E’s and V’s, sandwiched between the curves of the D’s, O’s and P’s. As we scan the overall page, our “Developer” neurons may suddenly wake up, synchronize their firing and draw the eye here as well. “Developer” already has a prewired connection in our brains. This is true for all the words we’re most familiar with, including brands like Sun. This is why we see a lot of focused eye activity on these areas of the picture.

Intent Clustering

In the last part of today’s post, I want to talk about a concept I spent some time on in the BuyerSphere Project: Intent Clustering. I’ve always known this makes sense from an information scent perspective, but now I know why from a neural perspective as well.

Intent clustering is creating groups of relevant information cues in the same area of the page. For example, for a product category on an e-commerce page, an intent cluster would include a picture of the product, a headline with the product category name, short bullet points with salient features and brands and perhaps relevant logos. An Intent cluster immediately says to the visitor that this is the right path to take to find out more about a certain topic or subject. The page shown has two intent clusters that were aligned with the task we gave, one in the upper right sidebar (3) and one in the lower left hand corner (4). Again, we see heat around both these areas.

Why are intent clusters “eye candy” for visitors? It’s because we’ve stacked the odds in favor of these clusters being noticed peripherally. We’ve included pictures, brands, familiar words and hints of rich information scent in well chosen bullet points. This combination is almost guaranteed to set our neural swat teams singing in harmony. Once the cluster is scanned in peripheral vision, the conductor of our brain (the FEF I talked about in yesterday’s post) swings our attention spotlight towards it for more engaged consumption, generating the heat we see in the above heatmap.

Tomorrow, I’ll be looking at how these mechanisms can impact our engagement with online display ads.

Summer Stories: How I Became a Researcher

First published August 13, 2009 in Mediapost’s Search Insider

About six years ago, I had one of those life-changing moments that set me on a new path. I’ve always been curious. I’ve always had questions, and up to that point in my life, I was usually able to find an answer, with enough perseverance. But in 2003, I had a question that no one seemed able to answer.  It didn’t seem to be an especially difficult question, and I knew someone had the answer. They just weren’t sharing it.

The Unanswerable Question

The question was this: what percentage of searchers click on the organic results and what percentage click on the sponsored ads? Today, that’s not even a question; it’s common knowledge for search marketers. But in 2003, that wasn’t the case. Sponsored search ads were still in their infancy (Overture had just been acquired by Yahoo, and Google’s AdWords was only a couple years old) and no one at either engine was sharing the clickthrough breakdowns between organic and paid.

I reached out to everyone I knew in the industry, but either they didn’t know, or they weren’t willing to go public with the info. My connections into Google and Yahoo were nonexistent at the time. No one, it seemed, had the answer. My curiosity was stymied. And that’s when my revelation happened. If no one had the answer, perhaps I could provide it.

At the time, research was not something Enquiro did. When we wanted to find out an answer, we combed through the forums, just like everyone else. But there seemed to be a noticeable gap in available information. There was plenty of discussion about technical SEO tactics, but no one seemed to be interested in how people actually used search engines.

To me, this was an unforgivable oversight. If we were using search as a marketing channel, shouldn’t we have some understanding of how our prospects used search? Off the top of my head, I jotted down a list of several questions I had about how people actually search; questions that appeared to have no readily available answers. It was at that point that I officially became a researcher.

Discovering “Why”

Our first research project set the path we would follow for much of what came after: we simply looked at how people used search to do things. Our methodology has become much tighter, and we’ve since added eye tracking and even neuro-scanning to our arsenal, but from the beginning, our research was more focused on “why” than “what.” The first paper was called “Inside the Mind of the Searcher” and it’s still referenced on a regular basis. Frankly, we were surprised by how quickly it was picked up in the industry. Suddenly, we became the experts on search user behavior, a crown I was uncomfortable with at the beginning. Yes, we were exploring new ground, but I always worried about how representative this was of the real world. Did people really do what we said they did, or was it just a research-created anomaly?

Defining the Golden Triangle

For us, the groundbreaking study was our first eye tracking study, done through Eyetools in San Francisco. I had read the Poynter study about how people interacted with online publications and was fascinated. “What if,” I wondered, “we did this with a search engine?” I found a similarly curious cohort in Kevin Lee from DidIt and together with Eyetools we launched the first study, which discovered the now-famous “Golden Triangle.” I remember sitting with Kevin in a speaker prep room at a show whose name escapes me, looking at the very first results of the data. The pattern jumped off the page:

“Look at that!” I said, “It’s a triangle!”

Kevin, always the search optimizer, said, “We need something catchy to call it, something that we can optimize for. The Magic Triangle?”

Because the heat map tends to indicate the most popular areas in a reddish yellow color, the answer was right in front of us. I can’t remember whether it was Kevin or I who first said it, but as soon as we said it, we knew the name would stick: “It’s a gold color… The Golden Triangle?”

Is It Real?

Even with the release of the study and the quick acceptance, I still questioned whether this represented real behavior. It was later that year when I got the confirmation I needed. I had just presented the results during a session at an industry show and was stepping down from the stage. Someone was quietly standing in the corner and came over as I started to head out of the room.

“Hi. I just wanted to let you know. I work with Yahoo on user experience and your heat map looks identical to our internal ones. I actually thought you had somehow got your hands on ours.” The validation was a few years in coming, but very welcome when it finally arrived.

Today, ironically, things have come full circle. I have talked to sales and engineering teams at all the major engines and much of the research they refer to about user behavior comes from Enquiro.

And the answer to my original question has held remarkably consistent in the past 6 years: What percentage of users click on paid ads vs. organic listings? For commercial searches, it’s about 70% organic, 30% paid. Just in case you were curious.

A Cognitive Walk Through of Searching

First published October 23, 2008 in Mediapost’s Search Insider

Two weeks ago, I talked about the concept of selective perception, how subconsciously we pick and choose what we pay attention to. Then, last week, I explained how engagement with search is significantly different than engagement with other types of advertising. These two concepts set the stage for what I want to do today. In this column, I want to lay out a step-by-step hypothetical walk-through of our cognitive engagement with a search page.

Searching on Auto Pilot

First, I think it’s important to clear up a common misunderstanding. We don’t think our way through an entire search interaction. The brain only kicks into cognitive high gear (involving the cortex) when it absolutely needs to. When we’re engaged in a mental task, any mental task, our brain is constantly looking for cognitive shortcuts to lessen the workload required. Most of these shortcuts involve limbic structures at the sub-cortical level, including the basal ganglia, hippocampus, thalamus and nucleus accumbens. This is a good thing, as these structures have been honed over successive generations to simplify even the most complicated tasks. They’re the reason driving is much easier for you now than it was the first time you climbed behind the wheel. These structures and their efficiencies also play a vital role in our engagement with search.

So, to begin with, our mind identifies a need for information. Usually, this is a sub task that is part of a bigger goal. The goal is established in the prefrontal cortex and the neural train starts rolling toward it. We realize there’s a piece of information missing that prevents us from getting closer to our goal – and, based on our past successful experiences, we determine that a search engine offers the shortest route to gain the information. This is the first of our processing efficiencies. We don’t deliberate long hours about the best place to turn. We make a quick, heuristic decision based on what’s worked in the past. The majority of this process is handled at the sub-cortical level.

The Google Habit

Now we have the second subconscious decision. Although we have several options available for searching, the vast majority of us will turn to Google, because we’ve developed a Google habit. Why spend precious cognitive resources considering our options when Google has generally proved successful in the past? Our cortex has barely begun to warm up at this point. The journey thus far has been on autopilot.

The prefrontal cortex, home of our working memory, first sparked to life with the realization of the goal and the identification of the sub task, locating the missing piece of information. Now, the cortical mind is engaged once again as we translate that sub task into an appropriate query. This involves matching the concept in our minds with the right linguistic label. Again, we’re not going to spend a lot of cognitive effort on this, which is why query construction tends to start simply and become longer and more complex only if required. In this process, the label, the query we plugged into the search box, remains embedded in working memory.

Conditioned Scanning

At this point, the prefrontal cortex begins to idle down again. The next exercise is handled by the brain as a simple matching game. We have the label, or query, in our mind. We scan the page in the path we’ve been conditioned to believe will lead to the best results: starting in the upper left, and then moving down the page in an F-shaped scan pattern. All we want to do is find a match between the query in our prefrontal cortex and the results on the page.

Here the brain also conserves cognitive processing energy by breaking the page into chunks of three or four results. This is due to the channel capacity of our working memory and how many discrete chunks of information we can process in our prefrontal cortex at a time. We scan the results looking first for the query, usually in the title of the results. And it’s here where I believe a very important cognitive switch is thrown.
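That chunked scan can be sketched in a few lines of code. This is a minimal illustration, not a cognitive model; the result dictionaries, the function names and the chunk size of four are my own assumptions:

```python
def chunk(results, size=4):
    """Break a results list into the small groups working memory can hold at once."""
    return [results[i:i + size] for i in range(0, len(results), size)]

def scan_for_query(results, query, size=4):
    """Scan chunk by chunk, stopping at the first title that contains the query."""
    for group in chunk(results, size):
        for result in group:
            if query.lower() in result["title"].lower():
                return result
    return None  # no match in any chunk; the searcher would refine the query
```

Chunking first and matching within each chunk mirrors the behavior described above: the page is digested three or four results at a time, not as one undifferentiated list.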

The “Pop Out” Effect

When we structure the query, we type it into a box. In the process, we remember the actual shape of the phrase. When we first scan results, we’re not reading words, we’re matching shapes. In cognitive psychology, this is called the “pop out” effect. We can recognize shapes much faster than we can read words. The shapes of our query literally “pop out” from the page as a first step toward matching relevance. The effect is enhanced by query (or hit) bolding. This matching game is done at the sub-cortical level.
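Hit bolding itself is a simple transformation. Here is a hedged sketch (the function name and markup are my assumptions, not any engine’s actual implementation) of wrapping each query term in bold tags:

```python
import re

def bold_hits(title, query):
    """Wrap every occurrence of each query term in <b> tags, preserving case.

    A naive sketch: it ignores stemming and phrase order, which real engines
    handle more carefully.
    """
    for term in query.split():
        title = re.sub(re.escape(term),
                       lambda m: "<b>" + m.group(0) + "</b>",
                       title, flags=re.IGNORECASE)
    return title
```

The case-insensitive match is what lets the bolded shape jump out even when the searcher’s query and the result title differ in capitalization or word endings.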

If the match is positive (shape = query), our eye lingers long enough to start picking up the detail around the word. We’ve seen in multiple eye tracking studies that foveal focus (the center of the field of vision) tends to hit the query in the title, while peripheral vision begins to pick up the words surrounding it. In our original eye tracking study, we called this semantic mapping. In his book “Information Foraging,” Peter Pirolli referred to this activity as spreading activation. It’s after the “pop out” match that the prefrontal cortex kicks into gear again. As additional words are picked up, they are used to reinforce the original scent cue. Additional words from the result pull concepts into the prefrontal cortex (a recognized URL, feature, supporting information, price, brand), which tend to engage different cortical regions as long-term memory labels are paged back into working memory. If enough matches with the original mental construct of the information sought are registered, the link is clicked.

Next week, we’ll look at the nature of this memory recall, including the elusive brand message.

Picking and Choosing What We Pay Attention To

First published October 9, 2008 in Mediapost’s Search Insider

In a single day, you will be assaulted by hundreds of thousands of discrete bits of information. I’m writing this from a hotel room on the corner of 43rd and 8th in New York. Just a simple three-block walk down 8th Avenue will present me with hundreds of bits of information: signs, posters, flyers, labels, brochures. By the time I go to sleep this evening, I will be exposed to over 3,000 advertising messages. Every second of our lives, we are immersed in a world of detail and distraction, all vying for our attention. Even the metaphors we use, such as “paying attention,” show that we consider attention a valuable commodity to be allocated wisely.


Lining Up for the Prefrontal Cortex

Couple this with the single-mindedness of the prefrontal cortex, home of our working memory. There, we work on one task at a time. We are creatures driven by a constant stack of goals and objectives. We pull our big goals out, one at a time, often break them into sub goals and tasks, and then pursue these with the selective engagement of the prefrontal cortex. The more demanding the task, the more we have to shut out the deluge of detail screaming for our attention.

Our minds have an amazingly effective filter that continually scans our environment, subconsciously monitoring all this detail and moving it into our attentive focus if our sub-cortical alarm system determines we should give it conscious attention. So, as we daydream our way through our lives, we don’t plow through pedestrians as they step in front of us. We’re jolted into conscious awareness, working memory is called into emergency duty until the crisis is dealt with, and then, post-crisis, we have to try to pick up the thread of what we were doing before. This example shows that working memory is not a multi-tasker. It’s impossible to keep mentally balancing your checkbook while you’re trying to avoid smashing into the skateboarding teen who just careened off the sidewalk. Only one task at a time, thank you.

You Looked, but Did You See?

The power of our ability to focus and filter out extraneous detail is a constant source of amazement for me. We’ve done several engagement studies where we captured several seconds of physical interaction with an ad on a web page (tracked through an eye tracker), then had participants swear there was no ad there. They looked at the ad, but their minds were somewhere else, quite literally. The extreme example of this can be found in an amusing experiment by University of Illinois cognitive psychologist Daniel J. Simons, now enjoying viral fame through YouTube. Go ahead and check it out before you read any further if you haven’t already seen it. (Count the number of times the white team passes the ball.)

This selective perception is the door through which we choose to let the world into our consciousness (did you see the gorilla in the video? If not, go back and try again). And it’s a door that advertisers have been trying to pry open for at least the past 200 years. We are almost never focused on advertising, so for it to be effective, it has to convince us to divert our attention from what we’re currently doing. The strategies behind this diversion have become increasingly sophisticated. Advertising can play to our primal cues: a sexy woman is almost always guaranteed to divert a man’s attention. Advertising can also throw a roadblock in front of our conscious objectives, forcing us to pass through it. TV ads work this way, literally bringing our stream of thought to a screeching halt and promising to pick it up again “right after these messages.” The hope is that there is enough engagement momentum for us to stay focused on the 30-second blurb for some product guaranteed to get our floors/teeth/shirts whiter.

Advertising’s Attempted Break-In

The point is that advertising almost never enjoys the advantage of having working memory actively engaged in trying to understand its message. Every variation has to use subterfuge, emotion or sheer force to hammer its way into our consciousness. This need has led the industry to search for a metric that measures the degree to which our working memory is on the job. In the industry, we call it engagement. The ARF defined engagement as “turning on a prospect to a brand idea enhanced by the surrounding media context.” Really, engagement is better described as smashing through the selective perception filter.

In a recent study, ARF acknowledged the importance of emotion as a powerful way to sneak past the guardhouse and into working memory. Perhaps more importantly, the study shows the power of emotion to ensure memories make it from short term to long term memory: “Emotion underlies engagement which affects memory of experience, thinking about the experience, and subsequent behavior. Emotion is not a peripheral phenomenon but involves people completely. Emotions have motivational properties, to the extent that people seek to maximize the experience of positive emotions and to minimize the experience of negative emotions. Emotion is fundamental to engagement. Emotion directs attention to the causally significant aspects of the experience, serves to encode and classify the ‘unusual’ (unexpected or novel) in memory, and promotes persisting rehearsal of the event-memory. In this way, thinking/feeling/memory articulates the experience to guide future behaviors.”

With this insight into the marketing mindset, honed by decades of hammering away at our prefrontal cortex, it’s little wonder why the marketing community has struggled with where search fits in the mix. Search plays by totally different neural rules. And that means its value as a branding tool also has to play by those same rules.  I’ll look at that next week.

Persuasion on the Search Results Page

First published January 3, 2008 in Mediapost’s Search Insider

Chris Copeland took out 2007 with one last jab at the whole “agencies getting it” thing. Much as I’m tempted to ring in the New Year by continuing to flog this particular horse, I’m going to bow to my more rational side. As Chris and Mike Margolin both rightly pointed out in their responses to my columns, we all have vested interests and biases that will inevitably cause us to see things from our own perspectives. Frankly, the perspective I’m most interested in at this point in this debate is the client’s, as this will ultimately be a question the marketplace decides. So, for now, I’ll leave it there.

But Chris did take exception to one particular point that I wanted to spill a little more virtual ink over: the idea of whether persuasion happens in search. Probably the cause for the confusion was my original choice of words. Rather than saying we don’t persuade people “in search,” I should have said “on the search page.” Let me explain further with a quick reference to the dictionary:

Persuade: to move by argument, entreaty, or expostulation to a belief, position, or course of action.

In the definition of persuade, the idea is to move someone from their current belief, position or course of action to a new one. The search results page is not the place to do this. And the reasons why are important to understand for the search marketer.

For quick reference, here’s Chris’s counterargument: Persuasion is at the heart of everything that we do in search — from where we place an ad on a page (Hotchkiss’ golden triangle study) to how we message. The experience we drive to every step of the process is about understanding behavior and how to better optimize for the purpose of connecting consumer intent with advertiser content.
I don’t disagree with Chris about the importance of search in the decision-making process, but I do want to clarify where persuasion happens. What we’re doing on the search results page is not persuading. We’re confirming. We’re validating. In some cases, we’re introducing. But we’re not persuading.

As Chris mentioned, at Enquiro we’ve spent a lot of time mapping out what search interactions look like. And they’re quick. Very quick. About 10 seconds, looking at 4 to 5 results. That’s 2 seconds per listing. In that time, all searchers can do is scan the title and pick up a few words. From that, they make a decision to click or not to click. They’re not reading an argument, entreaty or expostulation. They’re not waiting to be persuaded. They’re making a split-second decision based on the stuff that’s already knocking around in their cortex.

Part of the problem is that we all want to think we’re rational decision-making creatures. When asked in a market research survey, we usually indicate that we think before we click (or buy). This leads to the false assumption that we can be persuaded on the search page, because our rational minds (the part that can be persuaded) are engaged. But it’s just not true. It’s similar to people looking at a shelf of options in the grocery store. In one study (Gerald Zaltman, How Customers Think, p. 124), shoppers exiting a supermarket were asked if they had looked at competing brands and compared prices before making their decision. Most said yes. But observation proved otherwise. They spent only 5 seconds at the category location, and 90% only handled their chosen product. This is very similar to the gap between responses and actual behavior we’ve seen on search pages.

Now, if someone is in satisficing mode (looking for candidates for a consideration set for further research) you can certainly introduce alternatives for consideration. But the persuasion will happen well downstream from the search results page, not on it.

Am I splitting semantic hairs here? Probably. But if we’re going to get better at search marketing, we have to be obsessed with understanding search behavior and intent. Chris and I are in agreement on that. And that demands a certain precision with the language we use. I was at fault with my original statement, but similarly, I think it’s important to clear up where we can and can’t persuade prospects.

Of course, you may disagree and if so, go ahead, persuade me I’m wrong. I’ll give you 2 seconds and 6 or 7 words. Go!

On Your Search Menu Tonight

First published October 4, 2007 in Mediapost’s Search Insider

This week Yahoo unveiled a new feature. It doesn’t really change the search game that much in terms of competitive functionality. If anything, it’s another case of Yahoo catching up with the competition. But it may have dramatic implications from a user’s point of view. To illustrate that point further I’d like to share a couple of stories with you.

The feature is called Search Assist. You type your query in, and Yahoo provides a list under the query box with a number of possible ways you could complete the query. This follows in the footsteps of Google’s search suggestions in its toolbar. Currently, Google doesn’t offer this functionality within the standard Google query box, at least in North America. Ask also offers this feature.

Because Yahoo is late to the game, the company had the opportunity to up the functionality a little bit. For example, the suggestions that come from Yahoo can include the word you’re typing anywhere in the suggested query phrase. Google uses straight stemming, so the word you’re typing is always at the beginning of the suggested phrases. Yahoo also seems to be pulling from a larger inventory of suggested phrases. The few test queries I did brought back substantially more suggestions than did Google.
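The difference between the two approaches boils down to prefix matching versus substring matching. A minimal sketch, assuming a plain list of candidate phrases (real engines also rank suggestions by popularity, which this ignores):

```python
def prefix_suggest(typed, inventory):
    """Google-style stemming: the suggestion must start with what was typed."""
    typed = typed.lower()
    return [phrase for phrase in inventory if phrase.lower().startswith(typed)]

def substring_suggest(typed, inventory):
    """Yahoo-style matching: the typed word can appear anywhere in the phrase."""
    typed = typed.lower()
    return [phrase for phrase in inventory if typed in phrase.lower()]
```

Because every prefix match is also a substring match, the substring approach can only return an equal or larger set of suggestions, which is consistent with Yahoo serving more suggestions in my test queries.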

It’s not so much the functionality of this feature that intrigues me; it’s how it could affect the way we search. I personally have found that I come to rely on this feature in the Google toolbar more and more. Rather than structuring a complete query in my mind, I type the first few letters of the root word in and see what Google offers me. It leads me to select query phrases that I probably never would have thought of myself.

Some time ago I wrote that contrary to popular belief, we’ve actually become quite adept at paring our queries down to the essential words. It’s not that we don’t know how to launch an advanced query; it’s that most times, we don’t need to. This becomes even truer with search suggestions. All we have to do is think of one word, and the search engine will serve us a menu of potential queries. It reduces the effort required from the searcher, but let me tell you a story about how this might impact a company’s reputation online.

I Wouldn’t Recommend That Choice

Some time ago I got a voicemail from an equity firm. The woman who left a message was brash, a little abrasive and left a rather cryptic message, insisting that I had to phone her right back. Now, since I’m in the search game, getting calls from venture capitalists and investment bankers is nothing really new. But I’d never quite heard this tone from one of these prospecting calls before. So, I did as I usually do in these cases and decided to do a little more research on the search engines to determine whether I was actually going to return this call or not. I did my quick 30-second reputation check.

Normally, I would just type in the name of the firm and see what came up in the top 10 results. Usually, if there’s strong negative content out there, it’s worth paying attention to and it tends to collect enough search equity to break the top 10. This time, I didn’t even have to get as far as the results page. The minute I started typing the company name into my Google toolbar, the suggestions Google was providing me told the entire story: “company” scam, “company” fraud and “company” lawsuits. Of the top eight suggestions, over half of them were negative in nature. Not great odds for success. Needless to say, I never returned the call.

If these search suggestions are going to significantly alter our search patterns, we should be aware of what’s coming up in those suggestions for our branded terms. Type your company name into Yahoo or Google’s toolbar and see what variations are being served to you. Some of them may not be that appetizing.

Would You Prefer Szechuan?

My belief is that users will increasingly use this feature to structure their queries. It moves search one step closer to becoming a true discovery engine. One of the overwhelming characteristics of search user behavior is that we’re basically lazy. We want to expend a minimal amount of effort, but in return we expect a significant degree of relevance. Search suggestions allow us to enter a minimum of keystrokes, and the search engine obliges us with a full menu of options.

This brings me to my other story. Earlier this year we did some eye-tracking research on how Chinese citizens interact with the search engines Baidu and Google China. After we released the preliminary results of the study, I had a chance to talk to a Google engineer who worked on the search engine. In China, Google does provide real-time search suggestions right from the query box. The company found that it’s significantly more work to type a query in Mandarin than it is in most Western languages. Using a keyboard for input in China is, at best, a compromise. So Google found that because of the amount of work required to enter a query, the average query length was quite short in China, giving a substantially reduced degree of relevancy. In fact, many Chinese users would type in the bare minimum required and then would scroll to the bottom of the page, where Google showed other suggested queries. Then, the user would just click on one of these links. Hardly the efficient searching behavior Google was shooting for. After introducing real-time search suggestions for the query box, Google found the average length of query increased dramatically and supposedly, so did the level of user satisfaction.

Search query suggestions are just one additional way we’ll see our search behavior change significantly over the next year or two. Little changes, like a list of suggested queries or the inclusion of more types of content in our results pages, will have some profound effects. And when search is the ubiquitous online activity it is, it doesn’t take a very big rock to create some significant and far-reaching ripples.

Personalization Catches the User’s Eye

First published September 13, 2007 in Mediapost’s Search Insider

Last week, I looked at the impact the inclusion of graphics on the search results page might have on user behavior, based on our most recent eye tracking report. This week, we look at the impact that personalization might bring.

One of the biggest hurdles is that personalization, as currently implemented by Google, is a pretty tentative representation of what personalization will become. It only impacts a few listings on a few searches, and the signals driving personalization are limited at this point. Personalization is currently a test bed that Google is working on, but Sep Kamvar and his team have the full weight of Google behind them, so expect some significant advances in a hurry. In fact, my suspicion is that there’s a lot being held in reserve by Google, waiting for user sensitivity around the privacy issue to lessen a bit. We didn’t really expect to see the current flavor of personalization alter user behavior that much, because it’s not really making that much of a difference on the relevancy of the results for most users.

But if we look forward a year or so, it’s safe to assume that personalization would become a more powerful influencer of user behavior. So, for our test, we manually pushed the envelope of personalization a bit. We divided up the study into two separate sessions around one task (an unrestricted opportunity to find out more about the iPhone) and used the click data from the first session to help us personalize the data for the search experience in the second session. We used past sites visited to help us first of all determine what the intent of the user might be (research, looking for news, looking to buy) and secondly to tailor the personalized results to provide the natural next step in their online research. We showed these results in organic positions 3, 4 and 5 on the page, leaving base Google results in the top two organic spots so we could compare.

Stronger Scent

The results were quite interesting. On the nonpersonalized results pages, taken straight from Google (in signed-out mode), 18.91% of the time spent looking at the page went to these three results, 20.57% of the eye fixations landed there, and 15% of the clicks were on organic listings 3, 4 and 5. The majority of the activity was much further up the page, in the typical top-heavy Golden Triangle configuration.

But on our personalized results, participants spent 40.4% of their time on these three results, 40.95% of the fixations were on them, and they captured a full 55.56% of the clicks. Obviously, from the user’s point of view, we did a successful job of connecting intent and content with these listings, providing greater relevance and stronger information scent. We manually accomplished exactly what Google wants to do with the personalization algorithm.
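The percentages above are simple shares of total activity. As an illustration of how such figures are derived (the function and the (position, milliseconds) record format are my own assumptions, not our actual tooling):

```python
def attention_share(fixations, target_positions):
    """Return (share of fixation time, share of fixation count) on target results.

    fixations: list of (result_position, duration_ms) records from an eye tracker.
    """
    total_ms = sum(ms for _, ms in fixations)
    target_ms = sum(ms for pos, ms in fixations if pos in target_positions)
    target_count = sum(1 for pos, _ in fixations if pos in target_positions)
    return target_ms / total_ms, target_count / len(fixations)
```

Run over a session’s fixation log with organic positions 3, 4 and 5 as the targets, this kind of calculation yields the time-share and fixation-share figures quoted above.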

Scanning Heading South

Something else happened that was quite interesting. Last week I shared how the inclusion of a graphic changed our “F” shaped scanning patterns into more of an “E” shape, with the middle arm of the “E” aligned with the graphic. We scan that first, and then scan above and below. When we created our personalized test results pages, we (being unaware of this behavioral variation at the time) coincidentally included a universal graphic result in the number 2 organic position, as this is what we were finding on Google.

When we combined the new scanning entry point (users started on the graphic, then looked above and below to decide where to scan next) with the greater relevance and information scent of the personalized results, we saw a very significant relocation of scanning activity, moving down from the top of the Golden Triangle.

One of the things that distinguished Google in our previous eye tracking comparisons with Yahoo and Microsoft was its success in keeping the majority of scanning activity high on the page, whether those top results were organic or sponsored.

Top of page relevance has been a religion at Google. More aggressive presentation of sponsored ads (Yahoo) or lower quality and relevance thresholds of those ads (Microsoft) meant that on these engines (at least as of early 2006) users scanned deeper and were more likely to move past the top of the page in their quest for the most relevant results. Google always kept scan activity high and to the left.

But ironically, as Google experiments with improving the organic results set, both through the inclusion of universal results and more personalization, its biggest challenge may be making sure sponsored results aren’t left in the dust. Top of page scanning is ideal user behavior that also happens to offer a big win for advertisers. As results pages are increasingly in flux, it will be important to ensure that scanning doesn’t move too far from the upper left corner, at least as long as we still have a linear, one-dimensional, top-to-bottom list of results.

An Image Can Change Everything for the Searcher

First published September 6, 2007 in Mediapost’s Search Insider

For the many of you who responded to last week’s column about Nona Yolanda, I just want to take a few seconds to let you know that she passed away the evening of Sept. 3, having fought for five days more than doctors gave her. She was in the presence of her family right until the end. We printed off your comments and well wishes and posted them on the hospital door. It was somewhat surprising but very gratifying for my wife’s family to know that Nona’s story touched hearts around the world. Thank you. – G.H.

The world of the search results page is changing quickly, which means that we’re going to have to apply new rules for user behavior. This week, I’d like to look at some results from a recent eye tracking study we did about how we interact with search when graphic elements start to appear on the page. We also tested for the inclusion of personalized results. There’s a lot of ground to cover, so I’ll start off with Universal Search this week, and cover personalization and the future of search next week.

Warning: Graphic Depictions Ahead

You can’t get much more basic than the search results page we’ve all grown to know in the past decade. The 10 blue organic links and, more recently, the top and side sponsored ads have defined the interface. It’s been all text, ordered in a linear top to bottom format. The only sliver of real estate that saw any variation was the vertical results, sandwiched between top sponsored and top organic. So it was little wonder that we saw a consistent scan pattern emerge, which we labeled the Golden Triangle. It was created by an “F”-shaped scan pattern, where we scanned down the left hand side, looking for information scent, and then scanned across when we found it.

But that design paradigm is in the middle of change. The first and most significant of these will be the inclusion of different types of results on the same page, blended into the main results set. Google’s label is Universal Search, Ask’s is 3D Search and Yahoo’s is Omni Search. Whatever you choose to call it, it defines a whole new ball game for the user.

Starting at the Top…

In the classic pattern, users began at the top left corner because there was no real reason not to. We saw the page, our eyes swung up to the top left, and we started our “F”-shaped scans from there. Therefore, our interactions with the page were very top-heavy. The variable in this was the relevance of the top sponsored ads. If the engine maintained relevance by showing top sponsored ads only when they were highly relevant to the query (i.e., Google), we scanned them. If the engine bowed to the pressures of monetization and showed the ads even when they might not be highly relevant to the query (we saw more examples of this on Yahoo and Microsoft), users tended to move down quickly and the Golden Triangle stretched much further down the page. It was a mild form of search banner blindness. The one thing that remained consistent was the upper left starting point.

But things change, at least for now, when you start mixing result types into the equation. If the number 2 or 3 organic result is a blended one, with a thumbnail graphic, we assume the different presentation must mean the result is unique in some way. The graphic proves to be a powerful attractor for the eye, especially if it’s a relevant graphic. It’s information scent that can be immediately “grokked” (to use Jakob Nielsen’s parlance), and it often drew the eye quickly down, making this the new entry point for scanning. This reduces the top-to-bottom bias (or eliminates it entirely), making the blended result the first one scanned. We also saw a much more deliberate scanning of this listing.

Give Me an F, Give Me an E…

Another common behavior we identified is the creation of a consideration set: choosing three or four listings to scan before either picking the most relevant one or selecting another consideration set. In the pre-blended results set, this consideration set was usually the top three or four results. But in blended results, it’s usually the image result that is scanned first, then the results immediately above and below it. Rather than an “F”-shaped scan, this changes the pattern to an “E”-shaped scan, with the middle arm of the “E” focused on the graphic result.

The implications are interesting to consider. The engines and marketers have come to accept the top to bottom behavior as one of the few dominant behavioral characteristics, and it has given us a foundation on which to build our positioning strategy. But if the inclusion of a graphic result suddenly moves the scanning starting point, we have to consider our best user interception opportunities on a case-by-case basis.

Next week, I’ll look at further findings.