Wired for Information: A Brain Built to Google

First published August 26, 2010 in Mediapost’s Search Insider

In my last Search Insider, I took you on a neurological tour that gave us a glimpse into how our brains are built to read. Today, let’s dig deeper into how our brains guide us through an online hunt for information.

Brain Scans and Searching

First, a recap. In Nicholas Carr’s book “The Shallows: What the Internet Is Doing to Our Brains,” I focused on one passage — and one concept — in particular. It’s likely that our brains have built a short cut for reading. The normal translation from a printed word to a concept usually requires multiple mental steps. But because we read so much, and run across some words frequently, it’s probable that our brains have built short cuts to help us recognize those words simply by their shape in mere milliseconds, instantly connecting us with the relevant concept. So, let’s hold that thought for a moment.

The Semel Institute at UCLA recently did a neuroscanning study that monitored what parts of the brain lit up during the act of using a search engine online. What the institute found was that when we become comfortable with the act of searching, our brains become more active. Specifically, the prefrontal cortex, the language centers and the visual cortex all “light up” during the act of searching, as well as some sub-cortical areas.

It’s the latter of these that indicates the brain may be using “pre-wired” short cuts to directly connect words and concepts. It’s these sub-cortical areas, including the basal ganglia and the hippocampus, where we keep our neural “short cuts.”  They form the auto-pilot of the brain.

Our Brain’s “Waldo” Search Party

Now, let’s look at another study that may give us another piece of the puzzle in helping us understand how our brain orchestrates the act of searching online.

Dr. Robert Desimone at the McGovern Institute for Brain Research at MIT found that when we look for something specific, we “picture” it in our mind’s eye. This internal visualization in effect “wakes up” our brain and creates a synchronized alarm circuit: a group of neurons that hold the image so that we can instantly recognize it, even in complex surroundings. Think of a “Where’s Waldo” puzzle. Our brain creates a mental image of Waldo, activating a “search party” of Waldo neurons that synchronize their activities, sharpening our ability to pick out Waldo in the picture. The synchronization of neural activity allows these neurons to zero in on one aspect of the picture, in effect making it stand out from the surrounding detail.

Pirolli’s Information Foraging

One last academic reference, and then we’ll bring the pieces together. Peter Pirolli, from Xerox’s PARC, believes we “forage” for information, using the same inherent mechanisms we would use to search for food. So, we hunt for the “scent” of our quarry, but in this case, rather than the smell of food, it’s more likely that we lodge the concept of our objective in our heads. And depending on what that concept is, our brains recruit the relevant neurons to help us pick out the right “scent” quickly from its surroundings.  If our quarry is something visual, like a person or thing, we probably picture it. But if our brain believes we’ll be hunting in a text-heavy environment, we would probably picture the word instead. This is the way the brain primes us for information foraging.
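To make the foraging idea concrete, here is a toy sketch in Python of how “scent” might be scored: each candidate link is rated by how many of the forager’s goal terms appear in its text, and the forager follows the strongest scent. This is purely illustrative; Pirolli’s actual model uses spreading activation over word co-occurrence statistics, and the function names and example data here are invented.

```python
# Toy illustration of "information scent": rate each link's text by how
# many of the forager's goal terms it contains, then follow the strongest
# scent. Not Pirolli's actual model -- just the intuition behind it.

def scent(goal_terms, link_text):
    """Fraction of goal terms that appear in the link's text."""
    words = set(link_text.lower().split())
    return sum(1 for term in goal_terms if term in words) / len(goal_terms)

def best_link(goal_terms, links):
    """Pick the link whose text gives off the strongest scent."""
    return max(links, key=lambda text: scent(goal_terms, text))

goal = ["thin", "light", "laptops"]
links = [
    "Refurbished desktop PCs on sale",
    "Thin and light laptops for business travel",
    "Laptop bags and accessories",
]
print(best_link(goal, links))  # -> Thin and light laptops for business travel
```

A real forager, of course, does this weighing instantly and subconsciously; the point of the sketch is only that scent can be treated as a match between an internal goal and the cues on the page.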

The Googling Brain

This starts to paint a fascinating and complex picture of what our brain might be doing as we use a search engine. First, our brain determines our quarry and starts sending “top down” directives so we can very quickly identify it.  Our visual cortex helps us by literally painting a picture of what we might be looking for. If it’s a word, our brain becomes sensitized to the shape of the word, helping us recognize it instantly without the heavy lifting of lingual interpretation.

Thus primed, we start to scan the search results. This is not reading, this is scanning our environment in mere milliseconds, looking for scent that may lead the way to our prey. If you’ve ever looked at a real-time eye-tracking session with a search engine, this is exactly the behavior you’d be seeing.

When we bring all the pieces together, we realize how instantaneous, primal and intuitive this online foraging is. The slow and rational brain only enters the picture as an afterthought.

Googling is done by instinct. Our eyes and brain are connected by a short cut in which decisions are made subconsciously and within milliseconds. This is the forum in which online success is made or missed.

Maximizers vs. Satisficers: Why It’s Tough to Decide

First published February 18, 2010 in Mediapost’s Search Insider

In last week’s column, I introduced the study from Wesleyan University about how decisiveness played out for a group of 54 university students as they chose their courses. The students’ eye movements were tracked as they looked at a course comparison matrix.

Weighing all the Options vs Saying No

In the previous column, I talked about two different strategies: the compensatory one, where we weigh all the options, and the non-compensatory one, where we start eliminating candidates based on the criterion most important to us. Indecisive people tend to start with the compensatory strategy and decisive people go right for the linear approach.  I also talked about Barry Schwartz’s theory (in his book “The Paradox of Choice”) that indecisiveness can lead to a lot of anxiety and stress.

The biggest factor for indecisive people seems to be a fear of lost opportunity. They hate to turn away from any option for fear that something truly valuable lies down that path. Again, this is territory well explored in Tversky and Kahneman’s famous Prospect Theory.

The Curse of the Maximizer

Part of the problem is perfectionism, identified by Schwartz as a strong correlate of the anxiety caused by impending decisions. The Wesleyan research cites previous work that shows indecisive people tend to want a lot more information at hand before making any decisions. And, once they’ve gone to the trouble of gathering that information, they feel compelled to use it. Not only do they use it, they try to use it all at once.

The Wesleyan eye tracking showed that the more indecisive participants went back and forth between the five different course attributes fairly evenly, apparently trying to weigh them all at the same time.  Not only that, they spent more time staring at the blank parts of the page. This indicated that they were trying to crunch the data, literally staring into space.  The maximizing approach to decision-making places a high cognitive load on the brain. The brain has to juggle a lot more information to try to come to an optimal decision.

Decisive people embrace the promise of “good enough,” known as satisficing. They are less afraid to eliminate options for consideration because the remaining choices are adequate (the word satisficing is a portmanteau of “satisfy” and “suffice”) to meet their requirements. They are quicker to turn away from lost opportunity. For them, decision-making is much easier. Rather than trying to juggle multiple attributes, they go sequentially down the list, starting with the attribute that is most important to them.

In the case of this study, this became clear in looking at how fixations were spread amongst the five attributes: time of the class, the instructor, the work load, the person’s own goals and the level of interest. For decisive people, the most important thing was the time of class. This makes sense. If you don’t have the time available, why even consider what the course has to offer? If the time didn’t work, the decisive group eliminated the course from consideration. They then moved on to the instructor, the next most important criterion. And so on down the list.
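The two strategies can be sketched in a few lines of Python. To be clear, the courses, ratings and weights below are invented for illustration; the Wesleyan study did not publish a scoring model. Ratings run 1 to 5, higher is better (“workload” here means how manageable the workload is).

```python
# A hedged sketch of the two decision strategies. All data is invented.

courses = {
    "Psych 101": {"time_ok": True,  "instructor": 4, "workload": 2, "goals": 5, "interest": 3},
    "Econ 210":  {"time_ok": False, "instructor": 5, "workload": 3, "goals": 4, "interest": 5},
    "Stats 300": {"time_ok": True,  "instructor": 3, "workload": 4, "goals": 4, "interest": 4},
}

WEIGHTS = {"instructor": 2, "workload": 1, "goals": 3, "interest": 2}

def maximize(courses, weights):
    """Compensatory: weigh every attribute of every course at once,
    trading a bad class time off against everything else."""
    def score(attrs):
        time_bonus = 5 if attrs["time_ok"] else 0
        return time_bonus + sum(weights[k] * attrs[k] for k in weights)
    return max(courses, key=lambda name: score(courses[name]))

def satisfice(courses, threshold=3):
    """Non-compensatory: eliminate on the most important criterion first
    (class time), then walk down the remaining attributes in order,
    dropping anything that isn't "good enough"."""
    remaining = {n: a for n, a in courses.items() if a["time_ok"]}
    for attribute in ("instructor", "workload", "goals", "interest"):
        passing = {n: a for n, a in remaining.items() if a[attribute] >= threshold}
        if len(passing) == 1:
            return next(iter(passing))
        if passing:
            remaining = passing
    return next(iter(remaining))  # any survivor is good enough

print(maximize(courses, WEIGHTS))  # weighs all the numbers at once
print(satisfice(courses))          # eliminates and never looks back
```

Note that the two strategies can land on different courses: the maximizer juggles every number, while the satisficer may settle on a “good enough” option the maximizer would have scored lower.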

Tick…Tick…Tick…

Another interesting finding was that even though indecisive people start by trying to weigh all the options to look for the optimal solution, if the clock is ticking, they often become overwhelmed by the decision and shift to a non-compensatory strategy by starting to eliminate candidates for consideration. The difference is that for the indecisive maximizers, this feels like surrender, or, at best, a compromise. For the decisive satisficers, it’s simply the way they operate. If the indecisive people are given the choice between delaying the decision and being forced to eliminate promising alternatives, they’ll choose to delay.

This sets up a fascinating question for search engine behavior: do satisficers search differently than maximizers? I suspect so. We’ll dive deeper into this question next week.

How Our Brain Decides How Long We Look at Something

This week, I’ve talked about how our attention focusing mechanism moves the spotlight of foveal attention around different environments: a Where’s Waldo picture, a webpage, a website with advertising and a search engine results page. I want to wrap up the week by looking at another study that examined the role of brain waves in regulating how we shift the spotlight of attention from one subject to another.

Eye Spy

If you do eye tracking research, you soon learn to distinguish fixations and saccades. Fixations occur when we let our foveal attention linger on an element, even for a fraction of a second. Saccades are the movements our eyes make from one fixation to the next. These movements take mere milliseconds. Below I show an example of a single session “gaze plot” – the recording of how one individual’s eyes took in an ad (the image is from Tobii, the maker of the eye tracking equipment we use). The dots represent fixations, as measured in milliseconds. The bigger the dot, the longer the eye stayed there. The lines connecting the dots are saccades.
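For readers curious how software tells the two apart, here is a rough Python sketch of a dispersion-threshold fixation filter, the general style of algorithm eye tracking tools commonly use: gaze samples that stay within a small spatial window long enough count as a fixation, and the jumps between windows are saccades. The thresholds and function name are invented for illustration; commercial packages use more sophisticated filters.

```python
# A minimal dispersion-threshold fixation detector (illustrative only).
# samples: (x, y) gaze points recorded at a fixed sampling rate.

def detect_fixations(samples, max_dispersion=25, min_samples=5):
    """Return a list of (centroid_x, centroid_y, n_samples) fixations."""
    fixations = []
    window = []
    for point in samples:
        window.append(point)
        xs = [p[0] for p in window]
        ys = [p[1] for p in window]
        # Dispersion = horizontal spread + vertical spread of the window.
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
            if len(window) - 1 >= min_samples:
                # Everything before this point held still: record a fixation.
                prev = window[:-1]
                fixations.append((
                    sum(p[0] for p in prev) / len(prev),
                    sum(p[1] for p in prev) / len(prev),
                    len(prev),
                ))
            window = [point]  # the jump to here was a saccade
    if len(window) >= min_samples:
        fixations.append((
            sum(p[0] for p in window) / len(window),
            sum(p[1] for p in window) / len(window),
            len(window),
        ))
    return fixations

# Six samples parked at one spot, then six at another: two fixations,
# with the jump between them counted as a saccade.
gaze = [(100, 100)] * 6 + [(300, 200)] * 6
print(detect_fixations(gaze))
```

The bigger dots in a gaze plot simply correspond to windows with more samples in them.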

When you look at a scene like the one shown here, the question becomes: how do you consciously move from one element to another? It’s not like you think, “Okay, I’ve spent enough time looking at the logo, perhaps it’s time to move to the headline of the ad, or the rather attractive bosom in the upper right corner (I suspect the participant was male).” The movements happen subconsciously. Your eyes move to digest the content of the picture of their own accord, based on what appears to be interesting from your overall scan of the picture and your attention focusing mechanisms.

Keeping Our Eyes Running on Time

Knowing that the eye tends to move from spot to spot subconsciously, Dr. Earl Miller at MIT decided to look more closely at the timing of these shifts of attention and what might cause them. He found that our brains appear to have a built-in timer that moves our eyes around a scene. Our foveal focus shifts about 25 times a second, and this shift seems to be regulated by our brain waves. Our brain cycles between high activity phases and low activity phases, the activity recorded through EEG scanning. Neurologists have known that these waves seem to be involved in the focusing of attention and the functions of working memory, but Miller’s study showed a conclusive link between these wave cycles and the refocusing of visual attention. It appears our brains have a built-in metronome that dictates how we engage with visual stimuli. The faster the cycles, the faster we “think.”

But, it’s not as if we let our eyes dash around the page every 1/25 of a second. Our eyes linger in certain spots and jump quickly over others. Somewhere, something is dictating how long the eye stays in one spot. As our brain waves tick out the measures of attention, something in our brains decides where to invest those measures and how many should be invested.

The Information Scent Clock is Ticking

Here, I take a huge philosophical leap and tie together two empirical bodies of knowledge with nothing scientifically concrete, as far as I’m aware, to connect them. Let’s imagine for a second that Miller’s timing of eye movements might play some role in Eric Charnov’s Marginal Value Theorem, which in turn plays a part in Peter Pirolli’s Information Foraging Theory.

Eric Charnov discovered that animals seem to have an innate and highly accurate sense of when to leave one source of food and move on to another, based on a calculation of the energy that would have to be expended versus the calories that would be gained in return. Obviously, organisms that are highly efficient at surviving would flourish in nature, passing on their genes and less efficient candidates would die out. Charnov’s marginal value calculation would be a relatively complex one if we sat down to work it out on paper (Charnov did exactly that, with some impressive charts and formulas) but I’m guessing the birds Charnov was studying didn’t take this approach. The calculations required are done by instinct, not differential calculus.
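As a rough numeric illustration of that trade-off (with invented numbers, not Charnov’s own data), the theorem can be sketched as: a patch with diminishing returns, a travel cost between patches, and a search for the stay time that maximizes overall intake. Real foragers, again, solve this by instinct, not by grid search.

```python
import math

# A rough numeric sketch of Charnov's marginal value trade-off.
# Gain curve, rates and travel times are invented for illustration.

def gain(t, total=100.0, rate=0.5):
    """Cumulative calories after t minutes in a patch (diminishing returns)."""
    return total * (1.0 - math.exp(-rate * t))

def best_residence_time(travel_time, step=0.01, horizon=30.0):
    """Find the stay time t that maximizes the long-run intake rate,
    gain(t) / (travel_time + t), by brute-force search."""
    best_t, best_rate = step, 0.0
    t = step
    while t < horizon:
        r = gain(t) / (travel_time + t)
        if r > best_rate:
            best_t, best_rate = t, r
        t += step
    return best_t

# Charnov's prediction: the farther apart the patches (the longer the
# travel time), the longer you should stay in each one before moving on.
print(best_residence_time(travel_time=1.0) < best_residence_time(travel_time=5.0))  # True
```

The punchline is the comparison in the last line: longer travel between patches makes it rational to exploit each patch more thoroughly before giving up on it.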

So, if birds can do it, how do humans fare? Well, we do pretty well when it comes to food. In fact, we’re so good at seeking high calorie foods, it’s coming back to bite us. We have highly evolved tastes for high-fat, high-sugar, calorie-rich foods. In the 20th Century, this built-in market preference caused food manufacturers to pump out these foods by the truckload. Now, well over 1/3 of the population is considered obese. Evolution sometimes plays nasty tricks on us, but I digress.

Pirolli took Charnov’s marginal value theorem and applied it to how we gather information in an online environment. Do we use the same instinctive calculations to determine how long to spend on a website looking for the information we’re seeking? Is our brain doing subconscious calculations the entire time we’re browsing online, telling us to either click deeper on a site or give up and go back to Google? I suspect the answer is yes. And, if that’s the case, are our brain waves that dictate how and where we spend our attention part of this calculation, a mental hourglass that somehow factors into Charnov’s theorem? If so, it behooves us to ensure our websites instill a sense of information scent as soon as possible. The second someone lands on our site, the clock is already ticking. Each tick that goes by without them finding something relevant devalues our patch according to Charnov’s theorem.

How Our Brains “Google”

So far this week, I’ve covered how our brains find Waldo, scan a webpage and engage with online advertising. Today, I’m looking at how our brains help find the best result on a search engine.

Searching by Habit

First, let’s accept the fact that most of us have now had a fair amount of experience searching for things on the internet, to the point that we’ve now made Google a verb. What’s more important, from a neural perspective, is that searching is now driven by habit. And that has some significant implications for how our brain works.

Habits form when we do the same thing over and over again. In order for that to happen, we need what’s called a stable environment. Whatever we’re doing, habits only form when the path each time is similar enough that we don’t have to think about each individual junction and intersection. If you drive the same way home from work each day, your brain will start navigating by habit. If you take a different route every single day, you’ll be required to think through each and every trip. Parts of the brain called the basal ganglia seem to be essential in recording these habitual scripts, acting as sort of a control mechanism telling the brain when it’s okay to run on autopilot and when it needs to wake up and pay attention. Ann Graybiel from MIT has done extensive work exploring habitual behaviors and the role of the basal ganglia.

The Stability of the Search Page

A search results page, at least for now, provides such a stable environment. Earlier this week, I looked at how our brain navigates webpages. Even though each website is unique, there are some elements that are stable enough to allow for habitual conditioned routines to form. The main logo or brand identifier is usually in the upper left. The navigation bar typically runs horizontally below the logo. A secondary navigation bar is typically found running down the left side. The right side is usually reserved for a feature sidebar or, in the case of a portal, advertising. Given these commonalities, there is enough stability in most websites’ designs that we navigate for the first few seconds on autopilot.

Compared to a website, a search engine results page is rigidly structured, providing the ideal stable environment for habits to form. This has meant a surprising degree of uniformity in people’s search behaviors. My company, Enquiro, has been looking at search behavior for almost a decade now and we’ve found that it’s remained remarkably consistent. We start in the upper left, break off a “chunk” of 3 to 5 results and scan it in an “F” shaped pattern. The following excerpts from The BuyerSphere Project give a more detailed walk through of the process.

1 – First, we orient ourselves to the page. This is something we do by habit, based on where we expect to see the most relevant result. We use a visual anchor point, typically the blue border that runs above the search results, and use this to start our scanning in the upper left, a conditioned response we’ve called the Google Effect. Google has taught us that the highest relevance is in the upper left corner.

2 – Then, we begin searching for information scent. This is a term from information foraging theory, which we’ve covered in our eye tracking white papers. In this particular case, we’ve asked our participants to look for thin, light laptops for their sales team. Notice how the eye tracking hot spots are over the words that offer the greatest “scent”, based on the intention of the user. Typically, this search for scent is a scanning of the first few words of the title of the top 3 or 4 listings.

3 – Now the evaluation begins. Based on the initial scan of the beginnings of titles from the top 3 or 4 listings, users begin to compare the degree of relevance of some alternatives, typically by comparing two at a time. We tend to “chunk” the results page into sections of 3 or 4 listings at a time to compare, as this has been shown to be a typical limit of working memory when considering search listing alternatives.

4 – It’s this scanning pattern, roughly in the shape of an “F”, that creates the distinct scan pattern that we first called the “Golden Triangle” in our first eye tracking study. Users generally scan vertically first, creating the upright of the “F”, then horizontally when they pick up a relevant visual cue, creating the arms of the F. Scanning tends to be top heavy, with more horizontal scanning on top entries, which over time creates the triangle shape.


5 – Often, especially if the results are relevant, this initial scan of the first 3 or 4 listings will result in a click. If two or more listings in the initial set look to be relevant, the user will click through to each and compare the information scent on the landing pages. This back and forth clicking is referred to as “pogo sticking”. It’s this initial set of results that represents the prime real estate on the page.

6 – If the initial set doesn’t result in a successful click through, the user continues to “chunk” the page for further consideration. The next chunk could be the next set of organic results, or the ads on the right hand side of the page. There, the same F Shaped Scan patterns will be repeated. By the way, there’s one thing to note about the right hand ads. Users tend to glance at the first ad and make a quick evaluation of the relevance. If the first ad doesn’t appear relevant, the user will often not scan any further, passing judgement on the usefulness and relevance of all the ads on the right side based on their impression of the ad on top.

So, that explains how habits dictate our scanning pattern. What I want to talk more about today is how our attention focusing mechanism might impact our search for information scent on the page.

The Role of the Query in Information Scent

Remember the role of our neuronal chorus, firing in unison, in drawing our attention to potential targets in our total field of vision. Now, text-based web pages don’t exactly offer a varied buffet of stimuli, but I suspect the role of key words in the text of listings might serve to help focus our attention.

In a previous post, I mentioned that words are basically abstract visual representations of ideas or concepts. The shape of the letters in a familiar word can draw our attention. It tends to “pop out” at us from the rest of the words on the page. I suspect this “pop out” effect could be the result of Dr. Desimone’s neural synchrony patterns. We may have groups of neurons tuned to pick certain words out of the sea of text we see on a search page.

The Query as a Picture

This treating of a word as a picture rather than text has interesting implications for the work our brain has to do. The interpretation of text actually calls a significant number of neural mechanisms into play. It’s fairly intensive processing. We have to visually interpret the letters, run them through the language centres of our brain and translate them into a concept; only then can we capture the meaning of the word. It happens quickly, but not nearly as quickly as the brain can absorb a picture. Pictures don’t have to be interpreted. Our understanding of a picture requires fewer mental “middle men” in our brain, so it takes a shorter path. Perhaps that’s why one picture is worth a thousand words.

But in the case of logos and very well known words, we may be able to skip some of the language processing we would normally have to do. The shape of the word might be so familiar, we treat it more like an icon or picture than a word. For example, if you see your name in print, it tends to immediately jump out at you. I suspect the shape of the word might be so familiar that our brain processes it through a quicker path than a typical word. We process it as a picture rather than language.

Now, if this is the case, the most obvious candidate for this “express processing” behavior would be the actual query we use. And we have a “picture” of what the word looks like already in our minds, because we just typed it into the query box. This would mean that this word would pop out of the rest of the text quicker than other text. And, through eye tracking, there are very strong indications that this is exactly what’s happening. The query used almost inevitably attracts foveal attention quicker than anything else. The search engines have learned to reinforce this “pop out” effect by using hit bolding to put the query words in bold type whenever they appear in the results set.

Do Other Words Act as Scent Pictures?

If this is true of the query, are there other words that trigger the same pop out effect? I suspect this to also be true. We’ve seen that certain words attract more than their fair share of attention, depending on the intent of the user. Well-known brands typically attract foveal attention. So do prices and salient product features. Remember, we don’t read search listings, we scan them. We focus on a few key words and if there is a strong enough match of information scent to our intent, we click on the listing.

The Intrusion of Graphics

Until recently, the average search page was devoid of graphics. But all the engines are now introducing richer visuals into many results sets. A few years ago we did some eye tracking to see what the impact might be. The impact, as we found out, was that the introduction of a graphic significantly changed the conditioned scan patterns I described earlier in the post.

This seems to be a perfect illustration of Desimone’s attention focusing mechanism at work. If we’re searching for Harry Potter, or in the case of the example heat map shown below, an iPhone, we likely have a visual image already in mind. If a relevant image appears on the page, it hits our attention alarms with full force. First of all, it stands out from the text that surrounds it. Secondly, our pre-tuned neurons immediately pick it out in our peripheral vision as something worthy of foveal focus because it matches the picture we have in our mind. And thirdly, our brain interprets the relevancy of the image much faster than it can the surrounding text. It’s an easier path for the attention mechanisms of our brain to go down and our brains follow the same rules as my sister-in-law: no unnecessary trips.

The result? The F Shaped Scan pattern, which is the most efficient scan pattern for an ordered set of text results, suddenly becomes an E shaped pattern. The center of the E is on the image, which immediately draws our attention. We scan the title beside it to confirm relevancy, and then we have a choice to make: do we scan the section above or below? Again, our peripheral vision helps make this decision by scanning for information scent above and below the image. Words that “pop out” could lure us up or down. Typically, we expect greater relevancy higher in the page, so we would move up more often than down.

Tomorrow, I’ll wrap up my series of posts on how our brains control what grabs our attention by looking at another study that indicates we might have a built in timer that governs our attention span and we’ll revisit the concept of the information patch, looking at how long we decide to spend “in the patch.”

How Our Brain Scans a Webpage

Yesterday, I explained how our brain finds “Waldo.” To briefly recap the post:

  • We have two neural mechanisms for seeing things we might want to pay attention to: a peripheral scanning system that takes in a wide field of vision and a focused (foveal) system that allows us to drill down to details
  • We have neurons that are specialists in different areas: i.e. picking out colors, shapes and disruptions in patterns
  • We use these recruited neuronal swat teams to identify something we’re looking for in our “mind’s eye” (the visual cortex) prior to searching for it in our environment
  • These swat teams focus our attention on our intended targets by synchronizing their firing patterns (like a mental Flash Mob) which allows them to rise above the noise of the other things fighting for our attention.

Today, let’s look at the potential implications of this in our domain, specifically interactions with websites.

But First: A Word about Information Scent

I’ve talked before about Pirolli’s Information Foraging Theory (and another post from this blog). Briefly, it states that we employ the same strategies we use to find food when we’re looking for information online. That’s because, just like food, information tends to come in patches online and we have to make decisions about the promise of the patch, to determine whether we should stay there or find a new patch. There’s another study I’ve yet to share (it will be coming in a post later this week) that indicates our brain might have a built-in timer that controls how much time we spend in a patch and when we decide to move on.

The important point for this post is that we have a mental image of the information we seek. We picture our “prey” in our mind before looking for it. And, if that prey can be imagined visually, this will begin to recruit our swat team of neurons to help guide us to the part of the page where we might see it. Just like we have a mental picture of Waldo (from yesterday’s post) that helps us pick him out of a crowd, we have a mental picture of whatever we’re looking for.

Pirolli talks about information scent: the clues on a page that suggest the information we seek lies beyond a link or button. Now, consider what we’ve learned about how the brain chooses what we pay attention to. If a visual representation of information is relevant, it acts as a powerful presentation of information scent. The brain processes images much faster than text (which has to be translated by the brain). We would have our neuronal swat team already primed for the picture, singing in unison to draw the spotlight of our attention towards it.

Neurons Storming Your Webpage

First, let me share some of the common behaviors we’ve seen through eye tracking on people visiting websites (in an example from The BuyerSphere Project). I’ll try to interpret what’s happening in the brain:

The heat map shows the eye activity on a mocked up home page. Remember, eye tracking only captures foveal attention, not peripheral, so we’re seeing activity after our brain has already focused the spotlight of attention. For example, notice how the big picture has almost no eye tracking “heat” on it. Most of the time, we don’t have to focus our fovea on a picture to understand what’s in it (the detail-rich Waldo pictures would be the exception). Our peripheral vision is more than adequate to interpret most pictures. But consider what happens when the picture matches the target in our “mind’s eye”. The neurons draw our eye to it.

One thing to think about: words shown in text are pictures too. I’ll be coming back to this theme a couple of times – but a word is nothing more than a picture that represents a concept. For example, the Sun logo in the upper left (1) is nothing more than a picture that our brain associates with the company Sun Microsystems. To interpret this word, the brain first has to interpret the shape of the word. That means there are neurones that recognize straight edges, others that recognize curved edges and others that look for the overall “shape” of the word. Words too can act as information targets that we picture mentally before we see them in front of us. For example, let’s imagine that we’re a developer. The word “DEVELOPER” (2) has a shape that is recognizable to us because we’ve seen it so often: the straight strokes of the E’s and V’s, sandwiched between the curves of the D’s, O’s and P’s. As we scan the overall page, our “Developer” neurons may suddenly wake up, synchronize their firing and draw the eye here as well. “Developer” already has a prewired connection in our brains. This is true for all the words we’re most familiar with, including brands like Sun. This is why we see a lot of focused eye activity on these areas of the picture.

Intent Clustering

In the last part of today’s post, I want to talk about a concept I spent some time on in the BuyerSphere Project: Intent Clustering. I’ve always known this makes sense from an Information Scent perspective, but now I know why from a neural perspective as well.

Intent clustering is creating groups of relevant information cues in the same area of the page. For example, for a product category on an e-commerce page, an intent cluster would include a picture of the product, a headline with the product category name, short bullet points with salient features and brands, and perhaps relevant logos. An intent cluster immediately says to the visitor that this is the right path to take to find out more about a certain topic or subject. The page shown has two intent clusters that were aligned with the task we gave, one in the upper right sidebar (3) and one in the lower left-hand corner (4). Again, we see heat around both these areas.

Why are intent clusters “eye candy” for visitors? It’s because we’ve stacked the odds in favor of these clusters being noticed peripherally. We’ve included pictures, brands, familiar words and hints of rich information scent in well-chosen bullet points. This combination is almost guaranteed to set our neural swat teams singing in harmony. Once scanned in peripheral vision, the conductor of our brain (the FEF I talked about in yesterday’s post) swings our attention spotlight towards the cluster for more engaged consumption, generating the heat we see in the above heatmap.

Tomorrow, I’ll be looking at how these mechanisms can impact our engagement with online display ads.

Summer Stories: How I Became a Researcher

First published August 13, 2009 in Mediapost’s Search Insider

About six years ago, I had one of those life-changing moments that set me on a new path. I’ve always been curious. I’ve always had questions, and up to that point in my life, I was usually able to find an answer, with enough perseverance. But in 2003, I had a question that no one seemed able to answer.  It didn’t seem to be an especially difficult question, and I knew someone had the answer. They just weren’t sharing it.

The Unanswerable Question

The question was this: what percentage of searchers click on the organic results and what percentage click on the sponsored ads? Today, that’s not even a question; it’s common knowledge for search marketers. But in 2003, that wasn’t the case. Sponsored search ads were still in their infancy (Overture had just been acquired by Yahoo, and Google’s AdWords was only a couple of years old) and no one at either engine was sharing the clickthrough breakdowns between organic and paid.

I reached out to everyone I knew in the industry, but either they didn’t know, or they weren’t willing to go public with the info. My connections into Google and Yahoo were nonexistent at the time. No one, it seemed, had the answer. My curiosity was stymied. And that’s when my revelation happened. If no one had the answer, perhaps I could provide it.

At the time, research was not something Enquiro did. When we wanted to find out an answer, we combed through the forums, just like everyone else. But there seemed to be a noticeable gap in available information. There was plenty of discussion about technical SEO tactics, but no one seemed to be interested in how people actually used search engines.

To me, this was an unforgivable oversight. If we were using search as a marketing channel, shouldn’t we have some understanding of how our prospects used search? Off the top of my head, I jotted down a list of several questions I had about how people actually search; questions that appeared to have no readily available answers. It was at that point that I officially became a researcher.

Discovering “Why”

Our first research project set the path we would follow for much of what came after: we simply looked at how people used search to do things. Our methodology has become much tighter, and we have now added eye tracking and even neuro-scanning to our arsenal, but from the beginning, our research was more focused on “why” than “what.” The first paper was called “Inside the Mind of the Searcher,” and it’s still referenced on a regular basis. Frankly, we were surprised at how quickly it was picked up in the industry. Suddenly, we became the experts on search user behavior, a crown I was uncomfortable wearing at the beginning. Yes, we were exploring new ground, but I always worried about how representative our findings were of the real world. Did people really do what we said they did, or was it just a research-created anomaly?

Defining the Golden Triangle

For us, the groundbreaking study was our first eye tracking study, done through Eyetools in San Francisco. I had read the Poynter study about how people interacted with online publications and was fascinated. “What if,” I wondered, “we did this with a search engine?” I found a similarly curious partner in Kevin Lee from DidIt, and together with Eyetools we launched the first study, which discovered the now-famous “Golden Triangle.” I remember sitting with Kevin in a speaker prep room at a show whose name escapes me, looking at the very first results of the data. The pattern jumped off the page:

“Look at that!” I said, “It’s a triangle!”

Kevin, always the search optimizer, said, “We need something catchy to call it, something that we can optimize for. The Magic Triangle?”

Because the heat map renders the most popular areas in a reddish-yellow color, the answer was right in front of us. I can’t remember whether it was Kevin or I who first said it, but as soon as it was spoken, we knew the name would stick: “It’s a gold color… The Golden Triangle?”

Is It Real?

Even with the release of the study and the quick acceptance, I still questioned whether this represented real behavior. It was later that year when I got the confirmation I needed. I had just presented the results during a session at an industry show and was stepping down from the stage. Someone was quietly standing in the corner and came over as I started to head out of the room.

“Hi. I just wanted to let you know. I work with Yahoo on user experience and your heat map looks identical to our internal ones. I actually thought you had somehow got your hands on ours.” The validation was a few years in coming, but very welcome when it finally arrived.

Today, ironically, things have come full circle. I have talked to sales and engineering teams at all the major engines and much of the research they refer to about user behavior comes from Enquiro.

And the answer to my original question has held remarkably consistent over the past six years: what percentage of users click on paid ads vs. organic listings? For commercial searches, it’s about 70% organic, 30% paid. Just in case you were curious.

A Cognitive Walk Through of Searching

First published October 23, 2008 in Mediapost’s Search Insider

Two weeks ago, I talked about the concept of selective perception, how subconsciously we pick and choose what we pay attention to. Then, last week, I explained how engagement with search is significantly different from engagement with other types of advertising. These two concepts set the stage for what I want to do today. In this column, I want to lay out a step-by-step hypothetical walk-through of our cognitive engagement with a search page.

Searching on Auto Pilot

First, I think it’s important to clear up a common misunderstanding. We don’t think our way through an entire search interaction. The brain only kicks into cognitive high gear (involving the cortex) when it absolutely needs to. When we’re engaged in any mental task, our brain is constantly looking for cognitive shortcuts to lessen the workload. Most of these shortcuts involve limbic structures at the sub-cortical level, including the basal ganglia, hippocampus, thalamus and nucleus accumbens. This is a good thing, as these structures have been honed over countless generations to simplify even the most complicated tasks. They’re the reason driving is much easier for you now than it was the first time you climbed behind the wheel. These structures and their efficiencies also play a vital role in our engagement with search.

So, to begin with, our mind identifies a need for information. Usually, this is a subtask that is part of a bigger goal. The goal is established in the prefrontal cortex and the neural train starts rolling toward it. We realize there’s a piece of information missing that prevents us from getting closer to our goal, and, based on our past successful experiences, we determine that a search engine offers the shortest route to that information. This is the first of our processing efficiencies. We don’t deliberate for hours about the best place to turn; we make a quick, heuristic decision based on what’s worked in the past. The majority of this process is handled at the sub-cortical level.

The Google Habit

Now we have the second subconscious decision. Although we have several options available for searching, the vast majority of us will turn to Google, because we’ve developed a Google habit. Why spend precious cognitive resources considering our options when Google has generally proved successful in the past? Our cortex has barely begun to warm up at this point. The journey thus far has been on autopilot.

The prefrontal cortex, home of our working memory, first sparked to life with the realization of the goal and the identification of the subtask: locating the missing piece of information. Now, the cortical mind is engaged once again as we translate that subtask into an appropriate query. This involves matching the concept in our minds with the right linguistic label. Again, we’re not going to spend a lot of cognitive effort on this, which is why query construction tends to start simply and become longer and more complex only if required. Throughout this process, the label, the query we plugged into the search box, remains embedded in working memory.

Conditioned Scanning

At this point, the prefrontal cortex begins to idle down again. The next exercise is handled by the brain as a simple matching game. We have the label, or query, in our mind. We scan the page in the path we’ve been conditioned to believe will lead to the best results: starting in the upper left, and then moving down the page in an F-shaped scan pattern. All we want to do is find a match between the query in our prefrontal cortex and the results on the page.

Here the brain also conserves cognitive processing energy by breaking the page into chunks of three or four results. This is due to the channel capacity of our working memory and how many discrete chunks of information we can process in our prefrontal cortex at a time. We scan the results looking first for the query, usually in the title of the results. And it’s here where I believe a very important cognitive switch is thrown.
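As a rough illustration of that chunking, a results page can be modeled as a list the scanner breaks into small groups. The group size of three below is my own assumption, taken from the three-to-four-result range mentioned above:

```python
def chunk_results(results, size=3):
    """Break a list of results into the small groups working memory
    can hold at once (three or four items, per the column above)."""
    return [results[i:i + size] for i in range(0, len(results), size)]

titles = [f"Result {n}" for n in range(1, 11)]  # a ten-result page
for group in chunk_results(titles):
    print(group)  # each group is scanned as one chunk before moving on
```

The last group may be smaller than the rest, just as the final results at the bottom of the page get scanned (if at all) as a ragged leftover chunk.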

The “Pop Out” Effect

When we structure the query, we type it into a box. In the process, we remember the actual shape of the phrase. When we first scan results, we’re not reading words, we’re matching shapes. In cognitive psychology, this is called the “pop out” effect. We can recognize shapes much faster than we can read words. The shapes of our query literally “pop out” from the page as a first step toward matching relevance. The effect is enhanced by query (or hit) bolding. This matching game is done at the sub-cortical level.
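A crude way to picture this shape matching is to reduce each word to its outline of ascenders, descenders and x-height letters, then compare outlines before reading a single letter. The encoding below is my own illustrative assumption, not a model from the research:

```python
def word_envelope(word):
    """Crude 'shape' of a word: ascenders (b, d, k...), descenders
    (g, p, y...) and x-height letters. Two words with the same
    envelope look alike at a glance, before actual reading."""
    ascenders, descenders = set("bdfhklt"), set("gjpqy")
    shape = []
    for ch in word.lower():
        if ch in ascenders:
            shape.append("^")
        elif ch in descenders:
            shape.append("v")
        else:
            shape.append("-")
    return "".join(shape)

query = "developer"
for title in ["developer", "devolopar", "designer"]:
    # shape matching flags near-identical envelopes without reading letters
    print(title, word_envelope(title), word_envelope(title) == word_envelope(query))
```

Notice that a misspelling with the same outline still "matches" at this stage, which is consistent with shape recognition being a fast first pass that detailed reading then confirms or rejects.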

If the match is positive (shape = query), then our eye lingers long enough to start picking up the detail around the word. We’ve seen in multiple eye tracking studies that foveal focus (the center of the field of vision) tends to hit the query in the title, but peripheral vision begins to pick up words surrounding the title. In our original eye tracking study, we called this semantic mapping. In Peter Pirolli’s book, “Information Foraging,” he referred to this activity as spreading activation. It’s after the “pop out” match that the prefrontal cortex again kicks into gear. As additional words are picked up, they are used to reinforce the original scent cue. Additional words from the result pull concepts into the prefrontal cortex (recognized URL, feature, supporting information, price, brand), which tend to engage different cortical regions as long-term memory labels are paged and brought back into working memory. If enough matches with the original mental construct of the information sought are registered, the link is clicked.

Next week, we’ll look at the nature of this memory recall, including the elusive brand message.