How Our Brains “Google”

So far this week, I’ve covered how our brains find Waldo, scan a webpage and engage with online advertising. Today, I’m looking at how our brains help find the best result on a search engine.

Searching by Habit

First, let’s accept the fact that most of us have now had a fair amount of experience searching for things on the internet, to the point that we’ve now made Google a verb. What’s more important, from a neural perspective, is that searching is now driven by habit. And that has some significant implications for how our brain works.

Habits form when we do the same thing over and over again. In order for that to happen, we need what’s called a stable environment. Whatever we’re doing, habits only form when the path each time is similar enough that we don’t have to think about each individual junction and intersection. If you drive the same way home from work each day, your brain will start navigating by habit. If you take a different route every single day, you’ll be required to think through each and every trip. Parts of the brain called the basal ganglia seem to be essential in recording these habitual scripts, acting as sort of a control mechanism telling the brain when it’s okay to run on autopilot and when it needs to wake up and pay attention. Ann Graybiel from MIT has done extensive work exploring habitual behaviors and the role of the basal ganglia.

The Stability of the Search Page

A search results page, at least for now, provides such a stable environment. Earlier this week, I looked at how our brain navigates webpages. Even though each website is unique, there are some elements that are stable enough to allow habitual conditioned routines to form. The main logo or brand identifier is usually in the upper left. The navigation bar typically runs horizontally below the logo. A secondary navigation bar typically runs down the left side. The right side is usually reserved for a feature sidebar or, in the case of a portal, advertising. Given these commonalities, there is enough stability in most websites' designs that we navigate for the first few seconds on autopilot.

Compared to a website, a search engine results page is rigidly structured, providing the ideal stable environment for habits to form. This has meant a surprising degree of uniformity in people's search behaviors. My company, Enquiro, has been looking at search behavior for almost a decade now, and we've found that it's remained remarkably consistent. We start in the upper left, break off a "chunk" of 3 to 5 results and scan it in an "F" shaped pattern. The following excerpts from The BuyerSphere Project give a more detailed walk-through of the process.

First, we orient ourselves to the page. This is something we do by habit, based on where we expect to see the most relevant result. We use a visual anchor point, typically the blue border that runs above the search results, and use this to start our scanning in the upper left, a conditioned response we've called the Google Effect. Google has taught us that the highest relevance is in the upper left corner.

Then, we begin searching for information scent. This is a term from information foraging theory, which we've covered in our eye tracking white papers. In this particular case, we've asked our participants to look for thin, light laptops for their sales team. Notice how the eye tracking hot spots are over the words that offer the greatest "scent", based on the intention of the user. Typically, this search for scent is a scanning of the first few words of the title of the top 3 or 4 listings.
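As a rough illustration, this scent-matching could be sketched as a simple keyword-overlap score against the first few words of each title. The intent words and listing titles below are made-up examples, and the scoring is a toy model, not a claim about how any engine or brain actually ranks:

```python
# Toy sketch of "information scent": score each listing title by how many
# of the searcher's intent keywords appear in its first few words,
# since users scan only the beginning of a title.

def scent_score(title, intent_words, prefix_len=5):
    # Only the first few words of the title get scanned, so we
    # score against that prefix alone.
    prefix = title.lower().split()[:prefix_len]
    return sum(1 for word in intent_words if word in prefix)

# Hypothetical intent ("thin, light laptops") and result titles
intent = {"thin", "light", "laptop", "laptops"}
titles = [
    "Thin and Light Laptops for Business Teams",
    "Desktop Computers on Sale This Week",
    "Ultraportable Laptop Reviews and Ratings",
]

ranked = sorted(titles, key=lambda t: scent_score(t, intent), reverse=True)
print(ranked[0])  # the title with the strongest scent for this intent
```

A listing whose opening words match the searcher's intent would float to the top of such a ranking, which mirrors why those words attract the eye tracking hot spots.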

Now the evaluation begins. Based on the initial scan of the beginnings of titles from the top 3 or 4 listings, users begin to compare the degree of relevance of some alternatives, typically by comparing two at a time. We tend to "chunk" the results page into sections of 3 or 4 listings at a time to compare, as this has been shown to be a typical limit of working memory when considering search listing alternatives.
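The chunking itself is easy to picture in code. This is a minimal sketch, with a hypothetical list of ten listings, showing how an ordered results page breaks into working-memory-sized groups:

```python
# Sketch of "chunking" a results page: the eye evaluates listings in
# working-memory-sized groups of 3 or 4 rather than all at once.

def chunk(results, size=4):
    # Break the ordered results into consecutive groups of `size`.
    return [results[i:i + size] for i in range(0, len(results), size)]

listings = [f"result {n}" for n in range(1, 11)]  # ten hypothetical listings
groups = chunk(listings)
# groups[0] holds the first chunk, the listings scanned and compared first
print(groups[0])
```

Only if the first group fails to yield a click does attention move on to the next group, which is the behavior described in the heat maps that follow.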

It’s this scanning pattern, roughly in the shape of an “F”, that creates the distinct scan pattern that we first called the “Golden Triangle” in our first eye tracking study. Users generally scan vertically first, creating the upright of the “F”, then horizontally when they pick up a relevant visual cue, creating the arms of the F. Scanning tends to be top heavy, with more horizontal scanning on top entries, which over time creates the triangle shape.


Often, especially if the results are relevant, this initial scan of the first 3 or 4 listings will result in a click. If two or more listings in the initial set look to be relevant, the user will click through to both and compare the information scent on the landing page. This back and forth clicking is referred to as “pogo sticking”. It’s this initial set of results that represents the prime real estate on the page.

If the initial set doesn’t result in a successful click through, the user continues to “chunk” the page for further consideration. The next chunk could be the next set of organic results, or the ads on the right hand side of the page. There, the same F Shaped Scan patterns will be repeated. By the way, there’s one thing to note about the right hand ads. Users tend to glance at the first ad and make a quick evaluation of its relevance. If the first ad doesn’t appear relevant, the user will often not scan any further, passing judgement on the usefulness and relevance of all the ads on the right side based on their impression of the ad on top.

So, that explains how habits dictate our scanning pattern. What I want to talk more about today is how our attention focusing mechanism might impact our search for information scent on the page.

The Role of the Query in Information Scent

Remember the role of our neuronal chorus, firing in unison, in drawing our attention to potential targets in our total field of vision. Now, text-based web pages don’t exactly offer a varied buffet of stimuli, but I suspect the role of key words in the text of listings might serve to help focus our attention.

In a previous post, I mentioned that words are basically abstract visual representations of ideas or concepts. The shape of the letters in a familiar word can draw our attention. It tends to “pop out” at us from the rest of the words on the page. I suspect this “pop out” effect could be the result of Dr. Desimone’s neural synchrony patterns. We may have groups of neurons tuned to pick certain words out of the sea of text we see on a search page.

The Query as a Picture

This treating of a word as a picture rather than text has interesting implications for the work our brain has to do. The interpretation of text actually calls a significant number of neural mechanisms into play. It’s fairly intensive processing. We have to visually interpret the letters, run them through the language centres of our brain, translate them into a concept and only then can we capture the meaning of the word. It happens quickly, but not nearly as quickly as the brain can absorb a picture. Pictures don’t have to be interpreted. Our understanding of a picture requires fewer mental “middle men” in our brain, so it takes a shorter path. Perhaps that’s why one picture is worth a thousand words.

But in the case of logos and very well known words, we may be able to skip some of the language processing we would normally have to do. The shape of the word might be so familiar, we treat it more like an icon or picture than a word. For example, if you see your name in print, it tends to immediately jump out at you. I suspect the shape of the word might be so familiar that our brain processes it through a quicker path than a typical word. We process it as a picture rather than language.

Now, if this is the case, the most obvious candidate for this “express processing” behavior would be the actual query we use. And we have a “picture” of what the word looks like already in our minds, because we just typed it into the query box. This would mean that this word would pop out of the rest of the text quicker than other text. And, through eye tracking, there are very strong indications that this is exactly what’s happening. The query used almost invariably attracts foveal attention quicker than anything else. The search engines have learned to reinforce this “pop out” effect by using hit bolding to put the query words in bold type whenever they appear in the results set.
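Hit bolding is simple enough to sketch. The function below is a minimal illustration of the idea, not any engine's actual implementation, and the snippet text is a made-up example:

```python
import re

# Minimal sketch of "hit bolding": wrap each query term in <b> tags
# wherever it appears in a listing, reinforcing the pop-out effect.

def hit_bold(snippet, query):
    for term in query.split():
        # \b keeps us from bolding fragments inside longer words;
        # re.IGNORECASE matches the term regardless of case.
        snippet = re.sub(rf"\b({re.escape(term)})\b", r"<b>\1</b>",
                         snippet, flags=re.IGNORECASE)
    return snippet

print(hit_bold("Light laptops for your sales team", "light laptops"))
# → <b>Light</b> <b>laptops</b> for your sales team
```

The bolded terms echo the "picture" of the query the searcher already holds in mind, which is presumably why the technique works so well at drawing the eye.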

Do Other Words Act as Scent Pictures?

If this is true of the query, are there other words that trigger the same pop out effect? I suspect this to also be true. We’ve seen that certain words attract more than their fair share of attention, depending on the intent of the user. Well-known brands typically attract foveal attention. So do prices and salient product features. Remember, we don’t read search listings, we scan them. We focus on a few key words and if there is a strong enough match of information scent to our intent, we click on the listing.

The Intrusion of Graphics

Until recently, the average search page was devoid of graphics. But all the engines are now introducing richer visuals into many results sets. A few years ago we did some eye tracking to see what the impact might be. The impact, as we found out, was that the introduction of a graphic significantly changed the conditioned scan patterns I described earlier in the post.

This seems to be a perfect illustration of Desimone’s attention focusing mechanism at work. If we’re searching for Harry Potter, or in the case of the example heat map shown below, an iPhone, we likely have a visual image already in mind. If a relevant image appears on the page, it hits our attention alarms with full force. First of all, it stands out from the text that surrounds it. Secondly, our pre-tuned neurons immediately pick it out in our peripheral vision as something worthy of foveal focus because it matches the picture we have in our mind. And thirdly, our brain interprets the relevancy of the image much faster than it can the surrounding text. It’s an easier path for the attention mechanisms of our brain to go down and our brains follow the same rules as my sister-in-law: no unnecessary trips.

The result? The F Shaped Scan pattern, which is the most efficient scan pattern for an ordered set of text results, suddenly becomes an E shaped pattern. The center of the E is on the image, which immediately draws our attention. We scan the title beside it to confirm relevancy, and then we have a choice to make. Do we scan the section above or below? Again, our peripheral vision helps make this decision by scanning for information scent above and below the image. Words that “pop out” could lure us up or down. Typically, we expect greater relevancy higher in the page, so we would move up more often than down.

Tomorrow, I’ll wrap up my series of posts on how our brains control what grabs our attention by looking at another study that indicates we might have a built in timer that governs our attention span and we’ll revisit the concept of the information patch, looking at how long we decide to spend “in the patch.”
