First published October 23, 2008 in Mediapost’s Search Insider
Two weeks ago, I talked about the concept of selective perception: how we subconsciously pick and choose what we pay attention to. Then, last week, I explained how engagement with search is significantly different from engagement with other types of advertising. These two concepts set the stage for what I want to do today. In this column, I want to lay out a step-by-step hypothetical walk-through of our cognitive engagement with a search page.
Searching on Autopilot
First, I think it’s important to clear up a common misunderstanding. We don’t think our way through an entire search interaction. The brain kicks into cognitive high gear (involving the cortex) only when it absolutely needs to. When we’re engaged in a mental task, any mental task, our brain is constantly looking for cognitive shortcuts to lessen the workload required. Most of these shortcuts involve limbic structures at the sub-cortical level, including the basal ganglia, hippocampus, thalamus and nucleus accumbens. This is a good thing, as these structures have been honed through generations of successful use to simplify even the most complicated tasks. They’re the reason driving is much easier for you now than it was the first time you climbed behind the wheel. These structures and their efficiencies also play a vital role in our engagement with search.
So, to begin with, our mind identifies a need for information. Usually, this is a subtask that is part of a bigger goal. The goal is established in the prefrontal cortex, and the neural train starts rolling toward it. We realize there’s a piece of information missing that prevents us from getting closer to our goal – and, based on our past successful experiences, we determine that a search engine offers the shortest route to that information. This is the first of our processing efficiencies. We don’t deliberate for long hours about the best place to turn. We make a quick, heuristic decision based on what’s worked in the past. The majority of this process is handled at the sub-cortical level.
The Google Habit
Now we have the second subconscious decision. Although we have several options available for searching, the vast majority of us will turn to Google, because we’ve developed a Google habit. Why spend precious cognitive resources considering our options when Google has generally proved successful in the past? Our cortex has barely begun to warm up at this point. The journey thus far has been on autopilot.
The prefrontal cortex, home of our working memory, first sparked to life with the realization of the goal and the identification of the subtask: locating the missing piece of information. Now, the cortical mind is engaged once again as we translate that subtask into an appropriate query. This involves matching the concept in our minds with the right linguistic label. Again, we’re not going to spend a lot of cognitive effort on this, which is why query construction tends to start simply and become longer and more complex only if required. Throughout this process, the label, the query we typed into the search box, remains embedded in working memory.
At this point, the prefrontal cortex begins to idle down again. The next exercise is handled by the brain as a simple matching game. We have the label, or query, in our mind. We scan the page in the path we’ve been conditioned to believe will lead to the best results: starting in the upper left, and then moving down the page in an F-shaped scan pattern. All we want to do is find a match between the query in our prefrontal cortex and the results on the page.
Here the brain also conserves cognitive processing energy by breaking the page into chunks of three or four results. This is due to the channel capacity of our working memory: how many discrete chunks of information we can process in our prefrontal cortex at a time. We scan the results looking first for the query, usually in the result titles. And it’s here, I believe, that a very important cognitive switch is thrown.
The “Pop Out” Effect
When we construct the query, we type it into the search box. In the process, we remember the actual shape of the phrase. When we first scan results, we’re not reading words; we’re matching shapes. In cognitive psychology, this is called the “pop out” effect. We can recognize shapes much faster than we can read words. The shapes of our query literally “pop out” from the page as a first step toward matching relevance. The effect is enhanced by query (or hit) bolding. This matching game is done at the sub-cortical level.
If the match is positive (shape = query), our eye lingers long enough to start picking up the detail around the word. We’ve seen in multiple eye tracking studies that foveal focus (the center of the field of vision) tends to hit the query in the title, while peripheral vision begins to pick up the words surrounding it. In our original eye tracking study, we called this semantic mapping. In his book “Information Foraging,” Peter Pirolli referred to this activity as spreading activation.

It’s after the “pop out” match that the prefrontal cortex again kicks into gear. As additional words are picked up, they are used to reinforce the original scent cue. Additional words from the result pull concepts into the prefrontal cortex (a recognized URL, a feature, supporting information, price, brand), which tend to engage different cortical regions as long-term memory labels are paged and brought back into working memory. If enough matches with the original mental construct of the information sought are registered, the link is clicked.
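For readers who think in code, the walk-through above can be compressed into a toy model: scan in chunks of three or four, try the fast “pop out” title match first, and evaluate the surrounding words only when it fires. This is purely illustrative. The result format, the chunk size and the cue-count threshold are my own assumptions, not anything measured in the eye tracking studies.

```python
# Toy model of the scan described above. Results are processed in chunks
# (working-memory capacity), a fast "shape" match on the query is tried
# first, and only matching results get deeper, slower evaluation.

def scan_results(results, query, chunk_size=4, threshold=2):
    """Return the first result whose title 'pops out' (contains the query)
    and whose snippet reinforces enough supporting cues, else None."""
    cues = set(query.lower().split())
    for start in range(0, len(results), chunk_size):
        for result in results[start:start + chunk_size]:
            # Fast sub-cortical step: does the query shape appear in the title?
            if query.lower() not in result["title"].lower():
                continue
            # Slower cortical step: count supporting cues in the snippet,
            # plus one for the title match itself.
            snippet_words = set(result["snippet"].lower().split())
            if len(cues & snippet_words) + 1 >= threshold:
                return result
    return None

results = [
    {"title": "Cheap flights", "snippet": "book airline tickets online"},
    {"title": "Eye tracking studies", "snippet": "search eye tracking research"},
]
print(scan_results(results, "eye tracking")["title"])
```

Note that most results never reach the slower evaluation step at all, which is the cognitive economy the column describes: the cortex is engaged only after the cheap match succeeds.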
Next week, we’ll look at the nature of this memory recall, including the elusive brand message.