Over at Search Engine Land, Danny Sullivan did a deeper dive into the Microsoft eye tracking study that I posted about last Friday. In it, Danny said:
“Interesting, the pattern is different than the ‘golden triangle’ that Enquiro has long talked about in its eye tracking studies, where you see all the red along the horizontal line of the top listing (indicating a lot of reading there), then less on the second listing, then less still as you move down.”
I just want to draw a few distinctions between the studies. In our study, we wanted to replicate typical search behavior as closely as possible, so we let people interact with actual results pages. In the Microsoft study, the researchers were testing what would happen when the most relevant result was moved down the page, and how searchers responded to different snippet lengths. The results, while actual results, were intercepted and restructured (e.g., by stripping out sponsored ads) to let the researchers test different variables. We have said repeatedly that the Golden Triangle is not a constant; as our second study shows, it follows intent and the presentation of the search results.
In fact, the Microsoft study confirms many of our findings: the linear scanning of results, the scanning of results in groups, and the importance of being in the top five.
Another potential misconception that could be drawn from Danny’s interpretation is that there are hard and fast rules about how many results searchers scan. He settled on the number five. When looking at eye tracking results, it’s vital to remember that there is no typical activity. Please don’t take an average and apply it as a rule of thumb. Averages, or aggregate heat maps, are just that: what you get when you take many sessions that vary greatly and mash them together. Scanning activity is highly dependent on the intent of the user and what appears on the search results page. A particularly relevant result in the top sponsored area, matched to the intent of the majority of users, would probably mean little scanning beyond the first or second organic result. If the query is more ambiguous, on the other hand, you could see scanning much deeper on the page. The Microsoft study used two tasks that would generate a limited number of queries, and recorded interactions based on this limited scope. Our studies, while using more tasks, still of necessity represented the tiniest slice of possible interactions.
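To see why an average scan depth makes a poor rule of thumb, consider a quick sketch. The numbers below are invented for illustration only (they come from neither study): a mix of shallow, navigational-style sessions and deep, research-style sessions can produce a mean of about five even if no individual session actually scanned five results.

```python
# Hypothetical scan depths: how many results each session scanned.
# These values are invented for illustration, not taken from either study.
sessions = [2, 2, 1, 2, 3, 9, 8, 10, 2, 1, 9, 8, 2, 1, 10]

mean_depth = sum(sessions) / len(sessions)
shallow = [s for s in sessions if s <= 3]   # quick, navigational-style scans
deep = [s for s in sessions if s >= 8]      # deep, research-style scans

print(f"mean scan depth: {mean_depth:.1f}")       # prints: mean scan depth: 4.7
print(f"shallow sessions: {len(shallow)}")        # prints: shallow sessions: 9
print(f"deep sessions: {len(deep)}")              # prints: deep sessions: 6

# The mean lands between the two clusters: not one session scanned 4-7 results.
assert not any(4 <= s <= 7 for s in sessions)
```

The aggregate says "searchers scan about five results," yet the behavior it describes never occurred in a single session; that is the trap of treating an average as a rule.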
After looking at over a thousand sessions in the past two years, I’ve learned firsthand that there are a lot of variables in scanning patterns and interactions with the search page. An eye tracking study provides clues, not definitive answers. You have to take the results and carefully extrapolate them beyond the scope of the study; we spent a lot of time doing this when writing up both of our reports. You try to find universal behaviors and commonalities, but you have to be very careful not to accept the results at face value. Drawing conclusions such as “snippets should be longer” or “official site tags should become standard” is dangerous, because neither holds for every search. The study itself found that ideal snippet length is highly dependent on the task and the intent of the user.
If anything, what eye tracking has shown me is the need for more flexible search results, personalized to me and my intent at the time.