Our Disappearing Attention Spans

Last week, Mediapost Editor in Chief Joe Mandese mused about our declining attention spans. He wrote, 

“while in the past, the most common addictive analogy might have been opiates — as in an insatiable desire to want more — these days [consumers] seem more like speed freaks looking for the next fix.”

Mandese cited a couple of recent studies, saying that more than half of mobile users tend to abandon any website that takes longer than three seconds to load. That

“has huge implications for the entire media ecosystem — even TV and video — because consumers increasingly are accessing all forms of content and commerce via their mobile devices.”

The question that begs to be asked here is, “Is a short attention span a bad thing?” The famous comparison is that we are now more easily distracted than a goldfish. But does a shorter attention span negatively impact us, or is it just our brain changing to be a better fit with our environment?

Academics have been debating the impact of technology on our ability to cognitively process things for some time. Journalist Nicholas Carr sounded the warning in his 2010 book, “The Shallows,” where he wrote, 

“(Our brains are) very malleable, they adapt at the cellular level to whatever we happen to be doing. And so the more time we spend surfing, and skimming, and scanning … the more adept we become at that mode of thinking.”

Certainly, Carr is right about the plasticity of our brains. It’s one of their most advantageous features. But is our digital environment forever pushing our brains to the shallow end of the pool? Well, it depends. Context is important. One of the biggest factors in determining how we process the information we’re seeing is the device we’re seeing it on.

Back in 2010, Microsoft did a large-scale ethnographic study on how people searched for information on different devices. The researchers found those behaviors differed greatly depending on the platform being used and the intent of the searcher. They found three main categories of search behaviors:

  • Missions are looking for one specific answer (for example, an address or phone number) and often happen on a mobile device.
  • Excavations are widespread searches that need to combine different types of information (for example, researching an upcoming trip or major purchase). They are usually launched on a desktop.
  • Finally, there are Explorations: searching for novelty, often to pass the time. These can happen on all types of devices and can often progress through different devices as the exploration evolves. The initial search may be launched on a mobile device, but as the user gets deeper into the exploration, she may switch to a desktop.

The important thing about this research was that it showed our information-seeking behaviors are very tied to intent, which in turn determines the device used. So, at a surface level, we shouldn’t be too quick to extrapolate behaviors seen on mobile devices with certain intents to other platforms or other intents. We’re very good at matching a search strategy to the strengths and weaknesses of the device we’re using.

But at a deeper level, if Carr is right (and I believe he is) about our constant split-second scanning of information to find items of interest making permanent changes in our brains, what are the implications of this?

For such a fundamentally important question, there is only a small but rapidly growing body of academic research that has tried to answer it. To add to the murkiness, many of the studies contradict each other. The best summary I could find of academia’s quest to determine if “the Internet is making us stupid” was a 2015 article in the academic journal The Neuroscientist.

The authors sum up by essentially saying both “yes” — and “no.” We are getting better at quickly filtering through reams of information. We are spending fewer cognitive resources memorizing things we know we can easily find online, which theoretically leaves those resources free for other purposes. Finally, for this post, I will steer away from commenting on multitasking, because the academic jury is still very much out on that one.

But the authors also say that 

“we are shifting towards a shallow mode of learning characterized by quick scanning, reduced contemplation and memory consolidation.”

The fact is, we are spending more and more of our time scanning and clicking. There are inherent benefits to us in learning how to do that faster and more efficiently. The human brain is built to adapt and become better at the things we do all the time. But there is a price to be paid. The brain will also become less capable of doing the things we don’t do as much anymore. As the authors said, this includes actually taking the time to think.

So, in answer to the question “Is the Internet making us stupid?” I would say no. We are just becoming smart in a different way.

But I would also say the Internet is making us less thoughtful. And that brings up a rather worrying prospect.

As I’ve said many times before, the brain thinks both fast and slow. The fast loop is brutally efficient. It is built to get stuff done in a split second, without having to think about it. Because of this, the fast loop has to be driven by what we already know or think we know. Our “fast” behaviors are necessarily bounded by the beliefs we already hold. It’s this fast loop that’s in control when we’re scanning and clicking our way through our digital environments.

But it’s the slow loop that allows us to extend our thoughts beyond our beliefs. This is where we’ll find our “open minds,” if we have such a thing. Here, we can challenge our beliefs and, if presented with enough evidence to the contrary, willingly break them down and rebuild them to update our understanding of the world. In the sense-making loop, this is called reframing.

The more time we spend “thinking fast” at the expense of “thinking slow,” the more we will become prisoners to our existing beliefs. We will be less able to consolidate and consider information that lies beyond those boundaries. We will spend more time “parsing” and less time “pondering.” As we do so, our brains will shift and change accordingly.

Ironically, our minds will change in such a way to make it exceedingly difficult to change our minds.

Just in Time for Christmas: More Search Eye-Tracking

The good folks over at the Nielsen Norman Group have released a new search eye tracking report. The findings are quite similar to one my former company — Mediative — did a number of years ago (this link goes to a write-up about the study. Unfortunately, the link to the original study is broken. *Insert head smack here).

In the Nielsen Norman study, the two authors — Kate Moran and Cami Goray — looked at how a more visually rich and complex search results page would impact user interaction with the page. The authors of the report called the sum of participant interactions a “Pinball Pattern”: “Today, we find that people’s attention is distributed on the page and that they process results more nonlinearly than before. We observed so much bouncing between various elements across the page that we can safely define a new SERP-processing gaze pattern — the pinball pattern.”

While I covered this at some length when the original Mediative report came out in 2014 (in three separate columns: 1,2 & 3), there are some themes that bear repeating. Unfortunately, I found the study’s authors missed what I think are some of the more interesting implications. 

In the days of the “10 Blue Links” search results page, we used the same scanning strategy no matter what our intent was. In an environment where the format never changes, you can afford to rely on a stable and consistent strategy. 

In our first eye-tracking study, published in 2005, this consistent strategy led to something we called the Golden Triangle. But those days are over.

Today, when every search result can look a little bit different, it comes as no surprise that every search “gaze plot” (the path the eyes take through the results page) will also be different. Let’s take a closer look at the reasons for this. 

SERP Eye Candy

In the Nielsen Norman study, the authors felt “visual weighting” was the main factor in creating the “Pinball Pattern”: “The visual weight of elements on the page drives people’s scanning patterns. Because these elements are distributed all over the page and because some SERPs have more such elements than others, people’s gaze patterns are not linear. The presence and position of visually compelling elements often affect the visibility of the organic results near them.”

While the visual impact of the page elements is certainly a factor, I think it’s only part of the answer. I believe a bigger, and more interesting, factor is how the searcher’s brain and its searching strategies have evolved in lockstep with a more visually complex results page. 

The Importance of Understanding Intent

The reason why we see so much variation in scan patterns is that there is also extensive variation in searchers’ intent. The exact same search query could be used by someone intent on finding an online or physical place to purchase a product, comparing prices on that product, looking to learn more about the technical specs of that product, looking for how-to videos on the use of the product, or looking for consumer reviews on that product.

It’s the same search, but with many different intents. And each of those intents will result in a different scanning pattern. 

Predetermined Page Visualizations

I really don’t believe we start each search page interaction with a blank slate, passively letting our eyes be dragged to the brightest, shiniest object on the page. I think that when we launch the search, our intent has already created an imagined template for the page we expect to see. 

We have all used search enough to be fairly accurate at predicting what the page elements might be: thumbnails of videos or images, a map showing relevant local results, perhaps a Knowledge Graph panel in the right-hand column.

Yes, the visual weighting of elements acts as an anchor to draw the eye, but I believe the eye is using this anticipated template to efficiently parse the results page.

I have previously referred to this behavior as a “chunking” of the results page. And we already have an idea of what the most promising chunks will be when we launch the search. 

It’s this chunking strategy that’s driving the “pinball” behavior in the Nielsen Norman study.  In the Mediative study, it was somewhat surprising to see that users were clicking on a result in about half the time it took in our original 2005 study. We cover more search territory, but thanks to chunking, we do it much more efficiently.

One Last Time: Learn Information Scent

Finally, let me drag out a soapbox I haven’t used for a while. If you really want to understand search interactions, take the time to learn about Information Scent and how our brains follow it (Information Foraging Theory — Pirolli and Card, 1999 — the link to the original study is also broken. *Insert second head smack, this one harder.). 

This is one area where the Nielsen Norman Group and I are totally aligned. In 2003, Jakob Nielsen — the first N in NNG — called the theory “the most important concept to emerge from human-computer interaction research since 1993.”

On that we can agree.

Data does NOT Equal People

We marketers love data. We treat it like a holy grail: a thing to be worshipped. But we’re praying at the wrong altar. Or, at the very least, we’re praying at a misleading altar.

Data is the digital residue of behavior. It is the contrails of customer intent — a thin, wispy proxy for the rich bandwidth of the real world. It does have a purpose, but it should be just one tool in a marketer’s toolbox. Unfortunately, we tend to use it as a Swiss army knife, thinking it’s the only tool we need.

The problem is that data is seductive. It’s pliable and reliable, luring us into manipulating it because doing so is so easy. It can be twisted and molded with algorithms and spreadsheets.

But it’s also sterile. There is a reason people don’t fit nicely into spreadsheets. There are simply not enough dimensions and nuances to accommodate real human behavior.

Data is great for answering the questions “what,” “who,” “when” and “where.” But those are all glimpses of what has happened. Stopping here is like navigating through the rear-view mirror.

Data seldom yields the answer to “why.” But it’s why that makes the magic happen, that gives us an empathetic understanding that helps us reliably predict future behaviors.

Uncovering the what, who, when and where makes us good marketers. But it’s “why” that makes us great. It’s knowing why that allows us to connect the distal dots, hacking out the hypotheses that can take us forward in the leaps required by truly great marketing. As Tom Goodwin, the author of “Digital Darwinism,” said in a recent post, “What digital has done well is have enough of a data trail to claim, not create, success.”

We as marketers have to resist stopping at the data. We have to keep pursuing why.

Here’s one example from my own experience. Some years ago, my agency did an eye-tracking study that looked at gender differences in how we navigate websites.

For me, the most interesting finding to fall out of the data was that females spent a lot more time than males looking at a website’s “hero” shot, especially if it was a picture that had faces in it. Males quickly scanned the picture, but then immediately moved their eyes up to the navigation menu and started scanning the options there. Females lingered on the graphic and then moved on to scan text immediately adjacent to it.

Now, I could have stopped at “who” and “what,” which in itself would have been a pretty interesting finding. But I wanted to know “why.” And that’s where things started to get messy.

To start to understand why, you have to rely on feelings and intuition. You also have to accept that you probably won’t arrive at a definitive answer. “Why” lives in the realm of “wicked” problems, which I defined in a previous column as “questions that can’t be answered by yes or no — the answer always seems to be maybe.  There is no linear path to solve them. You just keep going in loops, hopefully getting closer to an answer but never quite arriving at one. Usually, the optimal solution to a wicked problem is ‘good enough – for now.’”

The answer to why males scan a website differently than females is buried in a maze of evolutionary biology, social norms and cognitive heuristics. It probably has something to do with wayfinding strategies and hardwired biases. It won’t just “fall out” of data because it’s not in the data to begin with.

Even half-right “why” answers often take months or even years of diligent pursuit to reveal themselves. Given that, I understand why it’s easier to just focus on the data. It will get you to “good,” and maybe that’s enough.

Unless, of course, you’re aiming to “put a ding in the universe,” as Steve Jobs famously put it. Then you have to shoot for great.

Search and The Path to Purchase

Just how short do we want the path to purchase to be anyway?

A few weeks back, Mediapost reporter Laurie Sullivan brought this question up in her article detailing how Instagram is building ecommerce into its app. While Instagram is not usually considered a search platform, Sullivan muses on the connecting of two dots that seem destined to be joined: search and purchase. But is that a destiny that users can “buy into”?

Again, this is one of those questions where the answer is always, “It depends.”  And there are at least a few dependencies in this case.

The first is whether our perspective is that of a marketer or a consumer. Marketers always want the path to purchase to be as short as possible. When we have that hat on, we won’t be fully satisfied until the package hits our front step at about the same time we get the first mental inkling to buy.

Amazon has done the most to truncate the path to purchase. Marketers look longingly at its one-click ordering path – requiring mere seconds and a single click to go from search to successful fulfillment. If only all purchases were this streamlined, the marketer in us muses.

But if we’re leading our double life as a consumer, there is a second “It depends…” And that depends on what our shopping intentions are. There are times when we – as consumers – also want the fastest possible path to purchase. But that’s not true all the time.

Back when I was looking at purchase behaviors in the B2B world, I found that there are variables that lead to different intentions on the part of the buyer. Essentially, it boils down to the degree of risk and reward in the purchase itself. I first wrote about this almost a decade ago now.

If there’s a fairly high degree of risk inherent in the purchase itself, the last thing we want is a frictionless path to purchase. These are what we call high consideration purchases.

We want to take our time, feeling that we’ve considered all the options. One click ordering scares the bejeezus out of us.

Let’s go back to the Amazon example. Today, Amazon is the default search engine of choice for product searches, outpacing Google by a margin rapidly approaching double digits. But this is not really an apples to apples comparison. We have to factor in the deliberate intention of the user. We go to Amazon to buy, so a faster path to purchase is appropriate. We go to Google to consider. And for reasons I’ll get into soon, we would be less accepting of a “buy” button there.

The buying paths we would typically take in a social platform like Instagram are probably not that high risk, so a fast path to purchase might be fine. But there’s another factor that we need to consider when shortening the path to purchase – or building a path in the first place – in what has traditionally been considered a discovery platform. Let’s call it a mixing of motives.

Google has been dancing around a shorter path to purchase for years now. As Sullivan said in her article, “Search engines have strength in what’s known as discovery shopping, but completing the transaction has never been a strong point — mainly because brands decline to give up the ownership of the data.”

Data ownership is one thing, but even if the data were available, including a “buy now” button in search results can also lead to user trust issues. For many purchases, we need to feel that our discovery engine has no financial motive in the ordering of their search results. This – of course – is a fallacy we build in our own minds. There is always a financial motive in the ordering of search results. But as long as it’s not overt, we can trick ourselves into living with it. A “buy now” button makes it overt.

This problem of mixed motives is not just a problem of user perception. It also can lead publishers down a path that leaves objectivity behind and pursues higher profits ahead. One example is TripAdvisor. Some years ago, they made the corporate decision to parlay their strong position as a travel experience discovery platform into an instant booking platform. In the beginning, they separated this booking experience onto its own platform under the brand Viator. Today, the booking experience has been folded into the main TripAdvisor results and – more disturbingly – is now the default search order. Every result at the top of the page has a “Book Now” button.

Speaking as a sample of one, I trust TripAdvisor a lot less than I used to.


How We Might Search (On the Go)

As I mentioned in last week’s column, Mediative has just released a new eye-tracking study on mobile devices. And it appears that we’re still conditioned to look for the number one organic result before clicking on our preferred destination.

But…

It appears that things might be in the process of changing. This makes sense. Searching on a mobile device is – and should be – significantly different from searching on a desktop. We have different intents. We are interacting with a different platform. Even the way we search is different.

Searching on a desktop is all about consideration. It’s about filtering and shortlisting multiple options to find the best one. Our search strategies are still carrying a significant amount of baggage from what search was – an often imperfect way to find the best place to get more information about something. That’s why we still look for the top organic listing. For some reason we still subconsciously consider this the gold standard of informational relevancy. We measure all other results against it. That’s why we make sure we reserve one slot from the three to five available in our working memory (I have found that the average person considers about 4 results at a time) for its evaluation.

But searching on a mobile device isn’t about filtering content. For one thing, it’s absolutely the wrong platform to do this with. The real estate is too limited. For another, it’s probably not what we want to spend our time doing. We’re on the go and trying to get stuff done. This is not the time for pausing and reflecting. This is the time to find what we’re looking for and use it to take action.

This all makes sense, but the fact remains that the way we search is a product of habit. It’s a conditioned subconscious strategy that was largely formed on the desktop. Most of us haven’t done enough searching on mobile devices yet to abandon our desktop-derived strategies and create new mobile-specific ones. So, our subconscious starts playing out the desktop script and only varies from it when it looks like it’s not going to deliver acceptable results. That’s why we’re still looking for that number one organic listing to benchmark against.

There were a few findings in the Mediative study that indicate our desktop habits may be starting to slip on mobile devices. But before we review them, let’s do a quick review of how habits play out. Habits are the brain’s way of cutting down on thinking. If we do something over and over again and get acceptable results, we store that behavior as a habit. The brain goes on autopilot so we don’t have to think our way through a task with predictable outcomes.

If, however, things change, either in the way the task plays out or in the outcomes we get, the brain reluctantly takes control again and starts thinking its way through the task. I believe this is exactly what’s happening with our mobile searches. The brain desperately wants to use its desktop habits, but the results are falling below our threshold of acceptability. That means we’re all somewhere in the process of rebuilding a search strategy more suitable for a mobile device.

Mediative’s study shows me a brain that’s caught between the desktop searches we’ve always done and the mobile searches we’d like to do. We still feel we should scroll to see at least the top organic result, but as mobile search results become more aligned with our intent – which is typically to take action right away – we are being sidetracked from our habitual behaviors and kicking our brains into gear to take control. The result is that when Google shows search elements that are probably more aligned with our intent – local results, knowledge graphs or even highly relevant ads with logical ad extensions (such as a “call” link) – we lose confidence in our habits. We still scroll down to check out the organic result, but we probably scroll back up and click on the more relevant result.

All this switching back and forth from habitual to engaged interaction with the results ends up exacting a cost in terms of efficiency. We take longer to conduct searches on a mobile device, especially if that search shows other types of results near the top. In the study, participants spent an extra 2 seconds or so scrolling between the presented results (7.15 seconds for varied results vs. 4.95 seconds for organic only results). And even though they spent more time scrolling, more participants ended up clicking on the mobile relevant results they saw right at the top.

The trends I’m describing here are subtle – often playing out in a couple of seconds or less. And you might say that it’s no big deal. But habits are always a big deal. The fact that we’re still relying on desktop habits laid down over the past two decades shows how persistent they can be. If I’m right and we’re finally building new habits specific to mobile devices, those habits could dictate our search behaviors for a long time to come.

In Search – Even in Mobile – Organic Still Matters

I told someone recently that I feel like Rick Astley. You know, the guy who had the monster hit “Never Gonna Give You Up” in 1987 and is still trading on it almost 30 years later? He even enjoyed a brief resurgence of viral fame in 2007 when the world discovered what it meant to be “Rickrolled.”

For me, my “Never Gonna Give You Up” is the Golden Triangle eye-tracking study we released in 2005. It’s my one-hit wonder (to be fair to Astley, he did have a couple of other hits, but you get the idea). And yes, I’m still talking about it.

The Golden Triangle as we identified it existed because people were drawn to look at the number one organic listing. That’s an important thing to keep in mind. In today’s world of ad blockers and teeth-gnashing about the future of advertising, there is probably no purer or more controllable environment than the search results page. Creativity is stripped to the bare minimum. Ads have to be highly relevant and non-promotional in nature. Interaction is restricted to the few seconds required to scan and click. If there was anywhere ads might be tolerated, it’s on the search results page.

But…

If we fully trusted ads – especially ones as benign as those that show up on search results – there would have been no Golden Triangle. It only existed because we needed to see that top organic result, and dragging our eyes down to it formed one side of the triangle.

Fast forward almost 10 years. Mediative, which is the current incarnation of my old company, released a follow-up two years ago. While the Golden Triangle had definitely morphed into a more linear scan, the motivation remained – people wanted to scan down to see at least one organic listing. They didn’t trust ads then. They don’t trust ads now.

Google has used this need to anchor our scanning with the top organic listing to introduce a greater variety of results into the top “hot zone” – where scanning is the greatest. Now, depending on the search, there is likely to be at least a full screen of various results – including ads, local listings, reviews or news items – before your eyes hit that top organic web result. Yet, we seem to be persistent in our need to see it. Most people still make the effort to scroll down, find it and assess its relevance.

It should be noted that all of the above refers to desktop search. But almost a year ago, Google announced that – for the first time ever – more searches happened on a mobile device than on a desktop.

Mediative just released a new eye-tracking study (note: I was not involved at all with this one). This time, they dove into scan patterns on mobile devices. Given the limited real estate, and the fact that for many popular searches you would have to consciously scroll down at least a couple of times to see the first organic result, did users become more accepting of ads?

Nope. They just scanned further down!

The study’s first finding was that the #1 organic listing still captures the most click activity, but it takes users almost twice as long to find it compared to a desktop.

The study’s second finding was that even though organic is still important, position matters more than ever. Users will make the effort to find the top organic result and, once they do, they’ll generally scan the top 4 results, but if they find nothing relevant, they probably won’t scan much further. In the study, 92.6% of the clicks happened above the 4th organic listing. On a desktop, 84% of the clicks happened above the number 4 listing.

The study’s third finding reveals an interesting paradox that’s emerging on mobile devices: we’re carrying our search habits over from the desktop – especially our need to see at least one organic listing. The average time to scan the top sponsored listing was only 0.36 seconds, meaning that people checked it out immediately after orienting themselves to the mobile results page. But for those who clicked the listing, the average time to click was 5.95 seconds. That’s almost 50% longer than the average time to click on a desktop search. When organic results are pushed down the page because of other content, it takes us longer before we feel confident enough to make our choice. We still need to anchor our relevancy assessment with that top organic result, and that’s causing us to be less efficient in our mobile searches than we are on the desktop.

The study also indicated that these behaviors could be in flux. We may be adapting our search strategies for mobile devices, but we’re just not quite there yet. I’ll touch on this in next week’s column.


Evolved Search Behaviors: Takeaways for Marketers

In the last two columns, I first looked at the origins of the original Golden Triangle, and then looked at how search behaviors have evolved in the last 9 years, according to a new eye tracking study from Mediative. In today’s column, I’ll try to pick out a few “so whats” for search marketers.

It’s Not About Location, It’s About Intent

In 2005, search marketing was all about location. It was about grabbing a part of the Golden Triangle, and the higher, the better. The delta between scanning and clicks from the first organic result to the second was dramatic – a factor of 2 to 1! Similar differences were seen in the top paid results. It’s as if, given the number of options available on the page (usually between 12 and 18, depending on the number of ads showing), searchers used position as a quick and dirty way to filter results, reasoning that the higher the result, the better match it would be to their intent.

In 2014, however, it’s a very different story. Because the first scan is now to find the most appropriate chunk, the importance of being high on the page is significantly lessened. Also, once the second step of scanning has begun within a results chunk, there seems to be more vertical scanning and less lateral scanning. Mediative found that in some instances, it was the third or fourth listing in a chunk that attracted the most attention, depending on content, format and user intent. For example, in the heat map shown below, the third organic result actually got as many clicks as the first, capturing 26% of all the clicks on the page and 15% of the time spent on the page. The reason could be that it was the only listing with the Google Ratings Rich Snippet, thanks to the proper use of structured data markup. In this case, the information scent that promised user reviews was a strong match with user intent, but you would only know this if you knew what that intent was.

[Image: heat map of a Google results page for a “Ford Fiesta” search]
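As an aside, the structured data behind that ratings rich snippet is worth seeing. Below is a minimal sketch, written as a Python dict serialized to schema.org JSON-LD; the product name and rating figures are hypothetical, and Google’s current documentation should be checked for the exact fields it requires.

    import json

    # A minimal sketch of the kind of schema.org markup that can earn a
    # ratings rich snippet. All values below are hypothetical.
    product_markup = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": "Example Product",
        "aggregateRating": {
            "@type": "AggregateRating",
            "ratingValue": "4.4",   # average user rating
            "reviewCount": "89",    # number of reviews behind that rating
        },
    }

    # Serialized to JSON-LD, this would sit in the page inside a
    # <script type="application/ld+json"> tag.
    print(json.dumps(product_markup, indent=2))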

This change in user scanning strategies makes it more important than ever to understand the most common intents that drive users to a search engine. What will be the decision steps they go through, and at which of those steps might they turn to a search engine? Would it be to discover a solution to an identified need, to find out more about a known solution, to help build a consideration set for direct comparisons, to look for one specific piece of information (i.e., a price) or to navigate to one particular destination, perhaps to order online? If you know why your prospects might use search, you’ll have a much better idea of what you need to do with your content to ensure you’re in the right place at the right time with the right content. Nothing shows this more clearly than the following comparison of heat maps. The one on the left was produced when searchers were given a scenario that required them to gather information. The one on the right resulted from a scenario where searchers had to find a site to navigate to. You can see the dramatic difference in scanning behaviors.

[Image: side-by-side heat maps comparing informational and navigational search scenarios]

If search used to be about location, location, location, it’s now about intent, intent, intent.

Organic Optimization Matters More than Ever!

Search marketers have been saying that organic optimization has been dying for at least two decades now, ever since I got into this industry. Guess what? Not only is organic optimization not dead, it’s now more important than ever! In Enquiro’s original 2005 study, the top two sponsored ads captured 14.1% of all clicks. In Mediative’s 2014 follow-up, the number really didn’t change that much, edging up to 14.5%. What did change was the relevance of the rest of the listings on the page. In 2005, all the organic results combined captured 56.7% of the clicks. That left about 29% of users either going to the second page of results, launching a new search or clicking on one of the side sponsored ads (these accounted for only a small fraction of the clicks). In 2014, the organic results, including all the different category “chunks,” captured 74.6% of the clicks. This leaves only about 11% either clicking on the side ads (again, a tiny percentage), going to the second page or launching a new search. That means Google has upped its first-page success rate to an impressive 90%.
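The arithmetic behind that claim is worth spelling out. Here’s a quick back-of-envelope tally in Python, assuming (as the surrounding numbers imply) that both the sponsored and organic figures are shares of all clicks on the page:

    # Back-of-envelope tally of first-page click shares, using the percentages
    # quoted above. Assumes both figures are shares of all clicks on the page.
    shares = {
        2005: {"top_sponsored": 14.1, "organic": 56.7},
        2014: {"top_sponsored": 14.5, "organic": 74.6},
    }

    for year, s in shares.items():
        first_page = s["top_sponsored"] + s["organic"]
        leftover = 100 - first_page  # second page, new search, or side ads
        print(f"{year}: first page {first_page:.1f}%, leftover {leftover:.1f}%")

    # 2005: first page 70.8%, leftover 29.2%
    # 2014: first page 89.1%, leftover 10.9%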

First of all, that means you really need to break onto the first page of results to gain any visibility at all. If you can’t do it organically, make sure you pay for presence. But secondly, it means that of all the clicks on the page, some type of organic result is capturing 84% of them. The trick is to know which type of organic result will capture the click – and to do that you need to know the user’s intent (see above). But you also need to optimize across your entire content portfolio. With my own blog, two of the biggest traffic referrers happen to be image searches.

Left Gets to Lead

The left side of the results page has always been important, but the evolution of scanning behaviors now makes it vital. The heat map below shows just how important it is to seed the left-hand side of your results with information scent.

[Image: heat map for the search “home decor store toronto,” with scanning concentrated on the left-hand side of the results]

Last week, I talked about how the categorization of results has caused us to adopt a two-stage scanning strategy: the first stage to determine which “chunks” of result categories are the best match to intent, and the second to evaluate the listings in the most relevant chunks. The vertical scan down the left-hand side of the page is where we decide which “chunks” of results are the most promising. And, in the second scan, because of the improved relevancy, we often make the decision to click without a lot of horizontal scanning to qualify our choice. Remember, we’re only spending a little over a second scanning a result before we click. This is just enough to pick up the barest whiffs of information scent, and almost all of that scent comes from the left side of the listing.

Look at the three choices above that captured the majority of scanning and clicks. The search was for “home decor store toronto.” The first popular result was a local result for the well-known brand Crate and Barrel. This reinforces how important brands can be if they show up on the left side of the result set. The second popular result was a website listing for another well-known brand – Pottery Barn. The third was a link to Yelp, a directory site that offered a choice of options. In all cases, the scent found in the far left of the result was enough to capture a click. There was almost no lateral scanning to the right. When crafting titles, snippets and metadata, make sure you stack information scent to the left.

In the end, there are no magic bullets from this latest glimpse into search behaviors. It still comes down to the five foundational planks that have always underpinned good search marketing:

  1. Understand your user’s intent
  2. Provide a rich portfolio of content and functionality aligned with those intents
  3. Ensure your content appears at or near the top of search results, either through organic optimization or well-run search campaigns
  4. Provide relevant information scent to capture clicks
  5. Make sure you deliver on what you promise post-click

Sure, the game is a little more complex than it was 9 years ago, but the rules haven’t changed.

Google’s Golden Triangle – Nine Years Later

Last week, I reviewed why the Golden Triangle existed in the first place. This week, we’ll look at how the scanning patterns of Google users have evolved in the past 9 years.

The reason I wanted to talk about Information Foraging last week is that it really sets the stage for understanding how the patterns have changed with the present Google layout. In particular, one thing was true for Google in 2005 that is no longer true in 2014 – back then, all result sets looked pretty much the same.

Consistency and Conditioning

If humans do the same thing over and over again and usually achieve the same outcome, we stop thinking about what we’re doing and we simply do it by habit. It’s called conditioning. But habitual conditioning requires consistency.

In 2005, the Google results page was a remarkably consistent environment. There were always 10 blue organic links, and usually up to three sponsored results at the top of the page. There may also have been a few sponsored results along the right side of the page. Also, Google would put what it determined to be the most relevant results, both sponsored and organic, at the top of the page. This meant that for any given search, no matter the user intent, the top 4 results should presumably include the most relevant one or two organic results and a few hopefully relevant sponsored options for the user. If Google did its job well, there should be no reason to go beyond these top 4 results, at least in terms of a first click. And our original study showed that Google generally did do a pretty good job – over 80% of first clicks came from the top 4 results.

In 2014, however, we have a much different story. The 2005 Google was a one-size-fits-all solution. All results were links to a website. Now, not only do we have a variety of results, but even the results page layout varies from search to search. Google has become better at anticipating user intent and dynamically changes the layout on each search to be a better match for intent.

[Image: heat map of a 2014 Google results page]

What this means, however, is that we need to think a little more whenever we interact with a search page. Because the Google results page is no longer the same for every single search we do, we have exchanged consistency for relevancy. This means that conditioning isn’t as important a factor as it was in 2005. Now, we must adopt a two-stage foraging strategy. This is shown in the heat map above. Our first foraging step is to determine what categories – or “chunks” of results – Google has decided to show on this particular results page. This is done with a vertical scan down the left side of the result set. In this scan, we’re looking for cues on what each chunk offers – typically in category headings or other quickly scanned labels. This first step is to determine which chunks are most promising in terms of information scent. Then, in the second step, we go back to the most relevant chunks and start scanning in a more deliberate fashion. Here, scanning behaviors revert to the “F”-shaped scan we saw in 2005, creating a series of smaller “Golden Triangles.”
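To make that two-stage strategy concrete, here’s a minimal sketch of how it might be modeled in Python. The chunk labels, scent scores and click threshold are all made up for illustration; the point is simply the order of operations: size up the chunks first, then scan listings only within the most promising one.

    # Illustrative model of the two-stage foraging strategy described above.
    # Chunk names, scent scores and the threshold are hypothetical.
    serp = {
        "local results":   [0.7, 0.5, 0.4],   # scent score per listing
        "image results":   [0.3, 0.2],
        "organic results": [0.9, 0.6, 0.8, 0.4],
    }

    CLICK_THRESHOLD = 0.75  # confidence needed before committing to a click

    # Stage 1: a quick vertical scan down the page to find the most
    # promising chunk, judged by the strongest scent cue it offers.
    best_chunk = max(serp, key=lambda chunk: max(serp[chunk]))

    # Stage 2: a more deliberate scan within that chunk only, clicking the
    # first listing whose scent pushes us over the confidence threshold.
    click = None
    for position, scent in enumerate(serp[best_chunk], start=1):
        if scent >= CLICK_THRESHOLD:
            click = (best_chunk, position)
            break

    print("Most promising chunk:", best_chunk)
    print("First click:", click)  # ('organic results', 1) in this example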

What is interesting about this is that although Google’s “chunking” of the results page forces us to scan in two separate steps, it’s actually more efficient for us. The time spent scanning each result is half of what it was in 2005, 1.2 seconds vs. 2.5 seconds. Once we find the right “chunk” of results, the results shown tend to be more relevant, increasing our confidence in choosing them.  You’ll see that the “mini” Golden Triangles have less lateral scanning than the original. We’re picking up enough scent on the left side of each result to push our “click confidence” over the required threshold.

A Richer Visual Environment

Google also offers a much more visually appealing results page than it did 9 years ago. Then, the entire result set was text-based. There were no images shown. Now, depending on the search, the page can include several images, as the example below (a search for “New Orleans art galleries”) shows.

[Image: heat map of a Google results page for “New Orleans art galleries,” with images included in the results]

The presence of images has a dramatic impact on our foraging strategies. First of all, images can be parsed much more quickly than text. We can determine the content of an image in a fraction of a second, whereas text requires a much slower and more deliberate type of mental processing. This means that our eyes are naturally drawn to images. You’ll notice that the above heat map has a light green haze over all the images shown. This is typical of the quick scan we do immediately upon page entry to determine what the images are about. Heat in an eye-tracking heat map is produced by the duration of foveal focus. This can be misleading when we’re dealing with images. The fovea centralis sits, predictably, at the center of the retina, where our focus is sharpest. We use it extensively when reading, but it’s not as important when we’re glancing at an image. We can make a coarse judgment about what a picture is without focusing on it. We don’t need our fovea to know it’s a picture of a building, or a person, or a map. It’s only when we need to determine the details of a picture that we’ll recruit the fine-grained resolution of our fovea.

Our ability to quickly parse images makes it likely that they will play an important role in our initial orientation scan of the results page. We’ll quickly scan the available images looking for information scent. If an image does offer scent, it will also act as a natural entry point for further scanning. Typically, when we see a relevant image, we look in its immediate vicinity to find more reinforcing scent. We often see scanning hot spots on titles or other text adjacent to relevant images.

We Cover More Territory – But We’re Also More Efficient

So, to sum up, it appears that with our new two step foraging strategy, we’re covering more of the page, at least on our first scan, but Google is offering richer information scent, allowing us to zero in on the most promising “chunks” of information on the page. Once we find them, we are quicker to click on a promising result.

Next week, I’ll look at the implications of this new behavior on organic optimization strategies.

The Evolution of Google’s Golden Triangle

In search marketing circles, most everyone has heard of Google’s Golden Triangle. It even has its own Wikipedia entry (which is more than I can say). The “Triangle” is rapidly coming up on its 10th birthday (it was March of 2005 when Did It and Enquiro – now Mediative – first released the study). This year, Mediative conducted a new study to see if what we found a decade ago still continues to be true. Another study, from the Institute of Communication and Media Research in Cologne, Germany, also looked at the evolution of search user behaviors. I’ll run through the findings of both studies to see if the Golden Triangle still exists. But before we dive in, let’s look back at the original study.

Why We Had a Golden Triangle in the First Place

To understand why the Golden Triangle appeared in the first place, you have to understand how humans look for relevant information. For this, I’m borrowing heavily from Peter Pirolli and Stuart Card at PARC and their Information Foraging Theory (by the way, absolutely every online marketer, web designer and usability consultant should be intimately familiar with this theory).

Foraging for Information

Humans “forage” for information. In doing so, they are very judicious about the amount of effort they expend to find the available information. This is largely a subconscious activity, with our eyes rapidly scanning for cues of relevancy. Pirolli and Card refer to these cues as “information scent.” Picture a field mouse scrambling across a table looking for morsels to eat and you’ll have an appropriate mental context in which to understand the concept of information foraging. In most online contexts, our initial evaluation of the amount of scent on a page takes no more than a second or two. In that time, we also find the areas that promise the greatest scent and go directly to them. To use our mouse analogy, the first thing she does is scurry quickly across the table and see where the scent of possible food is the greatest.

The Area of Greatest Promise

Now, imagine that same mouse comes back day after day to the same table, and every time she returns, she finds the greatest amount of food is always in the same corner. After a week or so, she learns that she doesn’t have to scurry across the entire table. All she has to do is go directly to that corner and start there. If, by some fluke, there is no food there, the mouse can again check out the rest of the table to see if there are better offerings elsewhere. The mouse has been conditioned to go directly to the “Area of Greatest Promise” first.

[Image: the original Golden Triangle heat map from the 2005 study]

F-Shaped Scanning

This was exactly the case when we did the first eye-tracking study in 2005. Google had set a table of available information, but it always put the best information in the upper left corner. We became conditioned to go directly to that area of greatest promise. The triangle shape came about because of the conventions of how we read in the Western world. We read top to bottom, left to right. So, to pick up information scent, we would first scan down the beginning of each of the top 4 or 5 listings. If we saw something that seemed to be a good match, we would scan across the title of the listing. If it was still a good match, we would quickly scan the description and the URL. If Google was doing its job right, there would be more of this lateral scanning on the top listing than on the subsequent listings. This F-shaped scanning strategy would naturally produce the Golden Triangle pattern we saw.

Working Memory and Chunking

There was another behavior we saw that helped explain the heat maps that emerged. Our ability to actively compare options requires us to hold in mind information about each of those options. This means that the number of options we can compare at any one time is restricted by the limits of our working memory. George Miller, in a famous 1956 paper, determined this to be 7 pieces of information, plus or minus two. The actual number depends on the type of information to be retained and the dimension of variability. In search foraging, the dimension is relevancy, and the inputs to the calculation are quick judgments of information scent based on a split-second scan of each listing. This is a fairly complex assessment, so we found that the number of options compared at once tends to max out at about 3 or 4 listings. This means that the user “chunks” the page into groupings of 3 or 4 listings and determines if one of the listings is worthy of a click. If not, the user moves on to the next chunk. We also see this in the heat map shown. Scanning activity drops dramatically after the first 4 listings. In our original study, we found that over 80% of first clicks on all the results pages tested came from the top 4 listings. This is also likely why Google restricted the paid ads shown above organic to 3 at the most.

So, that’s a quick summary of our findings from the 2005 study. Next week, we’ll look at how search scanning has changed in the past 9 years.

Note: Mediative and SEMPO will be hosting a Google+ Hang Out talking about their research on October 14th. Full details can be found here.

Wired for Information: A Brain Built to Google

First published August 26, 2010 in Mediapost’s Search Insider

In my last Search Insider, I took you on a neurological tour that gave us a glimpse into how our brains are built to read. Today, let’s dig deeper into how our brains guide us through an online hunt for information.

Brain Scans and Searching

First, a recap. In Nicholas Carr’s book, “The Shallows: What the Internet Is Doing to Our Brains,” I focused on one passage — and one concept — in particular. It’s likely that our brains have built a shortcut for reading. The normal translation from a printed word to a concept usually requires multiple mental steps. But because we read so much, and run across some words frequently, it’s probable that our brains have built shortcuts to help us recognize those words simply by their shape in mere milliseconds, instantly connecting us with the relevant concept. So, let’s hold that thought for a moment.

The Semel Institute at UCLA recently did a neuroscanning study that monitored what parts of the brain lit up during the act of using a search engine online. What the institute found was that when we become comfortable with the act of searching, our brains become more active. Specifically, the prefrontal cortex, the language centers and the visual cortex all “light up” during the act of searching, as well as some sub-cortical areas.

It’s the latter of these that indicates the brain may be using “pre-wired” shortcuts to directly connect words and concepts. It’s these sub-cortical areas, including the basal ganglia and the hippocampus, where we keep our neural “shortcuts.” They form the autopilot of the brain.

Our Brain’s “Waldo” Search Party

Now, let’s look at another study that may give us another piece of the puzzle in helping us understand how our brain orchestrates the act of searching online.

Dr. Robert Desimone at the McGovern Institute for Brain Research at MIT found that when we look for something specific, we “picture” it in our mind’s eye. This internal visualization in effect “wakes up” our brain and creates a synchronized alarm circuit: a group of neurons that hold the image so that we can instantly recognize it, even in complex surroundings. Think of a “Where’s Waldo” puzzle. Our brain creates a mental image of Waldo, activating a “search party” of Waldo neurons that synchronize their activities, sharpening our ability to pick out Waldo in the picture. The synchronization of neural activity allows these neurons to zero in on one aspect of the picture, in effect making it stand out from the surrounding detail.

Pirolli’s Information Foraging

One last academic reference, and then we’ll bring the pieces together. Peter Pirolli, from Xerox’s PARC, believes we “forage” for information, using the same inherent mechanisms we would use to search for food. So, we hunt for the “scent” of our quarry, but in this case, rather than the smell of food, it’s more likely that we lodge the concept of our objective in our heads. And depending on what that concept is, our brains recruit the relevant neurons to help us pick out the right “scent” quickly from its surroundings.  If our quarry is something visual, like a person or thing, we probably picture it. But if our brain believes we’ll be hunting in a text-heavy environment, we would probably picture the word instead. This is the way the brain primes us for information foraging.

The Googling Brain

This starts to paint a fascinating and complex picture of what our brain might be doing as we use a search engine. First, our brain determines our quarry and starts sending “top-down” directives so we can very quickly identify it. Our visual cortex helps us by literally painting a picture of what we might be looking for. If it’s a word, our brain becomes sensitized to the shape of the word, helping us recognize it instantly without the heavy lifting of linguistic interpretation.

Thus primed, we start to scan the search results. This is not reading, this is scanning our environment in mere milliseconds, looking for scent that may lead the way to our prey. If you’ve ever looked at a real-time eye-tracking session with a search engine, this is exactly the behavior you’d be seeing.

When we bring all the pieces together, we realize how instantaneous, primal and intuitive this online foraging is. The slow and rational brain only enters the picture as an afterthought.

Googling is done by instinct. Our eyes and brain are connected by a short cut in which decisions are made subconsciously and within milliseconds. This is the forum in which online success is made or missed.