How We Might Search (On the Go)

As I mentioned in last week’s column – Mediative has just released a new eye-tracking study on mobile devices. And it appears that we’re still conditioned to look for the number one organic result before clicking on our preferred destination.

But…

It appears that things might be in the process of changing. This makes sense. Searching on a mobile device is – and should be – significantly different from searching on a desktop. We have different intents. We are interacting with a different platform. Even the way we search is different.

Searching on a desktop is all about consideration. It’s about filtering and shortlisting multiple options to find the best one. Our search strategies still carry a significant amount of baggage from what search was – an often imperfect way to find the best place to get more information about something. That’s why we still look for the top organic listing. For some reason we still subconsciously consider this the gold standard of informational relevancy. We measure all other results against it. That’s why we make sure we reserve one of the three to five slots available in our working memory (I have found that the average person considers about four results at a time) for its evaluation.

But searching on a mobile device isn’t about filtering content. For one thing, it’s absolutely the wrong platform to do this with. The real estate is too limited. For another, it’s probably not what we want to spend our time doing. We’re on the go and trying to get stuff done. This is not the time for pausing and reflecting. This is the time to find what I’m looking for and use it to take action.

This all makes sense, but the fact remains that the way we search is a product of habit. It’s a conditioned subconscious strategy that was largely formed on the desktop. Most of us haven’t done enough searching on mobile devices yet to abandon our desktop-derived strategies and create new mobile-specific ones. So, our subconscious starts playing out the desktop script and only varies from it when it looks like it’s not going to deliver acceptable results. That’s why we’re still looking for that number one organic listing to benchmark against.

There were a few findings in the Mediative study that indicate our desktop habits may be starting to slip on mobile devices. But before we get to them, let’s quickly recap how habits play out. Habits are the brain’s way of cutting down on thinking. If we do something over and over again and get acceptable results, we store that behavior as a habit. The brain goes on autopilot so we don’t have to think our way through a task with predictable outcomes.

If, however, things change, either in the way the task plays out or in the outcomes we get, the brain reluctantly takes control again and starts thinking its way through the task. I believe this is exactly what’s happening with our mobile searches. The brain desperately wants to use its desktop habits, but the results are falling below our threshold of acceptability. That means we’re all somewhere in the process of rebuilding a search strategy more suitable for a mobile device.

Mediative’s study shows me a brain that’s caught between the desktop searches we’ve always done and the mobile searches we’d like to do. We still feel we should scroll to see at least the top organic result, but as mobile search results become more aligned with our intent, which is typically to take action right away, we are being sidetracked from our habitual behaviors and kicking our brains into gear to take control. The result is that when Google shows search elements that are probably more aligned with our intent – either local results, knowledge graph results or even highly relevant ads with logical ad extensions (such as a “call” link) – we lose confidence in our habits. We still scroll down to check out the organic result, but we probably scroll back up and click on the more relevant result.

All this switching back and forth from habitual to engaged interaction with the results ends up exacting a cost in terms of efficiency. We take longer to conduct searches on a mobile device, especially if that search shows other types of results near the top. In the study, participants spent an extra 2 seconds or so scrolling between the presented results (7.15 seconds for varied results vs. 4.95 seconds for organic only results). And even though they spent more time scrolling, more participants ended up clicking on the mobile relevant results they saw right at the top.

The trends I’m describing here are subtle – often playing out in a couple seconds or less. And you might say that it’s no big deal. But habits are always a big deal. The fact that we’re still relying on desktop habits that were laid down over the past two decades shows how persistent they can be. If I’m right and we’re finally building new habits specific to mobile devices, those habits could dictate our search behaviors for a long time to come.

Consumers in the Wild

Once a Forager, Always a Forager

Your world is a much different place than the African savanna. But the more than 100,000 generations of evolution that started on those plains still dictate a remarkable degree of our modern behavior.

Take foraging, for example. We evolved as hunters and gatherers. It was our primary survival instinct. And even though the first hominids are relatively recent additions to the biological family tree, strategies for foraging have been developing for millions and millions of years. It’s hardwired into the deepest and most inflexible parts of our brain. It makes sense, then, that foraging instincts that were once reserved for food gathering should be applied to a wide range of our activities.

That is, in fact, what Peter Pirolli and Stuart Card discovered two decades ago. When they looked at how we navigated online sources of information, they found that humans used the very same strategy we would have used for berry picking or gathering cassava roots. And one of the critical elements of this was something called Marginal Value.

Bounded Rationality & Foraging

It’s hard work being a forager. Most of your day – and energy – is spent looking for something to eat. The sparser the food sources in your environment, the more time you spend looking for them. It’s not surprising, therefore, that we should have some fairly well-honed calculations for assessing the quality of our food sources. This is what biologist Eric Charnov called Marginal Value in 1976. It’s an instinctual (and therefore, largely subconscious) evaluation of food “patches” by most types of foragers, humans included. It’s how our brain decides whether we should stay where we are or find another patch. It would have been a very big deal 2 million – or even 100,000 – years ago.
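Charnov’s rule can be made concrete with a small numerical sketch. The rule says a forager should leave a patch when the average rate of return (food gained divided by travel time plus time in the patch) stops improving. The gain curve and parameter values below are invented for illustration, not taken from Charnov’s paper:

```python
import math

def gain(t, g_max=10.0, r=0.5):
    """Cumulative food gained after t minutes in a patch.
    Diminishing returns: the patch depletes as we forage."""
    return g_max * (1 - math.exp(-r * t))

def best_leave_time(travel_time, dt=0.01, horizon=30.0):
    """Find the leave time that maximizes the overall average
    rate of gain, gain(t) / (travel_time + t), by brute-force
    scan over candidate leave times."""
    best_t, best_rate = 0.0, 0.0
    t = dt
    while t < horizon:
        avg_rate = gain(t) / (travel_time + t)
        if avg_rate > best_rate:
            best_t, best_rate = t, avg_rate
        t += dt
    return best_t

# The sparser the environment (longer travel between patches),
# the longer the model says to stay in each patch.
print(best_leave_time(travel_time=1.0))
print(best_leave_time(travel_time=10.0))
```

Running the sketch shows the core prediction: when patches are far apart, it pays to exploit each one more thoroughly before moving on, which is exactly the intuition the column applies to information patches.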

Today, for most of us, food sources are decidedly less “patchy.” But old instincts die hard. So we did what humans do. We borrowed an old instinct and applied it to new situations. We exapted our foraging strategies and started using them for a wide range of activities where we needed a rough-and-ready estimate of the return on our energy investment. Increasingly, these activities asked for an investment of cognitive processing power. And we did all this without knowing we were even doing it.

This brings us to Herbert Simon’s concept of Bounded Rationality. I believe this is tied directly to Charnov’s Marginal Value Theorem. When we calculate how much mental energy we’re going to expend on an information-gathering task, we subconsciously determine the promise of the information “patches” available to us. Then we decide to invest accordingly, based on our own “bounded” rationality.

Brands as Proxies for Foraging

It’s just this subconscious calculation that has turned the world of consumerism on its ear in the last two decades. As Itamar Simonson and Emanuel Rosen explain in their book Absolute Value, the explosion of information available has meant that we are making different marginal value calculations than we would have thirty or forty years ago. We have much richer patches available, so we’re more likely to invest the time to explore them. And, once we do, the way we evaluate our consumer choices changes completely. Our modern concept of branding was a direct result of both bounded rationality and sparse information patches. If a patch of objective and reliable information wasn’t apparent, we would rely on brands as a cognitive shortcut, saving our bounded rationality for more promising tasks.

Google, The Ultimate “Patch”

In understanding modern consumer behavior, I think we have to pay much more attention to this idea of marginal value. What is the nature of the subconscious algorithm that decides whether we’re going to forage for more information or rely on our brand beliefs? We evolved foraging strategies that play a huge part in how we behave today.

For example, the way we navigate our physical environment appears to owe much to how we used to search for food. Women determine where they’re going differently than men because women used to search for food differently. Men tend to navigate by orientation, mentally maintaining a spatial grid against which they plot their own location. Women do it by remembering routes. In my own research, I found split-second differences in how men and women navigated websites that seem to go back to those same foundations.

Whether you’re a man or a woman, however, you need some type of mental inventory of the information patches available to you in order to assess the marginal value of those patches. This is the mental landscape Google plays in. For more and more decisions, our marginal value calculation starts with a quick search on Google to see if any promising patches show up in the results. Our need to keep a mental inventory of patches can be subjugated to Google.

It seems ironic that in our current environment, more and more of our behavior can be traced back millions of years to behaviors that evolved in a world where high-tech meant a sharper rock.

The Messy Part of Marketing

Marketing is hard. It’s hard because marketing reflects real life. And real life is hard. But here’s the thing – it’s just going to get harder. It’s messy and squishy and filled with nasty little organic things like emotions and human beings.

For the past several weeks, I’ve been filing things away as possible topics for this column. For instance, I’ve got a pretty big file of contradicting research on what works in B2B marketing. Videos work. They don’t work. Referrals are the bomb. No, it’s content. Okay, maybe it’s both. Hmmm… pretty sure it’s not Facebook though.

The integration of marketing technology was another promising avenue. Companies are struggling with data. They’re drowning in data. They have no idea what to do with all the data that’s pouring in from smart watches and smart phones and smart bracelets and smart bangles and smart suppositories and – okay, maybe not suppositories, but that’s just because no one thought of it till I just mentioned it.

Then there’s the new Google tool that predicts the path to purchase. That sounds pretty cool. Marketers love things that predict things. That would make life easier. But life isn’t easy. So marketing isn’t easy. Marketing is all about trying to decipher the mangled mess of living just long enough to shoehorn in a message that maybe, just maybe, will catch the right person at the right time. And that mangled mess is just getting messier.

Personally, the thing that attracted me to marketing was its messiness. I love organic, gritty problems with no clear-cut solutions. Scientists call these ill-defined problems. And that’s why marketing is hard. It’s an ill-defined problem. It defies programmatic solutions. You can’t write an algorithm that will spit out perfect marketing. You can attack little slivers of marketing that lend themselves to clearer solutions, which is why you have the current explosion of ad-tech tools. But the challenge is trying to bring all these solutions together into some type of cohesive package that actually helps you relate to a living, breathing human.

One of the things that has always amazed me is how blissfully ignorant most marketers are about concepts that I think should be fundamental to understanding customer behaviors: things like bounded rationality, cognitive biases, decision theory and sense-making. Mention any of these things in a conference room full of marketers and watch eyes glaze over as fingers nervously thumb through the conference program, looking for any session that has “Top Ten” or “Surefire” in its title.

Take Information Foraging Theory, for instance. Anytime I speak about a topic that touches on how humans find information (which is almost always), I ask my audience of marketers if they’ve ever heard of I.F.T. Generally, not one hand goes up. Sometimes I think Jakob Nielsen and I are the only two people in the world who recognize I.F.T. for what it is: “the most important concept to emerge from Human-Computer Interaction research since 1993” (Jakob’s words). If you take the time to understand this one concept, I promise it will fundamentally and forever change how you look at web design, search marketing, creative and ad placement. Web marketers should be building a shrine to Peter Pirolli and Stuart Card. Their names should be on the tips of every marketer’s tongue. But I venture to guess that most of you reading this column had never heard of them until today.

None of these fundamental concepts about human behavior are easy to grasp. Like all great ideas, they are simple to state but difficult to understand. They cover a lot of territory – much of it ill defined. I’ve spent most of my professional life trying to spread awareness of things like Information Foraging Theory. Can I always predict human behavior? Not by a long shot. But I hope that by taking the time to learn more about the classic theories of how we humans tick, I have also learned a little more about marketing. It’s not easy. It’s not perfect. It’s a lot like being human. But I’ve always believed that to be an effective marketer, you first need to understand humans.

How Activation Works in an Absolute Value Market

As I covered last week, if I mention a brand to you – like Nike, for instance – your brain immediately pulls back your own interpretation of the brand. What has happened, in a split second, is that the activation of that one node – let’s call it the Nike node – triggers the activation of several related nodes in your brain, which are quickly assembled into a representation of the brand Nike. This is called Spreading Activation.

This activation is all internal. It’s where most of the efforts of advertising have been focused over the past several decades. Advertising’s job has been to build a positive network of associations so when that prime happens, you have a positive feeling towards the brand. Advertising has been focused on winning territory in this mental landscape.

Up to now, we have been restricted to this internal landscape when making consumer decisions by the boundaries of our own rationality. Access to reliable and objective information about possible purchases was limited. It required more effort on our part than we were willing to expend. So, for the vast majority of purchases, these internal representations were enough for us. They acted as a proxy for information that lay beyond our grasp.

But the world has changed. For almost any purchase category you can think of, there exists reliable, objective information that is easy to access and filter. We are no longer restricted to internal brand activations (relative values based on our own past experiences and beliefs). Now, with a few quick searches, we can access objective information, often based on the experiences of others. In their book of the same name, Itamar Simonson and Emanuel Rosen call these sources “Absolute Value.” For more and more purchases, we turn to external sources because we can. The effort invested is more than compensated for by the value returned. In the process, the value of traditional branding is being eroded. This is truer for some product categories than others. The higher the risk or the level of interest, the more the prospect will engage in an external activation. But across all product categories, there has been a significant shift from the internal to the external.

What this means for advertising is that we have to shift our focus from internal spreading activations to external spreading activations. Now, when we retrieve an internal representation of a product or brand, it typically acts as a starting point, not the end point. That starting point is then modified or discarded completely, depending on the external information we access. The first activated node is our own initial concept of the product, but the subsequent nodes are spread throughout the digitized information landscape.

In an internal spreading activation, the nodes activated and the connections between those nodes are all conducted at a subconscious level. It’s beyond our control. But an external spreading activation is a different beast. It’s a deliberate information search conducted by the prospect. That means that the nodes accessed and the connections between those nodes become of critical importance. Advertisers have to understand what those external activation maps look like. They have to be intimately aware of the information nodes accessed and the connections used to get to those nodes. They also have to be familiar with the prospect’s information consumption preferences. At first glance, this seems to be an impossibly complex landscape to navigate. But in practice, we all tend to follow remarkably similar paths when establishing our external activation networks. Search is often the first connector we use. The nodes accessed and the information within those nodes follow predictable patterns for most product categories.

For the advertiser, it comes down to a question of where to most profitably invest your efforts. Traditional advertising was built on the foundation of controlling the internal activation. This was the psychology behind classic treatises such as Ries and Trout’s “Positioning: The Battle for Your Mind.” And, in most cases, that battle was won by whoever could assemble the best collection of smoke and mirrors. Advertising messaging had very little to do with facts and everything to do with persuasion.

But as Simonson and Rosen point out, the relative position of a brand in a prospect’s mind is becoming less and less relevant to the eventual purchase decision. Many purchases are now determined by what happens in the external activation. Factual, reliable information and easy access to that information become critical. Smoke and mirrors are relegated to advertising “noise” in this scenario. The marketer with a deep understanding of how the prospect searches for and determines what the “truth” is about a potential product will be the one who wins. And traditional marketing is becoming less and less important to that prospect.

 

The Spreading Activation Model of Marketing

“Beatle.”

I have just primed you. Before you even finished reading the word above, you had things popping into your mind. Perhaps it was a mental image of an individual Beatle – either John, Paul, George or Ringo. Perhaps it was a snippet of song. Perhaps it was grainy black-and-white footage of the appearance on The Ed Sullivan Show. But as the concept “Beatle” entered your working memory, your brain was hard at work retrieving what you believed were relevant concepts from your long-term memory. (By the way, if your reaction was “What’s a Beatle?” – substitute “Imagine Dragons.”)

That’s a working example of spreading activation. The activation of your working memory pulls associated concepts from your long-term memory, assembling a mental construct that serves as your internal definition of whatever that first label was.
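The mechanics of that retrieval can be sketched as a toy associative network: a primed node starts with full activation, and energy fans out to connected concepts, weakening with each hop. The node names, weights and decay value below are all invented for illustration:

```python
def spread_activation(graph, seed, decay=0.5, depth=2):
    """Toy spreading-activation pass: activation starts at the
    primed node and propagates along weighted associations,
    attenuated by `decay` at every hop."""
    activation = {seed: 1.0}
    frontier = {seed: 1.0}
    for _ in range(depth):
        next_frontier = {}
        for node, energy in frontier.items():
            for neighbor, weight in graph.get(node, []):
                boost = energy * weight * decay
                if boost > activation.get(neighbor, 0.0):
                    activation[neighbor] = boost
                    next_frontier[neighbor] = boost
        frontier = next_frontier
    return activation

# A hypothetical slice of one person's associative memory.
associations = {
    "Beatle": [("John Lennon", 0.9), ("Ed Sullivan Show", 0.6)],
    "John Lennon": [("Imagine", 0.8)],
}

# Strongly associated concepts surface first; distant ones arrive faintly.
print(sorted(spread_activation(associations, "Beatle").items(),
             key=lambda kv: -kv[1]))
```

The point of the sketch is the shape of the result, not the numbers: directly linked concepts come back strongly, while second-hop associations (a song, a memory of a broadcast) arrive with much weaker activation.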

Now, an important second step may or may not happen. First, you have to decide how long you’re going to let the “Beatle” prime occupy your working memory. If it’s of fleeting interest, you’ve probably already wiped the slate clear, ready for the next thing that catches your interest. But if that prime is strong enough to establish a firm grip on your attention, then you have a choice to make. Is your internal representation complete, or do you require more information? If you require more information then you have to turn to external sources for that information.

Believe it or not, this column is not intended as a 101 primer in Cognitive Psych. But the mental gymnastics I describe are important when we think about marketing, as we go through exactly the same process when we think about potential purchases. If we can understand that process better, we gain some valuable hints about how to more effectively market in an exceedingly fluid technological environment.

Much of advertising is built on the first half of the process – building associative brand concepts and triggering the prime that retrieves those concepts into working memory. Most of what isn’t working about advertising lies on this side of the cognitive map. We’ve been overly focused on the internal activation, at the expense of the external. But thanks to an explosion of available (and objective) information, we’re less reliant on our internal knowledge when making purchase decisions. As Itamar Simonson and Emanuel Rosen explain in their book “Absolute Value”: “A person’s decision to buy is affected by a mix of three related sources: the individual’s prior preferences, beliefs, and experiences (P); other people and information services (O); and marketers (M).”

Simonson and Rosen say that with near perfect information available for the consumer, we now rely more on (O) and less on (P) and (M). Let’s leave (M) and (O) aside for the moment and focus on the (P) in this equation. (P) represents our internal spreading activation. After we’re primed, we retrieve a representation of the product or service we’re thinking of. At this point, we make an internal calculation. We balance how confident we are that our internal representation is adequate to make a purchase against how much effort we have to expend to gather further information. This calculation is largely made subconsciously. It follows Herbert Simon’s principle of Bounded Rationality. It also depends on how much risk is involved in the purchase we’re contemplating. If all the factors dictate that we’re reasonably confident in our internal representation and the risk we’re assuming, we’ll pull out our wallets and buy. If, however, we aren’t confident, we’ll start seeking more information. And that’s where (O) and (M) come in.
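The subconscious calculation described above – weighing confidence in our internal representation (P) against the risk of the purchase and the effort of gathering external information (O) – can be sketched as a simple decision rule. The variables and thresholds here are my own invented illustration, not a formula from Simonson and Rosen:

```python
def decide(prior_confidence, purchase_risk, search_effort):
    """Hedged sketch of the buy-vs-search calculation: rely on
    internal activation (P) alone, or invest effort in external
    sources (O)? All inputs are on an invented 0-to-1 scale."""
    # Expected regret of buying on priors alone rises with the
    # risk of the purchase and falls with our confidence in our
    # internal representation of the product.
    regret_if_wrong = purchase_risk * (1 - prior_confidence)
    if regret_if_wrong <= search_effort:
        return "buy now"        # internal activation is enough
    return "seek more info"     # trigger an external activation

# Low-risk, familiar purchase: priors win.
print(decide(prior_confidence=0.9, purchase_risk=0.2, search_effort=0.1))
# High-risk, unfamiliar purchase: external foraging wins.
print(decide(prior_confidence=0.3, purchase_risk=0.9, search_effort=0.1))
```

The rule captures the column’s claim in miniature: as search effort falls (easier access to absolute-value information), the threshold tips toward “seek more info” for more and more purchases.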

Simonson and Rosen lay out a purchase behavior continuum, from (O) Dependent to (O) Independent. It’s at the (O) Dependent end, where internal confidence in retrieved beliefs and experience is low, that buying behaviors are changing dramatically. And it’s there where conventional approaches to advertising are falling far short of the mark. They are still stuck in the mythical times of Mad Men, where marketers relied on a “Prime, Retrieve (Internal beliefs), Purchase” path. Today, it’s much more likely that the Prime and Retrieve stages will be followed by an external spreading activation. We’ll pick up that thread in next week’s Online Spin.

 

Consuming in Context

It was interesting watching my family watch the Oscars Sunday night. Given that I’m the father of two millennials, who have paired with their own respective millennials, you can bet that it was a multi-screen affair. But to be fair, they weren’t the only ones splitting their attention between the TV and various mobile devices. I was also screen hopping.

As Dave Morgan pointed out last week, media usage no longer equates to media opportunity. That’s because the nature of our engagement has changed significantly in the last decade. Unfortunately, our ad models have been unable to keep up. What is interesting is the way our consumption has evolved. Not surprisingly, technology is allowing our entertainment consumption to evolve back to its roots. We are watching our various content streams in much the same way that we interact with our world. We are consuming in context.

The old way of watching TV was very linear in nature. It was also divorced from context. We suspended engagement with our worlds so that we could focus on the flickering screen in front of us. This, of course, allowed advertisers to buy our attention in little 30-second blocks. It was the classic bait and switch technique. Get our attention with something we care about, and then slip in something the advertiser cares about.

The reason we were willing to suspend engagement with the world was that there was nothing in that world that was relevant to our current task at hand. If we were watching Three’s Company, or the Moon Landing, or a streaker running behind David Niven at the 1974 Oscar ceremony, there was nothing in our everyday world that related to any of those TV events. Nothing competed for the spotlight of our attention. We had no choice but to keep watching the TV to see what happened next.

But imagine if a nude man suddenly appeared behind Matthew McConaughey at the 2015 Oscars. We would immediately want to know more about the context of what just happened. Who was it? Why did it happen? What’s the backstory? The difference is now, we have channels at our disposal to try to find answers to those questions. Our world now includes an extended digital nervous system that allows us to gain context for the things that happen on our TV screens. And because TV no longer has exclusive control of our attention, we switch to the channel that is the best bet to find the answers we seek.

That’s how humans operate. Our lives are a constant quest to fill gaps in our knowledge and, by doing so, make sense of the world around us. When we become aware of one of these gaps, we immediately scan our environment to find cues to where we might find answers. Then, our senses are focused on the most promising cues. We forage for information to satiate our curiosity. A single-minded focus on one particular cue, especially one over which we have no control, is not something we evolved to do. The way we watched TV in the 60s and 70s was not natural. It was something we did because we had no option.

Our current mode of splitting attention across several screens is much closer to how humans naturally operate. We continually scan our environment, which, in this case, includes various electronic interfaces to the extended virtual world, for things of interest to us. When we find one, our natural need to make sense sends us on a quest for context. As we consume, we look for this context. The diligence of our quest for that context will depend on the degree of our engagement with the task at hand. If it is slight, we’ll soon move on to the next thing. If it’s deep, we’ll dig further.

On Sunday night, the Hotchkiss family quest for context continually skipped around, looking for what other movies J.K. Simmons had acted in, watching the trailer for Whiplash, reliving the infamous Adele Dazeem moment from last year and seeing just how old Benedict Cumberbatch is (I have two daughters who are hopelessly in love, much to the chagrin of their boyfriends). As much as the advertisers on the 87th Oscars might wish otherwise, all of this was perfectly natural. Technology has finally evolved to give our brain choices in our consumption.


Why More Connectivity is Not Just More – Why More is Different

Eric Schmidt is predicting from Davos that the Internet will disappear. I agree. I’ve always said that Search will go under the hood, changing from a destination to a utility. Not that Mr. Schmidt or the Davos crew needs my validation. My invitation seems to have got lost in the mail.

Laurie Sullivan’s recent post goes into some of the specifics of how search will become an implicit rather than an explicit utility. Underlying this is a pretty big implication that we should be aware of – the very nature of connectivity will change. Right now, the Internet is a tool, or resource. We access it through conscious effort. It’s a “task at hand.” Our attention is focused on the Internet when we engage with it. The world described by Eric Schmidt and the rest of the panel is much, much different. In this world, the “Internet of Things” creates a connected environment that we exist in. And this has some pretty important considerations for us.

First of all, when something becomes an environment, it surrounds us. It becomes our world as we interpret it through our assorted sensory inputs. These inputs have evolved to interpret a physical world – an environment of things. We will need help interpreting a digital world – an environment of data. Our reality, or what we perceive our reality to be, will change significantly as we introduce technologically mediated inputs into it.

Our brains were built to parse information from a physical world. We have cognitive mechanisms that evolved to do things like keep us away from physical harm. Our brains were never intended to crunch endless reams of digital data. So, we will have to rely on technology to do that for us. Right now we have an uneasy alliance between our instincts and the capabilities of machines. We are highly suspicious of technology. There is every rational reason in the world to believe that a self-driving Google car will be far safer than a two-ton chunk of accelerating metal under the control of a fundamentally flawed human, but which of us is willing to give up the wheel? The fact is, however, that if we want to function in the world Schmidt hints at, we’re going to have to learn not only to trust machines, but also to rely totally on them.

The other implication is one of bandwidth. Our brains have bottlenecks. Right now, our brain, together with our senses, subconsciously monitors our environment and, if the situation warrants, wakes up our conscious mind for some focused and deliberate processing. The busier our environment gets, the bigger this challenge becomes. A digitally connected environment will soon exceed our brain’s ability to comprehend and process information. We will have to determine some pretty stringent filtering thresholds. And we will rely on technology to do the filtering. As I said, our physical senses were not built to filter a digital world.

An odd relationship with technology will have to develop. Even if we lower our guard and let machines do much of our “thinking” (in terms of processing environmental inputs for us), we still have to learn how to give machines guidelines so they know what our intentions are. This raises the question, “How smart do we want machines to become?” Do we want machines that can learn about us over time, without explicit guidance from us? Are we ready for technology that guesses what we want?

One of the comments on Laurie’s post was from Jay Fredrickson: “Sign me up for this world, please. When will this happen and be fully rolled out? Ten years? 20 years?” Perhaps we should be careful what we wish for. While this world may seem to be a step forward, we will actually be stepping over a threshold into a significantly different reality. As we step over that threshold, we will change what it means to be human. And there will be no stepping back.

Publishers as Matchmakers

I’m a content creator. And, in this particular case, I’ve chosen MediaPost as the distribution point for that content. If we’re exploring the role of publishing in the future, the important question to ask here is: why? After all, I could publish this post in a couple of clicks to my blog. And, thanks to my blogging software, it will automatically notify my followers that there’s a new post. So, what value does MediaPost add to that?

Again, we come back to signal and noise. I generate content primarily to reach an audience that is both wide and interested. As a digital marketing consultant, I have a financial incentive to grow my own personal brand, but to be honest, my reward is probably more tied up in social capital and my own ego. I publish because I want to be heard. And I want to be heard by people who find my content valuable. I have almost 2,000 followers between my blog, Twitter feed and other social networks, but those people already know me. Hopefully, MediaPost will introduce me to new people who don’t know me. I want MediaPost to be my matchmaker.

Now, the second question to ask is, why are you reading this post on MediaPost? While I don’t presume to know your personal intentions, I can take a pretty good shot at generalizing – you are a MediaPost reader because you find the collection of content they publish interesting. It’s certainly not the only place online you can find content about marketing and media. And, if they chose to, any of the MediaPost writers could easily publish their content on their own blogs. You have chosen MediaPost because it acts as both a convenient access point and an effective filter.

This connection between content and audience is where publishers like MediaPost add value. Because you trust MediaPost to deliver content you find interesting, it passes the first level of your filtering threshold. I, as a content creator, get the benefit of MediaPost’s halo effect. The odds are better that I can connect with new readers under the MediaPost banner than through a random, unfiltered tweet or alert in your newsfeed. And here we have a potential clue to the future of revenue generation for publishers. If publishing is potentially a matchmaking service, perhaps we need to look at other matchmakers to see how they generate revenue.

In the traditional publishing world, it would be blasphemous to suggest that content creators should be charged for access to an audience. After all, we used to get paid to generate content by the publishers. But that was then and this is now. Understand, I’m not talking about native advertising or advertorials here. In fact, it would be the publisher’s responsibility to filter out unacceptably commercial editorials. I’m talking about creating an audience market for true content generators. In this day of personal branding, audiences have value. The better the audience, the higher the value. It should be worth something to me to reach new audiences. Publishers, in turn, act as the reader’s filter, ensuring the content they provide matches the user’s interest. Again, if the match is good enough, that has value for the reader.

Of course, the problem here is quantifying value on both sides of the relationship. I would imagine that both the content creators and content consumers reading my suggestions are probably saying, “There is no way I would pay for that!” And, in the current state of online publishing, I wouldn’t either – as a creator or as a consumer. The value isn’t there because the match isn’t strong enough. But if publishers focused on building the best possible audience and on presenting the best possible content, it might be a different story. More importantly, it would be a revenue model that would realign publishers with their audience, rather than pit them against it.

From the reader’s perspective, if a publisher were acting as your own private information filter, and not as a platform for poorly targeted advertising, you would probably be more willing to indicate your preferences and share information. If the publisher were discriminating enough, you might even be willing to let them introduce very carefully targeted offers from advertisers, filtered down to only the offers you’re highly likely to be interested in. This provides three potential revenue sources for the publisher: content creators looking for an audience, readers looking for an effective filtering service, and advertisers looking for highly targeted introductions to prospects. In the last case, the revenue should be split with the prospect, with the publisher taking a percentage for handling the introduction and the rest going to the prospect in return for agreeing to accept the advertiser’s introduction.
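To make the three-way split concrete, here’s a quick sketch with purely hypothetical numbers – none of these figures or rates come from the column; they’re just there to show how the streams would combine:

```python
# Illustrative sketch of the three revenue streams a matchmaking
# publisher might collect. All names, figures and rates are hypothetical.

def publisher_revenue(creator_fees, reader_fees, ad_intro_fee, publisher_share=0.25):
    """Total publisher revenue. The advertiser's introduction fee is
    split between the publisher and the prospect who accepts it."""
    publisher_cut = ad_intro_fee * publisher_share
    prospect_cut = ad_intro_fee - publisher_cut
    total = creator_fees + reader_fees + publisher_cut
    return total, prospect_cut

total, prospect_cut = publisher_revenue(
    creator_fees=500.0,   # content creators paying for audience access
    reader_fees=200.0,    # readers paying for the filtering service
    ad_intro_fee=100.0,   # advertiser pays for one accepted introduction
)
print(total)         # 500 + 200 + 25 = 725.0 to the publisher
print(prospect_cut)  # 75.0 goes to the prospect for accepting the introduction
```

The point of the sketch is simply that the publisher gets paid on all three sides of the match, while the prospect keeps most of the advertiser’s fee.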

While radically different from today’s model, what I’ve proposed is not a new idea. It was first introduced in 1999, in the book Net Worth by John Hagel and Marc Singer. Granted, my take is less involved than theirs, but the basic idea is the same – a shift from a relentless battering of prospects with increasingly overt advertising messages to a careful filtering and matching of interests and appropriate content. And, when you think about it, matching intent and content is what Google has been doing for two decades.

Disruptive innovations tend to change the ways that value is determined. They take previous areas of scarcity and change them to ones of abundance. They upend markets and alter existing balances between forces. When the markets shift to this extent, trying to stick to the old paradigm guarantees failure. The challenge is that there is no new paradigm to follow. Experimentation is the only option. And to experiment you have to be willing to explore the boundaries. The answer won’t be found in the old, familiar territory.

Evolved Search Behaviors: Takeaways for Marketers

In the last two columns, I first looked at the origins of the original Golden Triangle, and then at how search behaviors have evolved over the last nine years, according to a new eye-tracking study from Mediative. In today’s column, I’ll try to pick out a few “so whats” for search marketers.

It’s Not About Location, It’s About Intent

In 2005, search marketing was all about location. It was about grabbing a piece of the Golden Triangle, and the higher, the better. The delta in scanning and clicks between the first organic result and the second was dramatic – a factor of 2 to 1! Similar differences were seen in the top paid results. It’s as if, given the number of options available on the page (usually between 12 and 18, depending on the number of ads showing), searchers used position as a quick and dirty way to filter results, reasoning that the higher the result, the better it would match their intent.

In 2014, however, it’s a very different story. Because the first scan is now to find the most appropriate chunk, being high on the page matters significantly less. Also, once the second scanning step has begun within a results chunk, there seems to be more vertical scanning and less lateral scanning. Mediative found that in some instances, it was the third or fourth listing in a chunk that attracted the most attention, depending on content, format and user intent. For example, in the heat map shown below, the third organic result actually got as many clicks as the first, capturing 26% of all the clicks on the page and 15% of the time spent on the page. The reason could be that it was the only listing with a Google ratings rich snippet, thanks to the proper use of structured data markup. In this case, the information scent that promised user reviews was a strong match with user intent, but you would only know this if you knew what that intent was.

[Image: heat map of Google results for a Ford Fiesta search]
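For anyone wondering what “proper use of structured data markup” looks like in practice, here’s a minimal sketch of the kind of schema.org markup that can earn a ratings rich snippet, built as JSON-LD via Python for illustration. The product name and the rating figures are made-up examples, not values from the study:

```python
import json

# A minimal schema.org Product + AggregateRating payload of the kind
# Google reads for ratings rich snippets. All values are made-up examples.
snippet = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Product",
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.4",
        "reviewCount": "89",
    },
}

# On a real page, this JSON would be embedded inside a
# <script type="application/ld+json"> tag in the HTML.
print(json.dumps(snippet, indent=2))
```

It’s a small amount of markup, but as the heat map suggests, the star ratings it can produce in the listing are a disproportionately strong source of information scent.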

This change in user scanning strategies makes it more important than ever to understand the most common intents that would make users turn to a search engine. What decision steps will they go through, and at which of those steps might they turn to a search engine? Would it be to discover a solution to an identified need, to find out more about a known solution, to build a consideration set for direct comparisons, to look for one specific piece of information (i.e., a price), or to navigate to one particular destination, perhaps to order online? If you know why your prospects might use search, you’ll have a much better idea of what you need to do with your content to ensure you’re in the right place at the right time with the right content. Nothing shows this more clearly than the following comparison of heat maps. The one on the left was produced when searchers were given a scenario that required them to gather information. The one on the right resulted from a scenario where searchers had to find a site to navigate to. You can see the dramatic difference in scanning behaviors.

[Image: side-by-side heat maps comparing informational and navigational search scenarios]

If search used to be about location, location, location, it’s now about intent, intent, intent.

Organic Optimization Matters More than Ever!

Search marketers have been saying that organic optimization is dying for at least two decades now, ever since I got into this industry. Guess what? Not only is organic optimization not dead, it’s now more important than ever! In Enquiro’s original 2005 study, the top two sponsored ads captured 14.1% of all clicks. In Mediative’s 2014 follow-up, that number really didn’t change much, edging up to 14.5%. What did change was the relevance of the rest of the listings on the page. In 2005, all the organic results combined captured 56.7% of the clicks. That left about 29% of users either going to the second page of results, launching a new search or clicking on one of the side sponsored ads (these accounted for only a small fraction of the clicks). In 2014, the organic results, including all the different category “chunks,” captured 74.6% of all clicks. That leaves only 11% either clicking on the side ads (again, a tiny percentage), going to the second page or launching a new search. That means Google has upped its first-page success rate to an impressive 90%.

First of all, that means you really need to break onto the first page of results to gain any visibility at all. If you can’t do it organically, make sure you pay for presence. But secondly, it means that of all the clicks on the page, some type of organic result is capturing 84% of them. The trick is to know which type of organic result will capture the click – and to do that, you need to know the user’s intent (see above). But you also need to optimize across your entire content portfolio. On my own blog, two of the biggest traffic referrers happen to be image searches.
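If you’re curious how those percentages fit together, here’s a quick back-of-the-envelope check (assuming, as the numbers imply, that the 74.6% organic figure is a share of all first clicks):

```python
# Reconciling the 2014 click-share figures from the Mediative study.
top_sponsored = 14.5   # % of all clicks going to the top sponsored ads
organic = 74.6         # % of all clicks going to organic "chunks"

first_page = top_sponsored + organic
print(round(first_page))        # 89 - Google's roughly 90% first-page success rate
print(round(100 - first_page))  # 11 - the share leaving for page two or a new search

# Of the clicks that stay on the first page, the organic share is:
print(round(100 * organic / first_page))  # 84 - the "84% of them" figure above
```

In other words, the 90%, 11% and 84% figures are all consistent views of the same two underlying numbers.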

Left Gets to Lead

The left side of the results page has always been important, but the evolution of scanning behaviors now makes it vital. The heat map below shows just how important it is to seed the left-hand side of your results with information scent.

[Image: heat map of results for “home decor store toronto,” with attention concentrated on the left-hand side]

Last week, I talked about how the categorization of results has caused us to adopt a two-stage scanning strategy: the first stage to determine which “chunks” of result categories are the best match to intent, and the second to evaluate the listings in the most relevant chunks. The vertical scan down the left-hand side of the page is where we decide which “chunks” of results are the most promising. And, in the second scan, because of the improved relevancy, we often make the decision to click without a lot of horizontal scanning to qualify our choice. Remember, we’re only spending a little over a second scanning a result before we click. This is just enough to pick up the barest whiffs of information scent, and almost all of that scent comes from the left side of the listing. Look at the three choices above that captured the majority of scanning and clicks. The search was for “home decor store toronto.” The first popular result was a local result for the well-known brand Crate and Barrel. This reinforces how important brands can be if they show up on the left side of the result set. The second popular result was a website listing for another well-known brand – Pottery Barn. The third was a link to Yelp – a directory site that offered a choice of options. In all cases, the scent found at the far left of the result was enough to capture a click. There was almost no lateral scanning to the right. When crafting titles, snippets and metadata, make sure you stack information scent to the left.

In the end, there are no magic bullets from this latest glimpse into search behaviors. It still comes down to the five foundational planks that have always underpinned good search marketing:

  1. Understand your user’s intent
  2. Provide a rich portfolio of content and functionality aligned with those intents
  3. Ensure your content appears at or near the top of search results, either through organic optimization or well run search campaigns
  4. Provide relevant information scent to capture clicks
  5. Make sure you deliver on what you promise post-click

Sure, the game is a little more complex than it was 9 years ago, but the rules haven’t changed.

Google’s Golden Triangle – Nine Years Later

Last week, I reviewed why the Golden Triangle existed in the first place. This week, we’ll look at how the scanning patterns of Google users have evolved over the past nine years.

The reason I wanted to talk about Information Foraging last week is that it really sets the stage for understanding how the patterns have changed with the present Google layout. In particular, one thing was true for Google in 2005 that is no longer true in 2014 – back then, all results sets looked pretty much the same.

Consistency and Conditioning

If humans do the same thing over and over again and usually achieve the same outcome, we stop thinking about what we’re doing and we simply do it by habit. It’s called conditioning. But habitual conditioning requires consistency.

In 2005, the Google results page was a remarkably consistent environment. There were always 10 blue organic links, and usually up to three sponsored results at the top of the page. There might also have been a few sponsored results along the right side of the page. And Google would put what it determined to be the most relevant results, both sponsored and organic, at the top of the page. This meant that for any given search, no matter the user intent, the top four results should presumably include the most relevant one or two organic results and a few hopefully relevant sponsored options. If Google did its job well, there should be no reason to go beyond these top four results, at least for a first click. And our original study showed that Google generally did do a pretty good job – over 80% of first clicks came from the top four results.

In 2014, however, we have a much different story. The 2005 Google was a one-size-fits-all solution. All results were links to a website. Now, not only do we have a variety of results, but even the results page layout varies from search to search. Google has become better at anticipating user intent and dynamically changes the layout on each search to be a better match for intent.

[Image: heat map of a 2014 Google results page, showing the two-stage scanning pattern]

What this means, however, is that we need to think a little more whenever we interact with a search page. Because the Google results page is no longer the same for every single search we do, we have exchanged consistency for relevancy. This means conditioning isn’t as important a factor as it was in 2005. Now, we must adopt a two-stage foraging strategy, as shown in the heat map above. Our first foraging step is to determine what categories – or “chunks” of results – Google has decided to show on this particular results page. This is done with a vertical scan down the left side of the results set. In this scan, we’re looking for cues on what each chunk offers – typically category headings or other quickly scanned labels – to determine which chunks are most promising in terms of information scent. Then, in the second step, we go back to the most relevant chunks and start scanning in a more deliberate fashion. Here, scanning behaviors revert to the “F”-shaped scan we saw in 2005, creating a series of smaller “Golden Triangles.”

What is interesting is that although Google’s “chunking” of the results page forces us to scan in two separate steps, it’s actually more efficient for us. The time spent scanning each result is half of what it was in 2005: 1.2 seconds vs. 2.5 seconds. Once we find the right “chunk,” the results it contains tend to be more relevant, increasing our confidence in choosing them. You’ll also see that the “mini” Golden Triangles involve less lateral scanning than the original. We’re picking up enough scent on the left side of each result to push our “click confidence” over the required threshold.
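Just for illustration, the two-stage strategy can be caricatured as a toy model – the scent scores and thresholds below are invented stand-ins for the study’s measurements, not real values:

```python
# Toy model of the two-stage foraging strategy: first keep the "chunks"
# whose category-level scent looks promising, then scan listings within
# those chunks until one pushes click confidence over a threshold.
# All scores and thresholds here are invented for illustration.

def two_stage_scan(chunks, chunk_threshold=0.5, click_threshold=0.7):
    # Stage 1: vertical scan down the left side - keep promising chunks.
    promising = [c for c in chunks if c["scent"] >= chunk_threshold]
    # Stage 2: deliberate scan of listings inside each promising chunk.
    for chunk in promising:
        for listing in chunk["listings"]:
            if listing["scent"] >= click_threshold:
                return listing["title"]  # first listing to earn the click
    return None  # no click: off to page two or a new search

results_page = [
    {"scent": 0.3, "listings": [{"title": "News result", "scent": 0.9}]},
    {"scent": 0.8, "listings": [
        {"title": "Organic #1", "scent": 0.6},
        {"title": "Organic #3 (rich snippet)", "scent": 0.8},
    ]},
]
print(two_stage_scan(results_page))  # Organic #3 (rich snippet)
```

Note how the strong listing in the first chunk never gets scanned at all: if a chunk fails the first vertical pass, nothing inside it gets a second look – which is exactly why the left-side category cues matter so much.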

A Richer Visual Environment

Google also offers a much more visually appealing results page than it did nine years ago. Then, the entire results set was text-based. There were no images. Now, depending on the search, the page can include several images, as the example below (a search for “New Orleans art galleries”) shows.

[Image: heat map of results for “New Orleans art galleries,” showing attention drawn to images]

The presence of images has a dramatic impact on our foraging strategies. First of all, images can be parsed much more quickly than text. We can determine the content of an image in a fraction of a second, whereas text requires a much slower and more deliberate type of mental processing. This means our eyes are naturally drawn to images. You’ll notice that the heat map above has a light green haze over all the images shown. This is typical of the quick scan we do immediately upon entering the page to determine what the images are about. Heat in an eye-tracking heat map is produced by the duration of foveal focus, and this can be misleading when we’re dealing with images. The fovea centralis is, predictably, in the center of our eye, where our focus is sharpest. We use it extensively when reading, but it’s not as important when we’re glancing at an image. We can make a coarse judgement about what a picture shows without focusing on it. We don’t need our fovea to know it’s a picture of a building, or a person, or a map. It’s only when we need to determine the details of a picture that we’ll recruit its fine-grained resolution.

Our ability to quickly parse images makes it likely that they will play an important role in our initial orientation scan of the results page. We’ll quickly scan the available images looking for information scent. If an image does offer scent, it will also act as a natural entry point for further scanning. Typically, when we see a relevant image, we look in its immediate vicinity for more reinforcing scent. We often see scanning hot spots on titles or other text adjacent to relevant images.

We Cover More Territory – But We’re Also More Efficient

So, to sum up, it appears that with our new two step foraging strategy, we’re covering more of the page, at least on our first scan, but Google is offering richer information scent, allowing us to zero in on the most promising “chunks” of information on the page. Once we find them, we are quicker to click on a promising result.

Next week, I’ll look at the implications of this new behavior on organic optimization strategies.