The Messy Part of Marketing

Marketing is hard. It’s hard because marketing reflects real life. And real life is hard. But here’s the thing – it’s just going to get harder. It’s messy and squishy and filled with nasty little organic things like emotions and human beings.

For the past several weeks, I’ve been filing things away as possible topics for this column. For instance, I’ve got a pretty big file of contradictory research on what works in B2B marketing. Videos work. They don’t work. Referrals are the bomb. No, it’s content. Okay, maybe it’s both. Hmmm... pretty sure it’s not Facebook though.

The integration of marketing technology was another promising avenue. Companies are struggling with data. They’re drowning in data. They have no idea what to do with all the data that’s pouring in from smart watches and smart phones and smart bracelets and smart bangles and smart suppositories and – okay, maybe not suppositories, but that’s just because no one thought of it till I just mentioned it.

Then there’s the new Google tool that predicts the path to purchase. That sounds pretty cool. Marketers love things that predict things. That would make life easier. But life isn’t easy. So marketing isn’t easy. Marketing is all about trying to decipher the mangled mess of living just long enough to shoehorn in a message that maybe, just maybe, will catch the right person at the right time. And that mangled mess is just getting messier.

Personally, the thing that attracted me to marketing was its messiness. I love organic, gritty problems with no clear-cut solutions. Scientists call these ill-defined problems. And that’s why marketing is hard. It’s an ill-defined problem. It defies programmatic solutions. You can’t write an algorithm that will spit out perfect marketing. You can attack little slivers of marketing that lend themselves to clearer solutions, which is why you have the current explosion of ad-tech tools. But the challenge is trying to bring all these solutions together into some type of cohesive package that actually helps you relate to a living, breathing human.

One of the things that has always amazed me is how blissfully ignorant most marketers are about concepts that I think should be fundamental to understanding customer behaviors: things like bounded rationality, cognitive biases, decision theory and sense-making. Mention any of these things in a conference room full of marketers and watch eyes glaze over as fingers nervously thumb through the conference program, looking for any session that has “Top Ten” or “Surefire” in its title.

Take Information Foraging Theory, for instance. Anytime I speak about a topic that touches on how humans find information (which is almost always), I ask my audience of marketers if they’ve ever heard of I.F.T. Generally, not one hand goes up. Sometimes I think Jakob Nielsen and I are the only two people in the world who recognize I.F.T. for what it is: “the most important concept to emerge from Human-Computer Interaction research since 1993.” (Jakob’s words). If you take the time to understand this one concept, I promise it will fundamentally and forever change how you look at web design, search marketing, creative and ad placement. Web marketers should be building a shrine to Peter Pirolli and Stuart Card. Their names should be on the tips of every marketer’s tongue. But I venture to guess that most of you reading this column had never heard of them until today.

None of these fundamental concepts about human behavior are easy to grasp. Like all great ideas, they are simple to state but difficult to understand. They cover a lot of territory – much of it ill-defined. I’ve spent most of my professional life trying to spread awareness of things like Information Foraging Theory. Can I always predict human behavior? Not by a long shot. But I hope that by taking the time to learn more about the classic theories of how we humans tick, I have also learned a little more about marketing. It’s not easy. It’s not perfect. It’s a lot like being human. But I’ve always believed that to be an effective marketer, you first need to understand humans.

Mourning Becomes Electric

Last Friday was a sad day. A very dear and lifelong friend of mine, my Uncle Al, passed away. And so I did what I’ve done before on these occasions. I expressed my feelings by writing about it. The post went live on my blog around 10:30 in the morning. By mid-afternoon, it had been shared and posted through Facebook, Twitter and many other online channels. Many were kind enough to send comments. The family, in the midst of their grief, forwarded my post to their family and friends. Soon, there was an extended network of mourning that sought to heal each other, all through channels that didn’t exist just a few years ago. Mourning had moved online.

As you probably know, I’m fascinated by how we express our innate human needs through digital technologies. And death, together with birth, is the most universal of human experiences. It was inevitable that we would use online channels to grieve. So I, as I always do, asked the question – why?

First of all – why do we mourn? Well, we mourn because we are social animals. We are probably the most social of animals. So we grieve in equal measure. We miss the departed terribly. It is natural to try to fill the hole a death tears inside of us by reaching out to others who may share the same grief. James R. Averill believed we communally mourn because it cements the social bonds that make it more likely that we will survive as a species. When it comes to dealing with death, misery loves company.

Secondly, why do we grieve online? Well, here, I think it has something to do with Granovetter’s weak ties. Death is one of those life events where we reach beyond the strong ties that define our day-to-day social existence. Certainly we seek comfort from those closest to us, but a death also calls a virtual community into existence – one defined and united by its grieving for the one who has passed away. Our digital networks allow us to eliminate the six degrees of separation in one fell swoop. We can share our grief almost instantaneously and simultaneously with family, friends, acquaintances and even people we have never met.

There are two other aspects of grief that I believe lend themselves well to online channels: the need to chronicle and the comfort of emotional distance.

Part of the healing process is sharing memories of the departed loved one. And, for those like myself, just writing about our feelings helps overcome the pain. Online provides a perfect platform for chronicling. We can share our own thoughts and, in the expressing of them, start the healing process.

The comfort of physical distance seems a contradictory idea, but almost everyone I know who has gone through a deep loss has one common dread – dealing with a never-ending stream of condolences over the coming weeks and months, triggered by each new physical encounter.

When you’ve been in the middle of the storm, you are typically a few days ahead of everyone else in dealing with your grief. Your mind has been occupied with nothing else as you have sat vigil by the hospital bed. While the condolences are given with the best of intentions, you feel compelled to give a response. The problem is, each new expression of grief forces you to replay your loop of very painful memories. The amplitude of this pain increases when it’s a face-to-face encounter. Condolences that reach you through a more detached channel, such as online, can be dealt with at your discretion. You can wait until you marshal the emotional reserves necessary to respond. You can also respond to several people at a time. How many times have you heard this from a grieving loved one: “I just wish I could record my message and play it whenever I meet someone who wants to tell me how sorry they are for my loss”? It may seem callous, but no one wants to relive that pain over and over again. And let’s face it – almost no one knows the right things to say at a moment like this.

By the end of last Friday, my online social connections had helped me ease a very deep pain. I hope I was able to return the favor for others who were dealing with their own grief. There are many things about technology that I treat with suspicion, but in this case, turning online seemed like the most natural thing in the world.

How Activation Works in an Absolute Value Market

As I covered last week, if I mention a brand to you – like Nike, for instance – your brain immediately pulls up your own interpretation of the brand. What has happened, in a split second, is that the activation of that one node – let’s call it the Nike node – triggers the activation of several related nodes in your brain, which are quickly assembled into a representation of the brand Nike. This is called Spreading Activation.
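For readers who want to see the mechanics, here is a minimal sketch of how spreading activation is typically modelled computationally: activation starts at the primed node and flows outward along weighted associations, weakening as it spreads. The network, the link strengths and the decay factor are all invented for illustration; they are not drawn from any study cited here.

```python
# A minimal, illustrative spreading-activation model. The association network,
# link strengths and decay factor below are assumptions made for this sketch.

# Each node points to associated nodes with a link strength between 0 and 1.
associations = {
    "nike":    {"running": 0.9, "swoosh": 0.8, "just_do_it": 0.7, "michael_jordan": 0.6},
    "running": {"fitness": 0.8, "marathon": 0.5},
    "swoosh":  {"logo": 0.9},
}

def spread(prime, depth=2, decay=0.5):
    """Return activation levels after spreading out from a primed node."""
    activation = {prime: 1.0}
    frontier = {prime: 1.0}
    for _ in range(depth):
        next_frontier = {}
        for node, level in frontier.items():
            for neighbour, strength in associations.get(node, {}).items():
                passed = level * strength * decay  # activation weakens as it spreads
                if passed > activation.get(neighbour, 0.0):
                    activation[neighbour] = passed
                    next_frontier[neighbour] = passed
        frontier = next_frontier
    return activation

print(spread("nike"))
# e.g. {'nike': 1.0, 'running': 0.45, 'swoosh': 0.4, 'just_do_it': 0.35, ...}
```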

This activation is all internal. It’s where most of the efforts of advertising have been focused over the past several decades. Advertising’s job has been to build a positive network of associations so when that prime happens, you have a positive feeling towards the brand. Advertising has been focused on winning territory in this mental landscape.

Up to now, the boundaries of our own rationality have restricted us to this internal landscape when making consumer decisions. Access to reliable and objective information about possible purchases was limited. It required more effort on our part than we were willing to expend. So, for the vast majority of purchases, these internal representations were enough for us. They acted as a proxy for information that lay beyond our grasp.

But the world has changed. For almost any purchase category you can think of, there exists reliable, objective information that is easy to access and filter. We are no longer restricted to internal brand activations (relative values based on our own past experiences and beliefs). Now, with a few quick searches, we can access objective information, often based on the experiences of others. In their book of the same name, Itamar Simonson and Emanuel Rosen call these sources “Absolute Value.” For more and more purchases, we turn to external sources because we can. The effort invested is more than compensated for by the value returned. In the process, the value of traditional branding is being eroded. This is truer for some product categories than others. The higher the risk or the level of interest, the more the prospect will engage in an external activation. But across all product categories, there has been a significant shift from the internal to the external.

What this means for advertising is that we have to shift our focus from internal spreading activations to external spreading activations. Now, when we retrieve an internal representation of a product or brand, it typically acts as a starting point, not the end point. That starting point is then modified or discarded completely, depending on the external information we access. The first activated node is our own initial concept of the product, but the subsequent nodes are spread throughout the digitized information landscape.

In an internal spreading activation, the nodes activated and the connections made between them all happen at a subconscious level. It’s beyond our control. But an external spreading activation is a different beast. It’s a deliberate information search conducted by the prospect. That means that the nodes accessed and the connections between those nodes become of critical importance. Advertisers have to understand what those external activation maps look like. They have to be intimately aware of the information nodes accessed and the connections used to get to those nodes. They also have to be familiar with the prospect’s information consumption preferences. At first glance, this seems to be an impossibly complex landscape to navigate. But in practice, we all tend to follow remarkably similar paths when establishing our external activation networks. Search is often the first connector we use. The nodes accessed and the information within those nodes follow predictable patterns for most product categories.
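As a thought experiment, here is one very simple way an advertiser might start sketching those external activation maps: tally the information nodes prospects touch, and the connections between them, across observed research journeys. The journeys and node names below are invented sample data, not findings from any study mentioned here.

```python
# An illustrative sketch: counting which external information nodes prospects
# touch, and which connections between nodes are most common. The journeys
# below are made-up sample data.
from collections import Counter

# Each journey is the ordered list of external nodes one prospect touched.
journeys = [
    ["search", "review site", "brand site", "purchase"],
    ["search", "youtube review", "review site", "purchase"],
    ["search", "review site", "forum", "brand site", "purchase"],
]

node_counts = Counter(node for j in journeys for node in j)
edge_counts = Counter((a, b) for j in journeys for a, b in zip(j, j[1:]))

print(node_counts.most_common(3))  # the most frequently touched nodes
print(edge_counts.most_common(3))  # the most common connections between nodes
```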

For the advertiser, it comes down to a question of where to most profitably invest your efforts. Traditional advertising was built on the foundation of controlling the internal activation. This was the psychology behind classic treatises such as Ries and Trout’s “Positioning: The Battle for Your Mind.” And, in most cases, that battle was won by whoever could assemble the best collection of smoke and mirrors. Advertising messaging had very little to do with facts and everything to do with persuasion.

But as Simonson and Rosen point out, the relative position of a brand in a prospect’s mind is becoming less and less relevant to the eventual purchase decision. Many purchases are now determined by what happens in the external activation. Factual, reliable information and easy access to that information become critical. Smoke and mirrors are relegated to advertising “noise” in this scenario. The marketer with a deep understanding of how the prospect searches for and determines what the “truth” is about a potential product will be the one who wins. And traditional marketing is becoming less and less important to that prospect.

The Spreading Activation Model of Marketing

“Beatle.”

I have just primed you. Before you even finished reading the word above, you had things popping into your mind. Perhaps it was a mental image of an individual Beatle – either John, Paul, George or Ringo. Perhaps it was a snippet of song. Perhaps it was grainy black-and-white footage of the Ed Sullivan Show appearance. But as the concept “Beatle” entered your working memory, your brain was hard at work retrieving what you believed were relevant concepts from your long-term memory. (By the way, if your reaction was “What’s a Beatle?” – substitute “Imagine Dragons.”)

That’s a working example of spreading activation. The activation of your working memory pulls associated concepts from your long-term memory to create a mental construct that becomes your internal definition of whatever that first label was.

Now, an important second step may or may not happen. First, you have to decide how long you’re going to let the “Beatle” prime occupy your working memory. If it’s of fleeting interest, you’ve probably already wiped the slate clean, ready for the next thing that catches your interest. But if that prime is strong enough to establish a firm grip on your attention, then you have a choice to make. Is your internal representation complete, or do you require more information? If you require more information, then you have to turn to external sources for that information.

Believe it or not, this column is not intended as a 101 primer in Cognitive Psych. But the mental gymnastics I describe are important when we think about marketing, as we go through exactly the same process when we think about potential purchases. If we can understand that process better, we gain some valuable hints about how to more effectively market in an exceedingly fluid technological environment.

Much of advertising is built on the first half of the process – building associative brand concepts and triggering the prime that retrieves those concepts into working memory. Most of what isn’t working about advertising lies on this side of the cognitive map. We’ve been overly focused on the internal activation, at the expense of the external. But thanks to an explosion of available (and objective) information, we’re less reliant on our internal knowledge when making purchase decisions. Itamar Simonson and Emanuel Rosen explain in their book “Absolute Value”: “A person’s decision to buy is affected by a mix of three related sources: the individual’s Prior preferences, beliefs, and experiences (P); Others – other people and information services (O); and Marketers (M).”

Simonson and Rosen say that with near-perfect information available to the consumer, we now rely more on (O) and less on (P) and (M). Let’s leave (M) and (O) aside for the moment and focus on the (P) in this equation. (P) represents our internal spreading activation. After we’re primed, we retrieve a representation of the product or service we’re thinking of. At this point, we make an internal calculation. We balance how confident we are that our internal representation is adequate to make a purchase against how much effort we have to expend to gather further information. This calculation is largely made subconsciously. It follows Herbert Simon’s principle of Bounded Rationality. It also depends on how much risk is involved in the purchase we’re contemplating. If all the factors dictate that we’re reasonably confident in our internal representation and the risk we’re assuming, we’ll pull out our wallets and buy. If, however, we aren’t confident, we’ll start seeking more information. And that’s where (O) and (M) come in.
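To make that calculation a little more concrete, here is a toy sketch of the confidence-versus-effort trade-off. The weights and the threshold are invented purely for illustration; Simonson and Rosen don’t supply numbers like these.

```python
# A toy model of the internal "buy or keep looking" calculation. The weights,
# threshold and example scores are assumptions made for this sketch.

def next_step(confidence_in_p: float, purchase_risk: float, search_effort: float) -> str:
    """
    confidence_in_p: how adequate our prior beliefs and experience (P) feel, 0-1
    purchase_risk:   perceived risk of getting the purchase wrong, 0-1
    search_effort:   perceived cost of gathering external information (O and M), 0-1
    """
    # Higher risk demands more confidence; cheap external search lowers the bar
    # for abandoning our internal representation and looking outside.
    required_confidence = 0.4 + 0.5 * purchase_risk - 0.2 * search_effort
    if confidence_in_p >= required_confidence:
        return "buy on the internal representation (P)"
    return "seek external information (O and M)"

# A low-risk, familiar purchase: prior beliefs are enough.
print(next_step(confidence_in_p=0.7, purchase_risk=0.1, search_effort=0.6))
# A high-risk purchase where external information is cheap to get: we go external.
print(next_step(confidence_in_p=0.7, purchase_risk=0.9, search_effort=0.1))
```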

Simonson and Rosen lay out a purchase behavior continuum, from (O) Dependent to (O) Independent. It’s at the (O) Dependent end, where internal confidence in retrieved beliefs and experience is low, that buying behaviors are changing dramatically. And it’s there where conventional approaches to advertising are falling far short of the mark. They are still stuck in the mythical times of Mad Men, when marketers relied on a “Prime, Retrieve (internal beliefs), Purchase” path. Today, it’s much more likely that the Prime and Retrieve stages will be followed by an external spreading activation. We’ll pick up that thread in next week’s Online Spin.

Consuming in Context

It was interesting watching my family watch the Oscars Sunday night. Given that I’m the father of two millennials, who have paired up with their own respective millennials, you can bet that it was a multi-screen affair. But to be fair, they weren’t the only ones splitting their attention between the TV and various mobile devices. I was also screen-hopping.

As Dave Morgan pointed out last week, media usage no longer equates to media opportunity. And it’s because the nature of our engagement has changed significantly in the last decade. Unfortunately, our ad models have been unable to keep up. What is interesting is the way our consumption has evolved. Not surprisingly, technology is allowing our entertainment consumption to evolve back to its roots. We are watching our various content streams in much the same way that we interact with our world. We are consuming in context.

The old way of watching TV was very linear in nature. It was also divorced from context. We suspended engagement with our worlds so that we could focus on the flickering screen in front of us. This, of course, allowed advertisers to buy our attention in little 30-second blocks. It was the classic bait and switch technique. Get our attention with something we care about, and then slip in something the advertiser cares about.

The reason we were willing to suspend engagement with the world was that there was nothing in that world that was relevant to our current task at hand. If we were watching Three’s Company, or the Moon Landing, or a streaker running behind David Niven at the 1974 Oscar ceremony, there was nothing in our everyday world that related to any of those TV events. Nothing competed for the spotlight of our attention. We had no choice but to keep watching the TV to see what happened next.

But imagine if a nude man suddenly appeared behind Matthew McConaughey at the 2015 Oscars. We would immediately want to know more about the context of what just happened. Who was it? Why did it happen? What’s the backstory? The difference is now, we have channels at our disposal to try to find answers to those questions. Our world now includes an extended digital nervous system that allows us to gain context for the things that happen on our TV screens. And because TV no longer has exclusive control of our attention, we switch to the channel that is the best bet to find the answers we seek.

That’s how humans operate. Our lives are a constant quest to fill gaps in our knowledge and, by doing so, make sense of the world around us. When we become aware of one of these gaps, we immediately scan our environment for cues about where we might find answers. Then, our senses are focused on the most promising cues. We forage for information to satiate our curiosity. A single-minded focus on one particular cue, especially one over which we have no control, is not something we evolved to do. The way we watched TV in the 60s and 70s was not natural. It was something we did because we had no option.

Our current mode of splitting attention across several screens is much closer to how humans naturally operate. We continually scan our environment, which, in this case, includes various electronic interfaces to the extended virtual world, for things of interest to us. When we find one, our natural need to make sense sends us on a quest for context. As we consume, we look for this context. The diligence of our quest for that context will depend on the degree of our engagement with the task at hand. If it is slight, we’ll soon move on to the next thing. If it’s deep, we’ll dig further.

On Sunday night, the Hotchkiss family quest for context continually skipped around, looking for what other movies J.K. Simmons had acted in, watching the trailer for Whiplash, reliving the infamous Adele Dazeem moment from last year and seeing just how old Benedict Cumberbatch is (I have two daughters who are hopelessly in love, much to the chagrin of their boyfriends). As much as the advertisers on the 87th Oscars might wish otherwise, all of this was perfectly natural. Technology has finally evolved to give our brain choices in our consumption.

Why More Connectivity is Not Just More – Why More is Different

Eric Schmidt is predicting from Davos that the Internet will disappear. I agree. I’ve always said that Search will go under the hood, changing from a destination to a utility. Not that Mr. Schmidt or the Davos crew needs my validation. My invitation seems to have got lost in the mail.

Laurie Sullivan’s recent post goes into some of the specifics of how search will become an implicit rather than an explicit utility. Underlying this is a pretty big implication that we should be aware of – the very nature of connectivity will change. Right now, the Internet is a tool, or a resource. We access it through conscious effort. It’s a “task at hand.” Our attention is focused on the Internet when we engage with it. The world described by Eric Schmidt and the rest of the panel is much, much different. In this world, the “Internet of Things” creates a connected environment that we exist in. And this has some pretty important consequences for us.

First of all, when something becomes an environment, it surrounds us. It becomes our world as we interpret it through our assorted sensory inputs. These inputs have evolved to interpret a physical world – an environment of things. We will need help interpreting a digital world – an environment of data. Our reality, or what we perceive our reality to be, will change significantly as we introduce technologically mediated inputs into it.

Our brains were built to parse information from a physical world. We have cognitive mechanisms that evolved to do things like keep us away from physical harm. Our brains were never intended to crunch endless reams of digital data. So, we will have to rely on technology to do that for us. Right now we have an uneasy alliance between our instincts and the capabilities of machines. We are highly suspicious of technology. There is every rational reason in the world to believe that a self-driving Google car will be far safer than a two-ton chunk of accelerating metal under the control of a fundamentally flawed human, but which of us is willing to give up the wheel? The fact is, however, that if we want to function in the world Schmidt hints at, we’re going to have to learn not only to trust machines, but also to rely totally on them.

The other implication is one of bandwidth. Our brains have bottlenecks. Right now, our brain and our senses subconsciously monitor our environment and, if the situation warrants, wake up our conscious mind for some focused and deliberate processing. The busier our environment gets, the bigger this challenge becomes. A digitally connected environment will soon exceed our brain’s ability to comprehend and process information. We will have to determine some pretty stringent filtering thresholds. And we will rely on technology to do the filtering. As I said, our physical senses were not built to filter a digital world.

An odd relationship with technology will have to develop. Even if we lower our guard about letting machines do much of our “thinking” (in terms of processing environmental inputs for us), we still have to learn how to give machines guidelines so they know what our intentions are. This raises the question, “How smart do we want machines to become?” Do we want machines that can learn about us over time, without explicit guidance from us? Are we ready for technology that guesses what we want?

One of the comments on Laurie’s post was from Jay Fredrickson, “Sign me up for this world, please. When will this happen and be fully rolled out? Ten years? 20 years?” Perhaps we should be careful what we wish for.  While this world may seem to be a step forward, we will actually be stepping over a threshold into a significantly different reality. As we step over that threshold, we will change what it means to be human. And there will be no stepping back.

Why Our Brains Love TV

Forrester Research analyst Shar VanBoskirk has pegged 2019 as the year when digital ad spend will surpass TV, topping the $100 billion mark. This is momentous in a number of ways, but not really surprising. If you throw all digital marketing in a single bucket, it was a question of when, not if, it would finally surpass TV. What is more surprising to me is how resilient TV has proven to be as an advertising medium. After all, we’re only a little more than a decade away from the 100th anniversary of broadcast TV (which started in 1928). TV has been the king of the media mountain for a long time.

So, what is it about TV that has so captured us for so long? What is it about the medium that allows our brains to connect to it so easily?

The Two Most Social Senses – Sight and Sound

Even as digital overtakes broadcast and cable television, we’re still mesmerized by the format of TV. Our interaction with the medium has shifted in a few interesting ways, notably time shifting, new platforms to consume it on and binge watching, but our actual interaction with the format itself hasn’t changed very much, save for the continual improvements in fidelity. It’s still sight and sound delivered electronically. And for us, that seems to be a very compelling combination. Despite some thus-far failed attempts to introduce another sense or dimension into the sight/sound duopoly, our brains seem to naturally default back to a relatively stable format of sound and two-dimensional images.

It’s no coincidence that these are the same two senses we rely on most heavily to connect with the outside world. They allow us to scan our environments “at-a-distance,” picking up cues of potential threats or rewards that we can then use our other senses to interact with more intimately. Smell, taste and touch are usually “close-up” senses that are relied on only when sight and sound have given the “all-clear” signal to our brains. For this reason, our brains have some highly developed mechanisms that allow us to parse the world through sight and sound – particularly sight. For example, the fusiform gyrus is a part of our brain that is dedicated to categorizing forms we see and fitting them into categories our brain recognizes. It’s this part of our brain that allows us to recognize faces and fit them into understandable categories such as friends, enemies, family, celebrities, etc.

These are also the two senses we use most often in social settings. If it weren’t for sight and sound, our ability to interact with each other would be severely curtailed. This offers another clue. Television is a good fit with our need to socialize. Sight and sound are the channel inputs to empathy. Our mirror neurons are activated when we see somebody else doing something. That’s why the saying is “Monkey See, Monkey Do,” and not “Monkey Taste, Monkey Do.” These two senses are all we really need to build a fairly rich representation of the world and create emotional connections to it.

We Want Immersion, But Not Too Much Immersion

So, if the combination of sight and sound seems to be a good match with our mechanisms for understanding the world – why has “more” not proven to be “better?” Why, for instance, have 3D and interactive TV not caught on to the extent forecast?

I think we’ve developed a comfortable balance with TV. Remember, sight and sound are generally used as “at-a-distance” parsers of our world. Because of the sheer volume of visual and auditory information coming through these channels, the brain has learned to filter input and only alert us when further engagement is required. If our brain had to process all the visual information available to it, it would overload to the point of breakdown. So while we want to be engaged in whatever we’re watching on TV, we aren’t looking to be totally immersed in it. This is why the multi-screen, multi-tasking behaviors that are quickly becoming the norm have emerged while we watch TV. 3D and interactive TV both add a demand for focal attention that isn’t necessary to enjoy a TV show.

The Concept of “Durable” Media

It’s interesting that as technology advances, every so often a media format emerges that is what I would call “durable.” It’s information or entertainment presented in a format that is a good cognitive match for our preferences and abilities. Even if technology is capable of adding “more” to these media, over time it turns out that “more” isn’t perceived as “better.”

Books are perhaps the most durable of media. The basic format of a book has been digitized, but our interaction with a book doesn’t look much different than it did in Gutenberg’s day. It’s still printed words on a page. Television also appears to be a durable medium. The format itself is fairly stable. It’s the revenue models that are built around it that will evolve as time goes on.

The Unintended Consequences of Technology

In last Friday’s Online Spin column, Kaila Colbin asks a common question when it comes to the noise surrounding the latest digital technologies: Who Cares? Colbin rightly points out that we tend to ascribe unearned importance to whatever digital technology we seem to be focused on at any given time. This is called, aptly enough, the focusing illusion, and in the words of Daniel Kahneman, who coined the term, “Nothing in life is as important as you think it is, while you are thinking about it.”

But there’s another side to this. How important are the things we aren’t thinking about? For example, because it’s difficult to wrap our minds around big picture consequences in the future, we tend not to think as much as we should about them. In the case of digital technology shifts such as the ones Kaila mentioned, what we should care about is the overall shift caused by the cumulative impact of these technologies, not the individual components that make up the wave.

When we introduce a new technology, we usually have some idea of the impact it will have. These are the intended consequences. And we focus on these, which makes them more important in our minds. But some things will catch us totally by surprise. These are called unintended consequences. We won’t know them until they happen, but when they do, we will very much care about them. To illustrate that point, I’d like to tell the story of the introduction of one technology that dramatically changed one particular society.

The Yir Yoront were a nomadic tribe in Australia that somehow managed to avoid significant contact with the Western world until well into the 20th century. In Yir Yoront society, one of the most valuable things you could possess was a stone axe. The making of these axes took time and skill and was typically done by elder males. In return, these “axe-makers” were accorded special status in aboriginal society. Only a man could own an axe, and if a woman or child needed one, they had to borrow it. A complex social network evolved around the ownership of axes.

In 1915 the Anglican Church established a mission in Yir Yoront territory. The missionaries brought with them a large supply of steel hatchets. They distributed these freely to any Yir Yoront that asked for them. The intended consequence was to make life easier for the tribe and trigger an improvement in living conditions.

As anthropologist Lauriston Sharp chronicled, steel axes spread rapidly through the Yir Yoront. But they didn’t spread evenly. Elder males held on to their stone axes, both as a symbol of their status and because of their distrust of the missionaries. It was the younger men, women and children, who previously had to borrow stone axes, who eagerly adopted the new steel ones. The steel axes were more efficient, and so jobs were done in much less time. But, to the missionaries’ horror, the Yir Yoront spent most of their extra leisure time sleeping.

Sleeping, however, was the least of the unintended consequences. Social structures, which had evolved over thousands of years, were dismantled overnight. Elders were forced to borrow steel axes from what would have been their social inferiors. People no longer attended important intertribal gatherings, which had once been the exchange venues for stone axes. Traditional trading channels and relationships disappeared. Men began prostituting their daughters and wives in exchange for someone else’s steel axe. The very fabric of Yir Yoront society began unraveling as a consequence of the introduction of steel axes by the Anglican missionaries.

Now, one may argue that there were aspects of this culture that were overdue for change. Traditional Yir Yoront society was undeniably chauvinistic. But the point of this story is not to pass judgment. My only purpose here is to show how new technologies can bring massive and unanticipated disruption to a society.

Everett Rogers used the Yir Yoront example in his seminal book Diffusion of Innovations. In it, he said that introductions of new technologies typically have three components: Form, Function and Meaning. The first two of these tend to be understood and intended during the introduction. Both the Yir Yoront and the Anglican missionaries understood the form and function of the steel axe. But neither understood the meaning, because meaning was determined over time through the absorption of the technology into the receiving culture. This is where unintended consequences come from.

When it comes to digital technologies, we usually talk about form and function. We focus on what a technology is and what it will do. We seldom talk about what the meaning of a new technology might be. This is because form and function can be intentionally designed and defined. Meaning has to evolve. You can’t see it until it happens.

So, to return to Kaila’s question. Who cares? Specifically, who cares about the meaning of the new technologies we’re all voraciously adopting? If the story of the Yir Yoront is any lesson, we all should.

Evolved Search Behaviors: Takeaways for Marketers

In the last two columns, I first looked at the origins of the original Golden Triangle, and then looked at how search behaviors have evolved in the last 9 years, according to a new eye tracking study from Mediative. In today’s column, I’ll try to pick out a few “so whats” for search marketers.

It’s Not About Location, It’s About Intent

In 2005, search marketing was all about location. It was about grabbing a part of the Golden Triangle, and the higher, the better. The delta between scanning and clicks from the first organic result to the second was dramatic – by a factor of 2 to 1! Similar differences were seen in the top paid results. It’s as if, given the number of options available on the page (usually between 12 and 18, depending on the number of ads showing), searchers used position as a quick and dirty way to filter results, reasoning that the higher the result, the better a match it would be to their intent.

In 2014, however, it’s a very different story. Because the first scan is now to find the most appropriate chunk, the importance of being high on the page is significantly lessened. Also, once the second step of scanning has begun within a results chunk, there seems to be more vertical scanning and less lateral scanning. Mediative found that in some instances, it was the third or fourth listing in a chunk that attracted the most attention, depending on content, format and user intent. For example, in the heat map shown below, the third organic result actually got as many clicks as the first, capturing 26% of all the clicks on the page and 15% of the time spent on the page. The reason could be that it was the only listing that had the Google ratings rich snippet, thanks to the proper use of structured data markup. In this case, the information scent that promised user reviews was a strong match with user intent, but you would only know this if you knew what that intent was.

[Heat map: Google results page for a Ford Fiesta search]
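For reference, here is a minimal sketch of the kind of structured data that can earn a ratings rich snippet. The product name and the numbers are invented, and the study doesn’t say which markup format that listing actually used; schema.org JSON-LD is simply one common way to do it.

```python
# An illustrative example of ratings markup, expressed as the JSON-LD a page
# could embed. The product name, rating value and review count are made up.
import json

rating_markup = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Ford Fiesta listing",  # hypothetical product name
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": 4.4,   # the average star rating shown in the snippet
        "reviewCount": 89,    # the number of reviews behind it
    },
}

# This JSON-LD would sit inside a <script type="application/ld+json"> tag on
# the page so search engines can read it.
print(json.dumps(rating_markup, indent=2))
```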

This change in user search scanning strategies makes it more important than ever to understand the most common intents that would make users turn to a search engine. What are the decision steps they go through, and at which of those steps might they turn to a search engine? Would it be to discover a solution to an identified need, to find out more about a known solution, to help build a consideration set for direct comparisons, to look for one specific piece of information (i.e., a price) or to navigate to one particular destination, perhaps to order online? If you know why your prospects might use search, you’ll have a much better idea of what you need to do to ensure you’re in the right place at the right time with the right content. Nothing shows this more clearly than the following comparison of heat maps. The one on the left was produced when searchers were given a scenario that required them to gather information. The one on the right resulted from a scenario where searchers had to find a site to navigate to. You can see the dramatic difference in scanning behaviors.

[Heat maps compared: an information-gathering scenario (left) vs. a navigational scenario (right)]

If search used to be about location, location, location, it’s now about intent, intent, intent.

Organic Optimization Matters More than Ever!

Search marketers have been saying that organic optimization is dying for at least two decades now, ever since I got into this industry. Guess what? Not only is organic optimization not dead, it’s now more important than ever! In Enquiro’s original 2005 study, the top two sponsored ads captured 14.1% of all clicks. In Mediative’s 2014 follow-up, that number really didn’t change much, edging up to 14.5%. What did change was the relevance of the rest of the listings on the page. In 2005, all the organic results combined captured 56.7% of the clicks. That left about 29% of users either going to the second page of results, launching a new search or clicking on one of the side sponsored ads (which accounted for only a small fraction of the clicks). In 2014, the organic results, including all the different category “chunks,” captured 74.6% of the clicks. This leaves only about 11% clicking on the side ads (again, a tiny percentage), going to the second page or launching a new search. That means Google has upped its first-page success rate to an impressive 90%.
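For anyone who wants to check the arithmetic behind that 90% figure, and the 84% figure in the next paragraph, here it is using the 2014 numbers quoted above.

```python
# Checking the arithmetic using the 2014 figures quoted in this column
# (all numbers are percentages of clicks on the results page).
top_sponsored_2014 = 14.5  # top sponsored ads
organic_2014 = 74.6        # all organic "chunks" combined

first_page_success = top_sponsored_2014 + organic_2014
leftover = 100 - first_page_success  # side ads, second page, or a new search
organic_share = organic_2014 / first_page_success * 100

print(f"first-page success rate: {first_page_success:.1f}%")        # 89.1%, roughly 90%
print(f"left over: {leftover:.1f}%")                                # 10.9%, roughly 11%
print(f"organic share of first-page clicks: {organic_share:.1f}%")  # 83.7%, roughly 84%
```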

First of all, that means you really need to break onto the first page of results to gain any visibility at all. If you can’t do it organically, make sure you pay for presence. But secondly, it means that of all the clicks that stay on that first page, some type of organic result is capturing about 84% of them. The trick is to know which type of organic result will capture the click – and to do that you need to know the user’s intent (see above). But you also need to optimize across your entire content portfolio. On my own blog, two of the biggest traffic referrers happen to be image searches.

Left Gets to Lead

The left side of the results page has always been important, but the evolution of scanning behaviors now makes it vital. The heat map below shows just how important it is to seed the left-hand side of the results with information scent.

[Heat map: scanning and clicks concentrated on the left-hand side of the results page]

Last week, I talked about how the categorization of results had caused us to adopt a two-stage scanning strategy: the first stage to determine which “chunks” of result categories are the best match to our intent, and the second to evaluate the listings in the most relevant chunks. The vertical scan down the left-hand side of the page is where we decide which “chunks” of results are the most promising. And, in the second scan, because of the improved relevancy, we often make the decision to click without a lot of horizontal scanning to qualify our choice. Remember, we’re only spending a little over a second scanning a result before we click. This is just enough to pick up the barest whiffs of information scent, and almost all of the scent comes from the left side of the listing. Look at the three choices above that captured the majority of scanning and clicks. The search was for “home decor store toronto.” The first popular result was a local result for the well-known brand Crate and Barrel. This reinforces how important brands can be if they show up on the left side of the result set. The second popular result was a website listing for another well-known brand – Pottery Barn. The third was a link to Yelp – a directory site that offered a choice of options. In all cases, the scent found in the far left of the result was enough to capture a click. There was almost no lateral scanning to the right. When crafting titles, snippets and metadata, make sure you stack information scent to the left.

In the end, there are no magic bullets from this latest glimpse into search behaviors. It still comes down to the five foundational planks that have always underpinned good search marketing:

  1. Understand your user’s intent
  2. Provide a rich portfolio of content and functionality aligned with those intents
  3. Ensure your content appears at or near the top of search results, either through organic optimization or well run search campaigns
  4. Provide relevant information scent to capture clicks
  5. Make sure you deliver on what you promise post-click

Sure, the game is a little more complex than it was 9 years ago, but the rules haven’t changed.

Google’s Golden Triangle – Nine Years Later

Last week, I reviewed why the Golden Triangle existed in the first place. This week, we’ll look at how the scanning patterns of Google users have evolved in the past 9 years.

The reason I wanted to talk about Information Foraging last week is that it really sets the stage for understanding how the patterns have changed with the present Google layout. In particular, one thing was true for Google in 2005 that is no longer true in 2014 – back then, all result sets looked pretty much the same.

Consistency and Conditioning

If humans do the same thing over and over again and usually achieve the same outcome, we stop thinking about what we’re doing and we simply do it by habit. It’s called conditioning. But habitual conditioning requires consistency.

In 2005, the Google results page was a remarkably consistent environment. There were always 10 blue organic links, and usually there were up to three sponsored results at the top of the page. There may also have been a few sponsored results along the right side of the page. Also, Google would put what it determined to be the most relevant results, both sponsored and organic, at the top of the page. This meant that for any given search, no matter the user intent, the top 4 results should presumably include the most relevant one or two organic results and a few hopefully relevant sponsored options for the user. If Google did its job well, there should be no reason to go beyond these top 4 results, at least in terms of a first click. And our original study showed that Google generally did do a pretty good job – over 80% of first clicks came from the top 4 results.

In 2014, however, we have a much different story. The 2005 Google was a one-size-fits-all solution. All results were links to a website. Now, not only do we have a variety of results, but even the results page layout varies from search to search. Google has become better at anticipating user intent and dynamically changes the layout on each search to be a better match for intent.

[Heat map: a 2014 Google results page showing the two-stage scanning pattern]

What this means, however, is that we need to think a little more whenever we interact with a search page. Because the Google results page is no longer the same for every single search we do, we have exchanged consistency for relevancy. This means that conditioning isn’t as important a factor as it was in 2005. Now, we must adopt a two-stage foraging strategy. This is shown in the heat map above. Our first foraging step is to determine what categories – or “chunks” of results – Google has decided to show on this particular results page. This is done with a vertical scan down the left side of the results set. In this scan, we’re looking for cues on what each chunk offers – typically in category headings or other quickly scanned labels. This first step is to determine which chunks are most promising in terms of information scent. Then, in the second step, we go back to the most relevant chunks and start scanning in a more deliberate fashion. Here, scanning behaviors revert to the “F”-shaped scan we saw in 2005, creating a series of smaller “Golden Triangles.”

What is interesting about this is that although Google’s “chunking” of the results page forces us to scan in two separate steps, it’s actually more efficient for us. The time spent scanning each result is half of what it was in 2005: 1.2 seconds vs. 2.5 seconds. Once we find the right “chunk” of results, the results shown tend to be more relevant, increasing our confidence in choosing them. You’ll see that the “mini” Golden Triangles have less lateral scanning than the original. We’re picking up enough scent on the left side of each result to push our “click confidence” over the required threshold.
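To make the two-stage strategy concrete, here is a toy model of it: pick the most promising chunk by its information scent, then scan within that chunk until a listing clears the click-confidence threshold. The chunks, listings and scent scores are invented for illustration; they are not data from the Mediative study.

```python
# A toy, illustrative model of the two-stage scan. The chunks, listings and
# scent scores are made up; they are not findings from the study.

results_page = {
    "local results":    {"scent": 0.8, "listings": ["Crate and Barrel", "Local shop B"]},
    "organic listings": {"scent": 0.6, "listings": ["Pottery Barn", "Yelp directory"]},
    "shopping ads":     {"scent": 0.3, "listings": ["Ad 1", "Ad 2"]},
}

def two_stage_scan(page, listing_scent, click_threshold=0.7):
    # Stage 1: a vertical scan down the left side to find the chunk with the
    # strongest information scent.
    best_chunk = max(page, key=lambda name: page[name]["scent"])
    # Stage 2: an "F"-shaped scan within that chunk; click the first listing
    # whose scent clears the click-confidence threshold.
    for listing in page[best_chunk]["listings"]:
        if listing_scent.get(listing, 0.0) >= click_threshold:
            return best_chunk, listing
    return best_chunk, None

chunk, click = two_stage_scan(results_page, {"Crate and Barrel": 0.9})
print(chunk, "->", click)  # local results -> Crate and Barrel
```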

A Richer Visual Environment

Google also offers a much more visually appealing results page than it did 9 years ago. Then, the entire results set was text-based. There were no images shown. Now, depending on the search, the page can include several images, as the example below (a search for “New Orleans art galleries”) shows.

[Heat map: results page with image results for a “New Orleans art galleries” search]

The presence of images has a dramatic impact on our foraging strategies. First of all, images can be parsed much more quickly than text. We can determine the content of an image in fractions of a second, while text requires a much slower and more deliberate type of mental processing. This means that our eyes are naturally drawn to images. You’ll notice that the above heat map has a light green haze over all the images shown. This is typical of the quick scan we do immediately upon page entry to determine what the images are about. Heat in an eye tracking heat map is produced by duration of foveal focus. This can be misleading when we’re dealing with images. The fovea centralis is, predictably, in the center of our eye, where our focus is the sharpest. We use it extensively when reading, but it’s not as important when we’re glancing at an image. We can make a coarse judgment about what a picture is without focusing on it. We don’t need our fovea to know it’s a picture of a building, or a person, or a map. It’s only when we need to determine the details of a picture that we’ll recruit the fine-grained resolution of our fovea.

Our ability to quickly parse images makes it likely that they will play an important role in our initial orientation scan of the results page. We’ll quickly scan the available images looking for information scent. If the image does offer scent, it will also act as a natural entry point for further scanning. Typically, when we see a relevant image, we look in the immediate vicinity to find more reinforcing scent. We often see scanning hot spots on titles or other text adjacent to relevant images.

We Cover More Territory – But We’re Also More Efficient

So, to sum up, it appears that with our new two-step foraging strategy, we’re covering more of the page, at least on our first scan, but Google is offering richer information scent, allowing us to zero in on the most promising “chunks” of information on the page. Once we find them, we are quicker to click on a promising result.

Next week, I’ll look at the implications of this new behavior on organic optimization strategies.