Publishers as Matchmakers

I’m a content creator. And, in this particular case, I’ve chosen MediaPost as the distribution point for that content. If we’re exploring the role of publishing in the future, the important question to ask here is why? After all, I could publish this post to my blog in a couple of clicks. And, thanks to my blogging software, it will automatically notify my followers that there’s a new post. So, what value does MediaPost add to that?

Again, we come back to signal and noise. I generate content primarily to reach an audience that is both wide and interested. As a digital marketing consultant, I have a financial incentive to grow my own personal brand, but to be honest, my reward is probably more tied up in social capital and my own ego. I publish because I want to be heard. And I want to be heard by people who find my content valuable. I have almost 2000 followers between my blog, Twitter feed and other social networks, but those people already know me. Hopefully, MediaPost will introduce me to new people who don’t know me. I want MediaPost to be my matchmaker.

Now, the second question to ask is, why are you reading this post on MediaPost? While I don’t presume to know your personal intentions, I can take a pretty good shot at generalizing – you are a MediaPost reader because you find the collection of content they publish interesting. It’s certainly not the only place online you can find content about marketing and media. And, if they chose to, any of the MediaPost writers could easily publish their content on their own blogs. You have chosen MediaPost because it acts as both a convenient access point and an effective filter.

This connection between content and audience is where publishers like MediaPost add value. Because you trust MediaPost to deliver content you find interesting, it passes the first level of your filtering threshold. I, as a content creator, get the benefit of MediaPost’s halo effect. The odds are better that I can connect with new readers under the MediaPost banner than if I’m introduced to you through a random, unfiltered tweet or an alert in your newsfeed. And here we have a potential clue to the future of revenue generation for publishers. If publishing is potentially a matchmaking service, perhaps we need to look at other matchmakers to see how they generate revenue.

In the traditional publishing world, it would be blasphemous to suggest that content creators should be charged for access to an audience. After all, we used to get paid to generate content by the publishers. But that was then and this is now. Understand, I’m not talking about native advertising or advertorials here. In fact, it would be the publisher’s responsibility to filter out unacceptably commercial editorials. I’m talking about creating an audience market for true content generators. In this day of personal branding, audiences have value. The better the audience, the higher the value. It should be worth something to me to reach new audiences. Publishers, in turn, act as the reader’s filter, ensuring the content they provide matches the user’s interest. Again, if the match is good enough, that has value for the reader.

Of course, the problem here is quantifying value on both sides of the relationship. I would imagine that both the content creators and content consumers reading my suggestions are probably saying, “There is no way I would pay for that!” And, in the current state of online publishing, I wouldn’t either – as either a creator or a consumer. The value isn’t there because the match isn’t strong enough. But if publishers focused on building the best possible audience and on presenting the best possible content, it might be a different story. More importantly, it would be a revenue model that would realign publishers with their audience, rather than pit them against it.

From the reader’s perspective, if a publisher was acting as your own private information filter, and not as a platform for poorly targeted advertising, you would probably be more willing to indicate your preferences and share information. If the publisher was discriminating enough, you might even be willing to allow them to introduce very carefully targeted offers from advertisers, filtering down to only the offers you’re highly likely to be interested in. This provides three potential revenue sources for the publisher: content creators looking for an audience, readers looking for an effective filtering service and advertisers looking for highly targeted introductions to prospects. In the last case, the revenue should be split with the prospect, with the publisher taking a percentage for handling the introduction and the rest going to the prospect in return for agreeing to accept the advertiser’s introduction.
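To make that last revenue stream concrete, here’s a minimal sketch in Python of how such a split might work. The fee, the 30% publisher share and the function name are all hypothetical – nothing here comes from an existing model:

```python
# Hypothetical sketch of the advertiser-introduction model described above:
# the publisher keeps a percentage for brokering the introduction, and the
# rest goes to the prospect who agreed to accept it.
def split_introduction_fee(fee, publisher_share=0.30):
    publisher_cut = fee * publisher_share
    prospect_cut = fee - publisher_cut
    return publisher_cut, prospect_cut

# e.g., a $5.00 fee per accepted introduction
publisher_cut, prospect_cut = split_introduction_fee(5.00)
print(f"Publisher keeps ${publisher_cut:.2f}, prospect receives ${prospect_cut:.2f}")
# Publisher keeps $1.50, prospect receives $3.50
```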

While radically different from today’s model, what I’ve proposed is not a new idea. It was first introduced in 1999, in the book Net Worth by John Hagel and Marc Singer. Granted, my take is less involved than theirs, but the basic idea is the same – a shift from a relentless battering of prospects with increasingly overt advertising messages to a careful filtering and matching of interests and appropriate content. And, when you think about it, the matching of intent and content is what Google has been doing for two decades.

Disruptive innovations tend to change the ways that value is determined. They take previous areas of scarcity and change them to ones of abundance. They upend markets and alter existing balances between forces. When the markets shift to this extent, trying to stick to the old paradigm guarantees failure. The challenge is that there is no new paradigm to follow. Experimentation is the only option. And to experiment you have to be willing to explore the boundaries. The answer won’t be found in the old, familiar territory.

Same Conversation. Different Location.

Note: This is my first Online Spin column for MediaPost.

First of all, let’s get the pleasantries out of the way. I’m Gord. I’m new to Online Spin, but not to MediaPost. If you don’t know me, I have been writing over on the Search Insider side of the house for the past 10 and a half years.

Nice to meet you.

Now, on to business. Just before the switch, I took online publishing to task for sacrificing its ability to communicate for the sake of advertising revenue. The user experience on most online publications is so littered with intrusive ads and misleading clickbait that it becomes almost impossible to actually read the content. My point, which is probably obvious, is that the short-term quest for revenue is jeopardizing the long-term health of the business model.

Among the comments posted were a few asking for guidance rather than just criticism. Fair enough. It’s much easier to criticize than it is to create. So, where does the future of publishing lie?

The problem, as it is in so many other cases, is that technology has annihilated the proverbial publishing apple cart. Publishing as an industry began because of the high transactional cost of publicizing information. Information began to be stacked vertically, because that was the only cost-effective way to do it. These vertical stacks of information attracted audiences because they were the only place audiences could get this information. Limited access points created large and loyal audiences, which in turn allowed ad-supported revenue models. Because transactional costs were high, information was scarce. Scarcity enabled profit.

Today, technology is, one by one, leveling the vertical stacks of information. Transactional costs of publishing have dropped to essentially zero. Yes, I’m publishing this post through a “publisher,” but it would be just as easy for me to publish to my own blog. And while MediaPost’s audience is probably larger than my own blog’s, the gap between the two narrows every day. The lower transactional costs of publishing have erased the scarcity of information.

This disruptive change has flipped the publishing model on its head. The problem with information used to be that we had too little access. The problem today is that we have too much. What we need now are filters. We need a way to separate the signal from the ever-increasing noise.

Now, think of what this reversal does to the revenue models of publishers. If the problem before was access, we would value any source of information that provided this access. We would be loyal to it. We would spend a significant amount of time with it. But if the problem becomes one of filtering, our loyalty level drops significantly. We just want to get to the information that is most interesting to us as quickly and efficiently as possible. If we have any allegiance to publishers at all, it is as a content filter. This is exactly why publishing empires are fragmenting into more and more specific vertical niches. We don’t need access points – we need effective filters.

Now, back to my original point. If the only way to make revenue from publishing is to introduce more noise – in the form of intrusive advertising – we quickly see the problem. We want publishers to eliminate extraneous noise and they add more. And to compound the problem, they intentionally blur the line between signal and noise in an attempt to generate more click-throughs. And, as Joe Marchese rightly points out, this vicious cycle is exacerbated by the bogus metric of “impressions” that publishers seem to have latched on to. The reader’s intent and the publisher’s intent are on a collision course with each other.

Given this, is there a way to save publishing? Perhaps, but it will be in a form much different than any we currently see. Publishing’s role may be in serving both as a filter and a matchmaker. More to come next Tuesday.

The Sorry State of Online Publishing

Dynamic tension can be a good thing. There are plenty of examples of when this is so. Online publishing isn’t one of them. The plunging transaction costs of publishing and the increasingly desperate attempts to shore up some sort of sustainable revenue model are creating a tug-of-war that’s threatening to tear apart the one person this whole sorry mess revolves around – the reader. Somebody better get their act together soon, because I’m one reader that’s getting sick of it.

Trying to read an article on most online publications is like trying to tiptoe through a cognitive minefield. The publishers have squeezed every possible advertising opportunity onto the page and, in doing so, have sacrificed credibility, cohesiveness and clarity. The job of publishing is communication, but these publishers seem all too willing to sacrifice communication for revenue. Methinks if you have to attack your own business model to make a profit, you should be taking a long hard look at said model.

Either Fish or Cut Click Bait

The problem has grown so pervasive that academia is even piling on. In the past few months, a number of studies have looked at the dismal state of online publishing.

In the quest for page views, publishers have mastered the trick of pushing our subconscious BSO (Bright Shiny Object) buttons with clickbait. Clickbait is essentially brain porn – headlines, often misleading, that you can’t resist clicking on. The theory: more page views mean more advertising opportunities. The problem is that clickbait essentially derails the mind from its predetermined focus. And worse, clickbait often distracts the brain with a misleading headline the subsequent article fails to deliver on. As Jon Stewart recently told New York Magazine, “It’s like carnival barkers, and they all sit out there and go, ‘Come on in here and see a three-legged man!’ So you walk in and it’s a guy with a crutch.”

A recent study in The Journal of Experimental Psychology showed that misleading headlines and something called “false balance” – where publishers give equal airtime to sources with very different levels of credibility – can negatively impact the reader’s ability to remember a story, build a cohesive understanding of it and cognitively process the information. In other words, the publisher’s desperate desire to grab eyeballs gets in the way of its ability to communicate effectively.

Buzzfeed Editor-in-Chief Ben Smith has gone on record about why he doesn’t use clickbait headlines: “Here is a trade secret I’d decided a few years ago we’d be better off not revealing — clickbait stopped working around 2009.” He references Facebook engineer Khalid El-Arini in the post, saying “readers don’t want to be tricked by headlines; instead, they want to be informed by them.”

Now You Read Me, Now You Don’t

If you ever wanted to test your resolve, try getting to the end of an online article. What content there is gets shoehorned into a format littered with ads and clickbait of every description. Many publishers even try to squeeze revenue from the content itself by using Text Enhance, an ad-serving platform that hyperlinks keywords in the copy and shows ads if your cursor strays anywhere near these links. Readers like me often use the cursor both as a place marker and as a quick way to vet the sources of embedded links. Text Enhance makes reading this way an incredibly frustrating experience, as it continually pops up poorly targeted ads while you tiptoe through the advertising landmines to piece together what the writer was originally trying to say. It turns reading content into a virtual game of “Whac-a-Mole.”

Of course, this is assuming you’ve made it past the page take-over and auto-play video ads that litter the “mind-field” between you and the content you want to access on a site like Forbes or The Atlantic. These interruptions in our intent create a negative mental framework that is compounded by having to weave through increasingly garish ad formats in order to piece together the content we’re trying to access.

A new study from Microsoft and Northwestern University shows that aggressive and annoying advertising may prop up short-term revenues, but at a long-term price that publishers should be thinking twice about paying: “The practice of running annoying ads can cost more money than it earns, as people are more likely to abandon sites on which they are present. In addition, in the presence of annoying ads, people were less accurate in remembering what they had read. None of these effects on users is desirable from the publisher’s perspective.”

Again, we have this recurring theme about revenue getting in the way of user experience. This is a conflict from which there can be no long-term benefit. When you frustrate users, you slowly kill your revenue source. You engage in a vicious cycle from which there is no escape.

I understand that online publishers are desperate. I get that. They should be. I suspect the ad-supported business platform they’re trying to prop up is hopelessly damaged. Another will emerge to take its place. But the more they frustrate us, the faster that will happen.

Why Our Brains Love TV

Forrester Research analyst Shar VanBoskirk has pegged 2019 as the year when digital ad spend will surpass TV, topping the $100 billion mark. This is momentous in a number of ways, but not really surprising. If you throw all digital marketing into a single bucket, it was a question of when, not if, it would finally surpass TV. What is more surprising to me is how resilient TV has proven to be as an advertising medium. After all, we’re only a little more than a decade away from the 100th anniversary of broadcast TV (which started in 1928). TV has been the king of the media mountain for a long time.

So, what is it about TV that has so captured us for so long? What is it about the medium that allows our brains to connect to it so easily?

The Two Most Social Senses – Sight and Sound

Even as digital overtakes broadcast and cable television, we’re still mesmerized by the format of TV. Our interaction with the medium has shifted in a few interesting ways – notably time-shifting, new platforms to consume it on and binge-watching – but our actual interaction with the format itself hasn’t changed very much, save for the continual improvements in fidelity. It’s still sight and sound delivered electronically. And for us, that seems to be a very compelling combination. Despite some thus-far failed attempts to introduce another sense or dimension into the sight/sound duopoly, our brains seem to naturally default back to a relatively stable format of sound and two-dimensional images.

It’s no coincidence that these are the same two senses we rely on most heavily to connect with the outside world. They allow us to scan our environments “at-a-distance,” picking up cues of potential threats or rewards that we can then use our other senses to interact with more intimately. Smell, taste and touch are usually “close-up” senses that are relied on only when sight and sound have given the “all-clear” signal to our brains. For this reason, our brains have some highly developed mechanisms that allow us to parse the world through sight and sound – particularly sight. For example, the fusiform gyrus is a part of the brain dedicated to parsing the forms we see and fitting them into categories our brain recognizes. It’s this part of our brain that allows us to recognize faces and sort them into understandable categories such as friends, enemies, family, celebrities, etc.

These are also the two senses we use most often in social settings. If it weren’t for sight and sound, our ability to interact with each other would be severely curtailed. This offers another clue. Television is a good fit with our need to socialize. Sight and sound are the channel inputs to empathy. Our mirror neurons are activated when we see somebody else doing something. That’s why the saying is “Monkey See, Monkey Do,” and not “Monkey Taste, Monkey Do.” These two senses are all we really need to build a fairly rich representation of the world and create emotional connections to it.

We Want Immersion, But Not Too Much Immersion

So, if the combination of sight and sound seems to be a good match with our mechanisms for understanding the world – why has “more” not proven to be “better?” Why, for instance, have 3D and interactive TV not caught on to the extent forecast?

I think we’ve developed a comfortable balance with TV. Remember, sight and sound are generally used as “at-a-distance” parsers of our world. Because of the sheer volume of visual and auditory information coming through these channels, the brain has learned to filter input and only alert us when further engagement is required. If our brain had to process all the visual information available to it, it would overload to the point of breakdown. So while we want to be engaged in whatever we’re watching on TV, we aren’t looking to be totally immersed in it. This is why the multi-screen, multi-tasking behaviors now emerging are quickly becoming the norm while we watch TV. 3D and interactive TV both add a demand for focal attention that isn’t necessary to enjoy a TV show.

The Concept of “Durable” Media

It’s interesting that as technology advances, every so often a media format emerges that is what I would call “durable.” It’s information or entertainment presented in a format that is a good cognitive match for our preferences and abilities. Even if technology is capable of adding “more” to these media, over time it turns out that “more” isn’t perceived as “better.”

Books are perhaps the most durable of media. The basic format of a book has been digitized, but our interaction with a book doesn’t look much different than it did in Gutenberg’s day. It’s still printed words on a page. Television also appears to be a durable medium. The format itself is fairly stable. It’s the revenue models built around it that will evolve as time goes on.

Facebook at Work – Stroke of Genius or Act of Desperation?

So, with the launch of Facebook at Work, Facebook wants to become your professional networking platform of choice, does it? Well, speaking as a sample of one, I don’t think so. And it all comes down to one key reason that I’ve talked about in the past, but that, for some reason, Facebook doesn’t seem to get – social modality.

Social modality is not a tough concept to understand. I’m one person in my office, another on the couch. The things that interest me in the office have little overlap with the things that interest me when I’m “sofa-tose” (nodding into a state of minimal consciousness on overstuffed furniture). But it’s not just about interests. It’s about context. I think differently. I act differently. I react differently. And I want to keep those two states as separate as possible.

Facebook seems to understand the need for separation. They’re building out Facebook at Work as a separate entity. But it’s still Facebook, and when I’ve got my business persona on, I don’t even think of Facebook. Neither, apparently, does anyone else. In 2010, BranchOut tried to build a professional network layer on top of Facebook. Last summer, it changed its business model. The reason? A lack of users. When you think of work, you just don’t think of Facebook. In fact, there’s almost an instinctual revulsion to the idea. Mixing Facebook and work is a cultural taboo.

When we look at the technologies we use to mediate our social activities, different rules apply. It’s not just about features or functionality – it’s about what instinctively feels right. Facebook is trying to create a monolithic platform for social connecting, and that doesn’t seem to be where we’re heading. Rather than consolidating our social activity, it’s splintering across different tools and platforms. One reason is functionality. The other is that, socially, we’re much too complex to fit into any one particular technological mold. I wrote a few months ago about the maturity continuum of social media. The final stage was to become a platform, which is exactly what Facebook is trying to do. But perhaps becoming a social media platform – at least in the sense that Facebook is attempting – isn’t possible. It could be that our social media personalities are too fractured to fit comfortably in any single destination.

Facebook’s revenue model depends on advertising, which depends on eyeballs. It’s a real estate play. Maybe to be successful, social has to be less about location and more about functionality. In other words, to become a social media platform, you have to be a utility, not a destination. Facebook seems to be trying to do both. According to an article in the Financial Times (registration required), Facebook at Work will offer functionality through chat, contact management and document collaboration, but it will do so on a site that “looks very much like Facebook,” including, one assumes, ads served from Facebook. By trying to attract eyeballs to drive revenue, Facebook won’t be able to avoid mixing modality, and therein lies the problem. I suspect Facebook at Work will join an ever-increasing string of Facebook failures.

LinkedIn isn’t perfect, but it has definitely established itself as the B-to-B platform of choice. It fits our sensibilities of what a professional social networking tool should be. And it doesn’t suffer from Facebook’s overly ambitious hubris. It hasn’t launched “LinkedIn at Home” in an attempt to become the social network platform for our non-work lives. It knows what it is. We know what it is. Our social modality isn’t conflicted. Facebook is another matter. It wants to be all things social to all people. I suppose from a revenue point of view you can’t blame them, but there’s a reason I don’t invite my co-workers to my family reunion – or vice versa.

Someday Facebook will learn that lesson. I suspect it will probably be the hard way.

#AlexfromTarget – An Unexpected Consequence of Technology

Yes, I’m belatedly jumping on the #AlexfromTarget bandwagon, but it’s in service of a greater truth that I’m trying to illustrate. Last column, I spoke about the unintended consequences of technology. I think this qualifies. And furthermore, this brings us full circle to Kaila Colbin’s original point, which started this whole prolonged discussion.

It is up to us to decide what is important, to create meaning and purpose. And, personally, I think we could do a better job than we’re doing now.

So, why did the entire world go ga-ga over a grocery bagger from Texas? What could possibly be important about this?

Well – nothing – and that’s the point. Thinking about important things is hard work. Damned hard work – if it’s really important. Important things are complex. They make our brains hurt. It’s difficult to pin them down long enough to plant some hooks of understanding in them. They’re like eating broccoli, or doing push-ups. They may be good for us, but that doesn’t make them any more fun.

Remember the Yir Yoront from my last column – the tribal society that was thrown into a tailspin by the introduction of steel axes? The intended consequence of that introduction was to make the Yir Yoront more productive. The axes did make the tribe more productive, in that they were able to do the essential tasks more quickly, but the result was that the Yir Yoront spent more time sleeping.

Here’s the thing about technology. It allows us to be more human – and by that I mean the mixed bag of good and bad that defines humanity. It extends our natural instincts. It’s natural to sleep if you don’t have to worry about survival. And it’s also natural for young girls to gossip about adorable young boys. These are hard-wired traits. Deep philosophical thought is not a hard-wired trait. Humans can do it, but it takes conscious effort.

Here’s where the normal distribution curve comes in. Any genetically determined trait will have a normal distribution over the population. How we apply new technologies will be no different. The vast majority of the population will cluster around the mean. But here’s the other thing – that “mean” is a moving target. As our brains “re-wire” and adapt to new technologies, the mean that defines typical behavior will move over time. We adopt strategies to incorporate our new technology-aided abilities. This creates a new societal standard, and it is also human to follow the unwritten rules of society. This creates a cause-and-effect cycle. Technologies enable new behaviors that are built on top of the foundations of human instinct – society determines whether these new behaviors are acceptable – and if they are acceptable, they become the new “mean” of our behavioral bell curve. We bounce new behaviors off the backboard of society. So, much as we may scoff at the fan-girls who gave “Alex” insta-fame, ultimately it’s not the girls’ fault, or technology’s. The blame lies with us. It also lies with Ellen DeGeneres, the New York Times, and the other barometers of societal acceptance that offered endorsement of the phenomenon.

It’s human to be distracted by the titillating and trivial. It’s also human to gossip about it. There’s nothing new here. It’s just that these behaviors used to remain trapped within the limited confines of our own social networks. Now, however, they’re amplified through technology. It’s difficult to determine what the long-term consequences of this might be. Is Nicholas Carr right? Is technology leading us down the garden path to imbecility, forever distracted by bright, shiny objects? Or is our finest moment yet to come?

The Unintended Consequences of Technology

In last Friday’s Online Spin column, Kaila Colbin asks a common question when it comes to the noise surrounding the latest digital technologies: Who Cares? Colbin rightly points out that we tend to ascribe unearned importance to whatever digital technology we happen to be focused on at any given time. This is called, aptly enough, the focusing illusion, and in the words of Daniel Kahneman, who coined the term, “Nothing in life is as important as you think it is, while you are thinking about it.”

But there’s another side to this. How important are the things we aren’t thinking about? For example, because it’s difficult to wrap our minds around big-picture consequences in the future, we tend not to think about them as much as we should. In the case of digital technology shifts such as the ones Kaila mentioned, what we should care about is the overall shift caused by the cumulative impact of these technologies, not the individual components that make up the wave.

When we introduce a new technology, we usually have some idea of the impact it will have. These are the intended consequences. And we focus on these, which makes them more important in our minds. But some things will catch us totally by surprise. These are called unintended consequences. We won’t know them until they happen, but when they do, we will very much care about them. To illustrate that point, I’d like to tell the story of the introduction of one technology that dramatically changed one particular society.

The Yir Yoront were a nomadic tribe in Australia that somehow managed to avoid significant contact with the Western world until well into the 20th century. In Yir Yoront society, one of the most valuable things you could possess was a stone axe. The making of these axes took time and skill and was typically done by elder males. In return, these “axe-makers” were conferred special status in aboriginal society. Only a man could own an axe; if a woman or child needed one, they had to borrow it. A complex social network evolved around the ownership of axes.

In 1915 the Anglican Church established a mission in Yir Yoront territory. The missionaries brought with them a large supply of steel hatchets. They distributed these freely to any Yir Yoront that asked for them. The intended consequence was to make life easier for the tribe and trigger an improvement in living conditions.

As anthropologist Lauriston Sharp chronicled, steel axes spread rapidly through the Yir Yoront. But they didn’t spread evenly. Elder males held on to their stone axes, both as a symbol of their status and because of their distrust of the missionaries. It was the younger men, women and children – those who previously had to borrow stone axes – who eagerly adopted the new steel axes. The steel axes were more efficient, and so jobs were done in much less time. But, to the missionaries’ horror, the Yir Yoront spent most of their extra leisure time sleeping.

Sleeping, however, was the least of the unintended consequences. Social structures, which had evolved over thousands of years, were dismantled overnight. Elders were forced to borrow steel axes from what would have been their social inferiors. People no longer attended important intertribal gatherings, which were once the exchange venues for stone axes. Traditional trading channels and relationships disappeared. Men began prostituting their daughters and wives in exchange for someone else’s steel axe. The very fabric of Yir Yoront society began unraveling as a consequence of the introduction of steel axes by the Anglican missionaries.

Now, one may argue that there were aspects of this culture that were overdue for change. Traditional Yir Yoront society was undeniably chauvinistic. But the point of this story is not to pass judgment. My only purpose here is to show how new technologies can bring massive and unanticipated disruption to a society.

Everett Rogers used the Yir Yoront example in his seminal book Diffusion of Innovations. In it, he said that introductions of new technologies typically have three components: Form, Function and Meaning. The first two of these tend to be understood and intended during the introduction. Both the Yir Yoront and the Anglican missionaries understood the form and function of the steel axe. But neither understood the meaning, because meaning is determined over time, through the absorption of the technology into the receiving culture. This is where unintended consequences come from.

When it comes to digital technologies, we usually talk about form and function. We focus on what a technology is and what it will do. We seldom talk about what the meaning of a new technology might be. This is because form and function can be intentionally designed and defined. Meaning has to evolve. You can’t see it until it happens.

So, to return to Kaila’s question. Who cares? Specifically, who cares about the meaning of the new technologies we’re all voraciously adopting? If the story of the Yir Yoront is any lesson, we all should.

Evolved Search Behaviors: Takeaways for Marketers

In the last two columns, I first looked at the origins of the original Golden Triangle, and then at how search behaviors have evolved in the last 9 years, according to a new eye-tracking study from Mediative. In today’s column, I’ll try to pick out a few “so whats” for search marketers.

It’s Not About Location, It’s About Intent

In 2005, search marketing was all about location. It was about grabbing a part of the Golden Triangle, and the higher, the better. The delta in scanning and clicks between the first organic result and the second was dramatic – a factor of 2 to 1! Similar differences were seen in the top paid results. It’s as if, given the number of options available on the page (usually between 12 and 18, depending on the number of ads showing), searchers used position as a quick and dirty way to filter results, reasoning that the higher the result, the better match it would be to their intent.

In 2014, however, it’s a very different story. Because the first scan is now to find the most appropriate chunk, the importance of being high on the page is significantly lessened. Also, once the second stage of scanning has begun within a results chunk, there seems to be more vertical scanning and less lateral scanning. Mediative found that in some instances, it was the third or fourth listing in a chunk that attracted the most attention, depending on content, format and user intent. For example, in the heat map shown below, the third organic result actually got as many clicks as the first, capturing 26% of all the clicks on the page and 15% of the time spent on the page. The reason could be that it was the only listing that had the Google ratings rich snippet, thanks to the proper use of structured data markup. In this case, the information scent that promised user reviews was a strong match with user intent – but you would only know this if you knew what that intent was.

[Image: heat map of a Google results page for a Ford Fiesta search]
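For those curious about the mechanics, here’s a minimal sketch of the kind of schema.org rating markup that can make a listing eligible for that rich snippet. The product name and rating values are invented for illustration; they aren’t taken from the study:

```python
import json

# Illustrative schema.org AggregateRating structured data, expressed as the
# JSON-LD that would be embedded in a page within a
# <script type="application/ld+json"> tag.
rating_markup = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Ford Fiesta",          # hypothetical product
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.2",       # invented values for illustration
        "reviewCount": "187",
    },
}

print(json.dumps(rating_markup, indent=2))
```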

This change in user search scanning strategies makes it more important than ever to understand the most common intents that would make users turn to a search engine. What are the decision steps they go through, and at which of those steps might they turn to a search engine? Would it be to discover a solution to an identified need, to find out more about a known solution, to help build a consideration set for direct comparisons, to look for one specific piece of information (i.e., a price) or to navigate to one particular destination, perhaps to order online? If you know why your prospects might use search, you’ll have a much better idea of what you need to do with your content to ensure you’re in the right place at the right time with the right content. Nothing shows this more clearly than the following comparison of heat maps. The one on the left was produced when searchers were given a scenario that required them to gather information. The one on the right resulted from a scenario where searchers had to find a site to navigate to. You can see the dramatic difference in scanning behaviors.

[Image: side-by-side heat maps comparing information-gathering and navigational search scenarios]

If search used to be about location, location, location, it’s now about intent, intent, intent.

Organic Optimization Matters More than Ever!

Search marketers have been saying that organic optimization is dying for at least two decades now – ever since I got into this industry. Guess what? Not only is organic optimization not dead, it’s now more important than ever! In Enquiro’s original 2005 study, the top two sponsored ads captured 14.1% of all clicks. In Mediative’s 2014 follow-up, the number really didn’t change that much, edging up to 14.5%. What did change was the relevance of the rest of the listings on the page. In 2005, all the organic results combined captured 56.7% of the clicks. That left about 29% of users either going to the second page of results, launching a new search or clicking on one of the side sponsored ads (which accounted for only a small fraction of the clicks). In 2014, the organic results, including all the different category “chunks,” captured 74.6% of the clicks. This leaves only 11% either clicking on the side ads (again, a tiny percentage), going to the second page or launching a new search. That means Google has upped its first-page success rate to an impressive 90%.
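If you want to see where that 90% figure comes from, here’s the back-of-envelope arithmetic, using the 2014 percentages quoted above (and reading the 84% figure in the next paragraph as organic’s share of first-page clicks):

```python
# Back-of-envelope check of the 2014 click-share figures quoted above.
top_sponsored = 14.5  # % of all clicks captured by the top sponsored ads
organic = 74.6        # % of all clicks captured by all organic "chunks"

first_page_clicks = top_sponsored + organic
print(f"First-page success rate: {first_page_clicks:.1f}%")  # 89.1% - roughly 90%

organic_share = organic / first_page_clicks * 100
print(f"Organic share of first-page clicks: {organic_share:.0f}%")  # 84%
```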

First of all, that means you really need to break onto the first page of results to gain any visibility at all. If you can’t do it organically, make sure you pay for presence. But secondly, it means that of all the clicks on the page, some type of organic result is capturing 84% of them. The trick is to know which type of organic result will capture the click – and to do that you need to know the user’s intent (see above). But you also need to optimize across your entire content portfolio. On my own blog, two of the biggest traffic referrers happen to be image searches.

Left Gets to Lead

The left side of the results page has always been important, but the evolution of scanning behaviors now makes it vital. The heat map below shows just how important it is to seed the left-hand side of results with information scent.

[Image: heat map showing scanning concentrated on the left-hand side of a Google results page]

Last week, I talked about how the categorization of results has caused us to adopt a two-stage scanning strategy: the first stage to determine which “chunks” of result categories are the best match to intent, and the second to evaluate the listings in the most relevant chunks. The vertical scan down the left-hand side of the page is where we decide which “chunks” of results are the most promising. And, in the second scan, because of the improved relevancy, we often make the decision to click without a lot of horizontal scanning to qualify our choice. Remember, we’re only spending a little over a second scanning a result before we click. This is just enough to pick up the barest whiffs of information scent, and almost all of that scent comes from the left side of the listing. Look at the three choices above that captured the majority of scanning and clicks. The search was for “home decor store toronto.” The first popular result was a local result for the well-known brand Crate and Barrel. This reinforces how important brands can be if they show up on the left side of the result set. The second popular result was a website listing for another well-known brand – Pottery Barn. The third was a link to Yelp – a directory site that offered a choice of options. In all cases, the scent found at the far left of the result was enough to capture a click. There was almost no lateral scanning to the right. When crafting titles, snippets and metadata, make sure you stack information scent to the left.
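To make “stacking scent to the left” a little more tangible, here’s a rough, hypothetical helper – the function and both sample titles are mine, not from the study – that reports how far into a title each key term first appears:

```python
# Hypothetical helper: report the character position at which each key term
# first appears in a title. Lower positions mean the scent sits further left,
# where a one-second scan is most likely to pick it up.
def scent_positions(title, key_terms):
    lower = title.lower()
    return {term: lower.find(term.lower()) for term in key_terms}

terms = ["home decor", "toronto"]
# Front-loaded vs. back-loaded versions of a title for the same page.
print(scent_positions("Home Decor Store in Toronto | Crate and Barrel", terms))
# {'home decor': 0, 'toronto': 20}
print(scent_positions("Crate and Barrel | Shop Furniture and Home Decor in Toronto", terms))
# {'home decor': 38, 'toronto': 52}
```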

In the end, there are no magic bullets from this latest glimpse into search behaviors. It still comes down to the five foundational planks that have always underpinned good search marketing:

  1. Understand your user’s intent
  2. Provide a rich portfolio of content and functionality aligned with those intents
  3. Ensure your content appears at or near the top of search results, either through organic optimization or well-run search campaigns
  4. Provide relevant information scent to capture clicks
  5. Make sure you deliver on what you promise post-click

Sure, the game is a little more complex than it was 9 years ago, but the rules haven’t changed.

Google’s Golden Triangle – Nine Years Later

Last week, I reviewed why the Golden Triangle existed in the first place. This week, we’ll look at how the scanning patterns of Google users have evolved in the past 9 years.

The reason I wanted to talk about Information Foraging last week is that it really sets the stage for understanding how the patterns have changed with the present Google layout. In particular, one thing was true for Google in 2005 that is no longer true in 2014 – back then, all result sets looked pretty much the same.

Consistency and Conditioning

If we do the same thing over and over again and usually achieve the same outcome, we stop thinking about what we’re doing and simply do it by habit. It’s called conditioning. But habitual conditioning requires consistency.

In 2005, the Google results page was a remarkably consistent environment. There were always 10 blue organic links, and usually there were up to three sponsored results at the top of the page. There may also have been a few sponsored results along the right side of the page. Also, Google would put what it determined to be the most relevant results, both sponsored and organic, at the top of the page. This meant that for any given search, no matter the user intent, the top 4 results should presumably include the most relevant one or two organic results and a few hopefully relevant sponsored options for the user. If Google did its job well, there should be no reason to go beyond these top 4 results, at least in terms of a first click. And our original study showed that Google generally did do a pretty good job – over 80% of first clicks came from the top 4 results.

In 2014, however, we have a much different story. The 2005 Google was a one-size-fits-all solution. All results were links to a website. Now, not only do we have a variety of results, but even the results page layout varies from search to search. Google has become better at anticipating user intent and dynamically changes the layout on each search to be a better match for intent.

[Image: heat map of a 2014 Google results page showing two-stage scanning]

What this means, however, is that we need to think a little more whenever we interact with a search page. Because the Google results page is no longer the same for every single search we do, we have exchanged consistency for relevancy. This means that conditioning isn’t as important a factor as it was in 2005. Now, we must adopt a two-stage foraging strategy, as shown in the heat map above. Our first foraging step is to determine what categories – or “chunks” of results – Google has decided to show on this particular results page. This is done with a vertical scan down the left side of the result set. In this scan, we’re looking for cues about what each chunk offers – typically category headings or other quickly scanned labels. This first step determines which chunks are most promising in terms of information scent. Then, in the second step, we go back to the most relevant chunks and start scanning in a more deliberate fashion. Here, scanning behaviors revert to the “F”-shaped scan we saw in 2005, creating a series of smaller “Golden Triangles.”

What is interesting about this is that although Google’s “chunking” of the results page forces us to scan in two separate steps, it’s actually more efficient for us. The time spent scanning each result is half of what it was in 2005: 1.2 seconds vs. 2.5 seconds. Once we find the right “chunk” of results, the listings shown tend to be more relevant, increasing our confidence in choosing them. You’ll see that the “mini” Golden Triangles have less lateral scanning than the original. We’re picking up enough scent on the left side of each result to push our “click confidence” over the required threshold.

A Richer Visual Environment

Google also offers a much more visually appealing results page than it did 9 years ago. Then, the entire results set was text-based. There were no images shown. Now, depending on the search, the page can include several images, as the example below (a search for “New Orleans art galleries”) shows.

[Image: heat map of a Google results page for “New Orleans art galleries,” showing image results]

The presence of images has a dramatic impact on our foraging strategies. First of all, images can be parsed much more quickly than text. We can determine the content of an image in fractions of a second, whereas text requires a much slower and more deliberate type of mental processing. This means that our eyes are naturally drawn to images. You’ll notice that the above heat map has a light green haze over all the images shown. This is typical of the quick scan we do immediately upon page entry to determine what the images are about. Heat in an eye-tracking heat map is produced by the duration of foveal focus, and this can be misleading when we’re dealing with images. The fovea centralis is, predictably, at the center of our field of vision, where our focus is sharpest. We use it extensively when reading, but it’s not as important when we’re glancing at an image. We can make a coarse judgment about what a picture is without focusing on it. We don’t need our fovea to know it’s a picture of a building, or a person, or a map. It’s only when we need to determine the details of a picture that we’ll recruit the fine-grained resolution of our fovea.

Our ability to quickly parse images makes it likely that they will play an important role in our initial orientation scan of the results page. We’ll quickly scan the available images looking for information scent. If an image does offer scent, it will also act as a natural entry point for further scanning. Typically, when we see a relevant image, we look in its immediate vicinity for more reinforcing scent. We often see scanning hot spots on titles or other text adjacent to relevant images.

We Cover More Territory – But We’re Also More Efficient

So, to sum up, it appears that with our new two-stage foraging strategy, we’re covering more of the page, at least on our first scan, but Google is offering richer information scent, allowing us to zero in on the most promising “chunks” of information on the page. Once we find them, we are quicker to click on a promising result.

Next week, I’ll look at the implications of this new behavior on organic optimization strategies.

The Evolution of Google’s Golden Triangle

In search marketing circles, most everyone has heard of Google’s Golden Triangle. It even has its own Wikipedia entry (which is more than I can say). The “Triangle” is rapidly coming up on its 10th birthday (it was March of 2005 when Did It and Enquiro – now Mediative – first released the study). This year, Mediative conducted a new study to see if what we found a decade ago still holds true. Another study, from the Institute of Communication and Media Research in Cologne, Germany, also looked at the evolution of search user behaviors. I’ll run through the findings of both studies to see if the Golden Triangle still exists. But before we dive in, let’s look back at the original study.

Why We Had a Golden Triangle in the First Place

To understand why the Golden Triangle appeared in the first place, you have to understand how humans look for relevant information. For this, I’m borrowing heavily from Peter Pirolli and Stuart Card at PARC and their Information Foraging Theory (by the way, absolutely every online marketer, web designer and usability consultant should be intimately familiar with this theory).

Foraging for Information

Humans “forage” for information. In doing so, we are very judicious about the amount of effort we expend to find the available information. This is largely a subconscious activity, with our eyes rapidly scanning for cues of relevancy. Pirolli and Card refer to these cues as “information scent.” Picture a field mouse scrambling across a table looking for morsels to eat and you’ll have an appropriate mental context in which to understand the concept of information foraging. In most online contexts, our initial evaluation of the amount of scent on a page takes no more than a second or two. In that time, we also find the areas that promise the greatest scent and go directly to them. To use our mouse analogy, the first thing she does is scurry quickly across the table to see where the scent of possible food is greatest.

The Area of Greatest Promise

Now, imagine that same mouse comes back day after day to the same table, and every time she returns, she finds the greatest amount of food in the same corner. After a week or so, she learns that she doesn’t have to scurry across the entire table. All she has to do is go directly to that corner and start there. If, by some fluke, there is no food there, then the mouse can again check out the rest of the table to see if there are better offerings elsewhere. The mouse has been conditioned to go directly to the “Area of Greatest Promise” first.

[Image: the original 2005 Golden Triangle heat map]

F-Shaped Scanning

This was exactly the case when we did the first eye-tracking study in 2005. Google had set a table of available information, but it always put the best information in the upper left corner. We became conditioned to go directly to the area of greatest promise. The triangle shape came about because of the conventions of how we read in the Western world. We read top to bottom, left to right. So, to pick up information scent, we would first scan down the beginning of each of the top 4 or 5 listings. If we saw something that seemed to be a good match, we would scan across the title of the listing. If it was still a good match, we would quickly scan the description and the URL. If Google was doing its job right, there would be more of this lateral scanning on the top listing than on the subsequent listings. This F-shaped scanning strategy would naturally produce the Golden Triangle pattern we saw.

Working Memory and Chunking

There was another behavior we saw that helped explain the heat maps that emerged. Our ability to actively compare options requires us to hold in our minds information about each of the options. This means that the number of options we can compare at any one time is restricted by the limits of our working memory. George Miller, in a famous paper in 1956, determined this to be 7 pieces of information, plus or minus two. The actual number depends on the type of information to be retained and the dimension of variability. In search foraging, the dimension is relevancy, and the inputs to the calculation are quick judgments of information scent based on a split-second scan of the listing. This is a fairly complex assessment, so we found that the number of options compared at once tends to max out at about 3 or 4 listings. This means that the user “chunks” the page into groupings of 3 or 4 listings and determines if one of the listings is worthy of a click. If not, the user moves on to the next chunk. We also see this in the heat map shown. Scanning activity drops dramatically after the first 4 listings. In our original study, we found that over 80% of first clicks on all the results pages tested came from the top 4 listings. This is also likely why Google restricted the paid ads shown above the organic results to 3 at most.

So, that’s a quick summary of our findings from the 2005 study. Next week, we’ll look at how search scanning has changed in the past 9 years.

Note: Mediative and SEMPO will be hosting a Google+ Hangout talking about their research on October 14th. Full details can be found here.