Why SEO Never Lived Up to Its Potential

IAB Canada President Chris Williams asked me a great question last week.

We had just finished presenting the results of the new eye tracking study I told you about in the last three columns. I had mentioned that about 84% of all the clicks on the page in the study were on some type of non-paid result. I had also polled the audience of some 400 plus Internet marketers about how many were doing some type of organic optimization. A smattering of hands (which, in case you’re wondering, is somewhere south of a dozen, or about 3% of the audience) went up. Williams picked up on the disconnect right away. “We have a multi-billion dollar interactive advertising industry here in Canada, and you’re telling me that (on search at least) that only represents about 16% of the potential traffic? Why isn’t SEO a massive industry?”

Like I said – great question. I wish I had responded with a great answer. But the best I could do fell well short of the mark: “Uhh..well…(pick up slight whining tone at this point)…SEO is just really, really hard!”

Okay, maybe I was slightly more eloquent than that – but the substance of my reply was essentially that flimsy. SEO is a backbreaking way to earn a living, whether you’re a lone consultant, an agency or an in-house marketer.

Coincidentally, I was also in an inaugural call last week with a dear friend of mine who asked me to serve on the advisory board of his successful digital agency. I asked if they offered SEO services. I got the same answer from him – SEO was just too hard to make it profitable. They dropped it a few years ago from their services portfolio.

It Was a Case of Showing Search the Money

The potential value of SEO hasn’t changed in the almost 20 years since I started in this biz. In fact, it’s probably greater than ever. But SEO never seems to gain traction. The reason becomes clear when you start following the money. Goto.com (which became Overture, which was swallowed by Yahoo) sealed SEO’s fate when it started auctioning off search ads in 1998. Google eventually followed suit in 2000 and the rest, along with SEO, was history. Even devout SEOers (myself included) eventually followed the money trail to the paid side of the house.

The reasons why were abundantly and painfully clear when you consider this one particular example. We had the SEO contract with one Fortune 500 brand that brought in about $300K annually. At the time, it would have been our biggest SEO contract, but it also was resource intensive. We had an entire team working on it. We did well, securing a number of first page results for some very high traffic terms. Based on what analytics we had, it appeared that SEO was driving about 90% of the traffic and was converting substantially better than any other traffic source, including paid search. This translated into hundreds of millions of dollars in business yearly. But we could never seem to grow our contract beyond that $300K ceiling.

Paid search was another story. From fairly humble beginnings, that same brand became one of Google’s top advertisers, spending over $30 million per year. The management of that contract became a multimillion-dollar account. Unfortunately, it wasn’t our account. It belonged to another agency – a much smarter and more profitable agency.

Why We Got Pigeonholed with SEO

If, as a service provider, you live and die by SEO, it’s probable that you’ll end up dying by SEO. Here’s why. To gain any traction you need to have influence over almost every aspect of the business. SEO has to become systemic. It has to be baked into the way an organization does business. It can’t be done as window dressing.

Most organizations don’t get that. They get tantalized by initial easy wins – things like cleaning up code, improving crawlability and doing some basic content optimization. Organic traffic skyrockets and everyone cheers. Life is good. But then it gets hard. The next step means rolling up your sleeves and diving deep into the guts of the organization. And if that organization isn’t ready to open the kimono to the SEO consultant at all levels, you hit a brick wall. This is typically where the organization falls prey to more unscrupulous SEO promises and practices from other vendors, which invariably get slammed by a future algo-update. And that brings us to the last challenge for SEO.

Flip Your SEO Coin

Even the best SEOers can get blindsided by Google. A tweak in an algorithm or a shift in ranking factors can drop you like a rock from the first page. And, if the recent study showed anything, it was that you can’t afford to drop from the first page. Traffic can go from a roar to a whisper overnight. That’s tough for the marketing department of an organization to swallow. People in the C-Suite who sign off on a sizable SEO contract have a tough time understanding why their investment suddenly got flushed down Google’s drain, perhaps never to resurface. They love control, and SEO offers anything but. As important as SEO is, it’s not predictable. You can’t bank on it.

So Chris…thanks for the question. Like I said, it was a really good one. And I hope this is a little better answer than the one I came up with on the spot.

Evolved Search Behaviors: Takeaways for Marketers

In the last two columns, I first looked at the origins of the original Golden Triangle, and then looked at how search behaviors have evolved in the last 9 years, according to a new eye tracking study from Mediative. In today’s column, I’ll try to pick out a few “so whats” for search marketers.

It’s Not About Location, It’s About Intent

In 2005, search marketing was all about location. It was about grabbing a part of the Golden Triangle, and the higher, the better. The delta between scanning and clicks from the first organic result to the second was dramatic – by a factor of 2 to 1! Similar differences were seen in the top paid results. It’s as if, given the number of options available on the page (usually between 12 and 18, depending on the number of ads showing), searchers used position as a quick and dirty way to filter results, reasoning that the higher the result, the better match it would be to their intent.

In 2014, however, it’s a very different story. Because the first scan is now to find the most appropriate chunk, the importance of being high on the page is significantly lessened. Also, once the second step of scanning has begun, within a results chunk, there seems to be more vertical scanning within the chunk and less lateral scanning. Mediative found that in some instances, it was the third or fourth listing in a chunk that attracted the most attention, depending on content, format and user intent. For example, in the heat map shown below, the third organic result actually got as many clicks as the first, capturing 26% of all the clicks on the page and 15% of the time spent on page. The reason could be that it was the only listing that had the Google Ratings Rich Snippet because of the proper use of structured data markup. In this case, the information scent that promised user reviews was a strong match with user intent, but you would only know this if you knew what that intent was.
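For readers unfamiliar with the markup involved, a ratings rich snippet comes from schema.org structured data embedded in the page. Here’s a minimal sketch in Python that serializes the relevant JSON-LD; the store name and rating figures are hypothetical placeholders, not values from the study.

```python
import json

# Minimal sketch of the schema.org structured data that can earn a
# Google ratings rich snippet. The business name and figures below
# are hypothetical placeholders, not from the study.
snippet = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Home Decor Store",
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "128",
    },
}

# Serialized as JSON-LD; on a live page this would sit inside a
# <script type="application/ld+json"> tag in the HTML.
json_ld = json.dumps(snippet, indent=2)
print(json_ld)
```

The point isn’t the specific fields – it’s that the markup is what gives Google the signal to decorate the listing, and the decoration is what carries the extra information scent.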


This change in user search scanning strategies makes it more important than ever to understand the most common user intents that would make them turn to a search engine. What will be the decision steps they go through, and at which of those steps might they turn to a search engine? Would it be to discover a solution to an identified need, to find out more about a known solution, to help build a consideration set for direct comparisons, to look for one specific piece of information (i.e., a price) or to navigate to one particular destination, perhaps to order online? If you know why your prospects might use search, you’ll have a much better idea of what you need to do with your content to ensure you’re in the right place at the right time with the right content. Nothing shows this more clearly than the following comparison of heat maps. The one on the left was the heat map produced when searchers were given a scenario that required them to gather information. The one on the right resulted from a scenario where searchers had to find a site to navigate to. You can see the dramatic difference in scanning behaviors.


If search used to be about location, location, location, it’s now about intent, intent, intent.

Organic Optimization Matters More than Ever!

Search marketers have been saying that organic optimization has been dying for at least two decades now, ever since I got into this industry. Guess what? Not only is organic optimization not dead, it’s now more important than ever! In Enquiro’s original 2005 study, the top two sponsored ads captured 14.1% of all clicks. In Mediative’s 2014 follow up, the number really didn’t change that much, edging up to 14.5%. What did change was the relevance of the rest of the listings on the page. In 2005, all the organic results combined captured 56.7% of the clicks. That left about 29% of the users either going to the second page of results, launching a new search or clicking on one of the side sponsored ads (these only accounted for a small fraction of the clicks). In 2014, the organic results, including all the different category “chunks,” captured 74.6% of the clicks. This leaves only 11% either clicking on the side ads (again, a tiny percentage), going to the second page or launching a new search. That means that Google has upped their first page success rate to nearly 90%.
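The arithmetic behind those shares is easy to verify. A quick sketch, using the figures exactly as quoted from the two studies:

```python
# Click-share figures as reported in the 2005 and 2014 studies.
top_ads_2005, organic_2005 = 14.1, 56.7
top_ads_2014, organic_2014 = 14.5, 74.6

# Share of users who clicked nothing satisfying on page one
# (second page, new search, or the tiny side-ad fraction).
leftover_2005 = 100 - (top_ads_2005 + organic_2005)    # ~29%
leftover_2014 = 100 - (top_ads_2014 + organic_2014)    # ~11%

# Google's "first page success rate": top ads plus organic clicks.
first_page_success_2014 = top_ads_2014 + organic_2014  # ~89%

print(round(leftover_2005, 1),
      round(leftover_2014, 1),
      round(first_page_success_2014, 1))
```

Summing the 2014 shares gives about 89%, which the column rounds up to the "nearly 90%" success rate.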

First of all, that means you really need to break onto the first page of results to gain any visibility at all. If you can’t do it organically, make sure you pay for presence. But secondly, it means that of all the clicks on the page, some type of organic result is capturing 84% of them. The trick is to know which type of organic result will capture the click – and to do that you need to know the user’s intent (see above). But you also need to optimize across your entire content portfolio. With my own blog, two of the biggest traffic referrers happen to be image searches.

Left Gets to Lead

The left side of the results page has always been important, but the evolution of scanning behaviors now makes it vital. The heat map below shows just how important it is to seed the left-hand side of the results with information scent.


Last week, I talked about how the categorization of results had caused us to adopt a two-stage scanning strategy, the first to determine which “chunks” of result categories are the best match to intent, and the second to evaluate the listings in the most relevant chunks. The vertical scan down the left-hand side of the page is where we decide which “chunks” of results are the most promising. And, in the second scan, because of the improved relevancy, we often make the decision to click without a lot of horizontal scanning to qualify our choice. Remember, we’re only spending a little over a second scanning the result before we click. This is just enough to pick up the barest whiffs of information scent, and almost all of the scent comes from the left side of the listing. Look at the three choices above that captured the majority of scanning and clicks. The search was for “home decor store toronto.” The first popular result was a local result for the well-known brand Crate and Barrel. This reinforces how important brands can be if they show up on the left side of the result set. The second popular result was a website listing for another well-known brand – Pottery Barn. The third was a link to Yelp – a directory site that offered a choice of options. In all cases, the scent found in the far left of the result was enough to capture a click. There was almost no lateral scanning to the right. When crafting titles, snippets and metadata, make sure you stack information scent to the left.
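That advice can be made concrete with a rough heuristic. The sketch below checks whether a title’s key terms land inside the left-hand window a searcher actually scans; the 30-character window and the example titles are assumptions for illustration, not figures from the study.

```python
# Rough sketch of the "stack scent to the left" advice: check whether
# a title's key terms fall inside the first characters a scanner sees.
# The 30-character window is an assumed value, not from the study.
SCAN_WINDOW = 30

def scent_up_front(title: str, key_terms: list[str]) -> bool:
    """True if every key term appears inside the left-hand scan window."""
    window = title[:SCAN_WINDOW].lower()
    return all(term.lower() in window for term in key_terms)

# Front-loaded title: the brand and location lead, so scent survives.
front = "Crate and Barrel Toronto | Home Decor & Furniture"
print(scent_up_front(front, ["Crate and Barrel"]))   # True

# Back-loaded title: the brand arrives after the scan window.
back = "Shop Our Huge Selection of Furnishings at Crate and Barrel"
print(scent_up_front(back, ["Crate and Barrel"]))    # False
```

A check like this could slot into a content audit to flag titles whose information scent only shows up where nobody is looking.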

In the end, there are no magic bullets from this latest glimpse into search behaviors. It still comes down to the five foundational planks that have always underpinned good search marketing:

  1. Understand your user’s intent
  2. Provide a rich portfolio of content and functionality aligned with those intents
  3. Ensure your content appears at or near the top of search results, either through organic optimization or well run search campaigns
  4. Provide relevant information scent to capture clicks
  5. Make sure you deliver on what you promise post-click

Sure, the game is a little more complex than it was 9 years ago, but the rules haven’t changed.

Google’s Golden Triangle – Nine Years Later

Last week, I reviewed why the Golden Triangle existed in the first place. This week, we’ll look at how the scanning patterns of Google users have evolved in the past 9 years.

The reason I wanted to talk about Information Foraging last week is that it really sets the stage for understanding how the patterns have changed with the present Google layout. In particular, one thing was true for Google in 2005 that is no longer true in 2014 – back then, all result sets looked pretty much the same.

Consistency and Conditioning

If humans do the same thing over and over again and usually achieve the same outcome, we stop thinking about what we’re doing and we simply do it by habit. It’s called conditioning. But habitual conditioning requires consistency.

In 2005, the Google results page was a remarkably consistent environment. There were always 10 blue organic links and usually there were up to three sponsored results at the top of the page. There may also have been a few sponsored results along the right side of the page. Also, Google would put what it determined to be the most relevant results, both sponsored and organic, at the top of the page. This meant that for any given search, no matter the user intent, the top 4 results should presumably include the most relevant one or two organic results and a few hopefully relevant sponsored options for the user. If Google did its job well, there should be no reason to go beyond these 4 top results, at least in terms of a first click. And our original study showed that Google generally did do a pretty good job – over 80% of first clicks came from the top 4 results.

In 2014, however, we have a much different story. The 2005 Google was a one-size-fits-all solution. All results were links to a website. Now, not only do we have a variety of results, but even the results page layout varies from search to search. Google has become better at anticipating user intent and dynamically changes the layout on each search to be a better match for intent.


What this means, however, is that we need to think a little more whenever we interact with a search page. Because the Google results page is no longer the same for every single search we do, we have exchanged consistency for relevancy. This means that conditioning isn’t as important a factor as it was in 2005. Now, we must adopt a two-stage foraging strategy. This is shown in the heat map above. Our first foraging step is to determine what categories – or “chunks” of results – Google has decided to show on this particular results page. This is done with a vertical scan down the left side of the results set. In this scan, we’re looking for cues on what each chunk offers – typically in category headings or other quickly scanned labels. This first step is to determine which chunks are most promising in terms of information scent. Then, in the second step, we go back to the most relevant chunks and start scanning in a more deliberate fashion. Here, scanning behaviors revert to the “F” shaped scan we saw in 2005, creating a series of smaller “Golden Triangles.”

What is interesting about this is that although Google’s “chunking” of the results page forces us to scan in two separate steps, it’s actually more efficient for us. The time spent scanning each result is half of what it was in 2005, 1.2 seconds vs. 2.5 seconds. Once we find the right “chunk” of results, the results shown tend to be more relevant, increasing our confidence in choosing them.  You’ll see that the “mini” Golden Triangles have less lateral scanning than the original. We’re picking up enough scent on the left side of each result to push our “click confidence” over the required threshold.

A Richer Visual Environment

Google also offers a much more visually appealing results page than they did 9 years ago. Then, the entire results set was text based. There were no images shown. Now, depending on the search, the page can include several images, as the example below (a search for “New Orleans art galleries”) shows.


The presence of images has a dramatic impact on our foraging strategies. First of all, images can be parsed much quicker than text. We can determine the content of an image in fractions of a second, where text requires a much slower and more deliberate type of mental processing. This means that our eyes are naturally drawn to images. You’ll notice that the above heat map has a light green haze over all the images shown. This is typical of the quick scan we do immediately upon page entry to determine what the images are about. Heat in an eye tracking heat map is produced by duration of foveal focus. This can be misleading when we’re dealing with images. The fovea centralis is, predictably, in the center of our eye where our focus is the sharpest. We use this extensively when reading, but it’s not as important when we’re glancing at an image. We can make a coarse judgement about what a picture is without focusing on it. We don’t need our fovea to know it’s a picture of a building, or a person, or a map. It’s only when we need to determine the details of a picture that we’ll recruit the fine-grained resolution of our fovea.

Our ability to quickly parse images makes it likely that they will play an important role in our initial orientation scan of the results page. We’ll quickly scan the available images looking for information scent. If the image does offer scent, it will also act as a natural entry point for further scanning. Typically, when we see a relevant image, we look in the immediate vicinity to find more reinforcing scent. We often see scanning hot spots on titles or other text adjacent to relevant images.

We Cover More Territory – But We’re Also More Efficient

So, to sum up, it appears that with our new two-step foraging strategy, we’re covering more of the page, at least on our first scan, but Google is offering richer information scent, allowing us to zero in on the most promising “chunks” of information on the page. Once we find them, we are quicker to click on a promising result.

Next week, I’ll look at the implications of this new behavior on organic optimization strategies.

Rethinking the Channelization of Advertising

Anybody who has been a regular reader of my column knows I very seldom write a column exclusively about search, even though it runs every Thursday under the masthead of “Search Insider.” I’ve been fortunate in that Ken Fadner and the editorial staff of Mediapost have never restricted my choice of subject matter. But the eclecticism of my column isn’t simply because I’m attention deficit. It’s because the subject that interests me most is the intersection between human behavior and technology. Although that often involves search, it also includes mobile, social, email and a number of other channels. I simply couldn’t write about what interests me if I was restricted to a single channel.

So why is Mediapost divided into the subject areas it is? Why, when you go to navigate the site, do you choose from email marketing, search marketing, mobile marketing, real time marketing, video marketing or social media marketing? Mediapost is structured this way because it’s a reflection of the industry it serves. Online marketing is divvied up in exactly the same way. We are an industry of channels.

The problem here is one of perspective – the industry perspective vs. the customer perspective. Let me use another example to make my point. One of the best things about cruising the Rhine is that there is a stunning medieval castle or fortress around every bend. From Rüdesheim to Koblenz (the Middle Rhine) there are over 40 of these fortifications sprinkled along 40 miles of the river. As picturesque as they are, they were not put there to enhance the views for generations of sightseers yet to come. They were put there because the river was one of the major thoroughfares of Europe and anyone who owned land along the river had the opportunity to make some money. They exacted tolls from travellers to guarantee safe passage.

While this buildup along the Rhine probably made sense for the German land barons, it did nothing to make life easier for the poor souls who had to get up the Rhine to reach their eventual destination. Unfortunately, they had few alternatives. They were stuck with paying the tolls.

The advertising business is divided up into channels for exactly the same reason the Rhine has a castle every mile. Channels are there to show ownership of property. Advertising is a way to generate revenue from that ownership. It is a toll that customers have to pay. Mediapost is divided up the way it is because its readers are the modern-day equivalent of medieval land barons and that’s the way they think. If it were published in 1224, its sections might have been labeled Pfalzgrafenstein, Sterrenberg and Reichenstein (3 of the Rhine castles).

But if you’re like me, you’re not as interested in the castles as in the journey itself. And, in this way, I think we have built our industry in exactly the wrong way. We should all be more interested in the journey than in ownership of individual destinations along that journey. If you asked a traveller from Rüdesheim to Koblenz in 1205 which they would prefer: paying 40 separate tolls or paying one guide to safely escort them to the destination, I’m pretty sure they would choose the latter. That is what our industry should aspire to.

The reason our industry is channel obsessive is because we had no option previously. In a pre-digital world, all we could do was own or control a channel. But technology is rapidly allowing us an option. Today, it is possible for us to map a customer’s journey and act as a guide along the way. All that is required is a change of perspective.

I believe it’s time to consider it.

Climbing the Slippery Slope of Advertising

First published June 6, 2013 in Mediapost’s Search Insider

Google’s Matt Cutts is warning advertisers not to try passing off “native ads” – or advertorials – as legit content. Apparently, the line between advertising and content continues to get blurrier. The reason is that advertisers are still trying to find an ad that works. And they have been for over 300 years.

The first newspaper ads, which seem to mark the dawn of advertising, appeared very early in the 18th century. Because they looked just like the articles surrounding them, they had to be labeled as an “Advertisement.” Sound familiar?

Now, wouldn’t you think that if you’ve been doing something for over 300 years, you would have figured it out? So why does most advertising still suck? Why are we still trying to find some magic formula that works?

We could attribute it to changing technologies, saying that advertising continues to evolve because the marketplace it operates in is in constant flux, along with the delivery channels it uses and the creative possibilities it offers. That would be what an “advertiser” would tell you.

The answer, I think, is a bit simpler than that. It comes down to a three-century disconnect between the market and the marketers: marketers want advertising, the market doesn’t. At least, we don’t want advertising in the form that it usually takes. Advertisers have been tinkering for all that time to find something the public doesn’t reject outright.

Perhaps, as we often do in the Thursday Search Insider, we can find some clues in the etymology of the word. “Advertisement” comes from the French verb “avertir” – which means to give notice or, more ominously, warning. Ironically, the very word we use to label our industry came from roots that carry a negative connotation. To move it to a more positive light, we could say that the purpose of advertising is to make us aware of something we weren’t previously aware of. That seems rather benign. Helpful, even. And it would be accurate to say that the earliest ads aspired to this purpose.

But somewhere along the line, ads stepped over the line and became something we learned to hate. How did this happen?

Like many of the social issues that plague us today, the roots can be traced back to the Industrial Revolution. Technology enabled scale. Mass production became reality. And, to keep pace, advertising showed us its less benign side.

Prior to mass production, the output of a product was limited to the resources of a producer. Increasing quantity usually had an inverse impact on quality, which relied on the skills of a single craftsperson. One person could only produce so much. The first brands were introduced by these craftspeople to identify their products, differentiating them from inferior competitors.

But with mechanization and the introduction of the assembly line, suddenly scale became virtually unlimited. Uniform products could be produced by the trainload. Profits became tied to scale, and greed became tied to profit. From that point forward, the three moved in lock step.

It was at that point that advertising moved from being a helpful notice to an annoying plea to buy crap we don’t need. And that’s when advertisers had to learn to start pushing the public’s buttons, whether we wanted them pushed or not. Everything started to go off the rails early in the 20th century, and the wreckage really piled up with the introduction of mass communication. Suddenly, unlimited greed had an unlimited capacity to annoy us. Advertising couldn’t stop at informing. It had to start selling.

The twist in all this came right at the end of the “Century of Annoyance.”  In 1998, Goto.com introduced paid search (no, it wasn’t Google). It was an ad with one purpose, to make someone aware of something they weren’t previously aware of. And it was delivered in the perfect context. The market, in the form of a searcher, was looking to become more aware about something by seeking out new information. It gets even better. The searcher could decide whether or not to take the advertiser up on their offer by choosing to click or not.

Of course, with time, we advertisers will figure out a way to screw that up too. The good news, if you’re Matt Cutts, is that it means you’ll have a job for the foreseeable future.

Psychological Priming and the Path to Purchase

First published March 27, 2013 in Mediapost’s Search Insider

In marketing, I suspect we pay too much attention to the destination, and not enough to the journey. We don’t take into account the cumulative effect of the dozens of subconscious cues we encounter on the path to our ultimate purchase. We certainly don’t understand the subtle changes of direction that can result from these cues.

Search is a perfect example of this.

As search marketers, we believe that our goal is to drive a prospect to a landing page. Some of us worry about the conversion rates once a prospect gets to the landing page. But almost none of us think about the frame of mind of prospects once they reach the landing page.

“Frame” is the appropriate metaphor here, because the entire interaction will play out inside this frame. It will impact all the subsequent “downstream” behaviors. The power of priming should not be taken lightly.

Here’s just one example of how priming can wield significant unconscious power over our thoughts and actions. Participants primed by exposure to a stereotypical representation of a “professor” did better on a knowledge test than those primed with a representation of a “supermodel.”

A simple exposure to a word can do the trick. It can frame an entire consumer decision path. So, if many of those paths start with a search engine, consider the influence that a simple search listing may have.

We could be primed by the position of a listing (higher listings = higher quality alternatives).  We could be primed (either negatively or positively) by an organization that dominates the listing real estate. We could be primed by words in the listing. We could be primed by an image. A lot can happen on that seemingly innocuous results page.

Of course, the results page is just one potential “priming” platform. Priming could happen on the landing page, a third-party site or the website itself. Every single touch point, whether we’re consciously interacting with it or not, has the potential to frame, or even sidetrack, our decision process.

If the path to purchase is littered with all these potential landmines (or, to take a more positive approach, “opportunities to persuade”), how do we use this knowledge to become better marketers? This does not fall into the typical purview of the average search marketer.

Personally, I’m a big fan of the qualitative approach (I know — big surprise) in helping to lay down the most persuasive path possible. Actually talking to customers, observing them as they navigate typical online paths in a usability testing session, and creating some robust scenarios to use in your own walk-throughs will yield far better results than quantitative number-crunching. Excel is not particularly good at being empathetic.

Jakob Nielsen has said that online, branding is all about experience, not exposure. As search marketers, it’s our responsibility to ensure that we’re creating the most positive experience possible, as our prospects make their way to the final purchase.

The devil, as always, is in the details — whether we’re paying conscious attention to them or not.

Search: The Boon or Bane of B2B Marketers

First published February 21, 2013 in Mediapost’s Search Insider

Optify recently released its 2012 B2B Marketing Benchmark Report. While reading the executive summary, two apparently conflicting points jumped out at me: “Google is the single most important referring domain to B2B websites, responsible for over 36% of all visits.”

And: “Paid search usage showed a constant decline among B2B marketers in 2012. Over 10% of companies in the report discontinued their paid search campaigns during 2012.”

OK, what gives? How can search be the single most important referrer of traffic, yet fail so miserably as a marketing channel that many B2B marketers have thrown in the towel?

The fact is, B2B search is a dramatically different beast, and many of the unique nuances that come with it are likely to lead to the apparent paradox that the Optify study unearthed. Here are some possible reasons for the anomaly:

B2B search has a really, really long tail. Many B2B marketers are dealing with a huge variety of SKUs, with a broader distribution of traffic across keywords than is typical in most consumer categories. This makes keyword discovery a monumental task. But more than this, the revenue per managed keyword (assuming you can accurately measure revenue — more on this below) is quite often very small. This creates a cost-of-campaign management issue.

When the “long tail” of search was first introduced, many search marketers embraced it as a cost-effective way to manage campaigns. The assumption was that long-tail queries, being quite specific, would yield higher ROI than shorter, more generic queries. And while the traffic (and subsequent revenue) per keyword would be very small, cumulatively a long-tail campaign could deliver impressive returns.

This is true, up to a point. But long-tail campaigns require significant administrative overhead. A query that gets one search a month requires as much set-up as one that gets 1,500 searches a month. Even if you use broad match, you’re constantly tweaking your negative match list to filter out the low-value traffic.
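To illustrate the kind of filtering that creates that overhead, here is a minimal sketch of a negative-match filter; the terms and queries are invented for illustration and don’t come from any real campaign.

```python
# Minimal sketch of negative-match filtering for a broad-match
# long-tail campaign. The negative terms and sample queries are
# invented placeholders, not from any real account.
NEGATIVE_TERMS = {"free", "jobs", "careers", "diy"}

def passes_negative_filter(query: str) -> bool:
    """True if the query contains none of the negative-match terms."""
    words = set(query.lower().split())
    return words.isdisjoint(NEGATIVE_TERMS)

queries = [
    "industrial pump seals supplier",
    "free pump seal diagrams",
    "pump maintenance jobs",
]
kept = [q for q in queries if passes_negative_filter(q)]
print(kept)  # ['industrial pump seals supplier']
```

The code is trivial; the labor isn’t. The negative list itself has to be discovered query by query from search term reports, which is exactly the ongoing tweaking the column describes.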

While a long-tail approach seems like a good idea in theory, in practice most marketers end up culling most of the long-tail keywords from the campaign because the returns just aren’t worth the ongoing effort. This would not bode well for B2B marketers considering search as a channel.
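The economics here can be made concrete with a toy calculation. This is a minimal sketch with invented numbers — the revenue per search and the per-keyword management cost are assumptions, not figures from the Optify report — but it shows why a fixed admin cost per keyword swamps the tiny revenue of a one-search-a-month term:

```python
# Hypothetical illustration: why per-keyword management cost erodes
# long-tail returns. All dollar figures are made up for the sketch.

def monthly_net(searches, revenue_per_search, mgmt_cost_per_keyword):
    """Net monthly return for a single managed keyword."""
    return searches * revenue_per_search - mgmt_cost_per_keyword

REV_PER_SEARCH = 0.40   # assumed average revenue per search visit
MGMT_COST = 2.00        # assumed monthly admin cost per keyword

head = monthly_net(1500, REV_PER_SEARCH, MGMT_COST)  # one head term: ~$598/mo
tail = monthly_net(1, REV_PER_SEARCH, MGMT_COST)     # one tail term: ~-$1.60/mo

# A 1,000-keyword long tail at one search each loses money overall,
# even though the raw traffic (1,000 visits) looks respectable.
tail_total = 1000 * tail
```

Under these assumptions the cumulative long tail is net negative, which is exactly why it tends to get culled despite the "long tail" theory.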

A B2B search vocabulary is difficult to define. Compounding the long-tail problem is the issue that many B2B vendors sell complex products or services. With complexity comes ambiguity in language. It’s hard to pin many B2B products down to an obvious search phrase you can be sure searchers would use. Often, many B2B prospects only know they have a problem, not what the possible solution might be called. This makes it very difficult to create an effective search campaign. There is a lot of trial and error involved.

And, because prospects aren’t searching for a familiar product from vendors they know, it becomes even more difficult to create a compelling search ad that attracts its fair share of searches and subsequent conversions. In a marketing channel that depends on words to interpret buying intent, ill-defined vocabularies can make a marketer’s job exponentially more difficult.

B2B ROI has to be measured differently. Finally, let’s say you get past the first two obstacles. Ultimately, search campaigns live and die on their effectiveness. And this requires a comprehensive approach to performance measurement. As any B2B marketer will tell you, this is much easier said than done. B2B buying paths tend to be more circuitous than consumer ones, winding their way through several stops in an extended value chain. This makes end-to-end tracking extremely difficult. And if value isn’t easily measured, search campaigns can’t prove their value. This makes them likely candidates for an unceremonious axing.
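To see why the measurement problem bites, here is a toy attribution sketch. Everything in it — the channel names, touch sequences, and deal values — is invented for illustration; the point is that the same two closed deals credit search with everything, nothing, or something in between, depending solely on the attribution model chosen:

```python
# Hypothetical sketch: the credit search "earns" on a long B2B path
# depends entirely on the attribution model. Data below is invented.
from collections import defaultdict

deals = [
    (["paid_search", "webinar", "sales_call"], 50_000),
    (["organic_search", "whitepaper", "email", "sales_call"], 80_000),
]

def attribute(deals, model):
    """Assign deal value to channels under a given attribution model."""
    credit = defaultdict(float)
    for touches, value in deals:
        if model == "last_touch":
            credit[touches[-1]] += value          # all credit to the closer
        elif model == "first_touch":
            credit[touches[0]] += value           # all credit to the opener
        elif model == "linear":
            for t in touches:
                credit[t] += value / len(touches) # spread evenly
    return dict(credit)

# Last-touch gives search zero credit (sales calls close both deals);
# first-touch hands search the full $130K.
last = attribute(deals, "last_touch")
first = attribute(deals, "first_touch")
```

If a marketer happens to measure with a last-touch model — still the default in most analytics packages — search looks worthless and gets the axe, which is one plausible reading of the Optify paradox.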

So if That’s the Bad News, What’s the Good?

If the deck is stacked so fully against search in the B2B world, why was Google the primary referrer of traffic in the Optify study? Well, search for B2B can be tremendously effective; it’s just hard to predict. This makes B2B a prime candidate for a broad-based SEO effort. Content creation lays down a rich bed of long-tail fodder that search spiders love. Organic best practices combined with a dedication to thought leadership can create content that intercepts those prospects looking for a solution to their identified pains, even before they know what they’re looking for.

In the case of B2B, especially in complex, nascent markets, I generally recommend leading with SEO and content development. Then, monitor search traffic and let that help inform your subsequent PPC efforts. It may turn out that paid search isn’t a major play for your market. The B2B Beast can be tamed by search; it just takes a different approach.

Weighing Positive and Negative Impacts on Users

First published January 31, 2013 in Mediapost’s Search Insider

We humans hate loss. In fact, we seem to weigh losing something about twice as heavily as gaining the same thing. For example, imagine I gave you a coffee cup and then offered to buy it back from you. That’s scenario 1. In scenario 2, I ask you to buy the same coffee cup from me. The price you assign to the coffee cup in the first scenario will be, on average, about twice as much as in the second. And yes, there’s research to back this up.

When it comes to winning and losing, it’s been proven that “losses loom larger than gains.” It’s just one of the weird glitches in our logical circuitry. We tend to be hardwired to look at glasses as half empty.

Recently, I was reviewing an academic study done in 2008, with this scintillating title: “Procedural Priming and Consumer Judgment: Effects on the Impact of Positively and Negatively Valenced Information” by Shen and Wyer. If you can get beyond the rather dry title, you find a treasure trove of tidbits to consider when crafting your online user experience.

For example, when we evaluate a product for potential purchase, we may run across both positive and negative information. The order in which we encounter this information can have a dramatic impact on what we do downstream from that interaction. To use psychological terms, it “primes” our mental framework. And, because we tend to focus on negatives, less favorable information has a greater impact on our decision than positive information.

But it’s not just that we pay more attention to bad news than good news. It’s that bad news can hijack the entire consideration process. According to Shen and Wyer, if we run into negative information, it can change our information-seeking strategies, leading us down further negatively biased channels to confirm the initial information we saw. Bad news tends to lead to more bad news.

Also, we can get “bad news” hangovers. If we compare negatives in one decision process, that negative mental framework can carry over to an entirely different decision that has nothing to do with the first, giving us a heightened awareness of negative information in the new situation.

Here’s another interesting finding. If we’re rushed for time, this preoccupation with the negatives will dramatically affect the decision we make. But, if we have all the time in the world, the impact is relatively insignificant. Given time, we seem to cancel out our inherently negative biases.

All this news is not bad for marketers, however. It seems that simply getting users to state their preference for one feature over another, even though they’re not actively considering purchase at that time, leads to a much greater likelihood of purchase in the future. It seems that if you can get users to compare alternatives — and, more importantly, to commit to saying they prefer one alternative over another — they clear the mental hurdle of deciding “will I buy?” and instead start considering “what will I buy?”

Finally, there is also a recency effect, especially if prospects had ample time to consider all their alternatives. Shen and Wyer found that the last information considered seemed to have the greatest effect on the buyer. So, if information was both positive and negative, it was good to get the least favorable information in front of the prospect early, and then move to the most favorable information. Again, this is true only if the user had plenty of time to weigh the options. If they were rushed, the opposite was true.

All in all, these are all intriguing concepts to consider when crafting an ideal online user experience. They also underscore the importance of first impressions, especially negative ones.

The Tricky Intersection of Social and Search

First published September 20, 2012 in Mediapost’s Search Insider

People don’t trust search ads. At least, 64% of people don’t trust search ads.

Apparently, search is not unique. According to the same research, nobody trusts ads of any kind. That’s not really surprising, given that it’s advertising. Its entire purpose is to make us suddenly want crap we don’t need. Small wonder we don’t trust it.

But you know what we do trust? The opinions of our friends.

Nothing I’ve said up to this point should come as a shock to anyone reading this column. The only thing I found mildly surprising here was that we had such a low level of trust in search ads. Typically, search advertising is better aligned with intent and less hyperbolic in nature. But, apparently, we marketers have bastardized even the purity of search to the point where it’s less trusted than TV ads (gasp!).

So, to recap: we don’t trust ads; we do trust friends. This seems to present a simple solution: combine the two so that pesky advertising can bask in the halo effect of social endorsement. You’ve been hearing about this for many years now, including in several Search Insider columns from my fellow pundits and me.

So, given that we’ve been testing the waters for some time, why haven’t we got this advertising/social thing locked down yet? Why are Facebook stockholders wailing over their deflated portfolios? Why are we still stumbling out of the starting gate in our efforts to marry the magic of social and search? This shouldn’t be rocket science.

In fact, it’s more complex than rocket science. It’s psychology, it’s sociology, and at least a handful of other “ologies.” When we talk about combining search and social — or, for that matter, any type of advertising and social — we’re talking about trying to understand what makes humans tick.

If we talk about the simplest integration of the two, where social acts as a type of reinforcing influence that is subordinate to the primary act of searching, it’s not hard to follow the train of thought. We search for something, and in the results, we see some type of social badge that indicates how our social connections feel about the options presented to us. In this case, intent is already engaged. Social just serves to grease the decision wheels, helping us differentiate between our options. This type of integration can easily be seen on Google (through its Google+ integration) as well as on vertical engines such as TripAdvisor or Yelp.

But that type of integration doesn’t really fire the imagination of marketers and get their market acquisition juices flowing. It’s just hedging your bets on a market that’s already pretty easy to identify and capture. It does nothing to open up new markets. And it’s there where things get muddy.

The problem is this niggling question of intent. Somehow, something needs to activate intent in the mind of the prospect. It’s here where we truly need to be persuaded, moving our mental mechanisms from disengaged to engaged.

To do this, you need to reverse the order of importance between the two channels. Social recommendation needs to be in the driver’s seat, hopefully engaging and moving prospects to the point where they initiate a search. And that’s a much bigger hurdle to get over. Once the order is reversed, the odds of success plummet precipitously.

Here are just a few of the hurdles that have to be cleared:

Trust – Whichever channel is chosen to deliver the social recommendation, it has to be received with trust. This factor can be affected by how the recommendation is presented, the social proof that accompanies it, the aesthetic value of the interface, and the recipient’s attitude towards the channel itself. There is no lack of nuanced detail to consider here.

Alignment of Interest – When the recommendation is delivered, it must be of interest to the recipient. This relies on an accurate assessment of context and intent. Whatever the targeting channel, there has to be a pretty good chance of delivering the right message at the right time.

Social Modality – So, let’s assume you’ve figured out how to get the first two things right – you are using a trusted channel and you’ve done a good job of targeting. You’re not home free yet. Here’s the thing – we don’t act the same way all the time. We adapt our behaviors to fit the social circumstances we are currently in. There are predetermined modes of behavior that we conform to. It’s why we act one way with our coworkers and another way with our children. It’s why it’s okay to tip a waiter in a restaurant, but not okay to tip your mother-in-law after Sunday dinner. This modality is carried over from the real world to the virtual world of social networks. And it’s very difficult to determine what mode a prospect may be in. But it can make all the difference in the success of a socially targeted advertising message.

The Fight for Attention – This is the big one. Even if you do everything else right, your odds for successfully capturing the attention of a prospect and holding it for long enough to generate actual consideration of your product are not nearly as good as you might hope. You’d probably do better at a Vegas craps table. It all depends on what the incumbent’s intent is. What brought her to the online destination where you managed to intercept her? How critical is it that she finish what she’s currently doing? How engaged is she in the task at hand?

With the first example of search/social integration (search first, social second), the odds for success are pretty high, because intent has already been established. You’re just using social endorsement to expedite a process that’s already in motion.

But in the second example (social first, search second), we’re talking about an entirely different ball game. You have to derail the incumbent intent and replace it with a new one. Think of it as the difference between pushing a car downhill that’s already started to roll, and pushing the same car from a standing start up the hill.

No wonder we’re having some difficulty getting things rolling.

Climbing the Slippery Slopes of Mount White Hat

First published August 30, 2012 in Mediapost’s Search Insider

On Monday of this week, fellow Search Insider Ryan DeShazer bravely threw his hat back in the ring regarding this question: Is Google better or worse off because of SEO?

DeShazer confessed to being vilified after a previous column indicated that Google owed us something. I admit I have a column penned but never submitted that Ryan could have added to the “vilify” side of that particular tally. But in his Monday column, Ryan touches on a very relevant point: “What is the thin line between White Hat and Black Hat SEO?” For as long as I’ve been in this industry (which is pushing 17 years now) I’ve heard that same debate. I’ve been at conference sessions where white hats and black hats went head to head on the question. It’s one of those discussions that most sane people in the world couldn’t care less about, but we in the search biz can’t seem to let go.

Ryan stirs the pot again by indicating that Google may be working on an SEO “Penalty Box”: a temporary holding pen for sites flagged as “rank modifying spammers,” where results will fluctuate more than in the standard index. The high degree of flux should provoke further modifications by the “spammers,” which will help Google identify and, theoretically, penalize them. DeShazer’s concern is the use of the word “spammers” in the wording of the patent application, which seems to include any “webmasters who attempt to modify their search engine ranking.”

I personally think it’s dangerous to try to apply wording used in a patent application (the source for this speculation) arbitrarily against what will become a business practice. Wording in a patent is intended to help convey the concept of the intellectual property as quickly and concisely as possible to a patent review bureaucrat. The wording deals in concepts that are (ironically) pretty black and white. It has little to no relationship to how that IP will be used in the real world, which tends to be colored in various shades of gray. But let’s put that aside for a moment.

Some years ago, Alan Perkins, an SEO I would call vociferously “white hat,” came up with what I believe is the quintessential distinction: black hats optimize for a search engine; white hats optimize for humans. When I make site recommendations, they are meant to help people find better content faster and act on it. I believe, along with Perkins, that this approach will also do good things for your search visibility.

But that also runs the danger of being an oversimplification. The picture is muddied by clients who measure our success as SEO agencies by their position relative to their competitors on a keyword-by-keyword level. This is the bed the SEO industry has built for itself, and now we’re forced to sleep in it. I’m as guilty as the next guy of cranking out competitive ranking reports, which have conditioned this behavior over the past decade and a half.

The big problem, and one continually pointed out by vocal grey/black hats, is that you can’t keep up with competitors using methods more black than white by sticking with white-hat tactics alone. The fact is, black hat works, for a while. And if I’m the snow-white SEO practitioner whose clients are repeatedly trounced by those using a black hat consultant, I’d better expect some client churn. Ethics and profitability don’t always go together in this industry.

To be honest, over the past five years, I’ve largely stopped worrying about the whole white hat/black hat thing. We’ve lost some clients because we weren’t aggressive enough, but the ones who stayed were largely untouched by the string of recent Google updates targeting spammers. Most benefited from the house cleaning of the index. I’ve also spent the last five years focused a lot more on people and good experiences than on algorithms and link juice, or whatever the SEO flavor du jour is.

I think Alan Perkins nailed it way back in 2007. Optimize for humans. Aim for the long haul. And try to be ethical. Follow those principles, and I find it hard to imagine that Google would ever tag you with the label of “spammer.”