Psychological Priming and the Path to Purchase

First published March 27, 2013 in Mediapost’s Search Insider

In marketing, I suspect we pay too much attention to the destination, and not enough to the journey. We don’t take into account the cumulative effect of the dozens of subconscious cues we encounter on the path to our ultimate purchase. We certainly don’t understand the subtle changes of direction that can result from these cues.

Search is a perfect example of this.

As search marketers, we believe that our goal is to drive a prospect to a landing page. Some of us worry about the conversion rates once a prospect gets to the landing page. But almost none of us think about the frame of mind of prospects once they reach the landing page.

“Frame” is the appropriate metaphor here, because the entire interaction will play out inside this frame. It will impact all the subsequent “downstream” behaviors. The power of priming should not be taken lightly.

Here’s just one example of how priming can wield significant unconscious power over our thoughts and actions. Participants primed by exposure to a stereotypical representation of a “professor” did better on a knowledge test than those primed with a representation of a “supermodel.”

A simple exposure to a word can do the trick. It can frame an entire consumer decision path. So, if many of those paths start with a search engine, consider the influence that a simple search listing may have.

We could be primed by the position of a listing (higher listings = higher quality alternatives).  We could be primed (either negatively or positively) by an organization that dominates the listing real estate. We could be primed by words in the listing. We could be primed by an image. A lot can happen on that seemingly innocuous results page.

Of course, the results page is just one potential “priming” platform. Priming could happen on the landing page, a third-party site or the website itself. Every single touch point, whether we’re consciously interacting with it or not, has the potential to frame, or even sidetrack, our decision process.

If the path to purchase is littered with all these potential landmines (or, to take a more positive approach, “opportunities to persuade”), how do we use this knowledge to become better marketers? This does not fall into the typical purview of the average search marketer.

Personally, I’m a big fan of the qualitative approach (I know — big surprise) in helping to lay down the most persuasive path possible. Actually talking to customers, observing them as they navigate typical online paths in a usability testing session, and creating some robust scenarios to use in your own walk-throughs will yield far better results than quantitative number-crunching. Excel is not particularly good at being empathetic.

Jakob Nielsen has said that online, branding is all about experience, not exposure. As search marketers, it’s our responsibility to ensure that we’re creating the most positive experience possible, as our prospects make their way to the final purchase.

The devil, as always, is in the details — whether we’re paying conscious attention to them or not.

Search: The Boon or Bane of B2B Marketers

First published February 21, 2013 in Mediapost’s Search Insider

Optify recently released its 2012 B2B Marketing Benchmark Report. While I was reading the executive summary, two apparently conflicting points jumped out at me: “Google is the single most important referring domain to B2B websites, responsible for over 36% of all visits.”

And: “Paid search usage showed a constant decline among B2B marketers in 2012. Over 10% of companies in the report discontinued their paid search campaigns during 2012.”

OK, what gives? How can search be the single most important referrer of traffic, yet fail so miserably as a marketing channel that many B2B marketers have thrown in the towel?

The fact is, B2B search is a dramatically different beast, and many of the unique nuances that come with it are likely to lead to the apparent paradox that the Optify study unearthed. Here are some possible reasons for the anomaly:

B2B search has a really, really long tail. Many B2B marketers are dealing with a huge variety of SKUs, with a broader distribution of traffic across keywords than is typical in most consumer categories. This makes keyword discovery a monumental task. But more than this, the revenue per managed keyword (assuming you can accurately measure revenue — more on this below) is quite often very small. This creates a cost-of-campaign management issue.

When the “long tail” of search was first introduced, many search marketers embraced it as a cost-effective way to manage campaigns. The assumption was that long-tail queries, being quite specific, would yield higher ROI than shorter, more generic queries. And while the traffic (and subsequent revenue) per keyword would be very small, cumulatively a long-tail campaign could deliver impressive returns.

This is true, up to a point. But long-tail campaigns require significant administrative overhead. A query that gets one search a month requires as much set-up as one that gets 1,500 searches a month. Even if you use broad match, you’re constantly tweaking your negative match list to filter out the low-value traffic.

While a long-tail approach seems like a good idea in theory, in practice most marketers end up culling most of the long-tail keywords from the campaign because the returns just aren’t worth the ongoing effort.  This would not bode well for B2B marketers considering search as a channel.
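The economics above reduce to simple arithmetic: each managed keyword carries a roughly fixed administrative overhead, while a tail keyword may earn only pennies. Here is a minimal sketch of that math; the `keyword_verdict` helper and the $2.50 monthly overhead figure are made-up assumptions for illustration, not industry benchmarks.

```python
# Assumed monthly admin overhead per managed keyword (illustrative only).
MGMT_COST_PER_KEYWORD = 2.50

def keyword_verdict(monthly_revenue, mgmt_cost=MGMT_COST_PER_KEYWORD):
    """Does a keyword's revenue cover the overhead of managing it?"""
    margin = monthly_revenue - mgmt_cost
    return margin, ("keep" if margin > 0 else "cull")

# A head term easily clears the bar; a one-search-a-month tail term does not.
print(keyword_verdict(400.00))  # hypothetical head term
print(keyword_verdict(0.90))    # hypothetical long-tail term
```

Multiply the second case across tens of thousands of tail keywords and the culling decision most marketers make becomes obvious.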

A B2B search vocabulary is difficult to define. Compounding the long-tail problem is the issue that many B2B vendors sell complex products or services. With complexity comes ambiguity in language. It’s hard to pin many B2B products down to an obvious search phrase you can be sure searchers would use. Often, many B2B prospects only know they have a problem, not what the possible solution might be called. This makes it very difficult to create an effective search campaign. There is a lot of trial and error involved.

And, because prospects aren’t searching for a familiar product from vendors they know, it becomes even more difficult to create a compelling search ad that attracts its fair share of searches and subsequent conversions. In a marketing channel that depends on words to interpret buying intent, ill-defined vocabularies can make a marketer’s job exponentially more difficult.

B2B ROI has to be measured differently. Finally, let’s say you get past the first two obstacles. Ultimately, search campaigns live and die on their effectiveness. And this requires a comprehensive approach to performance measurement. As any B2B marketer will tell you, this is much easier said than done. B2B purchase paths tend to be more circuitous than consumer ones, winding their way through several stops in an extended value chain. This makes end-to-end tracking extremely difficult. And if value isn’t easily measured, search campaigns can’t prove their worth. This makes them likely candidates for an unceremonious axing.

So if That’s the Bad News, What’s the Good?

If the deck is stacked so fully against search in the B2B world, why was Google the primary referrer of traffic in the Optify study? Well, search for B2B can be tremendously effective; it’s just hard to predict. This makes B2B a prime candidate for a broad-based SEO effort. Content creation lays down a rich bed of long-tail fodder that search spiders love. Organic best practices, combined with a dedication to thought leadership, can create content that intercepts those prospects looking for a solution to their identified pains, even before they know what they’re looking for.

In the case of B2B, especially in complex, nascent markets, I generally recommend leading with SEO and content development. Then, monitor search traffic and let that help inform your subsequent PPC efforts. It may turn out that paid search isn’t a major play for your market. The B2B Beast can be tamed by search; it just takes a different approach.

Weighing Positive and Negative Impacts on Users

First published January 31, 2013 in Mediapost’s Search Insider

We humans hate loss. In fact, we seem to value losing something about twice as much as gaining something. For example, imagine I gave you a coffee cup and then offered to buy it back from you. That’s scenario 1. In scenario 2, I ask you to buy the same coffee cup from me. The price you assign to the coffee cup in the first scenario will be, on average, about twice as much as in the second. And yes, there’s research to back this up.

When it comes to winning and losing, it’s been proven that “losses loom larger than gains.” It’s just one of the weird glitches in our logical circuitry. We tend to be hardwired to look at glasses as half empty.

Recently, I was reviewing an academic study done in 2008, with this scintillating title: “Procedural Priming and Consumer Judgment: Effects on the Impact of Positively and Negatively Valenced Information” by Shen and Wyer. If you can get beyond the rather dry title, you find a treasure trove of tidbits to consider when crafting your online user experience.

For example, when we evaluate a product for potential purchase, we may run across both positive and negative information. The order we run into this information can have a dramatic impact on what we do downstream from that interaction. To use psychological terms, it “primes” our mental framework.  And, because we tend to focus on negatives, less favorable information has a greater impact on our decision than positive information.

But it’s not just that we pay more attention to bad news than good news. It’s that bad news can hijack the entire consideration process. According to Shen and Wyer, if we run into negative information, it can change our information-seeking strategies, leading us down further negatively biased channels to confirm the initial information we saw. Bad news tends to lead to more bad news.

Also, we can get “bad news” hangovers. If we compare negatives in one decision process, that negative mental framework can carry over to an entirely different decision that has nothing to do with the first, giving us a heightened awareness of negative information in the new situation.

Here’s another interesting finding. If we’re rushed for time, this preoccupation with the negatives will dramatically affect the decision we make. But, if we have all the time in the world, the impact is relatively insignificant. Given time, we seem to cancel out our inherently negative biases.

All this news is not bad for marketers, however. It seems that simply getting users to state their preference for one feature over another, even though they’re not actively considering purchase at that time, leads to a much greater likelihood of purchase in the future. If you can get users to compare alternatives — and, more importantly, to commit to saying they prefer one alternative over another — they clear the mental hurdle of deciding “will I buy?” and instead start considering “what will I buy?”

Finally, there is also a recency effect, especially if prospects had ample time to consider all their alternatives. Shen and Wyer found that the last information considered seemed to have the greatest effect on the buyer.  So, if information was both positive and negative, it was good to get the least favorable information in front of the prospect early, and then move to the most favorable information. Again, this is true only if the user had plenty of time to weigh the options. If they were rushed, the opposite was true.

All in all, these are all intriguing concepts to consider when crafting an ideal online user experience. They also underscore the importance of first impressions, especially negative ones.

Evolving on the Fly: Growth Hackers, Agile Marketers, Bayesian Strategists and CMTs

First published January 10, 2013 in Mediapost’s Search Insider

If you are a Darwinist, one of the questions you may have asked yourself is, on what timescale does evolution play out? Is it a long, gradual development of new and differentiated species? Or, as Stephen Jay Gould and Niles Eldredge believe, does evolution happen in short spurts, separated by long periods of stasis (their theory is called Punctuated Equilibrium)?

The next question you might ask is, what does this have to do with marketing?

I venture to say: everything. Bear with me.

If you believe, as I believe, that evolution happens in spurts, then it’s important to understand what causes those spurts. Among many contentious alternatives, one that seems to be more commonly accepted is a sudden dramatic change in what evolutionists call the adaptive landscape.  This is the real world that species must adapt to in order to survive. “Flat” landscapes create an even playing field for all species to survive, resulting in relative stasis. “Rugged” landscapes significantly favor some species over others, accelerating evolution dramatically. “Rugged” landscapes generally emerge after some big event, like a catastrophe.

I propose that marketing is currently a very rugged adaptive landscape. Some marketers are going to thrive, and others are going to disappear from the face of the earth. We’re already seeing exciting new species emerge.

Growth Hackers

If you haven’t heard about them, Growth Hackers are “the next big thing,” at least according to Fast Company. The article references a post by Andrew Chen, who explains, “Growth hackers are a hybrid of marketer and coder, one who looks at the traditional question of ‘How do I get customers for my product?’ and answers with A/B tests, landing pages, viral factor, email deliverability, and Open Graph.” Think of growth hackers as tech-savvy marketing guerrillas. They move fast, exploit technical opportunities, and track and test everything.

Agile Marketers

According to the Agile Marketing Manifesto, this offshoot of Agile Development enshrines customer focus, validated learning, iterative approaches, flexibility and learning from our mistakes. In the words of my friend Mike Moran, it’s learning how to “Do It Wrong Quickly.” As opposed to “growth hacker,” which is more of a job description, Agile Marketing is a corporate philosophy that encourages (demands) rapid evolution. It embraces the realities of a “rugged” adaptive landscape.

Bayesian Strategists

This was top of mind after my last column, so I added this in as my contribution. As stated last week, I envision strategic thinking becoming less of a “shot in the dark” and more of a “testable hypothesis.” I would never want to see “Big Thinking” give way to “Big Data,” but I believe the two can co-exist, and co-evolve, quite nicely.

Chief Marketing Technologist

Finally, under whose watch does all of this fall? If you believe Scott Brinker (which I invariably do — he’s from Boston and he’s “wicked smaaht”) it falls quite nicely into the job description of the Chief Marketing Technologist. I’ll let him explain in his own words: “A chief marketing technologist (CMT) is the person responsible for leading an organization’s marketing technology.”

A CMT sits astride the rapidly colliding worlds of marketing and technology and makes sure an organization does not fall prey to the all-too-common trap of having these overseen by two completely separate (and often outrightly hostile) departments.

A CMT understands the following realities:

Everything is Marketing

Everything is Changing

Everyone Must Be Agile

In the words of Peter Drucker, “Business has only two basic functions: marketing and innovation.” In today’s world, those two functions are inextricably linked. As a marketer, you have two choices: adapt and survive, or stand still and die. The ones who do the first the best will emerge at the top of the marketing food chain.

The Evolution of Strategy

First published January 3, 2013 in Mediapost’s Search Insider

Last week I asked the question, “Will Big Data Replace Strategic Thinking?” Many of you answered, with the responses splitting approximately two to one on the side of thinking. But, said fellow Search Insider Ryan Deshazer, “Not so fast! Go beyond the rebuttal!”

I agree with my friend Ryan. This is not a simple either/or answer. We (or at least 66% of us) may agree that models and datasets, no matter how good they are, can’t replace thinking. But we can’t dismiss their importance, either. Strategy will change, and data will be a massive driver in that change.

Both the Harvard Business Review and the New York Times have recent posts on the subject. In HBR, Justin Fox tells of a presentation by Vivek Ranadive, who said, “I believe that math is trumping science. What I mean by that is you don’t really have to know why, you just have to know that if a and b happen, c will happen.”

He further speculates that U.S. monetary policy might do better being guided by an algorithm rather than bankers: “The fact is, you can look at information in real time, and you can make minute adjustments, and you can build a closed-loop system, where you continuously change and adjust, and you make no mistakes, because you’re picking up signals all the time, and you can adjust.”

The Times’ Steve Lohr also talks about the recent enthusiasm for a quantitative approach to management, evangelized by Erik Brynjolfsson, Director of the MIT Center for Digital Business, who says Big Data will “replace ideas, paradigms, organizations and ways of thinking about the world.”

However, Lohr and Fox (who wrote the excellent book, “The Myth of the Rational Market”) caution about the oversimplifications inherent in modeling. Take, for example, some of the potentially flawed assumptions in Ranadive’s version of an algorithmically driven monetary policy:

– Something as complex as monetary policy can be contained in a closed-loop system

– The past can reliably predict the future

– If it doesn’t, and things head into uncharted territory, you’ll be able to “tweak” things into place as new information becomes available

Fox uses the analogy of a landing page A/B (or multivariate) test as an example of the new quantitative approach to the world. In theory, page design could be left to a totally automated and testable process, where real-time feedback from users eventually decides the optimal layout. It sounds good in theory, but here’s the problem with this approach to marketing: You can’t test what you don’t think of. The efficacy of testing depends on the variables you choose to test. And that requires some thinking. Without a solid hypothesis based on a strategic view of the situation, you can quickly go down a rabbit hole of optimizing for the wrong things.
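To make the point concrete, here is a minimal sketch of the kind of comparison such a test automates, using a simple two-proportion z-score. The `ab_z_score` helper and the traffic figures are illustrative assumptions, not any testing platform’s actual method.

```python
import math

def ab_z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-score comparing variant B's conversion rate to A's."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical traffic: B converts 13% vs. A's 10% over 1,000 visits each.
z = ab_z_score(100, 1000, 130, 1000)
print(f"z = {z:.2f}")  # roughly 2.1, "significant" at the usual 1.96 cutoff
```

Note what the function cannot do: it only ranks the variants someone thought to build. The untested idea never shows up in the data.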

For example, most heavily tested landing pages I’ve seen all reach the same eventual destination: a page optimized for one definition of a conversion. Typically this would be the placement of an order or the submission of a form. There will be reams of data showing why this is the optimal variation. But what about all the prospects who hit that page for whom the one offered conversion wasn’t the right choice? How do they get captured in the data? Did anyone even think to include them in the things to test for?

Fox offers a hybrid view of strategic management that more closely aligns with where I see this all going — call it Bayesian Strategic management. Traditional qualitative strategic thinking is required to set the hypothetical view of possible outcomes, but then we apply a quantitative rigor to measure, test and adjust based on the data we collect. This treads the line between the polarities of responses gathered by last week’s column – it puts the “strategic” horse before the “big data” cart. More importantly, it holds our strategic view accountable to the data. A strategy becomes a hypothesis to be tested.
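That “strategy as a testable hypothesis” loop can be sketched with a beta-binomial update, a textbook Bayesian device: the strategic view supplies the prior, the data revises it. The function names and all the numbers here are illustrative assumptions, not a prescribed process.

```python
def update_belief(alpha, beta, conversions, trials):
    """Revise a Beta(alpha, beta) belief about a conversion rate:
    the posterior is Beta(alpha + hits, beta + misses)."""
    return alpha + conversions, beta + (trials - conversions)

def expected_rate(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Strategic hypothesis: "this segment converts around 10%" -> Beta(2, 18).
prior = (2, 18)
# The data comes in: 30 conversions in 400 trials. The belief is revised.
posterior = update_belief(*prior, conversions=30, trials=400)
print(posterior, round(expected_rate(*posterior), 3))  # expected rate ≈ 0.076
```

The strategy sets the starting point; the data holds it accountable, nudging the expected rate down from the hypothesized 10% toward what was actually observed.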

One final thought. Whether we’re talking about Ranadive’s utopian (or dystopian?) vision of a data driven world or any of the other Big Data evangelists, there seems to be one assumption that I believe is fundamentally flawed, or at least, overly optimistic: that human behaviors can be adequately contained in a predictable, rational, controlled closed loop system. When it comes to understanding human behavior, the capabilities of our own brain far outstrip any algorithmically driven model ever created — yet we still get it wrong all the time.

If Big Data could really reliably predict human behaviors, do you think we’d be in the financial situation we’re in now?

Will Big Data Replace Strategy?

First published December 27, 2012 in Mediapost’s Search Insider

Anyone who knows me knows I love strategy. I have railed incessantly about our overreliance on tactical execution and our overlooking of the strategy that should guide said execution. So imagine my discomfort this past week when, in the midst of following up on the McLuhan theme of my last column, I ran into a tidbit from Ray Rivera, via Forbes, that speculated that strategic management might be becoming obsolete.

Here’s an excerpt: “As amounts of data approaching entire populations become available, models become less predictive and more descriptive. As inference becomes obsolete, management methods that rely on it will likely be affected. A likely casualty is strategic management, which attempts to map out the best course of action while factoring in constraints. Classic business strategy (e.g., the five forces) is especially vulnerable to losing the relevance it accumulated over several decades.”

The crux of this is the obsolescence of inference. Humans have historically needed to infer to compensate for imperfect information. We couldn’t know everything with certainty, so we had to draw conclusions from the information we did have. The bigger the gap, the greater the need for inference. And, like most things that define us, the ability to infer was sprinkled through our population in a bell-curved standard distribution. We all have the ability to fill in the gaps through inference, but some of us are much better at it than others.

The author of this post speculates that as we get better and more complete information, it will become less important to fill in the gaps to set a path for the future — and more important to act quickly on what we know, correcting our course in real time: “With access to comprehensive data sets and an ability to leave no stone unturned, execution becomes the most troublesome business uncertainty. Successful adaptation to changing conditions will drive competitive advantage more than superior planning.”

Now, just in case you’re wondering, I don’t agree with the premise, but there is considerable merit to Rivera’s hypothesis, so let’s consider it using a fairly accessible analogy: the driving of a car. If we’re driving to a destination where we’ve never been before, and we don’t know what we’ll encounter en route, we need a strategy. We need to know the general direction, we need a high-level understanding of the available routes, we need to know what an acceptable period of time would be to reach our destination, and we need some basic strategic guidelines to deal with the unexpected – for example, if a primary route is clogged with traffic, we will find an alternative route using secondary roads. These are all tools we use to help us infer what the best way to get from point A to B might be.

But what if we have a GPS that has access to real-time traffic information and can automatically plot the best available route? Given the analogous scenario, this is as close to perfect information as we’re likely to get. We no longer need a strategy. All we need to do is follow the provided directions and drive. No inference is required. The gaps are filled by the data we have available to us.

So far, so good. But here is the primary reason why I believe strategic thinking is in no danger of expiring anytime soon. If strategy were only about inference, I might agree with Rivera’s take (by the way, he’s from SAP, so he may have a vested interest in promoting the wonders of Big Data).

However, I believe that interpretation and synthesis are much more important outcomes of strategy. The drawback of data is that it needs to be put into a context to make it useful. Unlike traffic jams and roadways, which tend to be pretty concrete concepts (stop and go, left or right — and yes, I used the pun intentionally), business is a much more abstract beast. One can measure performance indicators ad nauseam, but there should be some framework to give them meaning. We can’t just count trees (or, in the era of Big Data, the number of leaves per limb per tree). We need to recognize a forest when we see one.

Interpretation is one advantage, but synthesis is the true gold that strategic thinking yields. Data tends to live in silos. Metrics tend to be analyzed in homogenous segments (for example, Web stats, productivity yields, efficiency KPIs). True strategy can bring disparate threads together and create opportunities where none existed before. Here, strategy is not about filling the gaps in the information you have, it’s about using that information in new ways to create something remarkable.

I disagree most vehemently with Rivera when he says: “While not disappearing altogether, strategy is likely to combine with execution to become a single business function.”

I’ve been working in this business for going on three decades now. In all that time, I have rarely seen strategy and execution combine successfully in a single function (or, for that matter, a single person). They are two totally different ways of thinking, relying on two different skill sets. They are both required, but I don’t believe they can be combined.

Strategy is that intimately and essentially human place where business is not simply science, but becomes art. It is driven by intuition and vision. And I, for one, am not looking forward to the day where it becomes obsolescent.

The Tricky Intersection of Social and Search

First published September 20, 2012 in Mediapost’s Search Insider

People don’t trust search ads. At least, 64% of people don’t trust search ads.

Apparently, search is not unique. According to the same research, nobody trusts ads of any kind. That’s not really surprising, given that it’s advertising. Its entire purpose is to make us suddenly want crap we don’t need. Small wonder we don’t trust it.

But you know what we do trust? The opinions of our friends.

Nothing I’ve said up to this point should come as a shock to anyone reading this column. The only thing I found mildly surprising here was that we had such a low level of trust in search ads. Typically, search advertising is better aligned with intent and less hyperbolic in nature. But, apparently, we marketers have bastardized even the purity of search to the point where it’s less trusted than TV ads (gasp!).

So, to recap, we don’t trust ads, we do trust friends. This seems to present a simple solution: combine the two so that pesky advertising can bask in the halo effect of social endorsement.  You’ve been hearing about this for many years now, including several Search Insider columns from my fellow pundits and myself.

So, given that we’ve been testing the waters for some time, why haven’t we got this advertising/social thing locked down yet? Why are Facebook stockholders wailing over their deflated portfolios? Why are we still stumbling out of the starting gate in our efforts to marry the magic of social and search? This shouldn’t be rocket science.

In fact, it’s more complex than rocket science. It’s psychology, sociology, and at least a handful of other “ologies.” When we talk about combining search and social — or, for that matter, any type of advertising and social — we’re talking about trying to understand what makes humans tick.

If we talk about the simplest integration of the two, where social acts as a type of reinforcing influence that is subordinate to the primary act of searching, it’s not hard to follow the train of thought. We search for something, and in the results, we see some type of social badge that indicates how our social connections feel about the options presented to us. In this case, intent is already engaged. Social just serves to grease the decision wheels, helping us differentiate between our options. This type of integration can easily be seen on Google (Plus integration) as well as vertical engines such as TripAdvisor or Yelp.

But that type of integration doesn’t really fire the imagination of marketers and get their market acquisition juices flowing. It’s just hedging your bets on a market that’s already pretty easy to identify and capture. It does nothing to open up new markets. And it’s there where things get muddy.

The problem is this niggling question of intent. Somehow, something needs to activate intent in the mind of the prospect. It’s here where we truly need to be persuaded, moving our mental mechanisms from disengaged to engaged.

To do this, you need to reverse the order of importance between the two channels. Social recommendation needs to be in the driver’s seat, hopefully engaging and moving prospects to the point where they initiate a search. And that’s a much bigger hurdle to get over. Once the order is reversed, the odds of success plummet precipitously.

Here are just a few of the hurdles that have to be cleared:

Trust – Whichever channel is chosen to deliver the social recommendation, it has to be received with trust. This factor can be affected by how the recommendation is presented, the social proof that accompanies it, the aesthetic value of the interface, and the recipient’s attitude towards the channel itself. There is no lack of nuanced detail to consider here.

Alignment of Interest – When the recommendation is delivered, it must be of interest to the recipient. This relies on an accurate assessment of context and intent. Whatever the targeting channel, there has to be a pretty good chance of delivering the right message at the right time.

Social Modality – So, let’s assume you’ve figured out how to get the first two things right – you are using a trusted channel and you’ve done a good job of targeting. You’re not home free yet. Here’s the thing – we don’t act the same way all the time. We adapt our behaviors to fit the social circumstances we are currently in. There are predetermined modes of behavior that we conform to. It’s why we act one way with our coworkers and another way with our children. It’s why it’s okay to tip a waiter in a restaurant, but not okay to tip your mother-in-law after Sunday dinner. This modality is carried over from the real world to the virtual world of social networks. And it’s very difficult to determine what mode a prospect may be in. But it can make all the difference in the success of a socially targeted advertising message.

The Fight for Attention – This is the big one. Even if you do everything else right, your odds for successfully capturing the attention of a prospect and holding it long enough to generate actual consideration of your product are not nearly as good as you might hope. You’d probably do better at a Vegas craps table. It all depends on what the incumbent intent is. What brought her to the online destination where you managed to intercept her? How critical is it that she finish what she’s currently doing? How engaged is she in the task at hand?

With the first example of search/social integration (search first, social second), the odds for success are pretty high, because intent has already been established. You’re just using social endorsement to expedite a process that’s already in motion.

But in the second example (social first, search second), we’re talking about an entirely different ball game. You have to derail the incumbent intent and replace it with a new one. Think of it as the difference between pushing a car downhill that’s already started to roll, and pushing the same car from a standing start up the hill.

No wonder we’re having some difficulty getting things rolling.

A Benchmark in Time

First published September 13, 2012 in Mediapost’s Search Insider

That’s the news from Lake Wobegon, where all the women are strong, all the men are good-looking, and all the children are above average. — Garrison Keillor

How good are you? How intelligent, how talented, how kind, how patient? You can give me your opinion, but just like the citizens of Lake Wobegon, you’ll be making those judgments in a vacuum unless you compare yourself to others. Hence the importance of benchmarking.

The term benchmarking started with shoemakers, who asked customers to place a foot on a bench, where its outline was marked to serve as a pattern for cutting leather. But of course, feet are absolute things. They are a certain size and that’s all there is to it. Benchmarking has since been adapted to a more qualitative context.

For example, let’s take digital marketing maturity. How does one measure how good a company is at connecting with customers online? We all have our opinions, and I suspect, just like those little Wobegonians, most of us think we’re above average. But, of course, we all can’t be above average, so somebody is fudging the truth somewhere.

I have found that when we work with a client, benchmarking is an area of great political sensitivity, depending on the audience. Managers appreciate competitive insight and are far less upset than the practitioners on the front lines when you tell them they have an ugly baby (or, at least, a baby of below-average attractiveness). I personally love benchmarking, as it serves to get a team on the same page. False complacency vaporizes in the face of real evidence that a competitor is repeatedly kicking your tushie all over the block. It grounds a team in a more objective view of the marketplace and takes decision-making out of the vacuum.

But before going on a benchmarking bonanza, here are some things to consider:

Weighting is Important

It’s pretty easy to assign a score to something. But it’s more difficult to understand that some things are more important than others. For example, I can measure the social maturity of a marketer based on Facebook “Likes,” the frequency of Twitter activity, the number of stars they have on Yelp or the completeness of their LinkedIn profile, but these things are not equal in importance. Not only are they not equal, but the relative importance of each social activity will change from industry to industry and market to market. If I’m marketing a hotel, TripAdvisor reviews can make or break me, but I don’t care as much about my number of LinkedIn connections. If I’m marketing a movie or a new TV show, Facebook “Likes” might actually be a measure that has some value. Before you start assigning scores, you need a pretty accurate way to weight them for importance.
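For readers who like to see the arithmetic, the weighting idea above boils down to a simple weighted sum. This is a minimal sketch only; the channel names, raw scores and weights below are hypothetical placeholders, not a real scoring model — in practice the weights would come from an industry-specific assessment.

```python
# Hypothetical sketch: combining per-channel scores (0-10) into a single
# digital-maturity score using industry-specific weights.

def weighted_score(scores, weights):
    """Weighted sum of per-channel scores; weights should sum to 1."""
    return sum(scores[channel] * weights[channel] for channel in scores)

# Illustrative weights for a hotel marketer: review sites dominate,
# LinkedIn barely matters. All names and numbers are made up.
hotel_weights = {"tripadvisor": 0.6, "facebook_likes": 0.3, "linkedin": 0.1}
scores = {"tripadvisor": 8, "facebook_likes": 5, "linkedin": 3}

print(round(weighted_score(scores, hotel_weights), 1))  # prints 6.6
```

Swapping in a different weight dictionary (say, a heavy LinkedIn weighting for a B2B marketer) would rank the same raw scores very differently, which is exactly the point: the weights, not just the scores, carry the judgment.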

Be Careful Whom You’re Benchmarking Against

If you ask any marketer who their primary competitors are, they’ll be able to give you three or four names off the top of their head. That’s the obvious competition. But if we’re benchmarking digital effectiveness, it’s the non-obvious competition you have to worry about. That’s why we generally include at least one “aspirational” candidate in our benchmarking studies. These candidates set the bar higher and are often outside the traditional competitive set. While it may be gratifying to know you’re ahead of your primary competitors, that will be small comfort if a disruptive competitor (think Amazon in the industrial supply category) suddenly changes the game and blows up your entire market model by resetting your customer’s expectations. Good benchmarking practices should spot those potential hazards before they become critical.

Keep Objective

If qualitative assessments are part of your benchmarking (and there’s nothing wrong with that), make sure your assessments aren’t colored by internal biases. Having your own people do benchmarking can sometimes give you a skewed view of your market. It might be worthwhile to find an external benchmarking partner who can ensure objectivity in evaluation and scoring.

And finally, remember that everybody is above average in something…

The Berkowitz Guide to Creating Content that Matters

First published September 6, 2012 in Mediapost’s Search Insider

I made it! I got through the summer without writing too many “filler” columns triggered by the realization it was already Wednesday and my editor was expecting something in her inbox by end of day. There were no summer vacation columns, no “10 things I learned about [fill in blank],” and with the exception of one column about the joy of digging holes, no lame reminiscing about the zillion years I’ve spent doing this. Sometimes I even managed to write about search.

Of course, now that we’re safely past Labor Day, all that comes to a crashing halt. Because, yet again, it’s Wednesday (as of the time I’m writing) and, yet again, the well is dry. So it was sad, yet somehow consoling, to read the final column of David Berkowitz, the one MediaPost writer I know who has actually logged more columns (400) than I have (383 as of this one). David recapped what he’s learned from writing a little over 300,000 words, squeezed out every week over the past eight years. As David so astutely says, “I know not every post is amazing, but I still put in the time. It takes just as long to write an average column as it does to write a great one.”

I would urge you to take the time to read David’s column. I’ve been talking a lot lately about the importance of content creation. In the new information economy, content is currency. We all have to start thinking like publishers. And that means that many of us will have to create content. David’s lessons are valuable ones.

One of the things I’ve most admired about David is his ability to write from both his heart and his head. He has a keen intellect, but he’s also a good and decent person, and both qualities shine through in his writing. Being genuine is an often-overlooked gift.

Whatever forms your content takes, make sure you’re creating it for the right reasons. Speak because you have something to say, not just to fill a room (or blog post) with noise. I especially liked David’s Lesson #2: “Big ideas matter, even if they don’t spread.” The columns I’m most proud of are often the ones that got the fewest retweets or comments.

Long ago, when I started writing and speaking, I had to come to terms with the fact that I will seldom go “viral.” I don’t seem to have a flair for creating memes.

But after watching other speakers who are more “meme”-worthy get swarmed after a presentation while I stood quietly to the side, I began to notice a pattern. Often, someone would come up and say, “Thank you so much for what you talked about. It was a different angle and it gave me something to think about.” I decided then and there that it was these individuals I was writing and presenting for. There may be only a handful of them in the room, or reading my column on any given week, but if I can pass along something that causes them to adjust their perspective and see something that was previously undiscovered, it’s been worth it. Retweets are not always the best measure of importance.

Writing should never be a “to-do” task. Yes, the weekly rhythm of this column can frankly be a pain in the butt some days when my to-do list overflows —  but that feeling always goes away when I start writing. As David said, “Each column is a learning experience, starting with a thesis, or a hypothesis, or a half-decent idea for the middle of a nonexistent story.”

Yes, writing is a learning experience, forcing you to put some semblance of structure on half-formed thoughts, but it’s also a chance each week to learn a little bit more about yourself. I like to think of writing as sharing little shards of your soul. You put yourself out there in a way that few others do.

At least, you do if you write as well as Mr. Berkowitz does.

Climbing the Slippery Slopes of Mount White Hat

First published August 30, 2012 in Mediapost’s Search Insider

On Monday of this week, fellow Search Insider Ryan DeShazer bravely threw his hat back in the ring regarding this question: Is Google better or worse off because of SEO?

DeShazer confessed to being vilified after a previous column indicated that Google owed us something. I admit I have a column penned but never submitted that Ryan could have added to the “vilify” side of that particular tally. But in his Monday column, Ryan touches on a very relevant point: “What is the thin line between White Hat and Black Hat SEO?” For as long as I’ve been in this industry (which is pushing 17 years now), I’ve heard that same debate. I’ve been at conference sessions where white hats and black hats went head to head on the question. It’s one of those discussions that most sane people in the world couldn’t care less about, but we in the search biz can’t seem to let go.

Ryan stirs the pot again by indicating that Google may be working on an SEO “Penalty Box”: a temporary holding pen for sites that are using “rank modifying spammers” where results will fluctuate more than in the standard index. The high degree of flux should lead to further modifications by the “spammers” that will help Google identify them and theoretically penalize them. DeShazer’s concern is the use of the word “spammers” in the wording of the patent application, which seems to include any “webmasters who attempt to modify their search engine ranking.”

I personally think it’s dangerous to try to apply wording used in a patent application (the source for this speculation) arbitrarily against what will become a business practice. Wording in a patent is intended to help convey the concept of the intellectual property as quickly and concisely as possible to a patent review bureaucrat. The wording deals in concepts that are (ironically) pretty black and white. It has little to no relationship to how that IP will be used in the real world, which tends to be colored in various shades of gray. But let’s put that aside for a moment.

Alan Perkins, an SEO I would call vociferously “white hat,” came up some years ago with what I believe is the quintessential difference here. Black hats optimize for a search engine. White hats optimize for humans. When I make site recommendations, they are to help people find better content faster and act on it. I believe, along with Perkins, that this approach will also do good things for your search visibility.

But that also runs the danger of being an oversimplification. The picture is muddied by clients who measure our success as SEO agencies by their position relative to their competitors on a keyword-by-keyword level. This is the bed the SEO industry has built for itself, and now we’re forced to sleep in it. I’m as guilty as the next guy of cranking out competitive ranking reports, which have conditioned this behavior over the past decade and a half.

The big problem, and one continually pointed out by vocal grey/black hats, is that you can’t keep up with competitors who are using methods more black than white by staying with white-hat tactics alone. The fact is, black hat works, for a while. And if I’m the snow-white SEO practitioner whose clients are repeatedly trounced by those using a black hat consultant, I’d better expect some client churn. Ethics and profitability don’t always go together in this industry.

To be honest, over the past five years, I’ve largely stopped worrying about the whole white hat/black hat thing. We’ve lost some clients because we weren’t aggressive enough, but the ones who stayed were largely untouched by the string of recent Google updates targeting spammers. Most benefited from the house cleaning of the index. I’ve also spent the last five years focused a lot more on people and good experiences than on algorithms and link juice, or whatever the SEO flavor du jour is.

I think Alan Perkins nailed it way back in 2007. Optimize for humans. Aim for the long haul. And try to be ethical. Follow those principles, and I find it hard to imagine that Google would ever tag you with the label of “spammer.”