Separating the Strategic Signal from the Tactical Noise in Marketing

First published April 4, 2013 in Mediapost’s Search Insider

It’s somewhat ironic that, as a die-hard Darwinist, I find myself in the position of defending strategy against the onslaught of Big Data. Since my initial column on the subject a few months ago, I’ve been diving deeper.

Here’s the irony.

Embracing Big Data is essentially embracing a Darwinist approach to marketing. It resists a top-down approach (aka strategy), instead using data feedback to drive the evolution of your marketing program. It makes marketing “antifragile,” in the words of Nassim Nicholas Taleb. In theory, it uses disorder, mistakes and unexpected events to continually improve marketing.

Embracing strategy — at least my suggested Bayesian approach to strategy — would be akin to embracing intelligent design. It defines what an expected outcome should be, then starts defining paths to get there. But it does this in the full realization that those paths will continually shift and change. In fact, it sets up the framework to enable this strategic fluidity. It still uses “Big Data,” but puts it in the context of “Big Testing” (courtesy Scott Brinker).

To remove the strategy from the equation, as some suggest, would be to leave your marketing subject to random chance. Undoubtedly, given perfect feedback and the ability to quickly adapt using that feedback, marketing could improve continually. After all, we evolved in just such an environment and we’re pretty complex organisms.  But it’s hard to argue that a designer would have designed such flaws as our pharynx, which is used both for eating and breathing, leading to a drastically higher risk of choking; our spinal column, which tends to become misaligned in a significant portion of the population; or the fact that our retinas are “inside out.”

Big Data also requires separating “signal” from “noise” in the data. But without a strategic framework, what is the signal and what is the noise? Which data points do you pay attention to, and which do you ignore?

Here’s an even bigger question. What constitutes success and failure in your marketing program? Who sets these criteria? In nature, it’s pretty simple. Success is defined by genetic propagation. But it’s not so clear-cut in marketing. Success needs to align to some commonly understood objectives, and these objectives should be enshrined in — you guessed it, your strategy.

My belief: if “intelligent designers” are available, why not use them? And I would hope that most marketing executives fit the bill. As long as strategy includes a rigorous testing methodology, and honest feedback does not fall victim to egotistical opinions and “yes speak” (a huge caveat, and a topic too big to tackle here), a program infused with strategy should outperform one left to chance.

But what about Taleb’s “Black Swans”? He argues that by providing “top down” direction, leading to interventionism, you tend to make systems fragile. In trying to smooth out the ups and downs of the environment, you build in limitations and inflexibility. You lose the ability to deal with a Black Swan, that unexpected occurrence that falls outside of your predictive horizon.

It’s a valid point. I believe that Black Swans have to be expected, but should not dictate your strategy. By their very nature, they may never happen. And if they do, they will be infrequent. If your strategy meets a Black Swan head on, a Bayesian approach should come with the humility to realize that the rules have changed, necessitating a corresponding change in strategy. But it would be a mistake to abandon strategy completely based on a “what-if.”
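The “Bayesian approach” mentioned above can be made concrete with an ordinary Bayes update. Here is a minimal sketch; every prior and likelihood in it is invented purely for illustration, not drawn from any real campaign:

```python
# Toy Bayesian update of confidence in a strategy as evidence arrives.
# All probabilities below are hypothetical, chosen only to illustrate.

def update(prior, p_obs_if_working, p_obs_if_not):
    """Posterior P(strategy is working) after one observation."""
    numerator = prior * p_obs_if_working
    return numerator / (numerator + (1 - prior) * p_obs_if_not)

belief = 0.70  # prior confidence that the current strategy is right

# A campaign result that's likely if the strategy works, unlikely otherwise:
belief = update(belief, 0.8, 0.3)   # confidence rises

# A "Black Swan": a result that's very unlikely under the current strategy.
belief = update(belief, 0.05, 0.6)  # confidence collapses to ~0.34

print(round(belief, 3))
```

The point of the sketch is the built-in humility: ordinary results nudge the posterior, while a genuinely unexpected result forces a wholesale revision of confidence, all without abandoning the framework itself.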

Psychological Priming and the Path to Purchase

First published March 27, 2013 in Mediapost’s Search Insider

In marketing, I suspect we pay too much attention to the destination, and not enough to the journey. We don’t take into account the cumulative effect of the dozens of subconscious cues we encounter on the path to our ultimate purchase. We certainly don’t understand the subtle changes of direction that can result from these cues.

Search is a perfect example of this.

As search marketers, we believe that our goal is to drive a prospect to a landing page. Some of us worry about the conversion rates once a prospect gets to the landing page. But almost none of us think about the frame of mind of prospects once they reach the landing page.

“Frame” is the appropriate metaphor here, because the entire interaction will play out inside this frame. It will impact all the subsequent “downstream” behaviors. The power of priming should not be taken lightly.

Here’s just one example of how priming can wield significant unconscious power over our thoughts and actions. Participants primed by exposure to a stereotypical representation of a “professor” did better on a knowledge test than those primed with a representation of a “supermodel.”

A simple exposure to a word can do the trick. It can frame an entire consumer decision path. So, if many of those paths start with a search engine, consider the influence that a simple search listing may have.

We could be primed by the position of a listing (higher listings = higher quality alternatives).  We could be primed (either negatively or positively) by an organization that dominates the listing real estate. We could be primed by words in the listing. We could be primed by an image. A lot can happen on that seemingly innocuous results page.

Of course, the results page is just one potential “priming” platform. Priming could happen on the landing page, a third-party site or the website itself. Every single touch point, whether we’re consciously interacting with it or not, has the potential to frame, or even sidetrack, our decision process.

If the path to purchase is littered with all these potential landmines (or, to take a more positive approach, “opportunities to persuade”), how do we use this knowledge to become better marketers? This does not fall into the typical purview of the average search marketer.

Personally, I’m a big fan of the qualitative approach (I know — big surprise) in helping to lay down the most persuasive path possible. Actually talking to customers, observing them as they navigate typical online paths in a usability testing session, and creating some robust scenarios to use in your own walk-throughs will yield far better results than quantitative number-crunching. Excel is not particularly good at being empathetic.

Jakob Nielsen has said that online, branding is all about experience, not exposure. As search marketers, it’s our responsibility to ensure that we’re creating the most positive experience possible, as our prospects make their way to the final purchase.

The devil, as always, is in the details — whether we’re paying conscious attention to them or not.

Viewing the World through Google-Colored Glass

First published March 7, 2013 in Mediapost’s Search Insider

Let’s play “What If” for a moment. For the last few columns, I’ve been pondering how we might more efficiently connect with digital information. Essentially, I see the stripping away of the awkward and inefficient interfaces that have been interposed between that information and us. Let’s imagine, 15 years from now, that Google Glass and other wearable technology provides a much more efficient connection, streaming real-time information to us that augments our physical world. In the blink of an eye, we can retrieve any required piece of information, expanding the capabilities of our own limited memories beyond belief. We have perfect recall, perfect information — we become omniscient.

To facilitate this, we need to move our cognitive abilities to increasingly subterranean levels of processing – taking advantage of the “fast and dirty” capabilities of our subliminal mind. As we do this, we actually rewire our brains to depend on these technological extensions. Strategies that play out with conscious guidance become stored procedures that follow scripts written by constant repetition. Eventually, overtraining ingrains these procedures as habits, and we stop thinking and just do. Once this happens, we surrender much of our ability to consciously change our behaviors.

Along the way, we build a “meta” profile of ourselves, which acts as both a filter and a key to the accumulated potential of the “cloud.” It retrieves relevant information based on our current context and a deep understanding of our needs, it unlocks required functionality, and it archives our extended network of connections. It’s the “Big Data” representation of us, condensed into a virtual representation that can be parsed and manipulated by the technology we use to connect with the virtual world.

In my last column, Rob Schmultz and Randy Kirk wondered what a world full of technologically enhanced Homo sapiens would look like. Would we all become the annoying guy in the airport who can’t stop talking on his Bluetooth headset? Would we become so enmeshed in our digital connections that we ignore the physical ones that lie in front of our own noses? Would Google Glass truly augment our understanding of the world, or would it make us blind to its charms? And what about the privacy implications of a world where our every move could instantly be captured and shared online — a world full of digital voyeurs?

I have no doubt that technology can take us to this not-too-distant future as I envisioned it. Much of what’s required already exists. Implantable hardware, heads-up displays, sub-vocalization, bio-feedback — it’s all very doable. What I wonder about is not the technology, but rather us. We move at a much slower pace. And we may not recognize any damage that’s done until it’s too late.

The Darwinian Brain

At an individual level, our brains have a remarkable ability to absorb technology. This is especially true if we’re exposed to that technology from birth. The brain represents a microcosm of evolutionary adaptation, through a process called synaptic pruning. Essentially, the brain builds and strengthens neural pathways that are used often, and “prunes” away those that aren’t. In this way, the brain literally wires itself to be in sync with our environment.

The majority of this neural wiring happens when we’re still children. So, if our childhood environment happens to include technologies such as heads-up displays, implantable chips and other direct interfaces to digital information, our brains will quickly adapt to maximize the use of those technologies. Adults will also adapt to these new technologies, but because our brains are less “plastic” than children’s, the adaptation won’t be as quick or complete.

The Absorption of Technology by Society

I don’t worry about our brain’s ability to adapt. I worry about the eventual impact on our society. With changes this portentous, there is generally a social cost. To consider what might come, it may be beneficial to look at what has been. Take television, for example.

If a technology is ubiquitous and effective enough to spread globally, like TV did, there is the issue of absorption. Not all sectors of society will have access to the technology at the same time. As the technology is absorbed at different rates, it can create imbalances and disruption. Think about the societal divide caused by the absorption of TV, which resulted in a completely different information-distribution paradigm. One can’t help thinking that TV played a significant role in much of the political change we saw sweep over the world in the past three decades.

And even if our brains quickly adapt to technology, that doesn’t mean our social mores and values will move as quickly. As our brains rewire to adapt to new technologies, our cultural frameworks also need to shift. With different generations and segments of society at different places on the absorption curve, this can create further tensions. If you take the timeline of societal changes documented by Robert Putnam in “Bowling Alone” and overlay the timing of the adoption of TV, the correlation is striking and not a little frightening.

Even if our brains have the ability to adapt to technology, it isn’t always a positive change. For example, there is compelling evidence that early exposure to TV has contributed to the recent explosion of diagnosed ADHD and possibly even autism.

Knowing Isn’t Always the Same as Understanding

Finally, we have the greatest fear of Nicholas Carr:  maybe this immediate connection to information will have the “net” effect of making us stupid — or, at least, more shallow thinkers. If we’re spoon-fed information on demand, do we grow intellectually lazy? Do we start to lose the ability to reason and think critically? Will we swap quality for quantity?

Personally, I’m not sure Carr’s fears are founded on this front. It may be that our brains adapt and become even more profound and capable. Perhaps when we offload the simple journeyman tasks of retrieving information and compiling it for consideration to technology, our brains will be freed up to handle deeper and more abstract tasks. The simple fact is, we won’t know until it happens. It could be another “Great Leap Forward,” or it may mark the beginning of the decline of our species.

The point is, we’ve already started down the path, and it’s highly unlikely we’ll retreat at this point. I suppose we have no option but to wait and see.

Why I – and Mark Zuckerberg – Are Bullish on Google Glass

First published February 28, 2013 in Mediapost’s Search Insider

Call it a Tipping Point. Call it an Inflection Point. Call it Epochal (whatever that means). The gist is, things are going to change — and they’re going to change in a big, big way!

First, with due deference to the brilliant Kevin Kelly, let’s look at how technology moves. In his book “What Technology Wants,” Kelly shows that technology is not dependent on a single invention or inventor. Rather, it’s the sum of multiple, incremental discoveries that move technology to a point where it can breach any resistance in its way and move into a new era of possibility. So, even if Edison had never lived, we’d still have electric lights in our homes. If he weren’t there, somebody else would have discovered it (or, more correctly, perfected it). The momentum of technology would not have been denied.

Several recent developments indicate that we’re on the cusp of another technological wave of advancement. These developments have little to do with online technologies or capabilities. They’re centered on how humans and hardware connect — and it’s impossible to overstate their importance.

The Bottleneck of Our Brains

Over the past two decades, there has been a massive build-up of online capabilities. In this case, what technology has wanted is the digitization of all information. That was Step One. Step Two is to render all that information functional. Step Three will be to make all the functionality personalized. And we’re progressing quite nicely down that path, thank you very much. The rapidly expanding capabilities of online far surpass what we are able to assimilate and use at any one time. All this functionality is still fragmented and is in the process of being developed (one of the reasons I think Facebook is in danger of becoming irrelevant) but it’s there. It’s just a pain in the butt for us to utilize it.

The problem is one of cognition. The brain has two ways to process information, one fast and one slow. The slow way (using our conscious parts of the brain) is tremendously flexible but inefficient. This is the system we’ve largely used to connect online. Everything has to be processed in the form of text, both in terms of output and input, generally through a keyboard and a screen display. It’s the easiest way for us to connect with information, but it’s far from the most efficient way.

The second way is much, much faster. It’s the subconscious processing of our environment that we do every day. It’s what causes us to duck when a ball is thrown at our head, jump out of the way of an oncoming bus, fiercely protect our children and judge the trustworthiness of a complete stranger. If our brains were icebergs, this would be the 90% hidden beneath the water. But we’ve been unable to access most of this inherent efficiency and apply it to our online interactions — until now.

The Importance of Siri and Glass

Say what you want about Mark Zuckerberg, he’s damned smart. That’s why he knew immediately that Google Glass is important.

I don’t know if Google Glass will be a home run for Google. I also don’t know if Siri will ever pay back Apple’s investment in it. But I do know that 30 years from now, they’ll both be considered important milestones. And they’ll be important because they represent a sea change in how we connect with information. Both have the potential to unlock the efficiency of the subconscious brain. Siri does it by utilizing our inherent communication abilities and breaking the inefficient link that requires us not only to process our thoughts as language, but also to laboriously translate them into keystrokes. In neural terms, this is one of the most inefficient paths imaginable.

But if Siri teases us with a potentially more efficient path, Google Glass introduces a new, mind-blowing scenario of what might be possible. To parse environmental cues and stream information directly into our visual cortex in real time, creating a direct link with all that pent-up functionality that lives “in the cloud,” wipes away most of the inefficiency of our current connection paradigm.

Don’t think of the current implementation that Google is publicizing. Think beyond that to a much more elegant link between the vast capabilities of a digitized world and our own inner consciousness. Whatever Glass and Siri (and their competitors) eventually evolve into in the next decade or so, they will be far beyond what we’re considering today.

With the humanization of these interfaces, a potentially dark side effect will take place. These interfaces will become hardwired into our behavior strategies. Now, because our online interactions are largely processed at a conscious level, the brain tends to maintain maximum flexibility regarding the routines it uses. But as we access subconscious levels of processing with new interface opportunities, the brain will embed these at a similarly subconscious level. They will become habitual, playing out without conscious intervention. It’s the only way the brain can maximize its efficiency. When this happens, we will become dependent on these technological interfaces. It’s the price we’ll pay for the increased efficiency.

Search: The Boon or Bane of B2B Marketers

First published February 21, 2013 in Mediapost’s Search Insider

Optify recently released its 2012 B2B Marketing Benchmark Report. While reading the executive summary, two apparently conflicting points jumped out at me: “Google is the single most important referring domain to B2B websites, responsible for over 36% of all visits.”

And: “Paid search usage showed a constant decline among B2B marketers in 2012. Over 10% of companies in the report discontinued their paid search campaigns during 2012.”

OK, what gives? How can search be the single most important referrer of traffic, yet fail so miserably as a marketing channel that many B2B marketers have thrown in the towel?

The fact is, B2B search is a dramatically different beast, and many of the unique nuances that come with it are likely to lead to the apparent paradox that the Optify study unearthed. Here are some possible reasons for the anomaly:

B2B search has a really, really long tail. Many B2B marketers are dealing with a huge variety of SKUs, with a broader distribution of traffic across keywords than is typical in most consumer categories. This makes keyword discovery a monumental task. But more than this, the revenue per managed keyword (assuming you can accurately measure revenue — more on this below) is quite often very small. This creates a cost-of-campaign management issue.

When the “long tail” of search was first introduced, many search marketers embraced it as a cost-effective way to manage campaigns. The assumption was that long-tail queries, being quite specific, would yield higher ROI than shorter, more generic queries. And while the traffic (and subsequent revenue) per keyword would be very small, cumulatively a long-tail campaign could deliver impressive returns.

This is true, up to a point. But long-tail campaigns require significant administrative overhead. A query that gets one search a month requires as much set-up as one that gets 1,500 searches a month. Even if you use broad match, you’re constantly tweaking your negative match list to filter out the low-value traffic.

While a long-tail approach seems like a good idea in theory, in practice most marketers end up culling most of the long-tail keywords from the campaign because the returns just aren’t worth the ongoing effort.  This would not bode well for B2B marketers considering search as a channel.
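To see why the math turns against the tail, here’s a minimal back-of-the-envelope sketch. Every figure in it is hypothetical, invented only to illustrate the overhead problem described above:

```python
# Hypothetical illustration of long-tail campaign economics.
# All numbers are invented for the sketch, not taken from the column.

monthly_mgmt_cost_per_keyword = 2.00   # bid tweaks, negative matches, ad copy

# (searches per month, average revenue per search) for each managed keyword
head_terms = [(1500, 2.00)]            # one high-volume query
tail_terms = [(1, 2.00)] * 500         # 500 one-search-a-month queries

def net_return(keywords):
    revenue = sum(searches * rev for searches, rev in keywords)
    overhead = monthly_mgmt_cost_per_keyword * len(keywords)
    return revenue - overhead

print(net_return(head_terms))  # 1500*2 - 2 = 2998.0
print(net_return(tail_terms))  # 500*2 - 500*2 = 0.0
```

Under these made-up numbers, the 500 tail terms generate real revenue, but the flat per-keyword management cost eats all of it — which is exactly why the tail tends to get culled.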

A B2B search vocabulary is difficult to define. Compounding the long-tail problem is the issue that many B2B vendors sell complex products or services. With complexity comes ambiguity in language. It’s hard to pin many B2B products down to an obvious search phrase you can be sure searchers would use. Often, many B2B prospects only know they have a problem, not what the possible solution might be called. This makes it very difficult to create an effective search campaign. There is a lot of trial and error involved.

And, because prospects aren’t searching for a familiar product from vendors they know, it becomes even more difficult to create a compelling search ad that attracts its fair share of searches and subsequent conversions. In a marketing channel that depends on words to interpret buying intent, ill-defined vocabularies can make a marketer’s job exponentially more difficult.

B2B ROI has to be measured differently. Finally, let’s say you get past the first two obstacles. Ultimately, search campaigns live and die on their effectiveness. And this requires a comprehensive approach to performance measurement. As any B2B marketer will tell you, this is much easier said than done. B2B markets tend to be more circuitous than consumer markets, winding their way through several stops in an extended value chain. This makes end-to-end tracking extremely difficult. And if value isn’t easily measured, search campaigns can’t prove their value. This makes them likely candidates for an unceremonious axing.

So if That’s the Bad News, What’s the Good?

If the deck is stacked so fully against search in the B2B world, why was Google the primary referrer of traffic in the Optify study? Well, search for B2B can be tremendously effective; it’s just hard to predict. This makes B2B a prime candidate for a broad-based SEO effort. Content creation creates a rich bed of long-tail fodder that search spiders love. Organic best practices combined with a dedication to thought leadership can create content that intercepts those prospects looking for a solution to their identified pains, even before they know what they’re looking for.

In the case of B2B, especially in complex, nascent markets, I generally recommend leading with SEO and content development. Then, monitor search traffic and let that help inform your subsequent PPC efforts. It may turn out that paid search isn’t a major play for your market. The B2B Beast can be tamed by search; it just takes a different approach.

Building a Better Meta-Me

First published February 14, 2013 in Mediapost’s Search Insider

Last week I forecast that Facebook would become irrelevant. Some of you disagreed. Ron Stitt called Facebook the “public square” or “crossroads” of social connection.

Andre Szykier pointed out a very real challenge with the successful socialization of online: “The problem is connecting the content from my social walled gardens into a virtual cloud point. Google+ is going about it a different way. They keep expanding their walled garden with search, mail, video, chat services along with social and app services that they provide, hoping you eventually will find their garden big and rich enough so everybody will migrate. While it helps them be the CyBorg of data, it makes people more uneasier (sic) to have all of that in one garden than spread across many. Time will tell which model will thrive.”

Thank you, SI readers. As you so often do, you challenged me to give this idea a little more thought. I still inherently believe that Facebook is being marginalized on the social periphery, but both Ron and Andre have nailed a fundamental concept here that I believe merits further discussion. What does the connection point between ourselves and online (I extend this beyond social alone) evolve into?

The problem, I believe, comes with control. Who controls the connection? Understandably, Facebook, Google, and a host of others want to control this critical territory. It’s an online land grab; they offer us destinations, and we go to them. In return, because the connection happens on their turf, they get to monetize that turf. It’s like an online Monopoly game, with everyone scrambling to own Park Place so they can put more hotels on it.

The problem is that to effectively monetize, all these destinations ask us to invest in letting them know who we are. This creates the problem of profiles – so many profiles to maintain, so little time. If I move to another square, I have to start all over again.

All this profile information is used to create a “meta” representation of us. It’s the online data handshake that enables successful connection.  The issue is that Facebook, Google and all the others want us to build the profile, but for them to own it. This means we have to build multiple “meta” profiles of ourselves. It’s terribly inefficient and requires us to do most of the heavy lifting. Also, as Andre points out, it raises an important question – why should Google (or anyone else) own the meta version of me? I think that’s something I should own.

This dynamic introduces another problem: In order to reduce the heavy lifting, these destinations use our own activity to help build the profile. The more we do, the more they can learn about us. This is fine, as long as the best way to do any of these things is the option offered by the destination that’s trying to build the profile. But even with the vast resources available to a Google or Facebook, it’s almost impossible for them to stay ahead of the constant evolution of online innovation. Sooner or later, there will be a better way to do something somewhere else.  At this point, we’re faced with a dilemma: Do we stick with the original destination, where we’ve invested in building a rich meta version of ourselves, or do we trade that for the better functionality offered by the new alternative, knowing that we have to start building yet again another meta-me?

Google and Facebook, as Ron and Andre point out, have both gone down the road of building a support platform for other innovators, hoping to at least share a significant slice of the territory with new alternatives. This allows us to use that version of our profile in more ways. But it’s still a territorial analogy, and ultimately, that creates a sustainable vulnerability in an environment as dynamic as online. It’s very difficult to successfully hold territory in our ever-expanding online world.

To me, there’s only one eventual answer. We have to own our own meta-selves. Our online profile must be rich and completely portable. When we choose a new destination, our meta-me immediately unlocks the full potential of the destination, tailored specifically for us. There are challenges to be overcome — primarily around issues of privacy — but this is the only sustainable path.

Up to now, the Internet has been all about who owns what territory. This is not surprising — it’s a natural extension of our existing worldview, one formed in a physical environment. Our minds need time to grapple with and assimilate abstract concepts. So far, we’ve “gone” to places online. But the evolved functionality of the Internet has expanded beyond this parochial mental scaffolding. It’s time to reimagine the possibilities, using our own concepts of consciousness as a new framework. We will live at the center, defining who we are and what we want, and the Internet will be a vast extension of our mental potential that we can call on at will, without having to “go” anywhere. We’ve seen hints of this in search already, conceptually fleshing out Wegner’s transactive memory.

Daunting? Yes. Kurzweilian (with all the negative and positive connotations that implies)? Probably.  Inevitable? I believe so.

Breaking Out of Facebook’s Walled Garden

First published February 7, 2013 in Mediapost’s Search Insider

According to Pew, 27% of us are looking to wean ourselves off the Facebook habit.

This is not particularly surprising. While Facebook can be incredibly distracting, it’s not really relevant to our lives. It has never been woven into the fabric of our day-to-day activities. It’s more like an awkward, albeit entertaining, interlude jammed into the long list of stuff we have to do today. That list represents our life. Facebook represents the stuff that lies on the periphery.

Here’s one way to think about it. What if Facebook went down today? Would it really matter? Sure, it might be a disappointment, but would it make us substantially change our plans?

Now consider if Google went down for the day. How many times in a day would you go to use it, then curse because it wasn’t there?

The problem is that our online social interactions are outgrowing the walled garden that is Facebook. It has failed to become essential in the way that Google has. I can go entire months without logging into my Facebook account. I have trouble going an hour without using Google. And when I need Google, I need it now.

Again, I turn to how we use language as a clue as to how we feel about things. To “search” is a verb. It’s an action that connects intents with outcomes. It’s something we have to do. And, if you’re loyal to Google as your search engine, it’s pretty easy to swap “googling” for “searching” and for everyone to know exactly what you mean.

But what, I ask, is social? It’s not a verb. It’s not even a noun. It’s an adjective, to describe someone or something.  If I told you I “Facebooked” someone, you probably wouldn’t know what I meant. And that’s an important distinction. “Social” is tied to who we are. It isn’t tied to any single destination. Social travels with us.

When Facebook came on the scene, it did do a good job of showing us how online could be used to keep better track of our extended social networks. But now there are other ways to do that. An informal poll by Macquarie Securities also found that Instagram is a quickly growing way to connect, especially among Facebook’s core market of 18- to 25-year-olds.

Facebook can’t own social in the same way Google can own search. We own social, because we are social. And we will use multiple tools to allow us to be social.

Facebook envisioned a social ecosystem that could then be monetized with targeted advertising. But as the Pew study points out, Facebook just couldn’t contain all our social activity. Many of us are thinking that we should probably spend less time on Facebook, as we find other ways to connect online. While Facebook has never been essential, it now also risks becoming irrelevant.

Weighing Positive and Negative Impacts on Users

First published January 31, 2013 in Mediapost’s Search Insider

We humans hate loss. In fact, we seem to weigh losses about twice as heavily as equivalent gains. For example, imagine I gave you a coffee cup and then offered to buy it back from you. That’s scenario 1. In scenario 2, I ask you to buy the same coffee cup from me. The price you assign to the coffee cup in the first scenario will be, on average, about twice as much as in the second. And yes, there’s research to back this up.

When it comes to winning and losing, it’s been proven that “losses loom larger than gains.” It’s just one of the weird glitches in our logical circuitry. We tend to be hardwired to look at glasses as half empty.

Recently, I was reviewing an academic study done in 2008, with this scintillating title: “Procedural Priming and Consumer Judgment: Effects on the Impact of Positively and Negatively Valenced Information” by Shen and Wyer. If you can get beyond the rather dry title, you find a treasure trove of tidbits to consider when crafting your online user experience.

For example, when we evaluate a product for potential purchase, we may run across both positive and negative information. The order we run into this information can have a dramatic impact on what we do downstream from that interaction. To use psychological terms, it “primes” our mental framework.  And, because we tend to focus on negatives, less favorable information has a greater impact on our decision than positive information.

But it’s not just that we pay more attention to bad news than good news. It’s that bad news can hijack the entire consideration process. According to Shen and Wyer, if we run into negative information, it can change our information-seeking strategies, leading us down further negatively biased channels to confirm the initial information we saw. Bad news tends to lead to more bad news.

Also, we can get “bad news” hangovers. If we compare negatives in one decision process, that negative mental framework can carry over to an entirely different decision that has nothing to do with the first, giving us a heightened awareness of negative information in the new situation.

Here’s another interesting finding. If we’re rushed for time, this preoccupation with the negatives will dramatically affect the decision we make. But, if we have all the time in the world, the impact is relatively insignificant. Given time, we seem to cancel out our inherently negative biases.

Not all this news is bad for marketers, however. It seems that simply getting users to state their preference for one feature over another, even though they’re not actively considering purchase at that time, leads to a much greater likelihood of purchase in the future. It seems that if you can get users to compare alternatives — and, more importantly, to commit to saying they prefer one alternative over another — they clear the mental hurdle of deciding “will I buy?” and instead start considering “what will I buy?”

Finally, there is also a recency effect, especially if prospects had ample time to consider all their alternatives. Shen and Wyer found that the last information considered seemed to have the greatest effect on the buyer.  So, if information was both positive and negative, it was good to get the least favorable information in front of the prospect early, and then move to the most favorable information. Again, this is true only if the user had plenty of time to weigh the options. If they were rushed, the opposite was true.

All in all, these are all intriguing concepts to consider when crafting an ideal online user experience. They also underscore the importance of first impressions, especially negative ones.

Reflections on Turning 400

First published January 24, 2013 in Mediapost’s Search Insider

So, this is my 400th column for Search Insider.

Of course, you’ll notice that recently, I’ve paid scant attention to the domain restrictions of the column’s title. In the past year, I’ve written about search less than half the time.

Mediapost’s publisher, Ken Fadner, noticed this some time ago and offered me a slot on one of the other columns, like Online Spin, which is a bit more free-ranging in its topicality. But what can I say? I like my Thursday slot here on SI. I figure after nine years of writing about everything from evolutionary psychology to macroeconomics, you’ve come to expect a somewhat eclectic approach from me.

In part, I think the mix of topics you’ll find in my column is reflective of search. As I’ve always said, search acts as a connector in the online landscape. It stitches together our online experiences, as a foundational underpinning to the new digital world.

As such, I think it’s entirely appropriate that this column regularly bust out of the shackles of search marketing. The topics I’ve tackled over the past 400 columns really mark the evolution of my own personal interests, and with it, my career. This column has acted as my experimental petri dish, allowing me to incubate my little cell clusters of thoughts in the medium of public opinion.

In the next few months, that career will evolve once again. The company I started back in 1999, offering search engine optimization services, will fully transfer to a new owner. While I’ll still be involved in the world of online marketing in some form or another, I look forward to having the freedom to further develop some of the ideas that first saw the light of day in this column: how humans have adapted to their new digital environments, how organizations are struggling to adapt in a massively transformed marketplace, how disintermediation is stripping huge parts of our economic structure away, and how the very nature of strategic thinking is being transformed by ubiquitous data. My hope is that these ideas will eventually end up in book form.

Most of all, I have appreciated the forum this column has provided to share ideas. With the many changes the Internet has wrought, this is the most significant to me. The sharing of ideas, freely and openly, is something that can only benefit us.

For me, it’s a cycle. I seek out new ideas in the form of books I’ve read, academic studies or posts by my favorite bloggers. Then I digest them, weaving them into my own beliefs and perspective. Then, often still half-baked, I share them with you, hoping that something from this column may end up woven into your own tapestry of thought.

In that spirit, here’s the idea I would like to share with you today. I’m currently reading a fascinating book called “The Philosophical Breakfast Club: Four Remarkable Friends who Transformed Science and Changed the World.” Author Laura Snyder explores the lives and careers of four friends who met at Cambridge as students in the early 1800s: Charles Babbage (inventor of the first “computer”); John Herschel (noted astronomer); William Whewell (polymath and professor); and Richard Jones (one of the creators of modern socioeconomic theory).

Jones, while one of the most original and insightful thinkers of his era, was not the most diligent of authors. Whewell would incessantly nag him to keep up with his writing. This passage came from one of Whewell’s many letters to Jones: “The only moral I can extract … is the importance of getting our speculations into such a form that not calamity nor adversity shall have the power, by putting an end to us… to destroy the chance of our beautiful theories coming before the world.”

You see, once you share your idea, it’s no longer bound by your own mortality. What better incentive could you find to keep writing every Thursday?

The Social Media Menagerie

First published January 17, 2013 in Mediapost’s Search Insider

Did you know there are 18,903 social media gurus on Twitter? I haven’t the faintest idea what the prerequisites are for becoming a “guru,” but apparently thousands of people have passed the hypothetical “bar.” As a baseline, the original Sanskrit meaning of “guru” meant “teacher” or “master.” Fair enough, I suppose. It seems fairly benign. But the way many use the term, I think Wikipedia’s definition might be more fitting:  “In the United States, the meaning of ‘guru’ has been used to cover anyone who acquires followers, especially by exploiting their naiveté.”

To be fair, I have had the label applied to me by others in certain contexts. But I have never used it to refer to myself. To me, it just smacks of a king-sized stroking of one’s own ego. What the hell makes you a guru? Did you take a test? Study under a true “master”? Lock yourself away in solitude to consider the intricacies of Facebook or Twitter? Was there a vote of a “guru” nominating committee that conferred the title on you? Did the god of social media anoint you? Or did you just sign up for a Twitter account and suddenly decide you were ready to go into the consulting biz?

I’m sure some of the 18,903 actually know what they’re doing. But I’m betting there are just as many whom you should fend off with the proverbial 10-foot pole. Let’s face it: if you need to call yourself a guru to justify your self-worth, there may be other inadequacies in your own personal inventory.

To me, true masters always refer to themselves as students. They know they don’t know everything, but they’re always ready to learn. They open themselves up to constantly growing by doing. They know the value of “screwing up.” They realize that this is an area that is just defining itself, and to believe you have it mastered is the height of presumption. Give me one social media “student” over 18,903 “gurus” any day.

Of course, “guru” is not the only moniker appropriated in the Twittersphere – there are also 21,928 social media “mavens” and 21,876 “ninjas.” For some reason, I don’t take the same offence to these terms. In Yiddish, a “maven” is “one who understands, based on an accumulation of knowledge.” And a “ninja” is a “covert agent or mercenary who specialized in unorthodox warfare.” The former seems to be a little less self-aggrandizing, and the latter is just stupid. Let the mavens keep learning, and let the ninjas battle each other to the death in some type of social media grudge match. I presume they use Twitter throwing stars and LinkedIn nunchucks.

Apparently, to consult in social media requires some kind of “out-there” title. There are only 9,031 social media “consultants”, 5,555 social media “experts” and 1,555 social media “marketers”. But there are 287 “freaks,” 104 “warriors,” and 35 “wonks.” I was also heartened to find that there are 174 social media “whores.” Now, there’s a title you can relate to.

Look, I get that you need to “stand out” — but if there are 20,000 other people calling themselves the same thing, how much are you really standing out?