A Decade with the Database of Intentions

First published September 27, 2012 in Mediapost’s Search Insider

It’s been over 10 years since John Battelle first started considering what he called the “Database of Intentions.” It was, and is:

The aggregate results of every search ever entered, every result list ever tendered, and every path taken as a result. It lives in many places, but three or four places in particular hold a massive amount of this data (i.e., MSN, Google, and Yahoo). This information represents, in aggregate form, a place holder for the intentions of humankind – a massive database of desires, needs, wants, and likes that can be discovered, subpoenaed, archived, tracked, and exploited to all sorts of ends. Such a beast has never before existed in the history of culture, but is almost guaranteed to grow exponentially from this day forward. This artifact can tell us extraordinary things about who we are and what we want as a culture. And it has the potential to be abused in equally extraordinary fashion.

When Battelle considered the implications, it overwhelmed him. “Once I grokked this idea (late 2001/early 2002), my head began to hurt.” Yet, for all its promise, marketers have only marginally leveraged the Database of Intentions.

In the intervening time, the possibilities of the Database of Intentions have not diminished. In fact, they have grown exponentially:

My mistake in 2003 was to assume that the entire Database of Intentions was created through our interactions with traditional web search. I no longer believe this to be true. In the past five or so years, we’ve seen “eruptions” of entirely new fields, each of which, I believe, represents equally powerful signals – oxygen flows around which massive ecosystems are already developing. In fact, the interplay of all of these signals (plus future ones) represents no less than the sum of our economic and cultural potential.

Sharing Battelle’s predilection for “Holy Sh*t” moments, I got thinking about Battelle’s “DBoI” again after a post by MediaPost’s Laurie Sullivan this Tuesday. A recent study by Google and EA showed that search data can predict 84% of video game sales. But the data used in the prediction only scratches the surface of what’s possible. Adam Stewart from Google hints at what might be next: “Aside from searches, Google plans to build in game quality, TV investment, online display investment, and social buzz to create a multivariate model for future analysis.”
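To make that idea concrete, here’s a toy version of the kind of multivariate model Stewart hints at, fit with ordinary linear regression. Every feature name and number below is invented for illustration; Google’s actual inputs and model are not public.

```python
# A toy multivariate sales model. All features and figures are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row: [search_volume, tv_spend_$M, display_spend_$M, social_buzz, quality_score]
X = np.array([
    [120_000, 2.0, 0.5, 3_400, 78],
    [450_000, 5.5, 1.2, 9_800, 91],
    [ 60_000, 0.8, 0.2, 1_100, 64],
    [300_000, 3.1, 0.9, 7_200, 85],
])
y = np.array([0.9, 3.8, 0.4, 2.6])  # first-month unit sales, millions

model = LinearRegression().fit(X, y)

# Forecast a new title from its pre-launch signals alone
new_title = np.array([[200_000, 2.5, 0.7, 5_000, 80]])
print("forecast (millions of units):", model.predict(new_title)[0])
```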

This is very doable stuff. Everything we need to create predictive models that match (and probably far exceed) that degree of accuracy is already available. The data is just sitting there, waiting to be interpreted. The implications for marketing are staggering, but to Battelle’s point, let’s not be too quick to corral this simply for the use of marketers. The DBoI has implications that reach into every aspect of our society and lives. This is big — really big! If that sounds unduly ominous to you, let me give you a few reasons why you should be more worried than you are.

Typically, if we were to predict patterns in human behavior, there would be two sources of signals. One comes from an understanding of how humans act. As we speak, this is being attacked on multiple fronts. Neuroscience, behavioral economics, evolutionary psychology and a number of other disciplines are rapidly converging on a vastly improved understanding of what makes us tick. From this base understanding, we can then derive hypotheses of predicted behaviors in any number of circumstances.

This brings us to the other source of behavior signals. If we have a hypothesis, we need some way to scientifically test it. Large-scale collections of human behavioral data allow us to search for patterns and identify underlying causes, which can then serve as predictive signals for future scenarios. The Database of Intentions gives us a massive source of behavior signals that capture every dimension of societal activity. We can test our hypotheses quickly and accurately against the tableau of all online activity, looking for the underlying influences that drive behaviors.

At the intersection of these two is something of tremendous import. We can start predicting human behavior on a massive scale, with unprecedented accuracy. With each prediction, the feedback loop between qualitative prediction and quantitative verification becomes faster and more efficient. Throw a little processing power at it and we suddenly have an artificially intelligent, self-improving predictive model that will tell us, with startling accuracy, what we’re likely to do in the future.
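That predict-verify-update loop is simple enough to sketch. Here the “true” weights stand in for whatever actually drives behavior; everything is a toy illustration, not any real system.

```python
# A minimal predict-verify-update loop: forecast, observe, adjust, repeat.
import numpy as np

rng = np.random.default_rng(0)
true_weights = np.array([0.7, -0.2, 0.5])  # the real (unknown) drivers
weights = np.zeros(3)                      # the model's current hypothesis
learning_rate = 0.1

for step in range(1000):
    signals = rng.normal(size=3)           # one observation of behavior signals
    prediction = weights @ signals         # qualitative prediction, quantified
    outcome = true_weights @ signals       # what people actually did
    weights += learning_rate * (outcome - prediction) * signals  # verify, adjust

print("learned:", weights.round(2), "true:", true_weights)
```

Each pass through the loop shrinks the gap between hypothesis and observed behavior, which is exactly the self-improvement described above.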

This ain’t just about selling video games, people. This is a much, much, much bigger deal.

Climbing the Slippery Slopes of Mount White Hat

First published August 30, 2012 in Mediapost’s Search Insider

On Monday of this week, fellow Search Insider Ryan DeShazer bravely threw his hat back in the ring regarding this question: Is Google better or worse off because of SEO?

DeShazer confessed to being vilified after a previous column indicated that Google owed us something. I admit I have a column penned but never submitted that Ryan could have added to the “vilify” side of that particular tally. But in his Monday column, Ryan touches on a very relevant point: “What is the thin line between White Hat and Black Hat SEO?” For as long as I’ve been in this industry (which is pushing 17 years now), I’ve heard that same debate. I’ve been at conference sessions where white hats and black hats went head to head on the question. It’s one of those discussions that most sane people in the world couldn’t care less about, but we in the search biz can’t seem to let go.

Ryan stirs the pot again by noting that Google may be working on an SEO “penalty box”: a temporary holding pen for sites flagged as “rank modifying spammers,” where results will fluctuate more than in the standard index. The high degree of flux should provoke further modifications by the “spammers,” which will help Google identify them and, theoretically, penalize them. DeShazer’s concern is the use of the word “spammers” in the wording of the patent application, which seems to include any “webmasters who attempt to modify their search engine ranking.”
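For what it’s worth, here is how that flux-and-watch mechanism might work in principle. This is one speculative reading of the patent application, not Google’s code, and every function and threshold below is invented.

```python
# Speculative sketch of the patent's "penalty box" idea: show suspect sites
# extra rank noise, then treat reaction to that noise as the spam signal.
import random

def displayed_rank(true_rank: int, in_penalty_box: bool) -> int:
    """Suspected rank-modifiers see much noisier rankings than normal sites."""
    flux = random.randint(-8, 8) if in_penalty_box else random.randint(-1, 1)
    return max(1, true_rank + flux)

def looks_like_rank_modifier(tweaks_after_rank_drops: int) -> bool:
    """A site that retunes itself after every wobble is reacting to the
    injected flux, which is exactly what the patent proposes to detect."""
    return tweaks_after_rank_drops >= 3

# A patient site ignores day-to-day wobble; a rank-chaser reacts and gets flagged.
print(looks_like_rank_modifier(tweaks_after_rank_drops=5))  # True
```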

I personally think it’s dangerous to try to apply wording used in a patent application (the source for this speculation) arbitrarily against what will become a business practice. Wording in a patent is intended to help convey the concept of the intellectual property as quickly and concisely as possible to a patent review bureaucrat. The wording deals in concepts that are (ironically) pretty black and white. It has little to no relationship to how that IP will be used in the real world, which tends to be colored in various shades of gray. But let’s put that aside for a moment.

Some years ago, Alan Perkins, an SEO I would call vociferously “white hat,” came up with what I believe is the quintessential difference here: black hats optimize for a search engine; white hats optimize for humans. When I make site recommendations, they are to help people find better content faster and act on it. I believe, along with Perkins, that this approach will also do good things for your search visibility.

But that also runs the danger of being an oversimplification. The picture is muddied by clients who measure our success as SEO agencies by their position relative to their competitors on a keyword-by-keyword level. This is the bed the SEO industry has built for itself, and now we’re forced to sleep in it. I’m as guilty as the next guy of cranking out competitive ranking reports, which have conditioned this behavior over the past decade and a half.

The big problem, and one continually pointed out by vocal grey/black hats, is that you can’t keep up with competitors who use methods more black than white by sticking to white-hat tactics alone. The fact is, black hat works, for a while. And if I’m the snow-white SEO practitioner whose clients are repeatedly trounced by those using a black-hat consultant, I’d better expect some client churn. Ethics and profitability don’t always go together in this industry.

To be honest, over the past five years, I’ve largely stopped worrying about the whole white hat/black hat thing. We’ve lost some clients because we weren’t aggressive enough, but the ones who stayed were largely untouched by the string of recent Google updates targeting spammers. Most benefited from the house cleaning of the index. I’ve also spent the last five years focused a lot more on people and good experiences than on algorithms and link juice, or whatever the SEO flavor du jour is.

I think Alan Perkins nailed it way back in 2007. Optimize for humans. Aim for the long haul. And try to be ethical. Follow those principles, and I find it hard to imagine that Google would ever tag you with the label of “spammer.”

Living a B-Rated Life

First published August 16, 2012 in Mediapost’s Search Insider

I love ratings and reviews — and I’m not alone. 4.7 out of 5 people love reviews. We give them two thumbs up. They rate 96.5% on the Tomatometer. I find it hard to imagine what my life would be without those ubiquitous 5 stars to guide me.

This past weekend, I was in Banff, Alberta for my sister’s wedding. My family decided to find a place to go for breakfast. The first thing I did was check with Yelp, and soon we were stacking up the Eggs Benny at a passable breakfast buffet less than two miles from our hotel. I never knew said buffet existed before checking the reviews — but once I found it, I trusted the wisdom of crowds. It seldom steers me wrong.

Now, you do have to learn how to read between the lines of a typical review site. Just before heading to my sister’s wedding, I spent the day in Seattle at the Bazaarvoice user event and was fascinated to learn that their user research shows the typical number of reviews scanned is about seven. Once people hit seven reviews, they feel they have a good handle on the overall tone, even if there are 1,000 reviews in total. This seems right to me. That’s about the number of reviews I scan myself, when possible.

But we also rely on the average rating summaries that typically appear above the individual reviews and comments. When I read reviews, I tend to follow these rules of thumb (sketched in rough code after the list):

  • Look for the entry with the most reviews.
  • Find one that has a high average, but be suspicious of ones that have absolutely no negative reviews (unusual if you follow Rule One).
  • Scan the top six or seven reviews to get an overall sense of what people like and dislike.
  • Sort by the most negative reviews and read at least one to see what people hate.
  • Decide whether the negative reviews are the result of a one-off bad experience, or possibly an impossible-to-please customer (you can usually pick them out by their comments).
  • Do the “sniff test” to see if there are planted reviews (again, they’re not that hard to pick out).
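Here’s that checklist encoded as a rough Python sketch. The Review type and every threshold are my own inventions, not any review site’s logic; rules five and six still need a human nose.

```python
# The rules of thumb above, encoded as a rough filter.
from dataclasses import dataclass

@dataclass
class Review:
    stars: int   # 1-5 rating
    text: str

def worth_a_look(reviews: list[Review], min_count: int = 20) -> bool:
    """Apply rules 1-4; rules 5-6 (one-off gripes, planted reviews)
    need human judgment, so we just surface the material to judge."""
    if len(reviews) < min_count:          # Rule 1: favor entries with many reviews
        return False
    avg = sum(r.stars for r in reviews) / len(reviews)
    if avg >= 4.0 and all(r.stars >= 3 for r in reviews):
        return False                      # Rule 2: zero negatives is suspicious
    top_sample = reviews[:7]              # Rule 3: scan the top six or seven
    worst = min(reviews, key=lambda r: r.stars)  # Rule 4: read the most negative
    print(f"average {avg:.1f}; scanned {len(top_sample)}; worst says: {worst.text!r}")
    return avg >= 4.0
```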

I’ve used the same approach for restaurants, hotels, consumer electronics, cars, movies, books, hot tubs – pretty much anything I’ve had to open my wallet for in the past five or six years. It’s made buying so much easier. Ratings and reviews are like the Coles Notes of word of mouth. They condense the opinions of the marketplace down to the bare essentials.

It’s little wonder that Google is starting to invest heavily in this area, with recent acquisitions of Zagat and Frommer’s. These are companies that built entire businesses on eliminating risk through reviews. The aggregation and organization of opinion is a natural extension for search engines. Of course, we should give it a fancy name, like “social graph,” so we can sound really smart at industry conferences, but the foundations are built on plain common sense. Our attraction to reviews is hardwired into our noggins. We are social animals and like to travel in packs. Language evolved so we could point each other to the best cassava root patch and pass along the finer points of mastodon hunting.

As Google acquires more and more socially informed content, it will be integrated into Google’s algorithms. This is why Google had to launch its own social network. Unfortunately, Google+ hasn’t gained the critical mass needed to provide the signals Google is looking for. I personally haven’t had a Google+ invite in months. Despite Larry Page’s insistence that it’s a roaring success, others have pointed out that Google+ seems to be a network of tire kickers, with little in the way of ongoing engagement. Contrast that with Pinterest, which is all the various women in my life seem to talk about — and is outperforming even Twitter when it comes to driving referrals.

I personally love the proliferation of structured word-of-mouth. Some say it negates serendipity, but I actually believe I will be more apt to explore if there is some reassurance I won’t have a horrible experience. Otherwise, this weekend my family and I would have been having Egg McMuffins at the Banff McDonald’s — and really, is that the life you want?

Marissa Mayer and Yahoo’s Regression to the Mean

First published July 26, 2012 in Mediapost’s Search Insider

There is not a lot of overlap between the universes of Gord Hotchkiss and Marissa Mayer, but our orbits have intersected on a few occasions in the past. I’ve had the opportunity to talk to Mayer about various aspects of search on a handful of occasions, so it was with some interest that I watched the announcement and subsequent buzz about her appointment as Yahoo CEO.

Much has been said about Mayer’s personal qualifications for the job, and the general consensus is that this is a good thing for Yahoo. If this were a movie, I’m thinking she would score an 82% on the Tomatometer, handily qualifying as “fresh.” Personally, I would agree. Mayer has a razor-sharp (and somewhat intimidating) intellect, a core love for search and an innate sense of what’s right for the user. All of these things will be big plusses for Yahoo. What she hasn’t been tested on is her ability to run a big company. And that’s where things could get interesting.

No doubt Google still imparts its own “halo” effect on anyone who has spent time at the “Plex” in a leadership position. And few have spent as much time there as Mayer, who, as hire number 20, was Google’s first female engineer, logging 13 years with bosses (and hopefully still friends) Page and Brin.  These three tied a tight little knot in the early days of Google, but from the outside, that knot seems to have frayed just a little in the past few years. Mayer’s recent moves in the company have been more lateral than vertical, as later additions to the Google team were promoted above her. Undoubtedly, this was a contributing factor to the parting of the ways with Google.

But how much value does Mayer’s vast inside knowledge of Google and its past successes bring to Yahoo? It must have played a major role in her selection as the new chief Yahooligan. But was she instrumental in the streak of seemingly picture-perfect management calls in the early days of the Internet’s Golden Child? And, even if she were, does it really matter?

Earlier this year, I took part in an open forum on search at an industry conference. Our moderator tossed a ticking time bomb at the panel, in the form of this delicately stated question: “What the #%^&$ is Google doing lately? Have they gone insane?” We each offered our opinions, which ranged in the degree of madness ascribed to Google’s executives. I started my response with this: “I think we tend to downplay the role luck played in the early days of Google. Maybe their luck is just running out.”

There is a much fancier name for the hypothetical situation I described: “regression to the mean.” In his recent book, “Thinking, Fast and Slow” (a HIGHLY recommended read), psychologist and Nobel laureate Daniel Kahneman explores how this can lead us to overvalue executive talent when it’s combined with the halo effect. Kahneman even uses Google as an example: “Of course there was a great deal of skill in the Google story, but luck played a more important role in the actual event than it does in the telling of it. And the more luck was involved, the less there is to be learned.”

Regression to the mean simply means that when you take a snapshot in time that represents either exceptionally good or bad performance, subsequent snapshots tend to move closer to the average. And those highs and lows generally involve luck to some extent. So you can poach talent from a company on a hot streak, only to find that it wasn’t the executives responsible for the performance, but simply the planets aligning in a favorable way.
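Kahneman’s point is easy to verify with a toy simulation: model performance as stable skill plus fresh luck each period, select the period-one stars, and their period-two average drops on its own. A minimal sketch, with arbitrary numbers:

```python
# Regression to the mean in a few lines: performance = stable skill + fresh luck.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000
skill = rng.normal(0, 1, n)
period1 = skill + rng.normal(0, 1, n)  # luck, first draw
period2 = skill + rng.normal(0, 1, n)  # luck, second draw

stars = period1 > np.percentile(period1, 90)  # the period-1 hot streaks
print("top 10% in period 1:", period1[stars].mean().round(2))  # high
print("same group, period 2:", period2[stars].mean().round(2)) # sags toward the mean
```

No executive lost their touch between period one and period two; the luck component simply got re-rolled.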

As an ex-CEO of a company, albeit a tiny one, I find it hard to swallow that leadership might not be as important as we think in the fortunes of a company. But I generally find Kahneman to be an incredibly astute observer of human errors in judgment, so I have to resist the urge to go with my own cognitive biases here and trust Kahneman’s research.  He doesn’t say leadership is inconsequential, but he does caution against ignoring the role of timing and sheer luck.

This is also not to downplay the role Marissa Mayer will play in the future of Yahoo.  Somebody has to lead the company, and Mayer is at least as good a choice as anyone else I can think of.

Who knows? Maybe Yahoo’s luck is due to change. In their case, “regression to the mean” means there’s no place to go but up.

Will Google X Get Google’s Mojo Back?

First published May 31, 2012 in Mediapost’s Search Insider

What do you do when the search engine you started up with your fellow uber-geek partner makes you fabulously wealthy, but somehow all the billions it’s raking in leaves you feeling rather empty?

What do you do when you’re no longer the darling of the mainstream press, who once enthused that no challenge was too daunting for you and your company full of exceptionally gifted and only slightly less egotistical baby geniuses?

Well, if you’re Sergey Brin, you find a new toy. You leave the mind-numbingly mundane business of running a multibillion-dollar mega-corporation to your power-tripping co-founder, and you lock yourself away in an undisclosed office somewhere in Silicon Valley, spending your day playing with robots, space elevators, virtual reality glasses and self-driving cars.

You go back to what you wanted to do in the first place, which was to “put a dent in the universe.” And it’s probably no coincidence that you’re following in the footsteps of your “love me or hate me” mentor, the late Steve Jobs.

Say what you want about Google, I don’t think there’s any doubt that Brin and Page wanted to change the world in substantial (and hopefully non-evil) ways when they started. But the business of running a business tends to make one put ideals on hold and focus on the bottom line. Taking your company public doesn’t help. Shareholders typically value revenue over revolution, profits over prophecy. “Sure, robots and space elevators are cool, but tell me how that’s going to contribute to our quarterly earnings?” Public companies, by necessity, tend to focus on the short term rather than the long.

But Brin has never been a short-term guy. Neither has Page, for that matter. They both love to take something and spin it into a grandiose vision. Page felt he could best realize that vision by taking over the leadership of Google. But for all the power that comes with that role, there’s also a heaping helping of compromise. Brin apparently felt more comfortable in the more idealistic environs of the Google X lab.

If you’re not familiar with Google X, it’s a super-secret hidden laboratory where an ultra-powerful supercomputer and high-tech gadgets allow the billionaire to fight crime… no, wait, that’s the Bat Cave. Google X is a secret laboratory where Brin has been spending a lot of time lately. In a New York Times article from last November, it’s described as a “clandestine lab where Google is tackling a list of 100 shoot-for-the-stars ideas. Google is so secretive about the effort that many employees do not even know the lab exists.”

What are some of these “shoot-for-the-stars” ideas? There is no definitive list, given the “hush hush” nature of Google X, but third-party reports commonly mention space elevators, driverless cars, connected household appliances, and one project that is starting to see the light of day: Google Glass, wearable technology that someday could bring a Google interface to the world around us (more about this in a future column).

Google X certainly doesn’t suffer from a lack of ambition. It’s the type of thing we used to routinely expect from the Google we knew and loved. And it’s got oodles of “cool”: robots and space elevators and driverless cars, oh my! But these types of skunkworks projects are often just a way to pacify a few highly placed egos and keep them out of the way while the real work of the company gets done by those who are a little less grandiose in their ambitions.

And Google X does suffer from Google’s long-term problem of trying to do everything at once. The company has always had a problem with focus. Unlike Google X, Jobs’ lofty ambitions and breakaway projects at Apple were tied to a product that would ship sometime in the next decade. Don’t expect to see a space elevator coming to your neighborhood anytime soon.

So the question remains: Will Google X define the future of Google, or is it just a plaything to keep Sergey happy? Only time will tell.

Living Beyond Our Expectations

First published May 25, 2012 in Mediapost’s Search Insider

To my father-in-law, the Internet is a big black box that he doesn’t understand, but inside of which, all is possible. This became clear to me after the following conversation:

F-I-L: Gord?

Me: Yes?

F-I-L: Can you go on your computer and find the combination for my safe?

Me: Huh?

F-I-L: I have an old safe that I locked years ago and I can’t remember the combination. I thought you could probably find it on your computer.

Of course, by “computer,” he meant the Internet. To him, the Internet is the sum collection of all information, and in that, he’s not far wrong. Chances are, the lost combination exists in some archive of manufacturer data somewhere. If it does, it’s just one database call away from being public. One would hope that this information would always remain private, but my point is, as naïve as my father-in-law’s question seems to be, it’s probably not that far removed from reality.

Technology and our expectations of what’s possible also seem to play a game of cat and mouse. No matter what we dream up, it becomes reality in the blink of an eye. In fact, I suspect that technology now regularly outpaces our wildest dreams. Almost anything is possible, at least in theory. If something doesn’t exist, it’s probably just not practical enough for anyone to have bothered making it happen.

Consider marketing intelligence, for instance. Remember the first time you encountered what John Battelle dubbed the “database of intentions”? It was Google’s query data, and Battelle had what he called a “Holy Sh*t” moment when he realized:

This information represents, in aggregate form, a place holder for the intentions of humankind – a massive database of desires, needs, wants, and likes that can be discovered, subpoenaed, archived, tracked, and exploited to all sorts of ends. Such a beast has never before existed in the history of culture, but is almost guaranteed to grow exponentially from this day forward. This artifact can tell us extraordinary things about who we are and what we want as a culture. And it has the potential to be abused in equally extraordinary fashion.

For marketers, Google had provided us with the biggest source of marketing intelligence ever compiled. It was the crystallization of consumer intent, in searchable form. We collectively salivated over it.

But that was a decade ago. Now, as marketers, we routinely curse the gaps in and shortcomings of Google’s query data. As powerful as it once seemed, our expectations have leapfrogged ahead of it.

Battelle has recently updated his definition of the database of intentions, adding four new “fields” to it. Originally there was the search “query,” signaling “what I want.” Now, the “social graph” indicates “who I am” and “who I know.” The “status update” signals “what I’m doing” and “what’s happening.” The “check-in” signals “where I am.” And the “purchase” signals “what I’m buying.”
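For the data-minded, Battelle’s expanded schema almost begs to be written down as a record type. Here’s a sketch; the field names follow his list, but the structure and sample values are mine, not his.

```python
# Battelle's expanded "fields," written down as a simple record type.
from dataclasses import dataclass

@dataclass
class IntentSignals:
    query: str               # "what I want"
    social_graph: list[str]  # "who I am" and "who I know"
    status_update: str       # "what I'm doing" / "what's happening"
    check_in: str            # "where I am"
    purchase: str            # "what I'm buying"

signal = IntentSignals(
    query="breakfast buffet banff",
    social_graph=["family", "wedding guests"],
    status_update="in Banff for my sister's wedding",
    check_in="Banff, Alberta",
    purchase="breakfast for six",
)
print(signal.query, "->", signal.check_in)
```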

For a marketer, this is mind-blowing stuff.  The trick, of course, is to bring this all together in a meaningful way. To do so, there are multiple technology, intellectual property and privacy hurdles to get over. But it’s all very doable. It’s administration, not technology, that’s holding us back. A big part of Facebook’s IPO valuation was based on successfully pulling this off.

Again, technology has dangled a possibility at the leading edge of our expectations. But it will happen. And when it does, it will suddenly seem ho-hum to us. Our expectations will rocket forward to another possibility.

But even as fast as our expectations move, I guarantee, somewhere, someone is already working on something that lies beyond anything we ever dreamed of. Thank goodness our expectations are as elastic as they seem to be.

Search and the Age of “Usefulness”

First published April 19, 2012 in Mediapost’s Search Insider

There has been a lot of digital ink spilled over the recent changes to Google’s algorithm and what they mean for the SEO industry. This is not the first time the death knell has been sounded for SEO. It seems to have more lives than your average barnyard cat. But there’s no doubt that Google’s recent changes throw a rather large wrench into the industry as a whole. In my view, that’s a good thing.

First of all, from the perspective of the user, Google’s changes mark an evolution of search from a tool for finding information to a tool for doing the things we want to do. It’s moving from relevance as the sole measure of success to incorporating usefulness.

The algorithm is changing to keep pace with the changes in the Web as a whole. The Web is no longer just the world’s biggest repository of text-based information; it’s now a living, interactive, functional network of apps, data and information, extending our capabilities through a variety of connected devices.

Google had to introduce these back-end changes. Not to do so would have guaranteed the company would soon become irrelevant in the online world.

As Google succeeds in consistently interpreting more and more signals of user intent, it can become more confident in presenting a differentiated user experience. It can serve a different type of results set to a query that’s obviously initiated by someone looking for information than it does to the user who’s looking to do something online.
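A crude sketch of what that differentiation might look like under the hood, assuming a toy rule-based intent classifier. The trigger words and result types are invented for illustration; the real signals are far richer than a word list.

```python
# A crude, rule-based stand-in for intent detection and differentiated results.
DO_WORDS = {"buy", "book", "order", "download", "reserve"}

def classify_intent(query: str) -> str:
    words = set(query.lower().split())
    return "transactional" if words & DO_WORDS else "informational"

def results_for(query: str) -> str:
    if classify_intent(query) == "transactional":
        return "booking widgets, product listings, apps"
    return "articles, reference pages, videos"

print(results_for("book a flight to banff"))  # transactional layout
print(results_for("history of banff"))        # informational layout
```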

We’ve been talking about the death of the monolithic set of search results for years now. In truth, it never died; it just faded away, pixel by pixel. The change has been gradual, but for the first time in several years of observing search, I can truthfully say that my search experience (whether on Google, Bing or the other competitors) looks significantly different today than it did three years ago.

As search changes, so do the expectations of users. And that affects the “use case” of search. In its previous incarnation, we accepted that search was one of a number of necessary intermediate steps between our intent and our ultimate action. If we wanted to do something, we accepted the fact that we would search for information, find the information, evaluate the information and then, eventually, take the information and do something with it. The limitations of the Web forced us to take several steps to get us where we wanted to go.

But now, as we can do more of what we want to online, the steps are being eliminated. Information and functionality are often seamlessly integrated in a single destination. So we have less patience with seemingly superfluous steps between us and our destination. That includes search.

Soon, we will no longer be content with considering the search results page as a sort of index to online content. We will want the functionality we know exists served to us via the shortest possible path. We see this beginning as answers to common information requests are pushed to the top of the search results page.

What this does, in terms of user experience, is make the transition from search page to destination more critical than ever. As long as search was a reference index, the user expected to bounce back and forth between potential destinations, deciding which was the best match. But as search gets better at unearthing useful destinations, our “post-click” expectations will rise accordingly.  Whatever lies on the other side of that search click better be good. The changes in Google’s algorithm are the first step (of several yet to come) to ensure that it is.

What this does for SEO specialists is push them toward considering a much bigger picture than they previously had to worry about. They have to think in terms of a search user’s unique intent and expectations. They have to understand the importance of the transition from a search page to a landing page, and the functionality the latter has to offer. And, most of all, they have to counsel their clients on the increasing importance of “usefulness” — and how potential customers will use online channels to seek and connect to that usefulness. If the SEO community can transition to that role, there will always be a need for it.

The SEO industry and the Google search quality team have been playing a game of cat and mouse for several years now. It’s been more “hacking” than “marketing” as SEO practitioners prod for loopholes in the Google algorithm. All too often, a top ranking was the end goal, with no thought to what that actually meant for true connections with prospects.

In my mind, if that changes, it’s perhaps the best thing to ever happen in the SEO business.

The ZMOT Continued: More from Jim Lecinski

First published July 28, 2011 in Mediapost’s Search Insider

Last week, I started my conversation with Jim Lecinski, author of the new ebook from Google, “ZMOT: Winning the Zero Moment of Truth.” Yesterday, fellow Search Insider Aaron Goldman gave us his take on ZMOT. Today, I’ll wrap up by exploring with Jim the challenge that the ZMOT presents to organizations, along with some of the tips for success he covers in the book.

First of all, if we’re talking about what happens between stimulus and transaction, search has to play a big part in the activities of the consumer. Lecinski agreed, but was quick to point out that the online ZMOT extends well beyond search.

Jim Lecinski: Yes, Google or a search engine is a good place to look. But sometimes it’s a video, because I want to see [something] in use…Then [there’s] your social network. I might say, “Saw an ad for Bobby Flay’s new restaurant in Las Vegas. Anybody tried it?” That’s in between seeing the stimulus, but before… making a reservation or walking in the door.

We see consumers using… a broad set of things. In fact, 10.7 sources on average are what people are using to make these decisions between stimulus and shelf.

A few columns back, I shared the pinball model of marketing, where marketers have to be aware of the multiple touchpoints a buyer can pass through, potentially heading off in a new and unexpected direction at each point. This muddies the marketing waters to a significant degree, but it really lies at the heart of the ZMOT concept:

Lecinski: It is not intended to say, “Here’s how you can take control,” but you need to know what those touch points are. We quote the great marketer Woody Allen: “Eighty percent of success in life is just showing up.”

So if you’re in the makeup business, people are still seeing your ads in Cosmo and Modern Bride and Elle magazine, and they know where to buy your makeup. But if Makeupalley is now that place between stimulus and shelf where people are researching, learning, reading, reviewing, making decisions about your $5 makeup, you need to show up there.

Herein lies an inherent challenge for the organization looking to win the ZMOT: whose job is that? Our corporate org chart reflects marketplace realities that are at least a generation out of date. The ZMOT is virgin territory, which typically means it lies outside of one person’s job description. Even more challenging, it typically cuts across several departments.

Lecinski: We offer seven recommendations in the book, and the first one is “Who’s in charge?” If you and I were to go ask our marketer clients, “Okay, stimulus — the ad campaigns. Who’s in charge of that? Give me a name,” they could do that, right? “Here’s our VP of National Advertising.”

Shelf — if I say, “Who’s in charge of winning at the shelf?” “Oh. Well, that’s our VP of Sales” or “Shopper Marketing.” And if I say, “Product delivery” – “Well, that’s our VP of Product Development” or “R&D” or whatever. So there’s someone in charge of those classic three moments. Obviously the brand manager’s job is to coordinate those. But when I say, “Who’s in charge of winning the ZMOT?” Well, usually I get blank stares back.

If you’re intent on winning the ZMOT, the first thing you have to do is make it somebody’s job. But you can’t stop there. Here are Jim’s other suggestions:

The second thing is, you need to identify what are those zero moments of truth in your category… Start to catalogue what those are and then you can start to say, “Alright. This is a place where we need to start to show up.”

The next is to ask, “Do we show up and answer the questions that people are asking?”

Then we talk about being fast and being alert, because up to now, stimulus has been characterized as an ad you control. But sometimes it’s not. Sometimes it’s a study that’s released by an interest group. Sometimes it’s a product recall that you don’t control. Sometimes it’s a competitor’s move. Sometimes it’s Colbert on his show poking a little fun at Miracle Whip from Kraft. That wasn’t in your annual plan, but now there’s a ZMOT because, guess what happens — everybody types in “Colbert Miracle Whip video.” Are you there, and what do people see? Because that’s how they’re going to start making up their mind before they get to Shoppers Drug Mart to pick up their Miracle Whip.

Winning the ZMOT is not a cakewalk. But it lies at the crux of the new marketing reality. We’ve begun to incorporate the ZMOT into the analysis we do for clients. If you don’t, you’re leaving a huge gap between the stimulus and shelf — and literally anything could happen in that gap.