Marketing in the “Middle”

First published August 1, 2013 in Mediapost’s Search Insider

In case you haven’t heard, email is dead. In fact, it’s died several times. You could call it the cat of digital marketing, working its way through its nine lives. And it’s not alone. Search has died more than a few times. Display was DOA over a decade ago, and has since resurrected itself, only to suffer several more untimely demises. In fact, for any digital channel you might care to mention, I can probably find an obituary.

For some reason, we love to declare things dead. We like clarity and finality, and there’s nothing like death for getting an unequivocal point across. Death, by its very nature, should be the final word – except that, in these cases, it almost never is. These channels, like Mark Twain, have had “the rumors of their deaths greatly exaggerated.”

It’s yet another example of how we hate ambiguity. We don’t like being in the middle, drifting between two far-off anchor points. It feels uncertain and “mushy.” Humans don’t do well with “mushy.” We prefer predictability. We like to know where we stand, which requires knowing what’s under our feet. The middle represents “terra incognita” – undiscovered and unstable. We know, if we stand here, we have to be prepared to be nimble and fleet of foot.

This tendency comes down to an unfortunate human fragility – we like predictable outcomes, but we suck at making predictions. Not just some of us suck at it – we all suck at it. Philip Tetlock conducted a two-decade study looking at the success rate of “experts” in making predictions in a wide variety of subjects, especially politics. The outcome? Experts come out slightly ahead of coin tosses and chimps throwing darts. Tetlock’s long list of blundered predictions is staggering. Expertise does not lead to accuracy in divining the future. Yet, we still cling to this false hope. We crave a universe that unfolds as it should, or, at least, as we expect it to.

The messiness comes from the complexity of real life. There’s just too much “stuff” happening for us to make sense of it with our limited intellectual horsepower. Evolution, in its blind wisdom, has allowed for that by building in some natural defenses against complexity. We refer to them as instincts, emotions and beliefs. The nasty “gotcha” in this is that the more we accumulate experience and knowledge, the more inflexible those beliefs and instincts become. We tend to adopt “big ideas” or “macro-beliefs” as guiding principles and philosophical anchors, which become the lens through which we see the world. We trade off open-mindedness for expertise. Tetlock calls these people “hedgehogs,” after Isaiah Berlin’s essay. “Foxes,” on the other hand, draw on a wide variety of experiences to shape their views. They, by their nature, tend to live in the middle. Tetlock found that foxes have much better track records when it comes to prediction. So, if you want to know what might happen, don’t ask an expert, especially one who is regularly seen on TV. Ask a dilettante – who is much more comfortable with “mushy.”

Ironically, Jim Collins, of Good to Great and Built to Last fame, also taps Berlin for the hedgehog and fox analogies, but he believed that “hedgehogs” are what makes great companies great, because they provide a single objective to focus on – the “hedgehog” concept.

So, who’s right – Tetlock or Collins? The answer, as you would expect in a column on this theme, is that they’re both right. The world is neither a place exclusively for foxes nor hedgehogs. The sweet spot is in the middle.

Nowhere is this truer than in marketing – which has to mirror all the irrationality of human behavior. There are no absolutes in marketing; there is just a lot of mushiness in the middle. We need hedgehogs for the “big ideas” that make great marketing great. But we also need foxes to help us navigate through the middle successfully. In fact, the more time I’ve spent in marketing (trying assiduously to avoid becoming an “expert”), the more I’ve realized that the middle is where all the action is: between quantitative and qualitative, between strategy and big data, between creative branding and direct marketing, between science and art.

And here, in the middle, we hate to call anything “dead,” because you just never know what might happen.

The Ill-Defined Problem of Attribution

First published July 11, 2013 in Mediapost’s Search Insider

For the past few years, I’ve sat on the board of a company that audits audience for various publications. One of the challenges the entire audience measurement industry has faced is the explosion of channels traditional publishers have been forced to use. It’s one thing to tally up the audience of a single newspaper, magazine or radio station. It’s quite another to try to get an aggregate view of an audience of publishers that, in addition to their magazines, have a website, several blogs, various email newsletters, a full slate of webinars, a YouTube channel, multiple Twitter accounts, Facebook pages, other social destinations, digital versions of magazines and an ever-growing collection of tablet and smartphone apps. Consider, for instance, how you would estimate the size of MediaPost’s total audience.

The problem, one quickly realizes, is how you find a common denominator across all these various points of audience engagement. It’s the classic “apples and oranges” challenge, multiplied several times over.

This is the opposite side of the attribution problem. How do you attribute value, whether it’s in terms of persuading a single prospect, or the degree of engagement across an entire audience, when there are so many variables at play?

Usually, when you talk about attribution, someone in the room volunteers that the answer can be found by coming up with the right algorithm, with a caveat that usually goes something like this: “I don’t know how to do it, but I’m sure someone far smarter than I could figure it out.” The assumption is that if the data is there, there should be a solution hiding in there somewhere.
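To see how much work the phrase “the right algorithm” is doing, consider that even the simplest attribution models bake in assumptions before any data arrives. Here’s a toy sketch (the channels, the journey and the dollar value are entirely hypothetical) comparing two common heuristics:

```python
# Two common attribution heuristics applied to the same conversion path.
# The "answer" depends entirely on which model you assume up front.

def last_touch(path, value):
    """All credit goes to the final touchpoint before conversion."""
    return {path[-1]: value}

def linear(path, value):
    """Credit is split evenly across every touchpoint."""
    share = value / len(path)
    credit = {}
    for channel in path:
        credit[channel] = credit.get(channel, 0) + share
    return credit

path = ["display", "email", "search", "email"]  # hypothetical journey
print(last_touch(path, 100.0))  # {'email': 100.0}
print(linear(path, 100.0))      # {'display': 25.0, 'email': 50.0, 'search': 25.0}
```

Same journey, same data, two very different verdicts on email – and nothing in the data itself tells you which model is “right.”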

No disrespect to these hypothetical “smart” data-crunchers out there, but I believe there is a fundamental flaw in that assumption: it treats attribution as a “well-defined” problem, when in fact it’s an “ill-defined” one.

We would like to believe that this is a solvable problem that could be reduced to a simplified and predictable model. This is especially true for media buyers (who use the audience measurement services) and marketers (who would like to find a usable attribution model). The right model, driven by the right algorithm, would make everyone’s job much easier. So, let’s quit complaining and just hire one of those really smart people to figure it out!

However, if we’re talking about an ill-defined problem, as I believe we are, then we have a significantly bigger challenge. Ill-defined problems defy clear solutions because of their complexity and unpredictability. They usually involve human elements impossible to account for. They are nuanced and “grey” as opposed to clear-cut “black and white.” If you try to capture an ill-defined problem in a model, you are forced to make heuristic assumptions that may be based on extraneous noise rather than true signals. This can lead to “overfitting.”

Let me give you an example. Let’s take that essential human goal: finding a life partner. Our task is to build an attribution model for successful courtship. Let us assume that we met our own lifelong love in a bar. We would assume, then, that bars should have a relatively generous attribution of value in the partnership “conversion” funnel. But we’re ignoring all the “ill-defined” variables that went into that single conversion event: our current availability, the availability of the prospect, our moods, our level of intoxication, the friends we were with, the song that happened to be playing, the time of night, the necessity to get up early the next morning to go to work, etc.

In any human activity, the list of variables that must be considered to truly “define” the problem quickly becomes impossibly long. If we assume that bars are good places to find a partner, we must simplify to the point of “overfitting.” It may turn out that a grocery store, ATM or dentist’s waiting room would have served the purpose equally well.
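“Overfitting” deserves a quick illustration. In this sketch (the data is simulated, not real), a wildly flexible model chases the noise in a small sample and ends up predicting fresh data worse than a simple one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: a plain linear trend plus random noise.
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.2, size=10)

# A simple model vs. one flexible enough to memorize every point.
simple = np.polyfit(x_train, y_train, 1)    # straight line
flexible = np.polyfit(x_train, y_train, 9)  # degree 9: hits all 10 points

# Score both on fresh points from the same underlying process.
x_test = np.linspace(0.05, 0.95, 50)
y_test = 2 * x_test
err_simple = np.mean((np.polyval(simple, x_test) - y_test) ** 2)
err_flexible = np.mean((np.polyval(flexible, x_test) - y_test) ** 2)

print(f"simple model test error:   {err_simple:.4f}")
print(f"flexible model test error: {err_flexible:.4f}")
```

The flexible model fits its training data perfectly, and that is precisely its problem: it has modeled the noise, not the signal.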

Of course, you could take a purely statistical view, based on backwards-looking data. For example, we could say that of all couples, 23.7% of them met in bars. That may give us some very high level indications of “what” is happening, but it does little to help us understand the “why” of those numbers. Why do bars act as a good meeting ground?

In the end, audience measurement and attribution, being ill-defined problems, may end up as rough approximations at best. And that’s OK. It’s better than nothing. But I feel it’s only fair to warn those who believe there’s a “smarter” whiz out there who can figure all this out: Human nature is notoriously tough to predict.

Separating the Strategic Signal from the Tactical Noise in Marketing

First published April 4, 2013 in Mediapost’s Search Insider

It’s somewhat ironic that, as a die-hard Darwinist, I find myself in the position of defending strategy against the onslaught of Big Data. Since my initial column on this subject a few months ago, I’ve been diving deeper into this topic.

Here’s the irony.

Embracing Big Data is essentially embracing a Darwinist approach to marketing. It resists a top-down approach (a.k.a. strategy), instead using data feedback to drive the evolution of your marketing program. It makes marketing “antifragile,” in the words of Nassim Nicholas Taleb. In theory, it uses disorder, mistakes and unexpected events to continually improve marketing.

Embracing strategy — at least my suggested Bayesian approach to strategy — would be akin to embracing intelligent design. It defines what an expected outcome should be, then starts defining paths to get there. But it does this in the full realization that those paths will continually shift and change. In fact, it sets up the framework to enable this strategic fluidity. It still uses “Big Data,” but puts it in the context of “Big Testing” (courtesy Scott Brinker).

To remove the strategy from the equation, as some suggest, would be to leave your marketing subject to random chance. Undoubtedly, given perfect feedback and the ability to quickly adapt using that feedback, marketing could improve continually. After all, we evolved in just such an environment and we’re pretty complex organisms.  But it’s hard to argue that a designer would have designed such flaws as our pharynx, which is used both for eating and breathing, leading to a drastically higher risk of choking; our spinal column, which tends to become misaligned in a significant portion of the population; or the fact that our retinas are “inside out.”

Big Data also requires separating “signal” from “noise” in the data. But without a strategic framework, what is the signal and what is the noise? Which data points do you pay attention to, and which do you ignore?

Here’s an even bigger question. What constitutes success and failure in your marketing program? Who sets these criteria? In nature, it’s pretty simple. Success is defined by genetic propagation. But it’s not so clear-cut in marketing. Success needs to align to some commonly understood objectives, and these objectives should be enshrined in — you guessed it, your strategy.

My belief is that if “intelligent designers” are available, why not use them? And I would hope that most marketing executives fit the bill. As long as strategy includes a rigorous testing methodology, and honest feedback does not fall victim to egotistical opinions and “yes speak” (which is a huge caveat, and a topic too big to tackle here), a program infused with strategy should outperform one left to chance.

But what about Taleb’s “Black Swans”? He argues that by providing “top down” direction, leading to interventionism, you tend to make systems fragile. In trying to smooth out the ups and downs of the environment, you build in limitations and inflexibility. You lose the ability to deal with a Black Swan, that unexpected occurrence that falls outside of your predictive horizon.

It’s a valid point. I believe that Black Swans have to be expected, but should not dictate your strategy. By their very nature, they may never happen. And if they do, they will be infrequent. If your strategy meets a Black Swan head on, a Bayesian approach should come with the humility to realize that the rules have changed, necessitating a corresponding change in strategy. But it would be a mistake to abandon strategy completely based on a “what-if.”

The Evolution of Strategy

First published January 3, 2013 in Mediapost’s Search Insider

Last week I asked the question, “Will Big Data Replace Strategic Thinking?” Many of you answered, with the ratio splitting approximately two to one on the side of thinking. But, said fellow Search Insider Ryan Deshazer, “Not so fast! Go beyond the rebuttal!”

I agree with my friend Ryan. This is not a simple either/or answer. We (or at least 66% of us) may agree that models and datasets, no matter how good they are, can’t replace thinking. But we can’t dismiss the importance of them, either. Strategy will change, and data will be a massive driver in that change.

Both the Harvard Business Review and the New York Times have recent posts on the subject. In HBR, Justin Fox tells of a presentation by Vivek Ranadive, who said, “I believe that math is trumping science. What I mean by that is you don’t really have to know why, you just have to know that if a and b happen, c will happen.”

He further speculates that U.S. monetary policy might do better being guided by an algorithm rather than bankers: “The fact is, you can look at information in real time, and you can make minute adjustments, and you can build a closed-loop system, where you continuously change and adjust, and you make no mistakes, because you’re picking up signals all the time, and you can adjust.”

The Times’ Steve Lohr also talks about the recent enthusiasm for a quantitative approach to management, evangelized by Erik Brynjolfsson, Director of the MIT Center for Digital Business, who says Big Data will “replace ideas, paradigms, organizations and ways of thinking about the world.”

However, Lohr and Fox (who wrote the excellent book, “The Myth of the Rational Market”) caution about the oversimplifications inherent in modeling. Take, for example, some of the potentially flawed assumptions in Ranadive’s version of an algorithmically driven monetary policy:

–       Something as complex as monetary policy can be contained in a closed loop system

–       The past can reliably predict the future

–       If it doesn’t — and things do head into uncharted territory — you’ll be able to “tweak” things into place as new information becomes available.

Fox uses the analogy of a Landing Page A/B (or multivariate) test as an example of the new quantitative approach to the world. In theory, page design could be left to a totally automated and testable process, where real-time feedback from users eventually decides the optimal layout. It sounds good in theory, but here’s the problem with this approach to marketing: You can’t test what you don’t think of. The efficacy of testing depends on the variables you choose to test. And that requires some thinking. Without a solid hypothesis based on a strategic view of the situation, you can quickly go down a rabbit hole of optimizing for the wrong things.
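To be clear, the quantitative machinery itself is not the problem — the statistical core of an A/B comparison fits in a few lines. Here’s a minimal sketch (the visitor and conversion counts are invented) of checking whether variant B genuinely beats variant A:

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under "no difference"
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results: A converts 180/2000 visits, B converts 230/2000.
z = two_proportion_z(180, 2000, 230, 2000)
print(f"z = {z:.2f}")  # beyond 1.96, so "significant" at the 5% level
```

The math will happily declare a winner every time; it will never tell you about the variant — or the visitor goal — you didn’t think to test.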

For example, most heavily tested landing pages I’ve seen all reach the same eventual destination: a page optimized for one definition of a conversion. Typically this would be the placement of an order or the submission of a form. There will be reams of data showing why this is the optimal variation. But what about all the prospects that hit that page for which the one offered conversion wasn’t the right choice? How do they get captured in the data? Did anyone even think to include them in the things to test for?

Fox offers a hybrid view of strategic management that more closely aligns with where I see this all going — call it Bayesian Strategic management. Traditional qualitative strategic thinking is required to set the hypothetical view of possible outcomes, but then we apply a quantitative rigor to measure, test and adjust based on the data we collect. This treads the line between the polarities of responses gathered by last week’s column – it puts the “strategic” horse before the “big data” cart. More importantly, it holds our strategic view accountable to the data. A strategy becomes a hypothesis to be tested.
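That last line can be taken almost literally. In Bayesian terms, the strategy supplies a prior — what we expect to happen — and the incoming data revises it. A toy sketch using a Beta prior on a conversion rate (all the numbers are hypothetical):

```python
# Beta-Binomial updating: strategic expectation (prior) + campaign data
# (evidence) -> revised belief (posterior).

# Prior: the strategy says to expect roughly a 5% conversion rate.
alpha, beta = 5.0, 95.0
prior_mean = alpha / (alpha + beta)  # 0.05

# Evidence: the campaign delivers 30 conversions in 400 visits.
conversions, visits = 30, 400
alpha += conversions
beta += visits - conversions

posterior_mean = alpha / (alpha + beta)  # belief shifts toward the data
print(f"prior:     {prior_mean:.3f}")
print(f"posterior: {posterior_mean:.3f}")
```

The strategy isn’t discarded when the data disagrees with it; it is held accountable and revised — which is exactly the point.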

One final thought. Whether we’re talking about Ranadive’s utopian (or dystopian?) vision of a data driven world or any of the other Big Data evangelists, there seems to be one assumption that I believe is fundamentally flawed, or at least, overly optimistic: that human behaviors can be adequately contained in a predictable, rational, controlled closed loop system. When it comes to understanding human behavior, the capabilities of our own brain far outstrip any algorithmically driven model ever created — yet we still get it wrong all the time.

If Big Data could really reliably predict human behaviors, do you think we’d be in the financial situation we’re in now?

Will Big Data Replace Strategy?

First published December 27, 2012 in Mediapost’s Search Insider

Anyone who knows me knows I love strategy. I have railed incessantly about our overreliance on tactical execution and our overlooking of the strategy that should guide said execution. So imagine my discomfort this past week when, in the midst of following up on the McLuhan theme of my last column, I ran into a tidbit from Ray Rivera, via Forbes, speculating that strategic management might be becoming obsolete.

Here’s an excerpt: “As amounts of data approaching entire populations become available, models become less predictive and more descriptive. As inference becomes obsolete, management methods that rely on it will likely be affected. A likely casualty is strategic management, which attempts to map out the best course of action while factoring in constraints. Classic business strategy (e.g., the five forces) is especially vulnerable to losing the relevance it accumulated over several decades.”

The crux of this is the obsolescence of inference. Humans have historically needed to infer to compensate for imperfect information. We couldn’t know everything with certainty, so we had to draw conclusions from the information we did have. The bigger the gap, the greater the need for inference. And, like most things that define us, the ability to infer is distributed through our population along a bell curve. We all have the ability to fill in the gaps through inference, but some of us are much better at it than others.

The author of this post speculates that as we get better and more complete information, it will become less important to fill in the gaps to set a path for the future — and more important to act quickly on what we know, correcting our course in real time: “With access to comprehensive data sets and an ability to leave no stone unturned, execution becomes the most troublesome business uncertainty. Successful adaptation to changing conditions will drive competitive advantage more than superior planning.”

Now, just in case you’re wondering, I don’t agree with the premise, but there is considerable merit to Rivera’s hypothesis, so let’s consider it using a fairly accessible analogy: the driving of a car. If we’re driving to a destination where we’ve never been before, and we don’t know what we’ll encounter en route, we need a strategy. We need to know the general direction, we need a high-level understanding of the available routes, we need to know what an acceptable period of time would be to reach our destination, and we need some basic strategic guidelines to deal with the unexpected – for example, if a primary route is clogged with traffic, we will find an alternative route using secondary roads. These are all tools we use to help us infer what the best way to get from point A to B might be.

But what if we have a GPS that has access to real-time traffic information and can automatically plot the best available route? Given the analogous scenario, this is as close to perfect information as we’re likely to get. We no longer need a strategy. All we need to do is follow the provided directions and drive. No inference is required. The gaps are filled by the data we have available to us.

So far, so good. But here is the primary reason why I believe strategic thinking is in no danger of expiring anytime soon. If strategy were only about inference, I might agree with Rivera’s take (by the way, he’s from SAP, so he may have a vested interest in promoting the wonders of Big Data).

However, I believe that interpretation and synthesis are much more important outcomes of strategy. The drawback of data is that it needs to be put into context to be useful. Unlike traffic jams and roadways, which tend to be pretty concrete concepts (stop and go, left or right — and yes, I used the pun intentionally), business is a much more abstract beast. One can measure performance indicators ad nauseam, but there should be some framework to give them meaning. We can’t just count trees (or, in the era of Big Data, the number of leaves per limb per tree). We need to recognize a forest when we see one.

Interpretation is one advantage, but synthesis is the true gold that strategic thinking yields. Data tends to live in silos. Metrics tend to be analyzed in homogenous segments (for example, Web stats, productivity yields, efficiency KPIs). True strategy can bring disparate threads together and create opportunities where none existed before. Here, strategy is not about filling the gaps in the information you have, it’s about using that information in new ways to create something remarkable.

I disagree most vehemently with Rivera when he says: “While not disappearing altogether, strategy is likely to combine with execution to become a single business function.”

I’ve been working in this business for going on three decades now. In all that time, I have rarely seen strategy and execution combine successfully in a single function (or, for that matter, a single person). They are two totally different ways of thinking, relying on two different skill sets. They are both required, but I don’t believe they can be combined.

Strategy is that intimately and essentially human place where business is not simply science, but becomes art. It is driven by intuition and vision. And I, for one, am not looking forward to the day where it becomes obsolescent.

A Benchmark in Time

First published September 13, 2012 in Mediapost’s Search Insider

That’s the news from Lake Wobegon, where all the women are strong, all the men are good-looking, and all the children are above average. — Garrison Keillor

How good are you? How intelligent, how talented, how kind, how patient? You can give me your opinion, but just like the citizens of Lake Wobegon, you’ll be making those judgments in a vacuum unless you compare yourself to others. Hence the importance of benchmarking.

The term benchmarking started with shoemakers, who asked their customers to put their feet on a bench where they were marked to serve as a pattern for cutting leather. But of course, feet are absolute things. They are a certain size and that’s all there is to it. Benchmarking has since been adapted to a more qualitative context.

For example, let’s take digital marketing maturity. How does one measure how good a company is at connecting with customers online? We all have our opinions, and I suspect, just like those little Wobegonians, most of us think we’re above average. But, of course, we all can’t be above average, so somebody is fudging the truth somewhere.

I have found that when we work with a client, benchmarking is an area of great political sensitivity, depending on your audience. Managers appreciate competitive insight and are a lot less upset when you tell them they have an ugly baby (or, at least, a baby of below-average attractiveness) than the practitioners who are on the front lines. I personally love benchmarking, as it serves to get a team on the same page. False complacency vaporizes in the face of real evidence that a competitor is repeatedly kicking your tushie all over the block.  It grounds a team in a more objective view of the marketplace and takes decision-making out of the vacuum.

But before going on a benchmarking bonanza, here are some things to consider:

Weighting is Important

It’s pretty easy to assign a score to something. But it’s more difficult to understand that some things are more important than others. For example, I can measure the social maturity of a marketer based on Facebook likes, the frequency of Twitter activity, the number of stars they have on Yelp or the completeness of their LinkedIn profile, but these things are not equal in importance. Not only are they not equal, but the relative importance of each social activity will change from industry to industry and market to market. If I’m marketing a hotel, TripAdvisor reviews can make or break me, but I don’t care as much about my number of LinkedIn connections. If I’m marketing a movie or a new TV show, Facebook “Likes” might actually be a measure that has some value. Before you start assigning scores, you need a pretty accurate way to weight them for importance.
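Mechanically, this boils down to a weighted average rather than a raw tally. A quick sketch (the channels, scores and weights below are invented for illustration):

```python
# Weighted benchmark score: the same raw scores can yield very
# different verdicts depending on what matters in your market.

def weighted_score(scores, weights):
    """Weighted average of channel scores, normalized by total weight."""
    total = sum(weights[k] for k in scores)
    return sum(scores[k] * weights[k] for k in scores) / total

scores = {"tripadvisor": 9, "facebook": 4, "linkedin": 2}

hotel_weights = {"tripadvisor": 0.6, "facebook": 0.3, "linkedin": 0.1}
b2b_weights = {"tripadvisor": 0.1, "facebook": 0.2, "linkedin": 0.7}

print(f"hotel view: {weighted_score(scores, hotel_weights):.1f}")  # 6.8
print(f"B2B view:   {weighted_score(scores, b2b_weights):.1f}")    # 3.1
```

Same raw scores, very different verdicts — which is why getting the weights right matters more than the scoring itself.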

Be Careful Whom You’re Benchmarking Against

If you ask any marketer who their primary competitors are, they’ll be able to give you three or four names off the top of their head. That’s the obvious competition. But if we’re benchmarking digital effectiveness, it’s the non-obvious competition you have to worry about. That’s why we generally include at least one “aspirational” candidate in our benchmarking studies. These candidates set the bar higher and are often outside the traditional competitive set. While it may be gratifying to know you’re ahead of your primary competitors, that will be small comfort if a disruptive competitor (think Amazon in the industrial supply category) suddenly changes the game and blows up your entire market model by resetting your customers’ expectations. Good benchmarking practices should spot those potential hazards before they become critical.

Stay Objective

If qualitative assessments are part of your benchmarking (and there’s nothing wrong with that), make sure your assessments aren’t colored by internal biases. Having your own people do benchmarking can sometimes give you a skewed view of your market.  It might be worthwhile to find an external partner to help with benchmarking, who can ensure objectivity when it comes to evaluation and scoring.

And finally, remember that everybody is above average in something…

Zappos’ New Business Model: Have Insight, Will Respond

A story this morning in Adweek about Zappos reminded me of a recent experience with a client. I’ll get to the Zappos story in a moment, but first our client’s story.

This customer wanted to set up a client summit at Google’s main office in Mountain View. Attending the summit were not only their search team but also some highly placed executives. The reason for the summit was ostensibly to talk about the client’s search campaign, but it soon became apparent that the executives were looking for something more. They had specifically asked for someone to spend some time talking about Google’s culture.

Throughout the day, Google paraded a number of new advertising offerings in front of the team. While the front line teams were intrigued, one particular senior executive seemed to be almost snoozing through the sales pitches for Google’s new advertising gadgets and gizmos. It was only when the conversation turned to Google’s business practices that the executive perked up, suddenly taking volumes of notes. It made me realize that sometimes, it’s not only what we sell that has value for our customers, it’s what we are. I chatted about this recently with someone from Google, saying that their corporate philosophy and way of doing business is of interest to people. I urged him to find a way to package it as a value add for customers. While he agreed the idea was intriguing, I think it got relegated to the “polite jotting down without any intention of acting on it” category.

Now, back to the Zappos story. That’s exactly what they’re doing: taking their customer service religion and packaging it so that thousands of businesses can learn by going directly to the source. Zappos Insights is a subscription service ($39.95 per month) that lets aspiring businesses ask questions about the “Zappos way” and get answers from actual Zappos employees.

The service, said CEO Tony Hsieh, is targeted at the “Fortune 1 million” looking to build their businesses. “There are management consulting firms that charge really high rates,” he said. “We wanted to come up with something that’s accessible to almost any business.”

It’s a pretty smart move. There’s no denying we’re going through a sea change in how business is done. And I’ve always felt that there’s an impractical divide between consultants and the businesses that are out there implementing every day. It seems like you can either do, or teach, but not both. Amazing stories such as Apple, Google, Southwest and Zappos have shown that innovation in culture is as important as innovation in what ends up in the customer’s hands. Zappos is trying to blend the two in an intriguing revenue model.