The Evolution of Strategy

First published January 3, 2013 in Mediapost’s Search Insider

Last week I asked the question, “Will Big Data Replace Strategic Thinking?” Many of you answered, with the responses splitting roughly two to one on the side of thinking. But, said fellow Search Insider Ryan DeShazer, “Not so fast! Go beyond the rebuttal!”

I agree with my friend Ryan. This is not a simple either/or question. We (or at least 66% of us) may agree that models and datasets, no matter how good they are, can’t replace thinking. But we can’t dismiss their importance, either. Strategy will change, and data will be a massive driver of that change.

Both the Harvard Business Review and the New York Times have recent posts on the subject. In HBR, Justin Fox tells of a presentation by Vivek Ranadive, who said, “I believe that math is trumping science. What I mean by that is you don’t really have to know why, you just have to know that if a and b happen, c will happen.”

He further speculates that U.S. monetary policy might do better being guided by an algorithm rather than bankers: “The fact is, you can look at information in real time, and you can make minute adjustments, and you can build a closed-loop system, where you continuously change and adjust, and you make no mistakes, because you’re picking up signals all the time, and you can adjust.”

The Times’ Steve Lohr also talks about the recent enthusiasm for a quantitative approach to management, evangelized by Erik Brynjolfsson, Director of the MIT Center for Digital Business, who says Big Data will “replace ideas, paradigms, organizations and ways of thinking about the world.”

However, Lohr and Fox (who wrote the excellent book, “The Myth of the Rational Market”) caution about the oversimplifications inherent in modeling. Take, for example, some of the potentially flawed assumptions in Ranadive’s version of an algorithmically driven monetary policy:

–  Something as complex as monetary policy can be contained in a closed-loop system

–  The past can reliably predict the future

–  If it doesn’t — and things do head into uncharted territory — you’ll be able to “tweak” things into place as new information becomes available.

Fox uses the analogy of a landing page A/B (or multivariate) test as an example of the new quantitative approach to the world. In theory, page design could be left to a totally automated and testable process, where real-time feedback from users eventually decides the optimal layout. But here’s the problem with this approach to marketing: you can’t test what you don’t think of. The efficacy of testing depends on the variables you choose to test. And that requires some thinking. Without a solid hypothesis based on a strategic view of the situation, you can quickly go down a rabbit hole of optimizing for the wrong things.

For example, most heavily tested landing pages I’ve seen all reach the same eventual destination: a page optimized for one definition of a conversion. Typically this would be the placement of an order or the submission of a form. There will be reams of data showing why this is the optimal variation. But what about all the prospects that hit that page for which the one offered conversion wasn’t the right choice? How do they get captured in the data? Did anyone even think to include them in the things to test for?
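This blind spot is easy to demonstrate with a toy simulation (all variant names and conversion rates below are invented for illustration): an automated test dutifully crowns the best of the variants it was given, while a better variant nobody thought to include never enters the comparison.

```python
import random

random.seed(42)

# True (hidden) conversion rates. Variant C is the one nobody thought to test.
true_rates = {"A": 0.020, "B": 0.025, "C": 0.040}
tested = ["A", "B"]  # the optimizer only ever sees the variants we defined

def run_test(variant, visitors=10_000):
    """Simulate visitors hitting a landing page variant; count conversions."""
    return sum(random.random() < true_rates[variant] for _ in range(visitors))

results = {v: run_test(v) for v in tested}
winner = max(results, key=results.get)
print(f"Winner among tested variants: {winner} ({results[winner]} conversions)")
# The test crowns whichever of A or B performed better in the sample,
# while the genuinely superior C was never in the running:
# you can't test what you don't think of.
```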

Fox offers a hybrid view of strategic management that more closely aligns with where I see this all going — call it Bayesian strategic management. Traditional qualitative strategic thinking is required to set the hypothetical view of possible outcomes, but then we apply a quantitative rigor to measure, test and adjust based on the data we collect. This treads the line between the polarities of responses gathered by last week’s column — it puts the “strategic” horse before the “big data” cart. More importantly, it holds our strategic view accountable to the data. A strategy becomes a hypothesis to be tested.
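One way to make “a strategy becomes a hypothesis to be tested” concrete is a Beta-Binomial update: the strategic belief becomes a prior over, say, a conversion rate, and the collected data revises it. This is a minimal sketch with invented numbers, not real campaign figures.

```python
# Encode a strategic belief as a Beta prior over a conversion rate,
# then let observed data update it via the conjugate Beta-Binomial rule.

def update_belief(prior_alpha, prior_beta, conversions, visitors):
    """Beta-Binomial conjugate update: returns posterior parameters."""
    return prior_alpha + conversions, prior_beta + (visitors - conversions)

# Strategic hypothesis: we believe the rate is around 5% (alpha=5, beta=95).
alpha, beta = 5.0, 95.0
print(f"Prior mean: {alpha / (alpha + beta):.3f}")   # 0.050

# The data holds the strategy accountable: 120 conversions in 5,000 visits.
alpha, beta = update_belief(alpha, beta, conversions=120, visitors=5000)
print(f"Posterior mean: {alpha / (alpha + beta):.3f}")  # ~0.025
```

The point of the sketch is the direction of flow: the qualitative strategy supplies the prior, and the quantitative data corrects it, rather than either one standing alone.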

One final thought. Whether we’re talking about Ranadive’s utopian (or dystopian?) vision of a data-driven world or any of the other Big Data evangelists, there seems to be one assumption that I believe is fundamentally flawed, or at least overly optimistic: that human behaviors can be adequately contained in a predictable, rational, controlled closed-loop system. When it comes to understanding human behavior, the capabilities of our own brain far outstrip any algorithmically driven model ever created — yet we still get it wrong all the time.

If Big Data could really reliably predict human behaviors, do you think we’d be in the financial situation we’re in now?

Will Big Data Replace Strategy?

First published December 27, 2012 in Mediapost’s Search Insider

Anyone who knows me knows I love strategy. I have railed incessantly about our overreliance on tactical execution and our overlooking of the strategy that should guide said execution. So imagine my discomfort this past week when, in the midst of following up on the McLuhan theme of my last column, I ran into a tidbit from Ray Rivera, via Forbes, speculating that strategic management might be becoming obsolete.

Here’s an excerpt: “As amounts of data approaching entire populations become available, models become less predictive and more descriptive. As inference becomes obsolete, management methods that rely on it will likely be affected. A likely casualty is strategic management, which attempts to map out the best course of action while factoring in constraints. Classic business strategy (e.g., the five forces) is especially vulnerable to losing the relevance it accumulated over several decades.”

The crux of this is the obsolescence of inference. Humans have historically needed to infer to compensate for imperfect information. We couldn’t know everything with certainty, so we had to draw conclusions from the information we did have. The bigger the gap, the greater the need for inference. And, like most things that define us, the ability to infer was sprinkled through our population in a bell-curved standard distribution. We all have the ability to fill in the gaps through inference, but some of us are much better at it than others.

The author of this post speculates that as we get better and more complete information, it will become less important to fill in the gaps to set a path for the future — and more important to act quickly on what we know, correcting our course in real time: “With access to comprehensive data sets and an ability to leave no stone unturned, execution becomes the most troublesome business uncertainty. Successful adaptation to changing conditions will drive competitive advantage more than superior planning.”

Now, just in case you’re wondering, I don’t agree with the premise, but there is considerable merit to Rivera’s hypothesis, so let’s consider it using a fairly accessible analogy: the driving of a car. If we’re driving to a destination where we’ve never been before, and we don’t know what we’ll encounter en route, we need a strategy. We need to know the general direction, we need a high-level understanding of the available routes, we need to know what an acceptable period of time would be to reach our destination, and we need some basic strategic guidelines to deal with the unexpected – for example, if a primary route is clogged with traffic, we will find an alternative route using secondary roads. These are all tools we use to help us infer what the best way to get from point A to B might be.

But what if we have a GPS that has access to real-time traffic information and can automatically plot the best available route? Given the analogous scenario, this is as close to perfect information as we’re likely to get. We no longer need a strategy. All we need to do is follow the provided directions and drive. No inference is required. The gaps are filled by the data we have available to us.
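The GPS scenario is, at bottom, a shortest-path computation over live data. A minimal sketch (with an invented road network and travel times) shows why no inference is needed once the travel times are known; a traffic jam is just an updated edge weight.

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm over current travel times (minutes)."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + minutes, nxt, path + [nxt]))
    return float("inf"), []

roads = {
    "home": {"highway": 10, "side_street": 15},
    "highway": {"office": 20},
    "side_street": {"office": 12},
}
print(shortest_route(roads, "home", "office"))
# -> (27, ['home', 'side_street', 'office'])

# A traffic jam is just new data: update the edge weight and re-run.
roads["side_street"]["office"] = 40
print(shortest_route(roads, "home", "office"))
# -> (30, ['home', 'highway', 'office'])
```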

So far, so good. But here is the primary reason why I believe strategic thinking is in no danger of expiring anytime soon. If strategy were only about inference, I might agree with Rivera’s take (by the way, he’s from SAP, so he may have a vested interest in promoting the wonders of Big Data).

However, I believe that interpretation and synthesis are much more important outcomes of strategy. The drawback of data is that it needs to be put into a context to make it useful. Unlike traffic jams and roadways, which tend to be pretty concrete concepts (stop and go, left or right — and yes, I used the pun intentionally), business is a much more abstract beast. One can measure performance indicators ad nauseam, but there must be some framework to give them meaning. We can’t just count trees (or, in the era of Big Data, the number of leaves per limb per tree). We need to recognize a forest when we see one.

Interpretation is one advantage, but synthesis is the true gold that strategic thinking yields. Data tends to live in silos. Metrics tend to be analyzed in homogenous segments (for example, Web stats, productivity yields, efficiency KPIs). True strategy can bring disparate threads together and create opportunities where none existed before. Here, strategy is not about filling the gaps in the information you have, it’s about using that information in new ways to create something remarkable.

I disagree most vehemently with Rivera when he says: “While not disappearing altogether, strategy is likely to combine with execution to become a single business function.”

I’ve been working in this business for going on three decades now. In all that time, I have rarely seen strategy and execution combine successfully in a single function (or, for that matter, a single person). They are two totally different ways of thinking, relying on two different skill sets. They are both required, but I don’t believe they can be combined.

Strategy is that intimately and essentially human place where business is not simply science, but becomes art. It is driven by intuition and vision. And I, for one, am not looking forward to the day when it becomes obsolete.

Google’s Personality Crisis

First published November 15, 2012 in Mediapost’s Search Insider

“Be not afraid of marketing: some are born marketers, some achieve marketing, and some have marketing thrust upon them.” — (paraphrased from) William Shakespeare.

Google has never been comfortable as a marketing company. The only reason it became a marketing company (or worse, a media company) is because it happened to stumble on the single most effective marketing channel of all time and had to figure out some way to monetize it. Even then, AdWords wasn’t Google’s idea, but GoTo’s (which became Overture, which became Yahoo). Google just stole it and tweaked it a little. Because that’s what engineers do. And that’s what Google is, first and foremost: a company of engineers. Google has worn its marketing mantle the same way I wear a Speedo: uncomfortably (and yes, a little incongruously).

Anytime Google has tried to embrace its inner “marketingness,” the results have ranged from vaguely boring to disastrous. Asking Google to become a marketer is kind of like asking Stephen Hawking to enter a wet T-shirt contest — a terrible waste of cranial processing power (and frankly, not something I’d particularly want to see).

Google had the questionable luck to become fabulously profitable as a marketer, simply because it created a utility that just happened to capture eyeballs when they were attached to wallets ready to spring into action. It was like stealing candy from a baby. But then cold, hard reality hit home. Google became a public company, which meant it had a lot of shareholders who fully expected the stroke of fate that poured money into Google’s coffers to continue. So the company had to find other marketing channels, which in turn meant its strategists had to get over their distaste of marketing in general.

So they, being resolutely Googlish, decided to reinvent marketing to make it less, well, “markety.” They would introduce their idea of marketing, infused with a pure geekish streak of scalability, market efficiency and engineering precision. I think we all know how that turned out, as the echoes of Google TV, Google Print and Google Radio still reverberate in the Hall of Stupendously Spectacular Failures.

Face it, Google. You don’t get marketing, so stop trying. Step away from the bling and tchotchkes. Retreat to the warm embrace of your slide rules and HP scientific calculators.

But, whether it gets marketing or not, Google’s dilemma remains. Its revenues depend on marketing. And marketing revenues can be staggeringly profitable, yet notoriously fickle. It’s all about eyeballs, preferably with wallets attached. Where can Google get more of the same, if not from marketing?

If we break this down, we can assume a few things to be true. Eyeballs will increasingly turn their gaze online, at some screen or another. Also, those eyeballs will be looking for ever-more-relevant stuff to do something with. Finally, if that “stuff” has something to do with buying things, then there’s a good opportunity for companies who market those things.

Let’s look at what Google is good at. Google is good — make that great — at engineering scalable, efficient, redundant systems.  Google strategists believe that if they could totally remove human “noise” from the equation, the world would be a much happier place. It’s Nirvana as envisioned by Stanley Kubrick: a little sterile, but oh-so-dependable.

That skill set is a horrible match for marketing, where empathy is kind of important. But it’s a great match for utility providers. At its roots, that’s what Google was, right from the inception of “Backrub” running surreptitiously from a Stanford dorm room: it was a tool.

Google has tentatively ventured down this path — with WiFi access, Android and, most recently, by rolling out high-speed Internet access for Google TV subscribers. But in each of those cases, the utility was not the end goal; it was a platform for more marketing.

At what point will Google principals realize they suck at marketing, but are damned good at providing the underlying infrastructure required? It’s not as sexy, or as profitable, but as Google approaches middle age, isn’t it time they started getting comfortable in their own skin?

The Swapping of the Old “Middle” for the New

First published November 8, 2012 in Mediapost’s Search Insider

For the past several columns, I’ve been talking about disintermediation. My hypothesis is that technology is driving a general disintermediation of the marketplace (well, it’s not really my hypothesis — it’s a pretty commonly held view) and is eliminating a vast “middle” infrastructure that has accounted for much of the economic activity of the past several decades. It’s a massive shift (read “disruption”) in the market that will play out over the next several years.

But every good hypothesis must stand up to challenge, and an interesting one came from a recent article in Slate, which talks about the growth of a brand new kind of “gatekeeper”: the “bots” that crawl the Web and filter (or, in some cases, generate) content based on a preset algorithm. These bots can crawl blog posts, pinpointing spam and malicious posts so they can be removed. The sophistication is impressive, as the most advanced of these tap into the social graph to learn, in real time, the context of posts so they can make nuanced judgment calls about what is and isn’t spam.

But these bots don’t simply patrol the online frontier, they also contribute to it. They can generate automated social content based on pre-identified themes. In other words, they can become propaganda generators. So now we have a new layer of “middle” that acts both as censor and propagandist. Have we gained anything here?

The key concept here is one of control. The “middle” used to control both ends of the market. It did so because it controlled the bridge between the producers and consumers.  This was control in every sense: control of the flow of finance, control of the physical market itself, and control of communication.

With disintermediation, direct connections are being built between producers and consumers. With this comes a redefinition of control. In terms of financial control, disintermediation should (theoretically) produce a more efficient marketplace, resulting in more profit for producers and better prices for consumers. That drastically oversimplifies the pain involved in getting to a more efficient marketplace, but you get the idea.  In this case, the only loser is the middle, so there’s no real incentive for the producers or consumers to ensure its survival.

Disintermediation of the physical market essentially works itself out. If the product needs a face-to-face representative, the middle will survive. If not, then we’ll figure out how to facilitate the sale online, and you can expect to see a lot of UPS vans in your neighborhood. We consumers may mourn the loss of a “face” in some segments of our marketplace, but we’ll get over it.

When it comes to control of communication, it’s more difficult to crystal-ball what might happen in the future. This area is also where new gatekeepers are most likely to appear.

Communication between marketers and the market used to be tightly channeled and controlled by the “middle.” It also used to flow in essentially one direction – from the marketer to the market. It was always very difficult for true communication to flow the other way.

But now, content is sprouting everywhere and becomes publicly accessible through a multitude of online touch points. It could soon become overwhelming to navigate through, both for consumers and producers. In this case, arguably, the middle served a very real service to both producers and consumers. The middle could edit communication, saving us from wading through a mountain of content to get what we were looking for.  It could also ensure that the messages producers wanted to get to the market were effectively delivered. The channels were under the control of the marketplace. For this reason, both marketers and the market may be reluctant to see disintermediation when it comes to communication.

The new gatekeepers, such as those featured in the Slate article, seem to serve both ends of the market. They help consumers access higher quality information by weeding out spam and objectionable content. And they help producers exercise some degree of control over negative content generated by the marketplace. In the absence of tight control of channels, a concept that’s gone the way of the dodo, this scalable, automated gatekeeper seems to serve a purpose.

If the need is great enough on both sides of the market, we are likely to find a new “middle” emerge: an “infomediary,” to use the term coined by John Hagel, Marc Singer and Jeffrey Rayport. According to this definition of the middle, Google emerges as the biggest of the “infomediaries.”

The question is, how much control are we willing to give this new evolution of the middle? In return for hacking some semblance of sanity out of the chaos that is an unmediated information marketplace, how much are we willing to pay? And where does this control (and with it, the associated power) now live? Who owns the new gatekeepers? And to whom are those gatekeepers accountable?

Disintermediation of a New, More Connected World

First published November 1, 2012 in Mediapost’s Search Insider

On Monday, one of the byproducts of disintermediation hit me with the force of, well — a hurricane, to be exact. We are more connected globally than ever before.

This Monday and Tuesday, three different online services I use went down because of Sandy. They all had data centers on the East Coast.

Disintermediation means centralization, which means that we will have more contact with people and businesses that spread across the globe.

The laptop I’m writing this column on (a MacBook Pro) was recently ordered from Apple. I was somewhat amazed to see the journey it took on its way to me. It left a factory in China, spent a day in Shanghai, then passed through Osaka, Japan on its way to Anchorage, Alaska. From there it was on to Louisville, Ky. (ironically, the flight path probably went right over my house), then back to Seattle, Vancouver and then to my front door. If my laptop were a car, I would have refused delivery — it already had a full year’s worth of miles on it before I even got to use it.

A disintermediated world means a more globally reliant world. We depend on assembly factories in Taiyuan (China), chip factories in Yamaguchi (Japan), call centers in Pune (India), R&D labs in Hagenberg (Austria), industrial designers in Canberra (Australia) and yes, data centers in lower Manhattan. When workers brawl, tsunamis hit, labor strikes occur and tropical storms blow ashore, even though we’re thousands of miles away, we feel the impact. We no longer just rely on our neighbors, because the world is now our neighborhood.

This adds a few new wrinkles to the impacts of disintermediation, both positive and negative.

On the negative side, as we saw forcefully demonstrated this week, is the realization that our connected markets are more fragile than ever. As production becomes concentrated due to various global advantages, it is more vulnerable to single-point failures. One missing link and entire networks of co-dependent businesses go down. This lack of redundancy will probably be corrected in time, but for now, it’s what we have to live with.

But, on the positive side, our new connectedness also means we have to have an interest in the well-being of people who would have been outside our scope of consciousness a mere decade ago. We care about the plight of the average worker at Foxconn, if for no other reason than it will delay the shipment of our new Mac. I exaggerate here (I hope we’re not that blasé about human rights in China) to make a point: when we have a personal stake in something, we care more. When you depend on someone for something important to you, you tend to treat them with more consideration. Thomas Friedman, in his book “The World is Flat,” called it the Dell Theory of Conflict Prevention:

“The Dell Theory stipulates: No two countries that are both part of a major global supply chain, like Dell’s, will ever fight a war against each other as long as they are both part of the same global supply chain.”

To all of you who weathered the storm, just know that you’re not alone in this. We depend on you – so, in turn, feel free to depend on us.

The Balancing of Market Information

First published October 25, 2012 in Mediapost’s Search Insider

In my three previous columns on disintermediation, I made a rather large assumption: that the market will continue to see a balancing of information available both to buyers and sellers. As this information becomes more available, the need for the “middle” will decrease.

Information Asymmetry Defined

Let’s begin by exploring the concept of information asymmetry, courtesy of George Akerlof, Michael Spence and Joseph Stiglitz.  In markets where access to information is unbalanced, bad things can happen.

If the buyer has more information than the seller, then we can have something called adverse selection. Take life and health insurance, for example. Smokers (on average) get sick more often and die younger than non-smokers. If an insurance company’s policyholders are 50% smokers and 50% non-smokers, but the company is not allowed to know which is which, it has a problem with adverse selection. It will lose money on the smokers, so it will increase rates across the board. The problem is that non-smokers, who don’t use their insurance as much, will get angry and may cancel their policies. This means the “book of business” will become even less profitable, driving rates even higher. The solution, which we all know, is simple: ask policy applicants if they smoke. Imperfect information is thus balanced out.
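The spiral can be put into toy numbers (all claim costs and cancellation rates below are invented): when the insurer can’t tell smokers from non-smokers, it prices to the blended average, and each rate hike drives more non-smokers out, which raises the blended average again.

```python
def break_even_premium(smoker_share, smoker_cost=2000.0, nonsmoker_cost=500.0):
    """Premium needed to cover the blended expected claim cost per policy."""
    return smoker_share * smoker_cost + (1 - smoker_share) * nonsmoker_cost

share = 0.5  # the pool starts at 50% smokers
for year in range(1, 4):
    premium = break_even_premium(share)
    print(f"Year {year}: smoker share {share:.0%}, premium ${premium:,.0f}")
    # Assume 20% of remaining non-smokers cancel after each hike,
    # pushing the smoker share of the pool upward.
    nonsmokers = (1 - share) * 0.8
    share = share / (share + nonsmokers)
```

Each pass through the loop raises the premium, which is exactly the self-reinforcing dynamic that asking “do you smoke?” short-circuits.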

If the seller has more information than the buyer, then we have a “market for lemons” (the name of Akerlof’s paper). Here, buyers are assuming risk in a purchase without knowingly accepting that risk, because they’re unaware of problems that the seller knows exist. Think about buying a used car without the benefit of an inspection, past maintenance records or any type of independent certification. All you know is what you can see by looking at the car on the lot. The seller, on the other hand, knows the exact mechanical condition of the car. This factor tends to drive down the prices of all products — even the good ones — in the market, because buyers assume quality will be suspect. The balancing of information in this case helps eliminate the lemons and has the long-term effect of improving the average quality of all products on the market.

Getting to Know You…

These two forces — the need for sellers to know more about their buyers, and the need for buyers to know more about what they’re buying — are driving a tremendous amount of information-gathering and dissemination. On the seller’s side, behavioral tracking and customer screening are giving companies an intimate glimpse into our personal lives. On the buyer’s side, access to consumer reviews, third-party evaluations and buyer forums are helping us steer clear of lemons. Both are being facilitated through technology.

But how does disintermediation impact information asymmetry, or vice versa?

If we didn’t have adequate information, we needed some other safeguard against being taken advantage of. So, failing a rational answer to this particular market dilemma, we found an irrational one: We relied on gut instinct.

Relying on Relationships

If we had to place our trust in someone, it had to be someone we could look in the eye during the transaction. The middle was composed of individuals who acted as the face of the market. Because they lived in the same communities as their customers, went to the same churches, and had kids that went to the same schools, they had to respect their markets. If they didn’t, they’d be run out of town. Often, their loyalties were also in the middle, balanced somewhere between their suppliers and their customers.

In the absence of perfect information, we relied on relationships. Now, as information improves, we still want relationships, because that’s what we’ve come to expect. We want the best of both worlds.

Will Customer Service Disappear with the Elimination of the “Middle”?

First published October 18, 2012 in Mediapost’s Search Insider

In response to my original column on disintermediation, Joel Snyder worried about the impact on customer service: “The worst casualty is relationships and people skills. As consumers circumvent middlemen, they become harder to deal with. As merchants become more automated, customer service people have less power and less skills (and lower pay).”

Cece Forrester agreed: “Disintermediation doesn’t just let consumers be rude. It also lets organizations treat their customers rudely.”

So, is rudeness an inevitable byproduct of disintermediation?

Rediscovering the Balance between Personalization and Automation

Technology introduces efficiency. It streamlines the “noise” and marketplace friction that comes with human interactions. But with that “noise” comes all the warm and fuzzy aspects of being human. It’s what both Joel and Cece fear may be lost with disintermediation. I, however, have a different view.

Shifts in human behavior don’t typically happen incrementally, settling gently into a new norm. They swing like a pendulum, going too far one way, then the other, before stability is reached. Some force — in this case, new technological capabilities — triggers the change. Momentum then carries society too far in one direction, which triggers an opposing force that pushes back against the trend. Eventually, balance is reached.

A Redefinition of Relationships

In this case, the opposing force will be our need for those human factors. Disintermediation won’t kill relationships. But it will force a redefinition of relationships. The challenge here is that existing market relationships were all tied to the “Middle,” which served as the bridge between producers and consumers. Because the Middle owned the end connection with the customer, it formed the relationships that currently exist. Now, as anyone who has experienced bad customer service will tell you, some who lived in the Middle were much better at relationships than others. Joel and Cece may be guilty of looking at our current paradigm through rose-colored glasses. I have encountered plenty of rudeness even with the Middle firmly in place.

But it’s also true that producers, who suddenly find themselves directly connected with their markets, have little experience in forming and maintaining these relationships. However, the market will eventually dictate new expectations for customer service, and producers will have to meet those expectations. One disintermediator, Zappos, figured that out very early in the game.

Ironically, disintermediation will ultimately be good for relationships. Feedback loops are being shortened. Technology is improving our ability to know exactly what our customers think about us. We’re actually returning to a much more intimate marketplace, enabled through technology. Producers are quickly educating themselves on how to create and maintain good virtual relationships. They can’t eliminate customer service, because we, the market, won’t let them. It will take a bit for us to find the new normal, but I venture to say that wherever we find it, we’ll end up in a better place than we are today.

The Good Side of Disintermediation

First published October 11, 2012 in Mediapost’s Search Insider

You know you’ve found a good topic for a column when half the comments are in support of whichever side of the topic you’ve lined up on, and half are against it. Such was the case last week when I wrote about disintermediation.

This week, I promised to present the positives of disintermediation. I’ll do so at the macro level, because there are market forces at work that will drive massive change at every level. But there were also some very interesting questions raised last week by readers:

  • Is disintermediation killing relationships and our ability to deal with people?
  • Are the benefits of disintermediation tied to social status, driving the haves and the have-nots even further apart?
  • Is more information good for the market, or does it just create more noise for us to wade through?
  • What will the social cost of disintermediation be?
  • What are the global implications of disintermediation?
  • In knowledge-based professional markets where experience and expertise are essential (e.g., health care), what role does disintermediation play?
  • Are we just replacing one type of “middle” with another (for example, online travel agencies for traditional travel agencies)?

Each of these questions is worthy of a column itself, so I’ll file those away for future writing over the next few weeks. But today, let’s focus on the silver lining inside the disintermediation cloud.

I’ve written about Kondratieff waves (also called K waves) before. In the world of macro-economists (who are of mixed opinion about the validity of the theory), these are massive waves of disruption (often driven by technological advances) that first deconstruct the marketplace and then rebuild it on the new (improved?) paradigm.

The Industrial Revolution was one such wave. It created a new marketplace built on scale. Bigger was better. It introduced mass manufacturing, mass markets and mass advertising. It also created the “middle,” which was an essential part of getting goods to market. Given the scale of the new markets, it was essential to create a huge support infrastructure. Most of the wealth of the 20th century was built on the back of this particular K wave.

One of the characteristics of a K wave is that the positive benefits outweigh the negatives. After the period of destruction as the old market is torn apart, the new market scales to new heights. Technology fuels increased capabilities and opportunities. The world lurches ahead to a new possibility. We were better off (arguably) by most metrics after the Industrial Revolution than before it. We were more productive, had a higher standard of living and could do things we couldn’t do before.

Today, we’re in the middle of another K wave disruption, and I believe this one is going to dwarf the impact of the Industrial Revolution. Of course, K waves by their nature are long-term phenomena whose impacts take decades to roll their way through society.

This particular K wave is reversing many of the market dynamics established by the previous “Bigger is Better” one. We’ve begun to deconstruct the gargantuan support system required to service mass markets. Inevitably, there will be pain, and last week’s commentators zeroed in on many of those pain points. But there will also be growth. And the bigger the wave, the bigger the growth. In this case, the same factors I talked about last week – democratization of information, better user experiences, solving the distance problem – are all being driven by technology. As this wave continues, the market will become more efficient. Information asymmetry will be lessened (if not eliminated) and the superstructure of the “middle” will become unnecessary.

A more efficient marketplace means new opportunities. More businesses will start and grow. Previously unimagined sectors of a new economy will emerge. This new economy will be global in scope, but hyperlocal in nature. Pure ingenuity will have a chance to flourish, freed from the constraints of the need for scalability. Once we get through the stumbles inevitable in the transition period, the economy will ramp up for another bull run. But we have to get there first.

The Disintermediation of Everything

First published October 3, 2012 in Mediapost’s Search Insider

Up until five years ago, I had never used the word disintermediation. In fact, if it had come up in casual conversation, I would have had to pick my way through its bushel of syllables to figure out exactly what it meant.

Today, I am acutely aware of the meaning. I use the word a lot. I would put it up there as one of the three or four most important trends to watch, right up there with the Database of Intentions, which I talked about last week. The truth is, if you’re a middleman and you’re not dead already, you’re living on borrowed time.

Why is the Middle suddenly such a bad place to be? A lot of people have made a lot of money in the Middle for hundreds of years. The Middle makes up a huge part of our economy, including a lot of middle-class jobs. Systematically eliminating it is going to cause a ton of grief. But the process has started, and there’s no turning back now.

Three big shifts are driving disintermediation:

The Democratization of Information

The Middle exists in part because we didn’t have access to what, in game theory, is called perfect information. Either we didn’t have access to information at all, or the information we had was not reliable or useful to us. So, in order to function in the marketplace, we needed a bridge to what information did exist.

Think of travel agents (whom most of us probably haven’t spoken to in years). Travel agents were essential because we were walled off from the information we needed to arrange our own travel. We had no access to the latest airfares, hotel availability or room rates. If you had asked me what was the best hotel in Istanbul, I would have had no clue. We used travel agents because we had no choice.

Today, we do. The travel industry was one of the pioneers in democratizing information. The result? The travel marketplace is vastly more efficient than it was even a decade ago. The average person can now put together a six-week, multi-stop vacation relatively easily. The Middle is being eliminated. In 1998, there were 32,000 travel agencies in the US. Today, through elimination and consolidation, that number is closer to 10,000. Disintermediation has cost thousands of travel agents their jobs.

The Improvement of User Interfaces

When’s the last time you spoke to a bank teller? If you’re like me, it’s probably the last time you had to do something that couldn’t be done either through online banking or at a local ATM. Ninety-nine percent of our banking can now be done more quickly and easily because banks have invested in creating platforms and interfaces that enable us to do it ourselves. It’s better for us as customers, and it’s much more profitable for the banks. Disintermediation in banking has created a more efficient model. Ironically, unlike travel agents, bank tellers have not lost their jobs. They’ve just changed what they do.

The Overcoming of Geography

The final factor is the problem of distance. When mass manufacturing became possible, the distance between the factory and the market started to grow. Suddenly, distribution became a major challenge. Supply chains were born, making a lot of people very rich in the process. Becoming big became essential to overcoming the problem of distance.

But technology has made physical fulfillment much more efficient. Getting a product from the factory floor to your front door is still a challenge, but our ability to move stuff is so much better than it was even a few decades ago. The result? Massive disintermediation. And this particular trend is just beginning.

So What?

Much of what we’re familiar with today is part of the Middle. Just as with travel agents, video stores and bank tellers, every year something we have always taken for granted will suddenly disappear. Huge swaths of the economy will be disruptively eliminated. That’s the bad news. The good news will have to wait till next week’s column.

A Benchmark in Time

First published September 13, 2012 in Mediapost’s Search Insider

That’s the news from Lake Wobegon, where all the women are strong, all the men are good-looking, and all the children are above average. — Garrison Keillor

How good are you? How intelligent, how talented, how kind, how patient? You can give me your opinion, but just like the citizens of Lake Wobegon, you’ll be making those judgments in a vacuum unless you compare yourself to others. Hence the importance of benchmarking.

The term benchmarking started with shoemakers, who asked their customers to put their feet on a bench where they were marked to serve as a pattern for cutting leather. But of course, feet are absolute things. They are a certain size and that’s all there is to it. Benchmarking has since been adapted to a more qualitative context.

For example, let’s take digital marketing maturity. How does one measure how good a company is at connecting with customers online? We all have our opinions, and I suspect, just like those little Wobegonians, most of us think we’re above average. But, of course, we all can’t be above average, so somebody is fudging the truth somewhere.

I have found that when we work with a client, benchmarking is an area of great political sensitivity, depending on your audience. Managers appreciate competitive insight and are a lot less upset when you tell them they have an ugly baby (or, at least, a baby of below-average attractiveness) than the practitioners who are on the front lines. I personally love benchmarking, as it serves to get a team on the same page. False complacency vaporizes in the face of real evidence that a competitor is repeatedly kicking your tushie all over the block.  It grounds a team in a more objective view of the marketplace and takes decision-making out of the vacuum.

But before going on a benchmarking bonanza, here are some things to consider:

Weighting is Important

It’s pretty easy to assign a score to something. It’s more difficult to recognize that some things matter more than others. For example, I can measure the social maturity of a marketer based on Facebook likes, the frequency of Twitter activity, the number of stars they have on Yelp or the completeness of their LinkedIn profile, but these things are not equal in importance. Not only are they unequal, but the relative importance of each social activity will change from industry to industry and market to market. If I’m marketing a hotel, TripAdvisor reviews can make or break me, but I don’t care as much about my number of LinkedIn connections. If I’m marketing a movie or a new TV show, Facebook likes might actually be a measure with some value. Before you start assigning scores, you need a reasonably accurate way to weight them for importance.
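The weighting idea above can be expressed as simple arithmetic. Here's a minimal sketch of a weighted benchmark score; the metric names and weight values are hypothetical illustrations, not a standard, and in practice you'd derive the weights from your own industry analysis.

```python
# Hypothetical sketch: combining per-metric benchmark scores (0-10)
# into one weighted score. Metric names and weights are illustrative.

def weighted_score(scores, weights):
    """Weighted average of per-metric scores.

    Weights are normalized by their sum, so you can adjust relative
    importance per industry without rescaling everything to add to 1.
    """
    total_weight = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total_weight

# A hotel marketer might weight review sites heavily and LinkedIn lightly:
hotel_weights = {"tripadvisor_reviews": 0.5, "facebook_likes": 0.2,
                 "twitter_activity": 0.2, "linkedin_profile": 0.1}
scores = {"tripadvisor_reviews": 8, "facebook_likes": 4,
          "twitter_activity": 6, "linkedin_profile": 9}

print(round(weighted_score(scores, hotel_weights), 2))  # 6.9
```

Swapping in a movie marketer's weights (say, heavy on Facebook likes) would rank the same raw scores very differently, which is exactly the point: the weighting, not the scoring, carries the industry judgment.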

Be Careful Whom You’re Benchmarking Against

If you ask any marketer who their primary competitors are, they’ll be able to give you three or four names off the top of their head. That’s the obvious competition. But if we’re benchmarking digital effectiveness, it’s the non-obvious competition you have to worry about. That’s why we generally include at least one “aspirational” candidate in our benchmarking studies. These candidates set the bar higher and are often outside the traditional competitive set. While it may be gratifying to know you’re ahead of your primary competitors, that will be small comfort if a disruptive competitor (think Amazon in the industrial supply category) suddenly changes the game and blows up your entire market model by resetting your customers’ expectations. Good benchmarking practices should spot those potential hazards before they become critical.

Stay Objective

If qualitative assessments are part of your benchmarking (and there’s nothing wrong with that), make sure your assessments aren’t colored by internal biases. Having your own people do benchmarking can give you a skewed view of your market. It might be worthwhile to find an external benchmarking partner who can ensure objectivity in evaluation and scoring.

And finally, remember that everybody is above average in something…