Now, That’s a Job Description I Could Get Behind!

First published February 20, 2014 in Mediapost’s Search Insider

I couldn’t help but notice that last week’s column, where I railed against the marketer’s obsession with tricks, loopholes and pat sound bites, got a fair number of retweets. The irony? At least a third of those retweets twisted my whole point – that six seconds (or any arbitrary length of message) isn’t the secret to getting a prospect engaged. The secret is giving them something they want to engage with.

As anyone who has been unfortunate enough to spend time with me when I’m in a particularly cynical mood about marketing can attest, I go a little nuts over this “Top Ten Tricks” or “The Secret to…” mentality that seems pervasive in marketing. I’m pretty sure that anyone who retweeted last week’s column with a preface like “Does your advertising engage your consumer in 6 seconds or less? If not, you’re likely losing customers” didn’t bother to actually read past the first paragraph. Maybe not even the first line.

And that’s the whole problem. How can we expect marketers to build empathy, usefulness and relevance into their strategy when many of them have the attention span of a gnat? As my friend Scott Brinker likes to say when marketers misbehave, “This is why we can’t have nice things.”

Marketing – good marketing – is not easy but it’s also not a black box. It’s not about secrets or tricks or one-off tactics. It’s about really understanding your customers at an incredibly deep level and then working your ass off to create a meaningful engagement with them. Trying to reduce marketing to anything less than that is like trying to breeze your way through 50 years of marriage by following the Top 3 Tricks to get lucky this Friday night.

Again, this is about meaningful engagements. And when I say meaningful, it’s the customer who gets to decide what’s meaningful. That’s what’s potentially so exciting about breakthroughs like the Oreo Super Bowl campaign. It’s the opportunity to learn what’s meaningful to prospects and then to shift and tailor our responses in real time. Until now, marketing has been “Plan, Push and Pray.” We plan our attack, we push out our message and we pray it finds its target and that they respond by buying stuff. If they don’t buy stuff, something went wrong, probably in the planning stage. But that is an awfully long feedback loop.

You’ll notice something about this approach to marketing. The only role for the prospect is as a consumer. If they don’t buy, they don’t participate.  This comes as a direct result of the current job description of a marketer: Someone who gets someone else to buy stuff. But what if we rethink that description? Technology that enables real time feedback is allowing us to create an entirely new relationship with customers. What would happen if we redefined marketing along these lines: To understand the customer’s reality, focusing on those areas where we can solve their problems and improve that reality?

And as much as that sounds like a pat sound bite, if you really dig into it, it’s far from a quick fix. This is a way to build a radically different organization. And it moves marketing into a fundamentally different role. Previously, marketing got its marching orders from the CEO and CFO and was essentially responsible for moving the top line ever northward. It was an internally generated mandate – to increase sales.

But what if we rethink this? What if the entire organization’s role is to constantly adapt to a dynamic environment, looking for advantageous opportunities to improve that environment? And, in this redefined vision, what if marketing’s role was to become the sense-making interface of the company? What if it were the CMO’s job to consistently monitor the environment, create hypotheses about how best to create adaptive opportunities and then test those hypotheses in a scientific manner?

In this redefinition of the job, Big Data and Real Time Marketing take on significant new qualities: first as a rich vein of timely information about the marketplace, and second as a never-ending series of instant field experiments that provide empirical backing for strategy.

Now, marketing’s job isn’t to sell stuff; it’s to make sense of the market and, in doing so, help define the overall strategic direction of the company. There are no shortcuts, no top ten tricks, but isn’t that one hell of a job description?

The Psychology of Usefulness: The Acceptance of Technology – Part Three

In Part Two of this series, I looked at Davis and Bagozzi’s Technology Acceptance Model, first proposed in 1989.

[Image: Technology Acceptance Model]

As I said, while the model was elegant and parsimonious, it seemed to oversimplify the realities of technology acceptance decisions. In 2000, Venkatesh and Davis tried to deal with this in TAM 2 – the second version of the Technology Acceptance Model.

[Image: TAM 2]

In this version, they added several determinants of Perceived Usefulness and demoted Perceived Ease of Use to being just one of the factors that impact Perceived Usefulness.  Influencing this mental calculation are two mediating factors: Experience and Voluntariness. This rebalancing of factors provides some interesting insights into the mental process we go through when deciding whether or not to accept a new technology.

Let’s begin with the determinants of Perceived Usefulness in the order they appear in Venkatesh and Davis’s model:

Subjective Norm: TAM 2 resurrects one of the key components of the original Theory of Reasoned Action model – the opinions of others in your social environment.

Image: Venkatesh and Davis also included another social factor in their list of determinants – how would the acceptance of this technology impact your status in your social network? Notice that our calculation of the image enhancement potential has the Subjective Norm as an input. It’s a Bayesian prediction – we start with our perceived social image status (the prior) and adjust it based on new information, in this case the acceptance of a new technology.

Job Relevance: How applicable is the technology to the job you have to do?

Output Quality: How will this technology impact your ability to perform your job well?

Result Demonstrability: How easy is it to show the benefits of accepting the technology?

It’s interesting to note how these factors split: the first two (Subjective Norm and Image) relate to our social networks, the next two (Job Relevance and Output Quality) are part of a mental calculation of benefit, and the last one, Result Demonstrability, bridges the two categories: How easy will it be to show others that I made the right decision?

According to the TAM 2 model, we roll these factors, which combine practical task performance considerations and social status aspirations, into a rough calculation of the perceived usefulness of a technology. After this is done, we start balancing that against how easy we perceive the new technology to be to use. Venkatesh and Davis felt that Perceived Ease of Use has a variable influence in two areas: the forming of an attitude towards the technology and a behavioral intention to use the technology. The first is pretty straightforward. Our attitude is our mental frame regarding the technology. Again, to use a Bayesian term, it’s our prior. If the attitude is positive, it’s very probable that we’ll form a behavioral intention to use the technology. But there are a few mediating factors at this point, so let’s take a closer look at the creation of Behavioral Intention.
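
To make that mental arithmetic concrete, here is a minimal sketch in Python. The weights, scores and function names are my own illustrative assumptions, not values from Venkatesh and Davis; TAM 2 specifies which determinants feed Perceived Usefulness, not how any individual weighs them.

```python
# Toy illustration of a TAM 2-style usefulness/intention calculation.
# Every weight and score here is hypothetical; each determinant is rated 0-1.

def perceived_usefulness(subjective_norm, image, job_relevance, output_quality,
                         demonstrability, ease_of_use, weights=None):
    """Roll the TAM 2 determinants into a rough usefulness score."""
    weights = weights or {
        "subjective_norm": 0.15, "image": 0.10, "job_relevance": 0.30,
        "output_quality": 0.20, "demonstrability": 0.10, "ease_of_use": 0.15,
    }
    scores = {
        "subjective_norm": subjective_norm, "image": image,
        "job_relevance": job_relevance, "output_quality": output_quality,
        "demonstrability": demonstrability, "ease_of_use": ease_of_use,
    }
    return sum(weights[k] * scores[k] for k in weights)

def behavioral_intention(usefulness, ease_of_use, mandatory=False):
    """Usefulness outweighs ease of use; a mandate strengthens intention."""
    intention = 0.7 * usefulness + 0.3 * ease_of_use
    return min(1.0, intention + (0.2 if mandatory else 0.0))

pu = perceived_usefulness(subjective_norm=0.6, image=0.4, job_relevance=0.9,
                          output_quality=0.8, demonstrability=0.5, ease_of_use=0.3)
print(behavioral_intention(pu, ease_of_use=0.3))                   # voluntary
print(behavioral_intention(pu, ease_of_use=0.3, mandatory=True))   # mandated
```

Run the same scores with mandatory=True and intention climbs even though ease of use is low, which is the Voluntariness effect in miniature.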

In forming our intention, Perceived Ease of Use is just one of the determinants we use in our “usefulness” calculation, according to the model. And it depends on a few things. It depends on self-efficacy – how comfortable we judge ourselves to be with the technology in question. It also depends on what resources we feel we’ll have access to for help up the learning curve. But, in the forming of our attitude (and thereby our intention), Venkatesh and Davis felt that Perceived Usefulness will typically be more important than Perceived Ease of Use. If we feel a technology will bring a big enough reward, we will be willing to put up with a significant degree of pain. At least, we will in what we intend to do. It’s like making a New Year’s resolution to lose weight. At the time we form the intention, the pain involved is still somewhere in the future, so we go forward with the best of intentions.

As we move forward from Attitude to Intention, this transition is further mediated in the model by our subjective norm – the cognitive context we place the decision in. Into this subjective norm fall our experience (our own evaluation of our efficacy), the attitudes of others towards the technology and the “Voluntariness” of the acceptance. Obviously, our intention to use will be stronger if it’s a non-negotiable corporate mandate, as opposed to a low-priority choice we have the latitude to make.

What is missing from the TAM 2 model is the link between Perceived Ease of Use and actual Usage. Just like a New Year’s resolution, intentions don’t always become actions. Venkatesh and Davis said Perceived Ease of Use is a moving, iteratively updated calculation. As we gain hands-on experience, we update our original estimate of Ease of Use, either positively or negatively. If the update is positive, it’s more likely that Intention will become Usage. If negative, the technology may fail to become accepted. In fact, I would say this feedback loop is an ongoing process that may repeat several times in the space between Intention and Usage. The model, with a single arrow going in one direction from Intention to Usage, belies the complexity of what is happening here.
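
That iterative loop is easier to see as a toy sketch than as a single arrow. Here is a hedged Python illustration with invented numbers: the ease-of-use estimate is revised after each hands-on session, the revised value re-enters the intention calculation, and the technology is either retained or abandoned. None of the thresholds or weights come from the TAM literature.

```python
import random

# Hypothetical sketch of the gap between Intention and Usage as a feedback loop.
# The ease-of-use estimate is revised after every hands-on session; if the
# re-formed intention drops below a tolerance threshold, the technology is dropped.

def adoption_loop(ease_estimate, usefulness, sessions=10, threshold=0.35):
    for session in range(1, sessions + 1):
        hands_on = random.uniform(0.0, 1.0)                    # how the session actually went
        ease_estimate = 0.7 * ease_estimate + 0.3 * hands_on   # revise the estimate
        intention = 0.7 * usefulness + 0.3 * ease_estimate     # re-form intention
        if intention < threshold:
            return f"abandoned after session {session}"
    return "accepted into regular usage"

random.seed(42)  # reproducible example
print(adoption_loop(ease_estimate=0.5, usefulness=0.6))
print(adoption_loop(ease_estimate=0.5, usefulness=0.2))
```

With a high enough perceived reward, the loop tolerates rough early sessions; with a low one, even a mildly disappointing first session is enough to kill the adoption.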

Venkatesh and Davis wanted to create a more realistic model, expanding the front end of the model to account for determinants going into the creation of Intention. They also wanted to provide a model of the decision process that better represented how we balance Perceived Usefulness and Perceived Ease of Use. I think they made some significant gains here. But the model is still a linear one – going in one direction only. What they missed is the iterative nature of acceptance decisions, especially in the gap between Intention and Behavior.

In Part Four, we’ll look at TAM 3 and see how Venkatesh further modified his model to bring it closer to the real world.

So, Six Seconds is the Secret, Huh?

First published February 13, 2014 in Mediapost’s Search Insider

Apparently, the new official time limit for customer engagement is 6 seconds, according to a recent post on Real Time Marketing. How did we come up with 6? Well, in the world of social media engagement it seemed like a good number, and no one has called bullshit on it yet, so 6 it is.

Marketers love to talk about time – just in time, real time, right time. At the root of all this “time talk” is the realization that customers really don’t have any time for us, so we have to somehow jam our messages into the tiny little cracks that may appear in the wall of willful ignorance they carefully build against marketing. The marketer’s goal is to erode their defenses by looking for any weakness that may appear.

Look at the supposed poster child for Real Time Marketing – the Oreo coup staged during the blackout at the 2013 Super Bowl. Because the messaging was surprising and clever, and because, let’s face it, we weren’t doing much of anything else anyway, Oreo managed to gain a foothold in our collective consciousness for a few precious seconds. So, marketers being marketers, we all stumbled over ourselves to proclaim a new channel and launch a series of new micro-attacks on consumers. That’s where the 6 seconds came from. Apparently, that’s the secret to storming the walls. Five seconds and you’re golden. Seven seconds and you’re dead.

Oreo surprised us, and it wasn’t because the message was 6 seconds long. It was because we weren’t expecting a highly relevant, highly timely message. Humans are built to respond to things that don’t fit within our expected patterns. The whole approach of marketing is to constantly blanket us with untimely, irrelevant messages. Marketers, to be fair, try to deliver the right message at the right time to the right person, but it’s really hard to do that. So, we overcompensate by delivering lots of messages all the time to everyone, hoping to get lucky. Not to take anything away from the cleverness and nimbleness of the Oreo campaign, but they got lucky. We were surprised and we let our defenses down long enough to be amused and entertained. Real time marketing wasn’t a brilliant new channel; it was a shot in the dark – literally.

And there’s no six-second gold standard of engagement. If you can deliver the right message at the right time to the right person, you can spend hours talking to your prospective customer. It’s only when you’re trying to interrupt someone with something irrelevant that you have to shoehorn it into their consciousness and hope for the best. Think of it like a Maslow’s hierarchy of advertising effectiveness. At its best, advertising should be useful. This sits at the top of the pyramid. After usefulness comes relevance – even if I don’t find the ad useful to me right now, at least you’re talking to the right person. After relevance comes entertainment – I’ll willingly give you a few seconds of my time if I find your message amusing or emotionally engaging. I may not buy, but I’ll spend some time with you. After entertainment comes the category the majority of advertising falls into – a total waste of my time. Not useful, irrelevant, not emotionally engaging. And making an ad that falls into this category 5 seconds long, no matter what channel it’s delivered through, won’t change that. You may fool me once, but next time, I’m still going to ignore you.

There was something important happening during the Oreo campaign at the 2013 Super Bowl, but it had nothing to do with some new magic formula, some recently discovered loophole in our cognitive defenses. It was a sign of what may, hopefully, emerge as a trend in advertising – nimble, responsive marketing that establishes a true feedback loop with prospects. What may have happened when the lights went out in New Orleans is that we found a new, very potent way to make sense of our market and establish a truly interactive, responsive dialogue with it. If this is the case, we may have just found a way to climb a rung or two on the Advertising Effectiveness Hierarchy.

How Can Humans Co-Exist with Data?

First published February 6, 2014 in Mediapost’s Search Insider

Last week, I talked about our ability to ignore data. I positioned this as a bad thing. But Pete Austin called me on it, with an excellent counterpoint:

“Ignoring Data is the most important thing we do. Only the people who could ignore the trees and see the tiger, in real-time, survived to become our ancestors.”

Too true. We’re built to subconsciously filter and ignore vast amounts of input data in order to maintain focus on critical tasks, such as avoiding hungry tigers. If you really want to dive into this, I would highly recommend Daniel Simons and Christopher Chabris’s “The Invisible Gorilla.” But, as Simons and Chabris point out, with example after example of how our intuitions (which we use as filters) can mislead us, this “inattentional blindness” is not always a good thing. In the adaptive environment in which we evolved, it was pretty effective at keeping us alive.  But in a modern, rational environment, it can severely inhibit our ability to maintain an objective view of the world.

But Pete also had a second, even more valid point:

“What you need to concentrate on now is “curated data”, where the junk has already been ignored for you.”

And this brought to mind an excellent example from a recent interview I did as background for an upcoming book I’m working on.  This idea of pre-filtered, curated data becomes a key consideration in this new world of Big Data.

Nowhere are the stakes higher for the use of data than in healthcare. It’s what led to the publication of a manifesto in 1992 calling for a revolution in how doctors made life-and-death decisions. One of the authors, Dr. Gordon Guyatt, coined the term “evidence-based medicine.” The rationale is simple: by taking an empirical approach not just to diagnosis but also to the best prescriptive path, doctors can rise above the limitations of their own intuition and achieve higher accuracy. It’s data-driven decision-making, applied to health care. Makes perfect sense, right? But even though evidence-based medicine is now over 20 years old, it’s still difficult to consistently apply at the level of the individual doctor and patient.

I had the chance to ask Dr. Guyatt why this was:

“Essentially after medical school, learning the practice of medicine is an apprenticeship exercise and people adopt practice patterns according to the physicians who are teaching them and their role models and there is still a relatively small number of physicians who really do good evidence-based practice themselves in terms of knowing the evidence behind what they’re doing and being able to look at it critically.”

The fact is, a data-driven approach to any decision-making domain that used to rely on intuition just doesn’t feel – well – very intuitive. It’s hard work. It’s time-consuming. It, to Mr. Austin’s point, runs directly counter to our tiger-avoidance instincts.

Dr. Guyatt confirms that physicians are not immune to this human reliance on instinct:

“Even the best folks are not going to do it – maybe the best folks – but most folks are not going to be able to do that very often.”

The answer in healthcare, and likely the answer everywhere else that data should back up intuition, is the creation of solid, data-based resources that adhere to empirical best practices without requiring every single practitioner to do the heavy lifting. Dr. Guyatt has seen exactly this trend emerge in the last decade:

“What you need is preprocessed information. People have to be able to identify good preprocessed evidence-based resources where the people producing the resources have gone through that process well.”

The promise of curated, preprocessed data is looming large in the world of marketing. The challenge is that, unlike medicine, where data is commonly shared and archived, in the world of marketing much of the most important data stays proprietary. What we have to start thinking about is a truly empirical, scientific way to curate, analyze and filter our own data for internal consumption, so it can be readily applied in real world situations without falling victim to human bias.

A Tale of Two Research Philosophies

First published December 19, 2013 in Mediapost’s Search Insider

They only sit about five miles apart physically. One’s in Palo Alto, the other’s in Mountain View. But when it comes to how R&D is integrated into an organization’s strategy, there is significantly more distance between Xerox’s PARC and Google.

[Image: Xerox Alto computer]

I recently visited both locations on the same day. PARC, of course, is the legendary research wing that created the graphical user interface, the personal computer, object-oriented programming, the mouse, Ethernet and the laser printer. It was at PARC that Steve Jobs saw the interface that would eventually form the OS foundation for the Macintosh. Every time we touch the technology that today we take for granted, we should give thanks to the many people who have called the unassuming campus on Coyote Hill Road home.

But in 1970, when PARC was first created, there was a different attitude towards R&D. Research required isolation and distance from the regular business rhythms of the mother ship. Xerox could not have put more distance between its head office, in Rochester, N.Y., and its new research arm, 3,000 miles away. When it came to innovation, the choice of location was fortuitous. PARC, together with HP and other Silicon Valley pioneers, tapped into the stream of talent coming out of Stanford. In fact, PARC is located on land leased from Stanford. It soon became an innovation hotbed, thanks to the visionary leadership of Bob Taylor, who headed up the Computer Science division. But Xerox’s track record of bringing its own innovations to market was dismal. As great as the physical distance was between PARC and the executive wing of Xerox in upstate New York, the philosophical distance was several times greater.

Google’s research efforts, under the leadership of Peter Norvig, are taking a much different direction, likely due to lessons learned from PARC and others. Research is embedded in the ever-expanding Google campus that currently sprawls along Amphitheatre Parkway and Charleston Road. There is a free flow of traffic and communication between current product engineering teams (many riding brightly colored Google bikes) and those working on longer-term projects. The distance between “today” and “tomorrow” is minimized at every opportunity.

Norvig commented on this in a recent interview with me:

We don’t have a separate research entity whose job is to be isolated from the rest of the company and think about the future. Rather, everybody’s job, regardless of their job title, is to make our products better or invent a new product. So the distinction between being a researcher versus an engineer is not how academic you are, it’s not how forward-thinking you are  — whether you’re looking at this year or next year or the year after. It’s more in terms of the area that you work in. If you work in core search or in core distributed computer systems, then your title’s going to be software engineer, even if you’re a Nobel Prize-winning professor.

Google has taken a hybrid approach to research, in which even long-term projects are developed at production scale, minimizing the risk of projects failing during the technology transfer phase. Norvig touched on this in a recent article:

Elaborate research prototypes are rarely created, since their development delays the launch of improved end-user services. Typically, a single team iteratively explores fundamental research ideas, develops and maintains the software, and helps operate the resulting Google services — all driven by real-world experience and concrete data. This long-term engagement serves to eliminate most risk to technology transfer from research to engineering.

This was exactly the trap that PARC ran into, when some of the most innovative advances in the history of computing failed to significantly contribute to Xerox’s bottom line.  Google has thrown the doors open for internal research teams to access the full power of complete data sets and production scale systems while espousing the practice of agile development. The goal is to ensure that all innovation that happens at Google is not too far removed from the goal of either diversifying Google’s revenue stream with new products, or contributing to existing ones.

360 Degrees of Separation

First published December 5, 2013 in Mediapost’s Search Insider

In the past two decades or so, a lot of marketers have talked about gaining a 360-degree view of their customers. I’m not exactly sure what this means, so I looked it up. Apparently, for most marketers, it means having a comprehensive record of every touch point a customer has had with a company. Originally, it was the promise of CRM vendors: anyone in an organization, at any time, could pull up a complete customer history.

So far, so good.

But like many phrases, it’s been appropriated by marketers and its meaning has become blurred. Today, it’s bandied about in marketing meetings, where everyone nods knowingly, confident in the fact that they are firmly ensconced in the customer’s cranium and have all things completely under control. “We have a 360-degree view of our customers,” the marketing manager beams, and woe to anyone that dares question it.

But there are no standard criteria that you have to meet before you use the term. There is no rubber-meets-the-road threshold you have to climb over. No one knows exactly what the hell it means. It sure sounds good, though!

If a company is truly striving to build as complete a picture of its customers as possible, it probably defines 360 degrees as the total scope of a customer’s interaction with the company. This would follow the original CRM definition. In marketing terms, it would mean every marketing touch point, and would hopefully extend through the customer’s entire relationship with that company. This would be 360 degrees as defined by Big Data.

But is it actually 360 degrees? If we envision this as a Venn diagram, we have one 360-degree sphere representing the mental model of customers, including all the things they care about. We have another 360-degree sphere representing the footprint of the company and all the things they do. What we’re actually looking at then, even in an ideal world, is where those two spheres intersect. At best, we’re looking at a relatively small chunk of each sphere.

So let’s flip this idea on its head. What if we redefine 360 degrees as understanding the customer’s decision space? I call this the Buyersphere. The traditional view of 360 degrees is from the inside looking out, from the company’s perspective. The Buyersphere moves the perspective to that of the customer, looking from the outside in. It expands the scope to include the events that lead to consideration, the competitive comparisons, the balancing of buying factors, interactions with all potential candidates and the branches of the buying path itself.  What if you decide to become the best at mapping that mental space?  I still wouldn’t call it a 360-degree view, but it would be a view that very few of your competitors would have.

One of the things that I believe is holding Big Data back is that we don’t have a frame within which to use it. Peter Norvig, chief researcher for Google, outlined 17 warning signs in experimental design and interpretation. Two of them: the lack of a specific hypothesis, and the lack of a theory. You need a conceptual frame from which to construct a theory, and then, from that theory, you can decide on a specific hypothesis for validation. It’s this construct that helps you separate signal from noise. Without the construct, you’re relying on serendipity to identify meaningful patterns, and we humans have a nasty tendency to mistake noise for patterns.
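
As a concrete (and entirely hypothetical) illustration of what a specific hypothesis buys you: suppose your theory of the Buyersphere predicts that prospects who arrive via a comparison review convert more often than prospects who arrive cold. A simple two-proportion test then tells you whether the difference you observe is signal or the kind of noise we love to mistake for a pattern. The figures below are invented, and the helper function is just a sketch.

```python
from math import sqrt, erf

def two_proportion_z_test(conversions_a, n_a, conversions_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    normal_cdf = lambda x: 0.5 * (1 + erf(x / sqrt(2)))
    return z, 2 * (1 - normal_cdf(abs(z)))

# Invented numbers: 58 of 1,200 review-referred visitors converted,
# versus 41 of 1,150 visitors who arrived cold.
z, p_value = two_proportion_z_test(58, 1200, 41, 1150)
print(f"z = {z:.2f}, p = {p_value:.3f}")
# A small p-value (conventionally below 0.05) would count as evidence for the
# hypothesis; a large one means the apparent "pattern" may just be noise.
```

Without the hypothesis, you would be sifting the same data for any pattern that looks interesting, and with enough metrics something always does.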

If we look at opportunities for establishing a competitive advantage, redefining what we mean by understanding our customers is a pretty compelling one. This is a construct that can provide a robust and testable space within which to use Big Data and other, more qualitative, approaches. It’s relatively doable for any organization to consolidate its data to provide a fairly comprehensive “inside-out” view of customers’ touch points. Essentially, it’s a logistical exercise. I won’t say it’s easy, but it is doable. But if we set our goal a little differently, working to achieve a true “outside-in” view of our company, that sets the bar substantially higher.

360 degrees? Maybe not. But it’s a much broader view than most marketers have.

The Swapping of the Old “Middle” for the New

First published November 8, 2012 in Mediapost’s Search Insider

For the past several columns, I’ve been talking about disintermediation. My hypothesis is that technology is driving a general disintermediation of the marketplace (well, it’s not really my hypothesis — it’s a pretty commonly held view) and is eliminating a vast “middle” infrastructure that has accounted for much of the economic activity of the past several decades. It’s a massive shift (read “disruption”) in the market that will play out over the next several years.

But every good hypothesis must stand up to challenge, and an interesting one came from a recent article in Slate, which talks about the growth of a brand-new kind of “gatekeeper”: the “bots” that crawl the Web and filter (or, in some cases, generate) content based on a preset algorithm. These bots can crawl blog posts, pinpointing spam and malicious posts so they can be removed. The sophistication is impressive, as the most advanced of these tap into the social graph to learn, in real time, the context of posts so they can make nuanced judgment calls about what is and isn’t spam.

But these bots don’t simply patrol the online frontier, they also contribute to it. They can generate automated social content based on pre-identified themes. In other words, they can become propaganda generators. So now we have a new layer of “middle” that acts both as censor and propagandist. Have we gained anything here?

The key concept here is one of control. The “middle” used to control both ends of the market. It did so because it controlled the bridge between the producers and consumers.  This was control in every sense: control of the flow of finance, control of the physical market itself, and control of communication.

With disintermediation, direct connections are being built between producers and consumers. With this comes a redefinition of control. In terms of financial control, disintermediation should (theoretically) produce a more efficient marketplace, resulting in more profit for producers and better prices for consumers. That drastically oversimplifies the pain involved in getting to a more efficient marketplace, but you get the idea.  In this case, the only loser is the middle, so there’s no real incentive for the producers or consumers to ensure its survival.

Disintermediation of the physical market essentially works itself out. If the product needs a face-to-face representative, the middle will survive. If not, then we’ll figure out how to facilitate the sale online, and you can expect to see a lot of UPS vans in your neighborhood. We consumers may mourn the loss of a “face” in some segments of our marketplace, but we’ll get over it.

When it comes to control of communication, it’s more difficult to crystal-ball what might happen in the future. This area is also where new gatekeepers are most likely to appear.

Communication between marketers and the market used to be tightly channeled and controlled by the “middle.” It also used to flow in essentially one direction – from the marketer to the market. It was always very difficult for true communication to flow the other way.

But now, content is sprouting everywhere and becoming publicly accessible through a multitude of online touch points. It could soon become overwhelming to navigate, both for consumers and producers. In this case, arguably, the middle performed a very real service for both producers and consumers. The middle could edit communication, saving us from wading through a mountain of content to get what we were looking for. It could also ensure that the messages producers wanted to get to the market were effectively delivered. The channels were under the control of the marketplace. For this reason, both marketers and the market may be reluctant to see disintermediation when it comes to communication.

The new gatekeepers, such as those featured in the Slate article, seem to serve both ends of the market. They help consumers access higher quality information by weeding out spam and objectionable content. And they help producers exercise some degree of control over negative content generated by the marketplace. In the absence of tight control of channels, a concept that’s gone the way of the dodo, this scalable, automated gatekeeper seems to serve a purpose.

If the need is great enough on both sides of the market, we are likely to find a new “middle” emerge: an “infomediary,” to use the term coined by John Hagel, Marc Singer and Jeffrey Rayport. According to this definition of the middle, Google emerges as the biggest of the “infomediaries.”

The question is, how much control are we willing to give this new evolution of the middle? In return for hacking some semblance of sanity out of the chaos that is an unmediated information marketplace, how much are we willing to pay in return? And, where does this control (and with it, the associated power) now live?  Who owns the new gatekeepers?  And who are those gatekeepers accountable to?

Disintermediation of a New, More Connected World

First published November 1, 2012 in Mediapost’s Search Insider

On Monday, one of the byproducts of disintermediation hit me with the force of, well — a hurricane, to be exact. We are more connected globally than ever before.

This Monday and Tuesday, three different online services I use went down because of Sandy. They all had data centers on the East Coast.

Disintermediation means centralization, which means we will have more contact with people and businesses spread across the globe.

The laptop I’m writing this column on (a MacBook Pro) was recently ordered from Apple. I was somewhat amazed to see the journey it took on its way to me. It left a factory in China, spent a day in Shanghai, then passed through Osaka, Japan on its way to Anchorage, Alaska. From there it was on to Louisville, Ky. (ironically, the flight path probably went right over my house), then back to Seattle, Vancouver and then to my front door. If my laptop were a car, I would have refused delivery – it already had a full year’s worth of miles on it before I even got to use it.

A disintermediated world means a more globally reliant world. We depend on assembly factories in Taiyuan (China), chip factories in Yamaguchi (Japan), call centers in Pune (India), R&D labs in Hagenberg (Austria), industrial designers in Canberra (Australia) and yes, data centers in lower Manhattan. When workers brawl, tsunamis hit, labor strikes occur and tropical storms blow ashore, even though we’re thousands of miles away, we feel the impact. We no longer just rely on our neighbors, because the world is now our neighborhood.

This adds a few new wrinkles to the impacts of disintermediation, both positive and negative.

On the negative side, as we saw forcefully demonstrated this week, is the realization that our connected markets are more fragile than ever. As production becomes concentrated due to various global advantages, it is more vulnerable to single-point failures. One missing link and entire networks of co-dependent businesses go down. This lack of redundancy will probably be corrected in time, but for now, it’s what we have to live with.

But, on the positive side, our new connectedness also means we have an interest in the well-being of people who would have been outside our scope of consciousness a mere decade ago. We care about the plight of the average worker at Foxconn, if for no other reason than it will delay the shipment of our new Mac. I exaggerate here (I hope we’re not that blasé about human rights in China) to make a point: when we have a personal stake in something, we care more. When you depend on someone for something important to you, you tend to treat them with more consideration. Thomas Friedman, in his book “The World is Flat,” called it the Dell Theory of Conflict Prevention:

“The Dell Theory stipulates: No two countries that are both part of a major global supply chain, like Dell’s, will ever fight a war against each other as long as they are both part of the same global supply chain.”

To all of you who weathered the storm, just know that you’re not alone in this. We depend on you – so, in turn, feel free to depend on us.

The Balancing of Market Information

First published October 25, 2012 in Mediapost’s Search Insider

In my three previous columns on disintermediation, I made a rather large assumption: that the market will continue to see a balancing of information available both to buyers and sellers. As this information becomes more available, the need for the “middle” will decrease.

Information Asymmetry Defined

Let’s begin by exploring the concept of information asymmetry, courtesy of George Akerlof, Michael Spence and Joseph Stiglitz.  In markets where access to information is unbalanced, bad things can happen.

If the buyer has more information than the seller, then we can have something called adverse selection. Take life and health insurance, for example. Smokers (on average) get sick more often and die younger than non-smokers. If an insurance company’s policyholders are 50% smokers and 50% non-smokers, but the company is not allowed to know which is which, it has a problem with adverse selection. It will lose money on the smokers, so it will increase rates across the board. The problem is that non-smokers, who don’t use insurance as much, will get angry and may cancel their policies. The “book of business” then becomes even less profitable, driving rates even higher. The solution, which we all know, is simple: ask policy applicants if they smoke. Imperfect information is thus balanced out.
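
A back-of-the-envelope sketch of that spiral, with invented numbers, makes the point. Assume smokers cost the insurer more per year than non-smokers, the insurer has to price for the blended pool because it can’t tell the two groups apart, and every rate increase drives some fraction of non-smokers to cancel. All of the figures and the drop-out rule below are assumptions for illustration.

```python
# Hypothetical adverse-selection spiral. The insurer can't tell smokers from
# non-smokers, so it prices for the blended pool; each rate hike pushes out
# more non-smokers, which worsens the blend and forces the next hike.

smokers, non_smokers = 5_000, 5_000        # equal pool to start
cost_smoker, cost_non_smoker = 1_400, 600  # assumed annual claims per policyholder

for year in range(1, 6):
    pool = smokers + non_smokers
    premium = (smokers * cost_smoker + non_smokers * cost_non_smoker) / pool
    print(f"year {year}: pool = {pool:,}, break-even premium = ${premium:,.0f}")
    # the further the premium climbs above what non-smokers cost, the more of them leave
    overcharge = max(0.0, premium - cost_non_smoker) / cost_non_smoker
    non_smokers = int(non_smokers * (1 - min(0.5, 0.6 * overcharge)))
```

After a few rounds the pool is mostly smokers and the break-even premium has drifted far above what the original blended pool required, which is exactly why the simple question on the application form matters.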

If the seller has more information than the buyer, then we have a “market for lemons” (the name of Akerlof’s paper). Here, buyers are assuming risk in a purchase without knowingly accepting that risk, because they’re unaware of the problems that the seller knows exist. Think about buying a used car without the benefit of an inspection, past maintenance records or any type of independent certification. All you know is what you can see by looking at the car on the lot. The seller, on the other hand, knows the exact mechanical condition of the car. This tends to drive down the prices of all products – even the good ones – in the market, because buyers assume quality will be suspect. The balancing of information in this case helps eliminate the lemons and has the long-term effect of improving the average quality of all products on the market.

Getting to Know You…

These two forces — the need for sellers to know more about their buyers, and the need for buyers to know more about what they’re buying — are driving a tremendous amount of information-gathering and dissemination. On the seller’s side, behavioral tracking and customer screening are giving companies an intimate glimpse into our personal lives. On the buyer’s side, access to consumer reviews, third-party evaluations and buyer forums is helping us steer clear of lemons. Both are being facilitated through technology.

But how does disintermediation impact information asymmetry, or vice versa?

If we didn’t have adequate information, we needed some other safeguard against being taken advantage of. So, failing a rational answer to this particular market dilemma, we found an irrational one: We relied on gut instinct.

Relying on Relationships

If we had to place our trust in someone, it had to be someone we could look in the eye during the transaction. The middle was composed of individuals who acted as the face of the market. Because they lived in the same communities as their customers, went to the same churches, and had kids that went to the same schools, they had to respect their markets. If they didn’t, they’d be run out of town. Often, their loyalties were also in the middle, balanced somewhere between their suppliers and their customers.

In the absence of perfect information, we relied on relationships. Now, as information improves, we still want relationships, because that’s what we’ve come to expect. We want the best of both worlds.

Will Customer Service Disappear with the Elimination of the “Middle”?

First published October 18, 2012 in Mediapost’s Search Insider

In response to my original column on disintermediation, Joel Snyder worried about the impact on customer service: “The worst casualty is relationships and people skills. As consumers circumvent middlemen, they become harder to deal with. As merchants become more automated, customer service people have less power and less skills (and lower pay).”

Cece Forrester agreed: “Disintermediation doesn’t just let consumers be rude. It also lets organizations treat their customers rudely.”

So, is rudeness an inevitable byproduct of disintermediation?

Rediscovering the Balance between Personalization and Automation

Technology introduces efficiency. It strips out the “noise” and marketplace friction that come with human interactions. But with that “noise” go all the warm and fuzzy aspects of being human. That’s what both Joel and Cece fear may be lost with disintermediation. I, however, have a different view.

Shifts in human behavior don’t typically happen incrementally, settling gently into a new norm. They swing like a pendulum, going too far one way, then the other, before stability is reached. Some force — in this case, new technological capabilities — triggers the change. As society moves, that force, plus momentum, carries things too far in one direction, which triggers an opposing force that pushes back against the trend. Eventually, balance is reached.

A Redefinition of Relationships

In this case, the opposing force will be our need for those human factors. Disintermediation won’t kill relationships. But it will force a redefinition of relationships. The challenge here is that existing market relationships were all tied to the “Middle,” which served as the bridge between producers and consumers. Because the Middle owned the end connection with the customer, it formed the relationships that currently exist. Now, as anyone who has experienced bad customer service will tell you, some who lived in the Middle were much better at relationships than others. Joel and Cece may be guilty of looking at our current paradigm through rose-colored glasses. I have encountered plenty of rudeness even with the Middle firmly in place.

But it’s also true that producers, who suddenly find themselves directly connected with their markets, have little experience in forming and maintaining these relationships. However, the market will eventually dictate new expectations for customer service, and producers will have to meet those expectations. One disintermediator, Zappos, figured that out very early in the game.

Ironically, disintermediation will ultimately be good for relationships. Feedback loops are being shortened. Technology is improving our ability to know exactly what our customers think about us. We’re actually returning to a much more intimate marketplace, enabled through technology. Producers are quickly educating themselves on how to create and maintain good virtual relationships. They can’t eliminate customer service, because we, the market, won’t let them. It will take a bit for us to find the new normal, but I venture to say that wherever we find it, we’ll end up in a better place than we are today.