Why I – and Mark Zuckerberg – Are Bullish on Google Glass

First published February 28, 2013 in Mediapost’s Search Insider

Call it a Tipping Point. Call it an Inflection Point. Call it Epochal (whatever that means). The gist is, things are going to change — and they’re going to change in a big, big way!

First, with due deference to the brilliant Kevin Kelly, let’s look at how technology moves. In his book “What Technology Wants,” Kelly shows that technology is not dependent on a single invention or inventor. Rather, it’s the sum of multiple, incremental discoveries that move technology to a point where it can breach any resistance in its way and move into a new era of possibility. So, even if Edison had never lived, we’d still have electric lights in our homes. If he weren’t there, somebody else would have invented them (or, more correctly, perfected them). The momentum of technology would not have been denied.

Several recent developments indicate that we’re on the cusp of another technological wave of advancement. These developments have little to do with online technologies or capabilities. They’re centered on how humans and hardware connect — and it’s impossible to overstate their importance.

The Bottleneck of Our Brains

Over the past two decades, there has been a massive build-up of online capabilities. In this case, what technology has wanted is the digitization of all information. That was Step One. Step Two is to render all that information functional. Step Three will be to make all that functionality personalized. And we’re progressing quite nicely down that path, thank you very much. The rapidly expanding capabilities of online far surpass what we are able to assimilate and use at any one time. All this functionality is still fragmented and still being developed (one of the reasons I think Facebook is in danger of becoming irrelevant), but it’s there. It’s just a pain in the butt for us to utilize it.

The problem is one of cognition. The brain has two ways to process information, one fast and one slow. The slow way (using the conscious parts of the brain) is tremendously flexible but inefficient. This is the system we’ve largely used to connect online. Everything has to be processed in the form of text, both as input and output, generally through a keyboard and a screen display. It’s the easiest way for us to connect with information, but it’s far from the most efficient.

The second way is much, much faster. It’s the subconscious processing of our environment that we do every day. It’s what causes us to duck when a ball is thrown at our head, jump out of the way of an oncoming bus, fiercely protect our children and judge the trustworthiness of a complete stranger. If our brains were icebergs, this would be the 90% hidden beneath the water. But we’ve been unable to access most of this inherent efficiency and apply it to our online interactions — until now.

The Importance of Siri and Glass

Say what you want about Mark Zuckerberg, he’s damned smart. That’s why he knew immediately that Google Glass is important.

I don’t know if Google Glass will be a home run for Google. I also don’t know if Siri will ever pay back Apple’s investment in it. But I do know that 30 years from now, they’ll both be considered important milestones. And they’ll be important because they represent a sea change in how we connect with information. Both have the potential to unlock the efficiency of the subconscious brain. Siri does it by utilizing our inherent communication abilities and breaking the inefficient link that requires us not only to process our thoughts as language, but also to laboriously translate them into keystrokes. In neural terms, this is one of the most inefficient paths imaginable.

But if Siri teases us with a potentially more efficient path, Google Glass introduces a new, mind-blowing scenario of what might be possible. To parse environmental cues and stream information directly into our visual cortex in real time, creating a direct link with all that pent-up functionality that lives “in the cloud,” wipes away most of the inefficiency of our current connection paradigm.

Don’t think of the current implementation that Google is publicizing. Think beyond that to a much more elegant link between the vast capabilities of a digitized world and our own inner consciousness. Whatever Glass and Siri (and their competitors) eventually evolve into in the next decade or so, they will be far beyond what we’re considering today.

With the humanization of these interfaces, a potentially dark side effect will emerge. These interfaces will become hardwired into our behavior strategies. Now, because our online interactions are largely processed at a conscious level, the brain tends to maintain maximum flexibility regarding the routines it uses. But as we access subconscious levels of processing with new interface opportunities, the brain will embed these interfaces at a similarly subconscious level. They will become habitual, playing out without conscious intervention. It’s the only way the brain can maximize its efficiency. When this happens, we will become dependent on these technological interfaces. It’s the price we’ll pay for the increased efficiency.

Building a Better Meta-Me

First published February 14, 2013 in Mediapost’s Search Insider

Last week I forecast that Facebook would become irrelevant. Some of you disagreed. Ron Stitt called Facebook the “public square” or “crossroads” of social connection.

Andre Szykier pointed out a very real challenge with the successful socialization of online: “The problem is connecting the content from my social walled gardens into a virtual cloud point. Google+ is going about it a different way. They keep expanding their walled garden with search, mail, video, chat services along with social and app services that they provide, hoping you eventually will find their garden big and rich enough so everybody will migrate. While it helps them be the CyBorg of data, it makes people more uneasier (sic) to have all of that in one garden than spread across many. Time will tell which model will thrive.”

Thank you, SI readers. As you so often do, you challenged me to give this idea a little more thought. I still inherently believe that Facebook is being marginalized on the social periphery, but both Ron and Andre have nailed a fundamental concept here that merits further discussion: What will the connection point between ourselves and online (and I extend this beyond social alone) evolve into?

The problem, I believe, comes with control. Who controls the connection? Understandably, Facebook, Google, and a host of others want to control this critical territory. It’s an online land grab; they offer us destinations, and we go to them. In return, because the connection happens on their turf, they get to monetize that turf. It’s like an online Monopoly game, with everyone scrambling to own Park Place so they can put more hotels on it.

The problem is that to effectively monetize, all these destinations ask us to invest in letting them know who we are. This creates the problem of profiles – so many profiles to maintain, so little time. If I move to another square, I have to start all over again.

All this profile information is used to create a “meta” representation of us. It’s the online data handshake that enables successful connection.  The issue is that Facebook, Google and all the others want us to build the profile, but for them to own it. This means we have to build multiple “meta” profiles of ourselves. It’s terribly inefficient and requires us to do most of the heavy lifting. Also, as Andre points out, it raises an important question – why should Google (or anyone else) own the meta version of me? I think that’s something I should own.

This dynamic introduces another problem: In order to reduce the heavy lifting, these destinations use our own activity to help build the profile. The more we do, the more they can learn about us. This is fine, as long as the best way to do any of these things is the option offered by the destination that’s trying to build the profile. But even with the vast resources available to a Google or Facebook, it’s almost impossible for them to stay ahead of the constant evolution of online innovation. Sooner or later, there will be a better way to do something somewhere else. At this point, we’re faced with a dilemma: Do we stick with the original destination, where we’ve invested in building a rich meta version of ourselves, or do we trade that for the better functionality offered by the new alternative, knowing that we’ll have to start building yet another meta-me?

Google and Facebook, as Ron and Andre point out, have both gone down the road of building a support platform for other innovators, hoping to at least share a significant slice of the territory with new alternatives. This allows us to use that version of our profile in more ways. But it’s still a territorial strategy, and ultimately, that creates a persistent vulnerability in an environment as dynamic as online. It’s very difficult to hold territory successfully in our ever-expanding online world.

To me, there’s only one eventual answer. We have to own our own meta-selves. Our online profile must be rich and completely portable. When we choose a new destination, our meta-me immediately unlocks the full potential of the destination, tailored specifically for us. There are challenges to be overcome — primarily around issues of privacy — but this is the only sustainable path.

Up to now, the Internet has been all about who owns what territory. This is not surprising — it’s a natural extension of our existing worldview, one formed in a physical environment. Our minds need time to grapple with and assimilate abstract concepts. Until now, we’ve “gone” to places online. But the evolved functionality of the Internet has expanded beyond this parochial mental scaffolding. It’s time to reimagine the possibilities, using our own concepts of consciousness as a new framework. We will live at the center, defining who we are and what we want — and the Internet will become a vast extension of our mental potential that we can call on at will, without having to “go” anywhere. We’ve seen hints of this in search already, conceptually fleshing out Wegner’s transactive memory.

Daunting? Yes. Kurzweilian (with all the negative and positive connotations that implies)? Probably.  Inevitable? I believe so.

Breaking Out of Facebook’s Walled Garden

First published February 7, 2013 in Mediapost’s Search Insider

According to Pew, 27% of us are looking to wean ourselves off the Facebook habit.

This is not particularly surprising. While Facebook can be incredibly distracting, it’s not really relevant to our lives. It has never been woven into the fabric of our day-to-day activities. It’s more like an awkward, albeit entertaining, interlude jammed into the long list of stuff we have to do today. That list represents our life. Facebook represents the stuff that lies on the periphery.

Here’s one way to think about it. What if Facebook went down today? Would it really matter? Sure, it might be a disappointment, but would it make us substantially change our plans?

Now consider if Google went down for the day. How many times in a day would you go to use it, then curse because it wasn’t there?

The problem is that our online social interactions are outgrowing the walled garden that is Facebook. It has failed to become essential in the way that Google has. I can go entire months without logging into my Facebook account. I have trouble going an hour without using Google. And when I need Google, I need it now.

Again, I turn to how we use language as a clue as to how we feel about things. To “search” is a verb. It’s an action that connects intents with outcomes. It’s something we have to do. And, if you’re loyal to Google as your search engine, it’s pretty easy to swap “googling” for “searching” and for everyone to know exactly what you mean.

But what, I ask, is social? It’s not a verb. It’s not even a noun. It’s an adjective that describes someone or something. If I told you I “Facebooked” someone, you probably wouldn’t know what I meant. And that’s an important distinction. “Social” is tied to who we are. It isn’t tied to any single destination. Social travels with us.

When Facebook came on the scene, it did do a good job of showing us how online could be used to keep better track of our extended social networks. But now there are other ways to do that. An informal poll by Macquarie Securities also found that Instagram is a quickly growing way to connect, especially among Facebook’s core market of 18- to 25-year-olds.

Facebook can’t own social in the same way Google can own search. We own social, because we are social. And we will use multiple tools to allow us to be social.

Facebook envisioned a social ecosystem that could then be monetized with targeted advertising. But as the Pew study points out, Facebook just couldn’t contain all our social activity. Many of us are thinking that we should probably spend less time on Facebook, as we find other ways to connect online. While Facebook has never been essential, it now also risks becoming irrelevant.

Weighing Positive and Negative Impacts on Users

First published January 31, 2013 in Mediapost’s Search Insider

We humans hate loss. In fact, we seem to weigh losing something about twice as heavily as gaining it. For example, imagine I gave you a coffee cup and then offered to buy it back from you. That’s scenario 1. In scenario 2, I ask you to buy the same coffee cup from me. The price you assign to the coffee cup in the first scenario will be, on average, about twice as much as in the second. And yes, there’s research to back this up.

When it comes to winning and losing, it’s been shown that “losses loom larger than gains.” It’s just one of the weird glitches in our logical circuitry. We tend to be hardwired to look at glasses as half empty.

Recently, I was reviewing an academic study from 2008 with this scintillating title: “Procedural Priming and Consumer Judgment: Effects on the Impact of Positively and Negatively Valenced Information,” by Shen and Wyer. If you can get beyond the rather dry title, you’ll find a treasure trove of tidbits to consider when crafting your online user experience.

For example, when we evaluate a product for potential purchase, we may run across both positive and negative information. The order in which we encounter this information can have a dramatic impact on what we do downstream from that interaction. To use psychological terms, it “primes” our mental framework. And, because we tend to focus on negatives, less favorable information has a greater impact on our decision than positive information.

But it’s not just that we pay more attention to bad news than good news. It’s that bad news can hijack the entire consideration process. According to Shen and Wyer, if we run into negative information, it can change our information-seeking strategies, leading us down further negatively biased channels to confirm the initial information we saw. Bad news tends to lead to more bad news.

Also, we can get “bad news” hangovers. If we compare negatives in one decision process, that negative mental framework can carry over to an entirely different decision that has nothing to do with the first, giving us a heightened awareness of negative information in the new situation.

Here’s another interesting finding. If we’re rushed for time, this preoccupation with the negatives will dramatically affect the decision we make. But, if we have all the time in the world, the impact is relatively insignificant. Given time, we seem to cancel out our inherently negative biases.

All this news is not bad for marketers, however. It seems that simply getting users to state their preference for one feature over another, even when they’re not actively considering a purchase at the time, leads to a much greater likelihood of purchase in the future. If you can get users to compare alternatives — and, more importantly, to commit to saying they prefer one over another — they clear the mental hurdle of deciding “will I buy?” and instead start considering “what will I buy?”

Finally, there is also a recency effect, especially if prospects had ample time to consider all their alternatives. Shen and Wyer found that the last information considered seemed to have the greatest effect on the buyer.  So, if information was both positive and negative, it was good to get the least favorable information in front of the prospect early, and then move to the most favorable information. Again, this is true only if the user had plenty of time to weigh the options. If they were rushed, the opposite was true.

All in all, these are intriguing concepts to consider when crafting an ideal online user experience. They also underscore the importance of first impressions, especially negative ones.

McLuhan 50 Years Later

First published December 20, 2012 in Mediapost’s Search Insider

My daughter, who is in her senior year of high school, recently wrote an essay on Marshall McLuhan. She asked me to give my thoughts on McLuhan’s theories of media. To be honest, I hadn’t given McLuhan much thought since my college days, when I had packed away “Understanding Media: The Extensions of Man” for what I thought would likely be forever. I always found the title ironic. This book does many things, but promoting “understanding” is not one of them. It’s one of the more incomprehensible texts I’ve ever encountered.

My daughter’s essay caused me to dig up my half-formed understanding of what McLuhan was trying to say. I also tried to update that understanding from the early ’60s, when the book was written, to the world we live in today, a half-century later.

Consider this passage from McLuhan, written exactly 50 years ago: “The next medium, whatever it is—it may be the extension of consciousness—will include television as its content, not as its environment, and will transform television into an art form. A computer as a research and communication instrument could enhance retrieval, obsolesce mass library organization, retrieve the individual’s encyclopedic function and flip into a private line to speedily tailored data of a saleable kind.”

(See, I told you it was incomprehensible!)

The key thing to understand here is that McLuhan foretold something that I believe is unfolding before our eyes: The media we interact with are changing our patterns of cognition – not the message, but the medium itself. We are changing how we think. And that, in turn, is changing our society. While we focus on the messages we receive, we fail to notice that the ways we receive those messages are changing everything we know, forever. Twitter, Facebook, Google, the Xbox and YouTube – all are co-conspirators in a wholesale rewiring of our world.

Now, to borrow from McLuhan’s own terminology, no one in our Global Village could ignore the horrific unfolding of events in Connecticut last week. But the channels we received the content through also affected our intellectual and visceral connection with that content. Watching parents search desperately for their children on television was a very different experience from catching the latest CNN update delivered via my iPhone.

When we watched through “hot” media, we connected at an immediate and emotional level. When the message was delivered through “cool” media, we stood somewhat apart, framing the messaging and interpreting it, abstracted at some length from the sights and sounds of what was unfolding. Because of the emotional connection afforded by the “hot” media, the terror of Newtown was also our own.

McLuhan foretold this as well: “Unless aware of this dynamic, we shall at once move into a phase of panic terrors, exactly befitting a small world of tribal drums, total interdependence, and superimposed co-existence. […] Terror is the normal state of any oral society, for in it everything affects everything all the time.”

My daughter is graduating next June. The world she will inherit will bear little resemblance to the one I stepped into, fresh from my own graduation in 1979. It is smaller, faster, more connected and, in many ways, more terrifying. But, has the world changed as much as it seems, or is it just the way we perceive that world? And, in that perception, are we the ones unleashing the change?

The “Savanna” Hypothesis of Online Design

First published December 6, 2012 in Mediapost’s Search Insider

I’m currently reading a fascinating paper titled “Evolved Responses to Landscapes” by Gordon Orians and Judith Heerwagen that was written back in 1992. The objective was to see if humans have an evolved preference for an ideal habitat. The researchers called their hunch the Savanna Hypothesis, noting that because Homo sapiens spent much of our evolutionary history on the plains of tropical Africa, we should have a natural affinity for this type of landscape.

Your typical savanna features some cover from vegetation and trees, but not so much that natural predators could advance unnoticed. The environment should offer enough lushness to indicate the presence of ample food and water. It should allow for easy mobility. And it should be visually intriguing, encouraging us to venture out and explore our habitat.

Here’s a quote from the paper: “Landscapes that aid and encourage exploration, wayfinding and information processing should be more favored than landscapes that impede these needs.”

The researchers, after showing participants hundreds of pictures of different landscapes, found significant support for their hypothesis. Most of us have a preference for landscapes that resemble our evolutionary origin. And the younger we are, the more predictable the preference. With age, we tend to adapt to where we live and develop a preference for it.

In reading this study, I couldn’t help but equate it to Pirolli and Card’s Information Foraging Theory. The two PARC researchers said that the strategies we use to hunt for information in a hyperlinked digital format (such as a webpage) seem to correspond to evolved optimal foraging strategies used by many species, including humans back in our hunting and foraging days. If, as Pirolli and Card theorized, we borrow inherent strategies for foraging and adapt them for new purposes, like looking for information, why wouldn’t we also apply evolved environmental preferences to new experiences, like the design of a Web page?

Consider the description of an ideal habitat quoted above. We want to be able to quickly determine our navigation options, with just a teaser of things still to explore. We want open space, so we can quickly survey our options, but we also want the promise of abundant rewards, either in the form of food and sustenance — or, in the online case, information and utility. After all, what is a website but another environment to navigate?

I find the idea of creating a home page design that incorporates a liberal dose of intrigue and promise particularly compelling. In a physical space, such an invitation may take the form of a road or pathway curving behind some trees or over a gentle rise. Who can resist such an invitation to explore just a little further?

Why should we take the same approach with a home page or landing page? Orians and Heerwagen explain that we tend to “way-find” through new environments in three distinct stages: First, we quickly scan the environment to decide if it’s even worth exploring. Do we stay here or move on to another, more hospitable location? This very quick scan frames all the interactions that follow. After this “go/no-go” scan, we then start surveying the environment to gather information and find the most promising path to take. The final phase — true engagement with our surroundings — is when we decide to stay put and get some things done.

Coincidentally (or not?), I have found users take a very similar approach to evaluating a webpage. We’ve even codified this behavior into a usability best practice we call the “3 Scan Rule.” The first scan is to determine the promise of the page. Is it visually appealing? Is it relevant? Is it user-friendly? All these questions should be answerable in one second or less. In fact, a study at Carleton University found that we can reliably judge the aesthetic appeal of a website in as short a span as 50 milliseconds. That’s less time than it takes to blink your eye.

The second scan is to determine the best path. This typically involves exploring the primary navigation options, scanning graphics and headings and quickly looking at bullet lists to determine how “rich” the page is. Is it relevant to our intent? Does it look like there’s sufficient content for us to invest our time? Are there compelling navigation options that offer us more? This scan should take no more than 10 seconds.

Finally, there’s the in-depth scan. It’s here where we more deeply engage with the content. This can take anywhere from several seconds to several minutes.

At this point, the connection between the inherently pleasing characteristics of the African savanna and a well-designed website is no more than a hypothesis on my part. But I have to admit: I find the concept intriguing, like a half-obscured pathway disappearing over a swell on the horizon, waiting to be explored.

Pursuing the Unlaunched Search

First published November 29, 2012 in Mediapost’s Search Insider

Google’s doing an experiment. Eight times a day, at random intervals, 150 people get an alert on their smartphones, and Google asks them this question: “What did you want to know recently?” The goal? To find out all the things you never thought to ask Google about.

This is a big step for Google. It moves search into a whole new arena. It’s shifting the paradigm from explicit searching to implicit searching. And that’s important for all of the following reasons:

Search is becoming more contextually sensitive. Mobile search is contextually sensitive search. If you have your calendar, your to-do list, your past activities and a host of other information all stored on a device that knows where you are, it becomes much easier to guess what you might be interested in. Let’s say, for example, that your calendar has “Date with Julie” entered at 7 p.m., and you’re downtown. In the past year, 57% of your “dates with Julie” have involved dinner and a movie. You usually spend between $50 and $85 on dinner, and your movies of choice generally vacillate between rom-coms and action-adventures (depending on who gets to choose).

In this scenario, without waiting for you to ask, Google could probably be reasonably safe in suggesting local restaurants that match your preferences and price ranges, showing you any relevant specials or coupons, and giving you the line-up of suggested movies playing at local theatres. Oh, and by the way, you’re out of milk and it’s on sale at the grocery store on the way home.
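
To make the mechanics concrete, here is a minimal sketch of how an implicit suggestion might be assembled from stored context signals. Everything in it (the data structures, the matching rules, the toy listings) is a hypothetical illustration of the scenario above, not anything Google has described.

```python
from dataclasses import dataclass

# Hypothetical context signals a device might already hold.
@dataclass
class Context:
    calendar_entry: str   # e.g. "Date with Julie, 7 p.m." sets the occasion
    location: str         # e.g. "downtown"
    dinner_budget: tuple  # historical spend range, e.g. (50, 85)
    movie_genres: list    # historical genre preferences

def implicit_suggestions(ctx: Context, restaurants, movies):
    """Rank candidates against stored preferences, with no explicit query."""
    dine = [r for r in restaurants
            if r["area"] == ctx.location
            and ctx.dinner_budget[0] <= r["avg_price"] <= ctx.dinner_budget[1]]
    watch = [m for m in movies if m["genre"] in ctx.movie_genres]
    return dine, watch

# Toy inputs standing in for an index of local listings.
ctx = Context("Date with Julie, 7 p.m.", "downtown", (50, 85),
              ["rom-com", "action-adventure"])
restaurants = [{"name": "Trattoria Roma", "area": "downtown", "avg_price": 70},
               {"name": "Budget Bites", "area": "downtown", "avg_price": 20}]
movies = [{"title": "Skyfall", "genre": "action-adventure"},
          {"title": "Art House Doc", "genre": "documentary"}]

dine, watch = implicit_suggestions(ctx, restaurants, movies)
print(dine)   # only the restaurant matching location and budget
print(watch)  # only the movie matching a preferred genre
```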

Can Googling become implicit? “We’ve often said the perfect search engine will provide you with exactly what you need to know at exactly the right moment, potentially without you having to ask for it,” says Google Lead Experience Designer Jon Wiley, one of the leads of the research experiment.

As our devices know more about us, the act of Googling may move from a conscious act to a subliminal suggestion. The advantage, for Google and for us, is that it can provide us with information we never thought to ask for. In the ideal state envisioned by Google, it can read the cues of our current state and scour its index of information to provide relevant options. Let’s say we just bought a bookcase from Ikea. Without our asking, Google could download the assembly manual and pull relevant posts from user support forums.

It ingrains the Google habit. Google is currently in the enviable position of having become a habit. We don’t think to use Google, we just do. Of course, habits can be broken. Habits are a subconscious script that plays out in a familiar environment, delivering an expected outcome without conscious intervention. To break a habit, you usually look at disrupting the environment, stopping the script before it has a chance to play out.

The environment of search is currently changing dramatically. This raises the possibility that the Google habit could be broken. If our habits suddenly find themselves in unfamiliar territory, the regular scripts are blocked and we’re forced to think our way through the situation.

But if Google can adapt to unfamiliar environments and prompt us with relevant information without us having to give it any thought, the company not only preserves the Google habit but ingrains it even more deeply. Good news for Google, bad news for Bing and other competitors.

It expands Google’s online landscape. Finally, at this point, Google’s best opportunity for a sustainable revenue channel is to monetize search. As long as Google controls our primary engagement point with online information, it has no shortage of monetization opportunities. By moving away from waiting for a query and toward proactive serving of information, Google can exponentially expand the number of potential touch points with users. Each of these touch points comes with another advertising opportunity.

All this is potentially ground-breaking, but it’s not new. Microsoft was talking about Implicit Querying a decade ago. It was supposed to be built into Windows Vista. At that time, it was bound to the desktop. But now, in a more mobile world, the implications of implicit searching are potentially massive.

The Balancing of Market Information

First published October 25, 2012 in Mediapost’s Search Insider

In my three previous columns on disintermediation, I made a rather large assumption: that the market will continue to see a balancing of information available both to buyers and sellers. As this information becomes more available, the need for the “middle” will decrease.

Information Asymmetry Defined

Let’s begin by exploring the concept of information asymmetry, courtesy of George Akerlof, Michael Spence and Joseph Stiglitz.  In markets where access to information is unbalanced, bad things can happen.

If the buyer has more information than the seller, we can have something called adverse selection. Take life and health insurance, for example. Smokers (on average) get sick more often and die younger than non-smokers. If 50% of an insurance company’s policyholders are smokers and 50% aren’t, but the company is not allowed to know which is which, it has a problem with adverse selection. It will lose money on the smokers, so it will increase rates across the board. The problem is that non-smokers, who don’t use their insurance as much, will get angry and may cancel their policies. This means the “book of business” will become even less profitable, driving rates even higher. The solution, which we all know, is simple: ask policy applicants if they smoke. Imperfect information is thus balanced out.
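
The spiral is easy to see in a toy simulation. This is a minimal sketch with invented numbers (the claim costs, pool sizes and lapse rule are all my assumptions, not actuarial practice):

```python
# A toy simulation of the adverse-selection spiral described above.
smoker_cost, nonsmoker_cost = 2000, 1000   # expected annual claims per head
smokers, nonsmokers = 500, 500             # initial book of business

for year in range(5):
    # The insurer can't tell who smokes, so everyone pays the pool average.
    pool_cost = smokers * smoker_cost + nonsmokers * nonsmoker_cost
    premium = pool_cost / (smokers + nonsmokers)
    # Non-smokers overpay relative to their own risk, so some lapse;
    # smokers are getting a bargain, so they all stay.
    overpay = premium - nonsmoker_cost
    nonsmokers = int(nonsmokers * max(0.0, 1 - overpay / nonsmoker_cost))
    print(f"year {year}: premium ${premium:.0f}, non-smokers left: {nonsmokers}")
```

Run it and the premium climbs from $1,500 toward the smokers’ $2,000 cost as the non-smokers drain away, which is exactly the spiral described above.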

If the seller has more information than the buyer, we have a “market for lemons” (the name of Akerlof’s paper). Here, buyers assume risk in a purchase without knowingly accepting that risk, because they’re unaware of the problems that the seller knows exist. Think about buying a used car, without the benefit of an inspection, past maintenance records or any type of independent certification. All you know is what you can see by looking at the car on the lot. The seller, on the other hand, knows the exact mechanical condition of the car. This tends to drive down the prices of all products — even the good ones — in the market, because buyers assume quality will be suspect. The balancing of information in this case helps eliminate the lemons and has the long-term effect of improving the average quality of all products on the market.

Getting to Know You…

These two forces — the need for sellers to know more about their buyers, and the need for buyers to know more about what they’re buying — are driving a tremendous amount of information-gathering and dissemination. On the seller’s side, behavioral tracking and customer screening are giving companies an intimate glimpse into our personal lives. On the buyer’s side, access to consumer reviews, third-party evaluations and buyer forums is helping us steer clear of lemons. Both are being facilitated through technology.

But how does disintermediation impact information asymmetry, or vice versa?

If we didn’t have adequate information, we needed some other safeguard against being taken advantage of. So, failing a rational answer to this particular market dilemma, we found an irrational one: We relied on gut instinct.

Relying on Relationships

If we had to place our trust in someone, it had to be someone we could look in the eye during the transaction. The middle was composed of individuals who acted as the face of the market. Because they lived in the same communities as their customers, went to the same churches, and had kids that went to the same schools, they had to respect their markets. If they didn’t, they’d be run out of town. Often, their loyalties were also in the middle, balanced somewhere between their suppliers and their customers.

In the absence of perfect information, we relied on relationships. Now, as information improves, we still want relationships, because that’s what we’ve come to expect. We want the best of both worlds.

Will Customer Service Disappear with the Elimination of the “Middle”?

First published October 18, 2012 in Mediapost’s Search Insider

In response to my original column on disintermediation, Joel Snyder worried about the impact on customer service: “The worst casualty is relationships and people skills. As consumers circumvent middlemen, they become harder to deal with. As merchants become more automated, customer service people have less power and less skills (and lower pay).”

Cece Forrester agreed: “Disintermediation doesn’t just let consumers be rude. It also lets organizations treat their customers rudely.”

So, is rudeness an inevitable byproduct of disintermediation?

Rediscovering the Balance between Personalization and Automation

Technology introduces efficiency. It strips away the “noise” and marketplace friction that come with human interactions. But with that “noise” comes all the warm and fuzzy aspects of being human. It’s what both Joel and Cece fear may be lost with disintermediation. I, however, have a different view.

Shifts in human behavior don’t typically happen incrementally, settling gently into a new norm. They swing like a pendulum, going too far one way, then the other, before stability is reached. Some force — in this case, new technological capabilities — triggers the change. As society moves, momentum carries the swing too far in one direction, triggering an opposing force that pushes back against the trend. Eventually, balance is reached.

A Redefinition of Relationships

In this case, the opposing force will be our need for those human factors. Disintermediation won’t kill relationships. But it will force a redefinition of relationships. The challenge here is that existing market relationships were all tied to the “Middle,” which served as the bridge between producers and consumers. Because the Middle owned the end connection with the customer, it formed the relationships that currently exist. Now, as anyone who has experienced bad customer service will tell you, some who lived in the Middle were much better at relationships than others. Joel and Cece may be guilty of looking at our current paradigm through rose-colored glasses. I have encountered plenty of rudeness even with the Middle firmly in place.

But it’s also true that producers, who suddenly find themselves directly connected with their markets, have little experience in forming and maintaining these relationships. However, the market will eventually dictate new expectations for customer service, and producers will have to meet those expectations. One disintermediator, Zappos, figured that out very early in the game.

Ironically, disintermediation will ultimately be good for relationships. Feedback loops are being shortened. Technology is improving our ability to know exactly what our customers think about us. We’re actually returning to a much more intimate marketplace, enabled through technology. Producers are quickly educating themselves on how to create and maintain good virtual relationships. They can’t eliminate customer service, because we, the market, won’t let them. It will take a bit for us to find the new normal, but I venture to say that wherever we find it, we’ll end up in a better place than we are today.

A Decade with the Database of Intentions

First published September 27, 2012 in Mediapost’s Search Insider

It’s been over 10 years since John Battelle first started considering what he called the “Database of Intentions.” It was, and is:

The aggregate results of every search ever entered, every result list ever tendered, and every path taken as a result. It lives in many places, but three or four places in particular hold a massive amount of this data (i.e., MSN, Google, and Yahoo). This information represents, in aggregate form, a place holder for the intentions of humankind – a massive database of desires, needs, wants, and likes that can be discovered, subpoenaed, archived, tracked, and exploited to all sorts of ends. Such a beast has never before existed in the history of culture, but is almost guaranteed to grow exponentially from this day forward. This artifact can tell us extraordinary things about who we are and what we want as a culture. And it has the potential to be abused in equally extraordinary fashion.

When Battelle considered the implications, it overwhelmed him. “Once I grokked this idea (late 2001/early 2002), my head began to hurt.” Yet, for all its promise, marketers have only marginally leveraged the Database of Intentions.

In the intervening time, the possibilities of the Database of Intentions have not diminished. In fact, as Battelle himself later wrote, they have grown exponentially:

My mistake in 2003 was to assume that the entire Database of Intentions was created through our interactions with traditional web search. I no longer believe this to be true. In the past five or so years, we’ve seen “eruptions” of entirely new fields, each of which, I believe, represent equally powerful signals – oxygen flows around which massive ecosystems are already developing. In fact, the interplay of all of these signals (plus future ones) represents no less than the sum of our economic and cultural potential.

Since I share Battelle’s predilection for “Holy Sh*t” moments, a post by MediaPost’s Laurie Sullivan this Tuesday got me thinking again about Battelle’s “DBoI.” A recent study by Google and EA showed that search data can predict video game sales with 84% accuracy. But the data used in the prediction only scratches the surface of what’s possible. Adam Stewart from Google hints at what might come next: “Aside from searches, Google plans to build in game quality, TV investment, online display investment, and social buzz to create a multivariate model for future analysis.”

This is very doable stuff. We already have everything we need to create predictive models that match (and probably far exceed) the degree of accuracy already demonstrated. The data is just sitting there, waiting to be interpreted. The implications for marketing are staggering, but to Battelle’s point, let’s not be too quick to corral this simply for the use of marketers. The DBoI has implications that reach into every aspect of our society and lives. This is big — really big! If that sounds unduly ominous to you, let me give you a few reasons why you should be more worried than you are.
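
To give a sense of how straightforward the modeling side is, here is a minimal sketch of the kind of multivariate model Stewart describes, fitted to synthetic data. The feature names echo the signals he lists, but the numbers and the choice of scikit-learn are my assumptions, not details from the Google/EA study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200  # synthetic "game launches"

# Hypothetical signals, echoing the ones Stewart lists.
search_volume = rng.uniform(0, 100, n)
tv_spend = rng.uniform(0, 50, n)
display_spend = rng.uniform(0, 30, n)
social_buzz = rng.uniform(0, 10, n)

# Fabricate sales as a noisy function of the signals so the fit is meaningful.
sales = (3.0 * search_volume + 1.5 * tv_spend
         + 0.8 * display_spend + 4.0 * social_buzz
         + rng.normal(0, 20, n))

X = np.column_stack([search_volume, tv_spend, display_spend, social_buzz])
model = LinearRegression().fit(X, sales)
print("R^2:", round(model.score(X, sales), 2))       # fit quality
print("coefficients:", model.coef_.round(2))         # per-signal weights
```

With real launch data in place of the synthetic arrays, the same handful of lines yields a sales forecast; the hard part is the data, not the math.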

Typically, if we were to predict patterns in human behavior, there would be two sources of signals. One comes from an understanding of how humans act. As we speak, this is being attacked on multiple fronts. Neuroscience, behavioral economics, evolutionary psychology and a number of other disciplines are rapidly converging on a vastly improved understanding of what makes us tick. From this base understanding, we can then derive hypotheses of predicted behaviors in any number of circumstances.

This brings us to the other source of behavior signals. If we have a hypothesis, we need some way to scientifically test it. Large-scale collections of human behavioral data allow us to search for patterns and identify underlying causes, which can then serve as predictive signals for future scenarios. The Database of Intentions gives us a massive source of behavior signals that capture every dimension of societal activity. We can test our hypotheses quickly and accurately against the tableau of all online activity, looking for the underlying influences that drive behaviors.

At the intersection of these two is something of tremendous import. We can start predicting human behavior on a massive scale, with unprecedented accuracy. With each prediction, the feedback loop between qualitative prediction and quantitative verification becomes faster and more efficient. Throw a little processing power at it and we suddenly have an artificially intelligent, self-improving predictive model that will tell us, with startling accuracy, what we’re likely to do in the future.

This ain’t just about selling video games, people. This is a much, much, much bigger deal.