Psychological Priming and the Path to Purchase

First published March 27, 2013 in Mediapost’s Search Insider

In marketing, I suspect we pay too much attention to the destination, and not enough to the journey. We don’t take into account the cumulative effect of the dozens of subconscious cues we encounter on the path to our ultimate purchase. We certainly don’t understand the subtle changes of direction that can result from these cues.

Search is a perfect example of this.

As search marketers, we believe that our goal is to drive a prospect to a landing page. Some of us worry about the conversion rates once a prospect gets to the landing page. But almost none of us think about the frame of mind of prospects once they reach the landing page.

“Frame” is the appropriate metaphor here, because the entire interaction will play out inside this frame. It will impact all the subsequent “downstream” behaviors. The power of priming should not be taken lightly.

Here’s just one example of how priming can wield significant unconscious power over our thoughts and actions. Participants primed by exposure to a stereotypical representation of a “professor” did better on a knowledge test than those primed with a representation of a “supermodel.”

A simple exposure to a word can do the trick. It can frame an entire consumer decision path. So, if many of those paths start with a search engine, consider the influence that a simple search listing may have.

We could be primed by the position of a listing (higher listings = higher quality alternatives).  We could be primed (either negatively or positively) by an organization that dominates the listing real estate. We could be primed by words in the listing. We could be primed by an image. A lot can happen on that seemingly innocuous results page.

Of course, the results page is just one potential “priming” platform. Priming could happen on the landing page, a third-party site or the website itself. Every single touch point, whether we’re consciously interacting with it or not, has the potential to frame, or even sidetrack, our decision process.

If the path to purchase is littered with all these potential landmines (or, to take a more positive approach, “opportunities to persuade”), how do we use this knowledge to become better marketers? This does not fall into the typical purview of the average search marketer.

Personally, I’m a big fan of the qualitative approach (I know — big surprise) in helping to lay down the most persuasive path possible. Actually talking to customers, observing them as they navigate typical online paths in a usability testing session, and creating some robust scenarios to use in your own walk-throughs will yield far better results than quantitative number-crunching. Excel is not particularly good at being empathetic.

Jakob Nielsen has said that online, branding is all about experience, not exposure. As search marketers, it’s our responsibility to ensure that we’re creating the most positive experience possible, as our prospects make their way to the final purchase.

The devil, as always, is in the details — whether we’re paying conscious attention to them or not.

Viewing the World through Google-Colored Glass

First published March 7, 2013 in Mediapost’s Search Insider

Let’s play “What If” for a moment. For the last few columns, I’ve been pondering how we might more efficiently connect with digital information. Essentially, I see the stripping away of the awkward and inefficient interfaces that have been interposed between that information and us. Let’s imagine, 15 years from now, that Google Glass and other wearable technology provides a much more efficient connection, streaming real-time information to us that augments our physical world. In the blink of an eye, we can retrieve any required piece of information, expanding the capabilities of our own limited memories beyond belief. We have perfect recall, perfect information — we become omniscient.

To facilitate this, we need to move our cognitive abilities to increasingly subterranean levels of processing – taking advantage of the “fast and dirty” capabilities of our subliminal mind. As we do this, we actually rewire our brains to depend on these technological extensions. Strategies that play out with conscious guidance become stored procedures that follow scripts written by constant repetition. Eventually, overtraining ingrains these procedures as habits, and we stop thinking and just do. Once this happens, we surrender much of our ability to consciously change our behaviors.

Along the way, we build a “meta” profile of ourselves, which acts as both a filter and a key to the accumulated potential of the “cloud.” It retrieves relevant information based on our current context and a deep understanding of our needs, it unlocks required functionality, and it archives our extended network of connections. It’s the “Big Data” representation of us, condensed into a virtual representation that can be parsed and manipulated by the technology we use to connect with the virtual world.

In my last column, Rob Schmultz and Randy Kirk wondered what a world full of technologically enhanced Homo sapiens would look like. Would we all become the annoying guy in the airport who can’t stop talking on his Bluetooth headset? Would we become so enmeshed in our digital connections that we ignore the physical ones that lie in front of our own noses? Would Google Glass truly augment our understanding of the world, or would it make us blind to its charms? And what about the privacy implications of a world where our every move could instantly be captured and shared online — a world full of digital voyeurs?

I have no doubt that technology can take us to this not-too-distant future as I envisioned it. Much of what’s required already exists. Implantable hardware, heads-up displays, sub-vocalization, biofeedback — it’s all very doable. What I wonder about is not the technology, but rather us. We move at a much slower pace. And we may not recognize any damage that’s done until it’s too late.

The Darwinian Brain

At an individual level, our brains have a remarkable ability to absorb technology. This is especially true if we’re exposed to that technology from birth. The brain represents a microcosm of evolutionary adaptation, through a process called synaptic pruning. Essentially, the brain builds and strengthens neural pathways that are used often, and “prunes” away those that aren’t. In this way, the brain literally wires itself to be in sync with our environment.

The majority of this neural wiring happens when we’re still children. So, if our childhood environment happens to include technologies such as heads-up displays, implantable chips and other direct interfaces to digital information, our brains will quickly adapt to maximize the use of those technologies. Adults will also adapt to these new technologies, but because our brains are less “plastic” than those of children, the adaptation won’t be as quick or complete.

The Absorption of Technology by Society

I don’t worry about our brain’s ability to adapt. I worry about the eventual impact on our society. With changes this portentous, there is generally a social cost. To consider what might come, it may be beneficial to look at what has been. Take television, for example.

If a technology is ubiquitous and effective enough to spread globally, like TV did, there is the issue of absorption. Not all sectors of society will have access to the technology at the same time. As the technology is absorbed at different rates, it can create imbalances and disruption. Think about the societal divide caused by the absorption of TV, which resulted in a completely different information-distribution paradigm. One can’t help thinking that TV played a significant role in much of the political change we saw sweep over the world in the past three decades.

And even if our brains quickly adapt to technology, that doesn’t mean our social mores and values will move as quickly. As our brains rewire to adapt to new technologies, our cultural frameworks also need to shift. With different generations and segments of society at different places on the absorption curve, this can create further tensions. If you take the timeline of societal changes documented by Robert Putnam in “Bowling Alone” and overlay the timing of the adoption of TV, the correlation is striking and not a little frightening.

Even if our brains have the ability to adapt to technology, it isn’t always a positive change. For example, there is compelling evidence that early exposure to TV has contributed to the recent explosion of diagnosed ADHD and possibly even autism.

Knowing Isn’t Always the Same as Understanding

Finally, we have the greatest fear of Nicholas Carr: maybe this immediate connection to information will have the “net” effect of making us stupid — or, at least, more shallow thinkers. If we’re spoon-fed information on demand, do we grow intellectually lazy? Do we start to lose the ability to reason and think critically? Will we swap quality for quantity?

Personally, I’m not sure Carr’s fears are well-founded on this front. It may be that our brains adapt and become even more profound and capable. Perhaps when we offload the simple journeyman tasks of retrieving information and compiling it for consideration to technology, our brains will be freed up to handle deeper and more abstract tasks. The simple fact is, we won’t know until it happens. It could be another “Great Leap Forward,” or it may mark the beginning of the decline of our species.

The point is, we’ve already started down the path, and it’s highly unlikely we’ll retreat at this point. I suppose we have no option but to wait and see.

Why I – And Mark Zuckerberg – are Bullish on Google Glass

First published February 28, 2013 in Mediapost’s Search Insider

Call it a Tipping Point. Call it an Inflection Point. Call it Epochal (whatever that means). The gist is, things are going to change — and they’re going to change in a big, big way!

First, with due deference to the brilliant Kevin Kelly, let’s look at how technology moves. In his book “What Technology Wants,” Kelly shows that technology is not dependent on a single invention or inventor. Rather, it’s the sum of multiple, incremental discoveries that move technology to a point where it can breach any resistance in its way and move into a new era of possibility. So, even if Edison had never lived, we’d still have electric lights in our homes. If he weren’t there, somebody else would have discovered it (or more correctly, perfected it). The momentum of technology would not have been denied.

Several recent developments indicate that we’re on the cusp of another technological wave of advancement. These developments have little to do with online technologies or capabilities. They’re centered on how humans and hardware connect — and it’s impossible to overstate their importance.

The Bottleneck of Our Brains

Over the past two decades, there has been a massive build-up of online capabilities. In this case, what technology has wanted is the digitization of all information. That was Step One. Step Two is to render all that information functional. Step Three will be to make all the functionality personalized. And we’re progressing quite nicely down that path, thank you very much. The rapidly expanding capabilities of online far surpass what we are able to assimilate and use at any one time. All this functionality is still fragmented and is in the process of being developed (one of the reasons I think Facebook is in danger of becoming irrelevant) but it’s there. It’s just a pain in the butt for us to utilize it.

The problem is one of cognition. The brain has two ways to process information, one fast and one slow. The slow way (using the conscious parts of our brain) is tremendously flexible but inefficient. This is the system we’ve largely used to connect online. Everything has to be processed in the form of text, both in terms of output and input, generally through a keyboard and a screen display. It’s the easiest way for us to connect with information, but it’s far from the most efficient way.

The second way is much, much faster. It’s the subconscious processing of our environment that we do every day. It’s what causes us to duck when a ball is thrown at our head, jump out of the way of an oncoming bus, fiercely protect our children and judge the trustworthiness of a complete stranger. If our brains were icebergs, this would be the 90% hidden beneath the water. But we’ve been unable to access most of this inherent efficiency and apply it to our online interactions — until now.

The Importance of Siri and Glass

Say what you want about Mark Zuckerberg, he’s damned smart. That’s why he knew immediately that Google Glass is important.

I don’t know if Google Glass will be a home run for Google. I also don’t know if Siri will ever pay back Apple’s investment in it. But I do know that 30 years from now, they’ll both be considered important milestones. And they’ll be important because they were representative of a sea change in how we connect with information. Both have the potential to unlock the efficiency of the subconscious brain. Siri does it by utilizing our inherent communication abilities and breaking the inefficient link that requires us not only to process our thoughts as language, but also laboriously translate them into keystrokes. In neural terms, this is one of the most inefficient paths imaginable.

But if Siri teases us with a potentially more efficient path, Google Glass introduces a new, mind-blowing scenario of what might be possible. To parse environmental cues and stream information directly into our visual cortex in real time, creating a direct link with all that pent-up functionality that lives “in the cloud,” wipes away most of the inefficiency of our current connection paradigm.

Don’t think of the current implementation that Google is publicizing. Think beyond that to a much more elegant link between the vast capabilities of a digitized world and our own inner consciousness. Whatever Glass and Siri (and their competitors) eventually evolve into in the next decade or so, they will be far beyond what we’re considering today.

With the humanization of these interfaces, a potentially dark side effect will take place. These interfaces will become hardwired into our behavior strategies. Now, because our online interactions are largely processed at a conscious level, the brain tends to maintain maximum flexibility regarding the routines it uses. But as we access subconscious levels of processing with new interface opportunities, the brain will embed these at a similarly subconscious level. They will become habitual, playing out without conscious intervention. It’s the only way the brain can maximize its efficiency. When this happens, we will become dependent on these technological interfaces. It’s the price we’ll pay for the increased efficiency.

McLuhan 50 Years Later

First published December 20, 2012 in Mediapost’s Search Insider

My daughter, who is in her senior year of high school, recently wrote an essay on Marshall McLuhan. She asked me to give my thoughts on McLuhan’s theories of media. To be honest, I hadn’t given McLuhan much thought since my college days, when I had packed away “Understanding Media: The Extensions of Man” for what I thought would likely be forever. I always found the title ironic. This book does many things, but promoting “understanding” is not one of them. It’s one of the more incomprehensible texts I’ve ever encountered.

My daughter’s essay caused me to dig up my half-formed understanding of what McLuhan was trying to say. I also tried to update that understanding from the early ‘60s, when it was written, to a half-century later, in the world we currently live in.

Consider this passage from McLuhan, written exactly 50 years ago: “The next medium, whatever it is—it may be the extension of consciousness—will include television as its content, not as its environment, and will transform television into an art form. A computer as a research and communication instrument could enhance retrieval, obsolesce mass library organization, retrieve the individual’s encyclopedic function and flip into a private line to speedily tailored data of a saleable kind.”

(See, I told you it was incomprehensible!)

The key thing to understand here is that McLuhan foretold something that I believe is unfolding before our eyes: The media we interact with are changing our patterns of cognition – not the message, but the medium itself. We are changing how we think. And that, in turn, is changing our society. While we focus on the messages we receive, we fail to notice that the ways we receive those messages are changing everything we know, forever. Twitter, Facebook, Google, the Xbox and YouTube – all are co-conspirators in a wholesale rewiring of our world.

Now, to borrow from McLuhan’s own terminology, no one in our Global Village could ignore the horrific unfolding of events in Connecticut last week. But the channels we received the content through also affected our intellectual and visceral connection with that content. Watching parents search desperately for their children on television was a very different experience from catching the latest CNN update delivered via my iPhone.

When we watched through “hot” media, we connected at an immediate and emotional level. When the message was delivered through “cool” media, we stood somewhat apart, framing the messaging and interpreting it, abstracted at some length from the sights and sounds of what was unfolding. Because of the emotional connection afforded by the “hot” media, the terror of Newtown was also our own.

McLuhan foretold this as well: “Unless aware of this dynamic, we shall at once move into a phase of panic terrors, exactly befitting a small world of tribal drums, total interdependence, and superimposed co-existence. […] Terror is the normal state of any oral society, for in it everything affects everything all the time.”

My daughter is graduating next June. The world she will inherit will bear little resemblance to the one I stepped into, fresh from my own graduation in 1979. It is smaller, faster, more connected and, in many ways, more terrifying. But, has the world changed as much as it seems, or is it just the way we perceive that world? And, in that perception, are we the ones unleashing the change?

The “Savanna” Hypothesis of Online Design

First published December 6, 2012 in Mediapost’s Search Insider

I’m currently reading a fascinating paper titled “Evolved Responses to Landscapes” by Gordon Orians and Judith Heerwagen that was written back in 1992. The objective was to see if humans have an evolved preference for an ideal habitat. The researchers called their hunch the Savanna Hypothesis, noting that because Homo sapiens spent much of our evolutionary history on the plains of tropical Africa, we should have a natural affinity for this type of landscape.

Your typical savanna features some cover from vegetation and trees, but not so much that natural predators could advance unnoticed. The environment should offer enough lushness to indicate the presence of ample food and water. It should allow for easy mobility. And it should be visually intriguing, encouraging us to venture out and explore our habitat.

Here’s a quote from the paper: “Landscapes that aid and encourage exploration, wayfinding and information processing should be more favored than landscapes that impede these needs.”

The researchers, after showing participants hundreds of pictures of different landscapes, found significant support for their hypothesis. Most of us have a preference for landscapes that resemble our evolutionary origin. And the younger we are, the more predictable the preference. With age, we tend to adapt to where we live and develop a preference for it.

In reading this study, I couldn’t help but equate it to Pirolli and Card’s Information Foraging Theory. The two PARC researchers said that the strategies we use to hunt for information in a hyperlinked digital format (such as a webpage) seem to correspond to evolved optimal foraging strategies used by many species, including humans back in our hunting and foraging days. If, as Pirolli and Card theorized, we borrow inherent strategies for foraging and adapt them for new purposes, like looking for information, why wouldn’t we also apply evolved environmental preferences to new experiences, like the design of a Web page?

Consider the description of an ideal habitat quoted above. We want to be able to quickly determine our navigation options, with just a teaser of things still to explore. We want open space, so we can quickly survey our options, but we also want the promise of abundant rewards, either in the form of food and sustenance — or, in the online case, information and utility. After all, what is a website but another environment to navigate?

I find the idea of creating a home page design that incorporates a liberal dose of intrigue and promise particularly compelling. In a physical space, such an invitation may take the form of a road or pathway curving behind some trees or over a gentle rise. Who can resist such an invitation to explore just a little further?

Why should we take the same approach with a home page or landing page? Orians and Heerwagen explain that we tend to “way-find” through new environments in three distinct stages: First, we quickly scan the environment to decide if it’s even worth exploring. Do we stay here or move on to another, more hospitable location? This very quick scan really frames all the interactions that take place after it. After this “go/no-go” scan, we then start surveying the environment to gather information and find the most promising path to take. The final phase — true engagement with our surroundings — is when we decide to stay put and get some things done.

Coincidentally (or not?), I have found users take a very similar approach to evaluating a webpage. We’ve even codified this behavior as a usability best practice we call the “3 Scan Rule.” The first scan is to determine the promise of the page. Is it visually appealing? Is it relevant? Is it user-friendly? All these questions should be answerable in one second or less. In fact, a study at Carleton University found that we can reliably judge the aesthetic appeal of a website in as short a span as 50 milliseconds. That’s less time than it takes to blink your eye.

The second scan is to determine the best path. This typically involves exploring the primary navigation options, scanning graphics and headings and quickly looking at bullet lists to determine how “rich” the page is. Is it relevant to our intent? Does it look like there’s sufficient content for us to invest our time? Are there compelling navigation options that offer us more? This scan should take no more than 10 seconds.

Finally, there’s the in-depth scan. It’s here where we more deeply engage with the content. This can take anywhere from several seconds to several minutes.

At this point, the connection between the inherently pleasing characteristics of the African savanna and a well-designed website is no more than a hypothesis on my part. But I have to admit: I find the concept intriguing, like a half-obscured pathway disappearing over a swell on the horizon, waiting to be explored.

Pursuing the Unlaunched Search

First published November 29, 2012 in Mediapost’s Search Insider

Google’s doing an experiment. Eight times a day, at random, 150 people get an alert on their smartphones, and Google asks them this question: “What did you want to know recently?” The goal? To find out all the things you never thought to ask Google about.

This is a big step for Google. It moves search into a whole new arena. It’s shifting the paradigm from explicit searching to implicit searching. And that’s important for all of the following reasons:

Search is becoming more contextually sensitive. Mobile search is contextually sensitive search. If you have your calendar, your to-do list, your past activities and a host of other information all stored on a device that knows where you are, it becomes much easier to guess what you might be interested in. Let’s say, for example, that your calendar has “Date with Julie” entered at 7 p.m., and you’re downtown. In the past year, 57% of your “dates with Julie” have generally involved dinner and a movie. You usually spend between $50 and $85 on dinner, and your movies of choice generally vacillate between rom-coms and action-adventures (depending on who gets to choose).

In this scenario, without waiting for you to ask, Google could probably be reasonably safe in suggesting local restaurants that match your preferences and price ranges, showing you any relevant specials or coupons, and giving you the line-up of suggested movies playing at local theatres. Oh, and by the way, you’re out of milk and it’s on sale at the grocery store on the way home.
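A scenario like this is easy to sketch in code. Everything below — the field names, the rules, the values — is invented for illustration; a real engine would infer these signals statistically rather than hard-code them:

```python
from dataclasses import dataclass, field

# A toy context object: hypothetical stand-ins for the signals
# a phone already holds about its owner.
@dataclass
class Context:
    calendar_event: str                 # e.g. "Date with Julie"
    location: str                       # e.g. "downtown"
    typical_spend: tuple                # past dinner budget, e.g. (50, 85)
    preferred_genres: list = field(default_factory=list)
    shopping_list: list = field(default_factory=list)

def implicit_suggestions(ctx: Context) -> list:
    """Propose results before the user asks -- a rule-based stand-in
    for the predictive models a real search engine would use."""
    suggestions = []
    if "date" in ctx.calendar_event.lower():
        lo, hi = ctx.typical_spend
        suggestions.append(f"Restaurants near {ctx.location}, ${lo}-${hi} for two")
        for genre in ctx.preferred_genres:
            suggestions.append(f"Tonight's {genre} showtimes near {ctx.location}")
    for item in ctx.shopping_list:
        suggestions.append(f"{item} is on sale at a store on your route home")
    return suggestions

ctx = Context("Date with Julie", "downtown", (50, 85),
              ["rom-com", "action-adventure"], ["milk"])
for s in implicit_suggestions(ctx):
    print(s)
```

The hard part, of course, isn’t the lookup — it’s knowing which rules apply right now, which is why context matters so much.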

Can Googling become implicit? “We’ve often said the perfect search engine will provide you with exactly what you need to know at exactly the right moment, potentially without you having to ask for it,” says Google Lead Experience Designer Jon Wiley, one of the leads of the research experiment.

As our devices know more about us, the act of Googling may move from a conscious act to a subliminal suggestion. The advantage, for Google and us, is that it can provide us with information we never thought to ask for.  In the ideal state envisioned by Google, it can read the cues of our current state and scour its index of information to provide relevant options. Let’s say we just bought a bookcase from Ikea. Without asking, Google can download the user’s manual and pull relevant posts from user support forums.

It ingrains the Google habit. Google is currently in the enviable position of having become a habit. We don’t think to use Google, we just do. Of course, habits can be broken. Habits are subconscious scripts that play out in a familiar environment, delivering an expected outcome without conscious intervention. To break a habit, you usually look at disrupting the environment, stopping the script before it has a chance to play out.

The environment of search is currently changing dramatically. This raises the possibility of the breaking of the Google habit. If our habits suddenly find themselves in unfamiliar territory, the regular scripts are blocked and we’re forced to think our way through the situation.

But if Google can adapt to unfamiliar environments and prompt us with relevant information without us having to give it any thought, the company not only preserves the Google habit but ingrains it even more deeply. Good news for Google, bad news for Bing and other competitors.

It expands Google’s online landscape. Finally, at this point, Google’s best opportunity for a sustainable revenue channel is to monetize search. As long as Google controls our primary engagement point with online information, it has no shortage of monetization opportunities. By moving away from waiting for a query and toward proactive serving of information, Google can exponentially expand the number of potential touch points with users. Each of these touch points comes with another advertising opportunity.

All this is potentially ground-breaking, but it’s not new. Microsoft was talking about Implicit Querying a decade ago. It was supposed to be built into Windows Vista. At that time, it was bound to the desktop. But now, in a more mobile world, the implications of implicit searching are potentially massive.

The Balancing of Market Information

First published October 25, 2012 in Mediapost’s Search Insider

In my three previous columns on disintermediation, I made a rather large assumption: that the market will continue to see a balancing of information available both to buyers and sellers. As this information becomes more available, the need for the “middle” will decrease.

Information Asymmetry Defined

Let’s begin by exploring the concept of information asymmetry, courtesy of George Akerlof, Michael Spence and Joseph Stiglitz.  In markets where access to information is unbalanced, bad things can happen.

If the buyer has more information than the seller, then we can have something called adverse selection. Take life and health insurance, for example. Smokers (on average) get sick more often and die younger than non-smokers. If 50% of an insurance company’s policyholders are smokers and 50% aren’t, but the company is not allowed to know which is which, it has a problem with adverse selection. It will lose money on the smokers, so it will increase rates across the board. The problem is that non-smokers, who don’t use insurance as much, will get angry and may cancel their policies. This will mean the “book of business” will become even less profitable, driving rates even higher. The solution, which we all know, is simple: Ask policy applicants if they smoke. Imperfect information is thus balanced out.
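The spiral can be followed with back-of-the-envelope numbers. Every figure below is invented for illustration — the claim costs, the pool sizes, the assumption that 30% of overcharged non-smokers cancel each year — but the direction of the result is the point:

```python
def premium_path(years, smokers=500, nonsmokers=500,
                 smoker_cost=2000.0, nonsmoker_cost=1000.0):
    """Premium each year when an insurer must pool smokers and
    non-smokers blindly, and overcharged non-smokers keep leaving."""
    premiums = []
    for _ in range(years):
        pool = smokers + nonsmokers
        avg_cost = (smokers * smoker_cost + nonsmokers * nonsmoker_cost) / pool
        premiums.append(avg_cost * 1.10)    # 10% margin over expected claims
        nonsmokers = int(nonsmokers * 0.7)  # 30% of non-smokers cancel
    return premiums

for year, p in enumerate(premium_path(3), start=1):
    print(f"Year {year}: premium = ${p:,.0f}")
```

Run it and the premium ratchets upward every year: as non-smokers exit, the remaining pool skews toward smokers, which raises the average cost, which raises the premium, which drives out more non-smokers.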

If the seller has more information than the buyer, then we have a “market for lemons” (the name of Akerlof’s paper). Here, buyers are assuming risk in a purchase without knowingly accepting that risk, because they’re unaware of the problems that the seller knows exist. Think about buying a used car without the benefit of an inspection, past maintenance records or any type of independent certification. All you know is what you can see by looking at the car on the lot. The seller, on the other hand, knows the exact mechanical condition of the car. This factor tends to drive down the prices of all products — even the good ones — in the market, because buyers assume quality will be suspect. The balancing of information in this case helps eliminate the lemons and has the long-term effect of improving the average quality of all products on the market.

Getting to Know You…

These two forces — the need for sellers to know more about their buyers, and the need for buyers to know more about what they’re buying — are driving a tremendous amount of information-gathering and dissemination. On the seller’s side, behavioral tracking and customer screening are giving companies an intimate glimpse into our personal lives. On the buyer’s side, access to consumer reviews, third-party evaluations and buyer forums is helping us steer clear of lemons. Both are being facilitated through technology.

But how does disintermediation impact information asymmetry, or vice versa?

If we didn’t have adequate information, we needed some other safeguard against being taken advantage of. So, failing a rational answer to this particular market dilemma, we found an irrational one: We relied on gut instinct.

Relying on Relationships

If we had to place our trust in someone, it had to be someone we could look in the eye during the transaction. The middle was composed of individuals who acted as the face of the market. Because they lived in the same communities as their customers, went to the same churches, and had kids that went to the same schools, they had to respect their markets. If they didn’t, they’d be run out of town. Often, their loyalties were also in the middle, balanced somewhere between their suppliers and their customers.

In the absence of perfect information, we relied on relationships. Now, as information improves, we still want relationships, because that’s what we’ve come to expect. We want the best of both worlds.

A Decade with the Database of Intentions

First published September 27, 2012 in Mediapost’s Search Insider

It’s been over 10 years since John Battelle first started considering what he called the “Database of Intentions.” It was, and is:

The aggregate results of every search ever entered, every result list ever tendered, and every path taken as a result. It lives in many places, but three or four places in particular hold a massive amount of this data (ie MSN, Google, and Yahoo). This information represents, in aggregate form, a placeholder for the intentions of humankind – a massive database of desires, needs, wants, and likes that can be discovered, subpoenaed, archived, tracked, and exploited to all sorts of ends. Such a beast has never before existed in the history of culture, but is almost guaranteed to grow exponentially from this day forward. This artifact can tell us extraordinary things about who we are and what we want as a culture. And it has the potential to be abused in equally extraordinary fashion.

When Battelle considered the implications, it overwhelmed him. “Once I grokked this idea (late 2001/early 2002), my head began to hurt.” Yet, for all its promise, marketers have only marginally leveraged the Database of Intentions.

In the intervening time, the possibilities of the Database of Intentions have not diminished. In fact, they have grown exponentially:

My mistake in 2003 was to assume that the entire Database of Intentions was created through our interactions with traditional web search. I no longer believe this to be true. In the past five or so years, we’ve seen “eruptions” of entirely new fields, each of which, I believe, represent equally powerful signals – oxygen flows around which massive ecosystems are already developing. In fact, the interplay of all of these signals (plus future ones) represents no less than the sum of our economic and cultural potential.

Sharing Battelle’s predilection for “Holy Sh*t” moments, a post by MediaPost’s Laurie Sullivan this Tuesday got me thinking again about Battelle’s “DBoI.” A recent study by Google and EA showed that search data can predict video game sales with 84% accuracy. But the data used in the prediction is only scratching the surface of what’s possible. Adam Stewart from Google hints at what lies ahead: “Aside from searches, Google plans to build in game quality, TV investment, online display investment, and social buzz to create a multivariate model for future analysis.”

This is very doable stuff. Everything we need to create predictive models that match (and probably far exceed) this degree of accuracy is already available. The data is just sitting there, waiting to be interpreted. The implications for marketing are staggering, but to Battelle’s point, let’s not be too quick to corral this simply for the use of marketers. The DBoI has implications that reach into every aspect of our society and lives. This is big — really big! If that sounds unduly ominous to you, let me give you a few reasons why you should be more worried than you are.

To predict patterns in human behavior, we typically need two sources of signals. One comes from an understanding of how humans act. As we speak, this problem is being attacked on multiple fronts: neuroscience, behavioral economics, evolutionary psychology and a number of other disciplines are rapidly converging on a vastly improved understanding of what makes us tick. From this base understanding, we can then derive hypotheses of predicted behaviors in any number of circumstances.

This brings us to the other source of behavior signals. If we have a hypothesis, we need some way to scientifically test it. Large-scale collections of human behavioral data allow us to search for patterns and identify underlying causes, which can then serve as predictive signals for future scenarios. The Database of Intentions gives us a massive source of behavior signals that capture every dimension of societal activity. We can test our hypotheses quickly and accurately against the tableau of all online activity, looking for the underlying influences that drive behaviors.

At the intersection of these two is something of tremendous import. We can start predicting human behavior on a massive scale, with unprecedented accuracy. With each prediction, the feedback loop between qualitative prediction and quantitative verification becomes faster and more efficient. Throw a little processing power at it and we suddenly have an artificially intelligent, self-improving predictive model that will tell us, with startling accuracy, what we’re likely to do in the future.

This ain’t just about selling video games, people. This is a much, much, much bigger deal.

A Look at the Future through Google Glasses?

First published June 7, 2012 in Mediapost’s Search Insider

“A wealth of information creates a poverty of attention.” — Herbert Simon

Last week, I explored the dark recesses of the hyper-secret Google X project.  Two X Projects in particular seem poised to change our world in very fundamental ways: Google’s Project Glass and the “Web of Things.”

Let’s start with Project Glass. In a video entitled “One Day…,” the future seen through the rose-colored hue of Google Glasses seems utopian, to say the least. In the video, we step into the starring role, strolling through our lives while our connected Google Glasses feed us a steady stream of information and communication — a real-time connection between our physical world and the virtual one.

In theory, this seems amazing. Who wouldn’t want to have the world’s sum total of information available instantly, just a flick of the eye away?

Couple this with the “Web of Things,” another project said to be in the Google X portfolio.  In the Web of Things, everything is connected digitally. Wearable technology, smart appliances, instantly findable objects — our world becomes a completely inventoried, categorized and communicative environment.

Information architecture expert Peter Morville explored this in his book “Ambient Findability.” But he cautions that things may not be as rosy as they seem after drinking the Google X Kool-Aid. This excerpt is from a post he wrote on Ambient Findability: “As information becomes increasingly disembodied and pervasive, we run the risk of losing our sense of wonder at the richness of human communication.”

And this brings us back to the Herbert Simon quote — knowing and thinking are not the same thing. Our brains were not built on the assumption that all the information we need is instantly accessible. And, if that does become the case through advances in technology, it’s not at all clear what the impact on our ability to think might be. Nicholas Carr, for one, believes that the Internet may have the long-term effect of actually making us less intelligent. And there’s empirical evidence he might be right.

In his book “Thinking, Fast and Slow,” Nobel laureate Daniel Kahneman says that while we have the ability to make intuitive decisions in milliseconds (Malcolm Gladwell explored this in “Blink”), humans also have a nasty habit of using these “fast” mental shortcuts too often, relying on gut calls that are often wrong (or, at the very least, biased) when we should be using the more effortful “slow” and rational capabilities that tend to live in the frontal part of our brain. We rely on beliefs, instincts and habits, at the expense of thinking. Call it informational instant gratification.

Kahneman recounts a seminal study in psychology, where four-year-old children were given a choice: they could have one Oreo immediately, or wait 15 minutes (in a room with the offered Oreo in front of them, with no other distractions) and have two Oreos. About half of the children managed to wait the 15 minutes. But it was the follow-up study, where the researchers followed what happened to the children 10 to 15 years later, that yielded the fascinating finding:

“A large gap had opened between those who had resisted temptation and those who had not. The resisters had higher measures of executive control in cognitive tasks, and especially the ability to reallocate their attention effectively. As young adults, they were less likely to take drugs. A significant difference in intellectual aptitude emerged: the children who had shown more self-control as four year olds had substantially higher scores on tests of intelligence.”

If this is true for Oreos, might it also be true for information? If we become a society that expects to have all things at our fingertips, will we lose the “executive control” required to actually think about things? Wouldn’t it be ironic if Google, in fulfilling its mission to “organize the world’s information,” inadvertently transgressed against its other mission, “don’t be evil,” by making us all attention-deficit, intellectually diminished, morally bankrupt dough heads?

The “Field of Dreams” Dilemma

First published May 3, 2012 in Mediapost’s Search Insider

There’s a chicken and an egg paradox in mobile marketing. Many mobile sites sit moldering in the online wilderness, attracting few to no visitors. The same could be said for many elaborate online customer portals, social media outposts or online communities. Somebody went to the trouble to build them, but no one came. Why?

Well, it could be because no one thinks to go to the trouble to look for them, just as no one expects to find a ball diamond in the middle of an Iowa cornfield. It wasn’t until the ghosts of eight Chicago White Sox players, banned for life from playing the game they loved, started playing on the “Field of Dreams” that anyone bothered to drive to Ray Kinsella’s farm.  There was suddenly a reason to go.

The problem with many out-of-the-way online destinations is that there is no good reason to go. Because of this, we make two assumptions:

– If there is no good reason for a destination to exist, then the destination probably doesn’t exist. Or,

– If it does exist, it will be a waste of time and energy to visit.

If we jump to either of these two conclusions, we don’t bother looking for the destination. We won’t make the investment required to explore and evaluate. You see, there is a built-in mechanism that makes a “Build it and they will come” strategy a risky bet.

This built-in mechanism comes from behavioral ecology and is called the “marginal value theorem.” It was first identified by Eric Charnov in 1976 and has since been borrowed to explain behaviors in online information foraging by Peter Pirolli, amongst others. The idea behind it is simple: We will only invest the time and effort to find a new “patch” of online information if we think it’s better than “patches” we already know exist and are easy to navigate to.  In other words, we’re pretty lazy and won’t make any unnecessary trips.

This cost/benefit calculation is done largely at a subconscious level and will dictate our online behaviors. It’s not that we make a conscious decision not to look for new mobile sites or social destinations. But unbeknownst to us, our brain is already passing value judgments that will tend to keep us going down well-worn paths. So, if we are looking for information or functionality that we would be unlikely to find in a mobile site or app, but we know of a website that has just what we’re looking for and time is not urgent, we’ll wait until we’re in front of our regular computer to do the research. We automatically disqualify the mobile opportunity because our “marginal value” threshold has not been met.
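The subconscious threshold described above can be sketched as a one-line rule. This is only an illustration of the comparison, not Charnov’s or Pirolli’s actual models, and all the payoff values are invented:

```python
def worth_exploring(expected_payoff, exploration_cost, known_patch_payoff):
    """Explore a new information 'patch' only if its expected payoff,
    net of the cost of finding and evaluating it, beats what a familiar
    patch already delivers -- a rough marginal-value-style rule."""
    return expected_payoff - exploration_cost > known_patch_payoff

# A mobile site we doubt is any good: modest payoff, real search cost,
# and a known website that already serves the need well.
skip_it = worth_exploring(expected_payoff=3, exploration_cost=2,
                          known_patch_payoff=5)

# The same destination after "critical mass" has enriched the patch.
go_look = worth_exploring(expected_payoff=9, exploration_cost=2,
                          known_patch_payoff=5)
```

Nothing about the familiar patch changed between the two calls; only the expected richness of the new one did, which is exactly why a “build it and they will come” strategy fails until expectations are reset.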

The same is true for social sites. If we believe that there is a compelling reason to seek out a Facebook page (promotional offers, information not available elsewhere) then we’ll go to the trouble to track it down. Otherwise, we’ll stick to destinations we know.

I believe the marginal value theorem plays an important role in defining the scope of our online worlds. We only explore new territory when we feel our needs won’t be met by destinations we already know and are comfortable with.  And if we rule out entire categories of content or functionality as being unlikely to adapt well to a mobile or social environment (B2B research in complex sales scenarios being one example) then we won’t go to the trouble to look for them.

I should finish off by saying that this is a moving target. Once there is enough critical mass in new online territory to reset visitor expectations, you’ve increased the “richness” of the patch to the point where the “marginal value” conditions are met and the brain decides it’s worth a small investment of time and energy.

In other words, if Shoeless Joe Jackson, Chick Gandil, Eddie Cicotte, Lefty Williams, Happy Felsch, Swede Risberg, Buck Weaver and Fred McMullin all start playing baseball in a cornfield, then it’s probably worth hopping on the tractor and head’n over to the Kinsella place!