Wired for Information: A Brain Built to Google

First published August 26, 2010 in Mediapost’s Search Insider

In my last Search Insider, I took you on a neurological tour that gave us a glimpse into how our brains are built to read. Today, let’s dig deeper into how our brains guide us through an online hunt for information.

Brain Scans and Searching

First, a recap. In Nicholas Carr’s book, “The Shallows: What the Internet Is Doing to Our Brains,” I focused on one passage — and one concept — in particular. It’s likely that our brains have built a short cut for reading. The normal translation from a printed word to a concept usually requires multiple mental steps. But because we read so much, and run across some words frequently, it’s probable that our brains have built short cuts to help us recognize those words simply by their shape in mere milliseconds, instantly connecting us with the relevant concept. So, let’s hold that thought for a moment.

The Semel Institute at UCLA recently did a neuroscanning study that monitored what parts of the brain lit up during the act of using a search engine online. What the institute found was that when we become comfortable with the act of searching, our brains become more active. Specifically, the prefrontal cortex, the language centers and the visual cortex all “light up” during the act of searching, as well as some sub-cortical areas.

It’s the latter of these that indicates the brain may be using “pre-wired” short cuts to directly connect words and concepts. It’s these sub-cortical areas, including the basal ganglia and the hippocampus, where we keep our neural “short cuts.”  They form the auto-pilot of the brain.

Our Brain’s “Waldo” Search Party

Now, let’s look at another study that may give us another piece of the puzzle in helping us understand how our brain orchestrates the act of searching online.

Dr. Robert Desimone at the McGovern Institute for Brain Research at MIT found that when we look for something specific, we “picture” it in our mind’s eye. This internal visualization in effect “wakes up” our brain and creates a synchronized alarm circuit: a group of neurons that hold the image so that we can instantly recognize it, even in complex surroundings. Think of a “Where’s Waldo” puzzle. Our brain creates a mental image of Waldo, activating a “search party” of Waldo neurons that synchronize their activities, sharpening our ability to pick out Waldo in the picture. The synchronization of neural activity allows these neurons to zero in on one aspect of the picture, in effect making it stand out from the surrounding detail.

Pirolli’s Information Foraging

One last academic reference, and then we’ll bring the pieces together. Peter Pirolli, from Xerox’s PARC, believes we “forage” for information, using the same inherent mechanisms we would use to search for food. So, we hunt for the “scent” of our quarry, but in this case, rather than the smell of food, it’s more likely that we lodge the concept of our objective in our heads. And depending on what that concept is, our brains recruit the relevant neurons to help us pick out the right “scent” quickly from its surroundings.  If our quarry is something visual, like a person or thing, we probably picture it. But if our brain believes we’ll be hunting in a text-heavy environment, we would probably picture the word instead. This is the way the brain primes us for information foraging.

The Googling Brain

This starts to paint a fascinating and complex picture of what our brain might be doing as we use a search engine. First, our brain determines our quarry and starts sending “top down” directives so we can very quickly identify it.  Our visual cortex helps us by literally painting a picture of what we might be looking for. If it’s a word, our brain becomes sensitized to the shape of the word, helping us recognize it instantly without the heavy lifting of lingual interpretation.

Thus primed, we start to scan the search results. This is not reading; this is scanning our environment in mere milliseconds, looking for a scent that may lead the way to our prey. If you’ve ever watched a real-time eye-tracking session with a search engine, this is exactly the behavior you’d see.

When we bring all the pieces together, we realize how instantaneous, primal and intuitive this online foraging is. The slow and rational brain only enters the picture as an afterthought.

Googling is done by instinct. Our eyes and brain are connected by a short cut in which decisions are made subconsciously and within milliseconds. This is the forum in which online success is made or missed.

How Our Brains are Wired to Read

First published August 19, 2010 in Mediapost’s Search Insider

How do we read? How do we take the arbitrary, human-made code that is the written word and translate it into thoughts and images that mean something to our brain, an organ that had its basic wiring designed thousands of generations before the appearance of the first written word? What is going on in your skull right now as your eyes scan the black squiggly lines that make up this column?

The Reading Short Cut

I’m currently reading Nicholas Carr’s “The Shallows: What the Internet is Doing to Our Brains,” a follow-up to Carr’s article in The Atlantic, “Is Google Making Us Stupid?” The concept Carr explores is fascinating to me: the impact of constant online usage on how the neural circuits of our brain are wired.

But there was one quote in particular, from Maryanne Wolf’s book, “Proust and the Squid: The Story and Science of the Reading Brain,” that literally leapt off the page for me: “The accomplished reader, Maryanne Wolf explains, develops specialized brain regions geared to the rapid deciphering of text. The areas are wired ‘to represent the important visual, phonological and semantic information and to retrieve this information at lightning speed.’ The visual cortex, for example, develops ‘a veritable collage’ of neuron assemblies dedicated to recognizing, in a matter of milliseconds, ‘visual images of letters, letter patterns and words.’”

For everyone reading this column today, that is one of the most relevant passages you may ever scan your eyes across. It’s vitally important to digital marketers and designers of online experiences. Humans who read a lot develop the ability to recognize word patterns instantly, without going through the tedious neural heavy lifting of translating the pattern through the language centers of the brain. A quick neurological tour is in order here.

How the Brain Reads

The brain has a habit of developing multiple paths to the same end goal. Many functions that our brain controls tend to have dual routes: a quick and dirty one that rips through the brain at lightning speed and a slower, more rational one. It’s the neural reality behind Malcolm Gladwell’s “Blink.” This dual speed processing is a tremendously efficient way of coping with our environment. The same mechanism, according to Wolf, has been adapted to our interpretation of the written word.

Humans have an evolved capacity for language. Noam Chomsky, Steven Pinker and others have shown convincingly that we come out of the box with inherent capabilities to communicate with each other. But those abilities, housed in the language centers of the brain (Wernicke’s and Broca’s areas, if you’re interested), are limited to oral language. Written language hasn’t been around nearly long enough for evolution’s relatively slow timeline to have had much of an impact. That’s why we learn to speak naturally just by hanging around other humans, but only those with a formalized and structured education learn to read and write. We have to take the native machinery of the brain and force it to adapt to the required task by creating new neural paths.

Instantly Recognizable…

So, when we read a page of text, there’s a fairly complex and laborious process going on in our noggins. Our visual cortex scans the abstract code that is written language, feeds it to the language centers for translation, and then sends it to our prefrontal cortex and our long-term memory to be rendered into concepts that mean something to us. The word “horse” doesn’t really mean the large, hairy, four-legged mammal that we’re familiar with until it goes through this mental processing.

But, like anything that humans do often, we tend to create short cuts through repetition. It’s important to note that this isn’t evolution at work, it’s neuroplasticity. The ability to read and write is built in each human from scratch. The brain naturally tries to achieve maximum efficiency by taking things we do repeatedly and building little synaptic short cuts. Humans who read a lot become wired to recognize certain words just by their shape and appearance, without needing to run the full processing cycle. Your name is a good example. How often have you been reading a newspaper or book and run across your last name? Does it seem to “leap off the page?” That was your brain triggering one of its little short cuts.

So, what does this mean for online interactions, particularly with a search engine? In next week’s column, I’ll revisit a fascinating brain scanning study that was done by UCLA and take a peek at what might be happening under the hood when we launch a Web search.

 

The Jill Hotchkiss Inflection Point

First published July 29, 2010 in Mediapost’s Search Insider

Technology has reached a critical point in the adoption curve. My wife, who is eminently practical and intolerant of anything that smacks of gadgetry, is becoming intrigued by my iPhone. I can’t overstate the importance of this in terms of watershed moments. Steve Jobs, if you can get my wife to buy into your vision, you have crossed the chasm.

There’s something important to note here in attitudes towards technology that we digerati, gathered together on the leading edge of the bell curve, often forget. Technology only becomes important to most people when it lets them do something they care about. For my wife, my gleeful demonstrations of the wonder that is Shazam gained nothing but a prolonged rolling of the eyes. Twitter clients and Facebook apps? Puh-leeze! Redlaser elicited a brief spark of interest, but this quickly passed when she saw the steps she had to take to do any virtual shopping. Even the wonders of the cosmos, conveniently mapped by pUniverse, did not pass the Jill acid test. As long as my app inventory didn’t improve her life in any appreciable way, she remained resolutely unimpressed.

But lately, there have been cracks in the wall of technology defense she has carefully constructed since marrying me. A nifty little app called Mousewait was the first chink. Knowing the wait times in the ride lines on a recent trip to Disneyland was something she cared about. Suddenly, she was asking me to take out the iPhone and check to see how many minutes we’d have to wait at Splash Mountain. Yelp helped us find a reasonable family restaurant in San Diego. And Taxi Magic allowed us to quickly hail a cab in San Francisco.

But the moment I knew the defenses were ready to crumble was when she recently turned to me and said: “So, you can do all that stuff on an iPhone? What other things can you do?”

Aahhh… the door was open, but only a crack. If I’ve learned one thing in 21 years of marriage, I’ve learned to tread slowly when these opportunities present themselves. I had to carefully craft my response. Too much enthusiasm shown at this point could be fatal…

“Huh? What do you mean?”

“On the iPhone… what could you do with it?”

“What could I do with it, or what could you do with it?”

“Me… let’s say.”

And here we come to the crux of the matter. I’m extremely tolerant of technology. I’ll struggle my way through an interface and put up with crappy design simply so I can emerge victorious on top of the early adopter heap, holding my iPhone proudly aloft. At the first inkling of frustration, my wife will turf the thing into the nearest trashcan. If functionality is what you’re looking for, app designers have to provide the shortest possible path from A to B.

If you really want to capture the opportunity that lies at the Jill Hotchkiss inflection point, what you have to do is start providing seamless functionality from app to app. The new iPhone OS is edging down this path by supporting multitasking, but there is still a long way to go before you’ll make my wife truly happy. And that, believe me, is a goal worthy of pursuit.

The Two Meanings of Engagement

Engagement: a betrothal. An exclusive commitment to another preceding marriage.

Engagement: as in an engaging conversation.  Being highly involved in an interaction with something or someone.

The theme of the Business Marketing Association conference I talked about in last week’s column was “Engage.”  At the conference, the word engagement was tossed around more freely than wine and bomboniere at an Italian wedding. Unfortunately, engagement is one of those buzzwords that has ceased to hold much meaning in marketing. The Advertising Research Foundation has gone as far as to try to put engagement forward as the one metric to unite all metrics in marketing, a cross-channel Holy Grail.

But what does engagement really mean? What does it mean to be “engaged?” The problem is that engagement is an ambiguous term with multiple meanings. As I pondered this and discussed it with others, I realized that marketers and customers have two very different definitions of engagement. And therein lies the problem.

The Marketer’s Definition of Engagement

Marketers, whether they want to admit it or not, look at engagement in the traditional matrimonial sense. They want customers to make an exclusive commitment to them, forgoing all others. It’s a pledge of loyalty, a rejection of other suitors, a bond of fidelity. To marketers, engagement is just another word for ownership and control.

When marketers talk about engagement, they envision prospects enthralled with their brands, hanging on every word, eager for every commercial message. They strive for a love that is blind.  Engagement ties up the customer’s intent and “share of wallet.”  Marketers talk about getting closer to the customer, but in all too many cases, it’s to keep tabs on them. For all the talk of engagement, the benefits are largely for the marketer, not the customer.

The Customer’s Definition of Engagement

Customers, on the other hand, define engagement as giving them a reason to care. They define engagement as it would relate to a conversation. Do you give me a reason to keep listening? And are you, in turn, listening to what I have to say? Is there a compelling reason for me to continue the conversation? I will be engaged with you only as long as it suits my needs to do so.  I will give you nothing you haven’t earned.

The engagement of a conversation is directly tied to how personally relevant it is. The topic has to mean something to me. If it’s mildly interesting, my attention will soon drift. But if you’re touching something that is deeply important to me, you will have my undivided attention for as long as you need it. That is engagement from the other side of the table.

So, as we talk about engagement at a marketing conference, let’s first agree on a definition of engagement. And let’s be honest about what our expectations are. Because I suspect marketers and customers are looking at different pages of the dictionary.

Our Indelible Lives

First published June 3, 2010 in Mediapost’s Search Insider

It’s been a fascinating week for me. First, it was off to lovely Muncie, Ind. to meet with the group at the Center for Media Design at Ball State University. Then, it was to Chicago for the National Business Marketing Association Conference, where I was fortunate enough to be on a panel about what the B2B marketplace might look like in the near future. There was plenty of column fodder from both visits, but this week, I’ll give the nod to Ball State, simply because that visit came first.

Our Digital Footprints

Mike Bloxham, Michelle Prieb and Jen Milks (the last two joined us for our most recent Search Insider Summit) were gracious hosts, and, as with last week (when I was in Germany) I had the chance to participate in a truly fascinating conversation that I wanted to share with you. We talked about the fact that this generation will be the first to leave a permanent digital footprint. Mike Bloxham called it the Indelible Generation. That title is more than just a bon mot (being British, Mike is prone to pithy observations) — it’s a telling comment about a fundamental aspect of our new society.

Imagine some far-in-the-future anthropologist recreating our culture. Up to this point in our history, the recorded narrative of any society came from a small sliver of the population. Only the wealthiest or most learned received the honor of being chronicled in any way. Average folks spent their time on this planet with nary a whisper of their lives recorded for posterity. They passed on without leaving a footprint.

Explicit and Implicit Content Creation

But today — or if not today, certainly tomorrow — all of us will leave behind a rather large digital footprint. We will leave in our wake emails, tweets, blog posts and Facebook pages. And that’s just the content we knowingly create. There’s a lot of data generated by each of us that’s simply a byproduct of our online activities and intentions. Consider, for example, our search history. Search is a unique online beast because it tends to be the thread we use to stitch together our digital lives. Each of us leaves a narrative written in search interactions that provides a frighteningly revealing glimpse into our fleeting interests, needs and passions.

 Of course, not all this data gets permanently recorded. Privacy concerns mean that search logs, for example, get scrubbed at regular intervals. But even with all that, we leave behind more data about who we were, what we cared about and what thoughts passed through our minds than any previous generation. Whether it’s personally identifiable or aggregated and anonymized, we will all leave behind footprints.

Privacy? What Privacy?

Currently we’re struggling with this paradigm shift and its implications for our privacy. I believe in time — not that much time — we’ll simply grow to accept this archiving of our lives as the new normal, and won’t give it a second thought. We will trade personal information in return for new abilities, opportunities and entertainment. We will grow more comfortable with being the Indelible Generation.

Of course, I could be wrong. Perhaps we’ll trigger a revolt against the surrender of our secrets. Either way, we live in a new world, one where we’re always being watched. The story of how we deal with that fact is still to be written.

Google vs Apple: an Open and Closed Case

First published May 27, 2010 in Mediapost’s Search Insider

Yesterday, I was eavesdropping on a debate about open-source vs. closed systems. I found the debate fascinating because two of the most important contributors to what our search experience might look like live at opposite ends of this debate. Apple is adamant about locking down every aspect of the user experience. Google wants to open it up to any and all comers. The third player, Microsoft, sits somewhere in between. The debate was about who might prevail. I was uncharacteristically silent during all this, because I had to think about it before throwing in my two cents. Now, 24 hours later, it’s time to toss in my ante.

In theory, open source should win hands down. The open environment allows a cooperative ecosystem to evolve, guaranteeing a rate of innovation simply not possible in a closed system. But I think it depends on where we are in the maturity of the market. Open source allows for more innovation, but it’s also an open invitation for more things to go wrong. This can be deadly as you try to push along market adoption.

Apple Closes the Loop

There is a reason why Apple is the darling of the early adopter. The company insists on things working. And you can only do this when you can lock down each and every aspect of the user experience. If there’s one thing Apple understands at its core (sorry, couldn’t resist), it’s how to make a user happy. The Jobs BHAG of creating “insanely great” products only works if all that insanity leads to an expected end result. And I challenge anyone who’s used both a Mac and a Windows box to tell me that the Apple user experience isn’t more refined, more elegant and more delightful.

In the early days of market adoption, this stuff is important. You don’t want to drop way more cash than you should on a new tech-toy only to find the interface is clunky, amateurish and full of glitches. With Apple’s meticulous attention to detail, you know that whatever is available on your new iToy will work near-flawlessly. Sure, the code-police from Cupertino are overly dictatorial, which isn’t winning them any friends in the programming community, but the apps that are the end result are ridiculously simple to use and frequently beautiful to look at.

Google’s UX Challenges

Now, look at Google. I tried to find a polite way to say this, but couldn’t, so I’ll just lay it on the table: Google sucks at interface design. For years we’ve been lauding the simple, spartan look of Google search. The fact is, simple was all we needed for an ordered list of text results. Google’s algorithm provided enough power in the backend to make up for an anemic interface. But today, now that everyone’s caught up in the algo department, Google’s interface looks like a Grade 8 coding project. The new three-column search format follows in the footsteps of Gmail, Google Docs, Google Calendars and most other Google interfaces: it looks like it was designed by an engineer.

In my company, we tried to move to Google’s suite of tools, expecting that an open-source environment would mean more rapid innovation. Well, that, and the price was hard to argue with. But the fact is, everyone on our team is completely fed up with clunky Google interfaces that seem full of quirks. It doesn’t feel like we’re using leading-edge innovation; it feels like we’re using freeware. And I, for one, expect more from Google.

Google … Give me that GUI Feeling!

That’s the problem with open source early in the market adoption model. There’s not enough maturity in the market to force developers to worry about nuance. User experience is considered the polish — the last thing to be applied. You can’t lock down all the details needed to guarantee a consistently acceptable user experience.

I still have tremendous respect for the innovation engine that sits at the heart of Google, but if I had one piece of advice to pass along, it would be this: Worry less about changing the world, and more about polishing up the Gmail interface. You can always change the world tomorrow, but today I’d like to retrieve my email from something that doesn’t look like a dog’s breakfast.

The Human-Technology Connection: Enabling Change

First published May 6, 2010 in Mediapost’s Search Insider

Aaron Goldman scooped my column on Apple, Siri and search (although, looking at the column, I think I can claim partial authorship) so I’m going to broaden the lens a little bit. This is a theme I’ve discussed in a number of recent presentations, as well as at least one prior column, and I think it touches on why the news from Apple and Siri is potentially so important.

Humans Will Be Human

I’ve said before that “technology doesn’t cause our behaviors to change, it enables our behaviors to change.” The difference is subtle but profound. Let me give you an example.

I recently moderated a panel discussion on social media in the B2B marketplace. One by one, the panelists marched out their supporting evidence (14 zillion people access Facebook every 12 seconds, that sort of thing) and their own opinions. The consensus was: things have changed. Indeed, they have. But at the top of the session, I said this wasn’t about technology, this was about people. And people are social animals. We follow the herd, and more importantly, we communicate with the herd. One could feel the “Groundswell” (a pun and plug in one!) literally surging through the room.

At the end, we turned to the audience for Q&A. A middle-aged woman, definitely falling on the Digital Immigrant side of the tech-savvy divide, stood up and called the entire panel out: “I don’t buy it. I don’t buy all this technology is making us more connected. I haven’t seen any evidence of it. In fact, I’ve seen the opposite. I’m a professional recruiter and I can’t get a candidate to pick up the phone and talk to me. I need to get to know them and I can’t do that through an email. I need to have a conversation. I think technology is isolating us, not connecting us.”

It’s All About Options

The panelists pointed out the generational differences between her and her candidates, saying that this could be the cause of the change of behavior. But I wanted to probe a little deeper, because I wasn’t so sure technology was the culprit here:  “I suspect that when you’re recruiting, your motivation to connect with a candidate is not always the same as their motivation to contact you,” I said.

“It’s your job and top of mind, but for them, you’re just an interruption in what they were already doing. They may not be ready to have a chat with you,” I continued. “Twenty-five years ago, when we were starting our careers, the phone was the only choice for instantaneous, ‘at-a-distance’ communication. But now, we have many choices, thanks to technology. So, they have options and they’re picking the one that’s appropriate. They’re time-shifting the interruption to a time more convenient, when they’re more motivated to contact you. I suspect that if we had that choice 25 years ago, we would have done the same thing. Technology hasn’t changed us, it’s just given us more options to do the things we really want to do.”

The Human Act of Searching

So why is that important for Siri, Apple and Search? Well, just as we had to adapt to the phone as an instant communication channel, we’ve had to adapt to the interface that search gave us to seek information. Let’s face it; typing words into a box is not the way we evolved to communicate. We talk. We touch. We listen. We see. We’ve had to adapt to a non-organic, structured format — 10 blue links in a list — because we had no choice. It was all the technology would allow at the time.

Also, separating the acts of retrieving information and doing something with the information is not natural for us either. We’re used to a tighter connection between the two. Information is seldom an end point. Doing something with the information is a much more common objective.  But up to now, search could only really act as an information retrieval tool.  It was powerful, and we adapted quickly because we recognized the power, but it wasn’t natural.

But look at what Siri and Apple are trying to do: On this platform, search is asking for something, getting it and immediately doing something with it. Sound familiar? It should. It’s what we’ve done for most of our history as humans. And that’s what technology, at its best, should do: give us more ways to be human.

Human Irrationality Online

Last week, I talked about the work of Daniel Kahneman, Amos Tversky, Herbert Simon and George Akerlof, key figures in helping define the foundations of consumer behavior, both rational and irrational, that dictate the realities of the marketplace. Today, I want to talk about how these emotional and cognitive biases and limitations play out online, but first, a quick recap is in order:

Prospect Theory – The role of psychological framing and emotional biases in determining human behavior in risky economic decisions. For example, how we’re more sensitive about loss than we are about gain.

Bounded Rationality – How we cannot endlessly consider all alternatives for the optimal behavior, but rather rely on “gut instincts” to help sort through the available alternatives.

Information Asymmetry – Why the marketplace has traditionally been unbalanced, with the seller almost always having more information about the product than the buyer.

This is Nothing New…

As I said last week, these are all hardwired human conditions that have been present for hundreds of generations, even though it’s only recently that we’ve learned enough about human behavior to recognize them. And it’s these inherent tendencies that have changed the marketplace since the introduction of the Internet. The huge volume of information available online allowed us to shift the balance of the marketplace to be more equitably distributed between sellers and buyers. Let’s explore how each of these forces drove the behavioral change, which was enabled, not caused, by the introduction of the Net.

We understand that risk is present in almost all consumer transactions. This fact brings Prospect Theory into the picture. We will unconsciously employ our emotional biases to deal with the risk inherent in each purchase: the greater the risk, the greater the degree of bias.

The Risk/Reward Balance

Consumer motivation relies on us mentally balancing risk and reward. The balance between these opposing forces will dictate how we deal with risk mitigation. If there is a high reward — for example, buying our mid-life crisis sports car or taking our dream vacation — our emotional biases will be tilted towards maximizing this reward. In these cases, consumer research is really more about wish fulfillment than it is about risk mitigation.

But if there is little or no reward, our research takes a much different path. Think about how we approach the purchase of life insurance, for example. There is no inherent reward here, just risk — or rather, mitigation of risk. And insurance salespeople mercilessly exploit the emotional bias of loss by getting you to picture your family’s future without you in it.

Informed Does Not Always Equal Rational

This risk/reward balance will dictate what our online research will look like. And this is where Akerlof’s Information Asymmetry comes in. One of the ways we mitigate risk is by educating ourselves about our purchase. We look up consumer ratings, read reviews and pore over feature sheets.

Today, consumers are much more informed than they were a generation ago. But all that information does not necessarily mean we will make a more logical decision. We humans tend to look at information to support our emotional biases, rather than refute them. So, the balancing of information asymmetry is still done through the lens of our emotional and psychological frames, as shown by Kahneman and Tversky. We have access to information online, but each of us may walk away with different messages, depending on the lens we’re seeing that information through.

All This Information, All These Choices…

And that, finally, brings us to Simon’s concept of Bounded Rationality. We have more information than ever to sift through. As I said a few columns back, we can employ different strategies to make decisions. Some of us embrace bounded rationality, or satisficing, making us more decisive. It’s important to note here that the fact we’re trusting our gut to make these satisficing calls means that we may be trusting emotion rather than logic. Others try to optimize each decision, weighing all the variables. While this is perhaps a more rational approach, it can tax our cognitive limits, leading to frustration and often abandonment of the optimal path, resulting in a decision that ends up being a “gut” call anyway.

Our need to access information to mitigate risk has led to these changes in consumer behavior. The Internet enabled this. It wasn’t technology that changed our behavior; it was just that technology opened the door, allowing us to pursue our hardwired tendencies.

The Four Horsemen of the Consumer Behavior Apocalypse

First published March 25, 2010 in Mediapost’s Search Insider

Right out of the gate, let’s assume that we all agree consumer behavior is in the throes of its biggest shift in history. And the cause is generally attributed to the Internet.

While I don’t disagree with this assessment, I believe there may be some misattribution when it comes to cause and effect. Did the Internet cause our consumer behavior to change? Or did it enable it to change? The distinction may seem like mere semantics, but there’s a fundamental difference here.

“Cause” implies that an outside force, namely the Internet, pushed us in a new direction that was different from the one we would have pursued had this new force not come along. “Enable” is a different beast, the opening of a previously locked door that allows us to pursue a new path of our own volition. I believe the latter to be true. I believe we weren’t pushed anywhere. We went there of our own free will.

Free Will? Or Hardwired Human Behavior?

But, even in my last statement, language again gets us in a sticky place. “Will” assumes it was a conscious and willful decision. I’m not sure this is the case. I suspect there were subconscious, hardwired behaviors that had a natural affinity for the new opportunities presented by the online marketplace.

For most of our recorded history, we have assumed that rational consideration and conscious will form the basis of human thought. If we did seem programmed to respond automatically to certain cues, this was the result of being conditioned by our environment, the classic Skinner black-box approach. But when we were on top of our game, we were carefully considering pros and cons, making consciously deliberated decisions. These were the forces that drove our society and our behaviors. This theory formed the basis of economics (Adam Smith’s Invisible Hand), Cartesian logic, and most market research.

But in the last few decades, this view of rationality riding triumphant over human foibles has been brought into question. In particular, there were three concepts put forward by four academics that caused us to question what drove our behaviors. These folks uncovered deeper, subconscious routines and influences that lay buried beneath the strata of rational thought. And it’s these subconscious behaviors that I believe found the new online opportunities so enticing. Let’s spend a little time today looking at these four thinkers and the new paradigms they asked us to consider.

Amos Tversky and Daniel Kahneman – Prospect Theory

Adam Smith’s Invisible Hand, driven by the wisdom of the market, has been presumed to be the ultimate economic governing factor. The assumption was that each of us, individually making rational economic decisions, would ultimately decide winners and losers, and capitalism would stay alive and well.

But Tversky and Kahneman, in their paper on Prospect Theory, showed that the invisible hand might not always be guided by a decisive and logical mind. We all have significant hardwired cognitive biases that often cause us to make illogical economic choices. For example, if I offered you $1,000, no questions asked, or a chance to win $2,500 based on a coin toss, you’d probably take the sure bet, even though, mathematically, the expected value of the coin toss is higher.
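
If you want to check the arithmetic behind that coin-toss example, here is a quick back-of-the-envelope sketch (the dollar amounts are the column’s hypothetical ones):

```python
# Expected-value check for the sure-thing vs. coin-toss offer.
# Amounts are the hypothetical ones from the example above.

sure_thing = 1000   # guaranteed payout, in dollars
gamble_win = 2500   # payout if the coin lands your way
p_win = 0.5         # a fair coin

# Expected value of the gamble: probability of winning times the payout
expected_gamble = p_win * gamble_win

print(f"Sure thing: ${sure_thing}")
print(f"Expected value of the gamble: ${expected_gamble:.0f}")
# The gamble's expected value ($1,250) beats the sure $1,000,
# yet most of us take the certain payout anyway.
```

That gap between the mathematically better choice and the one we actually make is exactly the loss aversion Prospect Theory describes.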

Prospect Theory shot some holes in the previous theory of Expected Utility, a model where we carefully weighed the pros and cons of a potential purchase based on a return-on-investment calculation. Emotional framing and risk avoidance played a much bigger role than we suspected, handicapping our logic and often guiding us down non-rational paths. Tversky and Kahneman founded the new discipline of Behavioral Economics and changed our thinking in the process.

Herbert Simon – Bounded Rationality

Simon’s concept of Bounded Rationality preceded Kahneman and Tversky’s theory, but it dovetails with it very nicely. Even if we are rationally engaged in a decision, Simon argued, we couldn’t possibly optimize it, especially in complex scenarios. There are simply too many factors to consider. So, we take “gut feeling” short cuts, which Simon called “satisficing,” a combination of satisfy and suffice. We short-list our consideration set by using beliefs and instincts.
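
The contrast between satisficing and optimizing can be sketched in a few lines of code. This is purely illustrative; the brand names and scores are invented:

```python
# A minimal sketch of Simon's "satisficing" vs. exhaustive optimizing.
# Brand names and scores are invented for illustration.

options = [("Brand A", 6), ("Brand B", 8), ("Brand C", 9), ("Brand D", 7)]

def satisfice(options, threshold):
    """Return the first option that is 'good enough' -- Simon's short cut."""
    for name, score in options:
        if score >= threshold:
            return name
    return None

def optimize(options):
    """Evaluate every option and return the best -- the costly rational path."""
    return max(options, key=lambda opt: opt[1])[0]

print(satisfice(options, threshold=7))  # "Brand B" -- stops at good enough
print(optimize(options))                # "Brand C" -- only found by checking everything
```

Note that the satisficer never even looks at Brand C: once something clears the “good enough” bar, the search stops. That is why making the short list matters so much.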

Making the satisficing short list is the goal of any brand campaign. At some point, logical weighing of pros and cons has to give way to calls based primarily on instinct. And, as Kahneman and Tversky showed, those instinctive calls may well be based on irrational emotional biases.

George Akerlof – Information Asymmetry

The last piece, and the one that really drove the online consumer revolution, is George Akerlof’s Information Asymmetry theory. Traditionally, there has been an imbalance of information between buyers and sellers, to the seller’s advantage. The seller always knew more about what they were selling than the buyer did. This made purchasing inherently risky.

With an absence of information, consumers created strong beliefs about brands as a way to guide their future buying decisions. Brand loyalty, whether rational or not, filled the void left by a lack of information. Manufacturers and retailers carefully controlled what information did enter the marketplace, pushing the positives and carefully suppressing the negatives.

These three concepts, intertwined, defined the psychological make-up of the market prior to the introduction of the Internet. In my next column, I’ll explore what happened when these behavioral powder kegs were exposed to the fanned flames of the digital marketplace.

Search and Our Online “Set Point”

First published March 18, 2010 in Mediapost’s Search Insider

Derek Gordon’s piece on Siri this week gave concrete proof of what I’ve been saying about the transition of search from a destination to a utility. Consider the example Derek gave of Siri’s functionality:

make action-oriented queries into your iPhone like “find me a good French restaurant for two tonight.”  Using your iPhone’s location coordinates, it will search Yelp for positive reviews of restaurants in your area, find a reservation for the most popular one via OpenTable and ask if you’d like to confirm a reservation.  Once you’ve confirmed the time, Siri will book the reservation for you. 

Notice the words Derek uses: “search” Yelp, “find” a reservation, both as intermediate steps to the end goal, allowing you to take action. And the Siri interface sits between you and the sources of the information. It’s exactly this interposing of a layer of functionality between the information and the user that I was talking about two weeks ago when I said that Steve Ballmer was thinking about the future of the search revenue model.

An application like Siri is only as good as the number of things it can do. Functionality, not information, is the new promise of the Internet. As John Battelle said in a recent chat with me, we quickly adjusted to the fact that the Internet could make us smarter. Now we expect it to let us do things better and faster. Information is only a means to an end.

Our Online ‘Set Point’

University of Pennsylvania psychologist Martin Seligman believes that we have a happiness “set point.” For example, winning a lottery doesn’t really make us happier in the long run. We just ratchet up our level of expectation to accommodate our new circumstance. I believe the same is true about our feelings towards advanced technology.

In the early days of the Internet, we were consistently amazed when we found information “out there.” It seemed that no matter what we were looking for, with enough diligence, we could find some source for it. The Internet was one big information archive, and search was the key we used to unlock it. But as with happiness, we’re very quick to reset our expectations. Amazement quickly gives way to a sense of entitlement. We now accept the fact that the information is out there somewhere. We now expect applications to gather it for us and present us with an opportunity to act on it.

The Road Ahead for Search

In a few short weeks, we’ll be gathering on Captiva Island in Florida to discuss where search is going. I believe a central theme will be this idea of search as a step towards usefulness.  We have reset our expectations and we need more from search. And this raises an interesting possibility. I have talked before about how Google became a habit for us. But habits only remain stable as long as they produce the expected results. Once we stop getting what we expect, we ready ourselves to break the habit and build a new one. It’s hard cognitive work, but we will undertake it if the payoff is worth it in terms of expected utility. As our expectations, fueled by glimpses of potential functionality through apps like Siri, are raised to a new set point, we will be less satisfied with the vanilla search experience offered by Google. This means, finally, we may be ready to break the Google Habit.

Google’s counter to that will be that Siri benefits from having a very focused purpose, supported through a dedicated interface and structured data. It’s impossible to match that functionality across all categories and use cases. Very true and very rational — but it doesn’t matter. If our online “set point” gets reset, our loyalty to Google will suffer. Suddenly, we won’t be satisfied anymore, because we believe something better is out there.