The ZMOT Continued: More from Jim Lecinski

First published July 28, 2011 in Mediapost’s Search Insider

Last week, I started my conversation with Jim Lecinski, author of the new ebook from Google, “ZMOT: Winning the Zero Moment of Truth.” Yesterday, fellow Search Insider Aaron Goldman gave us his take on ZMOT. Today, I’ll wrap up by exploring with Jim the challenge the ZMOT presents to organizations, along with some of the tips for success he covers in the book.

First of all, if we’re talking about what happens between stimulus and transaction, search has to play a big part in the activities of the consumer. Lecinski agreed, but was quick to point out that the online ZMOT extends well beyond search.

Jim Lecinski: Yes, Google or a search engine is a good place to look. But sometimes it’s a video, because I want to see [something] in use…Then [there’s] your social network. I might say, “Saw an ad for Bobby Flay’s new restaurant in Las Vegas. Anybody tried it?” That’s in between seeing the stimulus, but before… making a reservation or walking in the door.

We see consumers using… a broad set of things. In fact, 10.7 sources on average are what people are using to make these decisions between stimulus and shelf.

A few columns back, I shared the pinball model of marketing, where marketers have to be aware of the multiple touchpoints a buyer can pass through, potentially heading off in a new and unexpected direction at each point. This muddies the marketing waters to a significant degree, but it really lies at the heart of the ZMOT concept:

Lecinski: It is not intended to say, “Here’s how you can take control,” but you need to know what those touch points are. We quote the great marketer Woody Allen: “Eighty percent of success in life is just showing up.”

So if you’re in the makeup business, people are still seeing your ads in Cosmo and Modern Bride and Elle magazine, and they know where to buy your makeup. But if Makeupalley is now that place between stimulus and shelf where people are researching, learning, reading, reviewing, making decisions about your $5 makeup, you need to show up there.

Herein lies an inherent challenge for the organization looking to win the ZMOT: whose job is that? Our corporate org chart reflects marketplace realities that are at least a generation out of date. The ZMOT is virgin territory, which typically means it lies outside of one person’s job description. Even more challenging, it typically cuts across several departments.

Lecinski: We offer seven recommendations in the book, and the first one is “Who’s in charge?” If you and I were to go ask our marketer clients, “Okay, stimulus — the ad campaigns. Who’s in charge of that? Give me a name,” they could do that, right? “Here’s our VP of National Advertising.”

Shelf — if I say, “Who’s in charge of winning at the shelf?” “Oh, well, that’s our VP of Sales” or “Shopper Marketing.” And if I say, “Product delivery?” “Well, that’s our VP of Product Development” or “R&D” or whatever. So there’s someone in charge of those classic three moments. Obviously, the brand manager’s job is to coordinate those. But when I say, “Who’s in charge of winning the ZMOT?” usually I get blank stares back.

If you’re intent on winning the ZMOT, the first thing you have to do is make it somebody’s job. But you can’t stop there. Here are Jim’s other suggestions:

The second thing is, you need to identify what those zero moments of truth are in your category… Start to catalogue what those are, and then you can start to say, “Alright. This is a place where we need to start to show up.”

The next is to ask, “Do we show up and answer the questions that people are asking?”

Then we talk about being fast and being alert, because up to now, stimulus has been characterized as an ad you control. But sometimes it’s not. Sometimes it’s a study that’s released by an interest group. Sometimes it’s a product recall that you don’t control. Sometimes it’s a competitor’s move. Sometimes it’s Colbert on his show poking a little fun at Miracle Whip from Kraft. That wasn’t in your annual plan, but now there’s a ZMOT because, guess what happens — everybody types in “Colbert Miracle Whip video.” Are you there, and what do people see? Because that’s how they’re going to start making up their mind before they get to Shoppers Drug Mart to pick up their Miracle Whip.

Winning the ZMOT is not a cakewalk. But it lies at the crux of the new marketing reality. We’ve begun to incorporate the ZMOT into the analysis we do for clients. If you don’t, you’re leaving a huge gap between the stimulus and shelf — and literally anything could happen in that gap.

Marketing in the ZMOT: An Interview with Jim Lecinski

First published July 21, 2011 in Mediapost’s Search Insider

A few columns back, I mentioned the new book from Google, “ZMOT: Winning the Zero Moment of Truth.” But, in true Google fashion, it isn’t really a book, at least not in the traditional sense. It’s all digital, it’s free, and there’s even a multimedia app (a Vook) for the iPad.

Regardless of the “book’s” format, I recently caught up with its author, Jim Lecinski, and we had a chance to chat about the ZMOT concept. Jim started by explaining what the ZMOT is: “The traditional model of marketing is stimulus — you put out a great ad campaign to make people aware of your product — then you win the FMOT (a label coined by Procter & Gamble): the moment of truth, the purchase point, the shelf. Then the target takes home the product and hopefully it lives up to its promises. It makes whites whiter, brights brighter, the package actually gets there by 10:30 the next morning.

What we came out with here in the book is the notion that there’s actually a fourth node in the model, of equal importance. We gave an umbrella name to that new fourth moment that happens in between stimulus and shelf: if it’s prior to the First Moment of Truth, one minus one is zero, the ‘Zero Moment of Truth.’”

Google didn’t invent the ZMOT, just as Procter & Gamble didn’t invent the FMOT. These are just labels applied to consumer behaviours. But Google, and online in general, have had a profound effect on a consumer’s ability to interact in the Zero Moment of Truth.

Lecinski: “There were always elements of a zero moment of truth. It could happen via word of mouth. And in certain categories, of course — washing machines, automotive, certain consumer electronics — the zero moment of truth was won or lost in print publications like Consumer Reports, the Zagat restaurant guide or the Mobil Travel Guide.

But those things had obvious limitations. One: there was friction — you had to actually get in the car and go to the library. The second is timeliness — the last time they reviewed washing machines might have been nine months ago. And then the third is accuracy: ‘Well, the model that they reviewed nine months ago isn’t exactly the one I saw on the commercial last night that’s on sale this holiday weekend at Sears.’”

The friction, the timeliness and the simple lack of information all led to an imbalance in the marketplace that economist George Akerlof identified in 1970 as information asymmetry. In most cases, the seller knew more about the product than the buyer. But the Web has driven out this imbalance in many product categories.

Lecinski: “The means are available to everybody to remove that sort of information asymmetry and move us into a post-Akerlof world of information symmetry. I was on the ad agency side for a long time, and we made the TV commercial assuming information asymmetry. We would say, ‘Ask your dealer to explain more about X, Y, and Z.’

Well, now that kind of a call to action in a TV commercial sounds almost silly, because you go into the dealer and there’s people with all the printouts and their smartphones and everything… So in many ways we are in a post-Akerlof world. Even his classic example of lemons for cars, well, I can be standing on the lot and pull up the CARFAX history report off my iPhone right there in the car lot.”

Lecinski also believes that our current cash flow issues drive more intense consumer research. “Forty-seven percent of U.S. households say that they cannot come up with $2,000 in a 30-day period without having to sell some possessions,” he says. “This is how paycheck-to-paycheck life is.”

When money is tight, we’re more careful with how we part with it. That means we spend more time in the ZMOT.

Next week, I’ll continue my conversation with Jim, touching on what the online ZMOT landscape looks like, the challenge ZMOT presents marketers and the seven suggestions Jim offers about how to win the Zero Moment of Truth.

Different Platforms, Different Ads

First published June 9, 2011 in Mediapost’s Search Insider

There’s little argument that mobile’s time has come. According to Google, mobile searches make up anywhere from 5% to 12% of the total query volume for many popular keywords. And for many categories (like searches for local businesses) the percentage is much higher. That officially qualifies as “something to consider” in most marketing strategies. For many marketers, though, the addition of mobile is a simple checkbox in planning a search campaign. In Google’s quest to make life simple for marketers, we’re missing some fundamental aspects of marketing to mobile prospects. Okay, we’re missing one fundamental aspect: it’s different. Really different.

Last week, I talked about how my behaviors vary across multiple devices. But it’s not just me. It’s everyone. And those differences in behavior will continue to diverge as experiences become more customized. The mobile use case will look significantly different than the tablet use case. Desktops and smart entertainment devices will be completely different beasts. We’ll use them in different ways, with different intents, and in different contexts. We’d better make sure our marketing messages are different too.

Let’s go back to the Jacquelyn Krones research from Microsoft, which I talked about in the last column. If we divide search activity into three buckets (missions, excavations and explorations), we can see that three different approaches to search ads should go along with those divergent intents.

Excavation search sessions, which still live primarily on the desktop, are all about information gathering. Successful ads for these types of searches should offer rich access to relevant content. Learn to recognize the keywords in your campaigns that indicate excavation queries. They are typically more general in nature, and are often aligned with events that require extensive research: major purchases, planning vacations, or researching life-altering events like health concerns, moving to a new community, starting college or planning a wedding. In our quest to squeeze conversions off a landing page, we often pare down not only content, but also the on-page navigation pointing to more content. For an excavation-type search, this is exactly the wrong approach. Here, the John Caples approach to copywriting might be just the ticket: long, information-rich content that allows the user to “create knowledge.”

Missions, especially on mobile devices, are just that. You get in and you get out, hopefully with something useful that lets you do something else. Successful ads in this environment should do the same thing: take you one (or several) steps closer to a successful completion of the mission. Ad messaging should offer the promise of successful mission completion, and the post-click destination should deliver on that promise. Clean, hassle-free and exquisitely simple to use are the marching orders of mobile advertising.

Perhaps the most interesting search use case is the one on a tablet device. I’ve chatted with Yahoo’s relatively new VP of search, Shashi Seth, about this. He believes tablets might open the door for the visually rich, interactive ads that brand marketers love. And Krones’ research seems to indicate that this might indeed be the case. Tablets are ideal for exploration searches, which tend to be meandering voyages through the online landscape with less specific agendas. The delight of serendipity is one big component of an exploration search. And it’s this that marks a significant departure for most search marketers.

Every search marketer learns the hard way that it’s incredibly difficult to lure search users away from the task they have in mind. When we do our keyword analysis, we’re usually disappointed to find that the list of highly relevant words is much smaller than we thought. So, we extend our campaign into keywords that, while not directly relevant, are at least adjacent to the user’s anticipated intent. If they’re looking for a jigsaw, we might try running an ad for free children’s furniture plans. Or, if they’re looking for a new car, we might try running an ad that reminds them that they can save 15% on their car insurance just by clicking on our ad.

We’ve all been here. In the mind of the marketer, it makes sense to buy these keywords. After all, the two worlds are not so far apart. A new owner of a jigsaw might indeed be interested in building a set of bunk beds. And the new car owner will need car insurance. The problem is, neither of those things is relevant “in the moment,” and “in the moment” rules in most search interactions. So, after a few months of trying, we reluctantly remove these keywords from our campaign, or drop the bid price so low they’re buried three pages deep in the results.

But perhaps tablet users are different. I’m certain the search experience on a tablet will soon look significantly different than it does on a PC. I would expect it to be more tactile and interactive, less rigidly ordered. And, in that environment, given the looser constraints of an exploration-type search, we might be more willing to explore a visually rich distraction. Shashi Seth thinks so. Krones’ research seems to point in this direction as well. For this search marketer, that’s reason enough to test the hypothesis. Or, I will test it, as soon as Google, Yahoo and Bing make that possible.

The Segmentation of My Slime Trail

First published June 2, 2011 in Mediapost’s Search Insider

My connected life is starting to drop into distinct buckets. Now that I have my choice of connecting through my smartphone (an iPhone), my tablet (an iPad), my work computer (a MacBook) and my home computer (a Windows box), not to mention the new Smart TVs we bought (Samsungs), I’m starting to see my digital footprints (or my digital slime trail, to use Esther Dyson’s term) diverge. And the nature of the divergence is interesting.

Take Netflix, for example. It’s finally come to Canada, although with a depressingly small number of movies to choose from. My Netflix account stretches across all my devices, but the things I watch on my iPad are quite a bit different than my choices on an iPhone. And there is yet another profile for the things I choose on my MacBook (mainly when I travel). On the iPad, it’s typically an episode of “Arrested Development,” “Fawlty Towers” or, if I have a little more time, “Mad Men,” (and yes, I realize those three choices create an interesting psychological profile of myself) that offers some respite when the women of my household commandeer all available TV sets. On the new Samsung, it’s usually a movie intended for viewing by myself and at least one other member of my family.

Kindle offers a similar divergence of reading patterns — again, one application that’s spread across multiple devices. And, like my movie watching, my reading habits vary significantly depending on what I’m doing the reading on. I almost never read on my laptop, but it’s my preferred platform for research and annotation. My favorite reading device is my iPad, but it’s primarily used at home. I only take it on the road for extended trips. My fall-back is the iPhone, which gets called into duty when I have time to kill when traveling or in between my kid’s volleyball games.

Jacquelyn Krones, from Microsoft, did a fascinating research project where she looked at search habits across multiple devices. She found that our searches could be grouped into three different categories: missions, excavations and explorations.

A mission is the typical task-based single interaction where we need to get something done. The nature of the mission can be significantly different on a mobile device, where the mission is usually related to our physical location. In this case, geo-location and alternative methods of input (i.e., taking a picture, recording a sound or scanning a bar code) can make completing the mission easier, because the outputs are more useful and relevant in the user’s current context. This is why app-based search is rapidly becoming the norm on mobile devices. Missions on the desktop tend to be more about seeking specific information, which then allows us to complete a task beyond the scope of our search interaction.

Excavations are research projects that can extend over several sessions and are typically tied to an event of high interest to the user. Health issues, weddings, major travel, home purchases and choosing a college are a few examples. The desktop is the hands-down winner for this type of search engagement. It provides an environment where information can be consolidated and digested through the help of other applications. Krones calls this “making knowledge,” implying a longer and deeper commitment on the part of the user.

Finally, we have exploration. Explorations are more serendipitous in nature, with users setting some fairly broad and flexible boundaries for their online interactions. While excavation can become a part of exploration, the behaviors are usually distinct. Exploration tends to be a little more fluid, with the user being open to suggestion and persuasion, while excavation is more about assembling information to support an intent that is already decided upon. Tablets seem to be emerging as a strong contender in the exploration category. The relaxed nature of typical interaction with an iPad, for example, supports the open agenda of exploration.

What this means, of course, is that the trail I leave behind on my mobile device starts to look significantly different than the trail on my laptop or tablet. Each fits a different use case, as they start to become tools with distinct capabilities, over and above the fact that they’re all connected to the Internet.

How Smart Do We Want Search to Get?

First published February 17, 2011 in Mediapost’s Search Insider

Imagine if a search engine were smart enough to anticipate your needs before you’re even aware of them. There it sits, silently monitoring your every move, and just when you get a hankering for Thai food (burbling up to the threshold of consciousness), there it is with the hottest Thai restaurants within a two-mile radius. You didn’t have to do a thing. It was just that smart!

Sound utopian? Then take a moment to think again. Do we really want search to become that smart? Sure, it sounds great in theory, but what would we have to share to allow search to become truly prescient?

The odd thing about humans is that we want our lives to be easier, but we don’t want to sacrifice control in the process. Well, to be more precise, we don’t want to sacrifice control in some situations. It all comes down to our level of engagement with the task at hand and the importance of gut instinct.

Humans have a mental bias toward control. We are most anxious when we have no control over our environment. In fact, even when we have very little control over outcomes (such as in a casino), we fool ourselves into thinking we do. We believe that the way we toss the dice on a craps table (or the hat we’re wearing, or the color of our underwear) has some impact on which numbers come up for us. Factory workers on an assembly line are much happier when they have a button that can stop the line, even if they never use it. We love control and are loath to relinquish it.

Even if a search engine had a 100% success rate in anticipating our intent, chances are we’d feel anxious about surrendering control of our decisions. In fact, this issue has already played out once online. At the height of the dot-com boom, billions of dollars were invested in creating friction-free online marketplaces. The theory was that certain purchases, especially in the B2B marketplace, could be totally automated. In a 2000 supply chain management magazine article, an industry consultant saw a bright future for e-procurement: “As long as you understand the business rules for making decisions, there’s no reason why you can’t automate. Why can’t two computer systems – with built-in rules – talk to each other?”

It sounds completely rational, but rationality has little to do with what we want. We want to feel in control. B2B buying didn’t become automated because we have too much invested in making buying decisions, even when we’re buying widgets for the assembly line, a bank of servers or copy paper in bulk. We don’t trust machines, no matter how smart they are, to make our decisions for us.

What we want is a search engine that guides us, but doesn’t push us. We want a smarter search experience, but we think of it as a filter rather than an arbiter. Ideally, we want a concierge who can make informed suggestions that we can then act on.

Could a search engine become smart enough to predict our wants and desires before we’re even aware of them? Possibly, but the other part of that trade-off may be one we’re unwilling to make. How much privacy do we have to give up in order for the engine to know us that well? One of the hottest growth markets is in the area of personal technology. These little bits of tech live with us day in and day out. Consider the Fitbit, a sophisticated motion sensor that tracks our daily movements as long as we keep it with us. This daily diary of our activity (even how restless our sleep is) can be fed directly to the Web. The idea is intriguing, but the reality is a little disconcerting, especially when you think where this technology may go in the future. 

As we embed more and more technology into our everyday lives, there is the opportunity to collect signals that could help a search engine (though at this point, the label “search engine” seems wholly inadequate) track behaviors and make very educated guesses about what we might be interested in. Our dreams and desires could potentially be crunched into just another algorithm. Practical? Perhaps. Desirable? I suspect not.

Finally, slumbering just below this discussion is the lurking presence of ultra-targeted advertising, and it’s this that we may find most troubling. If technology someday succeeds in reading our very minds, how can we use that same mind to say no?

Why Can’t I Argue with Google (or Malcolm Gladwell)?

First published February 3, 2011 in Mediapost’s Search Insider

This week I was in San Francisco for Big Think’s Farsight 2011: Beyond the Search Box. I took copious notes, but there was one comment in particular I found intriguing. Luc Barthelet, from Wolfram|Alpha, said that the company’s goal is not just to provide an answer, but to show the route taken to arrive at the answer. Then we’re free to question the validity of the answer. “I want to argue with a search engine. I want to be able to challenge its logic.”

This was the first time I had ever heard this, but it immediately struck a chord. Why can’t we argue with Google? Why do we just accept its answers? How do we know they’re right? Of course, Google doesn’t really create an answer; it connects us with answers. But more and more, Google is disintermediating the source of the answer. For many searches, we never go beyond the search results page. We accept the answer as presented by Google, without ever questioning the rationale behind it.

Why is arguing important? What could we gain from arguing with Google? Let me give you one example of why it’s good to argue.

There is no problem…

The Summit featured recorded video clips from famed pundits, including Malcolm Gladwell. Gladwell told us that the purpose of the Summit — to ponder how we might reinvent search — was misguided. “Can we build a better Google or Bing? Yeah, sure we can. But it solves a problem that’s not really a problem.” In Gladwell’s view, we already have access to all the information we need.

I disagree vehemently with Gladwell. This same logic could be applied to any avenue of human endeavor, and it would stop all progress and innovation in its tracks. Could a horse and covered wagon transport us across the country? Yeah, sure it could. But I’d rather take a plane, thank you. And someday I hope there’s an even faster way. Gladwell’s off-the-cuff comment shocked the audience. How could he provide an answer so obviously lacking in informed context? The structure of his argument had holes so big we could have poked the Golden Gate Bridge through them.

Say What, Malcolm?

If Gladwell believes that a valid answer to every question is Wikipedia, perhaps his argument holds water. But he is ignoring the fundamental precepts of information foraging and retrieval. We need to surface the best information by taking the shortest possible path to it. Everyone who knows anything about search agrees with that, and we also agree that we’re not there yet. Not by a long shot.

But going beyond this, there’s the broader question: is the current use case of search the one we need going forward? Right now, search is about the retrieval of relevant information. Let’s leave aside the question of whether it’s successful at doing that. But is simple retrieval of information (often false information) enough anymore? As Esther Dyson pointed out, perhaps “search” isn’t even the verb we should be using now. Is “solving” or “fulfilling” a better description of what we need? Dyson remarked, “We use the Internet to connect to and affect the world around us.” And if that’s the use case, search falls far short of our expectations.

But I couldn’t argue with Gladwell, because he wasn’t in the room and I couldn’t uncover the rationale behind his pithy answer. He was a bit like Google; he dropped his wisdom from on high and was gone.

The Importance of Arguing

We argue because it knocks down intellectual straw men. It allows us to test and prod the logic that lies behind opinions. It challenges beliefs, which tend to keep us barricaded from the rest of the world. If those beliefs are deeply held, they may be difficult (or impossible) to dislodge, but if they’re never questioned, minds will never change — and we’ll all barrel down those pre-laid tracks to a much too predictable future.

I agree with Barthelet. We should be able to argue with online information. We should be able to see the path taken to answers. We should be able to challenge sources. It’s more appropriate in some instances than others, and it’s an option we may not take advantage of very often, but it should be open to us.

Is the Internet Making Us Stupid – or a New Kind of Smart?

First published September 9, 2010 in Mediapost’s Search Insider

As I mentioned a few weeks back, I’m reading Nicholas Carr’s book “The Shallows.” His basic premise is that our current environment, with its deluge of available information typically broken into bite-sized pieces served up online, is “dumbing down” our brains.  We no longer read, we scan. We forego the intellectual heavy lifting of prolonged reading for the more immediate gratification of information foraging. We’re becoming a society of attention-deficit dolts.

It’s a grim picture, and Carr does a good job of backing up his premise. I’ve written about many of these issues in the past. And I don’t dispute the trends that Carr chronicles (at length). But is Carr correct in saying that online is dulling our intellectual capabilities, or is it just creating a different type of intelligence?

What’s more, I suspect this new type of intelligence is much more aligned with our native abilities than the “book smarts” that have ruled the day for the last five centuries. I’m an avid reader (ironically, I’ve been reading Carr’s book on an iPad) and I’m the first to say that I would be devastated if reading went the way of the dodo. But are we projecting our view of what’s “right” onto a future where the environment (and the rules) have changed?

A Timeline of Intellect

If you expand your perspective of human intellectualism to the entire history of our species, you find that the past 500 years have been an anomaly. Prior to the invention of the printing press (and the subsequent blossoming of intellectualism), our brains were there for one purpose: to keep us alive. The brain accomplished this critical objective in three ways:

Responding to Danger in Our Environments

Reading is an artificial human activity. We have to train our brains to do it. But scanning our surroundings to notice things that don’t fit is as natural to us as sleeping and eating. We have sophisticated, multi-layered mechanisms to help us recognize anomalies in our environment (which often signal potential danger).  I believe we have “exapted” these same mechanisms and use them every day to digest information presented online.

This idea goes back to something I have said repeatedly: Technology doesn’t change behavior, it enables behavior to change. Change comes from us pursuing the most efficient route for our brains. When technology opens up an option that wasn’t previously available, and the brain finds this a more natural path to take, it will take it. It may seem that the brain is changing, but in actuality it’s returning to its evolutionary “baseline.”

If the brain has the option of scanning, using highly efficient inherent mechanisms honed through evolution over thousands of generations, or reading, using jury-rigged, inefficient neural pathways that we’ve been forced to build from scratch over our lives, the brain will take the easiest path. The fact is, we could never scan a book. But we can scan a Web site.

Making The Right Choices

Another highly honed ability of the brain is making advantageous choices. We can consider alternatives using a combination of gut instinct (more than you know) and rational deliberation (less than you think) and, more often than not, make the right choice. This ability goes in lockstep with the previous one, scanning our environment.

Reading a book offers no choices. It’s a linear experience, forced to go in one direction. It’s an experience dictated by the writer, not the reader. But browsing a Web site is an experience littered with choices.  Every link is a new choice, made by the visitor. This is why we (at my company) have continually found that a linear presentation of information (for example, a Flash movie) is a far less successful user experience than a Web site where the user can choose from logical and intuitive navigation options.

Carr is right when he says this is distracting, taking away from the focused intellectual effort that typifies reading. But I counter with the view that scanning and making choices is more naturally human than focused reading.

Establishing Beneficial Social Networks

Finally, humans are herd animals. We naturally create intricate social networks and hierarchies, because it’s the best way of ensuring that our DNA gets passed along from generation to generation. When it comes to gene propagation, there is definitely safety in numbers.

Reading is a solitary pursuit. Frankly, that’s one of the things avid readers treasure most about a good book, the “me” time that it brings with it. That’s all well and good, but bonding and communication are key drivers of human behavior. Unlike a book, online experiences offer you the option of solitary entertainment or engaged social connection. Again, it’s a closer fit with our human nature.

From a personal perspective, I tend to agree with most of Carr’s arguments. They are a closer fit with what I value in terms of intellectual “worth.” But I wonder if we fall into a trap of narrowed perspective when we pass judgment on what’s right and what’s not based on what we’ve known, rather than on what’s likely to be.

At the end of the day, humans will always be human.

Wired for Information: A Brain Built to Google

First published August 26, 2010 in Mediapost’s Search Insider

In my last Search Insider, I took you on a neurological tour that gave us a glimpse into how our brains are built to read. Today, let’s dig deeper into how our brains guide us through an online hunt for information.

Brain Scans and Searching

First, a recap. In Nicholas Carr’s book, “The Shallows: What the Internet Is Doing to Our Brains,” I focused on one passage — and one concept — in particular. It’s likely that our brains have built a short cut for reading. The normal translation from a printed word to a concept usually requires multiple mental steps. But because we read so much, and run across some words frequently, it’s probable that our brains have built short cuts to help us recognize those words simply by their shape in mere milliseconds, instantly connecting us with the relevant concept. So, let’s hold that thought for a moment.

The Semel Institute at UCLA recently did a neuroscanning study that monitored what parts of the brain lit up during the act of using a search engine online. What the institute found was that when we become comfortable with the act of searching, our brains become more active. Specifically, the prefrontal cortex, the language centers and the visual cortex all “light up” during the act of searching, as well as some sub-cortical areas.

It’s the latter of these that indicates the brain may be using “pre-wired” short cuts to directly connect words and concepts. It’s these sub-cortical areas, including the basal ganglia and the hippocampus, where we keep our neural “short cuts.”  They form the auto-pilot of the brain.

Our Brain’s “Waldo” Search Party

Now, let’s look at another study that may give us another piece of the puzzle in helping us understand how our brain orchestrates the act of searching online.

Dr. Robert Desimone at the McGovern Institute for Brain Research at MIT found that when we look for something specific, we “picture” it in our mind’s eye. This internal visualization in effect “wakes up” our brain and creates a synchronized alarm circuit: a group of neurons that hold the image so that we can instantly recognize it, even in complex surroundings. Think of a “Where’s Waldo” puzzle. Our brain creates a mental image of Waldo, activating a “search party” of Waldo neurons that synchronize their activities, sharpening our ability to pick out Waldo in the picture. The synchronization of neural activity allows these neurons to zero in on one aspect of the picture, in effect making it stand out from the surrounding detail.

Pirolli’s Information Foraging

One last academic reference, and then we’ll bring the pieces together. Peter Pirolli, from Xerox’s PARC, believes we “forage” for information, using the same inherent mechanisms we would use to search for food. So, we hunt for the “scent” of our quarry, but in this case, rather than the smell of food, it’s more likely that we lodge the concept of our objective in our heads. And depending on what that concept is, our brains recruit the relevant neurons to help us pick out the right “scent” quickly from its surroundings.  If our quarry is something visual, like a person or thing, we probably picture it. But if our brain believes we’ll be hunting in a text-heavy environment, we would probably picture the word instead. This is the way the brain primes us for information foraging.

The Googling Brain

This starts to paint a fascinating and complex picture of what our brain might be doing as we use a search engine. First, our brain determines our quarry and starts sending “top-down” directives so we can very quickly identify it. Our visual cortex helps us by painting a picture of what we might be looking for. If it’s a word, our brain becomes sensitized to the shape of the word, helping us recognize it instantly without the heavy lifting of lingual interpretation.

Thus primed, we start to scan the search results. This is not reading; this is scanning our environment in mere milliseconds, looking for a scent that may lead the way to our prey. If you’ve ever watched a real-time eye-tracking session with a search engine, this is exactly the behavior you’d see.

When we bring all the pieces together, we realize how instantaneous, primal and intuitive this online foraging is. The slow and rational brain only enters the picture as an afterthought.

Googling is done by instinct. Our eyes and brain are connected by a short cut in which decisions are made subconsciously and within milliseconds. This is the forum in which online success is made or missed.

How Our Brains are Wired to Read

First published August 19, 2010 in Mediapost’s Search Insider

How do we read? How do we take the arbitrary, human-made code that is the written word and translate it into thoughts and images that mean something to our brain, an organ that had its basic wiring designed thousands of generations before the appearance of the first written word? What is going on in your skull right now as your eyes scan the black squiggly lines that make up this column?

The Reading Short Cut

I’m currently reading Nicholas Carr’s “The Shallows: What the Internet is Doing to Our Brains,” a follow-up to Carr’s article in The Atlantic, “Is Google Making Us Stupid?” The concept Carr explores is fascinating to me: the impact of constant online usage on how the neural circuits of our brain are wired.

But there was one quote in particular, from Maryanne Wolf’s book, “Proust and the Squid: The Story and Science of the Reading Brain,” that literally leapt off the page for me: “The accomplished reader, Maryanne Wolf explains, develops specialized brain regions geared to the rapid deciphering of text. The areas are wired ‘to represent the important visual, phonological and semantic information and to retrieve this information at lightning speed.’ The visual cortex, for example, develops ‘a veritable collage’ of neuron assemblies dedicated to recognizing, in a matter of milliseconds, ‘visual images of letters, letter patterns and words.’”

For everyone reading this column today, that is one of the most relevant passages you may ever scan your eyes across. It’s vitally important to digital marketers and designers of online experiences. Humans who read a lot develop the ability to recognize word patterns instantly, without going through the tedious neural heavy lifting of translating the pattern through the language centers of the brain. A quick neurological tour is in order here.

How the Brain Reads

The brain has a habit of developing multiple paths to the same end goal. Many functions that our brain controls tend to have dual routes: a quick-and-dirty one that rips through the brain at lightning speed and a slower, more rational one. It’s the neural reality behind Malcolm Gladwell’s “Blink.” This dual-speed processing is a tremendously efficient way of coping with our environment. The same mechanism, according to Wolf, has been adapted to our interpretation of the written word.

Humans have an evolved capacity for language. Noam Chomsky, Steven Pinker and others have shown convincingly that we come out of the box with inherent capabilities to communicate with each other. But those abilities, housed in the language centers of the brain (Wernicke’s and Broca’s Areas, if you’re interested) are limited to oral language. Written language hasn’t been around nearly long enough for evolution’s relatively slow timeline to have had much of an impact. That’s why we learn to speak naturally just by hanging around other humans, but only those with a formalized and structured education learn to read and write. We have to take the native machinery of the brain and force it to adapt to the required task by creating new neural paths.

Instantly Recognizable…

So, when we read a page of text, there’s a fairly complex and laborious process going on in our noggins. Our visual cortex scans the abstract code that is written language, feeds it to the language centers for translation, and then sends it to our prefrontal cortex and our long-term memory to be rendered into concepts that mean something to us. The word “horse” doesn’t really mean the large, hairy, four-legged mammal that we’re familiar with until it goes through this mental processing.

But, like anything that humans do often, we tend to create short cuts through repetition. It’s important to note that this isn’t evolution at work, it’s neuroplasticity. The ability to read and write is built in each human from scratch. The brain naturally tries to achieve maximum efficiency by taking things we do repeatedly and building little synaptic short cuts. Humans who read a lot become wired to recognize certain words just by their shape and appearance, without needing to run the full processing cycle. Your name is a good example. How often have you been reading a newspaper or book and run across your last name? Does it seem to “leap off the page?” That was your brain triggering one of its little short cuts.

So, what does this mean for online interactions, particularly with a search engine? In next week’s column, I’ll revisit a fascinating brain scanning study that was done by UCLA and take a peek at what might be happening under the hood when we launch a Web search.

 

Our Indelible Lives

First published June 3, 2010 in Mediapost’s Search Insider

It’s been a fascinating week for me. First, it was off to lovely Muncie, Ind. to meet with the group at the Center for Media Design at Ball State University. Then, it was to Chicago for the National Business Marketing Association Conference, where I was fortunate enough to be on a panel about what the B2B marketplace might look like in the near future. There was plenty of column fodder from both visits, but this week, I’ll give the nod to Ball State, simply because that visit came first.

Our Digital Footprints

Mike Bloxham, Michelle Prieb and Jen Milks (the last two joined us for our most recent Search Insider Summit) were gracious hosts, and, as with last week (when I was in Germany), I had the chance to participate in a truly fascinating conversation that I wanted to share with you. We talked about the fact that this generation will be the first to leave a permanent digital footprint. Mike Bloxham called it the Indelible Generation. That title is more than just a bon mot (being British, Mike is prone to pithy observations) — it’s a telling comment about a fundamental aspect of our new society.

Imagine some far-in-the-future anthropologist recreating our culture. Up to this point in our history, the recorded narrative of any society came from a small sliver of the population. Only the wealthiest or most learned received the honor of being chronicled in any way. Average folks spent their time on this planet with nary a whisper of their lives recorded for posterity. They passed on without leaving a footprint.

Explicit and Implicit Content Creation

But today — or if not today, certainly tomorrow — all of us will leave behind a rather large digital footprint. We will leave in our wake emails, tweets, blog posts and Facebook pages. And that’s just the content we knowingly create. There’s a lot of data generated by each of us that’s simply a byproduct of our online activities and intentions. Consider, for example, our search history. Search is a unique online beast because it tends to be the thread we use to stitch together our digital lives. Each of us leaves a narrative written in search interactions that provides a frighteningly revealing glimpse into our fleeting interests, needs and passions.

Of course, not all this data gets permanently recorded. Privacy concerns mean that search logs, for example, get scrubbed at regular intervals. But even with all that, we leave behind more data about who we were, what we cared about and what thoughts passed through our minds than any previous generation. Whether it’s personally identifiable or aggregated and anonymized, we will all leave behind footprints.

Privacy? What Privacy?

Currently we’re struggling with this paradigm shift and its implications for our privacy. I believe in time — not that much time — we’ll simply grow to accept this archiving of our lives as the new normal, and won’t give it a second thought. We will trade personal information in return for new abilities, opportunities and entertainment. We will grow more comfortable with being the Indelible Generation.

Of course, I could be wrong. Perhaps we’ll trigger a revolt against the surrender of our secrets. Either way, we live in a new world, one where we’re always being watched. The story of how we deal with that fact is still to be written.