The Wisdom of Consumer Crowds?

Following up on the theme of the rewiring of our brains, is the internet making us smarter consumers as well? There certainly seems to be evidence pointing in that direction.

A study by ScanAlert found that the average online shopper in 2005 took 19 hours between first visiting a store and completing a transaction. In 2007, that jumped almost 79%, to 34 hours. We’re taking longer to make up our minds. And we’re also doing our homework. Deloitte’s Consumer Products group recently released research saying 62 percent of consumers read consumer-written product reviews on the Internet, and of those, more than 8 in 10 are directly influenced by the reviews.

In The Wisdom of Crowds, James Surowiecki argues that large groups, thinking independently with access to a diversity of information, will, under the right conditions, make a better collective decision than the smartest individual in the group. Isn’t the Internet wiring this wisdom into more and more purchases? When we access these online reviews, we’re in fact coming to collective decisions about a product, built on hundreds or thousands of individual experiences. As the network expands, we benefit from the diversity of all those opinions and probably get a much more accurate picture of the quality of a product than we ever could from vendor-supplied information alone. The marketplace votes with its choices, and the best product should theoretically emerge as the winner.

Of course, nothing works perfectly all of the time. As Surowiecki points out, communication can be an inexact and imperfect process, and information cascades based on faulty inputs can spread faster than ever online. But it’s also true that if a cascade leads to rapid adoption of an inferior product, we’ll discover we’ve been “had” faster, and this news can also spread more quickly. Online connections disseminate experience-based information much faster than ever before, ensuring that the self-correcting mechanisms of the marketplace kick into gear sooner.

There’s a pass-along effect happening here as well. If you’re a social networking buff, you’ve probably heard of Granovetter’s “weak ties.” Social networks are made up of dense, highly connected clusters: families, close friends, co-workers. The social ties within these clusters are strong ties. But spanning the clusters are “weak ties” between more distant acquaintances. The ability for word to spread depends on these weak ties. What the internet does is exponentially increase the number of weak ties, wiring thousands of clusters together into much bigger networks than were ever possible before. This allows word of mouth to travel not only in the physical world but also in the virtual one. I looked at a fascinating follow-up study to Granovetter’s, in which Jonathan Frenzen and Kent Nakamoto also examined the value of the information, and the self-interest of individuals and their strong ties within a cluster, as factors in how quickly word of mouth passes through a network.

Deloitte’s study graphically illustrates the weak tie/strong tie effect. 7 out of 10 of the consumers who read reviews share them with friends, family or colleagues, moving the information that comes through the weak ties of the internet into each cluster, where it spreads rapidly thanks to the efficiency of strong ties. This effect pumps up the power of word of mouth by several orders of magnitude.

But are we also becoming more socially aware in our shopping? The research by Deloitte also seems to indicate this. 4 out of 10 consumers said they were swayed by “better for you” ingredients or components, eco-friendly usage and sourcing, and eco-friendly production or packaging. The internet wires us into communities, so it’s not surprising that we become more sensitive to the collective health of those communities in the process.

What all this leads to is a better informed consumer, one who’s not reliant on marketing messaging coming from the manufacturer or the retailer. And that should make us all smarter.

Are Our Brains being Rewired?

I have to start out by thanking Nico Brooks and Jess Gao. Without intending to, they both provided me with more than enough fodder for a rather lengthy column in Search Engine Land on Friday.

Nico is the Chief Search Strategist at Atlas. Jess is our intern at Enquiro, who’s currently working towards her doctorate, specializing in cognitive psychology. Through different paths, they both gave me some major brain melting ideas to chew over. I’m still digesting, but you can catch the thought process in action on my column.

But consider this. What if our brains are being rewired by the internet? Some of our behaviors are innate. They’re our OEM operating software, put there by the manufacturer. Fight or flight. The need to procreate. The appreciation of beauty. This stuff is hardwired.

But some of our behaviors are learned. We’ve developed them as we go. These things sit in our temporary memory caches, and we can adjust them if they’re no longer working. The thing that started all this was how we learn to navigate a physical environment. First we look for landmarks, then we memorize routes, then we put the two together to create a cognitive map. Nico’s suspicion (and Nico, I hope I’m capturing the essence of the idea accurately) is that our need to identify landmarks, and even our ability to memorize routes, is probably innate. It’s just how we are programmed to get around. But cognitive mapping, at least in the essentially rectangular grid pattern of the Cartesian coordinate model, is a learned behavior. Rectangles have no place in the n-dimensional space of online, so as we spend more time navigating online, will we change our mapping process?

Then, with Jess, we had a great chat about how we perceive things, especially ads. There’s a great introduction to selective perception that I would urge you to check out. In recent studies we’ve done at Enquiro, one of the interesting findings has been that the more intrusive the ad, the less it seems to work. It registers high in the first stage of perception, stimulation, and manages to succeed in the second, registration, but fails in the last two stages, organization and interpretation.

There were other conversations this week that didn’t make it into either of the columns. On Thursday I was in New York for Google’s B to B Summit and had a chance to chat with Mark Martel, who supports the B to B Tech Sales Vertical at Google. Mark has a healthy intellectual curiosity and I always enjoy chatting with him. We discussed schemas and how important they are in the process of perception. Then, on Friday, I was in Toronto chatting with the Yahoo Canada gang, including Maor Daniels and Adina Zointz (what a great name, literally covering everything from A to Z!), and we talked about how quickly we’re learning to judge the authenticity of content online. It’s as if our bullshit filters are more finely tuned than ever.

I’m definitely on a riff here, but there are a lot of threads coming together. Even in someone of my ever-upwards-creeping years (I’m 46), I suspect my synapses are under construction. Old routes are being torn out and new ones are being built. And with my daughters, many of the paths are being built differently right from the start. The routes that were so important to me in grade school (times tables, rote memorization and the like) are becoming overgrown with weeds through lack of use. But new routes I never even thought of, like how to do homework, carry on an online chat and watch TV with one eye, are being upgraded into major turnpikes. Multitasking is a major operational imperative now, and selective perception is kicking into overdrive.

Anyway, to further dive into some of the things on my mind, here’s some of the columns where I’m beginning to open up some of these ideas to the fresh, online air:

Infomediating a Broken Marketplace – a look at Hagel and Singer’s infomediary model from their book Net Worth. Is Google aiming to be the ultimate matchmaker in the marketplace?

4000 Ads a Day, and Counting – Part One of the Infomediary Doubleheader, looking at the disconnect between customers who just want the facts, and advertisers that just want to control our buying habits

Some Big Ideas for a Friday – Some musings about how we perceive advertising, based on recent studies we conducted, and how we might be remapping the perception process

How We Navigate Our Online Landscape – The original exploration of landmark, route and survey knowledge and how it may map (or not) to how we navigate our online space

And please, do me a favor. This is all stuff I want to explore further in the book. If you think I’m full of bullshit, call me on it. Share your thoughts. Post a comment. Start a dialogue. I know it’s a pain in the ass posting comments on blogs because of spam, but PLEEASSSE take a few moments to do so. Or drop me an email.

Live from the Google B2B Summit: Meet Amy

I’m just waiting at JFK after attending Google’s B2B Summit in New York and just had to drop a quick post. The highlight of the day was a keynote by Amy Curtis-McIntyre, the founding CMO of JetBlue Airways. Now, I shared a podium some time ago with David Neeleman, the (until recently) CEO of JetBlue, and I can’t imagine an odder couple than the brash New York sass of Amy and the quiet Mormon values of Neeleman, but they made it work and created a phenomenal brand success story in the process.

For me, it was a particular pleasure to meet Amy, and I already made sure I lined up an interview to find out more about the amazing JetBlue story. Companies like this, that realize a brand promise to a consumer is sacrosanct, fascinate me. If you ever get a chance to hear Amy present (she’s now a suburban mom in Chicago), make sure you don’t miss it. She’s right up on my list of favorite speakers with Guy Kawasaki, and there’s an amazing degree of resonance in their messages. By the way, Guy’s also on my hit list for an interview at some point.

Infomediating a Broken Marketplace

First published October 18, 2007 in Mediapost’s Search Insider

Last week, I explored the disconnect between how advertisers define Nirvana (the ability to control consumers and persuade them at will by inundating them with advertising) and what consumers dream about: authentic and reliable information on needed products and services. There are costs associated with both sides: the cost of advertising, and the cost of consumer research. Max Kalehoff, from Nielsen BuzzMetrics, pointed out another cost, the nuisance cost to the consumer of wading through an earlobe-deep sea of irrelevant and uninvited advertising. Zapped TV commercials, blaring billboards, glaring signage, email spam, ubiquitous interstitials and pop-ups, preloads, or one of the zillions of other ways advertisers choose to scream at you.

So, with this highly inefficient, annoying and disconnected marketplace, there has to be a better way, right? Well, Marc Singer and John Hagel III think so. They call it the infomediary, a concept introduced in their 1999 book, “Net Worth.” It’s well worth the read. The one thing that struck me is that in the entire book, the word “Google” is not mentioned once. This is not really surprising, given the publication date, but for reasons that will soon become clear, the irony was not lost on me.

How to Spot an Infomediary

Here’s the basic foundation of the infomediary. Acting on behalf of the client when he’s looking to make a purchase, the infomediary takes previously gathered personal information, as well as information volunteered by the client, and searches for the best match with vendors. The client can choose to remain anonymous, saving himself from an onslaught of advertising. Or, if the client agrees, the infomediary will pass his name along to a qualified vendor, and for this privilege, the vendor will pay the prospect. In essence, the infomediary plays the role of marketing matchmaker.

There are a number of offshoots of this basic premise. The infomediary supplies privacy tools to clients, marketing intelligence to vendors, the opportunity to bargain as a group for lower prices on regular consumable products, and it also acts as an aggregator of consumer power. In effect, the infomediary takes over control of the client relationship, inserting itself squarely between the consumer and the vendor, with the ultimate goal of protecting the consumer. This is a decidedly customer-centric model.

But it’s in the basic concept of gathering information about a client, and using that to ensure a good match with a vendor, that one begins to speculate about Google’s ambitions to fill this role. In essence, at a rudimentary level, Google is already fulfilling some of the role of the infomediary. Certainly if you factor personalization into the equation, we move a big step closer to Singer and Hagel’s concept.

Disruptive Influences

There are a number of dramatically disruptive possibilities in the infomediary model:

  • It forces advertisers to surrender all pretense of control over the consumer. Persuasion becomes a non-issue. The touchpoint with the consumer is stripped of hype, ensuring that product information is authentic and factual.
  • It gives the aggregated consumer voice a level of power never seen before. Previously, the marketplace was vendor-centric: here’s what we offer, here’s how we offer it, here’s what we charge. The consumer’s choice was restricted to “take it or leave it.” Now, the balance shifts to the consumer: here’s what we want, here’s how we want it, here’s what we want to pay. Provide it or we’ll find someone else who can.
  • By gaining control of the customer relationship, it forces companies to focus on one of two other core processes: product innovation and commercialization, or infrastructure management (excelling in producing and distributing a product).

Something’s Rotten in the State of Advertising

There are a number of other seismic shifts in the landscape that come out of the infomediary model, but “Net Worth” weighs in at over 300 pages, and I have a bare 700 to 800 words for this column. The sum of it all is that the infomediary model, or some variation of it, dramatically changes the rules of the marketing game. A terribly inefficient marketplace has evolved in the past century, with some very wobbly power structures. The communication disconnect is almost laughable in its dysfunction. Advertisers spend more and more, hoping to penetrate a barricade set up by increasingly militant consumers. It’s literally a war, with strategies to match. The only hint of concession to the increasing power of the consumer has been search, and that has been done reluctantly. Remember Einstein’s definition of insanity? “Doing the same thing over and over again and expecting different results.”

If you look at the characteristics of an infomediary laid out by Singer and Hagel, Google has many of them in place already, and certainly has the resources to assemble the rest. The one piece that’s missing, and this is the critical one, is a purely customer-centric approach. For all Google’s focus on the user experience, its advertising models are still primarily driven by advertisers, not consumers. But for the model to work, consumers have to have complete trust in the infomediary and be willing to share their personal information. As we’ve seen with the initial pushback to personalization, there’s still a healthy degree of suspicion on the part of users that Google will use personal information for its own benefit, not the user’s.

4000 Ads a Day and Counting

First published October 11, 2007 in Mediapost’s Search Insider

It’s not easy being a consumer. Current estimates indicate that the average urban dweller is exposed to between 3,000 and 5,000 advertising messages every day. That means, settling on the middle number, that every waking hour (sleep seems to be our only reprieve, and I hear they’re working on that) you’re presented with an ad every 14.4 seconds. That’s every 14.4 seconds, every minute of every day you’re alive. The frequency of this advertising barrage has doubled in the past 30 years.
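As a quick sanity check on that arithmetic (the 16-hour waking day is my own assumption), a couple of lines of Python get us to the 14.4-second figure:

```python
# Back-of-the-envelope check: ads per waking day vs. seconds between ads.
# Assumes a 16-hour waking day; the 3,000-5,000 range is the estimate cited above.
low, high = 3000, 5000
ads_per_day = (low + high) // 2      # settle on the middle number: 4,000
waking_seconds = 16 * 60 * 60        # 57,600 seconds awake per day

seconds_per_ad = waking_seconds / ads_per_day
print(seconds_per_ad)                # 14.4
```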

“Are We There Yet?”

So, let’s imagine that your 5-year-old child interrupted you every 14.4 seconds with “Moooommmm…” or “Daaaaddd…”. If we use my patience limits as a baseline here, that means you’d last about 1.3 minutes before you went ballistic. The difference, of course, is that we’re genetically hardwired to pay attention to our children, much as we sometimes might try not to. We’ve been conditioned to ignore advertising.

But what happens when we really want to buy something? Suddenly, we’re looking for information, and we spend a lot of time doing so. At least, that’s true for some purchases. Take a computer, for instance. It’s not unusual to spend 10 to 15 hours researching a computer purchase, from the minute you decide you need one to the minute you tear open the box in your home. That’s not including the many hours needed to get your “plug and play” box actually playing after plugging.

The Cost of Consumer Research

Of course, we generally don’t put a cost on our time, but let’s say an hour of your time is worth about $40 (an average rate for someone making $75,000 per year). That means that $1,000 box of electronics cost you an additional $600, just in time spent to pick the right box.
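For what it’s worth, here’s the back-of-the-envelope math behind that $600 figure. The roughly 1,875 working hours a year is my own assumption to get from $75,000 to $40 an hour:

```python
# Rough cost of consumer research time, using the assumptions in the text.
hourly_rate = 75000 / 1875           # ~$40/hour on a $75,000 salary
research_hours = 15                  # upper end of the 10-15 hour estimate

time_cost = research_hours * hourly_rate
print(time_cost)                     # 600.0 -> that $1,000 box really cost ~$1,600
```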

The Internet is not making this any easier. Yes, as consumers, we’re armed with more information sources, but we spend a lot of time sorting out sense from nonsense. The explosion of information sources, both good and bad, means we’re spending more time thinking about what we should buy. A study by ScanAlert found that across many ecommerce categories, the average time to buy has increased by almost 79% in the past two years. Now, this was just the duration from first visit to purchase in the actual online store. It doesn’t include any consumer research before visiting the store. But I think we’re safe to assume that there would be a corresponding increase in the amount of online consumer “tire kicking.”

It’s No Picnic for Advertisers Either

Before you feel too sorry for yourself, let me tell you, it’s not easy being an advertiser, either. How do we get past the filters? How do we stand out from the other 3,999 messages you’ll hear today?

To recycle some research I did for a previous column (because research is a terrible thing to waste), the Ontario Tourism Board ran newspaper ads in Toronto targeting people looking to vacation in the province. The ad cost (at posted rate card rates) about $54,000. Even with an exceptional response rate, that ad might sneak through the filters of 1,700 or so people and actually catch their attention. This works out to an average cost of about $32 per introduction, or, to put it another way, $32 to tear a hole through that advertising barricade you’ve been building.
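The division behind that $32 figure, using the ad cost and reader estimate from the example above:

```python
# Cost per reader actually reached, using the Ontario Tourism figures.
ad_cost = 54000            # posted rate-card cost of the newspaper ad
attentive_readers = 1700   # exceptional-case estimate of people who noticed it

cost_per_introduction = ad_cost / attentive_readers
print(round(cost_per_introduction, 2))   # about $32 per introduction
```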

Got a Minute? I’ll Make it Worth Your While

So, if advertisers are willing to pay to get your attention, why not cut out the middle man and pay you directly? Why should the Toronto Star get all that money, when you’re the person the advertiser wants to talk to? What if every one of those 4,000 advertisers who are going to try to get your attention today (Consuummmerrr…Consummmerrr!) paid you a dollar to listen to what they have to say? You’d do okay financially, to the tune of about $1.46 million a year. Of course, your brain would explode after the first hour.
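And the quick math behind that $1.46 million: one dollar from each of those 4,000 advertisers, every day of the year:

```python
# Hypothetical payout if every advertiser paid you $1 a day for your attention.
advertisers_per_day = 4000
dollars_per_ad = 1

annual_income = advertisers_per_day * dollars_per_ad * 365
print(annual_income)                 # 1,460,000 -> about $1.46 million a year
```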

The concept is not as far-fetched as it seems. In fact, in 1999 John Hagel III and Marc Singer, both principals with McKinsey and Company, wrote a book called “Net Worth” that explored this very premise (along with a number of others) as a potential online business model. The book provided a detailed business plan for a new concept: the infomediary. Some of the details have been overtaken by events in the eight years since publication, but the basic premise still addresses a significant disconnect in today’s advertising marketplace. Next week, I’ll lay out the foundation of infomediaries and look at how some of our favorite search players seem to be inching their way towards Hagel and Singer’s proposal.

We now return you to your regular commercial onslaught.

On Your Search Menu Tonight

First published October 4, 2007 in Mediapost’s Search Insider

This week Yahoo unveiled a new feature. It doesn’t really change the search game that much in terms of competitive functionality. If anything, it’s another case of Yahoo catching up with the competition. But it may have dramatic implications from a user’s point of view. To illustrate that point further I’d like to share a couple of stories with you.

The feature is called Search Assist. You type your query in, and Yahoo provides a list under the query box with a number of possible ways you could complete the query. This follows in the footsteps of Google’s search suggestions in its toolbar. Currently, Google doesn’t offer this functionality within the standard Google query box, at least in North America. Ask also offers this feature.

Because Yahoo is late to the game, the company had the opportunity to up the functionality a little bit. For example, the suggestions that come from Yahoo can include the word you’re typing anywhere in the suggested query phrase. Google uses straight stemming, so the word you’re typing is always at the beginning of the suggested phrases. Yahoo also seems to be pulling from a larger inventory of suggested phrases. The few test queries I did brought back substantially more suggestions than did Google.
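The difference between the two matching styles is easy to sketch. This is illustrative only: the suggestion inventory and the matching logic below are my own toy versions, not either engine’s actual implementation:

```python
# Toy illustration of the two suggestion styles described above.
# "Prefix" mimics the stem-at-the-start behavior; "anywhere" mimics
# matching the typed word wherever it appears in the phrase.
inventory = [
    "cheap flights",
    "flights to new york",
    "flight status",
    "last minute flights",
]

def prefix_suggestions(typed, phrases):
    # suggest only phrases that begin with what the user has typed
    return [p for p in phrases if p.startswith(typed)]

def anywhere_suggestions(typed, phrases):
    # suggest any phrase containing the typed text, wherever it appears
    return [p for p in phrases if typed in p]

print(prefix_suggestions("flight", inventory))    # ['flights to new york', 'flight status']
print(anywhere_suggestions("flight", inventory))  # all four phrases match
```

The second style naturally surfaces more candidates from the same inventory, which is consistent with the larger suggestion lists I saw in my test queries.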

It’s not so much the functionality of this feature that intrigues me; it’s how it could affect the way we search. I personally have found that I come to rely on this feature in the Google toolbar more and more. Rather than structuring a complete query in my mind, I type the first few letters of the root word in and see what Google offers me. It leads me to select query phrases that I probably never would have thought of myself.

Some time ago I wrote that contrary to popular belief, we’ve actually become quite adept at paring our queries down to the essential words. It’s not that we don’t know how to launch an advanced query; it’s that most times, we don’t need to. This becomes even truer with search suggestions. All we have to do is think of one word, and the search engine will serve us a menu of potential queries. It reduces the effort required from the searcher, but let me tell you a story about how this might impact a company’s reputation online.

I Wouldn’t Recommend That Choice

Some time ago I got a voicemail from an equity firm. The woman who left a message was brash, a little abrasive and left a rather cryptic message, insisting that I had to phone her right back. Now, since I’m in the search game, getting calls from venture capitalists and investment bankers is nothing really new. But I’d never quite heard this tone from one of these prospecting calls before. So, I did as I usually do in these cases and decided to do a little more research on the search engines to determine whether I was actually going to return this call or not. I did my quick 30-second reputation check.

Normally, I would just type in the name of the firm and see what came up in the top 10 results. Usually, if there’s strong negative content out there, it’s worth paying attention to and it tends to collect enough search equity to break the top 10. This time, I didn’t even have to get as far as the results page. The minute I started typing the company name into my Google toolbar, the suggestions Google was providing me told the entire story: “company” scam, “company” fraud and “company” lawsuits. Of the top eight suggestions, over half of them were negative in nature. Not great odds for success. Needless to say, I never returned the call.

If these search suggestions are going to significantly alter our search patterns, we should be aware of what’s coming up in those suggestions for our branded terms. Type your company name into Yahoo or Google’s toolbar and see what variations are being served to you. Some of them may not be that appetizing.

Would You Prefer Szechuan?

My belief is that users are increasingly going to use this to structure their queries. It moves search one step closer to becoming a true discovery engine. One of the overwhelming characteristics of search user behavior is that we’re basically lazy. We want to expend a minimal amount of effort, but in return, we expect a significant degree of relevance. Search suggestions allow us to enter a minimum of keystrokes, and the search engine obliges us with a full menu of options.

This brings me to my other story. Earlier this year we did some eye-tracking research on how Chinese citizens interact with the search engines Baidu and Google China. After we released the preliminary results of the study, I had a chance to talk to a Google engineer who worked on the search engine. In China, Google does provide real-time search suggestions right from the query box. The company found that it’s significantly more work to type a query in Mandarin than it is in most Western languages. Using a keyboard for input in China is, at best, a compromise. So Google found that because of the amount of work required to enter a query, the average query length was quite short in China, giving a substantially reduced degree of relevancy. In fact, many Chinese users would type in the bare minimum required and then would scroll to the bottom of the page, where Google showed other suggested queries. Then, the user would just click on one of these links. Hardly the efficient searching behavior Google was shooting for. After introducing real-time search suggestions for the query box, Google found the average length of query increased dramatically and supposedly, so did the level of user satisfaction.

Search query suggestions are just one additional way we’ll see our search behavior change significantly over the next year or two. Little changes, like a list of suggested queries or the inclusion of more types of content in our results pages, will have some profound effects. And when search is the ubiquitous online activity it is, it doesn’t take a very big rock to create some significant and far-reaching ripples.

In Search of B2B Landmarks

First published September 27, 2007 in Mediapost’s Search Insider

This week (actually, right about the time you’ll be reading this column) I’ll be talking to the American Business Media Publishers Summit in Chicago about online opportunities, from a user’s perspective. As I was getting ready for the address, I realized there’s a substantial piece of the B to B market that’s missing online. I call it a market enabler.

Looking for Landmarks

Think of our typical progression when we begin researching something online. If it’s new territory, the first thing we need to do is find a landmark, and then navigate out from it. This is true both online and in the real world. Think of Google as everybody’s favorite landmark. It’s the starting point of nearly all our online navigation, because we know we can always get back to it if we’re lost. In fact, it becomes the vehicle of our online navigation in almost all cases. The only time we deviate from it is if we have enough familiarity with a certain section of the online landscape that we can find other online landmarks without it. For example, if I’m planning a trip somewhere, I usually don’t start at Google. I either start at one of the travel tools I have bookmarked (Farecompare.com, Kayak, Sidestep) or at my favorite travel community, Tripadvisor.com. I’ve been down this path before, so I’ve memorized other familiar landmarks. Otherwise, I always start at Google.

But there are some things we look for in our landmarks. We want them to be recognizable. We want them to be authoritative. We want them to be comprehensive. And usually, we want them to be relatively agnostic. We don’t want to be pushed in any particular direction. We want to choose our own paths. We want a neutral marketplace that allows us to compile our own consideration set, not have it built for us.

Making Life Easier

It also helps if our landmarks incorporate some strong navigational and comparison functionality. One of the best things about the travel sites and tools I’ve mentioned is their sophisticated search and filtering capabilities. They beat Google at this particular game. They’re a more useful landmark to navigate from within. And increasingly, they’re incorporating authentic community dialogue and reviews with the search functionality. I can search, sort and qualify, all in one place. They make the difficult job of planning a trip easier. They’re market enablers, because they allow us to compare alternatives more effectively. If we look at the two best examples of market enablers, eBay and Amazon, they share all of the above characteristics.

So, let’s return to the B to B marketplace. In our B to B survey, we found that almost everyone starts with Google, because most of the time when we research B to B purchases, we’re starting in unfamiliar territory. We have no landmarks. And while we usually end up going fairly quickly to vendor sites, the survey found a strong desire to find an unbiased landmark as the market’s middle ground. Yet, no enablers have strongly established themselves in this position. There is no eBay or Amazon, or even a TripAdvisor, of B to B. There are vertical engines, including Business.com, Knowledgestorm, KellySearch, ThomasNet and others, but none have dominated the landscape to this point.

Sorting through the Haystack

In a recent B to B panel I moderated, consultant Karen Breen Vogel mentioned that these vertical properties do restrict the scope of the search, so rather than looking for a needle in a haystack, you’re looking for a needle in a needlestack. While this is true, it can still be a pretty painful process if you’re looking for the right needle. The problem is that the B to B marketplace is vast and fragmented. Also, there are no obvious affiliate or revenue opportunities, as there are in the travel business. There isn’t an obvious money trail to follow in the B to B world to make enabling the marketplace a potentially lucrative proposition. Most of the players have morphed over from being directory publishers in the offline world, and are still following the paid listing model. Unfortunately, this doesn’t lend itself to the neutral marketplace favored by researching buyers.

There are few purchase processes that are more difficult or taxing than a complicated B to B one. Sorting out potential vendors can be a long, tedious and frustrating process. First of all, there’s no emotional investment. This isn’t planning a vacation. This is your job. Secondly, the risk level is extremely high. Screw up, and your job may evaporate. While the potential to make money may be obscured by the challenges, the buyer’s need is painfully obvious. And I can’t help thinking, if eBay could do it, given the immense diversity of its marketplace, there must be a way.

How Should I Compare Thee to Google?

First published September 20, 2007 in Mediapost’s Search Insider

There is a substantial amount of online speculation being generated around where Facebook is going, and whether it can beat Google. John Battelle is currently drafting a list of questions, including those two, to run past Facebook co-founder Mark Zuckerberg at the Web 2.0 Summit.

At first glance, asking if Facebook can beat Google is a bit like asking if a penguin could beat an aardvark. Beat it at what? What’s the contest? Or, perhaps more appropriately, it’s like asking whether your neighborhood can beat your table saw. Talk about comparing apples and oranges — and at least those are both fruit. Facebook is a community and Google is a tool. But the question may not be as far-fetched as it seems, because undoubtedly, as each grows and explores new monetization opportunities, more common ground will emerge between the two.

The Next “Google” Is….

To be honest, I don’t quite understand this compulsion to compare every new online business model to Google.  It’s a bit like comparing every business in your city to a successful grocery store, or a gas station.  Businesses are unique — and this is true whether you’re looking online or on Main Street.  They have different revenue engines, different objectives, different customers, and different ways to connect with those customers.  I suppose you could compare bottom-line revenues, this usually being considered the lowest common denominator with most businesses, but I’m not sure what the point is in that.  What are you trying to prove?  The success of the company?  If the supermarket makes 150 times as much as a coffee shop, does this mean the supermarket is 150 times more successful?

Facebook: A Sense of Place

Nevertheless, let’s return to the question of whether Facebook will supplant the Google juggernaut.  Let me spend a few minutes looking at the inherent differences between the Facebook model and the Google model, at least as far as they sit today.  Facebook is an online environment, a community, and as such it’s a totally different animal than Google.  The nature of the interaction with users is completely different; the intent of the site is completely different.  Facebook creates an online space, and search is only incidentally used to navigate that space.  True, as the space becomes larger and richer, search will become more important as a core functionality within Facebook. Communities need to be functional (something that Facebook seems to get better than any of its competitors). They need infrastructure, and because searching is fundamental functionality no matter where you are online, the same will be true in the Facebook community.

Google: The Right Tool

And it’s that core functionality that has allowed Google to grow and prosper while the many predecessors to Facebook have emerged, flourished briefly and died on the vine, including Google’s own Orkut.  Google is, right now, still the Swiss army knife of the Web.  When it comes to online functionality, and in particular, finding things online, Google is the undisputed champion.  I’m currently mulling over the concept of how we navigate online and physical spaces and the fact that, while we need spatial cognitive maps to navigate our hometown, we don’t need them to navigate the Web. There is no static physical 3-dimensional space that we have to memorize routes through.  Online landmarks occupy no physical location. Rather, we have a conceptual space, and we use search to navigate based on informational proximity, rather than physical proximity (thanks to Nico Brooks for planting this virulent little “thought weed” in my rather overgrown mental garden). Google has been tremendously successful because it’s the knee-jerk choice for millions of consumers to navigate the Web, looking for stuff to buy.

Twains On A Collision Course…

So, that’s a very quick view of how the two properties diverge. But let’s look at how they share similarities. For all Google’s success as a tool, it longs to be more than that. The introduction of iGoogle, which will be driven by Google’s moves into personalization, will make it more of your own online, conceptual space, encroaching on Facebook territory. And Google wants your iGoogle portal to be the place you organize the ever-increasing functionality of the semantic Web. That objective puts it on a head-on collision course with Facebook. Both are encouraging an open platform development ecosystem where developers can plug new functionality into their infrastructure.  This last note is somewhat ironic, because philosophically, Microsoft has always wanted to be the one to create the development infrastructure of the new Web. Looks like another case where the big M was left sputtering at the starting line.

Facebook, in turn, is looking to be the place where you define yourself as an individual in the new online landscape. It wants to be your home in the emerging online “cloud.” Facebook’s exponential growth is nothing short of amazing. Other than LinkedIn, I have never received a significant number of invitations from any social network. But in the past two months, I’ve received more friend invitations from Facebook than I have linking requests from LinkedIn. And these are primarily people in my age group, so they’re hopelessly old and far removed from anything resembling a “cutting edge.”

My 14-year-old daughter is aghast at the notion that I even have a Facebook account. It’s akin to me tagging along with her and her friends on a visit to the mall. Our general manager, a grandma (although a very funky grandma), is hopelessly addicted to Facebook. Obviously, there’s more here than your usual “flash in the pan” social network. As Facebook incorporates more online functionality for the individual, and Google looks to create a sense of personalized place for that same individual, expect the two to go head to head.

I’ve always thought that the importance of “favorite” places online has been somewhat disregarded. We are creatures of habit, and unless we’re looking for something out of the ordinary, we’ll probably keep treading down the same online paths over and over. That’s why every new start-up is at an immediate disadvantage, unless it can provide something sufficiently remarkable and differentiated from what previously existed. Google did this, and it appears that Facebook is on the same path. And in that way, these two do beg comparison.

Personalization Catches the User’s Eye

First published September 13, 2007 in Mediapost’s Search Insider

Last week, I looked at the impact the inclusion of graphics on the search results page might have on user behavior, based on our most recent eye tracking report. This week, we look at the impact that personalization might bring.

One of the biggest hurdles is that personalization, as currently implemented by Google, is a pretty tentative representation of what personalization will become. It only impacts a few listings on a few searches, and the signals driving personalization are limited at this point. Personalization is currently a test bed that Google is working on, but Sep Kamvar and his team have the full weight of Google behind them, so expect some significant advances in a hurry. In fact, my suspicion is that there’s a lot being held in reserve by Google, waiting for user sensitivity around the privacy issue to lessen a bit. We didn’t really expect to see the current flavor of personalization alter user behavior that much, because it’s not really making that much of a difference to the relevancy of the results for most users.

But if we look forward a year or so, it’s safe to assume that personalization will become a more powerful influencer of user behavior. So, for our test, we manually pushed the envelope of personalization a bit. We divided the study into two separate sessions around one task (an unrestricted opportunity to find out more about the iPhone) and used the click data from the first session to personalize the results for the search experience in the second session. We used past sites visited to help us, first, determine what the intent of the user might be (research, looking for news, looking to buy) and, second, tailor the personalized results to provide the natural next step in their online research. We showed these results in organic positions 3, 4 and 5 on the page, leaving base Google results in the top two organic spots so we could compare.
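To make the mechanics concrete, the hand re-ranking we did could be sketched roughly like this. This is an illustrative sketch only, not the study’s actual tooling: the site-to-intent mapping, function names and the assumption that candidate results come pre-tagged with an intent are all hypothetical.

```python
# Sketch of intent-based personalization: infer a user's likely intent from
# session-one click data, then promote matching results into organic
# positions 3-5, leaving the top two baseline results untouched.
# Domains, intent labels and tagging scheme are hypothetical examples.

from collections import Counter

# Hypothetical mapping of previously visited domains to an inferred intent
SITE_INTENT = {
    "engadget.com": "research",
    "news.google.com": "news",
    "store.apple.com": "buy",
}

def infer_intent(visited_sites):
    """Pick the most common intent signal from the sites visited in session one."""
    counts = Counter(SITE_INTENT.get(site, "research") for site in visited_sites)
    return counts.most_common(1)[0][0]

def personalize(baseline_results, candidates, visited_sites):
    """Keep baseline results in positions 1-2, fill positions 3-5 with
    candidates tagged with the inferred intent, then append the rest."""
    intent = infer_intent(visited_sites)
    matches = [r for r in candidates if r["intent"] == intent][:3]
    return baseline_results[:2] + matches + baseline_results[2:]
```

The key design point mirrors the test setup: the baseline engine ranking is never discarded, so the personalized slots can be compared directly against untouched results on the same page.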

Stronger Scent

The results were quite interesting. On the nonpersonalized results pages, taken straight from Google (in signed-out mode), organic listings 3, 4 and 5 accounted for 18.91% of the time spent looking at the page, 20.57% of the eye fixations, and 15% of the clicks. The majority of the activity was much further up the page, in the typical top-heavy Golden Triangle configuration.

But on our personalized results pages, participants spent 40.4% of their time on these three results, 40.95% of the fixations landed on them, and they captured a full 55.56% of the clicks. Obviously, from the user’s point of view, we did a successful job of connecting intent and content with these listings, providing greater relevance and stronger information scent. We manually accomplished exactly what Google wants to do with its personalization algorithm.
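A quick arithmetic check on the lift, using the percentages quoted above, shows attention to those three slots roughly doubling and their share of clicks nearly quadrupling:

```python
# Relative lift of personalized vs. nonpersonalized listings in organic
# positions 3-5 (percentages taken from the study figures quoted above).

metrics = {
    "time on listings": (18.91, 40.40),
    "eye fixations":    (20.57, 40.95),
    "clicks":           (15.00, 55.56),
}

for name, (baseline, personalized) in metrics.items():
    lift = personalized / baseline
    print(f"{name}: {baseline}% -> {personalized}% ({lift:.1f}x)")
```

Clicks show the largest lift (about 3.7x), suggesting that when relevance improves, users don’t just look longer, they act.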

Scanning Heading South

Something else happened that was quite interesting. Last week I shared how the inclusion of a graphic changed our “F” shaped scanning patterns into more of an “E” shape, with the middle arm of the “E” aligned with the graphic. We scan that first, and then scan above and below. When we created our personalized test results pages, we (being unaware of this behavioral variation at the time) coincidentally included a universal graphic result in the number 2 organic position, as this is what we were finding on Google.

When we combined users’ tendency to start scanning at the graphic, then look above and below to decide where to scan next, with the greater relevance and information scent of the personalized results, we saw a very significant relocation of scanning activity, moving down from the top of the Golden Triangle.

One of the things that distinguished Google in our previous eye tracking comparisons with Yahoo and Microsoft was its success in keeping the majority of scanning activity high on the page, whether those top results were organic or sponsored.

Top-of-page relevance has been a religion at Google. More aggressive presentation of sponsored ads (Yahoo) or lower quality and relevance thresholds for those ads (Microsoft) meant that on these engines (at least as of early 2006) users scanned deeper and were more likely to move past the top of the page in their quest for the most relevant results. Google always kept scan activity high and to the left.

But ironically, as Google experiments with improving the organic results set, both through the inclusion of universal results and more personalization, its biggest challenge may be in making sure sponsored results aren’t left in the dust. Top-of-page scanning is ideal user behavior that also happens to offer a big win for advertisers. As results pages are increasingly in flux, it will be important to ensure that scanning doesn’t move too far from the upper left corner, at least as long as we still have a linear, one-dimensional, top-to-bottom list of results.

An Image Can Change Everything for the Searcher

First published September 6, 2007 in Mediapost’s Search Insider

For the many of you who responded to last week’s column about Nona Yolanda, I just want to take a few seconds to let you know that she passed away the evening of Sept. 3, having fought for five days more than doctors gave her. She was in the presence of her family right until the end. We printed off your comments and well wishes and posted them on the hospital door. It was somewhat surprising but very gratifying for my wife’s family to know that Nona’s story touched hearts around the world. Thank you. – G.H.

The world of the search results page is changing quickly, which means that we’re going to have to apply new rules for user behavior. This week, I’d like to look at some results from a recent eye tracking study we did about how we interact with search when graphic elements start to appear on the page. We also tested for the inclusion of personalized results. There’s a lot of ground to cover, so I’ll start off with Universal Search this week, and cover personalization and the future of search next week.

Warning: Graphic Depictions Ahead

You can’t get much more basic than the search results page we’ve all grown to know in the past decade. The 10 blue organic links and, more recently, the top and side sponsored ads have defined the interface. It’s been all text, ordered in a linear, top-to-bottom format. The only sliver of real estate that saw any variation was the vertical results, sandwiched between top sponsored and top organic. So it was little wonder that we saw a consistent scan pattern emerge, which we labeled the Golden Triangle. It was created by an “F”-shaped scan pattern, where we scanned down the left-hand side, looking for information scent, and then scanned across when we found it.

But that design paradigm is in the middle of change. The first and most significant of these will be the inclusion of different types of results on the same page, blended into the main results set. Google’s label is Universal Search, Ask’s is 3D Search and Yahoo’s is Omni Search. Whatever you choose to call it, it defines a whole new ball game for the user.

Starting at the Top…

In the classic pattern, users began at the top left corner because there was no real reason not to. We saw the page, our eyes swung up to the top left and then we started our “F”-shaped scans from there. Therefore, our interactions with the page were very top-heavy. The variable in this was the relevance of the top sponsored ads. If the engine maintained relevance by only showing top sponsored ads when they were highly relevant to the query (i.e., Google), we scanned them. If the engine bowed to the pressures of monetization and showed the ads even when they might not be highly relevant to the query (we saw more examples of this on Yahoo and Microsoft), users tended to move down quickly and the Golden Triangle stretched much further down the page. It was a mild form of search banner blindness. The one thing that remained consistent was the upper left starting point.

But things change, at least for now, when you start mixing results into the equation. If the number 2 or 3 organic return is a blended one, with a thumbnail graphic, we assume the different presentation must mean the result is unique in some way. The graphic proves to be a powerful attractor for the eye, especially if it’s a relevant graphic. It’s information scent that can be immediately “grokked” (to use Jakob Nielsen’s parlance), and this often drew the eye quickly down, making this the new entry point for scanning. This reduces the top-to-bottom bias (or eliminates it entirely), making the blended result the first one scanned. Also, we saw a much more deliberate scanning of this listing.

Give Me an F, Give Me an E…

Another common behavior we identified is the creation of a consideration set: choosing three or four listings to scan before either choosing the most relevant one or selecting another consideration set. In the pre-blended results set, this consideration set was usually the top three or four results. But in blended results, the image result is usually the first one scanned, followed by the results immediately above and below it. Rather than an “F”-shaped scan, this changes the pattern to an “E”-shaped scan, with the middle arm of the “E” focused on the graphic result.

The implications are interesting to consider. The engines and marketers have come to accept the top to bottom behavior as one of the few dominant behavioral characteristics, and it has given us a foundation on which to build our positioning strategy. But if the inclusion of a graphic result suddenly moves the scanning starting point, we have to consider our best user interception opportunities on a case-by-case basis.

Next week, I’ll look at further findings.