What Makes a Rumor so Easy to Spread?

We all want to be part of the next viral word of mouth success story. We want our product to be at the epicenter of a “buzz” storm that spreads like wildfire across the internet. But the conditions that lead to true word of mouth viral outbreaks dictate that these outbreaks are few and far between.

Jumping the Weak Ties

First of all, let’s look at what’s required for word of mouth to spread. The trick to a true viral outbreak is finding something that will jump the “weak ties”. Mark Granovetter identified weak ties in social networks back in the 1970s. Basically, social networks are not uniform and even. They are “clumpy”. They have dense clusters, made up of people who tend to spend a lot of time together: family members, co-workers, close friends, members of the same church or organization. Word spreads quickly throughout these clusters because of the frequency of communication and the nature of the relationships between the members of the cluster. There’s an inherent trust there and people talk to each other a lot. This makes the social ties within the cluster strong ties. Given this, once one person in the cluster knows something, it’s a pretty good bet that everyone in the cluster will know it in a relatively short period of time.

But the challenge comes in getting a message to make the jump from cluster to cluster. How does word of mouth spread from one group of co-workers to a church group in another town? For that, we’re relying on social ties that are much weaker. We’re counting on an acquaintance to pass word along. And for that to happen, some conditions have to be met first.
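To make the mechanics concrete, here’s a minimal toy simulation (a sketch in Python; the cluster sizes and transmission probabilities are illustrative numbers I’ve made up, not figures from Granovetter’s work). Within each cluster, strong ties pass the message readily; only a handful of weak ties bridge the clusters, and the message rarely crosses them:

```python
import random

random.seed(42)

# Toy network: 5 clusters of 10 people. Strong ties connect everyone
# within a cluster; one weak tie links each cluster to the next.
# All probabilities are illustrative assumptions.
CLUSTERS, SIZE = 5, 10
STRONG_P = 0.8   # chance the message crosses a strong tie per round
WEAK_P = 0.05    # chance it crosses a weak tie per round

weak_ties = [((c, 0), ((c + 1) % CLUSTERS, 0)) for c in range(CLUSTERS)]

informed = {(0, 0)}  # one person hears the news
for round_no in range(1, 11):
    newly = set()
    for c, i in informed:
        # Strong ties: tell your cluster-mates.
        for j in range(SIZE):
            if (c, j) not in informed and random.random() < STRONG_P:
                newly.add((c, j))
        # Weak ties: occasionally tell a distant acquaintance.
        for a, b in weak_ties:
            if (c, i) == a and b not in informed and random.random() < WEAK_P:
                newly.add(b)
    informed |= newly
    counts = [sum(1 for (cc, _) in informed if cc == k) for k in range(CLUSTERS)]
    print(f"round {round_no}: informed per cluster = {counts}")
```

Run it and the seed cluster saturates almost immediately, while the other clusters stay dark until a rare weak-tie crossing lets the message in.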

Lowering the Drawbridge

In 1993, Jonathan Frenzen and Kent Nakamoto followed up on Granovetter’s earlier work (“Structure, Cooperation, and the Flow of Market Information,” Journal of Consumer Research, December 1993) to identify the conditions that had to be met before a message would jump across a weak tie. In their words,

“Instead of an array of islands interconnected by a network of fixed bridges, the islands are interconnected by a web of ‘drawbridges’ that are metaphorically raised and lowered by transmitters depending on the moral hazards imposed by the information transmitted by word of mouth.”

In their study, they looked at a number of factors, including the nature of the message itself and the concept of moral hazard: the risk that passing the message along could cost the messenger something. For the test, they used news about a sale. In one social network, they watched how fast word would spread about a 20% off sale. In the other social network, they used a sale where the discounts were a more remarkable 50 to 70% off. To introduce a moral hazard variable, they also altered the availability of sale items. In one case, quantities were very limited, and in the other, quantities were practically unlimited.

What they found was that amongst strong ties, word of the sales spread fairly quickly in most instances. But when the message wasn’t that remarkable (the 20% off example), word of mouth had difficulty jumping across weak ties. Likewise, when moral hazard was high (quantities were limited), the message tended to get stuck within a cluster and not be transmitted across the weak ties.
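One way to express their finding as a simple decision rule (the formula, weights and threshold here are my own illustrative formalization, not anything from the paper):

```python
def drawbridge_lowered(remarkability: float, moral_hazard: float,
                       threshold: float = 0.5) -> bool:
    """Will a sender pass a message across a weak tie?

    remarkability: how noteworthy the news is, 0..1
    moral_hazard: what the sender stands to lose by sharing, 0..1
    The multiplicative form and 0.5 threshold are illustrative assumptions.
    """
    return remarkability * (1.0 - moral_hazard) > threshold

print(drawbridge_lowered(0.2, 0.0))  # 20% off, unlimited stock  -> False
print(drawbridge_lowered(0.7, 0.0))  # 70% off, unlimited stock  -> True
print(drawbridge_lowered(0.7, 0.9))  # 70% off, nearly sold out  -> False
```

Either a bland message or a high personal cost keeps the drawbridge up; it takes remarkable news and nothing to lose before the message crosses.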

Mexican Vacation Sale

Let’s use an example to make this a little clearer. Let’s imagine an airline is having a seat sale to Mexico. In the first instance, it’s $50 off per seat, but it applies to every seat on the plane, on every flight. There is no limit on the inventory available. In the second instance, instead of $50 off per seat, the entire cost of a return flight to Mexico is just $50. That’s much more remarkable. And in the third instance, the flight again costs just $50 per person, but it’s limited to 10 seats on 2 flights, for one day only. Only 20 tickets are available at this price.

In the first instance, you would probably only pass along the information if someone happened to mention to you that they were thinking of going to Mexico. The information is not that noteworthy. The value of the information is not that great. There’s little chance that this would ever move beyond your “strong tie” cluster. It’s not something you’d go out of your way to mention to an acquaintance.

In the second instance, a $50 flight to Mexico is big news. And we’re socially predisposed to share remarkable stories. We believe it elevates our social status within our cluster. Everyone likes to be the first to tell someone about something remarkable. It’s part of human nature. So we’ll go out of our way to share this information. We don’t even wait for someone to raise the topic. This is noteworthy enough that it merits bringing up in any context. It’s worth interrupting normal conversations for. Word will spread far and wide, across strong ties and weak ties alike.

But in the third instance, even though the news is remarkable, we personally have something to lose by spreading the story. There are only 20 seats available, so if we tell too many people, we might not get a chance to take advantage of the sale ourselves. Chances are, we won’t tell anyone until our seats are booked. And even then, we’ll probably only tell those we’re closest to. After we look after ourselves, our next inclination is to make sure those that are closest to us won’t miss out on the opportunity. Again, because of this “moral hazard” there’s little likelihood that word will spread beyond our strong ties.

Rumor Has It

So, now that we know the limitations of message transmission within a network, depending on both the structure of the network and the cooperativeness of its members, let’s look at one type of information that always seems to spread like wildfire through any social network, regardless of the circumstances: the juicy rumor.

Rumors have no moral hazard, at least not for us. There are no limitations of quantity. We don’t stand to lose out (at least not in a material sense; we’ll leave the ethical questions aside for now) by spreading a rumor. So that restriction is gone.
Secondly, the likelihood that we’ll spread a rumor depends on the nature of the rumor itself. First of all, does it involve people we know? Personal rumors about people we know are almost irresistible to spread. They beg to be passed on, again, because they put us in the position of “being in the know” and having access to information not available to everyone. Second to the personal rumor is the celebrity rumor. These are a little less “spreadable” because we’re not in the same privileged informant position. Also, although we know the people involved in the public sense, we don’t really know them in the personal sense. When it comes to rumors, the closer to home they hit, the better.

Finally, we have the “juiciness” of the rumor. How sensational is the story? How remarkable is it? A rumor about your neighbor’s washing machine breaking down isn’t going to go too far. But an affair leading to a marriage breakup, a firing or a significant health issue, unfortunately, are stories made to spread. Because we’re human and inherently competitive, we love to spread bad news about others.

Fine-Tuning the Rumor

And this brings us to an almost universal behavior seen whenever rumors spread. We like to fine-tune the story to make it a little more interesting. Rumors are subjected to “flattening”, “sharpening” and “assimilation”, just to make the story a little stickier. Flattening is where we get rid of the details that get in the way of what we feel are the noteworthy aspects of the story. In some cases, the discarded details are contradictory and in some cases they’re just extraneous. Regardless, if they’re not pertinent to the main story we want to get across, or if they dilute the story, we toss them out.

Sharpening takes the remaining facts and enhances them a little (or a lot) to bring the story and its value as news into sharper focus.

Finally, assimilation is where we take the story and make sure it fits within our shared mental framework. We alter the story so it fits with our (and our recipients’) shared beliefs and views of the world. That’s one reason why rumors are so “spreadable”. We alter the story to ensure it’s interesting, and the further the story goes, the more irresistible it becomes.

The ultimate examples of this are urban legends, where once there may have been a kernel of truth, but the stories have become so flattened, sharpened and assimilated through countless retellings that now, as intriguing as they are, they are basically manufactured fictions.

Negative Word of Mouth

We’ve always known that negative word of mouth spreads faster than positive. When we take what we now know about social networking and apply it, we begin to see why. For instance, negative word of mouth and rumors have a lot in common. There’s generally no moral hazard in play. In fact, the reverse is true. You’re actually helping people out by sharing this information, and you get a little retribution and revenge yourself. It’s a twisted win-win!

And for some reason, humans are much more likely to pass along negative information than positive. Again, it comes down to our concept of social hierarchy and building ourselves up through the misfortunes of others. Admirable it’s not, but predictable? You bet!

And finally, the better known a company or brand is, the more likely negative word of mouth will spread. If there’s bad buzz circling about Nike, McDonald’s or Starbucks, we’ll all take part because all those brands are part of our shared frame of reference. We’ve already assimilated them.

By the way, remember that negative word of mouth will also be subjected to flattening and sharpening, as well as assimilation. So the negative buzz will get worse with each retelling.

Obviously, if you’re counting on word of mouth as your marketing channel, you have to take the reasons why word of mouth spreads into account. It can be made to work for you, if the conditions are right, but remember, this is not a process you have much control over. You can plant the seeds, but then human nature will take its course.

Why We Have to Keep Doing Market Research

Following up on my previous post about the problems with most market research, here’s a plea for why we should keep trying to get it right.

At the recent London SMX show, I presented on the Ad Testing and Research panel. As at other times I’ve done this panel (this is probably the 3rd or 4th time), I heard about skillful practitioners employing various A/B and multivariate testing methodologies. Ad testing is a definite must-do, but before my presentation, which came at the end of the session, I took a few minutes to provide an alternative point of view.

I asked the small crowd how many of them were doing regular campaign management, checking click-through rates and conversion rates and optimizing their campaigns based on what they saw. Almost everyone put up their hand. Then I asked how many did A/B testing. This time, a little more than half put up their hands. Next, I asked how many were doing multivariate testing. This time, about one third of the crowd raised their hands. Finally, I asked how many had actually sat, watched a customer interact with their site and then asked them questions. We dropped down to about 10% of the group, and most of these had done so in a fairly structured usability test, with limited or no opportunity for interaction with the user.

Now, campaign optimization, A/B and multivariate testing are all best practices and should be done religiously. But I urged the marketers in the room to step back from their data-heavy, spreadsheet-bound view of the world and pick up a book on cognitive psychology, social science or simple usability. Better yet, spend some time just watching how real people interact with your site. Try, for a moment, to look at the world through your customer’s eyes.

The problem with the typical quantitative methods is that they’re all lagging indicators. You don’t get an idea of what’s happening until after customers have interacted with your ads and your site. You generally get a good sense of what they did, but it’s very difficult to determine why they did it. To do that, you have to dig beyond the numbers. You have to try to get into that subconscious mind. And that’s not easy. Typical market research methodologies won’t cut it. To get some idea of what’s required, read Clotaire Rapaille’s The Culture Code, or Gerald Zaltman’s How Customers Think. Do some digging into the work of Herbert Simon. It takes a deft combination of psychiatric know-how and detective skills. But here’s why it’s worth it.

For the past century, we’ve largely refined our marketing practices through trial and error. Pretty much everything has been done by seeing what worked, changing something, and seeing if it worked better. That was okay as long as the channels we used to reach customers were relatively limited. With limited channels and a certain amount of control inherent in the process, we could do this. But those days are over.

Now, rather than a few controlled channels that run pretty much straight from the advertiser to the customer, we have an explosion of information that turns the typical buying process into a Gordian knot of unbelievable complexity. We can’t control the variables anymore. When there are so many channels, so many interdependent factors and so much of it affects customers below the conscious level, trial and error is just not an effective testing methodology anymore. In fact, it was never an effective methodology, for all the reasons I stated in my previous post. It’s just the best we had.

Let me use another example. The way we did marketing was pretty much like jumping in a car, randomly making decisions whether to turn right or left, keeping track of our success rate in getting nearer to our destination, and using this method to eventually pick the right route. This method might eventually work okay in a town of a few thousand people, but try doing that to navigate through New York or Los Angeles. We don’t have enough time in our lives to leave this much to chance. A map (or better yet, a GPS) is a much better alternative.

But we’re just starting to put that map together. And it won’t come from market research. Market research, at least in its current incarnation, is hopelessly flawed. It will come from diving deep into the workings of our brains. And once we begin putting the map together, it will allow us to begin to measure leading indicators. It will keep us from the trap of relying on self-reported rationalizations and dig deeper into all the activity that’s happening below the conscious surface of our minds. That’s where the answers will be found.

Here’s another reason. Our brains are not only complex, but they’re also highly adaptive. As we take up new mental activities and abandon previous ones, new routes are established through the neurons, old ones become overgrown and, eventually, unused connections are cut away. This is “pruning” and “neuroplasticity”. It’s probably why you’re much better at using a search engine now than at doing the geometry you learned in grade 9. We’ve worn new paths in our brains.

This is also true of how we buy. The way we buy now bears little resemblance to the way we bought in 1975. As time goes on and we rely on the Internet more and more, the paths that we used to use for our consumer decisions will become overgrown and we’ll clear new ones. This will happen not only at the conscious level, but also at the subconscious level. We will literally rewire how our brains decide what to buy. So the body of market research that has been laboriously gathered over the past several decades will become obsolete. And to rediscover those insights through trial and error will be a long and potentially impossible task.

So, a word of advice. Step back from the spreadsheet now and again. Take a break from looking at “what” and start to explore “why”. Dig into things like the triune brain, selective perception, bounded rationality, working memory and some other basic cognitive concepts. It will be time well spent.

Satisficing, Bounded Rationality and Search

Herbert Simon came up with some pretty interesting concepts, among them satisficing, bounded rationality and chunking.

Before Simon, we commonly believed that humans came to optimal decisions in a rational manner, based on the information provided. We took all the data that was accessible, weighed pros and cons and used our cortexes to come to the best possible outcome.

Simon, in effect, said that this placed too high a load on us cognitively. In many cases, there was simply too much information available, so we had to make choices based more on heuristics, cutting the available information down to a more manageable level. He called this “satisficing”, a blend of satisfy and suffice. And Simon started saying this half a century ago. Imagine how it translates to the present time.

We have never had more information available. At the click of a mouse, we can access huge amounts of information. There’s simply no way we can process it all and come to rational decisions. And this brings us to another concept, that of bounded rationality. We’re more rational about some decisions than others. It depends on a number of factors, including risk, emotional enjoyment and brand self-identification. Think of it as a chart with three axes.

One axis is risk. We put more rational thought into decisions that expose us to greater risk. In consumer decisions, risk usually equates with cost, but in business-to-business decisions, it can also include professional reputation (related to, but not always directly tied to, cost). We’re going to put a lot more thought into the purchase of a car or a house than into that of a candy bar.

Another axis is emotional enjoyment. There’s a risk/reward mechanism in most decisions, and if the reward is one that is particularly appealing to us, we tend to be swayed more by emotion than by rational deliberation. If we’re planning a holiday, we may make some irrational decisions (or at least, they might appear that way to an outsider) based on a sense of rewarding ourselves. We’ll treat ourselves to a few nights in a 5-star resort when the 3-star resort would offer greater overall value.

The final axis, and one that is usually buried somewhere in our subconscious, is how we use brands or products to define who we are. Now, no one usually admits to being defined by a brand, but we all are, to some extent. This touches on the cult-like devotees that some brands develop. Harley-Davidson, Rolex, BMW, Apple and Nike all come to mind. Is a Rolex a rational choice? No. But a Rolex defines, to some extent, the person wearing it. It says something about the person.

Bounded rationality says that there are boundaries to the amount of rational thought that we can, and want to, put into decisions. The amount we decide is sufficient depends on the three factors just discussed.

Now, the use of search tends to plot somewhere along this three-dimensional chart. If risk is high and brand identification is low (buying software for the company), there is a high likelihood that search will be used extensively. If risk is low and brand identification is high (e.g. buying a soft drink or a beer), there is almost no likelihood that search will be used. In this case, the two factors usually work inversely to each other. Emotional enjoyment isn’t as directly tied to search activity. We will do as much (or as little) searching for a purchase that will give us great enjoyment as for one that won’t.
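As a rough sketch of that three-dimensional chart (the weights and the scale are my own illustrative assumptions; nothing here comes from Simon or from any study):

```python
def search_likelihood(risk: float, brand_identification: float,
                      emotional_enjoyment: float) -> float:
    """Crude illustration of the three-axis model; all inputs 0..1.

    Risk pushes search use up, brand identification pushes it down,
    and emotional enjoyment (per the discussion above) has little
    direct effect. The weights are assumptions, not empirical values.
    """
    score = 0.5 + 0.5 * risk - 0.5 * brand_identification
    return max(0.0, min(1.0, score))

print(search_likelihood(0.9, 0.1, 0.3))  # company software: ~0.9, search-heavy
print(search_likelihood(0.1, 0.9, 0.6))  # soft drink/beer:  ~0.1, little search
```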

It’s interesting to watch how these factors impact search intent and behavior. Satisficing leads to a classic sort of search behavior, what I call category search, where we use fairly generic, non-branded queries that broadly define the category we’re looking at. Let me give you an example. Tomorrow my wife and I are headed to Europe for a week. We’re going to spend a few days in Portugal, then fly up to London for SMX (where I’ll be talking more about these ideas in some of my sessions). We’re flying into Lisbon, then renting a car and driving down to the Algarve region. I have GPS navigation software for my PDA, but only for North America. I wanted to get European software, but because of the limited use it would get, I didn’t want to spend too much. The developer of my North American software didn’t make an EU version, so I turned to search to find a suitable candidate. Here there was no brand identification, some degree of risk (if it didn’t work in Europe, I’d be lost, literally) and no emotional enjoyment factor. My first search was what I call a “landmark” search. I wanted to find some sites to plot the landscape. Sites that listed and compared my alternatives would be ideal matches for my intent.

I searched for “pocket pc gps software”, knowing that “gps software” would be too broad. I soon found the sites were pretty much all about North American versions. Few of them offered or reviewed European versions. I spent several minutes on the TomTom site trying to order a European version from Canada but to no avail. Apparently TomTom doesn’t believe people in North America would ever choose to drive in Europe.

In classic “satisficing” behavior, I wanted to cut my research workload by setting some basic eligibility criteria: it had to work on a Pocket PC, it had to be reasonably priced (under $100 preferably) and it had to offer coverage for all of Europe (we’re going back to France and Italy next year and I’d like to use it then as well). My next search was for “pocket pc gps software europe”. This gave me what I needed to begin to create my satisficed list. Ideally, we want 3 or 4 alternatives to compare. I did find the TomTom choice, but I was already frustrated with this, and the price was over my threshold. Destinator also offered an alternative that seemed to be a little better match. It matched all the criteria, appeared to have some decent reviews and was available on eBay for about $75, including shipping. Sold! Was it the optimal choice? Maybe not. If I had spent hours more doing research, I could have probably found a better package or a better value. But it was good enough.
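In code, satisficing is just “take the first option that clears the bar” instead of ranking everything; here’s a minimal sketch with hypothetical candidates shaped like my GPS hunt:

```python
# Hypothetical candidates, loosely modeled on my GPS software search.
candidates = [
    {"name": "Package A", "pocket_pc": True,  "price": 130, "coverage": "Europe"},
    {"name": "Package B", "pocket_pc": True,  "price": 75,  "coverage": "Europe"},
    {"name": "Package C", "pocket_pc": False, "price": 60,  "coverage": "Europe"},
    {"name": "Package D", "pocket_pc": True,  "price": 50,  "coverage": "UK"},
]

def acceptable(c):
    # My eligibility criteria: runs on a Pocket PC, under $100, all of Europe.
    return c["pocket_pc"] and c["price"] < 100 and c["coverage"] == "Europe"

# The optimizer examines every option and picks the best acceptable one.
optimal = min((c for c in candidates if acceptable(c)), key=lambda c: c["price"])

# The satisficer stops at the first option that is good enough.
good_enough = next(c for c in candidates if acceptable(c))

print("optimizer: ", optimal["name"])
print("satisficer:", good_enough["name"])
```

Both land on Package B here, but the satisficer would have stopped at B even if a cheaper, better package sat further down the list. That’s the trade: hours of research saved in exchange for “good enough”.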

Chunking has to do with cognitive channel capacity: the amount of information we can store in our heads, accessible for use. Again, we tend to maximize the available slots by creating chunks of information, grouping similar types of information together.
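The textbook illustration is the phone number; a trivial sketch of grouping items into chunks (the chunk size is just the conventional illustration, my choice here):

```python
def chunk(seq, size=3):
    """Group a flat sequence into fixed-size chunks (size is illustrative)."""
    return [seq[i:i + size] for i in range(0, len(seq), size)]

digits = "4165551234"
print(list(digits))    # ten separate items to hold in working memory
print(chunk(digits))   # ['416', '555', '123', '4']: four chunks instead of ten
```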

When you look at Simon’s work, even though the majority of it far preceded search engines, it sheds a lot of light on how we use search in a number of cases. If you want to tap into user intent, I would recommend finding out more about bounded rationality and satisficing. Chunking is probably worth a look as well.

Interfaces Are Only Skin Deep

Steve Haar had a great comment on my post about Ask breaking through in the search market share battle:

I agree about the interface being much better with Ask. But, what about the search results? I took a look at them compared to the others and, between sites for adsense and dead links, the results were so poor I was embarrassed for them. I wonder how many of the searches were from repeat users vs once and gone?

I think Steve points out a fundamental concept that we might tend to forget from time to time. The best interface on a piece of garbage just gives you nice looking garbage. Now, I’m not saying that Ask is garbage. But I’ve seen some cases (and heard anecdotally of many more) of issues with spam, and I do think they have some work to do. Ultimately, it’s the quality of the results that will determine market share. In fact, a nice interface on top of poor results will kill Ask even quicker, as it draws more trial users (as Steve alludes to) and generates more negative word of mouth. This is exactly what Ask doesn’t want to happen.

I’m the first to speak up about the importance of the user experience, but it’s important to remember that the interface is only one small part of that. Ultimately, there needs to be enough under the hood to meet and exceed the user’s expectations. Steve (and others) are indicating that Ask might be falling short in the relevancy horsepower department.

Search and the Digital CPG Shelf

First published October 25, 2007 in Mediapost’s Search Insider

Last April, James Lamberti from comScore, Randy Peterson from Procter and Gamble and I (representing SEMPO) grabbed a relatively quiet corner at the New York Hilton to talk about a potential research project. Here was our wish list:

–    A study that tied online activity to offline purchase behavior
–    A study that identified the impact of search in a category not typically identified with search marketing
–    A study that attempted to quantify the leveraged impact of search with brand advocates

Search and CPG: Are You Kidding?

As you can see, these were pretty lofty targets to shoot for. Choosing the product category was done at that table. What is the last category you would think of as generating significant search activity? Consumer packaged goods. After all, aren’t these either replenishment purchases, where we keep buying the same brand over and over, or a non-considered purchase, where we’re not really concerned with doing much research? Why would we need to turn to a search engine to figure out which toothpaste to buy, or which would be the right chips for Sunday’s ball game? We reasoned that CPG had the “Sinatra” Factor going for it: If search can make it here, it can make it anywhere.

To be honest, we really didn’t know what to expect, but comScore, together with a lot of help from Yahoo and Procter and Gamble, managed to come up with a workable study design. SEMPO jumped on board as a co-sponsor and we put the study out in the field. This week, with numbers crunched and charts drawn, the results were released in a study labeled The Digital Shelf. After several months of holding our collective breaths, we were about to see if people had already locked CPG brands into their brains, eliminating the need to search for product information.

Apparently not.

People went online for CPG information — in fact, to a significantly higher degree than even our most optimistic predictions.  Over a 3-month period, comScore recorded over 150 million visits to CPG websites in four categories: Food Products, Personal Care Products, Baby Products and Household Products. Those are numbers no marketer should ignore. But even more significantly, search drove significant portions of that traffic, from 23% of all visits in Household Products to 60% in Baby Products.

It’s not just automotive or travel that drive search traffic. We search for recipes, how to get the stains out of carpets, the most eco-friendly disposable diaper and yes, even the nutritional information for potato chips. We search, a lot!

And our searching sets us apart as a consumer segment. Searchers tend to be more interested in product information, in comparing against competitors, and in what they need to make a purchase decision. Non-searchers are more interested in special offers and coupons.

Searchers spend more, about 20% more, in all the categories in the study. In fact, in the Baby Care category alone, people searching for information and eventually purchasing could result in almost $12 billion in sales.

Search = Opportunity

But perhaps the most compelling finding was this: People search because they’re comparing alternatives. This means they’re not locked into a brand. They could very well be your competitor’s customer right now. Non-searchers are more likely to go directly to a site because they do have a brand preference. They’re just looking for a bargain on that brand. The study found that 36% of searchers had recently switched their brand, compared to 29% of non-searchers. And, interestingly, searchers are less motivated by price. Only 27% of searchers switched because of price, compared to 38% of non-searchers.

So, the study delivered on our original wish list, and then some. It showed that search is a significant influencing factor in the most unlikely product category of all, the stuff on your pantry shelf or under the sink in your bathroom. In fact, I have yet to see a study done on any product category where search didn’t blow the doors off the competition in its effectiveness in connecting with customers. So perhaps the biggest question left unanswered by the study is this: Why are all those branding dollars still going everywhere but search?

The Wisdom of Consumer Crowds?

Following up on the theme of the rewiring of our brains, is the internet making us smarter consumers as well? There certainly seems to be evidence pointing in that direction.

A study by ScanAlert found that the average online shopper in 2005 took 19 hours between first visiting a store and completing a transaction. In 2007, that jumped almost 79% to 34 hours. We’re taking longer to make up our minds. And we’re also doing our homework. Deloitte’s Consumer Products group recently released research saying 62 percent of consumers read consumer-written product reviews on the Internet, and of those, more than 8 in 10 are directly influenced by the reviews.

In The Wisdom of Crowds, James Surowiecki argues that large groups, thinking independently with access to a diversity of information, will, under the right conditions, make a better collective decision than the smartest individual in the group. Isn’t the Internet wiring this wisdom into more and more purchases? When we access these online reviews, we’re in fact coming to collective decisions about a product, built on hundreds or thousands of individual experiences. As the network expands, we benefit from the diversity of all those opinions and probably get a much more accurate picture of the quality of a product than we ever could from vendor-supplied information alone. The marketplace votes for its choice, and the best product should theoretically emerge as the winner.
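Surowiecki’s claim has a simple statistical core: averaging many independent, noisy estimates cancels out individual error. A quick sketch with made-up numbers:

```python
import random

random.seed(1)

TRUE_QUALITY = 7.2   # the product's "real" quality, assumed for the demo
REVIEWERS = 1000

# Each reviewer perceives the truth through personal noise.
ratings = [TRUE_QUALITY + random.gauss(0, 2.0) for _ in range(REVIEWERS)]

crowd_estimate = sum(ratings) / len(ratings)
avg_individual_error = sum(abs(r - TRUE_QUALITY) for r in ratings) / len(ratings)

print(f"typical individual error: {avg_individual_error:.2f}")
print(f"crowd estimate: {crowd_estimate:.2f} (truth is {TRUE_QUALITY})")
```

The crowd’s average lands within a few hundredths of the truth while a typical individual is off by well over a point. The catch, picked up in the next paragraph, is the independence assumption: cascades happen precisely when reviewers copy each other instead of reporting their own impressions.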

Of course, nothing works perfectly all of the time. As Surowiecki points out, communication can be an inexact and imperfect process, and information cascades based on faulty inputs can spread faster than ever online. But it’s also true that if a cascade leads to rapid adoption of an inferior product, we’ll discover we’ve been “had” faster, and this news can also spread quicker. The connectedness of the online world makes for a much faster dissemination of experience-based information than ever before, ensuring that the self-correcting mechanisms of the marketplace kick into gear faster.

There’s a pass-along effect happening here as well. For social networking buffs, you’ve probably heard of Granovetter’s “weak ties”. Social networks are made up of dense, highly connected clusters, i.e. families, close friends, co-workers. The social ties within these clusters are strong ties. But spanning the clusters are “weak ties” between more distant acquaintances. The ability for word to spread depends on these weak ties. What the internet does is exponentially increase the number of weak ties, wiring thousands of clusters together into much bigger networks than were ever possible before. This allows word of mouth to travel not only in the physical world but also in the virtual one. I looked at a fascinating follow-up study to Granovetter’s in which Jonathan Frenzen and Kent Nakamoto also looked at the value of the information, and the self-interest of the individual and their “strong ties” within a cluster, as factors in how quickly word of mouth passes through a network.

Deloitte’s study graphically illustrates the weak tie/strong tie effect. 7 out of 10 of the consumers who read reviews share them with friends, family or colleagues, moving the information that comes through the weak ties of the internet into each cluster, where it spreads rapidly thanks to the efficiency of strong ties. This effect pumps up the power of word of mouth by several orders of magnitude.

But are we also becoming more socially aware in our shopping? The research by Deloitte also seems to indicate this. 4 out of 10 consumers said they were swayed by “better for you” ingredients or components, eco-friendly usage and sourcing, and eco-friendly production or packaging. The internet wires us into communities, so it’s not surprising that we become more sensitive to the collective health of those communities in the process.

What all this leads to is a better informed consumer, one who’s not reliant on marketing messaging coming from the manufacturer or the retailer. And that should make us all smarter.

On Your Search Menu Tonight

First published October 4, 2007 in Mediapost’s Search Insider

This week Yahoo unveiled a new feature. It doesn’t really change the search game that much in terms of competitive functionality. If anything, it’s another case of Yahoo catching up with the competition. But it may have dramatic implications from a user’s point of view. To illustrate that point further I’d like to share a couple of stories with you.

The feature is called Search Assist. You type your query in, and Yahoo provides a list under the query box with a number of possible ways you could complete the query. This follows in the footsteps of Google’s search suggestions in its toolbar. Currently, Google doesn’t offer this functionality within the standard Google query box, at least in North America. Ask also offers this feature.

Because Yahoo is late to the game, the company had the opportunity to up the functionality a little bit. For example, the suggestions that come from Yahoo can include the word you’re typing anywhere in the suggested query phrase. Google matches only from the beginning of what you’ve typed, so the word you’re typing is always at the start of the suggested phrases. Yahoo also seems to be pulling from a larger inventory of suggested phrases. The few test queries I did brought back substantially more suggestions than Google did.
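The difference between the two approaches is easy to see in code; here’s a minimal sketch of the two matching strategies over a hypothetical suggestion inventory:

```python
inventory = [
    "pocket pc gps software",
    "gps software for pocket pc",
    "best gps software europe",
    "gps satellite history",
]

def prefix_suggest(typed, phrases):
    # Prefix-only matching: the typed text must start the phrase.
    return [p for p in phrases if p.startswith(typed)]

def anywhere_suggest(typed, phrases):
    # Anywhere matching: the typed text can appear mid-phrase.
    return [p for p in phrases if typed in p]

print(prefix_suggest("gps", inventory))    # 2 of the 4 phrases
print(anywhere_suggest("gps", inventory))  # all 4 phrases
```

Matching anywhere in the phrase surfaces candidates a prefix-only approach never shows, which is consistent with Yahoo’s suggestions feeling more plentiful.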

It’s not so much the functionality of this feature that intrigues me; it’s how it could affect the way we search. I personally have found that I come to rely on this feature in the Google toolbar more and more. Rather than structuring a complete query in my mind, I type the first few letters of the root word in and see what Google offers me. It leads me to select query phrases that I probably never would have thought of myself.

Some time ago I wrote that contrary to popular belief, we’ve actually become quite adept at paring our queries down to the essential words. It’s not that we don’t know how to launch an advanced query; it’s that most times, we don’t need to. This becomes even truer with search suggestions. All we have to do is think of one word, and the search engine will serve us a menu of potential queries. It reduces the effort required from the searcher, but let me tell you a story about how this might impact a company’s reputation online.

I Wouldn’t Recommend That Choice

Some time ago I got a voicemail from an equity firm. The woman who left a message was brash, a little abrasive and left a rather cryptic message, insisting that I had to phone her right back. Now, since I’m in the search game, getting calls from venture capitalists and investment bankers is nothing really new. But I’d never quite heard this tone from one of these prospecting calls before. So, I did as I usually do in these cases and decided to do a little more research on the search engines to determine whether I was actually going to return this call or not. I did my quick 30-second reputation check.

Normally, I would just type in the name of the firm and see what came up in the top 10 results. Usually, if there’s strong negative content out there, it’s worth paying attention to and it tends to collect enough search equity to break the top 10. This time, I didn’t even have to get as far as the results page. The minute I started typing the company name into my Google toolbar, the suggestions Google was providing me told the entire story: “company” scam, “company” fraud and “company” lawsuits. Of the top eight suggestions, over half of them were negative in nature. Not great odds for success. Needless to say, I never returned the call.
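You could automate a crude version of that 30-second reputation check; here’s a sketch using a hypothetical company and my own shortlist of red-flag terms:

```python
RED_FLAGS = {"scam", "fraud", "lawsuit", "lawsuits", "complaints", "ripoff"}

def reputation_scan(suggestions):
    """Return the negative suggestions and their share of the list."""
    flagged = [s for s in suggestions if any(f in s.lower() for f in RED_FLAGS)]
    return flagged, len(flagged) / len(suggestions)

# Hypothetical autocomplete suggestions for a made-up firm.
suggestions = [
    "acme capital scam", "acme capital reviews", "acme capital fraud",
    "acme capital careers", "acme capital lawsuits", "acme capital login",
    "acme capital complaints", "acme capital phone number",
]
flagged, share = reputation_scan(suggestions)
print(f"{len(flagged)} of {len(suggestions)} suggestions are negative ({share:.0%})")
```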

If these search suggestions are going to significantly alter our search patterns, we should be aware of what’s coming up in those suggestions for our branded terms. Type your company name into Yahoo or Google’s toolbar and see what variations are being served to you. Some of them may not be that appetizing.

Would You Prefer Szechuan?

My belief is that users are increasingly going to use this to structure their queries. It moves search one step closer to becoming a true discovery engine. One of the overwhelming characteristics of search user behavior is that we’re basically lazy. We want to expend a minimal amount of effort, but in return, we expect a significant degree of relevance. Search suggestions allow us to enter a minimum of keystrokes, and the search engine obliges us with a full menu of options.

This brings me to my other story. Earlier this year we did some eye-tracking research on how Chinese citizens interact with the search engines Baidu and Google China. After we released the preliminary results of the study, I had a chance to talk to a Google engineer who worked on the search engine. In China, Google does provide real-time search suggestions right from the query box. The company found that it’s significantly more work to type a query in Mandarin than it is in most Western languages. Using a keyboard for input in China is, at best, a compromise. So Google found that because of the amount of work required to enter a query, the average query length was quite short in China, giving a substantially reduced degree of relevancy. In fact, many Chinese users would type in the bare minimum required and then would scroll to the bottom of the page, where Google showed other suggested queries. Then, the user would just click on one of these links. Hardly the efficient searching behavior Google was shooting for. After introducing real-time search suggestions for the query box, Google found the average length of query increased dramatically and supposedly, so did the level of user satisfaction.

Search query suggestions are just one additional way we’ll see our search behavior change significantly over the next year or two. Little changes, like a list of suggested queries or the inclusion of more types of content in our results pages, will have some profound effects. And when search is the ubiquitous online activity it is, it doesn’t take a very big rock to create some significant and far-reaching ripples.

Personalization Catches the User’s Eye

First published September 13, 2007 in Mediapost’s Search Insider

Last week, I looked at the impact the inclusion of graphics on the search results page might have on user behavior, based on our most recent eye tracking report. This week, we look at the impact that personalization might bring.

One of the biggest hurdles is that personalization, as currently implemented by Google, is a pretty tentative representation of what personalization will become. It only impacts a few listings on a few searches, and the signals driving personalization are limited at this point. Personalization is currently a test bed that Google is working on, but Sep Kamvar and his team have the full weight of Google behind them, so expect some significant advances in a hurry. In fact, my suspicion is that there’s a lot being held in reserve by Google, waiting for user sensitivity around the privacy issue to lessen a bit. We didn’t really expect to see the current flavor of personalization alter user behavior that much, because it’s not really making that much of a difference on the relevancy of the results for most users.

But if we look forward a year or so, it’s safe to assume that personalization would become a more powerful influencer of user behavior. So, for our test, we manually pushed the envelope of personalization a bit. We divided up the study into two separate sessions around one task (an unrestricted opportunity to find out more about the iPhone) and used the click data from the first session to help us personalize the data for the search experience in the second session. We used past sites visited to help us first of all determine what the intent of the user might be (research, looking for news, looking to buy) and secondly to tailor the personalized results to provide the natural next step in their online research. We showed these results in organic positions 3, 4 and 5 on the page, leaving base Google results in the top two organic spots so we could compare.
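We did our personalization by hand, but the underlying logic resembles a simple re-ranking pass. Here’s a sketch of the general idea, with hypothetical data; this is my illustration, not Google’s actual algorithm:

```python
# Domains the participant visited in session one (hypothetical).
past_visits = {"reviews.example.com", "news.example.com"}

# Candidate results: (domain, base relevance score from the engine).
results = [
    ("apple.com", 0.90),
    ("video.example.com", 0.85),
    ("reviews.example.com", 0.60),   # matches past behavior
    ("news.example.com", 0.55),      # matches past behavior
    ("forum.example.com", 0.50),
]

BOOST = 0.4  # illustrative weight for a behavioral match

personalized = sorted(
    results,
    key=lambda r: r[1] + (BOOST if r[0] in past_visits else 0.0),
    reverse=True,
)
for rank, (domain, _) in enumerate(personalized, start=1):
    print(rank, domain)
```

Results that match the participant’s demonstrated intent float upward; in our study we slotted such results into organic positions 3, 4 and 5 so we could compare them against Google’s unpersonalized top two.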

Stronger Scent

The results were quite interesting. On the nonpersonalized results pages, taken straight from Google (in signed-out mode), 18.91% of the time spent looking at the page was spent on these three results, 20.57% of the eye fixations happened there, and 15% of the clicks went to organic listings 3, 4 and 5. The majority of the activity was much further up the page, in the typical top-heavy Golden Triangle configuration.

But on our personalized results, participants spent 40.4% of their time on these three results, 40.95% of the fixations were on them, and they captured a full 55.56% of the clicks. Obviously, from the user’s point of view, we did a successful job of connecting intent and content with these listings, providing greater relevance and stronger information scent. We manually accomplished exactly what Google wants to do with the personalization algorithm.

Scanning Heading South

Something else happened that was quite interesting. Last week I shared how the inclusion of a graphic changed our “F” shaped scanning patterns into more of an “E” shape, with the middle arm of the “E” aligned with the graphic. We scan that first, and then scan above and below. When we created our personalized test results pages, we (being unaware of this behavioral variation at the time) coincidentally included a universal graphic result in the number 2 organic position, as this is what we were finding on Google.

When the users’ tendency to start scanning at the graphic, then look above and below it, was combined with the greater relevance and information scent of the personalized results, we saw a very significant relocation of scanning activity, moving down from the top of the Golden Triangle.

One of the things that distinguished Google in our previous eye tracking comparisons with Yahoo and Microsoft was its success in keeping the majority of scanning activity high on the page, whether those top results were organic or sponsored.

Top-of-page relevance has been a religion at Google. More aggressive presentation of sponsored ads (Yahoo) or lower quality and relevance thresholds for those ads (Microsoft) meant that on those engines (at least as of early 2006) users scanned deeper and were more likely to move past the top of the page in their quest for the most relevant results. Google always kept scan activity high and to the left.

But ironically, as Google experiments with improving the organic results set, both through the inclusion of universal results and through more personalization, its biggest challenge may be in making sure sponsored results aren’t left in the dust. Top-of-page scanning is ideal user behavior that also happens to offer a big win for advertisers. As results pages are increasingly in flux, it will be important to ensure that scanning doesn’t move too far from the upper left corner, at least as long as we still have a linear, one-dimensional, top-to-bottom list of results.

An Image Can Change Everything for the Searcher

First published September 6, 2007 in Mediapost’s Search Insider

For the many of you who responded to last week’s column about Nona Yolanda, I just want to take a few seconds to let you know that she passed away the evening of Sept. 3, having fought for five days more than doctors gave her. She was in the presence of her family right until the end. We printed off your comments and well wishes and posted them on the hospital door. It was somewhat surprising but very gratifying for my wife’s family to know that Nona’s story touched hearts around the world. Thank you. – G.H.

The world of the search results page is changing quickly, which means that we’re going to have to apply new rules for user behavior. This week, I’d like to look at some results from a recent eye tracking study we did about how we interact with search when graphic elements start to appear on the page. We also tested for the inclusion of personalized results. There’s a lot of ground to cover, so I’ll start off with Universal Search this week, and cover personalization and the future of search next week.

Warning: Graphic Depictions Ahead

You can’t get much more basic than the search results page we’ve all grown to know in the past decade. The 10 blue organic links and, more recently, the top and side sponsored ads have defined the interface. It’s been all text, ordered in a linear top-to-bottom format. The only sliver of real estate that saw any variation was the vertical results, sandwiched between top sponsored and top organic. So it was little wonder that we saw a consistent scan pattern emerge, which we labeled the Golden Triangle. It was created by an “F”-shaped scan pattern, where we scanned down the left-hand side, looking for information scent, and then scanned across when we found it.

But that design paradigm is in the middle of change. The first and most significant of these will be the inclusion of different types of results on the same page, blended into the main results set. Google’s label is Universal Search, Ask’s is 3D Search and Yahoo’s is Omni Search. Whatever you choose to call it, it defines a whole new ball game for the user.

Starting at the Top…

In the classic pattern, users began at the top left corner because there was no real reason not to. We saw the page, our eyes swung up to the top left and then we started our “F”-shaped scans from there. Therefore, our interactions with the page were very top-heavy. The variable in this was the relevance of the top sponsored ads. If the engine maintained relevance by showing top sponsored ads only when they were highly relevant to the query (i.e. Google), we scanned them. If the engine bowed to the pressures of monetization and showed the ads even when they might not be highly relevant to the query (we saw more examples of this on Yahoo and Microsoft), users tended to move down quickly and the Golden Triangle stretched much further down the page. It was a mild form of search banner blindness. The one thing that remained consistent was the upper left starting point.

But things change, at least for now, when you start mixing result types into the equation. If the number 2 or 3 organic return is a blended one, with a thumbnail graphic, we assume the different presentation must mean the result is unique in some way. The graphic proves to be a powerful attractor for the eye, especially if it’s a relevant graphic. It’s information scent that can be immediately “grokked” (to use Jakob Nielsen’s parlance), and this often drew the eye quickly down, making it the new entry point for scanning. This reduces the top to bottom bias (or totally eliminates it), making the blended result the first one scanned. Also, we saw a much more deliberate scanning of this listing.

Give Me an F, Give Me an E…

Another common behavior we identified is the creation of a consideration set: choosing three or four listings to scan before either clicking the most relevant one or selecting another consideration set. In the pre-blended results set, this consideration set was usually the top three or four results. But in blended results, it’s usually the image result that is scanned first, followed by the results immediately above and below it. Rather than an “F”-shaped scan, this changes the pattern to an “E”-shaped scan, with the middle arm of the “E” focused on the graphic result.

The implications are interesting to consider. The engines and marketers have come to accept the top to bottom behavior as one of the few dominant behavioral characteristics, and it has given us a foundation on which to build our positioning strategy. But if the inclusion of a graphic result suddenly moves the scanning starting point, we have to consider our best user interception opportunities on a case-by-case basis.

Next week, I’ll look at further findings.

Search Engine Results: 2010 – Interview with Danny Sullivan

Here’s another in the series of the Search:2010 transcripts, this one of my chat with Search Engine Land Editor Danny Sullivan:

Gord: The big question that I’m asking is how much change are we going to see on the search engine results page over the next three years. What impact are things like universal search and personalization and some of the other things we’re seeing come out going to have? How much of that is going to impact the actual interface the user is going to see? Maybe let’s just start there.

Danny: I love the whole series to begin with because then I thought, Gosh, I never really sat down and tried to plot out how I would do it, and I wish I had had the time to do that before we talked (laughs).  But it would be nice to have a contest or something for the people who are in the space to say I think this is the way we should do it or where it should go.
But the thing at the top of my head that I expect or I assume that we’re going to get is… I think they’re going to get a lot more intelligent at giving you more from a particular database when they know you’re doing a specific kind of search.  It’s not necessarily an interface change, but then again it is.  This is the thing I talked about when the London car bombing attempts happened, and I’m searching for “London Bombings”.  When you see a spike in certain words you ought to know that there’s a reason behind that spike.  It’s going to be news driven probably, so why are you giving me 10 search results? Why don’t you give me 10 news results?  And saying I’ve also got stuff from across the web, or I’ve got other things that are showing up in that regard.  And that hasn’t changed.  I’d like to see them get that.   I’d like to see them figure out some intelligent manner to maybe get to that point.  Part of what could come along with that too is that as we start displaying more vertical results, the search interface itself could change.  So I think the most dramatic change in how we present search results, really, has come off of local.  And people go “wow, these maps are really cool!” Well of course they’re really cool, they’re presenting information on a map, which makes sense when we’re talking about local information.  You want things displayed in that kind of manner.  It doesn’t make sense to take all web search results and put them on a map. You could do it, but it doesn’t communicate additional information for you if the information doesn’t need to be presented in a visual manner.  If you think about the other kinds of search that you tend to do, blog search for instance, it may be that there’s going to be a more chronological display. We saw them do that with news archive search, where they would do a search and they would tell you this happened within these years at this time.  Right now when I do a Google blog search, by default it shows me ‘most relevant’.  But sometimes I want to know what the most recent thing is, and what’s the most recent thing that’s also the most relevant thing, right? So perhaps when I do a search, a Google blog search, I can see something running down the left hand side that says “last hour”, and within the last hour you show me the most relevant things, then the last 4 hours, and then the last day.  And you could present it that way, almost sort of a timeline metaphor. I’m sure there are probably things you could do with shading and other stuff to go along with that.  Image search… Live has done some interesting things now where they’ve made it much less textual, and much more stuff that you’re hovering over, that you can interact with in that regard.  And I don’t know, it might be that with book search and those other kinds of things there’ll be other kinds of metaphors that come into place when you know you are going to present most of the information just from those sorts of resources.  With video search… I think we’ve already seen a lot of it: the thing with video search is just giving you the display and being able to play the videos directly, rather than having to leave the site, because it just doesn’t make sense to have to leave the site in that regard.

Gord: When I was talking to Marissa, she saw a lot more mash-ups with search functionality, and you talked about maps making sense with local search, but it’s almost like you take the search functionality and you layer that over different types of interfaces that make sense, given the type of information you’re interacting with.

Danny: Right.

Gord: One thing I talked about with a few different people is: how much functionality do you put in the hands of the user? How much needs to be transparent? How hard are we willing to work with a page of search results?

Danny: By default, not a lot. You know, if you’re just doing a general search, I don’t think that putting in a whole lot of functionality is going to help you. You could put a lot of options there but historically we haven’t seen people use those things, and I think that’s because they just want to do their searches. They want you to just naturally get the right kind of information that’s there, and a lot of the time, if they give you that direct answer, you don’t need to do a lot of manipulation.  It’s a different thing, I think, when you get into some very vertical, very task-oriented kinds of searches, where you’re saying, ‘I don’t just need the quick answer, I don’t just need to browse and see all the things that are out there, but actually, I’m trying to drill down on this subject in a particular way’.  And local tends to be a great example. ‘Now you’ve given me all the results that match the zip code, but really I would like to narrow it down to a neighborhood, so how can I do that?’  Or a shopping search.  ‘I have a lot of results but now I want to buy something, so now I need to know who has it in inventory. Now I really need to know who has it cheapest. And I need to know who’s the most trusted merchant.’ Then I think the searcher is going to be willing to do more work on the search and make use of more of the options that you give to them.

Gord: Like you say, if you’re putting users directly into an experience where they’re closer to the information they were looking for, there’s probably a greater likelihood that they’re willing to meet you halfway, by doing a little extra work to refine the results, if you give them tools that are appropriate to the types of results they’re seeing.  So if it’s shopping search, filtering by price, or by brand.  That’s common functionality on a shopping search engine, and maybe we’ll see that get into some of the other verticals. But I guess the big question is, in the next three years are the major engines going to gain enough confidence that they’ll be providing a deeper vertical experience as the default, rather than as an invisible tab or a visible tab?

Danny: I still tend to think that the way that they are going to give a deeper vertical experience is the invisible tab idea, which is, you know, that you are not going to be overtly asked to do it, it is just going to do it for you, and then give you options to get out of it, if it was the wrong choice. So, both Ask and Google, which are getting all the attention right now for universal search (you know, blended search, if you want a generic term for it that doesn’t favor one service over the other).  The other term is federated search and I’ve always hated that because it always felt like something that, you know, came out of the Star Trek Enterprise (laugh). No, I want Klingon search! (laugh) I think that in both of those cases you do the search and the default still is web.  And Ask will say, over here on the side we have some other results. Yes, universal search is inserting an item here or an item there but in most of the cases it still looks like web search, right? They still really feel like OneBoxes. I haven’t had a universal search happen to me yet where I’ve come along and thought ‘that really was something I couldn’t have got just from searching the web’, except when I’ve gotten a map.  That’s come in when they’ve shown the map, and that is that kind of dramatic change, and I think at some point they will get to that point, that kind of dramatic change, where you just search for “plumbers” and a zip code.  I’m so confident of it I’m just going to give you Google Local. I’m not just going to insert a map and give you 7 more web listings that are down there. I’m going to give you a whole bunch of listings and I’m going to change the whole interface on you, and if you’re going ‘well, this isn’t what I want’, then I’m going to be able to give you some options if you want to escape out of it.  I like what Ask does, in the sense that it’s easy to escape out of that thing because you just look off to the side and there’s web search over here, there’s other stuff over there.  I think it’s harder for Google to do that when they try to blend it all together. The difficulty remains as to whether people will actually notice that stuff off to the side, and make use of it.

Gord: That was actually something that Jakob Nielsen brought up. He said the linear scan down the page is such a dominant user behavior, one we've all gotten so used to, that engines like Ask can experiment with a two-dimensional layout, but will users be able to scan it efficiently?

Danny: I've been using this Boeing versus Airbus analogy when I'm trying to explain to people the differences between what Google is doing and what Ask is doing. Boeing is saying, 'We'll build small, fast, fuel-efficient jets,' and Airbus is saying, 'We'll build big, huge jets that move more people, so you'll be able to do fewer flights.' And when I look at blended search, Google's approach is: we've got to stay linear, we've got to keep it all in there, that's where people are expecting the stuff, so we're going to go that way. Ask's approach is: we're going to put it all over the page, and we've got this split, really nice interface. And I agree with them. And of course Walt Mossberg wrote that review where he said, 'oh, they're so much nicer, they look so much cleaner,' and that's great, except that he's a sophisticated person, I'm a sophisticated person, you're a sophisticated person; we search all the time. We look at that sort of stuff. A typical person might just ignore it; it might continue to be eye candy that they don't even notice. And that is the big, huge gamble going on between these two sorts of players. And then again, it might not be a gamble, because when you talk to Jim Lanzone, he says, 'My testing tells me this is what our people do.' Well, his people might be different from the Google people. Google has a lot more new people coming over who are like, 'I just want to do a search, show me some things, where are the text links? I'm done.' So I tend to look perhaps more kindly on what Google is doing than some people who measure it up against Ask, because I understand that Google deals with a lot more people than Ask does, and they have to be much more conservative. And I think what's going to happen is that those two approaches are going to move closer together. The advantage, of course, that Jim has over at Ask is that he doesn't have to put ads in that column, so he's got a whole column he can make use of, and it is useful; it is a nice sort of place to tuck things in. If you really want to talk about search interfaces, what will be really fun to envision is what happens when Ajax starts coming along and doing other things. Can I start putting the sponsored search results where they're hovering above other results? Are there other issues that come with that? There may be some confusion as to why I'm getting this and why I'm getting that. Can I pop up a map as I hover over a result? I could deliver you a standard set of search results, and I could also deliver local results on top of a particular kind of image. If I move my mouse along it, I could show you a preview of what you get in local, and you might go, 'Oh wow, there's a whole map there,' and want to jump off in that direction. It would be really fun to see that type of stuff come along, but I'm just not seeing anything come out of it yet. What we've typically had when people have played with the interface is these gee-whiz things like, 'we'll fly you through the results,' or 'we'll group them,' none of which was really something you needed, or that added to the choice of 'do I want to go vertical, do I not want to go vertical?'
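As a sketch of the Ajax hover-preview idea Danny is floating, the snippet below fetches and overlays a small map preview when the user mouses over a result. The /preview/local endpoint and the DOM structure are assumptions made up for illustration, not any engine's real interface.

```typescript
// A minimal sketch of an Ajax hover preview over a search result.
// The /preview/local endpoint and markup are hypothetical.
function attachMapPreview(result: HTMLElement, listingId: string): void {
  let preview: HTMLDivElement | null = null;

  result.addEventListener("mouseenter", async () => {
    // Fetch a small map snippet only once the user shows interest.
    const resp = await fetch(`/preview/local?listing=${encodeURIComponent(listingId)}`);
    const html = await resp.text();

    preview = document.createElement("div");
    preview.className = "map-preview"; // styled to hover above the results
    preview.innerHTML = html;
    result.appendChild(preview);
  });

  result.addEventListener("mouseleave", () => {
    preview?.remove();
    preview = null;
  });
}
```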

Gord: When we start talking about the search results page becoming a lot more dynamic and interactive, of course the big question is what that does for monetization of the page. One of the things that Jakob Nielsen talked about was banner blindness: do people start tuning out sections of the page? We talked a little about that. How do you make sure the advertising doesn't get lost on the page when there's just a lot more visual information to assimilate?

Danny: Well, I think a variety of things are going to start happening there. For example, Google doesn't do paid inclusion, right? But Google has partnerships with YouTube, and they have these channels, and they're going to be sharing revenue from those channels with other people. So when they start including that stuff, perhaps they're getting paid off of it. Nobody paid to put it in the index, but because Google is better able to promote its video channels, more people go over there, and Google makes money off of that as a destination. So in some ways they can afford to have their video results start becoming more relevant, because they don't have to worry that if you didn't click on the ad from the initial search result, they've lost you. In terms of how the other ads might go, I guess the concern might be: if the natural results are getting better and better, why would anyone click on the ads anyway? Maybe people will reassess the paid results, and some will come along and say that paid search results are a form of search database as well. So we're going to call them classifieds, or we're going to call them ads, and we're going to move them right into the linear display. There will be issues, because at least in the US you have the FTC guidelines that say you should really keep them segregated. So if you blend them in without highlighting them in some way, you might run into regulatory problems. But then again, those rules might start to change as search innovation changes, and we'll go from there. I don't know; the search engines might come up with other things. We're getting toolbars appearing on more and more of our applications. Google might start thinking, 'Well, let's put ads back onto that toolbar.' We used to have those sorts of things, and they never really seemed to catch on, but they might come back, and that might be another way some of the players, especially somebody like Google, make money beyond just putting the ad on the search results page.
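A minimal sketch of what moving ads into the linear display while keeping them clearly labeled might look like; the Listing type, the '[Sponsored]' label, and the interleaving rule are illustrative assumptions, not any engine's or regulator's actual specification.

```typescript
// A minimal sketch of interleaving sponsored results into the organic
// list while keeping them clearly labeled. Types are hypothetical.
interface Listing {
  title: string;
  url: string;
  sponsored: boolean;
}

function renderListing(l: Listing): string {
  // Keep paid results visually distinguished even when blended inline.
  const label = l.sponsored ? " [Sponsored]" : "";
  return `<li class="${l.sponsored ? "ad" : "organic"}">` +
         `<a href="${l.url}">${l.title}</a>${label}</li>`;
}

// Interleave one sponsored listing after every `gap` organic results.
function blend(organic: Listing[], ads: Listing[], gap = 3): Listing[] {
  const out: Listing[] = [];
  let adIdx = 0;
  organic.forEach((o, i) => {
    out.push(o);
    if ((i + 1) % gap === 0 && adIdx < ads.length) out.push(ads[adIdx++]);
  });
  return out;
}
```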

Gord: In the next three years, are we going to get to the point where search becomes less of a destination activity, the way it is now, and the functionality sits underneath more of Web 2.0, or the semantic web, or whatever you want to call it? It almost becomes a mash-up of functionality that underlies other types of sites. Are we going to stop going to a Google or a Yahoo as much to launch a distinct search as we do now?

Danny: You know, people have been saying that for at least three or four years now, especially with Microsoft: 'Oh, you're not even going to go to a search engine; you're going to do it from your desktop.' Take Vista, which I have yet to actually use. I've got the laptop and I'm about to start playing with it! Apparently search is supposed to be even more integrated than it was in XP. But I still tend to think, you know what? We do stuff in our browsers. I know widgets are growing, and I know there's more stuff being drawn straight onto your computer as well, but we still tend to do stuff in the browser. I still see search as something where I'm going to go to a search engine and do the search, with the exception of toolbars. I think we're going to do a lot more searching through toolbars. Toolbars are everywhere; it's really rare for me to start a search where I'm not doing it from the toolbar. I just have a toolbar that sits up there, and I don't need to be at the search engine itself. But I still want the results displayed in my browser, because most of the stuff I'm going to have to deal with is going to be in my browser as well. So it doesn't really help to be able to search from Microsoft Word, right? Because I don't want all these sites in a little window within Word; I'm probably going to have to read what they say, so I'm probably going to have to go there. I think that changes, though, if I have a media player. Then it makes much more sense, and you can already do this with some media players, where you can do searches and have the results flow back in. iTunes is a classic example: iTunes is basically a music search engine. Sure, it's limited to the music and the podcasts that are within iTunes, but it doesn't really make any sense for me to go to the Apple website. Although, interestingly, here's an example where Apple is just a terrible failure. They've got all this stuff out there, stuff you might be interested in even if you don't use their software, and there's just no way to get to it on the web. The last time I looked, you really had to do the searches in iTunes. So they're missing out on being a destination for the people who say, 'I'm not going to use iTunes,' or 'I don't have iTunes,' or 'I'm on a different version.' I don't know if you've downloaded it recently, but it takes forever and it's just a pain.

Gord: I think that covers the main questions I wanted to get through. Is there anything else about search in the next three years that you wanted to comment on?

Danny: You know, it's hard, because if you'd asked me that three years ago, would I have told you, 'watch for the growth of verticals and watch for the growth of blended search'? (laughs) I've been thinking really hard, because I'm like, 'Gosh, now what am I going to talk about? They're doing both of those things.' I think personalized search is going to continue to get stronger. I do think that Google is onto something with their personalized search results. I don't think they're going to put you in an Amazon situation, where you keep getting recommended stuff you're no longer interested in. I think people are misunderstanding how sophisticated it can be. And I think the next big trend, ironically, given what I just said to you, is that search is going to start jumping into devices. Everything is going to have a search box, but it will be appropriate to the device. My iPod itself will have a search capability within it. And the iPhone, to some degree, is maybe an early look at how that's happening already. I'll be able to search, access, and get information appropriate to that device, within it. Windows Media Center, when I first got it in 2005, I said, this is amazing, because it's basically got TV search built into it. I do the search, and then of course it allows me to subscribe to the program, and it records the program and knows when the next episodes are coming up. And it makes so much more sense for that search to be in that device than for me to have it elsewhere. I use it all the time. When I want to know when a program's on, I don't have to find the TV listings on the web; I just walk over and do a search from within the Media Center player. So I think we're going to have many more devices that are internet-enabled, and there are going to be reasons why you want to do searches with them, to find stuff for them in particular. That's going to be the new future of search, and that's where search growth is going to come from. And in terms of what that means to the search marketer, I think it's going to be crucial to understand that these are going to be new growth areas, because those searches, when they start, are going to be fairly rudimentary. It's going to be back to the early days: those searches are probably going to be driven off of metadata, so you've got to make sure you have your title and your description in place, and make sure the item you want found is relevant.
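A minimal sketch of the rudimentary, metadata-driven matching Danny predicts for early device search, where ranking is little more than checking the publisher-supplied title and description; the fields and matching rule here are hypothetical.

```typescript
// A minimal sketch of rudimentary metadata-driven device search,
// e.g. matching TV programs inside a media player. All hypothetical.
interface ItemMetadata {
  title: string;
  description: string;
}

// Early device search: no link graph, no behavior data, just the
// words the publisher put in the title and description fields.
function matches(item: ItemMetadata, query: string): boolean {
  const haystack = `${item.title} ${item.description}`.toLowerCase();
  return query
    .toLowerCase()
    .split(/\s+/)
    .every(term => haystack.includes(term));
}

// const programs: ItemMetadata[] = [...];
// programs.filter(p => matches(p, "cooking show"));
```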

Gord: So obviously all that leads to the question of mobile search. Will mobile search be more useful by 2010?

Danny: Sure, but it's going to be more useful because it's not going to be 'mobile search.' The devices are just going to catch up and become more desktop-like. I have a Windows Mobile phone at the moment, and I've downloaded some of the applets like Live Search and Google Maps, and those can be handy, but for the most part, if I want to do a search, I fire up the web browser, I look for what I'm after, the screen is fairly large, and I can see what I wanted to find. I think you're going to find that the devices will stay small and yet gain larger screens and better direct input. So if you want to do a search, you can do a search. It's not like you're going to need something designed for the mobile device that only shows mobile pages. I think that's going to change. You're going to have some mobile devices that specifically can't do that, and in the end those people are going to find that no one is trying to support them.

Gord: Thanks, Danny.