Infomediating a Broken Marketplace

First published October 18, 2007 in Mediapost’s Search Insider

Last week, I explored the disconnect between how advertisers define Nirvana (the ability to control consumers and persuade them at will by inundating them with advertising) and what consumers dream about: authentic and reliable information on needed products and services. There are costs on both sides: the cost of advertising, and the cost of consumer research. Max Kalehoff, from Nielsen BuzzMetrics, pointed out another cost: the nuisance cost to the consumer of wading through an earlobe-deep sea of irrelevant and uninvited advertising, whether zapped TV commercials, blaring billboards, glaring signage, email spam, ubiquitous interstitials and pop-ups, preloads, or any of the zillions of other ways advertisers choose to scream at you.

So, with this highly inefficient, annoying and disconnected marketplace, there has to be a better way, right? Well, Marc Singer and John Hagel III think so. They call it the infomediary, a concept introduced in their 1999 book, “Net Worth.” It’s well worth the read. The one thing that struck me is that in the entire book, the word “Google” is not mentioned once. This is not really surprising, given the publication date, but for reasons that will soon become clear, the irony was not lost on me.

How to Spot an Infomediary

Here’s the basic foundation of the infomediary. Acting on behalf of the client when he’s looking to make a purchase, the infomediary takes previously gathered personal information, as well as information volunteered by the client, and searches for the best match with vendors. The client can choose to remain anonymous, saving himself from an onslaught of advertising. Or, if the client agrees, the infomediary will pass his name along to a qualified vendor, and for this privilege, the vendor will pay the prospect. In essence, the infomediary plays the role of marketing matchmaker.

There are a number of offshoots of this basic premise. The infomediary supplies privacy tools to clients, provides marketing intelligence to vendors, offers the chance to bargain as a group for lower prices on regularly purchased products, and acts as an aggregator of consumer power. In effect, the infomediary takes over control of the client relationship, inserting itself squarely between the consumer and the vendor, with the ultimate goal of protecting the consumer. This is a decidedly customer-centric model.

But it’s in the basic concept of gathering information about a client, and using that to ensure a good match with a vendor, that one begins to speculate about Google’s ambitions to fill this role. In essence, at a rudimentary level, Google is already fulfilling some of the role of the infomediary. Certainly if you factor personalization into the equation, we move a big step closer to Singer and Hagel’s concept.

Disruptive Influences

There are a number of dramatically disruptive possibilities in the infomediary model:

  • It forces advertisers to surrender all pretense of control over the consumer. Persuasion becomes a non-issue. The touchpoint with the consumer is stripped of hype, ensuring that product information is authentic and factual.
  • It gives the aggregated consumer voice a level of power never seen before. Previously, the marketplace was vendor-centric: here’s what we offer, here’s how we offer it, here’s what we charge. The consumer’s choice was restricted to “take it or leave it.” Now, the balance shifts to the consumer: here’s what we want, here’s how we want it, here’s what we want to pay. Provide it or we’ll find someone else who can.
  • By gaining control of the customer relationship, it forces companies to focus on one of the two other core processes: product innovation and commercialization, or infrastructure management (excelling at producing and distributing a product).

Something’s Rotten in the State of Advertising

There are a number of other seismic shifts in the landscape that come out of the infomediary model, but “Net Worth” weighs in at over 300 pages, and I have a bare 700 to 800 words for this column. The sum of it all is that the infomediary model, or some variation of it, dramatically changes the rules of the marketing game. A terribly inefficient marketplace has evolved in the past century, with some very wobbly power structures. The communication disconnect is almost laughable in its dysfunction. Advertisers spend more and more, hoping to penetrate a barricade set up by increasingly militant consumers. It’s literally a war, with strategies to match. The only hint of concession to the increasing power of the consumer has been search, and that has been done reluctantly. Remember Einstein’s definition of insanity? “Doing the same thing over and over again and expecting different results.”

If you look at the characteristics of an infomediary laid out by Singer and Hagel, Google has many of them in place already, and certainly has the resources to assemble the rest. The one piece that’s missing, and this is the critical one, is a purely customer-centric approach. For all Google’s focus on the user experience, its advertising models are still primarily driven by advertisers, not consumers. But for the model to work, consumers have to have complete trust in the infomediary and be willing to share their personal information. As we’ve seen with the initial pushback to personalization, there’s still a healthy degree of suspicion on the part of users that Google will use personal information for its own benefit, and the advertisers’, rather than the consumer’s.

4,000 Ads a Day and Counting

First published October 11, 2007 in Mediapost’s Search Insider

It’s not easy being a consumer. Current estimates indicate that the average urban dweller is exposed to between 3,000 and 5,000 advertising messages every day. That means, settling on the middle number of 4,000 and spreading it over 16 waking hours (sleep seems to be our only reprieve, and I hear they’re working on that), you’re presented with an ad every 14.4 seconds. That’s every 14.4 seconds of every waking minute of every day. The frequency of this advertising barrage has doubled in the past 30 years.
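
A quick back-of-envelope check of that figure; the 4,000 ads and the 16 waking hours are the column’s numbers, and the little script below is just an illustration:

```python
# Back-of-envelope check of the "ad every 14.4 seconds" figure.
# 4,000 ads/day is the midpoint of the 3,000-5,000 estimate; 16 waking hours assumed.
ads_per_day = 4000
waking_hours = 16

seconds_awake = waking_hours * 60 * 60        # 57,600 seconds awake each day
seconds_per_ad = seconds_awake / ads_per_day  # seconds between ad exposures

print(f"One ad every {seconds_per_ad:.1f} seconds")  # One ad every 14.4 seconds
```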

“Are We There Yet?”

So, let’s imagine that your 5-year-old child interrupted you every 14.4 seconds with “Moooommmm…” or “Daaaaddd…”. If we use my patience limits as a baseline here, that means you’d last about 1.3 minutes before you went ballistic. The difference, of course, is that we’re genetically hardwired to pay attention to our children, much as we sometimes might try not to. We’ve been conditioned to ignore advertising.

But what happens when we really want to buy something? Suddenly, we’re looking for information, and we spend a lot of time doing so. At least, that’s true for some purchases. Take a computer, for instance. It’s not unusual to spend 10 to 15 hours researching a computer purchase, from the minute you decide you need one to the minute you tear open the box in your home. That’s not including the many hours needed to get your “plug and play” box actually playing after plugging.

The Cost of Consumer Research

Of course, we generally don’t put a cost on our time, but let’s say an hour of your time is worth about $40 (an average rate for someone making $75,000 per year). That means the $1,000 box of electronics cost you an additional $600 (15 hours at $40 an hour), just in the time spent picking the right box.
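
Here’s the same arithmetic spelled out. The figure of roughly 1,875 billable hours a year is my assumption, used only to reconcile a $75,000 salary with the $40-an-hour rate; the rest comes straight from the column:

```python
# Rough cost of "free" research time, using the column's round numbers.
annual_salary = 75_000
billable_hours_per_year = 1_875                        # ~37.5 hours/week x 50 weeks (assumption)
hourly_rate = annual_salary / billable_hours_per_year  # = $40/hour

research_hours = 15                                    # high end of the 10-15 hour estimate
research_cost = research_hours * hourly_rate

print(f"Hourly rate: ${hourly_rate:.0f}, research cost: ${research_cost:.0f}")
# Hourly rate: $40, research cost: $600
```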

The Internet is not making this any easier. Yes, as consumers, we’re armed with more information sources, but we spend a lot of time sorting out sense from nonsense. The explosion of information sources, both good and bad, means we’re spending more time thinking about what we should buy. A study by ScanAlert found that across many ecommerce categories, the average time to buy has increased by almost 79% in the past two years. Now, this was just the duration from first visit to purchase in the actual online store. It doesn’t include any consumer research before visiting the store. But I think we’re safe to assume there would be a corresponding increase in the amount of online consumer “tire kicking.”

It’s No Picnic for Advertisers Either

Before you feel too sorry for yourself, let me tell you, it’s not easy being an advertiser, either. How do we get past the filters? How do we stand out from the other 3,999 messages you’ll hear today?

To recycle some research from a previous column (because research is a terrible thing to waste): the Ontario Tourism Board ran newspaper ads in Toronto targeting people looking to vacation in the province. The ad cost (at posted rate card rates) about $54,000. Even with an exceptional response rate, that ad might sneak through the filters of 1,700 or so people and actually catch their attention. This works out to an average cost of about $32 per introduction, or, to put it another way, $32 to tear a hole through that advertising barricade you’ve been building.

Got a Minute? I’ll Make it Worth Your While

So, if advertisers are willing to pay to get your attention, why not cut out the middleman and pay you directly? Why should the Toronto Star get all that money, when you’re the person the advertiser wants to talk to? What if every one of those 4,000 advertisers who are going to try to get your attention today (Consuummmerrr…Consummmerrr!) paid you a dollar to listen to what they have to say? You’d do okay financially, to the tune of about $1.46 million a year. Of course, your brain would explode after the first hour.
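
For the curious, both of this column’s dollar figures reduce to one-line calculations; the numbers below are taken from the column itself, not from any outside source:

```python
# The two numbers behind this column, reproduced from its own figures.

# Ontario Tourism Board newspaper ad (research recycled from a previous column):
ad_cost = 54_000            # posted rate-card cost, in dollars
attentive_readers = 1_700   # people the ad actually reaches past their filters
cost_per_introduction = ad_cost / attentive_readers
print(f"${cost_per_introduction:.0f} per introduction")   # ~$32

# Paying the consumer directly instead:
ads_per_day = 4_000
dollars_per_ad = 1
annual_payout = ads_per_day * dollars_per_ad * 365
print(f"${annual_payout:,} per year")                     # $1,460,000, or about $1.46 million
```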

The concept is not as far-fetched as it seems. In fact, in 1999 John Hagel III and Marc Singer, both principals with McKinsey and Company, wrote a book called “Net Worth” that explored this very premise (along with a number of others) as a potential online business model. The book provided a detailed business plan for a new concept: the infomediary. Some of the details have become dated in the eight years since publication, but the basic premise still addresses a significant disconnect in today’s advertising marketplace. Next week, I’ll lay out the foundation of infomediaries and look at how some of our favorite search players seem to be inching their way towards Hagel and Singer’s proposal.

We now return you to your regular commercial onslaught.

On Your Search Menu Tonight

First published October 4, 2007 in Mediapost’s Search Insider

This week Yahoo unveiled a new feature. It doesn’t really change the search game that much in terms of competitive functionality. If anything, it’s another case of Yahoo catching up with the competition. But it may have dramatic implications from a user’s point of view. To illustrate that point further I’d like to share a couple of stories with you.

The feature is called Search Assist. You type your query in, and Yahoo provides a list under the query box with a number of possible ways you could complete the query. This follows in the footsteps of Google’s search suggestions in its toolbar. Currently, Google doesn’t offer this functionality within the standard Google query box, at least in North America. Ask also offers this feature.

Because Yahoo is late to the game, the company had the opportunity to up the functionality a little bit. For example, the suggestions that come from Yahoo can include the word you’re typing anywhere in the suggested query phrase. Google uses straight prefix matching, so the word you’re typing always appears at the beginning of the suggested phrases. Yahoo also seems to be pulling from a larger inventory of suggested phrases. The few test queries I did brought back substantially more suggestions than did Google.
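
To make the difference concrete, here’s a toy sketch of the two matching styles described above. The candidate phrases and function names are invented purely for illustration; neither engine publishes its actual suggestion logic, which draws on massive query logs rather than a hard-coded list.

```python
# Toy illustration of the two suggestion-matching styles described above.
# The candidate phrases are invented; real engines draw on query logs.
CANDIDATES = [
    "toronto hotels",
    "cheap flights to toronto",
    "toronto maple leafs",
    "weather in toronto",
]

def prefix_suggestions(typed, candidates):
    """Google-style, as described: the typed text must start the phrase."""
    return [c for c in candidates if c.startswith(typed)]

def anywhere_suggestions(typed, candidates):
    """Yahoo-style, as described: the typed text can appear anywhere in the phrase."""
    return [c for c in candidates if typed in c]

print(prefix_suggestions("toronto", CANDIDATES))    # 2 matches
print(anywhere_suggestions("toronto", CANDIDATES))  # all 4 match
```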

It’s not so much the functionality of this feature that intrigues me; it’s how it could affect the way we search. I’ve found myself relying on this feature in the Google toolbar more and more. Rather than structuring a complete query in my mind, I type the first few letters of the root word and see what Google offers me. It leads me to select query phrases that I probably never would have thought of myself.

Some time ago I wrote that contrary to popular belief, we’ve actually become quite adept at paring our queries down to the essential words. It’s not that we don’t know how to launch an advanced query; it’s that most times, we don’t need to. This becomes even truer with search suggestions. All we have to do is think of one word, and the search engine will serve us a menu of potential queries. It reduces the effort required from the searcher, but let me tell you a story about how this might impact a company’s reputation online.

I Wouldn’t Recommend That Choice

Some time ago I got a voicemail from an equity firm. The woman who called was brash and a little abrasive, and she left a rather cryptic message insisting that I phone her right back. Now, since I’m in the search game, getting calls from venture capitalists and investment bankers is nothing really new. But I’d never quite heard this tone from one of these prospecting calls before. So, I did as I usually do in these cases and did a little research on the search engines to determine whether I was actually going to return the call. I did my quick 30-second reputation check.

Normally, I would just type in the name of the firm and see what came up in the top 10 results. Usually, if there’s strong negative content out there, it’s worth paying attention to and it tends to collect enough search equity to break the top 10. This time, I didn’t even have to get as far as the results page. The minute I started typing the company name into my Google toolbar, the suggestions Google was providing me told the entire story: “company” scam, “company” fraud and “company” lawsuits. Of the top eight suggestions, over half of them were negative in nature. Not great odds for success. Needless to say, I never returned the call.

If these search suggestions are going to significantly alter our search patterns, we should be aware of what’s coming up in those suggestions for our branded terms. Type your company name into Yahoo or Google’s toolbar and see what variations are being served to you. Some of them may not be that appetizing.

Would You Prefer Szechuan?

My belief is that users are increasingly going to use this to structure their queries. It moves search one step closer to becoming a true discovery engine. One of the overwhelming characteristics of search user behavior is that we’re basically lazy. We want to expend a minimal amount of effort, but in return, we expect a significant degree of relevance. Search suggestions allow us to enter a minimum of keystrokes, and the search engine obliges us with a full menu of options.

This brings me to my other story. Earlier this year we did some eye-tracking research on how Chinese citizens interact with the search engines Baidu and Google China. After we released the preliminary results of the study, I had a chance to talk to a Google engineer who worked on the search engine. In China, Google does provide real-time search suggestions right from the query box. The company found that it’s significantly more work to type a query in Mandarin than it is in most Western languages. Using a keyboard for input in China is, at best, a compromise. Because of the amount of work required to enter a query, the average query length in China was quite short, which substantially reduced the relevance of the results. In fact, many Chinese users would type in the bare minimum required, scroll to the bottom of the page, where Google showed other suggested queries, and simply click on one of those links. Hardly the efficient searching behavior Google was shooting for. After introducing real-time search suggestions for the query box, Google found the average query length increased dramatically and, reportedly, so did user satisfaction.

Search query suggestions are just one additional way we’ll see our search behavior change significantly over the next year or two. Little changes, like a list of suggested queries or the inclusion of more types of content in our results pages, will have some profound effects. And when search is the ubiquitous online activity it is, it doesn’t take a very big rock to create some significant and far-reaching ripples.

In Search of B2B Landmarks

First published September 27, 2007 in Mediapost’s Search Insider

This week (actually, right about the time you’ll be reading this column) I’ll be talking to the American Business Media Publishers Summit in Chicago about online opportunities, from a user’s perspective. As I was getting ready for the address, I realized there’s a substantial piece of the B to B market that’s missing online. I call it a market enabler.

Looking for Landmarks

Think of our typical progression when we begin researching something online. If it’s new territory, the first thing we need to do is find a landmark to navigate from, and then we work outward from it. This is true both online and in the real world. Think of Google as everybody’s favorite landmark. It’s the starting point of nearly all our online navigation, because we know we can always get back to it if we’re lost. In fact, it becomes the vehicle of our online navigation in almost all cases. The only time we deviate from it is when we have enough familiarity with a certain section of the online landscape that we can find other online landmarks without it. For example, if I’m planning a trip somewhere, I usually don’t start at Google. I either start at one of the travel tools I have bookmarked (Farecompare.com, Kayak, Sidestep) or at my favorite travel community, Tripadvisor.com. I’ve been down this path before, so I’ve memorized other familiar landmarks. Otherwise, I always start at Google.

But there are some things we look for in our landmarks. We want them to be recognizable. We want them to be authoritative. We want them to be comprehensive. And usually, we want them to be relatively agnostic. We don’t want to be pushed in any particular direction. We want to choose our own paths. We want a neutral marketplace that allows us to compile our own consideration set, not have it built for us.

Making Life Easier

It also helps if our landmarks incorporate some strong navigational and comparison functionality. One of the best things about the travel sites and tools I’ve mentioned is their sophisticated search and filtering capabilities. They beat Google at this particular game. They’re a more useful landmark to navigate from. And increasingly, they’re incorporating authentic community dialogue and reviews alongside the search functionality. I can search, sort and qualify, all in one place. They make the difficult job of planning a trip easier. They’re market enablers, because they allow us to compare alternatives more effectively. The two best examples of market enablers, eBay and Amazon, share all of the above characteristics.

So, let’s return to the B to B marketplace. In our B to B survey, we found that almost everyone starts with Google, because most of the time when we research B to B purchases, we’re starting in unfamiliar territory. We have no landmarks. And while we usually end up going fairly quickly to vendor sites, the survey found a strong desire to find an unbiased landmark as the market’s middle ground. Yet, no enablers have strongly established themselves in this position. There is no eBay or Amazon, or even a TripAdvisor, of B to B. There are vertical engines, including Business.com, Knowledgestorm, KellySearch, ThomasNet and others, but none have dominated the landscape to this point.

Sorting through the Haystack

In a recent B to B panel I moderated, consultant Karen Breen Vogel mentioned that these vertical properties do restrict the scope of the search, so rather than looking for a needle in a haystack, you’re looking for a needle in a needlestack. While this is true, it can still be a pretty painful process if you’re looking for the right needle. The problem is that the B to B marketplace is vast and fragmented. Also, there are no obvious affiliate or revenue opportunities, as there are in the travel business. There isn’t an obvious money trail to follow in the B to B world to make enabling the marketplace a potentially lucrative proposition. Most of the players have morphed over from being directory publishers in the offline world, and are still following the paid listing model. Unfortunately, this doesn’t lend itself to the neutral marketplace favored by researching buyers.

There are few purchase processes that are more difficult or taxing than a complicated B to B one. Sorting out potential vendors can be a long, tedious and frustrating process. First of all, there’s no emotional investment. This isn’t planning a vacation. This is your job. Secondly, the risk level is extremely high. Screw up, and your job may evaporate. While the potential to make money may be obscured by the challenges, the buyer’s need is painfully obvious. And I can’t help thinking, if eBay could do it, given the immense diversity of its marketplace, there must be a way.

Personalization Catches the User’s Eye

First published September 13, 2007 in Mediapost’s Search Insider

Last week, I looked at the impact the inclusion of graphics on the search results page might have on user behavior, based on our most recent eye tracking report. This week, we look at the impact that personalization might bring.

One of the biggest hurdles is that personalization, as currently implemented by Google, is a pretty tentative representation of what personalization will become. It only impacts a few listings on a few searches, and the signals driving personalization are limited at this point. Personalization is currently a test bed that Google is working on, but Sep Kamvar and his team have the full weight of Google behind them, so expect some significant advances in a hurry. In fact, my suspicion is that there’s a lot being held in reserve by Google, waiting for user sensitivity around the privacy issue to lessen a bit. We didn’t really expect to see the current flavor of personalization alter user behavior that much, because it’s not really making that much of a difference on the relevancy of the results for most users.

But if we look forward a year or so, it’s safe to assume that personalization will become a more powerful influencer of user behavior. So, for our test, we manually pushed the envelope of personalization a bit. We divided the study into two separate sessions built around one task (an unrestricted opportunity to find out more about the iPhone) and used the click data from the first session to help us personalize the results for the second session. We used previously visited sites, first, to determine what the user’s intent might be (research, looking for news, looking to buy) and, second, to tailor the personalized results to provide the natural next step in their online research. We showed these results in organic positions 3, 4 and 5 on the page, leaving base Google results in the top two organic spots so we could compare.

Stronger Scent

The results were quite interesting. On the nonpersonalized results pages, taken straight from Google (in signed-out mode), 18.91% of the time spent looking at the page went to these three results, 20.57% of the eye fixations landed there, and 15% of the clicks were on organic listings 3, 4 and 5. The majority of the activity was much further up the page, in the typical top-heavy Golden Triangle configuration.

But on our personalized results, participants spent 40.4% of their time on these three results, 40.95% of the fixations were on them, and they captured a full 55.56% of the clicks. Obviously, from the user’s point of view, we did a successful job of connecting intent and content with these listings, providing greater relevance and stronger information scent. We manually accomplished exactly what Google wants to do with the personalization algorithm.

Scanning Heading South

Something else happened that was quite interesting. Last week I shared how the inclusion of a graphic changed our “F” shaped scanning patterns into more of an “E” shape, with the middle arm of the “E” aligned with the graphic. We scan that first, and then scan above and below. When we created our personalized test results pages, we (being unaware of this behavioral variation at the time) coincidentally included a universal graphic result in the number 2 organic position, as this is what we were finding on Google.

When we combined this graphic entry point (users started scanning at the graphic, then looked above and below it to decide where to scan next) with the greater relevance and information scent of the personalized results, we saw a very significant relocation of scanning activity, moving down from the top of the Golden Triangle.

One of the things that distinguished Google in our previous eye tracking comparisons with Yahoo and Microsoft was its success in keeping the majority of scanning activity high on the page, whether those top results were organic or sponsored.

Top of page relevance has been a religion at Google. More aggressive presentation of sponsored ads (Yahoo) or lower quality and relevance thresholds of those ads (Microsoft) meant that on these engines (at least as of early 2006) users scanned deeper and were more likely to move past the top of the page in their quest for the most relevant results. Google always kept scan activity high and to the left.

But ironically, as Google experiments with improving the organic results set, both through the inclusion of universal results and more personalization, its biggest challenge may be making sure sponsored results aren’t left in the dust. Top-of-page scanning is ideal user behavior that also happens to offer a big win for advertisers. As results pages are increasingly in flux, it will be important to ensure that scanning doesn’t move too far from the upper left corner, at least as long as we still have a linear, one-dimensional, top-to-bottom list of results.

An Image Can Change Everything for the Searcher

First published September 6, 2007 in Mediapost’s Search Insider

For the many of you who responded to last week’s column about Nona Yolanda, I just want to take a few seconds to let you know that she passed away the evening of Sept. 3, having fought for five days more than doctors gave her. She was in the presence of her family right until the end. We printed off your comments and well wishes and posted them on the hospital door. It was somewhat surprising but very gratifying for my wife’s family to know that Nona’s story touched hearts around the world. Thank you. – G.H.

The world of the search results page is changing quickly, which means that we’re going to have to apply new rules for user behavior. This week, I’d like to look at some results from a recent eye tracking study we did about how we interact with search when graphic elements start to appear on the page. We also tested for the inclusion of personalized results. There’s a lot of ground to cover, so I’ll start off with Universal Search this week, and cover personalization and the future of search next week.

Warning: Graphic Depictions Ahead

You can’t get much more basic than the search results page we’ve all grown to know in the past decade. The 10 blue organic links and, more recently, the top and side sponsored ads have defined the interface. It’s been all text, ordered in a linear top to bottom format. The only sliver of real estate that saw any variation was the vertical results, sandwiched between top sponsored and top organic. So it was little wonder that we saw a consistent scan pattern emerge, which we labeled the Golden Triangle. It was created by an “F”-shaped scan pattern, where we scanned down the left hand side, looking for information scent, and then scanned across when we found it.

But that design paradigm is in the middle of change. The first and most significant of these changes will be the inclusion of different types of results on the same page, blended into the main results set. Google’s label is Universal Search, Ask’s is 3D Search and Yahoo’s is Omni Search. Whatever you choose to call it, it defines a whole new ball game for the user.

Starting at the Top…

In the classic pattern, users began at the top left corner because there was no real reason not to. We saw the page, our eyes swung up to the top left, and we started our “F”-shaped scans from there. Therefore, our interactions with the page were very top-heavy. The variable in this was the relevance of the top sponsored ads. If the engine maintained relevance by showing top sponsored ads only when they were highly relevant to the query (i.e., Google), we scanned them. If the engine bowed to the pressures of monetization and showed the ads even when they might not be highly relevant to the query (we saw more examples of this on Yahoo and Microsoft), users tended to move down quickly and the Golden Triangle stretched much further down the page. It was a mild form of search banner blindness. The one thing that remained consistent was the upper left starting point.

But things change, at least for now, when you start mixing result types into the equation. If the number 2 or 3 organic result is a blended one, with a thumbnail graphic, we assume the different presentation must mean the result is unique in some way. The graphic proves to be a powerful attractor for the eye, especially if it’s a relevant graphic. It’s information scent that can be immediately “grokked” (to use Jakob Nielsen’s parlance), and this often drew the eye quickly down the page, making the blended listing the new entry point for scanning. This reduces the top-to-bottom bias (or totally eliminates it), making the blended result the first one scanned. Also, we saw a much more deliberate scanning of this listing.

Give Me an F, Give Me an E…

Another common behavior we identified is the creation of a consideration set: choosing three or four listings to scan before either choosing the most relevant one or selecting another consideration set. In the pre-blended results set, this consideration set was usually the top three or four results. But with blended results, the image result is usually the first one scanned, followed by the results immediately above and below it. Rather than an “F”-shaped scan, this changes the pattern to an “E”-shaped scan, with the middle arm of the “E” focused on the graphic result.

The implications are interesting to consider. The engines and marketers have come to accept the top to bottom behavior as one of the few dominant behavioral characteristics, and it has given us a foundation on which to build our positioning strategy. But if the inclusion of a graphic result suddenly moves the scanning starting point, we have to consider our best user interception opportunities on a case-by-case basis.

Next week, I’ll look at further findings.

Search Engine Results: 2010 – Marissa Mayer Interview

Just getting back in the groove after SES San Jose. You may have caught some of my sessions, or heard that we have released a white paper looking at the future of search, with some eye tracking on personalized and universal search results. We don’t have the final version up yet, but it should be available later this week. The sneak preview got rave reviews in San Jose.

Anyway, I interviewed a number of influencers in the space, and I’ll be posting the full transcripts here on my blog over the next week. I already posted Jakob Nielsen’s interview. Today I’m posting Marissa Mayer’s; she delivered a keynote at SES San Jose. It makes for interesting reading. Also, I’ll be running excerpts and additional commentary on Just Behave on Search Engine Land. The first half ran a couple of weeks ago. Look for more (and a more regular blog schedule) over the next few weeks. Summer’s over and it’s back to work.

Here’s my chat with Marissa:

Gord: I guess I have one big question that will probably break out into a few smaller questions. What I wanted to do for Search Engine Land is speculate on what the search engine results page might look like to the user in three years’ time. With some of the emerging things like personalization and universal search results, and some of the things that are happening with the other engines (Ask with their 3D Search, which is their flavor of Universal), it seems to me that, for the first time in a long time, the results we’re seeing may be in a significant amount of flux over the next three years. I wanted to talk to a few people in the industry about their thoughts on what we might be seeing three years down the road. So that’s the big over-arching question I’m posing.

Marissa: Sure, Minority Report on search result pages… Well, I’d like to say it’s going to be like that, but I think that’s a little further out. There are some really fascinating technologies… I don’t know if you’ve seen some work being done by a guy named Jeff Han?

Gord: No.

Marissa: So I ran into Jeff Han at TED both of the past two years. Basically he was doing multi-touch on a giant wall-sized screen before they did it on the iPhone, so it actually does look a lot like Minority Report. It was this big space where you could interact, you could annotate, you could do all those things. But let me talk first about what I see happening, some trends that are going to drive change.

One is that we are seeing more and more broadband usage, and I think in three years everyone will be on very fast connections, so there’s a lot more to choose from and a lot more data without taking a large latency hit. The other thing we’re seeing is different mediums: audio, video. They used to not work. If you remember, going back a year ago, every time you clicked on an audio file or a movie file, it would be, like, ‘thunk’? It needs a plug-in, or ‘thunk’, it doesn’t work. Now we’re coming into some standardized formats and players that are either browser- or technology-independent enough, or are integrated enough, that they are actually going to work. And we’re also seeing users having more and more storage on their end. Those are the three computer science trends that are going to change things. I also think that people are becoming more and more inclined to annotate and interact with the web. It started with bloggers, and then it moved to mash-ups, and now people are really starting to take a lot more ownership over their participation on the web, and they want to annotate things, they want to mark them up.

So I think when you add these things together, it means a couple of things. One, we will be able to have much richer interaction with the search results pages. There might be layers of search results pages: take my results and show them on a map, take my results and show them to me on a timeline. It’s basically the ability to interact in a really fast way, and take the results you have and see them in a new light. So I think that kind of interaction will be possible pretty easily and pretty likely. I think it will be, hopefully, a layout that’s a little bit less linear and text-based, even, than our search results today, one that ultimately uses what I call the ‘sea of whiteness’ more in the middle of the page, and lays out all the information, from videos to audio reels to text and so on, in a more information-dense way. So imagine the results page going from being long and linear, with ten results you can scroll through, to having ten very heterogeneous results, where we show each of those results in a form that really suits its medium, and in a more condensed format. A couple of years ago we did a very interesting experiment here on the UI team where we took three or four different designs where the problem was artificially constrained. It was above-the-fold Google: if you needed to say everything that Google needed to say above the fold, how would you lay it out? Some came in with two columns, but I think two columns is really hard when everything is linear and text-based. When you start seeing some diagrams, some video, some news, some charts, you might actually have a page that looks and feels more like an interactive encyclopedia.

Gord: So, we’re going from a more linear, very text-based presentation of results to almost more of a portal presentation, but a personalized portal presentation.

Marissa: Right, and I think as people, one, are getting more bandwidth and, two, are more savvy with how they look at information, think of it as more of serial access versus random access. One of my pet peeves is broadcast news; I really don’t like televised news anymore. I like newspapers, and I like reading online, because when I’m online or with newspapers, I have random access. I can jump to whatever I’m most interested in. And when you’re sitting there watching broadcast news, you have to take it in the order, at the pace and at the speed that they are feeding it to you. And yes, they try to make it better by having the little tickers at the bottom, but you can’t just jump in to what you’re interested in. You can only read one piece of text at a time, and it’s hard to survey and scan and hone in on one type of medium or another when it’s all one medium. So certainly there is some random access happening with the search results today. I think as the results format becomes much more heterogeneous, we’re going to have a more condensed presentation that allows for better random access. Above the fold will be really full of content: some text, some audio, some video, maybe even playing in place, and you see what grabs your attention and pulls you in. But it’s almost like random access on the front page of the New York Times: am I more drawn to the picture, or the chart, or this piece of content down here? What am I drawn to?

Gord: Right. If you’re looking at different types of stimuli across the page, I guess what you’re saying is, as long as all that content is relevant to the query, you can scan it more efficiently than you could with the standardized, linear, text-based scanning that we’re seeing now.

Marissa: That’s right.

Gord: Ok.

Marissa: So today the eyes just read and scan in a linear order, whereas when you start interweaving charts and pictures and text, people’s eyes can jump around more, and they can gravitate towards the medium that they understand best.

Gord: So, this is where Ask is going right now with their 3D Search, where they’ve broken it into three columns and they’re mixing images and text and different things. So I guess what we’re looking at is taking it to the next extreme, making it a richer, more interactive experience, right?

Marissa: Rather than having three rote columns, it would actually be more organic.

Gord: So more dynamic.  And it mixes and matches the format based on the types of material it’s bringing back.

Marissa: Well, to keep hounding on the analogy of the front page of the New York Times: it’s not like the New York Times… I mean, they have basically the same layout each time, but it’s not like they have a column that only holds one kind of content, and if it doesn’t fill the column, too bad. They have a basic format that they change as it suits the information.

Gord: So in that kind of format, how much control does the user have? How much functionality do you put in the hands of the user?

Marissa: I think, back to my third point, that people will be annotating search results pages and web pages a lot. They’re going to be rating them, they’re going to be reviewing them. They’re going to be marking them up, saying “I want to come back to this one later.” We have some rudimentary forms of this in Notebook now, but I imagine we’re going to make notes right on the pages later. People are going to be able to say, I want to add a note here, I want to scribble something there, and you’ll be able to do that. So I think the presentation is going to be largely based on our perceived notion of relevance, which of course leverages the user and the ways they interact with the page; we look at what they do, and that helps inform us as to what we should do. So there is some UI user interaction, but the majority of user interaction will be about keeping that information and making it consumable in the best possible way.

Gord: Ok, and then, like you said, if you go one step further and provide multiple layers, you could say, ok, if it’s a local search, plot my search results on a map. There are different ways to present that information at the user’s request, and different layers they can superimpose the results on.

Marissa: So what I’m sort of imagining is that in the first basic search, you’re presented with a really rich general overview page, that interweaves all these different mediums, and on that page you have a few basic controls, so you could say, look, what really matters to me is the time dimension, or what really matters to me is the location dimension.  So do you want to see it on a timeline, do you want to see it on a map?

Gord: Ok, so taking it a step further than what you do with your news results or your blog search results, where you can sort them a couple of different ways, but then increasing the functionality so it’s a richer experience.

Marissa: It’s a richer experience. What’s nice about the timeline and map views, as we’re currently experimenting with them on Google Experimental, is that not only do they allow you to sort differently, they allow you to visualize your results differently. So if you see your results on a map, you can see the loci: you can see this location is important to this query, and that location is really important to that query. And when you look at it on a timeline you can see, “wow, this is a really hot topic for that decade.” They just help you visualize the nut of information across all the results in these fundamentally different ways that ‘sorts’ only kind of get at. It’s really allowing that richer presentation and that overview of results at the meta level that helps you see it.

Gord: Ok. I had a chance to talk to Jakob Nielsen about this on Friday, and he doesn’t believe that we’re going to see much of a difference in the search results in three years. He just doesn’t think it can be accomplished in that time period. What you’re talking about is a pretty drastic change from what we’re seeing today, and the search results we see today haven’t changed that much in the last 10 years, as far as what the user sees. Do you really feel this is possible?

Marissa: It’s interesting, you know, I pay a lot of attention to how the results look. And I do think that change happens slowly over time and that there are little spurts of acceleration. We at Google certainly saw a little accelerated push during May when we launched Universal Search. I’m of the view that maybe it’s three years out, maybe it’s five years out, maybe it’s ten years out. I’m a big subscriber to the slogan that people tend to overestimate the short term and underestimate the long term. My analogy to this is that when I was 5, I remember watching the Jetsons and being, like, this rocks! When I’m thirty there will be flying cars! Right? And here I am, I’m 32, and we don’t even have a good flying car prototype, and yet the world has totally changed in ways that nobody expected because of the internet and computing. In ways that, in the 1980s, no one even saw coming, because personal computers were barely out, let alone the internet. It’s interesting. We do our offsite in August. I do an offsite with my team where we do Google two years out, and it’s really interesting to see how people think about it. I take all the prime members on my team, so they’re the senior engineers, and everybody has homework. They have to do a homepage and a results page of Google, and this year it’ll be Google 2009.

Gord: Oh Cool!

Marissa: Six months out, it’s really easy, because if it’s going to launch in six months and it’s big enough that you would notice, we’re working on it right now and we know it’s coming. And five years or ten years out, we start getting into the bigger-picture things like what I’m talking to you about. The little precursors that get us ready for those advances happen between now and then; that’s what’s shifting. So I’m giving you the big picture so you can start understanding what some of the mini steps that might happen in the next three years, to get us ready for that, would be. The two-to-three-year timeframe is painful. Everybody at my offsite said, “this timeframe sucks!” It’s just far enough out that we don’t have great visibility. Will mobile devices be something that’s a really big new factor in three years? Maybe, maybe not. Some of the things that are making fast progress now may even take a big leap, right, like it was from 1994 to ’97 on the internet. Or if you think about Gmail and Maps, the AJAX applications… you wouldn’t have foreseen those in 2002 or 2003. So two or three years is a really painful time frame, because some things will be radically different, but probably in different ways than you would expect. You have very low visibility in our industry at that time frame. So I actually find it easier to talk about the six-month timeframe, or the ten-year timeframe. So I’m giving you the ten-year picture, knowing that it’s not like the unveiling of a statue, where you can just take the sheet, snatch it off and go, “Voila, there it is.” If you look at the changes we’ve made over time at Google search, they’ve always been “getting this ready, getting this ready.” So the changes are very slow and feel very incremental. But then you look at them in summation over 18 months or two years, and you’re like, “you know, nothing felt really big along the way, but they are fundamentally different today.”

Gord: One last question.  So we’re looking at this much richer search experience where it’s more dynamic and fluid and there are different types of content being presented on the page.  Does advertising or the marketing message get mixed into that overall bucket, and does this open the door to significantly different types of presentation of the advertising message on the search results page?

Marissa: I think that there will be different types of advertising on the search results page. As you know, my theory is always that the ads should match the search results. So if you have text results, you have text ads, and if you have image results, you have image ads. So as the page becomes richer, the ads also need to become richer, just so that they look alive and match the page. That said, trust is a fundamental premise of search. Search is a learning activity. You think of Google and Ask and these other search engines as teachers. As an end user, the only reason learning and teaching works, the only way it works, is when you trust your teacher. You know you’re getting the best information because it’s the best information, not because they have an agenda to mislead you or to make more money or to push you somewhere because of their own agenda. So while I do think the ads will look different, whether different in format or different in placement, I think our commitment to calling out very strongly where we have a monetary incentive, and where we may be biased, will remain. Our one promise on our search results page, and I think that will stand, is that we clearly mark the ads. It’s very important to us that users know what the ads are, because it’s the disclosure of that bias that ultimately builds the trust which is paramount to search.

Gord: Ok. Great to see you’re keynoting at San Jose in August.

Marissa: Should be fun.  This whole topic has me kind of jazzed up so maybe I’ll talk about that.

Search Engine Results: 2010 – Interview with Danny Sullivan

Here’s another in the series of the Search:2010 transcripts, this one of my chat with Search Engine Land Editor Danny Sullivan:

Gord: The big question that I’m asking is how much change we’re going to see on the search engine results page over the next three years. What impact are things like universal search and personalization, and some of the other things we’re seeing come out, going to have on the actual interface the user sees? Maybe let’s just start there.

Danny: I love the whole series, because it got me thinking, Gosh, I never really sat down and tried to plot out how I would do it, and I wish I’d had the time to do that before we talked (laughs). But it would be nice to have a contest or something for the people in the space to say, I think this is the way we should do it, or where it should go.
But the thing at the top of my head, that I expect or assume we’re going to get, is that I think they’re going to get a lot more intelligent at giving you more from a particular database when they know you’re doing a specific kind of search. It’s not necessarily an interface change, but then again it is. This is the thing I talked about when the London car bombing attempts happened, and I’m searching for “London Bombings”. When you see a spike in certain words, you ought to know that there’s a reason behind that spike. It’s going to be news driven, probably, so why are you giving me 10 search results? Why don’t you give me 10 news results? And saying, I’ve also got stuff from across the web, or I’ve got other things that are showing up in that regard. And that hasn’t changed. I’d like to see them get that. I’d like to see them figure out some intelligent manner to maybe get to that point. Part of what could come along with that, too, is that as we start displaying more vertical results, the search interface itself could change. So I think the most dramatic change in how we present search results, really, has come off of local. And people go “wow, these maps are really cool!” Well, of course they’re really cool; they’re presenting information on a map, which makes sense when we’re talking about local information. You want things displayed in that kind of manner. It doesn’t make sense to take all web search results and put them on a map. You could do it, but it doesn’t communicate additional information; for most web results, location is probably irrelevant and doesn’t need to be presented in a visual manner. If you think about the other kinds of search that you tend to do, blog search for instance, it may be that there’s going to be a more chronological display. We saw them do this with news archive search, where they would do a search and tell you this happened within these years, at this time. Right now when I do a Google blog search, by default it shows me ‘most relevant’. But sometimes I want to know what the most recent thing is, and what’s the most recent thing that’s also the most relevant thing, right? So perhaps when I do a Google blog search, I can see something running down the left-hand side that says “last hour”, and within the last hour you show me the most relevant things, then the last four hours, and then the last day. And you could present it that way, almost sort of a timeline metaphor. I’m sure there are probably things you could do with shading and other stuff to go along with that. Image search… Live has done some interesting things now where they’ve made it much less textual, and much more stuff that you hover over and can interact with in that regard. And I don’t know, it might be that with book search and those other kinds of things there’ll be other kinds of metaphors that come into place, ones you can use when you know you’re going to present most of the information from just those sorts of resources. With video search… I think a lot of what we’ve already seen is just giving you the display and being able to play the videos directly, rather than having to leave the site, because it just doesn’t make sense to have to leave the site in that regard.

Gord: When I was talking to Marissa, she saw a lot more mash-ups with search functionality. You talked about maps making sense with local search, but it’s almost like you take the search functionality and layer it over different types of interfaces that make sense, given the type of information you’re interacting with.

Danny: Right.

Gord: One thing I talked about with a few different people is: how much functionality do you put in the hands of the user? How much needs to be transparent? How hard are we willing to work with a page of search results?

Danny: By default, not a lot. You know, if you’re just doing a general search, I don’t think putting a whole lot of functionality there is going to help you. You could put a lot of options there, but historically we haven’t seen people use those things, and I think that’s because they just want to do their searches. They want you to just naturally get the right kind of information that’s there, and a lot of the time, if they get that direct answer, they don’t need to do a lot of manipulation. It’s a different thing, I think, when you get into some very vertical, very task-oriented kinds of searches, where you’re saying, ‘I don’t just need the quick answer, I don’t just need to browse and see all the things that are out there; actually, I’m trying to drill down on this subject in a particular way.’ And local tends to be a great example. ‘Now you’ve given me all the results that match the zip code, but really I would like to narrow it down to a neighborhood, so how can I do that?’ Or a shopping search. ‘I have a lot of results, but now I want to buy something, so now I need to know: who has it in inventory? Who has it cheapest? And who’s the most trusted merchant?’ Then I think the searcher is going to be willing to do more work on the search and make use of more of the options that you give them.

Gord: Like you say, if you’re putting users directly into an experience where they’re closer to the information they were looking for, there’s probably a greater likelihood that they’re willing to meet you halfway by doing a little extra work to refine the results, if you give them tools that are appropriate to the types of results they’re seeing. So if it’s shopping search, filtering by price, or by brand. That’s common functionality with a shopping search engine, and maybe we’ll see it get into some of the other verticals. But I guess the big question is, in the next three years, are the major engines going to gain enough confidence that they’ll provide a deeper vertical experience as the default, rather than as an invisible tab or a visible tab?

Danny: I still tend to think that the way they are going to give a deeper vertical experience is the invisible tab idea, which is, you know, that you are not going to be overtly asked to do it; it is just going to do it for you, and then give you options to get out of it if it was the wrong choice. So, both Ask and Google, which are getting all the attention right now for universal search, or blended search, if you want a generic term for it that doesn’t favor one service over the other. The other term is federated search, and I’ve always hated that because it always felt like something that, you know, came out of Star Trek (laughs). No, I want Klingon search! (laughs) I think that in both of those cases you do the search and the default is still web. And Ask will say, over here on the side we have some other results. Yes, universal search is inserting an item here or an item there, but in most cases it still looks like web search, right? They still really feel like OneBoxes. I haven’t had a universal search happen to me yet where I’ve come along and thought, ‘that really was something I couldn’t have got just from searching the web,’ except when I’ve gotten a map. That’s come when they’ve shown the map, and that is the kind of dramatic change I think at some point they will get to: where you just search for “plumbers” and a zip code, and the engine says, I’m so confident of it I’m just going to give you Google Local. I’m not just going to insert a map and give you seven more web listings down there. I’m going to give you a whole bunch of listings, and I’m going to change the whole interface on you, and if you’re going ‘well, this isn’t what I want,’ then I’m going to give you some options if you want to escape out of it. I like what Ask does, in the sense that it’s easy to escape out of that thing, because you just look off to the side and there’s web search over here, there’s other stuff over there. I think it’s harder for Google to do that when they try to blend it all together. The difficulty remains as to whether people will actually notice that stuff off to the side, and make use of it.

Gord: That was actually something that Jakob Nielsen brought up. He said the whole paradigm of the linear scan down the page is such a dominant user behavior, one we’ve got so used to, that engines like Ask can experiment with a different layout where they’re going two-dimensional, but will users be able to scan that efficiently?

Danny: I’ve been using this Boeing versus Airbus analogy when I’m trying to explain to people the differences between what Google is doing and what Ask is doing.  Boeing is going, ‘Well, we’ll build small, fast, energy-efficient jets’ and Airbus is saying ‘We’ll build big, huge jets, and we’ll move more people so you’ll be able to do fewer flights’.  And when I look at the blended search, Google’s approach is, well, we’ve got to stay linear, we’ve got to keep it all in there. That’s where people are expecting the stuff, and so we’re going to go that way.  Ask’s approach is, we’re going to be putting it all over the place on the page, and we’ve got this split, really nice interface.  And I agree with them. And of course Walt Mossberg wrote that review where he said ‘oh, they’re so much nicer, they look so much cleaner’, and that’s great, except that he’s a sophisticated person, I’m a sophisticated person, you’re a sophisticated person, we search all the time.  We look at that sort of stuff. A typical person might just ignore it; it might continue to be eye candy that they don’t even notice. And that is the big, huge gamble that is going on between these two sorts of players. And then, yet again, it might not be a gamble, because when you talk to Jim Lanzone, he says ‘My testing tells me this is what our people do’. Well, his people might be different from the Google people. Google has got a lot more new people that come over there that are like, ‘I just want to do a search, show me some things, where’s the text links? I’m done’. So I tend to look perhaps more kindly on what Google is doing than some people who try to measure them up against Ask, because I understand that they deal with a lot more people than Ask, and they have to be much more conservative than what Ask is doing.  And I think that what’s going to happen is those two are going to approach closer together.  The advantage, of course, Jim has over at Ask is that he doesn’t have to put ads in that column, so he’s got a whole column he can make use of, and it is useful, and it is a nice sort of place to tuck it in there. If you really want to talk about search interfaces, what will be really fun to envision is what happens when Ajax starts coming along and doing other things. Can I start putting the sponsored search results where they are hovering above other results? Are there other issues that come with that?  There may be some confusion as to why I’m getting this and why I’m getting that. Can I pop up a map as I hover over a result? I could deliver you a standard set of search results, and I could also deliver you local results on top of a particular type of picture.  If I move my mouse along it, I could show you a preview of what you get in local and you might go “Oh wow, there’s a whole map there”. I want to jump off in that direction.  That would be really fun to see that type of stuff come along, but I’m just not seeing anything come out of it.  What we typically have had when people have played with the interface is these really WYSIWYG things like, ‘well, we’ll fly you through the results, or we’ll group them’.  None of which is really something that you’d need, that adds to the choices, “do I want to go vertical, do I not want to go vertical?”

Gord: When we start talking about the fact that the search results page could be a lot more dynamic and interactive, of course the big question is what does that do for monetization of the page?  One of the things that Jakob (Nielsen) talked about was banner blindness.  Do people start cutting out sections of the page?  We talked a little about that.  How do you make sure that the advertising doesn’t get lost on the page when there’s just a lot more visual information in there to assimilate?

Danny: Well, I think a variety of things are going to start happening there.  For example, Google doesn’t do paid inclusion, right, but Google has partnerships with YouTube, and they have these channels, and they’re going to be sharing revenue from these channels with other people. So when they start including that stuff, perhaps they are getting paid off of that.  They didn’t pay to put it in the index, but because they are better able to promote their video channels, more people are going over there, and they’re making money off of that as a destination.  So in some ways, they can afford to have their video results become more relevant, because they don’t have to worry that if you didn’t click on the ad from the initial search result, they’ve sort of lost you.  In terms of how the other ads might go, I guess the concern might be: if the natural results are getting better and better, why would anyone click on the ads anyway?  Maybe people will reassess the paid results, and some will come through and say that paid search results are a form of search database as well.  So we’re going to call them classifieds, or we’re going to call them ads, and we’re going to move them right into the linear display.  You know there’ll be issues, because at least in the US you have the FTC guidelines that say that you should really keep them segregated.  So if you blend them in without highlighting them in some way, you might run into some regulatory problems.  But then again, maybe those rules might start to change as the search innovation starts to change, and go with it from there.  I don’t know, the search engines might come up with other things.  You know, we’re getting toolbars that are appearing more on all of our things. Google might start thinking, ‘Well, let’s put ads back onto that toolbar’.  We used to have those sorts of things, and everyone seems to catch on, but they might come back, and that might be another way that some of the players, especially somebody like Google, might make money beyond just putting the ad on the search result page.

Gord: In the next three years, are we going to get to the point where search starts to become less of a destination activity, the way it is now, and the functionality sits underneath more of Web 2.0 or the semantic web or whatever you want to call it?  It almost becomes a mash-up of functionality that underlies other types of sites. Are we going to stop going to a Google or a Yahoo as much to launch a distinct search as we do now?

Danny: You know, people have been saying that for at least 3 or 4 years now, especially with Microsoft. ‘Oh, you’re not even going to go there, you’re going to do it from your desktop.’  Vista, which I have yet to actually use (I’ve got the laptop and I’m about to start playing with it!), is apparently supposed to be even more integrated than XP was.  But I still tend to think, you know what? We do stuff in our browsers.  I know widgets are growing, and I know there’s more stuff that’s just drawing things onto your computer as well, but we still tend to do stuff in our browser.  I still see search as something where I’m going to go to a search engine and do the search.  With the exception of toolbars. I think we’re going to do a lot more searching through toolbars.  Toolbars are everywhere; it’s really rare for me to start a search where I’m not actually doing it from the toolbar.  I just have a toolbar that sits up there, and I don’t need to be at the search engine itself.  But I still want the results displayed in my browser, because I think most of the stuff I’m going to have to deal with is going to be in my browser as well.  So it doesn’t really help to be able to search from Microsoft Word, right?  Because I don’t want all these sites in a little window within Word. I’m probably going to have to read what they say, so I’m probably going to have to go there.  I think that will change, though, if I have a media player; then I think it makes much more sense for me, and you can already do this with some media players, where you can do searches and have the results flow back in.  iTunes is a classic example. iTunes is basically a music search engine.  Sure, it’s limited to the music and the podcasts that are within iTunes, but it doesn’t really make any sense for me to go to the Apple website. Although, interestingly, here’s an example where Apple is just a terrible failure.  They’ve got all this stuff out there, they’ve got stuff that perhaps you might be interested in even if you don’t use their software, and there’s just no way to get to it on the web.  The last time I looked, you really had to do the searches in iTunes.  So they’re missing out on being a destination for those people who say ‘I’m not going to use iTunes’ or ‘I don’t have iTunes’ or ‘I’m on a different version.’ I don’t know if you’ve downloaded it recently, but it takes forever and it’s just a pain.

Gord: I think that covers off the main questions I wanted to cover off in this.  Is there anything else as far as search in the next three years that you wanted to comment on?

Danny: You know, it’s hard, because if you’d asked me that three years ago, would I have told you, ‘watch for the growth of verticals and watch for the growth of blended search’? (laughs) I’ve been thinking really hard because I’m like, ‘Gosh, now what am I going to talk about, because they’re doing both of those things’. I think personalized search is going to continue to get stronger.  I do think that Google is onto something with their personalized search results.  I don’t think that they’re going to cause you to be in an Amazon situation where you’re continuing to be recommended stuff you’re no longer interested in.  I think that people are misunderstanding how sophisticated it can be.  I think the next big trend is that, ironically, given what I just said to you, search is going to start jumping into devices.  Everything is going to have a search box, but it will be appropriate.  My iPod itself will have a search capability within it.  And the iPhone, to some degree, is maybe a look at how it’s happening already. I’ll be able to search, access, and get information appropriate to that device within it.  Windows Media Center, when I first got that in 2005, I said, this is amazing, because it’s basically got TV search built into it.  I do the search and then, of course, it allows me to subscribe to the program, and it records the program and knows when the next ones are coming up.  And it makes so much more sense for that search to be in that device than it did for me to have it elsewhere.  I use it all the time; when I want to know when a program’s on, I don’t have to find where the TV listings are on the web, I just walk over to my computer and do a search from within the Media Center player.  So I think we’re going to have many more devices that are internet enabled, and there are going to be reasons why you want to do searches with them, to find stuff for them in particular.  That’s going to be the new future of search, and search growth will come from it.  And in terms of what that means to the search marketer, I think it’s going to be crucial to understand that these are going to be new growth areas, because those searches, when they start, are going to be fairly rudimentary. It’s going to be back to the days of, OK, they’re probably going to be driven off of metadata, so you’ve got to make sure you have your title and your description, and make sure the item that you’re searching for is relevant.

Gord: So obviously all that leads to the question of mobile search, and will mobile search be more useful by 2010?

Danny: Sure, but it’s going to be more useful because it’s not going to be mobile search.  It’s just that the device is going to catch up and be more desktop-like.  I have a Windows Mobile phone at the moment, and I have downloaded some of the applets like Live Search and Google Maps, and those can be handy for me to use, but for the most part, if I want to do a search, I fire up the web browser, I look for what I’m looking for, the screen is fairly large, and I can see what I wanted to find.  And I think that you’re going to find that the devices are going to continue to be small and yet gain larger screens, and have the ability for you to do direct input better. So if you want to do a search, you can do a search. It’s not like you’re going to need to have something that’s designed for the mobile device and only shows mobile pages.  I think that’s going to change.  You’re going to have some mobile devices that specifically are not going to be able to do that, and those people, in the end, are going to find that no one is trying to support them.

Gord: Thanks Danny.

The Strength of Weak Ties and Search

First published August 2, 2007 in Mediapost’s Search Insider

Mark Granovetter wrote a ground-breaking study in 1973 called “The Strength of Weak Ties.” It later became one of the foundations for Gladwell’s “The Tipping Point.” I ran across Granovetter’s work and a later follow-up study by Jonathan Frenzen and Kent Nakamoto (Frenzen, Nakamoto: “Structure, Cooperation and the Flow of Market Information,” The Journal of Consumer Research, December 1993) that further explored the fascinating world of word-of-mouth and how it spreads through networks. When we move this into an online paradigm, it has some thought-provoking implications.

No Network is an Island

First, let’s cover Granovetter’s work. In an oversimplified version, it states that social networks are not uniformly dense in their makeup. There are very densely linked nodes. These are families, circles of best friends, immediate co-workers and other very close relationships. These clusters, or islands, are then loosely linked by more fragile ties that span the clusters. They include formal acquaintances, lapsed or dormant friendships, more distant relationships and other “arm’s length” connections. These are Granovetter’s “weak ties.” For a viral spreading of information, we can assume that word will spread quickly within the tightly linked clusters, the “strong ties” — but for it to spread widely, it has to be passed through the “weak ties.” Otherwise, it will never spread outside a cluster. Thus the importance of these “weak ties” in the structure of the social network.

But there is another factor, and that is the cooperativeness of those “weak ties.” Are they motivated to pass on the information? In the words of Frenzen and Nakamoto: “Instead of an array of islands interconnected by a network of fixed bridges, the islands are interconnected by a web of “drawbridges” that are metaphorically raised and lowered by transmitters depending on the moral hazards imposed by the information transmitted by word of mouth.”

The Principles of “Passing it On”

Frenzen and Nakamoto’s study introduced two variables: value of information and moral hazard. In this case, they used the framework of an exclusive sale. The value of information varied with the size of the price discount. And the moral hazard was the scarcity of inventory available at this discounted price. So in the low value/low moral hazard version, it was a smaller discount (20%) and there was plenty of inventory available. There was no danger that close friends and family would “lose out” by sharing this information with a wider circle. In the high value/high moral hazard version, the discount was high (50-70%) and the number of items available at this price was very limited. A scarcity mentality was imposed.

Frenzen and Nakamoto also varied the structure of the network by assigning different “tie strengths” to the linkages within the group. The results were striking. In the low moral hazard scenario, where there was maximal cooperation to pass along information, everyone in a 100-member social network, composed of five loosely linked clusters, received the information within a maximum of seven time periods (the actual period used was not stated), even with varying link strength across the network. In fact, in the strongest structure, everyone knew by the third time period. But in the high moral hazard situation, transfer of information was much slower and less effective. In the strongest structure, it took eight time periods for 100% spreading of the information. And in the weakest structure, even after 15 time periods, still only 66% of the group had received the information.
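To make the mechanics a little more tangible, here is a toy simulation, written from scratch, of how information might percolate through a clustered network as the willingness to pass it on changes. It is not Frenzen and Nakamoto’s actual model: the cluster sizes, the number of weak ties and the single “pass-along” probability standing in for moral hazard are all my own assumptions.

import random

def build_network(clusters=5, size=20, weak_ties_per_cluster=2, seed=42):
    """Five tightly knit clusters, loosely bridged by a few weak ties."""
    rng = random.Random(seed)
    n = clusters * size
    neighbors = {i: set() for i in range(n)}
    for c in range(clusters):
        members = list(range(c * size, (c + 1) * size))
        for i in members:                       # strong ties: fully connected cluster
            neighbors[i].update(m for m in members if m != i)
        for _ in range(weak_ties_per_cluster):  # weak ties: random bridges to other clusters
            i = rng.choice(members)
            j = rng.randrange(n)
            if j // size != c:
                neighbors[i].add(j)
                neighbors[j].add(i)
    return neighbors

def spread(neighbors, pass_prob, periods=15, seed=1):
    """Fraction of the network informed after each time period.

    pass_prob is the "drawbridge": the chance an informed person
    bothers to pass the word to any given contact in a period.
    """
    rng = random.Random(seed)
    informed = {0}                              # one person starts with the news
    history = []
    for _ in range(periods):
        newly = {friend
                 for person in informed
                 for friend in neighbors[person]
                 if friend not in informed and rng.random() < pass_prob}
        informed |= newly
        history.append(round(len(informed) / len(neighbors), 2))
    return history

net = build_network()
print("low moral hazard (cooperative ties): ", spread(net, pass_prob=0.9))
print("high moral hazard (raised drawbridges):", spread(net, pass_prob=0.1))

With a high pass-along probability, this toy network is typically fully informed within a few periods; with a low one, the news tends to stall inside the first cluster. That is the qualitative pattern the study reports, even though the numbers here are invented.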

WOM Moved Online

So, what does this have to do with search? Simply this. The weak ties are now moving online. If we have great news or a great product story to share, we can now share this information online. We can blog about it, post a comment or leave a review. But we’re most likely to do this when there’s low moral hazard. We pass on information where there’s no “scarcity mentality.” So we’ll happily post about a great travel destination, a restaurant or a piece of software, because by doing so, we’re not running the risk of losing out ourselves. We’re much less likely to blog about that exceptional deal on men’s suits at 70% off when there are only six suits left. That information is reserved for our closest friends. It only gets passed along through our strong ties.

There’s another factor at play here that was beyond the scope of Frenzen and Nakamoto’s study. We are motivated to pass on information online when it’s remarkable. Product or brand experiences have to earn the right to be passed on. As online mavens, we’re motivated by being “first to know” and by passing on value. Therefore, we carefully consider the trustworthiness of the information and its authenticity before we decide to share it. After all, we’re staking our reputation on it. Although these online posts become Granovetter’s “weak ties” online (because we usually don’t have strong personal relations with all the readers of our various online “footprints”), they only happen when the nature of the information bears passing along.

If we’re depending on the spread of word of mouth for our marketing, we have to start with some basic understanding of how the dynamics of the network work. All too often, we assume that everyone is like our best friend, eager to spread the word about our product or service. In the wired world, this would include leaving footprints online, through blog posts, comments and reviews. There, future customers can connect with them through search. But a successful viral campaign is largely dependent on those weak ties being motivated to pass along the information. It needs to be remarkable in some compelling way (e.g., Godin’s Purple Cow), it has to eliminate a scarcity mentality, it has to feel authentic and, to appeal to the mavens, it has to have the feel of news.

Breaking “Auction Order” Explained

One of the things that raised eyebrows in my interview with Diane Tang and Nick Fox was the following section regarding how Google determines which ads rank first and climb into the all-important top sponsored locations:

Nick: Yes, it’s based on two things.  One, the primary element, is the quality of the ad. The highest quality ads get shown on the top. The lower quality ads get shown on the right hand side. We block off the top ads from the top of the auction, if you really believe those are truly excellent ads…

Diane: It’s worth pointing out that we never break auction order…

Nick: One of the things that’s sacred here is making sure that the advertisers have the incentive. In an auction, you want to make sure that the folks who win the auction are the ones who actually did win the auction. You can’t give the prize away to the person who didn’t win the auction. The primary element in that function is the quality of the ad. Another element of the function is what the advertiser’s going to pay for that ad. Which, in some ways, is also a measure of quality. We’ve seen that in most cases, where the advertiser’s willing to pay more, it’s more of a commercial topic. The query itself is more commercial, therefore users are more likely to be interested in ads. So we typically see that queries that have high revenue ads, ads that are likely to generate a lot of revenue for Google, are also the queries where the ads are most relevant to the user, so the user is more likely to be happy as well. So it’s those two factors that go into it. But it is a very high threshold. I don’t want to get into specific numbers, but the fraction of queries that actually show these promoted ads is very small.

This seemed a little odd to me in the interview, and I made a note to ask further about it, but what can I say, I forgot and went on to other things. But when the article got posted on Searchengineland, Danny jumped on it at Sphinn:

“Seriously? I mean, it’s not an auction. If it were an auction, highest amount would win. They break it all the time by factoring in clickrate, quality score, etc. Not saying that’s bad, but it’s not an auction.”

This reminded me to follow up with Nick and Diane. Diana Adair, on the Google PR team, responded with this clarification:

We wanted to follow up with you regarding your question below.  We wanted to clarify that we rank ads based on both quality score and by bid.  Auction order, therefore, is based on the combination of both of those factors.  So that means that it’s entirely possible that an ad with a lower bid could rank higher than an ad with a higher bid if the quality score for the less expensive ad is high enough.

So, it seems it’s the use of the word “auction” that’s throwing everyone off here. Google’s use of the term includes ad quality. The rest of the world thinks of an auction as somewhere the highest bid (exclusively) determines the winner. Otherwise, like Danny said, “it’s not an auction”. So, with that interpretation, I then assume that Nick and Diane’s comment (which sounds vaguely like the title of a John Mellencamp song) means that Google won’t arbitrarily hijack these positions for other types of packages which may include presence on the SERP, as in the current Bourne Ultimatum promotion.
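For readers who want to see the distinction concretely, here is a minimal sketch of ranking by a combination of bid and quality, in the spirit of the clarification above. The advertisers, bids, quality scores and the simple multiplication are my own illustrative assumptions; Google’s actual formula, thresholds and pricing are not disclosed here. The point is only that a lower bid can still “win the auction” once quality is folded into the score.

# Hypothetical ads: names, bids and quality scores are invented for illustration.
ads = [
    {"advertiser": "A", "bid": 2.50, "quality": 0.3},   # high bid, weak ad
    {"advertiser": "B", "bid": 1.20, "quality": 0.9},   # modest bid, strong ad
    {"advertiser": "C", "bid": 1.80, "quality": 0.5},
]

# Assumed ranking rule: combine bid and quality into a single score.
for ad in ads:
    ad["score"] = ad["bid"] * ad["quality"]

# "Auction order" under this rule: sort by the combined score, not by bid alone.
ranked = sorted(ads, key=lambda ad: ad["score"], reverse=True)
for position, ad in enumerate(ranked, start=1):
    print(position, ad["advertiser"], ad["bid"], round(ad["score"], 2))

Under that toy rule, B takes the top spot with a $1.20 bid against A’s $2.50, because its quality score is high enough, which is exactly the behavior Google’s clarification describes.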