In Search of B2B Landmarks

First published September 27, 2007 in Mediapost’s Search Insider

This week (actually, right about the time you’ll be reading this column) I’ll be talking to the American Business Media Publishers Summit in Chicago about online opportunities, from a user’s perspective. As I was getting ready for the address, I realized there’s a substantial piece of the B to B market that’s missing online. I call it a market enabler.

Looking for Landmarks

Think of our typical progression when we begin researching something online. If it’s new territory, the first thing we need to do is find a landmark, and then navigate out from it. This is true in both the online and real worlds. Think of Google as everybody’s favorite landmark. It’s the starting point of nearly all our online navigation, because we know we can always get back to it if we’re lost. In fact, it becomes the vehicle of our online navigation in almost all cases. The only time we deviate from it is when we have enough familiarity with a certain section of the online landscape to find other landmarks without it. For example, if I’m planning a trip somewhere, I usually don’t start at Google. I either start at one of the travel tools I have bookmarked (Farecompare.com, Kayak, Sidestep) or at my favorite travel community, Tripadvisor.com. I’ve been down this path before, so I’ve memorized other familiar landmarks. Otherwise, I always start at Google.

But there are some things we look for in our landmarks. We want them to be recognizable. We want them to be authoritative. We want them to be comprehensive. And usually, we want them to be relatively agnostic. We don’t want to be pushed in any particular direction. We want to choose our own paths. We want a neutral marketplace that allows us to compile our own consideration set, not have it built for us.

Making Life Easier

It also helps if our landmarks incorporate some strong navigational and comparison functionality. One of the best things about the travel sites and tools I’ve mentioned is their sophisticated search and filtering capabilities. They beat Google at this particular game. They’re a more useful landmark to navigate within. And increasingly, they’re incorporating authentic community dialogue and reviews alongside the search functionality. I can search, sort and qualify, all in one place. They make the difficult job of planning a trip easier. They’re market enablers, because they allow us to compare alternatives more effectively. The two best examples of market enablers, eBay and Amazon, share all of the above characteristics.

So, let’s return to the B to B marketplace. In our B to B survey, we found that almost everyone starts with Google, because most of the time when we research B to B purchases, we’re starting in unfamiliar territory. We have no landmarks. And while we usually end up going fairly quickly to vendor sites, the survey found a strong desire to find an unbiased landmark as the market’s middle ground. Yet, no enablers have strongly established themselves in this position. There is no eBay or Amazon, or even a TripAdvisor, of B to B. There are vertical engines, including Business.com, Knowledgestorm, KellySearch, ThomasNet and others, but none have dominated the landscape to this point.

Sorting through the Haystack

In a recent B to B panel I moderated, consultant Karen Breen Vogel mentioned that these vertical properties do restrict the scope of the search, so rather than looking for a needle in a haystack, you’re looking for a needle in a needlestack. While this is true, it can still be a pretty painful process if you’re looking for the right needle. The problem is that the B to B marketplace is vast and fragmented. Also, there are no obvious affiliation or revenue opportunities, as there are in the travel business. There isn’t an obvious money trail to follow in the B to B world to make enabling the marketplace a potentially lucrative proposition. Most of the players have morphed over from being directory publishers in the offline world, and are still following the paid listing model. Unfortunately, this doesn’t lend itself to the neutral marketplace favored by researching buyers.

There are few purchase processes that are more difficult or taxing than a complicated B to B one. Sorting out potential vendors can be a long, tedious and frustrating process. First of all, there’s no emotional investment. This isn’t planning a vacation. This is your job. Secondly, the risk level is extremely high. Screw up, and your job may evaporate. While the potential to make money may be obscured by the challenges, the buyer’s need is painfully obvious. And I can’t help thinking, if eBay could do it, given the immense diversity of its marketplace, there must be a way.

Search Engine Results: 2010 – Marissa Mayer Interview

Just getting back in the groove after SES San Jose. You may have caught some of my sessions, or heard that we have released a white paper looking at the future of search, with some eye tracking on personalized and universal search results. We don’t have the final version up yet, but it should be available later this week. The sneak preview got rave reviews in San Jose.

Anyway, I interviewed a number of influencers in the space, and I’ll be posting the full transcripts here on my blog over the next week. I already posted Jakob Nielsen’s interview. Today I’m posting Marissa Mayer’s; she did a keynote at SES San Jose. It makes for interesting reading. Also, I’ll be running excerpts and additional commentary on Just Behave on Search Engine Land. The first half ran a couple of weeks ago. Look for more (and a more regular blog schedule) coming out over the next few weeks. Summer’s over and it’s back to work.

Here’s my chat with Marissa:

Gord: I guess I have one big question that will probably break out into a few smaller questions. What I wanted to do for Search Engine Land is speculate on what the search engine results page might look like to the user in three years’ time. With some of the emerging things like personalization and universal search results, and some of the things happening with the other engines (Ask with their 3D Search, which is their flavor of universal), it seems to me that, for the first time in a long time, the results we’re seeing may have a significant amount of flux over the next three years. I wanted to talk to a few people in the industry about their thoughts on what we might be seeing three years down the road. So that’s the big overarching question I’m posing.

Marissa: Sure, Minority Report on search result pages… Well, I’d like to say it’s going to be like that, but I think that’s a little further out. There are some really fascinating technologies that I don’t know if you’ve seen… some work being done by a guy named Jeff Han?

Gord: No.

Marissa: So I ran into Jeff Han in both of the past years at TED. Basically he was doing multi-touch before they did it on the iPhone, on a giant wall-sized screen, so it actually does look a lot like Minority Report. It was this big space where you could interact, you could annotate, you could do all those things. But let me talk first about some of the trends I see that are going to drive change.

One is that we are seeing more and more broadband usage, and I think in three years everyone will be on very fast connections, so there will be a lot more to choose from and a lot more data without taking a large latency hit. The other thing we’re seeing is different mediums: audio, video. They used to not work. If you remember, going back a year ago, every time you clicked on an audio file or a movie file, it would be, like, ‘thunk’, it needs a plug-in, or ‘thunk’, it doesn’t work. Now we’re coming into some standardized formats and players that are either browser- or technology-independent enough, or are integrated enough, that they are actually going to work. And we’re also seeing users having more and more storage on their end. Those are the three computer science trends that are going to change things. I also think that people are becoming more and more inclined to annotate and interact with the web. It started with bloggers, and then it moved to mash-ups, and now people are really starting to take a lot more ownership over their participation on the web; they want to annotate things, they want to mark them up.

So I think when you add these things together, it means a couple of things. One, we will be able to have much richer interaction with the search results pages. There might be layers of search results pages: take my results and show them on a map, take my results and show them to me on a timeline. It’s basically the ability to interact in a really fast way, and take the results you have and see them in a new light. So I think that kind of interaction will be possible pretty easily and pretty likely. I think it will be, hopefully, a layout that’s a little bit less linear and text-based, even than our search results today, that ultimately uses what I call the ‘sea of whiteness’ more in the middle of the page, and lays out in a more information-dense way all the information from videos to audio reels to text, and so on and so forth. So imagine the results page going from being long and linear, with ten results on the page that you can scroll through, to having ten very heterogeneous results, where we show each of those results in a form that really suits its medium, and in a more condensed format. A couple of years ago we did a very interesting experiment here on the UI team where we took three or four different designs where the problem was artificially constrained. It was above-the-fold Google. If you needed to say everything that Google needed to say above the fold, how would you lay it out? Some came in with two columns, but I think two columns is really hard when it’s linear and text-based. When you start seeing some diagrams, some video, some news, some charts, you might actually have a page that looks and feels more like an interactive encyclopedia.

Gord: So, we’re going from a more linear, very text-based presentation of results to more of a portal presentation, but a personalized portal presentation.

Marissa: Right, and I think as people, one, are getting more bandwidth and, two, are more savvy with how they look at more information… think of it this way, as serial access versus random access. One of my pet peeves is broadcast news; I really don’t like televised news anymore. I like newspapers, and I like reading online, because when I’m online or with newspapers, I have random access. I can jump to whatever I’m most interested in. When you’re sitting there watching broadcast news, you have to take it in the order, at the pace and at the speed that they are feeding it to you. And yes, they try to make it better by having the little tickers at the bottom, but you can’t just jump in to what you’re interested in. You can only read one piece of text at a time, and it’s hard to survey and scan and home in on one type of medium or another when it’s all one medium. So certainly there is some random access happening with the search results today. I think as the result formats become much more heterogeneous, we’re going to have a more condensed presentation that allows for better random access. Above the fold will be really full of content: some text, some audio, some video, maybe even playing in place, and you see what grabs your attention and pulls you in. It’s almost like random access on the front page of the New York Times: am I more drawn to the picture, or the chart, or this piece of content down here? What am I drawn to?

Gord: Right. If you’re looking at different types of stimuli across the page, I guess what you’re saying is, as long as all that content is relevant to the query, you can scan it more efficiently than you could with the standardized linear, text-based scanning that we’re seeing now.

Marissa: That’s right.

Gord: Ok.

Marissa: So the eyes follow and they just read and scan in linear order, whereas when you start interweaving charts and pictures and text, people’s eyes can jump around more, and they can gravitate towards the medium that they understand best.

Gord: So, this is where Ask is going right now with their 3D search, where they’ve broken it into three columns and they’re mixing images and text and different things. So I guess what we’re looking at is taking it to the next extreme, making it a richer, more interactive experience, right?

Marissa: Rather than having three rote columns, it would actually be more organic.

Gord: So more dynamic.  And it mixes and matches the format based on the types of material it’s bringing back.

Marissa: Well, to keep harping on the analogy of the front page of the New York Times… I mean, they have basically the same layout each time, but it’s not like they have a column that only holds one kind of content, and if the content doesn’t fill the column, too bad. They have a basic format that they change as it suits the information.

Gord: So in that kind of format, how much control does the user have? How much functionality do you put in the hands of the user?

Marissa: I think that, back to my third point, people will be annotating search results pages and web pages a lot. They’re going to be rating them, they’re going to be reviewing them. They’re going to be marking them up, saying, “I want to come back to this one later”. We have some rudimentary forms of this in Notebook now, but I imagine that we’re going to make notes right on the pages later. People are going to be able to say, ‘I want to add a note here; I want to scribble something there’, and you’ll be able to do that. So I think the presentation is going to be largely based on our perceived notion of relevance, which of course leverages the user: we look at the ways they interact with the page, and what they do helps inform us as to what we should do. So there is some UI user interaction, but the majority of user interaction will be about keeping that information and making it consumable in the best possible way.

Gord: Ok, and then, like you said, if you go one step further and provide multiple layers, you could say, ok, if it’s a local search, plot my search results on a map. There are different ways to present that information at the user’s request, with different layers they can superimpose the results on.

Marissa: So what I’m sort of imagining is that in the first basic search, you’re presented with a really rich general overview page, that interweaves all these different mediums, and on that page you have a few basic controls, so you could say, look, what really matters to me is the time dimension, or what really matters to me is the location dimension.  So do you want to see it on a timeline, do you want to see it on a map?

Gord: Ok, so taking it a step further than what you do with your news results or your blog search results, where you can sort them a couple of different ways, but then increasing the functionality so it’s a richer experience.

Marissa: It’s a richer experience. What’s nice about timeline and date views, as we’re currently experimenting with them on Google Experimental, is that they not only allow you to sort differently, they allow you to visualize your results differently. So if you see your results on a map, you can see the loci: this location is important to this query, and that location is really important to that query. And when you look at a timeline you can see, “wow, this is a really hot topic for that decade”. They help you visualize the nut of information across all the results in fundamentally different ways that ‘sorts’ only kind of get at. It’s really that richer presentation and that overview of results at the meta level that helps you see it.

Gord: Ok. I had a chance to talk to Jakob Nielsen about this on Friday, and he doesn’t believe that we’re going to see much of a difference in the search results in three years. He just doesn’t think it can be accomplished in that time period. What you’re talking about is a pretty drastic change from what we’re seeing today, and the search results we see today haven’t changed that much in the last ten years, as far as what the user sees. Do you really feel this is possible?

Marissa: It’s interesting. You know, I pay a lot of attention to how the results look. And I do think that change happens slowly over time, with little spurts of acceleration. We at Google certainly saw a little accelerated push in May when we launched Universal Search. I’m of the view that maybe it’s three years out, maybe it’s five years out, maybe it’s ten years out. I’m a big subscriber to the slogan that people tend to overestimate the short term and underestimate the long term. My analogy is that when I was five, I remember watching the Jetsons and being, like, this rocks! When I’m thirty there will be flying cars! Right? And here I am, 32, and we don’t even have a good flying car prototype, and yet the world has totally changed in ways that nobody expected because of the internet and computing. In ways that in the 1980s no one even saw coming, because personal computers were barely out, let alone the internet. It’s interesting. We do an offsite in August, where my team and I do Google two years out. It’s really interesting to see how people think about it. I take all the prime members of my team, the senior engineers, and everybody has homework. They have to do a homepage and a results page for Google, and this year it’ll be Google 2009.

Gord: Oh Cool!

Marissa: Six months out, it’s really easy, because if it’s going to launch in six months and it’s big enough that you would notice, we’re working on it right now and we know it’s coming. And five or ten years out we start getting into the bigger-picture things like what I’m talking to you about; the little precursors that get us ready for those advances happen between now and then, and that’s what’s shifting. So I’m giving you the big picture so you can start understanding what some of the mini steps that might happen in the next three years, to get us ready for that, would be. The two-to-three-year timeframe is painful. Everybody at my offsite said, “this timeframe sucks!” It’s just far enough out that we don’t have great visibility. Will mobile devices be a really big new factor in three years? Maybe, maybe not. Some of the things that are making fast progress now may even take a big leap, right, like the internet did from 1994 to 1997. Or if you think about Gmail and Maps, AJAX applications… you wouldn’t have foreseen those in 2002 or 2003. So two or three years is a really painful timeframe, because some things will be radically different, but probably in different ways than you would expect. You have very low visibility in our industry at that timeframe. So I actually find it easier to talk about the six-month timeframe, or the ten-year timeframe. I’m giving you the ten-year picture knowing that it’s not like the unveiling of a statue, where you can just snatch the sheet off and go, “Voila, there it is”. If you look at the changes we’ve made over time at Google search, they’ve always been “getting this ready, getting this ready”. The changes are very slow and feel very incremental. But then you look at them in summation over 18 months or two years, and you’re like, “you know, nothing felt really big along the way, but things are fundamentally different today”.

Gord: One last question.  So we’re looking at this much richer search experience where it’s more dynamic and fluid and there are different types of content being presented on the page.  Does advertising or the marketing message get mixed into that overall bucket, and does this open the door to significantly different types of presentation of the advertising message on the search results page?

Marissa: I think that there will be different types of advertising on the search results page. As you know, my theory is always that the ads should match the search results. So if you have text results, you have text ads, and if you have image results, you have image ads. So as the page becomes richer, the ads also need to become richer, just so that they look alive and match the page. That said, trust is a fundamental premise of search. Search is a learning activity. You think of Google and Ask and these other search engines as teachers. As an end user, the only way learning and teaching works is when you trust your teacher. You know you’re getting the best information because it’s the best information, not because they have an agenda to mislead you, or to make more money, or to push you somewhere for their own reasons. So while I do think the ads will look different, in format or perhaps in placement, I think our commitment to calling out very strongly where we have a monetary incentive and may be biased will remain. Our one promise on our search results page, and I think it will stand, is that we clearly mark the ads. It’s very important to us that users know what the ads are, because it’s the disclosure of that bias that ultimately builds the trust which is paramount to search.

Gord: Ok. Great to see you’re keynoting at San Jose in August.

Marissa: Should be fun.  This whole topic has me kind of jazzed up so maybe I’ll talk about that.

Search Engine Results: 2010 – Interview with Danny Sullivan

Here’s another in the series of the Search:2010 transcripts, this one of my chat with Search Engine Land Editor Danny Sullivan:

Gord: The big question that I’m asking is how much change are we going to see on the search engine results page over the next three years.  What impact are things like universal search and personalization and some of the other things we’re seeing come out, how much of that is going to impact the actual interface the user is going to see.  Maybe let’s just start there.

Danny: I love the whole series to begin with, because it made me think, gosh, I never really sat down and tried to plot out how I would do it, and I wish I had had the time to do that before we talked (laughs). But it would be nice to have a contest or something for the people who are in the space, to say, ‘I think this is the way we should do it’, or where it should go.
But the thing at the top of my head that I expect, or assume, we’re going to get is… I think they’re going to get a lot more intelligent at giving you more from a particular database when they know you’re doing a specific kind of search. It’s not necessarily an interface change, but then again it is. This is the thing I talked about when the London car bombing attempts happened and I was searching for “London bombings”. When you see a spike in certain words, you ought to know there’s a reason behind that spike. It’s probably going to be news-driven, so why are you giving me 10 web search results? Why don’t you give me 10 news results, and say, ‘I’ve also got stuff from across the web’, or ‘I’ve got other things showing up in that regard’? And that hasn’t changed. I’d like to see them get that. I’d like to see them figure out some intelligent manner to maybe get to that point. Part of what could come along with that, too, is that as we start displaying more vertical results, the search interface itself could change. I think the most dramatic change in how we present search results, really, has come off of local. And people go, “wow, these maps are really cool!” Well, of course they’re really cool: they’re presenting information on a map, which makes sense when we’re talking about local information. You want things displayed in that kind of manner. It doesn’t make sense to take all web search results and put them on a map. You could do it, but it doesn’t communicate additional information for you; the location is probably irrelevant and doesn’t need to be presented in a visual manner. If you think about the other kinds of search that you tend to do, blog search for instance, it may be that there’s going to be a more chronological display. We saw them do that with news archive search, where they would do a search and tell you this happened within these years, at this time.
Right now when I do a Google blog search, by default it shows me ‘most relevant’. But sometimes I want to know what the most recent thing is, and what’s the most recent thing that’s also the most relevant thing, right? So perhaps when I do a Google blog search, I can see something running down the left-hand side that says “last hour”, and within the last hour you show me the most relevant things, then the last 4 hours, and then the last day. You could present it that way, almost in a sort of timeline metaphor. I’m sure there are probably things you could do with shading and other stuff to go along with that. Image search… Live has done some interesting things now where they’ve made it much less textual, with much more stuff that you hover over and can interact with in that regard. And I don’t know, it might be that with book search and those other kinds of things, there’ll be other kinds of metaphors that come into place when you know you are going to present most of the information from those sorts of resources. With video search… I think a lot of what we’ve already seen is just giving you the display and being able to play the videos directly, rather than having to leave the site, because it just doesn’t make sense to have to leave the site in that regard.

Gord: When I was talking to Marissa, she saw a lot more mash-ups with search functionality, and you talked about maps making sense with local search, but it’s almost like you take the search functionality and layer it over different types of interfaces that make sense, given the type of information you’re interacting with.

Danny: Right.

Gord: One thing I talked about with a few different people is: how much functionality do you put in the hands of the user? How much needs to be transparent? How hard are we willing to work with a page of search results?

Danny: By default, not a lot. You know, if you’re just doing a general search, I don’t think that putting a whole lot of functionality there is going to help you. You could put a lot of options there, but historically we haven’t seen people use those things, and I think that’s because they just want to do their searches. They want you to just naturally get the right kind of information, and a lot of the time, if they give you that direct answer, you don’t need to do a lot of manipulation. It’s a different thing, I think, when you get into some very vertical, very task-oriented kinds of searches, where you’re saying, ‘I don’t just need the quick answer, I don’t just need to browse and see all the things that are out there; actually, I’m trying to drill down on this subject in a particular way’. Local tends to be a great example: ‘Now you’ve given me all the results that match the zip code, but really I would like to narrow it down to a neighborhood, so how can I do that?’ Or a shopping search: ‘I have a lot of results, but now I want to buy something, so I need to know who has it in inventory. Who has it cheapest? And who’s the most trusted merchant?’ Then I think the searcher is going to be willing to do more work on the search and make use of more of the options you give them.

Gord: Like you say, if you’re putting users directly into an experience where they are closer to the information they were looking for, there’s probably a greater likelihood that they’re willing to meet you halfway, doing a little extra work to refine things, if you give them tools that are appropriate to the types of results they’re seeing. So if it’s shopping search, filtering by price or by brand: that’s common functionality with a shopping search engine, and maybe we’ll see that get into some of the other verticals. But I guess the big question is, in the next three years, are the major engines going to gain enough confidence that they’ll provide a deeper vertical experience as the default, rather than as an invisible tab or a visible tab?

Danny: I still tend to think that the way they are going to give a deeper vertical experience is the invisible tab idea, which is, you know, that you are not going to be overtly asked to do it; it is just going to be done for you, and then you’re given options to get out of it if it was the wrong choice. So both Ask and Google are getting all the attention right now for universal search, or blended search, if you want a generic term for it that doesn’t favor one service over the other. The other term is federated search, and I’ve always hated that because it always felt like something that came out of Star Trek (laugh). No, I want Klingon search! (laugh) I think that in both of those cases you do the search and the default is still web. And Ask will say, over here on the side we have some other results. Yes, universal search is inserting an item here or an item there, but in most cases it still looks like web search, right? They still really feel like OneBoxes. I haven’t had a universal search happen to me yet where I’ve come along and thought, ‘that really was something I couldn’t have gotten just from searching the web’, except when I’ve gotten a map. That’s come in when they’ve shown the map, and that is the kind of dramatic change. I think at some point they will get to that kind of dramatic change, where you just search for “plumbers” and a zip code, and I’m so confident of it I’m just going to give you Google Local. I’m not just going to insert a map and give you seven more web listings down there. I’m going to give you a whole bunch of listings and change the whole interface on you, and if you’re going, ‘well, this isn’t what I want’, then I’m going to give you some options to escape out of it.
I like what Ask does, in the sense that it’s easy to escape out of that thing, because you just look off to the side and there’s web search over here, there’s other stuff over there. I think it’s harder for Google to do that when they try to blend it all together. The difficulty remains as to whether people will actually notice that stuff off to the side, and make use of it.

Gord: That was actually something that Jakob Nielsen brought up. He said the whole paradigm of the linear scan down the page is such a dominant user behavior, one we’ve gotten so used to, that engines like Ask can experiment with a different layout where they’re going two-dimensional, but will users be able to scan that efficiently?

Danny: I’ve been using this Boeing versus Airbus analogy when I’m trying to explain to people the differences between what Google is doing and what Ask is doing. Boeing is going, ‘Well, we’ll build small, fast, energy-efficient jets’, and Airbus is saying, ‘We’ll build big, huge jets, and we’ll move more people so you’ll be able to do fewer flights’. When I look at blended search, Google’s approach is: we’ve got to stay linear, we’ve got to keep it all in there; that’s where people are expecting the stuff, so we’re going to go that way. Ask’s approach is: we’re going to put it all over the place on the page, and we’ve got this split, really nice interface. And I agree with them. And of course Walt Mossberg wrote that review where he said, ‘oh, they’re so much nicer, they look so much cleaner’, and that’s great, except that he’s a sophisticated person, I’m a sophisticated person, you’re a sophisticated person; we search all the time. We look at that sort of stuff. A typical person might just ignore it; it might continue to be eye candy that they don’t even notice. And that is the big, huge gamble going on between these two sorts of players. Then again, it might not be a gamble, because when you talk to Jim Lanzone, he says, ‘My testing tells me this is what our people do’. Well, his people might be different from the Google people. Google has got a lot more new people that come over there that are like, ‘I just want to do a search, show me some things, where are the text links? I’m done’. So I tend to look perhaps more kindly on what Google is doing than some people who try to measure them up against Ask, because I understand that they deal with a lot more people than Ask, and they have to be much more conservative than what Ask is doing. And I think what’s going to happen is those two are going to approach closer together.
The advantage, of course, Jim has over at Ask is that he doesn’t have to put ads in that column, so he’s got a whole column he can make use of, and it is useful, and it is a nice sort of place to tuck it in there. If you really want to talk about search interfaces, what will be really fun to envision is what happens when Ajax starts coming along and doing other things. Can I start putting the sponsored search results where they are hovering above other results? Are there other issues that come with that? There may be some confusion as to why I’m getting this and why I’m getting that. Can I pop up a map as I hover over a result? I could deliver you a standard set of search results and I could also deliver you local results on top of a particular type of picture. If I move my mouse along it, I could show you a preview of what you get in local, and you might go “Oh wow, there’s a whole map there”. I want to jump off in that direction. That would be really fun to see that type of stuff come along, but I’m just not seeing anything come out of it. What we typically have had when people have played with the interface is these really WYSIWYG things like, ‘well, we’ll fly you through the results, or we’ll group them’. None of which is really something that you’d need, or that added to the choices: “do I want to go vertical, do I not want to go vertical?”

Gord: When we start talking about the fact that the search results page could be a lot more dynamic and interactive, of course the big question is what does that do for monetization of the page?  One of the things that Jakob (Nielsen) talked about was banner blindness.  Do people start cutting out sections of the page?  We talked a little about that.  How do you make sure that the advertising doesn’t get lost on the page when there’s just a lot more visual information in there to assimilate?

Danny: Well, I think a variety of things are going to start happening there. For example, Google doesn’t do paid inclusion, right, but Google has partnerships with YouTube and they have these channels, and they’re going to be sharing revenue from these channels with other people. So when they start including that stuff, perhaps they are getting paid off of that. They didn’t pay to put it in the index but, because they are better able to promote their video channels, more people are going over there, and they’re making money off of that as a destination. So in some ways, they can afford to have their video results start becoming more relevant, because they don’t have to worry that if you didn’t click on the ad from the initial search result, they’ve sort of lost you. In terms of how the other ads might go, I guess the concern might be if the natural results are getting better and better, why would anyone click on the ads anyway? Maybe people will reassess the paid results, and some people will come through and say that paid search results are a form of search database as well. So we’re going to call them classifieds or we’re going to call them ads, we’re going to move them right into the linear display. You know there’ll be issues, because at least in the US, you have the FTC guidelines that say that you should really keep them segregated. So if you don’t highlight them or you blend them in some way, you might run into some regulatory problems. But then again, maybe those rules might start to change as the search innovation starts to change, and go with it from there. I don’t know, the search engines might come up with other things. You know, we’re getting toolbars that are appearing more on all of our things. Google might start thinking, ‘Well, let’s put ads back onto that toolbar’.
We used to have those sorts of things, and everyone seems to catch on, but they might come back, and that might be another way that some of the players, especially somebody like Google, might make money beyond just putting the ad on the search result page.

Gord: In the next three years, are we going to get to the point where search starts to become less of a destination activity than it is now, and the functionality sits underneath more of Web 2.0 or the semantic web or whatever you want to call it? It almost becomes a mash-up of functionality that underlies other types of sites. Are we going to stop going to a Google or a Yahoo as much to launch a distinct search as we do now?

Danny: You know, people have been saying that for at least 3 or 4 years now, especially with Microsoft. ‘Oh, you’re not even going to go there, you’re going to do it from your desktop.’ Vista, which I have yet to actually use. I’ve got the laptop and I’m about to start playing with it! Apparently, it’s supposed to be even more integrated than it was with XP. But I still tend to think, you know what? We do stuff in our browsers. I know widgets are growing and I know there’s more stuff that’s just drawing stuff into your computer as well, but we still tend to do stuff in our browser. I still see search as something where I’m going to go to a search engine and do the search, with the exception of toolbars. I think we’re going to do a lot more searching through toolbars. Toolbars are everywhere; it’s really rare for me to start a search where I’m actually not doing it from the toolbar. I just have a toolbar that sits up there, and I don’t need to be at the search engine itself. But I still want the results displayed in my browser, because I think most of the stuff I’m going to have to deal with is going to be in my browser as well. So it doesn’t really help to be able to search from Microsoft Word, right? Because I don’t want all these sites in a little window within Word. I’m probably going to have to read what they say, so I’m probably going to have to go there. I think that will change, though, if I have a media player; then I think it makes much more sense for me, and you can already do this with some media players, where you can do searches and have the results flow back in. iTunes is a classic example. iTunes is basically a music search engine. Sure, it’s limited to the music and the podcasts that are within iTunes, but it doesn’t really make any sense for me to go to the Apple website. Although, interestingly, here’s an example where Apple is just a terrible failure.
They’ve got all this stuff out there, they’ve got stuff that perhaps you might be interested in even if you don’t use their software and there’s just no way to get to it on the web.  The last time I looked you really had to do the searches in iTunes.  So they’re missing out on being a destination for those people who say ‘I’m not going to use iTunes’  or ‘I don’t have iTunes’ or ‘I’m on a different version.’ I don’t know if you’ve downloaded it recently but it takes forever and it’s just a pain.

Gord: I think that covers the main questions I wanted to get to. Is there anything else, as far as search in the next three years, that you wanted to comment on?

Danny: You know, it’s hard, because if you’d asked me that three years ago, would I have told you, ‘watch for the growth of verticals and watch for the growth of blended search’ (laughs), right? I’ve been thinking really hard because I’m like, ‘Gosh, now what am I going to talk about, because they’re doing both of those things’. I think personalized search is going to continue to get stronger. I do think that Google is onto something with their personalized search results. I don’t think that they’re going to cause you to be in an Amazon situation where you’re continuing to be recommended stuff you’re no longer interested in. I think that people are misunderstanding how sophisticated it can be. I think that the next big trend is that, ironically from what I just said to you, search is going to start jumping into devices. And everything is going to have a search box. But it will be appropriate. My iPod itself will have a search capability within it. And the iPhone, to some degree, is maybe a look at how it’s happening already. I’ll be able to search, access, and get information appropriate to that device within it. Windows Media Center, when I first got that in 2005, I said, this is amazing, because it’s basically got TV search built into it. I do the search and then, of course, it allows me to subscribe to the program, and it records the program and knows when the next ones are coming up. And it makes so much more sense for that search to be in that device than it did for me to have it elsewhere. I use it all the time: when I want to know when a program’s on, I don’t have to find where the TV listings are on the web, I just walk over to my computer and do a search from within the Media Center player. So I think we’re going to have many more devices that are internet enabled, and there are going to be reasons why you want to do searches with them, to find stuff for them in particular.
That’s going to be the new future of search, and search growth will come from it. And in terms of what that means to the search marketer, I think it’s going to be crucial to understand that these are going to be new growth areas, because those searches, when they start, are going to be fairly rudimentary. It’s going to be back in the days of, OK, they’re probably going to be driven off of metadata, so you’ve got to make sure you have your title and your description, and that the item you’re searching for is relevant.

Gord: So obviously all that leads to the question of mobile search: will mobile search be more useful by 2010?

Danny: Sure, but it’s going to be more useful because it’s not going to be mobile search. It’s just that the device is going to catch up and be more desktop-like. I have a Windows Mobile phone at the moment, and I have downloaded some of the applets like Live Search and Google Maps, and those can be handy for me to use, but for the most part, if I want to do a search, I fire up the web browser, I look for what I’m looking for, the screen is fairly large, and I can see what I wanted to find. And I think that you’re going to find that the devices are going to continue to be small and yet gain larger screens, and have the ability for you to better do direct input. So if you want to do a search, you can do a search. It’s not like you’re going to need to have something that’s designed for the mobile device that only shows mobile pages. I think that’s going to change. You’re going to have some mobile devices that are specifically not going to be able to do that, and those people in the end are going to find that no one is going to be trying to support them.

Gord: Thanks Danny.

The Strength of Weak Ties and Search

First published August 2, 2007 in Mediapost’s Search Insider

Mark Granovetter wrote a ground-breaking study in 1973 called “The Strength of Weak Ties.” It later became one of the foundations for Gladwell’s “The Tipping Point.” I ran across Granovetter’s work and a later follow-up study by Jonathan Frenzen and Kent Nakamoto (Frenzen, Nakamoto: “Structure, Cooperation and the Flow of Market Information,” The Journal of Consumer Research, December 1993) that further explored the fascinating world of word-of-mouth and how it spreads through networks. When we move this into an online paradigm, it has some thought-provoking implications.

No Network is an Island

First, let’s cover Granovetter’s work. In an oversimplified version, it states that social networks are not uniformly dense in their makeup. There are very densely linked nodes. These are families, circles of best friends, immediate co-workers and other very close relationships. These clusters, or islands, are then loosely linked by more fragile ties that span the clusters. They include formal acquaintances, lapsed or dormant friendships, more distant relationships and other “arm’s length” connections. These are Granovetter’s “weak ties.” For a viral spreading of information, we can assume that word will spread quickly within the tightly linked clusters, the “strong ties” — but for it to spread widely, it has to be passed through the “weak ties.” Otherwise, it will never spread outside a cluster. Thus the importance of these “weak ties” in the structure of the social network.

But there is another factor, and that is the cooperativeness of those “weak ties.” Are they motivated to pass on the information? In the words of Frenzen and Nakamoto: “Instead of an array of islands interconnected by a network of fixed bridges, the islands are interconnected by a web of “drawbridges” that are metaphorically raised and lowered by transmitters depending on the moral hazards imposed by the information transmitted by word of mouth.”

The Principles of “Passing it On”

Frenzen and Nakamoto’s study introduced two variables: value of information and moral hazard. In this case, they used the framework of an exclusive sale. The value of information varied with the size of the price discount. And the moral hazard was the scarcity of inventory available at this discounted price. So in the low value/low moral hazard version, it was a smaller discount (20%) and there was plenty of inventory available. There was no danger that close friends and family would “lose out” by sharing this information with a wider circle. In the high value/high moral hazard version, the discount was high (50-70%) and the number of items available at this price was very limited. A scarcity mentality was imposed.

Frenzen and Nakamoto also varied the structure of the network by assigning different “tie strengths” to the linkages within the group. The results were striking. In the low moral hazard scenario, where there was maximal cooperation to pass along information, everyone in a 100-member social network, composed of five loosely linked clusters, received the information in a maximum of seven time periods (the actual period used was not stated), even with varying link strengths in the network. In fact, in the strongest structure, everyone knew by the third time period. But in the high moral hazard situation, transfer of information was much slower and less effective. In the strongest structure, it took eight time periods for 100% spreading of the information. And in the weakest structure, even after 15 time periods, still only 66% of the group had received the information.
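Frenzen and Nakamoto ran controlled experiments rather than code, but the dynamic they describe is easy to sketch. The toy simulation below is my own illustration, not their model: five densely linked clusters of 20 people each, a couple of weak-tie “drawbridges” between neighboring clusters, and a single cooperation probability standing in for moral hazard (high cooperation for low moral hazard, low for high). Every parameter is invented for illustration.

```python
import random

def build_network(clusters=5, size=20, bridges=2, seed=1):
    """Five dense clusters joined by a few weak ties ("bridges")."""
    random.seed(seed)
    edges = {i: set() for i in range(clusters * size)}
    for c in range(clusters):
        members = range(c * size, (c + 1) * size)
        for a in members:               # strong ties: everyone in a cluster
            for b in members:           # is linked to everyone else in it
                if a != b:
                    edges[a].add(b)
    for c in range(clusters - 1):       # weak ties: a few bridges between
        for _ in range(bridges):        # adjacent clusters
            a = random.randrange(c * size, (c + 1) * size)
            b = random.randrange((c + 1) * size, (c + 2) * size)
            edges[a].add(b)
            edges[b].add(a)
    return edges

def spread(edges, cooperation, periods=15, seed=1):
    """Fraction of the network informed after each time period.

    `cooperation` is the chance an informed person passes the word to a
    given contact each period -- high when moral hazard is low."""
    random.seed(seed)
    informed = {0}                      # one seed individual
    history = []
    for _ in range(periods):
        for person in list(informed):   # snapshot: new recruits wait a period
            for friend in edges[person]:
                if random.random() < cooperation:
                    informed.add(friend)
        history.append(len(informed) / len(edges))
    return history

net = build_network()
low_hazard = spread(net, cooperation=0.9)   # everyone passes it on
high_hazard = spread(net, cooperation=0.1)  # drawbridges mostly up
```

With cooperation high, essentially the whole network hears within a few periods; with cooperation low, the word saturates individual clusters but crawls across the bridges, much like the 66%-after-15-periods result above.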

WOM Moved Online

So, what does this have to do with search? Simply this: the weak ties are now moving online. If we have great news or a great product story to share, we can now share this information online. We can blog about it, post a comment or leave a review. But we’re most likely to do this when there’s low moral hazard. We pass information where there’s no “scarcity mentality.” So we’ll happily post about a great travel destination, a restaurant or a piece of software, because by doing so, we’re not running the risk of losing out ourselves. We’re much less likely to blog about that exceptional deal on men’s suits at 70% off, when there are only six suits left. That information is reserved for our closest friends. It only gets passed along through our strong ties.

There’s another factor at play here that was beyond the scope of Frenzen and Nakamoto’s study. We are motivated to pass on information online when it’s remarkable. Product or brand experiences have to earn the right to be passed on. As online mavens, we’re motivated by being “first to know” and by passing on value. Therefore, we carefully consider the trustworthiness of the information and its authenticity before we decide to share it. After all, we’re staking our reputation on it. Although these online posts become Granovetter’s “weak ties” (because we usually don’t have strong personal relationships with all the readers of our various online “footprints”), they only happen when the nature of the information bears passing along.

If we’re depending on the spread of word of mouth for our marketing, we have to start with some basic understanding of how the dynamics of the network work. All too often, we assume that everyone is like our best friend, eager to spread the word about our product or service. In the wired world, this would include leaving footprints online, through blog posts, comments and reviews. There, future customers can connect with them through search. But a successful viral campaign is largely dependent on those weak ties being motivated to pass along the information. It needs to be remarkable in some compelling way (i.e., Godin’s Purple Cow), it has to eliminate a scarcity mentality, it has to feel authentic and, to appeal to the mavens, it has to have the feel of news.

Breaking “Auction Order” Explained

One of the things that raised eyebrows in my interview with Diane Tang and Nick Fox was the following section regarding how Google determines which ads rank first and climb into the all-important top sponsored locations:

Nick: Yes, it’s based on two things.  One is the primary element is the quality of the ad. The highest quality ads get shown on the top. The lower quality ads get shown on the right hand side. We block off the top ads from the top of the auction, if you really believe those are truly excellent ads…

Diane: It’s worth pointing out that we never break auction order…

Nick: One of the things that’s sacred here is making sure that the advertisers have the incentive. In an auction, you want to make sure that the folks who win the auction are the ones who actually did win the auction. You can’t give the prize away to the person who didn’t win the auction. The primary element in that function is the quality of the ad. Another element of the function is what the advertiser’s going to pay for that ad. Which, in some ways, is also a measure of quality. We’ve seen that in most cases, where the advertiser’s willing to pay more, it’s more of a commercial topic. The query itself is more commercial, therefore users are more likely to be interested in ads. So we typically see that queries that have high revenue ads, ads that are likely to generate a lot of revenue for Google, are also the queries where the ads are most relevant to the user, so the user is more likely to be happy as well. So it’s those two factors that go into it. But it is a very high threshold. I don’t want to get into specific numbers, but the fraction of queries that actually show these promoted ads is very small.

This seemed a little odd to me in the interview and I made a note to ask further about it, but what can I say, I forgot and went on to other things. But when the article got posted on Search Engine Land, Danny jumped on it at Sphinn:

“Seriously? I mean, it’s not an auction. If it were an auction, highest amount would win. They break it all the time by factoring in clickrate, quality score, etc. Not saying that’s bad, but it’s not an auction.”

This reminded me to follow up with Nick and Diane. Diana Adair, on the Google PR team, responded with this clarification:

We wanted to follow up with you regarding your question below.  We wanted to clarify that we rank ads based on both quality score and by bid.  Auction order, therefore, is based on the combination of both of those factors.  So that means that it’s entirely possible that an ad with a lower bid could rank higher than an ad with a higher bid if the quality score for the less expensive ad is high enough.

So, it seems it’s the use of the word “auction” that’s throwing everyone off here. Google’s use of the term includes ad quality. The rest of the world thinks of an auction as one where the highest bid (exclusively) determines the winner. Otherwise, like Danny said, “it’s not an auction”. So, with that interpretation, I then assume that Nick and Diane’s comment (which sounds vaguely like the title of a John Mellencamp song) means that Google won’t arbitrarily hijack these positions for other types of packages which may include presence on the SERP, as in the current Bourne Ultimatum promotion.
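Google has never published the exact formula, but the clarification above (rank is a combination of quality score and bid) is easy to illustrate with a toy example. The simple bid-times-quality product and all the numbers below are my own assumptions for illustration, not Google’s actual math:

```python
def ad_rank(bid, quality_score):
    # Toy ranking function: the "auction" winner is decided by the
    # product of bid and quality, not by bid alone.
    return bid * quality_score

# Hypothetical advertisers: (name, max CPC bid in dollars, quality score)
ads = [("BigSpender", 2.00, 4), ("RelevantShop", 1.25, 8)]
ranked = sorted(ads, key=lambda ad: ad_rank(ad[1], ad[2]), reverse=True)
# RelevantShop (1.25 * 8 = 10.0) outranks BigSpender (2.00 * 4 = 8.0)
# despite the lower bid.
```

Under this definition, the lower bid with the higher quality score wins the auction outright, so promoting it to the top never “breaks auction order” in Google’s sense of the word.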

Interview with Google’s Nick Fox and Diane Tang on Ad Quality Scoring

I had the chance to talk to Nick Fox and Diane Tang from Google’s Ad Quality team about quality scoring and how it impacts the user experience. Excerpts from the article along with additional commentary will be in Friday’s Just Behave column, but here is the full transcript.

Gord: What I wanted to talk about a little bit was just how the quality, particularly in the top sponsored ads, impacts user experience, and talk a little about relevancy. Just to set the stage, one of the things I talked about at SES in Toronto was the fact that, as far as a Canadian user goes, because the Canadian ad market isn’t as mature as the American one, we’re not seeing the same acceptance of those sponsored ads at the top, just because you’re not seeing the brands that you would expect to see for a lot of the queries. You’re not seeing a lot of trusted vendors in that space. They just have not adopted search the same way they have in the States. What we’ve seen in some of our user studies is a greater tendency to avoid that real estate… or at least to quickly scan it and then move down. So, that’s the angle I really want to take here: just how important ad quality and ad relevance are to that user experience. And then also talking about one thing I’ve always noticed in a number of our user studies: of all the engines, Google seems to be the most stringent on what it takes to be a qualified ad, to get promoted from the right rail to the top sponsored ads. So that sets a broad framework of what I wanted to talk about today.

Nick: Let me give you a quick overview of who I am and who Diane is and what we work on, and then we’ll jump into the topics that you’ve raised. So what Diane and I work on is called Ad Quality, and it is essentially everything about how we decide which ads to show on Google and our partners, what they should look like, how much we charge for them and all those types of things. How the auction works… everything from soup to nuts. If you ask us what our goal is… our goal is to make sure our users love our ads. If you ask Larry Page what our goal is… it’s to make our ads as good as our search results. So it’s a heavy focus on making sure that our users are happy and that our users are getting back what they want out of our ads. We sort of think of ourselves as among the first that work on the ads product for Google. We represent the user, to make sure the user is getting what they really need. It’s very similar to what we do on the search quality side, making sure that search results are very good.

I think a lot of the things you’ve picked up on are very accurate. In terms of the focus on top ad quality… in general, the focus on quality… I think what you picked up on in your various reports, as well as the study in Canada, is pretty accurate and pretty much what drives what we are working on here. The big concern that I would have, the main motivation for why I think ad quality is important, is that as a company we need to make sure users continue to trust our ads. If users don’t trust our ads, they will stop looking at the ads, and once they stop looking at the ads they’ll stop clicking on the ads and all is lost. So what we need to make sure we are doing in the long run is that users believe that the ads will provide them what they are looking for, and that they will continue looking at the ads as valuable real estate and continue to trust that.
So that is what we are going for. I think as we look at the competitive landscape as well, we see a lot of what you see. We certainly have historically had, and continue to have, much more of a focus on the quality of the ads. Making sure we’re not doing things where we trade off the user experience against revenue. We all have the ability to show more ads or worse ads, but we take a very stringent approach, as you’ve noticed, to making sure we only show the best ads, the ones that we believe the user will actually get something out of. If the user’s not going to get something out of the ad, we don’t show the ad. Otherwise the user is going to be less likely to consider ads in the future.

Diane: It’s worth pointing out that basically what we’re saying is that we are taking a very long term view towards making sure our users are happy with our ads and it’s really about making them trust what we give them.

Gord: One thing I’ve noticed in all my conversations whether they’re with Marissa or Matt or you, the first thing that everyone always says at Google is the focus around the user experience. The fact that the user needs to walk away satisfied with their experience. When we’re talking about the search results page, that focuses very specifically on what we’ve called in our reports the “area of greatest promise”. That upper left orientation on the search results page and making sure that whatever is appearing in that area had better be the most relevant result possible for the user.  In conversations with other engines I hear things like balanced ecosystems and communities that include both users and advertisers. I’ve always been struck by the focus at Google and I’ve always been a strong believer that corporations need sacred cows, these untouchable driving principles that everyone can rally around.  Is that what we’re talking about here with Google?

Nick: I think it is. I think it comes from the top and it comes from the roots. If we were doing a proposal to Larry and Sergey and Eric where we’re saying, “Hey, let’s show a bunch of low quality ads,” the first question they’re going to ask is “Is this the right thing for the user?” And if the answer is no, we get kicked out of the room and that’s the end of the conversation. So you get that from the top and it permeates all the way through. You hear it when you speak to Marissa and Matt and us. It permeates the conversations we have here as well. It’s not just external when we talk about the user; it’s what the conversation is internally as well. It just exudes through the company because it’s just part of what we think. I wouldn’t say that there isn’t a focus on the advertiser too; it’s just that our belief is that the way you get that balance is by focusing on the user, and as long as the user’s happy, the user’s clicking on the ad, and as long as the user’s clicking on the ad, the advertiser’s getting leads and everything works. If you focus on the advertisers in the short term, maybe the advertisers will be happy in the short term, but in the long term that doesn’t work. That used to be a hard message to get across. It used to be the case that advertisers didn’t really get that. And one of the most rewarding things for me is that the advertisers see that, they get that. Some of the stuff we do in the world of ad quality is frustrating to advertisers, because in some cases we’re preventing their ads from running in cases where they’d like them to run. We’ve seen that the advertiser community is actually more receptive to that recently, because they understand why we’re doing it and they understand that in the long term, they’re benefiting from it as well. I think that you are seeing that there is a difference in approach between us and our competitors. We believe the ecosystem thrives if you focus on the users first.

Gord: I’d like to focus on what, to me, is a pretty significant performance delta between right rail and top sponsored. We’ve seen the scan patterns put top sponsored directly in the primary scanning path of users, where right rail is more of a sidebar that may be considered after the primary results are scanned. With whatever you can share, can you tell me a little about what’s behind that promotion from right rail to top sponsored?

Nick: Yes, it’s based on two things.  One is the primary element is the quality of the ad. The highest quality ads get shown on the top. The lower quality ads get shown on the right hand side. We block off the top ads from the top of the auction, if you really believe those are truly excellent ads…

Diane: It’s worth pointing out that we never break auction order…

Nick: One of the things that’s sacred here is making sure that the advertisers have the incentive. In an auction, you want to make sure that the folks who win the auction are the ones who actually did win the auction. You can’t give the prize away to the person who didn’t win the auction. The primary element in that function is the quality of the ad. Another element of the function is what the advertiser’s going to pay for that ad. Which, in some ways, is also a measure of quality. We’ve seen that in most cases, where the advertiser’s willing to pay more, it’s more of a commercial topic. The query itself is more commercial, therefore users are more likely to be interested in ads. So we typically see that queries that have high revenue ads, ads that are likely to generate a lot of revenue for Google, are also the queries where the ads are most relevant to the user, so the user is more likely to be happy as well. So it’s those two factors that go into it. But it is a very high threshold. I don’t want to get into specific numbers, but the fraction of queries that actually show these promoted ads is very small.

Gord: One thing we’ve noticed, actually, in an eye tracking study we did on Google China, where the search market is far less mature, is that you very, very seldom see those ads being promoted to top sponsored. So I would imagine that that’s got to be a factor. Is the same threshold applied across all the markets, or does the quality threshold vary from market to market?

Nick: I don’t want to get too much into the specifics of that kind of detail. We do certainly take an approach in each market that we believe is most effective for that market. Handling everything at a global level doesn’t really make a lot of sense, because in some cases you have micro markets, or, in the case of China, a large market, where it makes sense to tailor our approach to what makes sense for that market… what users from that market are looking for, what the maturity of that market is. In a market that has a different level of search quality, for example, it might make sense to take a different approach in how we think about ads as well. So that’s what I want to say there. But you’re right, in a market like China that’s less mature and at the early stage of its development, you do see fewer ads at the top of the page; there are just fewer ads there that we believe are good enough to show at the top of the page. Contrast that with a country like the U.S. or the U.K., where these markets are very mature and have the high quality ads we feel comfortable showing at the top, and we show top ads.

Diane: But market maturity is just one area we look at. There’s also user sophistication with the internet and other key factors. We have to take all this into account to really decide what the approach is on a market basis.

Gord: One of the questions that always comes up every time I sit on a panel that has anything to do with quality scoring is that what’s in an ad that might generate a click-through is not necessarily what will generate a quality visitor when you carry it forward into conversion. For instance, you can entice someone to click through but they may not convert, and, of course, if you’re enticing them to click through, you’re going to benefit from the quality scoring algorithm. How do we correct that in the future?

Nick: I think there are two things. One is, in general, an ad that’s being honest, and gets a high click rate from being honest, is essentially a very relevant ad. We’ll typically see that that ad also has a high conversion rate. In cases where the advertiser’s not being dishonest, the high click-through rate is generally correlated with a high conversion rate. And it’s simply because that ad is more relevant: it’s more relevant in terms of getting the user to click on that ad in the first place, and it’s also more relevant in delivering what that user is looking for once they actually get to the landing page. So you see a good correlation there.

There are cases where advertisers can be misleading in their ad text and create an incentive for a user to click on their ad, and then not be able to deliver; the advertiser could say “great deals on iPods” and then sell iPod cases or something. In that case, the high click-through rate is unlikely to be correlated with a high conversion rate, because the users are going to be disappointed when they actually end up on the page. The good thing for us is that the conversion rate typically gets reflected in the amount the advertiser’s actually willing to pay. That’s one of the reasons why the advertiser’s bid is a relatively decent metric of quality: in this iPod cases case, because that conversion rate is likely to be low, the advertiser’s not likely to bid as much. The click just isn’t worth as much to them, therefore they’ll bid less and end up getting a lower rank as a result. So, in many cases, this doesn’t end up being a problem, because it just sort of falls out of the ranking formula. It’s a little bit convoluted.

Gord: Just to restate it to make sure I’ve got it. You’re saying that if somebody is being dishonest, ultimately the return they’re getting will dictate that they have to drop their bid amount, so it will correct itself. If they’re not getting the returns on the back end, they’re not going to pay the same on the front end, and ultimately it will just find its proper place.

Nick: What an advertiser should probably be thinking most about isn’t ROI per click…it’s actually ROI per impression. The ad that’s likely to generate the most value for the user, and therefore the most value to Google as well as the most value to the advertiser, all aligned in a very nice way, is the ad that’s most likely to generate the most ROI per impression. And because of our ranking formula, those are the ads that are most likely to show up at the top of the auction, and the ones that aren’t fall out. So the advertiser should care about click-through rate, but they shouldn’t care about click-through rate exclusively, to the extent that it results in a low conversion rate and a low ROI per click for them.
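Nick’s “ROI per impression” framing lines up with the way search ad auctions are commonly described: rank roughly tracks bid times predicted click-through rate, i.e. expected value per impression. Here’s a minimal sketch of that idea; the ad names, bids, CTRs and the bare two-factor formula are all illustrative assumptions, not Google’s actual ranking formula:

```python
# Hypothetical sketch: ranking ads by expected value per impression.
# Real ad auctions use many more signals; this only illustrates the idea
# that rank ~ bid * predicted CTR, so a misleading ad with a high CTR
# but poor conversions (and therefore a lowered bid) loses its position.
from dataclasses import dataclass

@dataclass
class Ad:
    name: str
    bid: float            # advertiser's max cost per click
    predicted_ctr: float  # predicted clicks per impression

    @property
    def value_per_impression(self) -> float:
        return self.bid * self.predicted_ctr

def rank_ads(ads: list) -> list:
    """Order ads by expected value per impression, highest first."""
    return sorted(ads, key=lambda a: a.value_per_impression, reverse=True)

ads = [
    Ad("honest iPod seller", bid=1.00, predicted_ctr=0.05),
    # Misleading "great deals" ad: higher CTR, but low conversions
    # have already forced its bid down.
    Ad("misleading ad", bid=0.30, predicted_ctr=0.08),
]
ranking = rank_ads(ads)
```

The misleading ad wins on raw CTR (0.08 vs. 0.05), but its depressed bid gives it a lower expected value per impression (0.024 vs. 0.05), so it ranks below the honest ad, which is the self-correction Nick describes.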

Gord: We talked a little bit about ads being promoted to the top sponsored positions, and over the past three or four years you have experimented a little with the number of ads that you show up there. When we did our first eye tracking study, we usually didn’t see any more than two ads, and that increased to three shortly after. Have you found the right balance with what appears above organic results as far as sponsored results?

Diane: I would say that it’s one of those things where the user base is constantly shifting, the market is constantly shifting. It’s something that we definitely reevaluate frequently. It was definitely a very thought-through decision to move to three, and we actually show three very rarely. We seriously consider, when we show three, whether it is in the best interest of the user. There’s a lot of evaluation of the entire page at that point, not even just the ads, as to whether or not it was the right thing. We’re very careful to make sure that we’re constantly at the right balance. It’s definitely something that we look at.

Gord: One of the things we’ve noticed in our eye tracking studies is that there’s a tendency on the part of users to “break off” results into consideration sets, and the magic number seems to be around four. What we’ve seen is that even if they’re open to looking at sponsored ads, they want to include at least the number one organic result as well, as kind of a baseline for reference. They want to be able to flip back and forth and say, “Okay, that’s the organic result, that’s how relevant I feel that is. If one of the sponsored ads is more relevant, then fine, I’ll click on it.” It seems like four is a good number for the user to be able to concentrate on at one time, quickly, and then make their decision based on that consideration set, which would usually include one or two sponsored ads and at least one organic listing, wherever the highest relevancy is. Does that match what you guys have found as well?

Nick: I don’t think we’ve looked at it in the way of consideration sets, along those lines, but I think that’s consistent with the outcomes that we’ve had, and maybe some of the thought process that led us to our outcome. The net effect is the same. One of the things that we are careful about is making sure that you don’t create an experience where you show no organic results on the page, or at least above the fold on the page. You want to make sure that the user is going to be able to make the decision regarding what they want to click on, and if you just serve the user one type of result you’re not really helping the user make that type of decision. What we care more about is what the user sees in the above-the-fold real estate, not quite so much the full result, and that’s probably relatively consistent across certain sets of screen resolutions.

Gord: One of the things that Marissa said when I talked to her a few days ago was that as Google moves into Universal Search results and we’re starting to see different types of results appear on the page, including in some cases images or videos, that opens the door to potentially looking at different presentations of advertising content as well. How does that impact your quality scoring and ultimately how does that impact the user?

Nick: We need to see. I don’t think we know yet. Ultimately it would be our team deciding whether to do that or not, so fortunately we don’t have to worry too much about hooking up the quality score, because we would design a quality score that would make sense for it. The team that focuses on what we call Ad UI, a sub-group within that, is the team that essentially thinks about what the ads should actually look like.

Diane: And what information can we present that’s most useful to the user?

Nick: So in some cases that information may be an image; in some cases that information may be a video. We need to make sure in doing this that we’re not just showing video ads because video happens to be catchy. We want to make sure that we’re showing video ads because the video actually contains the content that’s useful for the user. With Universal Search we found that video search results, for example, can contain that information, so it’s likely that the paid result set could be the same as well. Again, just as with text ads, we’d need to make sure that whatever we do there is user driven rather than anything else, and that users are actually happy with it. There would be a lot of user experimentation before anything was launched along those lines.

Diane: You can track our blogs as well. All of our experiments show up at some point there.

Gord: Right. Talking a little bit about personalization: you started off by saying that Larry and Sergey have dictated that, in an ideal situation, the ads should be more relevant than the organic results. As a point of interest, in our second eye tracking study, when we looked at the success rate of click-throughs (people actually clicking through to a site that appeared to deliver what they were looking for), for commercial tasks it was in fact the top sponsored ads that had the highest average success rate of all the links on the page. When we’re looking at personalization, one of the things that, again, Marissa said is that you don’t want your organic results and your sponsored results to be too far out of sync. Although personalization is rolling out on the organic side right now, it would make sense, if it can significantly improve relevancy for the user, for it to eventually fold into the sponsored results as well. So again, that might be something that would potentially impact quality scoring in the future, right?

Nick: Yes. So we have been looking at some…I’m not sure if the right word is personalization, or some sort of user-based or task-based…changes to how we think about ads. We have made changes to try to get a sense of what the user’s trying to do right now; whether they’re, for example, in a commercial mindset, and alter how we do ads somewhat based on that type of understanding of the user’s current task. We’ve done much less with…we’ve done nothing, really…with trying to build profiles of the user and trying to understand who the user is, whether the user is a man or a woman, or 45 years old or 25 years old. We haven’t seen that that’s particularly useful for us. You don’t want to personalize users into a corner; you don’t want to create a profile of them that’s not actually reflective of who they are. We don’t want to freak the user out. If you have a qualified user you could risk alienating that user. So we’ve been very hesitant to move in that direction, and in general, we think there’s a lot more we can do that doesn’t require going down the profile path.

Diane: You can think of personalization in a couple of different ways, right? It can manifest itself in the results you actually show. It can also be more about how many ads you show, or even the presentation of those ads with regard to actual information. Those sorts of things. There are many possible directions that can be more fruitful than, as Nick points out, profiling.

Gord: Right, right.

Nick: For example, one of the things that you could theoretically do is…as you know, we changed the background color of our top ads from blue to yellow, because we found that yellow works better in general. You might find that for certain users green is better; you might find that for certain users blue is actually better. Those types of things, where you’re able to change things based on what users are responding to, are more appealing to us than these broad user classification types of things, which seem somewhat sketchy.

Gord: It was funny. Just before this interview, I was actually talking to Michael Ferguson at Ask.com, and one of the things he mentioned that I thought was quite interesting was a different take on personalization. It may get to the point where it’s not just using personalization for the sake of disambiguating intent and improving relevancy; it might actually be using personalization to present results or advertising messages in the form that’s most preferred by the user. So some may prefer video ads. Some may prefer text ads, and they may prefer shorter text ads or longer text ads. I just thought that was really interesting: looking at personalization to actually customize how the results are being presented to you, in what format.

Nick: Yes.

Gord: One last question. You’ve talked before about quality scoring and how it impacts two different things: the minimum bid price and the actual position on the page. And the fact that there are generally more factors in the “softer” or “fuzzier” minimum bid algorithm than there are in the “hard” algorithm that determines position on the page. Ideally you would like to see more factors included in all of it. Where is Google on that right now?

Nick: There are probably two things. One is that when setting the minimum bid, we have much less information available to us. We don’t know what the specific query is that the user issued. We don’t know what time of day it is. We know very little about the context of what the user is actually trying to do. We don’t know what property the user’s on. There’s a whole lot that we don’t know. What we do when we set a minimum bid has to be much coarser. We just need to be able to say: what do we think this keyword is, what do we think the quality of the ad is, does the keyword meet the objective of the landing page, and make a judgment based on that. But we don’t have the ability to be more nuanced in terms of actually taking into account the context of how the ad is likely to show up. There’s always going to be a difference between what we can use when we set the minimum bid and what we use at auction time to set the position. The other piece of it, though, is that there are certain signals that only affect the minimum bid. Let me give you an example. Landing page quality normally impacts the minimum bid, but it doesn’t impact your ranking. The reason for that comes mostly from our decision about how to launch the product and what we thought was the most expedient way to improve the landing page quality of our ads, rather than what we think will be the long-term design of the system. So I’d expect that signals like landing page quality should eventually impact not only the minimum CPC but also the ranking, which ads show at the top of the page, and things like that. That’s where you’ll see more convergence. But there’s always going to be more context that we can get at query time to use for the auction than we can for the minimum CPC.
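The asymmetry Nick describes, a coarse keyword-level score for setting minimum bids versus a richer score at auction time, can be sketched as two scoring functions over different feature sets. Everything here (the signal names and weights) is a hypothetical illustration, not Google’s actual formulas; the point is only that the auction-time score consumes query context the minimum-bid score never sees:

```python
# Hypothetical sketch of the two scoring contexts Nick describes.
# Minimum-bid scoring sees only static, keyword-level signals;
# auction-time scoring can also use query-level context.
# All signal names and weights are made up for illustration.

def min_bid_quality(keyword_ctr: float, landing_page_quality: float) -> float:
    """Coarse score: no query, time of day, or property is available yet."""
    return 0.7 * keyword_ctr + 0.3 * landing_page_quality

def auction_quality(keyword_ctr: float, landing_page_quality: float,
                    query_match: float, context_boost: float) -> float:
    """Richer score: adds query- and context-level signals at auction time."""
    base = 0.5 * keyword_ctr + 0.2 * landing_page_quality
    return base + 0.2 * query_match + 0.1 * context_boost

# The same ad can score quite differently once query context is known.
coarse = min_bid_quality(keyword_ctr=0.04, landing_page_quality=0.8)
rich = auction_quality(keyword_ctr=0.04, landing_page_quality=0.8,
                       query_match=0.9, context_boost=0.5)
```

Note that `landing_page_quality` appears in both functions here; in the system Nick describes circa 2007 it only fed the minimum bid, and his point is that the two feature sets should converge over time even though query context will always remain auction-only.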

Don’t Put Search on Your Site if it Sucks

I just spent 15 minutes wrestling with the internal search tool on AdWeek trying to track down an article. I had the title, knew what the article was about and the month it ran, and still I was unable to track it down. I was getting hundreds of results, supposedly ranked by relevance, and I was unable to filter them down. Then I searched on Google with just the name of the article and the publication and, bang, got it in 0.03 seconds. I don’t know how much AdWeek spent on their enterprise search tool, but it was too much.

Interview with Jakob Nielsen on the Future of the SERP (and other stuff)

I recently had the opportunity to talk to Jakob Nielsen for a series I’m doing for Search Engine Land about what the search results page will look like in 2010. Jakob is called a “controversial guru of Web design” on Wikipedia (Jakob gets his own shots in at Wikipedia in this interview) because of his strongly held views on the use of graphics and Flash in web design. I have a tremendous amount of respect for Jakob, even though we don’t agree on everything, because of his no-frills, common sense approach to the user experience. So I thought it was quite appropriate to sound him out on his feelings about the evolution of the search interface, now that, with Universal Search and Ask’s 3D Search, we seem to be seeing more innovation in this area in the last 6 months than we’ve seen for the last 10 years. Jakob is not as optimistic about the pace of change as I am, but the conversation was fascinating. We touched on Universal Search, personalization, banner blindness on the SERP and scanning of the web in China, amongst other things. Usability geeks…enjoy!

Gord: For today I only really have one question, although I’m sure there will be lots of branch-offs from it. It revolves around what the search engine results page may look like in 2010. I thought you would be a great person to lend your insight on that.

Jakob: Ok, sure.

Gord: So why don’t we just start? Obviously there are some things that are happening now with personalization and universal search results. Let’s just open this up. What do you think we’ll be seeing on a search results page in 3 years?

Jakob: I don’t think there will be that big a change, because 3 years is not that long a time. I think if you look back three years to 2004, there was not really that much difference from what there is today. I think if you look back ten years there still isn’t that much difference. I actually just took a look at some old screen shots in preparation for this call, of various search engines like Infoseek and Excite and those guys that were around at that time, and Google’s beta release, and the truth is that they were pretty similar to what we have today as well. The main difference, the main innovation, seems to have been to abandon banner ads, which we all know now really do not work, and replace them with text ads, and of course that affected the appearance of the page. And of course now the text ads are driven by the keywords, but in terms of the appearance of the page, they have been very static, very similar for 10 years. I think that’s quite likely to continue. You could speculate on possible changes, though; I think there are three different big things that could happen.

One of them will not make any difference to the appearance, and that is a different prioritization scheme. Of course, the big thing that has happened in the last 10 years was a change from an information-retrieval-oriented relevance ranking to more of a popularity relevance ranking. And I think we can see a change to maybe more of a usefulness relevance ranking. I think there is a tendency now for a lot of not very useful results to be dredged up that happen to be very popular, like Wikipedia and various blogs. They’re not going to be very useful or substantial to people who are trying to solve problems. So I think that, rather than counting links and all of that, there may be a change and we may go to a more behavioral judgment as to which sites actually solve people’s problems, and those will tend to be more highly ranked.

But of course, from the user perspective, that’s not going to look any different. It’s just going to be that the top one is going to be the one that the various search engines, by whatever means they think of, will judge to be the best, and that’s what people will tend to click first, and then the second one and so on. That behavior will stay the same, and the appearance will be the same, but the sorting might be different. That, I think, is actually very likely to happen.

Gord: So, as you say, those will be relevancy changes at the back end. You’re not seeing the paradigm of the primarily text-based interface, with 10 organic results and 8-9 sponsored results where they are, changing much in the next 3 years?

Jakob: No. I think you can speculate on possible changes to this as well. There could be small changes; there could be big changes. I don’t think big changes. The small change is, potentially, a change from the one-dimensional linear layout to more of a two-dimensional layout, with different types of information presented in different parts of the page, so you could have more of a newspaper metaphor in terms of the layout. I’m not sure if that’s going to happen. It’s a hugely dominant user behavior to scan a linear list, and so this attempt to put other things on the side, to tamper with the true layout, the true design of the page, to move it from being just a list, is going to be difficult, but I think it’s a possibility. There are a lot of types of information that the search engines are crunching on, and one approach is to unify them all into one list based on their best guess as to relevance or importance or whatever, and that is what I think is most likely to happen. But it could also be that they decide to split it up and say, well, out here to the right we’ll put shopping results, and out here to the left we’ll put news results, and down here at the bottom we’ll put pictures, and so forth, and I think that’s a possibility.

Gord: Like Ask is experimenting with right now with their 3D Search. They’re actually breaking it up into 3 columns, using the right rail and the left rail to show non-web results.

Jakob: Exactly, except I really want to say that it’s 2 dimensional, it’s not 3 dimensional.

Gord: But that’s what they’re calling it.

Jakob: Yes, I know, but that’s a stupid word. I don’t want to give them any credit for that. It’s 2 dimensional. It’s evolutionary in the sense that search results have been 1 dimensional, which is linear, just scrolling down the page, so potentially going 2 dimensional (they can call it three, but it is two) is the big step, doing something differently, and that may take off and more search engines may do that if it turns out to work well. But I think it’s more likely that they will work on ways of integrating all these different sources into a linear list. Those are two alternative possibilities, and it depends on how well they are able to produce a single sorted list of all these different data sources. Can they really guess people’s intent that well?

All this talk about personalization…that is incredibly hard to do. Partly because it’s not just personalization based on a user model, which is hard enough already. You have to guess that this person prefers this style of content and so on. But furthermore, you have to guess what this person’s “in this minute” interest is, and that is almost impossible to do. I’m not too optimistic about the ability to do that. In many ways I think the web provides self-personalization; you know, self-service personalization. I show you my navigational scheme of things you can do on my site and you pick the one you want today, and the job of the web designer is, first of all, to design choices that adequately meet common user needs, and secondly, to simply explain these choices so people can make the right ones for them. And that’s what most sites do very poorly. Both of those two steps are done very poorly on most corporate websites. But when it’s done well, that leads to people being able to click, click, and they have what they want, because they know what they want, and it’s very difficult for the computer to guess what they want in this minute.

Gord: When we bring it back to the search paradigm, giving people that kind of control, to determine the type of content that’s most relevant to them, requires them interacting with the page in some way.

Jakob: Yes, exactly, and that’s actually my third possible change. My first one was a change to the ranking scheme; the second one was the potential change to two-dimensional layouts. The third one is to add more tools to the search interface to provide query reformulation and query refinement options. I’m also very skeptical about this, because it has been tried a lot of times and it has always failed. If you go back and look at old screen shots (you probably have more than I have) of all the different search engines that have been out there over the last 15 years or so, there have been a lot of attempts to do things like this. I think Microsoft had one where you could prioritize one thing more, prioritize another thing more. There was another slider paradigm. I know that Infoseek, many, many years ago, had alternative query terms you could search on with just one click, which was very simple. Yet most people didn’t even do that.

People are basically lazy, and this makes sense. Basic information foraging theory, which is, I think, the one theory that basically explains why the web is the way it is, says that people want to expend minimal effort to gain their benefits. And this is an evolutionary point that has come about because the people, or the creatures, who don’t exert themselves are the ones most likely to survive when there are bad times or a crisis of some kind. So people are inherently lazy and don’t want to exert themselves. Picking from a set of choices is one of the least effortful interaction styles, which is why this point-and-click interaction in general seems to work very well. Whereas tweaking sliders, operating pull-down menus and all that stuff, that is just more work.

Gord: Right.

Jakob: But of course, this depends on whether we can make these tools useful enough, because it’s not that people will never exert themselves. People do, after all, still get out of bed in the morning, so people will do something if the effort is deemed worthwhile. But it has to be the case that if you tweak the slider, you get remarkably better results for your current needs. And it has to be really easy to understand. I think this has been a problem for many of these ideas. They made sense to the search engine experts, but average users had no idea what would happen if they tweaked these various search settings, and so people tended not to do it.

Gord: Right. When you look at where Google appears to be going, it seems like they’ve made the decision, “we’ll keep the functionality transparent in the background; we’ll use our algorithms and our science to try to improve the relevancy,” whereas someone like Ask might be more likely to offer more functionality and more controls on the page. So if Google is going the other way, they seem to be saying that personalization is what they’re betting on to make the search experience better. You’re not too optimistic that that will happen without some sort of interaction on the part of the user?

Jakob: Not, at least, in a small number of years. I think if you look very far ahead, you know, 10, 20, 30 years or whatever, then I think there can be a lot of things happening in terms of natural language understanding and making the computer more clever than it is now. If we get to that level then it may be possible to have the computer better guess at what each person needs without the person having to say anything, but I think right now it is very difficult. The main attempt at personalization so far on the web is Amazon.com. They know so much about the user, because they know what you’ve bought, which is a stronger signal of interest than if you had just searched for something. You search for a lot of things that you may never actually want, but actually paying money, that’s a very, very strong signal of interest. Take myself, for example. I’m a very loyal shopper at Amazon. I’ve bought several hundred things from them, and despite that they rarely recommend successfully…sometimes they actually recommend things I like, but things I already have. I just didn’t buy it from them, so they don’t know I have it. But it’s very, very rare that they recommend something where I say, “Oh yes, I really want that,” so I actually buy it from them. And that’s despite the fact that the economic incentive is extreme: recommending things that people will buy, when they know what people have bought. Despite that, and despite their work on this for 10 years already (it’s always been one of their main dreams to personalize shopping), they still don’t have it done very well. What they have done very well is this “just in time” relevance, or “cross-sell” as it’s normally called. So when you are on one book’s page, or one product in general, they will say, here are 5 other ones that are very similar to the one you’re looking at now. But that’s not saying, in general, “I’m predicting that these 5 books will be of interest to you.” They’re saying, “Given that you’re looking at this book, here are 5 other books that are similar.” The lead that you’re interested in these 5 books comes from your looking at that first book, not from them predicting or having a more elaborate theory about what I like.

Gord: Right.

Jakob: What “I like” tends not to be very useful.

Gord: Interesting. Jakob, I want to be considerate of your time, but I do have one more question I’d love to run by you. As the search results move towards more types of content, we’re already seeing more images showing up on the actual search results page for a lot of searches. Soon we could be seeing video and different types of information presented on the page. First of all, how will that impact our scanning patterns? We’ve both done eye tracking research on search engine results, so we know there are very distinct patterns. Second of all, Marissa Mayer, in a statement not that long ago, seemed to backpedal a bit on the claim that Google would never put display ads back on a search results page, seeming to open a door for non-text ads. Would you mind commenting on those two things?

Jakob: Well, they’re actually quite related. If they put up display ads, then they will start training people to exhibit more banner blindness, which will also cause them to not look at other types of multimedia on the page. So as long as the page is very clean and the only ads are the text ads that are keyword driven, then I think that putting pictures, and probably even videos, on there actually works well. The problem, of course, is that they are an inherently more two-dimensional media form, and video is 3 dimensional, because it’s two dimensional (graphic) and the third dimension is time, so they become more difficult to process in this linear, scan-down-the-page type of pattern. But on the other hand, people can process images faster; with just one fixation you can “grok” a lot of what’s in an image. So I think that if they can keep the pages clean, then it will be incorporated into people’s scanning pattern a little bit more: “Oh, this can give me a quick idea of what this is all about and what type of information I can expect.” This of course assumes one more thing, which is that they can actually select good pictures.

Gord: Right.

Jakob: I would be kind of conservative when tweaking these algorithms, you know, in terms of what threshold should be crossed before you put an image up. I would really say tweak it so that you only put an image up when you’re really sure that it’s a highly relevant, good image. If there start to be too many images, then we start seeing the obstacle course behavior. People scan around the images, as they do on a lot of corporate websites, where the images tend to be stock photos of glamour models that are irrelevant to what the user’s there for. Then people evolve behavior where they look around the images, which is very contrary to what first principles of perceptual psychology would predict, which is that the images would be attractive. Images turn out to be repelling if people start feeling like they are irrelevant. It’s a similar effect to banner blindness. If there’s any type of design element that people start perceiving as being irrelevant to their needs, then they will start to avoid that design element.

Gord: So, they could be running the risk of banner blindness, by incorporating those images if they’re not absolutely relevant…

Jakob: Exactly.

Gord: …to the query. Ok, thank you so much. Just out of interest, have you done a lot of usability work with Chinese users?

Jakob: Some. I actually read the article you had on your site. We haven’t done eye tracking studies, but we did some studies when we were in Hong Kong recently, and at that level the findings were very much the same, in terms of PDF being bad and how people go through shopping carts. So a lot of the transactional behavior, the interaction behavior, is very, very similar.

Gord: It was interesting to see how they were interacting with the search results page. We’re still trying to figure out what some of those interactions meant.

Jakob: I think it’s interesting. It can possibly be that the alphabet or character set is less scannable, but it is very hard to say, because when you’re a foreigner these characters look very blocky; it looks very much like a lot of very similar scribbles. But on the other hand, it could very well be the same: people who don’t speak English would view a set of English words as a lot of little speck marks on the page, and yet words in English or in European languages are highly scannable because they have these shapes.

Gord: Right.

Jakob: So I think this is where more research is really called for to find out. But I think it’s possible; you know, the hypothesis is that it’s just less scannable because the actual graphical or visual appearance of the words just doesn’t make the words pop as much.

Gord: There seem to be some conditioning effects as well, and intent plays a huge part. There are a lot of moving pieces with that that we’re just trying to sort out. The relevancy of the results is a huge issue, because the relevancy in China is really not that good, so…

Jakob: It seems like it would have a lot to do with experience and the amount of information. If you compare back with uses of search in the ’80s, for example, before the web started, that was also a much more thorough reading of search results, because people didn’t do search very well. Most people never did it, actually, and when you did do it you would search through a very small set of information, and you had to carefully consider each possibility. Then, as WebCrawler and Excite and AltaVista and the others started, users got more used to scanning; they got more used to filtering out lots of junk. So the paradigm has completely changed from “find everything about my question” to “protect myself against overload of information”. That paradigm shift requires you to have lived in a lot of information for a while.

Gord: I was actually talking to the Chinese engineering team down at Yahoo!, and that’s one thing I said. If you look at how the Chinese are using the internet, it’s very similar to North America in ’99 or 2000. There’s a lot of searching for entertainment files and MP3s. They’re not using it for business and completing tasks nearly as much. It’s an entertainment medium for them, and that will impact how they’re browsing things like search results. It’ll be interesting to watch, as that market matures and as users get more experienced, whether that scanning pattern condenses and tightens up a lot.

Jakob: Exactly. And I would certainly predict it would. There could be a language difference, basically a character-set difference, as we just discussed, but I think the basic information foraging theory is still a universal truth. People have to protect themselves against information overload, if you have information overload. As long as you’re not accustomed to that scenario, then you don’t evolve those behaviors. But once you get it… I think a lot of those people have lived in an environment where there’s not a lot of information, only one state television channel and so forth. Gradually they’re getting satellite television and millions of websites, and many places where they can shop for given things, but that’s going to be an evolution.

Gord: The other thing we saw was that there was a really quick scan right to the bottom of the page, within 5 seconds, just to determine how relevant these results were, were these legitimate results? And then there was a secondary pass through where they went back to the top and then started going through. So they’re very wary of what’s presented on the page, and I think part of it is lack of trust in the information source and part of it is the amount of spam on the results page.

Jakob: Oh, yes, yes.

Gord: Great, thanks very much for your time, Jakob.

Jakob: Oh and thank you!

Notes from China

First published May 31, 2007 in Mediapost’s Search Insider

I let Chris Sherman convince me that if I had to choose one overseas show this year, it should be SES China in Xiamen. Part of me is thanking Chris, and part of me is cursing the hell out of him. To be fair, he warned me that this is a cultural shock of significant magnitude. He was right.

I’ll leave the personal observations for my blog. One of the reasons I came was that I knew this was the most important online market in the world, and I had to dip my toe in for myself. For that, I do have to thank Chris. A few weeks ago I was in Florida for the Search Insider Summit, and made a note of some advice Esther Dyson passed in the keynote presentation to the ersatz “Bill Gates” (played by David Vise): “Make sure your kids learn Mandarin.” Xie Xie (thank you), Esther. You’re absolutely right.

Big, But Just Beginning

Let me give you some sense of the magnitude of this market. Right now, the Chinese Internet market is the second largest in the world, only a whisker behind the U.S.: 150 million users to the U.S.’s 154 million. But the U.S. has 68% penetration, while that 150 million represents only about 10% of the Chinese population. At full saturation, the Chinese market will be almost seven times as large as that of the U.S.
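The arithmetic behind that projection is straightforward. Here’s a quick back-of-envelope sketch, assuming the user counts and penetration rates quoted above hold through to saturation:

```python
# Back-of-envelope check of the "almost seven times" projection,
# using the figures quoted in the column (user counts in millions).
us_users, us_penetration = 154, 0.68   # 154M users at 68% penetration
cn_users, cn_penetration = 150, 0.10   # 150M users at roughly 10% penetration

us_saturated = us_users / us_penetration   # implied total U.S. market
cn_saturated = cn_users / cn_penetration   # implied total Chinese market

ratio = cn_saturated / us_saturated
print(f"U.S. at saturation: {us_saturated:.0f}M users")
print(f"China at saturation: {cn_saturated:.0f}M users")
print(f"China/U.S. ratio: {ratio:.1f}x")   # roughly 6.6x, "almost seven times"
```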

But don’t make the mistake of projecting the U.S. experience onto the emerging Chinese market. Chinese culture is vastly different from ours, and their online community reflects this difference. For one thing, much of the Chinese online experience will likely happen through mobile devices, since the mobile market is much more mature here. While the number of Internet subscribers is 150 million, the number of cell phone subscribers is significantly higher, nearly 500 million (as of October 2006), and is growing at a rate of 5.5 million subscribers per month. For another, the Sino mind just clicks at a different speed than ours.

Hot and Noisy Online

One of my favorite phrases I’ve learned while here is renao, which loosely translates as “hot and noisy.” It was explained to me by Deborah Fallows of the Pew Internet Group, a U.S. expat living in Shanghai for two years with her husband, author and journalist Jim Fallows. It sums up so much of what I’ve seen here. The Chinese like to be bombarded by visual stimuli. They operate at a frenetic pace, juggling several things at once, each loudly demanding attention. Some look at this as a lack of maturity in the Asian market. Western eyes see Chinese Web sites as garish, and we think this is because the designers aren’t very sophisticated yet. Perhaps it’s just designers catering to their audience, who like it “hot and noisy.”

Savoring Information

The other difference is how Western cultures treat information, compared to the Chinese. In the West, information is in no short supply, and for the most part, we inherently trust the source of that information. We believe most things we read online to be true. Our biggest challenge is to wade through the mountain of information available to us and to eliminate the irrelevant. The Chinese treasure information yet have a healthy skepticism as to its veracity. While Western Web users are ruthless in their filtering of information, particularly on a search page, the Chinese are more apt to gather and consider, taking time to digest and choose. They often have multiple windows open at the same time, both as a way to keep busy with the slower load times typical in China, and also because they like their desktop “hot and noisy.”

Keeping an Eye on the Market

One of the reasons I was here was to share preliminary findings from an eye-tracking study we did with Chinese users on the two main Chinese search properties, Baidu and Google.cn. This difference in user behavior became very apparent in the study. In North America, the average interaction with a search results page, from launch to first click, is generally less than 10 seconds. In the Chinese study, we saw averages of 30 seconds on Google and up to a minute on Baidu. While North American scan activity is condensed in the Golden Triangle, in China, it’s spread around the page.

It’s fascinating to watch an individual session. The eye zips around the page, picking up information in an apparently haphazard manner. Baidu has been taken to task for the opaque nature of its listings, where you can pay for placement. The results are also much more prone to affiliate spam (on both engines, but particularly Baidu) than we see in North America. But the Chinese don’t mind. Baidu has captured 62% of the search market here, compared to 20% for Google. After all, lack of trust in information is nothing new to the Chinese. Why should it be any different on a search engine?

Everyone I’ve talked to here agrees. This is a market ready to explode. Innovation is happening organically and at an incredibly rapid pace. The development cycle to turn out new functionality on Chinese sites is 30% to 50% as long as that of their North American rivals. As somebody told me, “In China, you point, shoot and then aim. Deliberation will kill you here.”

This is a lesson Google is learning the hard way. Chris noted that the level of sophistication has increased immensely since the last trade show here, in 2006. The Chinese Internet market is like a Beijing taxi: there may be no logic to its route, but it’s sure getting to wherever it’s going in a hurry!

Universal Search and Other Surprises from Google’s Searchology

When Google yesterday invited a number of reporters to come down to Mountain View for an event they called Searchology, I figured they had something in the works. I had to turn down the invitation because of other commitments, but we sent Enquiro’s Director of Technology and analytics blogger Manoj Jasra down in my stead. Sure enough, just after noon yesterday, I received a press release announcing the introduction of universal search. I haven’t had a chance to talk to Manoj about what else Google may have unveiled in Mountain View yesterday, but even just working my way through the official release from Google gave me plenty of food for thought. For the extensive list of the announcements and some running commentary, check out Danny’s post on Searchengineland.

To me, the one thing that jumps out in this is the announcement of Universal Search. Basically, Universal Search is the breaking down of the information silos that currently exist on Google and the blending of them into a single set of results. The changes right now are very subtle. Web results still dominate the typical results page, and the primary thing a user would notice is the additional dynamically generated navigation links that sit just above the results.

[Screenshot: Google universal search results page]

The key to universal search results is an on-the-fly algorithm that looks across all of Google’s information sources and prioritizes and ranks all the items coming from these disparate sources based on the user intent. Now, it’s in those last five words, “based on the user intent” that the really important piece of this comes out. Just a few weeks ago, I interviewed Marissa Mayer about the inclusion of Web history in the dataset to calculate personalized search results. This just gives Sep Kamvar and his personalization algorithm a lot more to chew on as they determine user intent. During the interview, I asked Marissa Mayer if personalization allows Google to be more confident in delivering vertical results. Marissa indicated that this was not an area they were currently looking at.

There are a lot of different things that we could do with this data. I’ll be totally honest. Verticals isn’t something that has been first and foremost in our minds so I don’t really think there’s a strong vertical angle here at the moment.

To me it just didn’t make sense. Couple that with yesterday’s announcement of Universal search results and I’ve got to conclude that Marissa was throwing up a smokescreen.

Personalized search is the engine that is going to drive universal search. The two are inextricably linked. When you look at the wording Google throws around about the on-the-fly ranking of content from all the sources for Universal Search, it’s exactly the same wording they use for the personalization algorithm. It operates on the fly, looks at the content in the Google index and re-ranks it according to the perceived intent of the user, based on search history, Web history and other signals. It’s not a huge stretch to extend that same real-time categorization of content across all of Google’s information silos. That is, in fact, what Google’s announcement yesterday said. Call it a silo, call it a vertical; the end result is the same. As Google gains more confidence in disambiguating user intent, more specific types of search results, extending beyond Web results, will be included on the results page and presented to the user.

This introduces something else that opens up some interesting implications for Google. And again, if they choose to go down this path, it flies in the face of something Marissa Mayer has previously stated. On the search results page as we know it, display and other types of advertising just don’t work that well. The search results page is heavily text-based. We look for text, we respond to text, we click on text. Anything that’s not text acts as an interruption and a distraction. There’s no place on this page for display or rich media advertising.

But if you mix up the search results page and start including things like images, video clips, maps, and icons for audio files, you move away from the common paradigm of the text-based search results page. The Google page becomes much more like a personalized, on-the-fly portal built around the intent of our query. As such, it includes stimuli from a lot of different sources, presented in a lot of different ways. There will be many things fighting for your attention. And in this paradigm, perhaps display and rich media advertising work better. In another announcement from Google, Marissa Mayer appears to have backtracked and opened the door for this.

Yesterday, Marissa responded to a question about possible inclusion of non text-based ads in this way:

Well, we don’t have anything to announce on that today. I do think this opens the door for the introduction of richer media into the search results page. We are now going to understand how users interact with that. And as Alan always likes to say, search is about finding the best answer, not just the best URL or the best textual snippet.

For us ads are answers as well. Searching ads is just as hard as searching the Web, as searching images. And so I was hoping that we could bring some of these same advances in terms of the richness of media to ads.

Greg Sterling, in his post on Search Engine Land, calls it something of a bombshell (Greg, I’m now regretting that I didn’t attend, as I would have loved to chat with you about this), and I agree. This is a significant retraction of Google’s long-running stand on keeping display ads off the SERP:

There will be no banner ads on the Google homepage or web search results pages. There will not be crazy, flashy, graphical doodads flying and popping up all over the Google site. Ever.

Google said in their announcements that the changes for the user will be subtle at first. In fact, the dynamically generated navigation links that appear above the search results will largely be ignored by most users. They won’t even know they exist. But in typical Google fashion, this tentative presentation of new functionality will be an incremental one. The typical path that Google takes when introducing new functionality is:

  • subtly introduce new navigation options in the way of links that tend to be out of the primary scan path
  • make it an opt-in experience for the user
  • gradually roll this functionality into a default opt-in
  • eventually integrate it more fully into the standard presentation of results
  • move to full integration and remove the ability for the user to opt out

If Google goes down this path with both universal and personal search, you can expect to see a substantially different look for search results in the near future. And as with most things we’ve talked about that Google is looking at introducing, there will be a trade-off between overall functionality for most users and a relinquishing of control for a small number of users.

My final point for this post is the speed with which Google is introducing new search innovations. A few weeks ago I posted that Google may be treating search as the forgotten child, devoting more attention to the sexier new channels it was acquiring, including pretty much everything under the sun. Matt Cutts was quick to post a comment saying that Google was still very much involved with search and that there would be a number of new things rolling out in the near future. It appears that I didn’t know what the hell I was talking about and now have to eat my words, as the announcements over the last few weeks indicate that Google is still very much in the search game and is moving forward at what, for them, is breakneck pace.

I’ve often stated before that Google was the victim of its own success. Because they have such a large slice of the search market, any change to the actual presentation of the search pages comes with a lot of risk. It’s a major monetization channel for them, their biggest one by far, and any change in user experience through the introduction of new functionality comes with the potential of dramatically reducing click-throughs on sponsored ads. I predicted that this would make it tough for Google to really innovate with search and that we would probably be looking to the smaller players to aggressively pursue innovation. Interestingly, much of my recent conversation with Ask’s usability team lead, Michael Ferguson, revolved around this point. That interview will be running tomorrow on Search Engine Land, with the full transcript posted to this blog. Look at what Ask has been doing with AskX:

[Screenshot: AskX results page]

It’s very similar to what Google says it will be doing with universal search results: taking content from a number of different sources and rolling it into one combined search results page. It came as a complete surprise to me when I read the release indicating that Google is moving aggressively down the same path. Google will not be taking the path that Ask is, aggressively presenting new functionality on its main site; Google will introduce it incrementally, bit by bit. But expect the evolution of the search experience on Google to move fairly quickly.

All of Google’s announcements in the last few months point in the same direction: a highly personalized, highly relevant portal to all of Google’s information. Here’s my other prediction. While Marissa was very careful in past interviews to state that personalization is currently impacting only the organic search results, with no work being done on the personalized presentation of sponsored content, I smell another smokescreen. Personalized presentation of advertising content is just too huge a revenue opportunity for Google, and we’ll be seeing it in the very near future.