I recently had the opportunity to talk to Jakob Nielsen for a series I’m doing for Search Engine Land about what the search results page will look like in 2010. Jakob is called a “controversial guru of Web design” on Wikipedia (Jakob gets his own shots in at Wikipedia in this interview) because of his strongly held views on the use of graphics and Flash in web design. I have a tremendous amount of respect for Jakob, even though we don’t agree on everything, because of his no-frills, common-sense approach to the user experience. And so I thought it was quite appropriate to sound him out on his feelings about the evolution of the search interface, now that, with Universal Search and Ask’s 3D Search, we seem to be seeing more innovation in this area in the last 6 months than we’ve seen for the last 10 years. Jakob is not as optimistic about the pace of change as I am, but the conversation was fascinating. We touched on Universal Search, personalization, banner blindness on the SERP and scanning of the web in China, amongst other things. Usability geeks…enjoy!
Gord: For today I only really have one question, although I’m sure there will be lots of branch-offs from it. It revolves around what the search engine results page may look like in 2010. I thought you would be a great person to lend your insight on that.
Jakob: Ok, sure.
Gord: So why don’t we just start? Obviously there are some things that are happening now with personalization and universal search results. Let’s just open this up. What do you think we’ll be seeing on a search results page in 3 years?
Jakob: I don’t think there will be that big a change, because 3 years is not that long a time. If you look back three years to 2004, there was not really that much difference from what there is today. If you look back ten years, there still isn’t that much difference. In preparation for this call I actually took a look at some old screen shots of various search engines that were around at that time, like Infoseek and Excite, and Google’s beta release, and the truth is that they were pretty similar to what we have today as well. The main difference, the main innovation, seems to have been to abandon banner ads, which we all know now really do not work, and replace them with text ads, and of course that affected the appearance of the page. And of course now the text ads are driven by the keywords, but in terms of the appearance of the page, search engines have been very static, very similar for 10 years. I think that’s quite likely to continue. You could speculate about possible changes; I think there are three different big things that could happen.
One of them will not make any difference to the appearance, and that is a different prioritization scheme. The big thing that has happened in the last 10 years was a change from an information-retrieval-oriented relevance ranking to more of a popularity relevance ranking. And I think we may see a change to more of a usefulness relevance ranking. There is a tendency now for a lot of not very useful results that happen to be very popular, like Wikipedia and various blogs, to be dredged up. They’re not going to be very useful or substantial to people who are trying to solve problems. So rather than counting links and all of that, there may be a change, and we may move to a more behavioral judgment as to which sites actually solve people’s problems, and those will tend to be more highly ranked.
But of course from the user perspective, that’s not going to look any different. It’s just going to be that the top result is the one that the various search engines, by whatever means they think of, judge to be the best, and that’s what people will tend to click first, and then the second one and so on. That behavior will stay the same, and the appearance will be the same, but the sorting might be different. That, I think, is actually very likely to happen.
Gord: So, as you say, those would be relevancy changes at the back end. But the paradigm of the primarily text-based interface, with 10 organic results and 8-9 sponsored results where they are now, you don’t see that changing much in the next 3 years?
Jakob: No. You can speculate on possible changes to this as well. There could be small changes, there could be big changes; I don’t think big changes. The small change is, potentially, a move from the one-dimensional linear layout to more of a two-dimensional layout, with different types of information presented in different parts of the page, so you would have more of a newspaper metaphor in terms of the layout. I’m not sure that’s going to happen. Scanning a linear list is a hugely dominant user behavior, so this attempt to put other things on the side, to tamper with the true layout, the true design of the page, to move it away from being just a list, is going to be difficult, but I think it’s a possibility. There are a lot of types of information that the search engines are crunching on, and one approach is to unify them all into one list based on the engine’s best guess as to relevance or importance or whatever, and that is what I think is most likely to happen. But it could also be that they decide to split it up, and say, well, out here to the right we’ll put shopping results, and out here to the left we’ll put news results, and down here at the bottom we’ll put pictures, and so forth, and I think that’s a possibility.
Gord: Like what Ask is experimenting with right now with their 3D Search. They’re actually breaking it up into 3 columns, and using the right rail and the left rail to show non-web-based results.
Jakob: Exactly, except I really want to say that it’s 2 dimensional, it’s not 3 dimensional.
Gord: But that’s what they’re calling it.
Jakob: Yes, I know, but that’s a stupid word. I don’t want to give them any credit for that. It’s 2 dimensional. It’s evolutionary in the sense that search results have been 1 dimensional, which is linear, just scroll down the page, so going 2 dimensional (they can call it three but it is two) is the big step, doing something different, and that may take off, and more search engines may do it if it turns out to work well. But I think it’s more likely that they will work on ways of integrating all these different sources into a linear list. Those are two alternative possibilities, and it depends on how well they are able to produce a single sorted list of all these different data sources. Can they really guess people’s intent that well?
All this talk about personalization: that is incredibly hard to do. Partly because it’s not just personalization based on a user model, which is hard enough already; you have to guess that this person prefers this style of content and so on. Furthermore, you have to guess what this person’s “in this minute” interest is, and that is almost impossible to do. I’m not too optimistic about the ability to do that. In many ways I think the web provides self-personalization, you know, self-service personalization. I show you my navigational scheme of things you can do on my site, and you pick the one you want today. The job of the web designer is, first of all, to design choices that adequately meet common user needs, and secondly, to simply explain those choices so people can make the right ones for them. And that’s what most sites do very poorly. Both of those two steps are done very poorly on most corporate websites. But when it’s done well, that leads to people being able to click, click, and they have what they want, because they know what they want, and it’s very difficult for the computer to guess what they want in this minute.
Gord: When we bring it back to the search paradigm, giving people that kind of control, the ability to determine the type of content that’s most relevant to them, requires them to interact with the page in some way.
Jakob: Yes, exactly, and that’s actually my third possible change. The first was a change to the ranking scheme; the second was the potential change to two-dimensional layouts. The third is to add more tools to the search interface to provide query reformulation and query refinement options. I’m also very skeptical about this, because it has been tried a lot of times and it has always failed. If you go back and look at old screen shots (you probably have more than I have) of all the different search engines that have been out there over the last 15 years or so, there have been a lot of attempts to do things like this. I think Microsoft had one where you could prioritize one thing more, prioritize another thing more. There was another slider paradigm. I know that Infoseek, many, many years ago, had alternative query terms you could search on with just one click, which was very simple. Yet most people didn’t even do that.
People are basically lazy, and this makes sense. Basic information foraging theory, which is, I think, the one theory that best explains why the web is the way it is, says that people want to expend minimal effort to gain their benefits. And this is an evolutionary point that has come about because the people, or the creatures, who don’t exert themselves are the ones most likely to survive when there are bad times or a crisis of some kind. So people are inherently lazy and don’t want to exert themselves. Picking from a set of choices is one of the least effortful interaction styles, which is why point-and-click interaction in general seems to work very well. Whereas tweaking sliders, operating pull-down menus and all that stuff, that is just more work.
Jakob: But of course, this depends on whether we can make these tools useful enough, because it’s not that people will never exert themselves. People do, after all, still get out of bed in the morning, so people will do something if the effort is deemed worthwhile. But it has to be the case that if you tweak the slider you get remarkably better results for your current needs. And it has to be really easy to understand. I think this has been a problem for many of these ideas. They made sense to the search engine experts, but average users had no idea what would happen if they tweaked these various search settings, and so they tended not to use them.
Gord: Right. When you look at where Google appears to be going, it seems like they’ve made the decision, “we’ll keep the functionality transparent in the background, we’ll use our algorithms and our science to try to improve the relevancy”, whereas someone like Ask might be more likely to offer more functionality and more controls on the page. So if Google is going the other way, they seem to be saying that personalization is what they’re betting on to make the search experience better. You’re not too optimistic that that will happen without some sort of interaction on the part of the user?
Jakob: Not, at least, in a small number of years. If you look very far ahead, you know, 10, 20, 30 years or whatever, then I think there can be a lot of things happening in terms of natural language understanding and making the computer more clever than it is now. If we get to that level then it may be possible to have the computer better guess at what each person needs without the person having to say anything, but right now it is very difficult. The main attempt at personalization so far on the web is Amazon.com. They know so much about the user because they know what you’ve bought, which is a stronger signal of interest than if you had just searched for something. You search for a lot of things that you may never actually want, but actually paying money: that’s a very, very strong signal of interest. Take myself, for example. I’m a very loyal shopper at Amazon. I’ve bought several hundred things from them, and despite that they rarely recommend successfully. Sometimes they actually recommend things I like, but things I already have; I just didn’t buy them from Amazon, so they don’t know I have them. It’s very, very rare that they recommend something where I say, “Oh yes, I really want that”, so that I actually buy it from them. And that’s despite the fact that the economic incentive is extreme: recommending things that people will buy, when they know what people have bought. Despite that, and despite their having worked on this for 10 years already (personalizing shopping has always been one of their main dreams), they still don’t have it done very well. What they have done very well is this “just in time” relevance, or “cross-sell” as it’s normally called. So when you are on the page for one book, or one product in general, they will say, here are 5 other ones that are very similar to the one you’re looking at now. But that’s not saying, in general, I’m predicting that these 5 books will be of interest to you.
They’re saying, “Given that you’re looking at this book, here are 5 other books that are similar.” The lead that you’re interested in these 5 books comes from your looking at that first book, not from them predicting, or having a more elaborate theory about, what you like.
Jakob: What “I like” tends not to be very useful.
Gord: Interesting. Jakob, I want to be considerate of your time, but I do have one more question I’d love to run by you. The search results are moving towards more types of media; we’re already seeing more images showing up on the actual search results page for a lot of searches, and soon we could be seeing video and different types of information presented on the page. First of all, how will that impact our scanning patterns? We’ve both done eye tracking research on search engine results, so we know there are very distinct patterns. Second of all, Marissa Mayer, in a statement not that long ago, seemed to backpedal a bit on the position that Google would never put display ads back on a search results page, seeming to open a door for non-text ads. Would you mind commenting on those two things?
Jakob: Well, they’re actually quite related. If they put up display ads, then they will start training people to exhibit more banner blindness, which will also cause them to not look at other types of multimedia on the page. So as long as the page is very clean and the only ads are the text ads that are keyword driven, then I think that putting pictures, and probably even videos, on there will actually work well. The problem, of course, is that these are inherently more two-dimensional media forms, and video is three dimensional, because it’s two dimensional graphically and the third dimension is time, so they become more difficult to process in this linear, scan-down-the-page type of pattern. But on the other hand, people can process images faster; with just one fixation you can “grok” a lot of what’s in an image. So I think that if they can keep the pages clean, then images will be incorporated into people’s scanning patterns a little bit more: “Oh, this can give me a quick idea of what this is all about and what type of information I can expect.” This of course assumes one more thing, which is that they can actually select good pictures.
Jakob: I would be kind of conservative when tweaking these algorithms, you know, in what threshold you should cross before you put an image up. I would really say tweak it such that you only put an image up when you’re really sure that it’s a highly relevant, good image. If there start to be too many images, then we start seeing the obstacle course behavior: people scan around the images, as they do on a lot of corporate websites, where the images tend to be stock photos of glamour models that are irrelevant to what the user’s there for. People evolve behavior where they look around the images, which is very contrary to what first principles of perceptual psychology would predict, namely that the images would be attractive. Images turn out to be repelling if people start feeling like they are irrelevant. It’s a similar effect to banner blindness. If there’s any type of design element that people start perceiving as being irrelevant to their needs, then they will start to avoid that design element.
Gord: So they could be running the risk of banner blindness by incorporating those images if they’re not absolutely relevant to the query. Ok, thank you so much. Just out of interest, have you done a lot of usability work with Chinese users?
Jakob: Some. I actually read the article you had on your site. We haven’t done eye tracking studies, but we did some studies when we were in Hong Kong recently, and at that level the findings were very much the same, in terms of PDFs being bad and how people go through shopping carts. So a lot of the transactional behavior, the interaction behavior, is very, very similar.
Gord: It was interesting to see how they were interacting with the search results page. We’re still trying to figure out what some of those interactions meant.
Jakob: I think it’s interesting. It can possibly be that the alphabet or character set is less scannable, but it is very hard to say, because when you’re a foreigner these characters look very blocky, like a lot of very similar scribbles. But on the other hand, it could very well be the same the other way around: people who don’t speak English would view a set of English words as a lot of little speck marks on the page, and yet words in English or in European languages are highly scannable because they have these shapes.
Jakob: So I think this is where more research is really called for to find out. But I think it’s possible; the hypothesis is that it’s just less scannable because the actual graphical or visual appearance of the words just doesn’t make them pop as much.
Gord: There seem to be some conditioning effects as well, and intent plays a huge part. There are a lot of moving pieces there that we’re still trying to sort out. The relevancy of the results is a huge issue, because relevancy in China is really not that good, so…
Jakob: It seems like it would have a lot to do with experience and the amount of information. If you compare back with the use of search in the 80s, for example, before the web started, that also involved a much more thorough reading of search results, because people didn’t do search very often. Most people never did it, actually, and when you did do it you would search through a very small set of information, and you had to carefully consider each possibility. Then, as WebCrawler and Excite and AltaVista and those people started up, users got more used to scanning; they got more used to filtering out lots of junk. So the paradigm has completely changed, from “find everything about my question” to “protect myself against overload of information”. That paradigm shift requires you to have lived with a lot of information for a while.
Gord: I was actually talking to the Chinese engineering team down at Yahoo!, and that’s one thing I said. If you look at how the Chinese are using the internet, it’s very similar to North America in ’99 or 2000. There’s a lot of searching for entertainment files and MP3s. They’re not using it for business and completing tasks nearly as much. It’s an entertainment medium for them, and that will impact how they’re browsing things like search results. It’ll be interesting to watch, as that market matures and as users get more experienced, whether that scanning pattern condenses and tightens up a lot.
Jakob: Exactly. And I would certainly predict it would. There could be a language difference, basically the character set issue we just discussed, but I think basic information foraging theory is still a universal truth. People have to protect themselves against information overload, if they have information overload. As long as you’re not accustomed to that scenario, you don’t evolve those behaviors. A lot of those people have lived in an environment where there’s not a lot of information: only one state television channel and so forth. Gradually they’re getting satellite television and millions of websites, and gradually they are getting many places where they can shop for given things, but that’s going to be an evolution.
Gord: The other thing we saw was that there was a really quick scan right to the bottom of the page, within 5 seconds, just to determine how relevant these results were: were these legitimate results? And then there was a secondary pass where they went back to the top and started going through. So they’re very wary of what’s presented on the page, and I think part of it is a lack of trust in the information source and part of it is the amount of spam on the results page.
Jakob: Oh, yes, yes.
Gord: Great, thanks very much for your time, Jakob.
Jakob: Oh and thank you!