Exclusive Interview: Larry Cornett, Yahoo Director Of User Experience Design

Note: This is the second in the Just Behave series from Search Engine Land. This one ran back in February, 2007. At the time of this interview, Yahoo Search was retooling and trying to regain some market share lost to Google. They had just launched Panama, their ad management platform. In our eyetracking study, we found Yahoo – more than any of the other engines – loaded the top of the SERP with sponsored ads. We felt this would not bode well for Yahoo’s user experience and we talked about that in this interview. Yahoo pushed back, saying in many cases a sponsored ad was what the user was looking for. I remember thinking at the time that this didn’t pass our own usability “smell test.” Three years later, Yahoo’s organic search would be shuttered and replaced with results from Microsoft’s Bing.

This week I caught up with Larry Cornett, the relatively new Director of User Experience Design at Yahoo, and Kathryn Kelly, Director of PR for Yahoo! Search. Again, to set the stage for the interview, here are some high-level findings from our eye tracking study that I’ll be discussing in more detail with Larry and Kathryn.

Emphasis on Top Sponsored Results

In the study we found Yahoo emphasized the top sponsored results more than either Microsoft or Google. They showed top sponsored results for more searches and devoted more real estate to them. This had the effect of giving Yahoo! the highest percentage of click-throughs on top sponsored results, but on first visits only. On subsequent visits the click-through rate on these top sponsored ads dropped to a rate lower than what was found on Google or Microsoft.

Better Targeted Vertical Results

Yahoo’s vertical results, or Shortcuts, seemed to be better targeted to the queries used by the participants in the study. Especially for commercial searches, Yahoo did a good job disambiguating intent from the query and providing searchers with relevant vertical results in the product search category.

How Searching from a Portal Impacted the Search Experience

When we look at how the search experience translated from a portal page, where the query is launched, to the results page, we found that Yahoo had a greater spread of entry points on the actual search results page. This raises the question of how launching a search from a portal page, rather than a simple search page, can impact the user experience and the searcher’s interaction with the results they see.

I had the chance to ask Larry and Kathryn about the Yahoo search experience and how their own internal usability testing has led to the design and the experience we see today. Further, I asked them about their plans for the future and what their strategy is for differentiating themselves from the competition, namely Microsoft and Google. One difference you’ll notice from Marissa’s interview last week is the continual reference to Yahoo’s advertisers as key stakeholders in the experience. At Yahoo, whenever user experience is mentioned, it’s always balanced against the need for monetization.

Here’s the interview:


Gord: First let’s maybe just talk in broad terms about Yahoo’s approach to the user experience and how it affects your search interface. How are decisions made? What kind of research is done? Why does Yahoo’s search page look like it does?

Larry: I can give you a little bit. I have been here about 7 months, so I’m still fairly new to Yahoo. But I can tell you a little bit about how we move forward with our decision making process and the approach we are taking with Yahoo. We try to strike a balance between user experience and the needs of the business, as well as our advertising population. We want to provide the best user experience for the users who are trying to find information and give them the most efficient experience they can have, and then also provide a good ecosystem for our advertisers. And we do a tremendous amount of research here: we do a lot of usability testing, we do surveys. We also do eye tracking studies. All of those help inform us when things are working really well for users and when we need to work on improving things.

Kathryn: And we do bucket testing on a lot of different features, on different properties, to get feedback from our users before ever implementing any new features.

Gord: Since we conducted the study we’ve noticed some changes on the search interface. Do changes tend to be more evolutionary or do you lump them together into a major revision and roll them out together?

Larry: I’d say we do both. We have essentially two parallel tracks. One is continuous improvement, so we’re always looking to improve the experience. Those would be considered the evolutionary changes, based on a lot of data we are looking at. Then we also have larger things, tied to a more strategic direction, that we look at over a longer-term window for bigger changes.

Gord: One of the things that we did spend a fair amount of time on in this study is this whole idea of perceived relevancy. If we set aside the whole question of how relevant the actual results are, based on the content of those results and what shows, and look more at how quickly scent is picked up by the user and how the results they see are perceived to be relevant. Does that notion coincide with your findings from the internal research, and how does that idea of the appearance of relevancy, rather than the actual relevancy, play into the results you present?

Larry: Yes, that absolutely is similar to the types of research findings that we’ve had, specifically with some of the eye tracking studies. We also continue to make efforts on actual relevance, so our Yahoo search team is constantly making improvements to everything to have real relevance improve. But you’re right, that perceived relevance is actually the most important thing because, at the end of the day, that’s what the users are looking at and that’s what they walk away with. In terms of: Was my search relevant? Did I find what I was looking for?

I do like the concept that you have with the information scent, the semantic mapping. I think it definitely ties into the mental model that a user has when they approach search and they are doing a query. They’re looking for things that come back to match what they have in their mind, what they are looking for in the results, so the more they actually see those search terms and the things they have in their mind, in terms of what they’re expecting to see, the more relevant the search is going to be for them.

Gord: It comes down to the efficiency of the user experience too, how quickly they find what they are looking for, how quickly they think they find what they are looking for and how successful that click through is. Did the promise match up with what was actually delivered on the other end?

Larry: Yes absolutely.

Gord: One of the biggest differences we saw between Yahoo and the other engines was the treatment of the top sponsored ads. I think it’s fair to say that in both the percentage of searches that ads were presented for and the number of ads presented, you were more aggressive than MSN (now Live Search) and Google. Obviously I understand the monetization reasoning behind that, but maybe you can speak a little bit to the user experience side of it.

Larry: Sure. In many cases, and even in your report you showed this, those results are exactly what the users are looking for. Very often what they see in that sponsored section actually is a good fit for the type of query they are doing, especially if you look at a commercial query. So it’s always about finding that balance between monetization and showing organic results. We’re just trying to get the best results for the user based on what they are looking for.

Gord: I certainly agree with you that in a lot of cases the sponsored results were what they were looking for, but we couldn’t help but notice that there was always a little bit of suspicion or skepticism on the part of the user, both in how they scan the results, and even when they do click through to a result there seems to be a hesitancy to stop there. We found a tendency to want to check out at least the top organic listing as well. One thing with Yahoo is that, with the more aggressive presentation of top sponsored, the organic choices tend to get pushed closer and closer to the fold. Have you done any testing on that?

Larry: Yes, those are definitely things that we are exploring as we’re trying to improve the user experience. And we’ve done our own eye tracking as well. A lot of it does come down to a big difference between what’s above the fold and what’s below the fold. So we’re always being very careful when we’re exploring that, thinking about the dominant monitor resolutions and settings that we’re looking at as people start to have more advanced systems and larger monitors, and really trying to understand what they’re seeing when they’re given that first load of the search page.

Gord: We did notice that of all three engines, Yahoo has the highest percentage of click-throughs on top sponsored on first visits. A little more than 30% of the clicks happened there. But we noticed that it dropped substantially on repeat visits, much more than it did on the other engines. Combine that with the fact that we saw more pogo sticking on Yahoo than we did on the other engines; someone would click through a top sponsored ad, then click back to the search results. So, my question is: does the more aggressive presentation of top sponsored ads even out when you factor in the repeat visits, and do those lower repeat click-through rates negatively impact the monetization opportunities on those repeat visits?

Larry: I can’t get into too many details about that, because it starts to get into some of the business logic and business rules that we have, especially looking at CTR (click-through rates) and repeat visits. So yes, that’s probably a little more information than I actually have access to myself.

Gord: OK, we’ll put that one off limits for now. Let’s shift gears a little bit. One of the other things that we noticed, that actually works very well on Yahoo, was Yahoo Shortcuts, your vertical results. It seems like you are doing a great job at disambiguating intent based on the query and really giving searchers varied options in that vertical real estate. Maybe you can talk about that.

Larry: You are absolutely right that those are very effective. And you’ll probably notice over time that we’re continuing to refine our Shortcuts, trying to find even more appropriate shortcuts for different types of queries. A lot of that is based on the best end result, what the user is trying to find, and the more we can give them that information the better.

Kathryn: And it’s faster too, right?

Larry: Exactly. So a lot of the time people just want a quick answer; they don’t want to have to dig through a lot of web pages. They just want a very simple, quick answer, so if we can provide that, then that is a great experience for them.

Kathryn: And we found that certain queries, like movies, entertainment, weather, sports and travel, blend very nicely with those types of Shortcuts.

Gord: In talking to Google about this, they have fairly strictly monitored click-through thresholds on both their top sponsored ads and on their vertical results; and if results aren’t getting clicked they don’t tend to show. It automatically gets turned off. What’s Yahoo’s approach to that? Are you monitoring CTRs and determining whether or not vertical results and top sponsored ads will appear for certain types of queries?

Larry: We definitely monitor that as well. We’re interested in tracking usage, and so we look at the CTR, because we don’t want to be showing things that are not actually getting usage. So we do continually monitor the CTR on the Shortcuts.

Kathryn: Are you just referring to the Shortcuts or to all of the ads?

Gord: Both; top sponsored and vertical results or Shortcuts.

Kathryn: It’s the same for both.

Larry: Obviously we track CTR in both of those areas and look at that trend over time.

Gord: When we look at the visibility difference or the delta between those top sponsored ads and those side sponsored ads, when you factor in conversion rates, click through rates and everything else, is the difference as significant as it appears to be from an eye tracking study? How do you work with your advertisers to maximize their placement and to help them understand how people are interacting with that search real estate?

Larry: That is actually a separate team that works with those folks.

Gord: Is there overlap between the two departments? You would be on top of how the user is interfacing or interacting with the search results page. Do you share that information with that team and keep them up to date with how that real estate is being navigated?

Larry: Absolutely, it’s a very collaborative relationship. We are in communication constantly; they are giving us performance data, we are giving them performance data. So it’s always a very collaborative kind of relationship with that team. We definitely can give them recommendations and vice versa. We each have our own worlds that we own.

Gord: Maybe you can speak a little bit, from the user’s perspective, about how Yahoo positions itself against Microsoft Live Search and Google. What’s unique? Why should a user be using Yahoo rather than the other two?

Larry: I’d say one of the key differentiators, one that you saw released last year, and Terry Semel actually talks about this, is that we’re starting to introduce social search. If you look at Yahoo Answers, it’s one of the key examples of that. It’s a very exciting site that’s performing very well. There’s a lot of great press around it, and we’re starting to integrate that within the search experience itself, so you can do certain types of queries within search and at the bottom of the page you’ll see relevant best answers that are brought in from Yahoo Answers. In many cases people look at that and say that it actually adds value. That’s one of the key differentiators here; there is definitely a social aspect to Yahoo Search.

Gord: As Yahoo Answers and that whole social aspect gain traction, is that something that will either be moved up, as far as visibility on the results page, into that Golden Triangle real estate, or be rolled in almost transparently with the results being shown?

Larry: Anything is possible, but it’s something that we’re evaluating, so we’re constantly looking at data that comes from user studies and the live site performance, and so we’ll be making a decision about that as the year goes on.

Gord: One other thing that I think somewhat distinguishes Yahoo, especially from Google, is where the searches are launched from. When you look at the user’s experience, obviously you are taking into consideration where those Yahoo searches are being launched from; a toolbar versus a portal versus the search page. How does that factor into the user experience?

Larry: You’re right. We definitely look at that type of data and really try to understand how those users might be different and their expectations might be different. So we’re constantly looking at that whole ecosystem because Yahoo is a very large network, with a lot of wonderful properties, so you have to understand how we all play together and what the relationship is between the properties back and forth.

Kathryn: Another thing is knowing where to put a search box on what property and what is going to work with the right mix of users for that property, because not every property is conducive to having a search box prominently displayed, and that’s something that we look at very closely.

Gord: Which brings up another question. One of the things that we speculated on in the study is: does the intent of the user get colored based on the context that shows around that search box? If it gets launched from a very clean, minimalist search page, there is little influence on intent, but if it gets launched from a portal, where there is a lot of content surrounding it, the intent can then be altered in between the click on the search box and ending up on the search page. Do you have any insight on that? Have you done your own studies on that impact?

Larry: We are definitely doing research in that area to understand the effect of the context, and I don’t really have anything I can share at this point, but I would say that there’s probably a lot of very interesting information to be derived from looking at that.

Gord: Ok. I’ll go out on a limb once more and say: obviously if you can take the contextual messaging, or what is surrounding that search box, and if it obviously correlates to the search, then I suppose that will help you in potentially targeting the advertising messaging that can go with that, right?

Larry: I think that’s fair to say.

Gord: Ok. I’ll leave it there. One question I have to ask comes down to the user interface. It seems that as changes are made, the differences between the three engines are getting fewer and fewer, and it does seem like everyone is moving more to the standards that Google has defined. Is Google’s interface, as it sits, the de facto standard for a search results page now, and if so, then in what areas does Yahoo differentiate itself? I’m talking more about the design of the page, white space, font usage, where the query bolding is…that type of thing.

Larry: There are a lot of really smart people in each of those companies. They also do their own user studies and they look at their metrics, so I think everyone is realizing over time, as they’ve refined their search experience, what is working and what is not. So it’s not surprising to see some convergence in terms of the design and what seems to be most effective. I can’t really speak to whether Google is the de facto standard, but definitely they have a lot of eyeballs, so I think people do get used to seeing things in a certain way. I know from the Yahoo perspective that we want to do what’s best for our users. And I think we do have a user population that has certain expectations of us. I think a big part of that is the social search component, because people do think of Yahoo as a distinct company with its own brand, so there’s a lot that we want to do on that page that is completely independent of what other people might be doing, because we want to do what’s best for our users.

Kathryn: And our users tend to be different than Google users. There is obviously overlap, but we also have a distinct type of user from Google. We have to take that into consideration.

Gord: Can we go a little bit further down that road? Can we paint a picture of the Yahoo user and then explain how your interface is catering to their specific needs?

Larry: I can’t speak too much about it, but one difference is that Yahoo is a lot of different things. We’re not just a search company, not just a mail company, not just a portal company; we serve a lot of needs. And we have a lot of tremendously popular, very effective properties that people use. Millions and millions come to the Yahoo network every day for a whole variety of reasons, so I think that’s one thing that’s different about the Yahoo user. They’re not coming to Yahoo for one purpose. There are often many, many purposes, so that’s something we definitely have to take into consideration. And I think that’s one reason of many that we’re looking at social search. We know that our users are doing a lot of things in our network and it’s really effective if we’re aware of that.

Gord: So, rather than the task-oriented approach with Google, where their whole job is to get people in and out as quickly as possible, Yahoo Search supports that community approach, where search is just one aspect of several things that people might be doing when they are engaged with the various properties?

Larry: We want to support whatever the user’s task is; and I think search is actually a very simple term that encompasses a lot. People use search for a whole variety of reasons, millions and millions of reasons, so you have to be aware of what their intent is and, as you talk a lot about in your report, support that. If they want to get in and out, that’s one task flow. If they want to have a place where they have access to data and information that is coming from their community, all the social information that they think is valuable, that’s another task flow. So, I think it’s just about being aware of the fact that search is multi-faceted; it’s not just a single, simple type of task flow.

Kathryn: And another thing we talk about a lot is that Google is really about getting people off of its network as fast as possible; we tend to want to keep people in our network and introduce them to other properties and experiences. So I think that’s also something that we take a look at.

Gord: So, what’s the challenge for Yahoo for search in the future, if you were looking at your whiteboard of the things that you’re tackling in 2007? We talked a little bit about social search, but as far as the user’s experience, what is the biggest challenge that has to be cracked over the next year or two?

Larry: We’ve been touching on that, and I think the biggest challenge is really disambiguating intent. Really trying to understand what the user wants when they enter a few words into the search box. It’s not a lot to work with, obviously. So the biggest challenge is understanding the intent and giving them what they’re looking for, and doing that in the most effective way we can. It’s probably not anything new, but I’d say that is the biggest challenge.

Gord: And in dealing with that challenge, I would suspect that moving beyond the current paradigm is imperative. We’re used to interacting with search in a certain way, but to do what you’re saying we have to move quickly beyond the idea of a query and getting results back on a fairly static page.

Larry: There are certain expectations that users have, because search is search, and it’s been that way for many years, but I think you can see with our strategy with social search, and what we’ve been doing with the integration of Yahoo Answers, that it is a shift. And it’s showing that we believe, for certain types of queries and for certain information, that it’s very useful to bring that up, not just purely algorithmic results.

Gord: I’m just going to wrap up by asking one question, and I guess…it’s somewhat of a self-serving question, but with our eye tracking report, are there parts where we align with what you have found?

Larry: No…I found the report fascinating. I think you guys have done a wonderful job. It’s a very interesting read. There is a lot of great information there. And I think there is a lot that is in sync with some of our findings as well. So I think you definitely found some themes that make a lot of sense.

Gord: Thanks very much.

Next week, I talk with Justin Osmer, Senior Product Manager at Microsoft about the new Windows Live Search experience, how MSN Search fared in the eye tracking study, and how MSN Search evolved into the Live Search experience.

Just Behave Archive: Q&A With Marissa Mayer, Google VP, Search Products & User Experience

This blog is the most complete collection of my various posts across the web – with one exception. For 4 years, from 2007 to 2011, I wrote a column for Search Engine Land called “Just Behave” (Danny Sullivan’s choice of title, not mine – but it grew on me). At the time, I didn’t cross-post because Danny wanted the posts to be exclusive. Now, with almost two decades having passed, I think it’s safe to bring these lost posts back home to the nest, here at “Out of My Gord”. You might find them interesting from a historical perspective, and also because the column gave me the chance to interview some of the brightest minds in search at that time. So, here’s my first, with Google’s then VP of Search Products and User Experience – Marissa Mayer. It ran in January, 2007:

Marissa Mayer has been the driving force behind Google’s Spartan look and feel from the very earliest days. In this wide-ranging interview, I talked with Marissa about everything from interface design to user behavior to the biggest challenge still to be solved with search as we currently know it.

I had asked for the interview because of some notable findings in our most recent eye tracking study. I won’t go into the findings in any great depth here, because Chris Sherman will be doing a deep dive soon. But for the purpose of setting the background for Marissa’s interview, here are some very quick highlights:


MSN and Yahoo Users had a better User Experience on Google

In the original study, the vast majority of participants were Google users, and their interactions were restricted to Google. With the second study, we actually recruited participants that indicated their engine of preference was Yahoo! or MSN (now Live Search), as the majority of their interactions would be with those two engines. We did take one task at random, however, and asked them to use Google to complete the task. By almost every metric we looked at, including time to complete the task (choose a link), the success of the link chosen, the percentage of the page scanned before choosing a link and others, these users had a more successful experience on Google than on their engine of choice.

Google Seemed to Have a Higher Degree of Perceived Relevancy

In looking at the results, we didn’t believe that it was the actual quality of the results that led to a more successful user experience as much as it was how those results were presented to the user. Something about Google’s presentation made it easier to determine which results were relevant. We referred to it in the study as information scent, using the term common in information foraging theory.

Google Has an Almost Obsessive Dedication to Relevancy at the Top of the Results Page

The top of the results, especially the top left corner, is the most heavily scanned part of the results page. Google seemed to be the most dedicated of all three engines to ensuring the results that fall in this real estate are highly relevant to the query. For example, Google served up top sponsored ads in far fewer sessions in the study than did either Yahoo or MSN.

Google Offers the “Cleanest” Search Experience

Google is famous for its Spartan home page. It continues this minimalist approach to search with the cleanest results page. When searching, we all have a concept in mind and that concept can be influenced by what else we see on the page. Because a number of searches on Yahoo! and MSN were launched from their portal page, we wondered how that impacted the search experience.

Google Had Less Engagement than Yahoo with their Vertical Results

The one area where Google appeared to fall behind in these head to head tests was with the relevance of the OneBox, or their vertical results. Yahoo! in particular seemed to score more consistently with users with their vertical offerings, Yahoo! Shortcuts.

It was in these areas in particular that I wanted to get the thinking of Marissa and her team at Google. Whatever they’re doing, it seems to be working. In fact, I have said in the past that Google has set the de facto standard for what we expect from a search engine, at least for now.

Here’s the interview:

Gord: What, at the highest level, is Google’s goal for the user?

Marissa: Our goal is to make sure that people can find what they’re looking for and get off the page as quickly as possible.

Gord: If we look at this idea of perceived versus real relevancy, some things seemed to make a big difference in how relevant people perceived the results to be on a search engine: things like how much white space there was around individual listings, separating organic results from the right rail, the query actually being bolded in the title and the description, and very subtle nuances like a hairline around the sponsored ads as opposed to a screened box. What we found when we delved into it was that there seemed to be a tremendous attention to that detail at Google. It became clear that this stuff had been fairly extensively tested out.

Marissa: I think all of your observations are correct. I can walk you through any one of the single examples you just named; I can talk you through the background, exactly what our philosophy was when we designed it, and the numbers we saw in our tests. But you’re right in that it’s not an accident. For example, putting a line along the side of the ad, as opposed to boxing it, allows it to integrate more into the page and lets it fall more into what people read.

One thing that I think about a lot is people who are new to the internet. A lot of times they subconsciously map the internet to physical idioms. For example, when you look at how you parse a webpage, chances are that there are some differences if there are links in the structure and so forth, but a lot of times it looks just like a page in a book or a page in a magazine, and when you put a box around something, it looks like a sidebar. The way people handle reading a page that has a sidebar on it is that they read the whole main page and then, at the end, if it’s not too interesting, they stop and read the sidebar on that page.

For us, given that we think our ads in some cases are as good an answer as our search results, and we want them to be integral to the user experience, we don’t want that kind of segmentation and pausing. We tried not to design it so it looked like a sidebar, even though we have two distinct columns. You know, there are a lot of philosophies like that that go into the results page and, of course, testing both of those formats to see if that matches our hypothesis.

Gord: That brings up something else that was really interesting. If we separate the top sponsored from the right rail, the majority of the interaction happens on the page in that upper left real estate. One thing that became very apparent was that Google seemed to be the most aware of relevancy at the top of the page, that Golden Triangle real estate. In all our scenarios, you showed top sponsored the least number of times and generally you showed fewer top sponsored results. We saw a natural tendency to break off the top 3 or 4 listings on a page, scan them as a set and then make a choice from those top 3 or 4. On Google, those top 3 or 4 almost always include 1 or 2 organic results, sometimes all organic results.

Marissa: That’s absolutely the case. Yes, we’re always looking at how we can do better targeting with ads. But we believe part of the targeting for those ads is “how well do those ads match your query?” And then the other part is how well does this format and that prominence convey to you how relevant it is. That’s baked into the relevance.

Our ad team has worked very, very hard. One of the most celebrated teams at Google is our Smart Ads team. In fact, you may have heard of the Google Founder’s Awards, where small teams of people get grants of stock worth up to $10,000,000, split across a small number of individuals. One of the very first teams at Google to receive that award was the Smart Ads team. And they were looking, interestingly enough, at how you target things. But they were also looking at what’s the probability that someone will click on a result, and shouldn’t that probability impact our idea of relevance, and also the way we choose to display it.

So we do tend to be very selective and keep the threshold on what appears on the top of the page very high. We only show things on the top when we’re very very confident that the click through rate on that ad will be very high. And the same thing is true for our OneBox results that occasionally appear above the top (organic) results. Larry and Sergey, when I started doing user interface work, said we’re thinking of making your salary proportional to the number of pixels above the first result, on average. We’ve mandated that we always want to have at least one result above the fold. We don’t let people put too much stuff up there. Think about the amount of vertical space on top of the page as being an absolute premium and design it and program it as if your salary depended on it.

Gord: There are a couple of other points that I want to touch on. When we looked at how the screen real estate divided up on the search results page, based on a standard resolution, there seemed to be a mathematical precision to the Google proportions that wasn’t apparent on MSN or on Yahoo. The ratio seemed pretty set. We always seemed to come up with a 33% ratio dedicated to top organic, even on a fully loaded results page, so obviously that’s not by accident. That compared to, on a fully loaded page, less than 14% on Yahoo.

Marissa: That’s interesting, because we never reviewed it on the percentage basis that you’re mentioning. We’ve had a lot of controversy amongst the team: should it be in linear inches along the left hand margin, should it actually be square pixelage computed on a percentage basis? Because of the way that the search is laid out, linear inches or vertical space may be more accurate. As I said, the metric that I try to hold the team to is always getting at least one organic result above the fold on 800 by 600, with the browser held at that size.

Gord: The standard resolution we set for the study was 1024 by 768.

Marissa: Yes, we are still seeing as many as 30% plus of our users at 800 by 600. My view is, we can view 1024 by 768 as ideal. The design has to look good on that resolution. It has to at least work and appear professional on 800 by 600. So all of us with our laptops, we’re working with 1024 by 768 as our resolution, so we try to make sure the designs look really good on that. It’s obvious that some of our engineers have bigger monitors and bigger resolutions than that, but we are always very conscious of 800 by 600. It’s pretty funny: most of our designers, myself included, have a piece of wallpaper that actually has rectangles in the back, where if you line up the browser in the upper left hand corner and then align the edge of the browser with the box, you can simulate all the different sizes, so we can make sure it works in the smaller browsers.

Gord: One of the members of our staff has a background in physics and design, and he was the one who noticed that if you take the Golden Ratio, it lines up very well with how the Google results page is designed. The proportions of the page lined up pretty closely with how that Ratio is proportioned.

Marissa: I’m a huge fan of the Golden Ratio. We talk about it a lot in our design reviews, both implicitly and explicitly, even when it comes down to icons. We prefer that icons not be square; we prefer that they be more of a 1.7:1 ratio.
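
(An aside for readers who don’t have the number handy: the Golden Ratio works out to roughly 1.618, so the 1.7:1 icon proportion Marissa mentions is only a loose approximation of it.)

```latex
\varphi = \frac{1 + \sqrt{5}}{2} \approx 1.618
```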

Gord: I wanted to talk about Google OneBox for a minute. Of all the elements on the Google page, frankly, that was the one that didn’t seem to work that well. It almost seemed to be in flux somewhat while we were doing the data collection. Relevancy seemed to be a little off on a number of the searches. Is that something that is being tested?

Marissa: Can you give me an example?

Gord: The search was for digital cameras and we got news results back in OneBox. Nikon had a recall on a bunch of digital cameras at the time, and as far as disambiguating the user intent from the query goes, it would seem that news results for the query “digital cameras” are probably not the best match.

Marissa: It’s true. The answer is that we do a fairly good job, I believe, in targeting our OneBox results. We hold them to a very high click-through rate expectation and if they don’t meet that click-through rate, the OneBox gets turned off on that particular query. We have an automated system that looks at click-through rates per OneBox presentation per query. So it might be that news is performing really well on Bush today but it’s not performing very well on another term, so it ultimately gets turned off due to a lack of click-throughs. We are authorizing it in a way that’s scalable and does a pretty good job enforcing relevance. We do have a few niggles in the system where we have an ongoing debate, and one of them is around news versus product search.

One school of thought is what you’re saying, which is that it should be the case that if I’m typing digital cameras, I’m much more likely to want to have product results returned. But here’s another example. We are very sensitive to the fact that if you type in children’s flannel pajamas and there’s a recall due to a lack of flame retardancy on flannel pajamas, as a parent you’re going to want to know that. And so it’s a very hard decision to make.

You might say, well, the difference there is that it’s a specific model. Is it a Nikon D970 or is it digital cameras, which is just a category? So it’s very hard on the query end to disambiguate. You might say that if there’s a model number then it’s very specific, and if the model number matches in the news, return the news; if not, return the products. But it’s more nuanced than that. With things like Gap flannel pajamas for children, it’s very hard to programmatically tell if that’s a category or a specific product. So we have a couple of sticking points.

Gord: So that would be one of the reasons why, for a lot of searches, we weren’t seeing product results coming back, and in a lot of local cases, we weren’t seeing local results coming back? That would be the click-through monitoring mechanism, where it didn’t meet the threshold and got turned off?

Marissa: That’s right.
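
(As an aside, the gating mechanism Marissa describes is easy to picture in code. Below is a minimal, hypothetical sketch of per-query click-through gating for a vertical unit; the threshold, the minimum sample size and every name in it are invented for illustration and should not be read as Google’s actual implementation.)

```python
# Hypothetical sketch of the per-query click-through gating described above.
# Threshold values, names and structure are invented for illustration only;
# this is not Google's (or Yahoo's) actual system.
from collections import defaultdict

CTR_THRESHOLD = 0.05    # assumed minimum click-through rate to keep the unit on
MIN_IMPRESSIONS = 1000  # assumed sample size before the gate is allowed to trigger

# (query, vertical) -> [impressions, clicks]
stats = defaultdict(lambda: [0, 0])

def record(query: str, vertical: str, clicked: bool) -> None:
    """Log one impression of a vertical unit (e.g. news) shown for a query."""
    entry = stats[(query, vertical)]
    entry[0] += 1
    if clicked:
        entry[1] += 1

def should_show(query: str, vertical: str) -> bool:
    """Keep showing the unit until enough data says users are ignoring it."""
    impressions, clicks = stats[(query, vertical)]
    if impressions < MIN_IMPRESSIONS:
        return True  # not enough data yet; keep testing
    return clicks / impressions >= CTR_THRESHOLD

# Example: a news unit might stay on for one query and get turned off for another.
record("digital cameras", "news", clicked=False)
print(should_show("digital cameras", "news"))  # True until enough impressions accumulate
```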

Gord: Here’s another area we explored in the study. Obviously a lot of searches on Yahoo or MSN Live Search get launched from a portal, and the user experience if you launch from the Google home page is different. What does it mean, as far as interaction with search results goes, when you’re launching the search from what’s basically a neutral palette versus something that’s launched from a portal that colors the intent of the user as it passes them through to the search results?

Marissa: We want the user to not be distracted, to just type in what they want and not be very influenced by what they see on the page, which is one reason why the minimalist home page works well. It’s approachable, it’s simple, it’s straightforward and it gives the user a sense of empowerment. This engine is going to do what they want it to do, as opposed to the engine telling them what they should be doing, which is what a portal does. We think that to really aid and facilitate research and learning, the clean slate is best.

I think there’s a couple of interesting problems in the portal versus simple home page piece. You might say it’s easier to disambiguate from a portal what a person might be intending. They look at the home page and there’s a big ad running for Castaway and if they search Castaway, they mean the movie that they just saw the ad for. That might be the case but the other thing that I think is more confusing than anything is the fact that most people who launch the search from the portal home page are actually ignoring and tuning out most of the content on a page. If anything you’re more inclined to mistake intent, to think, “Oh, of course when they typed this they meant that,” but they actually didn’t, because they didn’t even see this other thing. One thing that we’re consistently noticing, which your Golden Triangle finding validated, is that users have a laser focus on their task.

The Google home page is very simple, and when we put a link underneath the Google search box on the home page to advertise one of our products, we say, “Hey, try Google video, it’s new,” or “download the new Picasa.” Basically it’s the only other thing on the page, and while it does get a fair amount of click-through, it’s nothing compared to the search, because most users don’t even see it. Most users on our search results page don’t see the logo on the top of the page, they don’t see OneBox, they don’t even see spelling corrections, even though they’re there in bright red letters. There’s a single-mindedness of: I’m going to put in my search, not let anything on the home page get in the way, and I’m going to go for the first blue left-aligned link on the results page, and everything above it basically gets ignored. And we’ve seen that trend again and again. My guess is that if anything, that same thing is happening at the portals, but because there is so much context around it on the home page, their user experience and search relevance teams may be led astray, thinking that that context has more relevance than it has.

Gord: One way eye tracking allowed us to pull this apart a little bit was that when we gave people two different scenarios, one aimed more towards getting them to look at the organic results and one that would have them more likely to look at sponsored results and then look down to the organic results, we saw that the physical interaction with the page didn’t vary as much as we thought, but the cognitive interaction with the page, when it came to what they remembered seeing and what they clicked on, was dramatically different. So it’s almost like they took the same path through, but the engagement factor flicked on at different points.

Marissa: My guess is that people who come to the portal are much more likely to look at ads. I like to think of them as users with ADHD. They’re on the home page and they enjoy a home page that pulls their attention in a lot of different directions. They’re willing to process a lot of information on the way to typing in their search, and as a result, that same mind that likes that, and it may not even be a per-user thing, it may be an of-the-moment thing, but a person that’s in the mindset of enjoying that on the home page is also going to be much more likely to look around on the search results page. Their attention is going to be much more likely to be pulled in the direction of an ad, even if it’s not particularly relevant: banner, brand, things like that.

Gord: I want to wrap up by asking you, what in your mind is the biggest challenge still to be solved with the search interface as we currently know it?

Marissa: I think there’s a ton of challenges, because in my view, search is in its infancy, and we’re just getting started. I think the most pressing, immediate need as far as the search interface goes is to break the paradigm of the expectation of “You give us a keyword, and we give you 10 URLs.” I think we need to get into richer, more diverse ways for users to express their query, be it through natural language, or voice, or even contextually. I’m always intrigued by what the Google desktop sidebar is doing, by looking at your context, or what Gmail does, where by looking at your context, it actually produces relevant webpages, ads and things like that. So essentially, a context-based search.

So, challenge one is how the searches get expressed; I think we really need to branch out there. But I also think we need to look at results pages that aren’t just 10 standard URLs laid out in a very linear format. Sometimes the best answer is a video, sometimes the best answer will be a photo, and sometimes the best answer will be a set of extracted facts. If I type in general demographic statistics about China, it’d be great if I got an answer as a result: a set of facts that had been parsed off of, and even aggregated and cross-validated across, a result set.

Gord: And sometimes the best result would be an ad. Out of interest, when we tracked through to the end of the scenario to see which links provided the greatest degree of success, the top sponsored results actually delivered the highest success rates across all the links that were clicked on in the study.

Marissa: Really? Even more so than the natural search results?

Gord: Yes, even more so than the organic search results. Now mind you, the scenarios given were commercial in nature.

Marissa: Right… that makes much more sense. I do think that for the 40 or so percent of page views that we serve ads on, those ads are incredibly relevant and usually do beat the search results, but for the other 60% of the time the search results are really the only reasonable answer.

Gord: Thanks, Marissa.

In my next column, I talk with Larry Cornett, Senior Director of Search & Social Media in Yahoo’s User Experience & Design group about their user experience. Look for it next Friday, February 2.

Curation is Our Future. But Can You Trust It?

You can get information from anywhere. But the meaning of that information can come from only one place: you. Everything we take in from the vast ecosystem of information that surrounds us goes through the same singular lens – one crafted by a lifetime of collected beliefs and experiences.

Finding meaning has always been an essentially human activity. Meaning motivates us – it is our operating system. And the ability to create shared meaning can create or crumble societies. We are seeing the consequences of shared meaning play out right now in real time.

The importance of influencing meaning creates an interesting confluence between technology and human behavior. For much of the past two decades, technology has been focusing on filtering and organizing information. But we are now in an era where technology will start curating our information for us. And that is a very different animal.

What does it mean to “curate” an answer, rather than simply present it to you? Curation is more than just collecting and organizing things. The act of curation puts information in a context that adds value by suggesting a possible meaning. This crosses the line that separates simply disseminating information from attempting to influence individuals by giving them a meaningful context for that information.

Not surprisingly, the roots of curation lie – in part – with religion. It comes from the Latin “curare” – “to take care of”. In medieval times, curates were priests who cared for souls. And they cared for souls by providing a meaning that lay beyond the realms of our corporeal lives. If you really think about religion, it is one massive juxtaposition of a pre-packaged meaning on the world as we perceive it.

In the future, as we access our world through technology platforms, we will rely on technology to mediate meaning. For example, searches on Google now include an “AI Overview” at the top of the search results. The Google page explaining what the Overview is says it shows up when “you want to quickly understand information from a range of sources, including information from across the web and Google’s Knowledge Graph.” That is Google – or rather Google’s AI – curating an answer for you.

It could be argued that this is just another step to make search more useful – something I’ve been asking for for a decade and a half now. In 2010, I said that “search providers have to replace relevancy with usefulness. Relevancy is a great measure if we’re judging information, but not so great if we’re measuring usefulness.” If AI could begin to provide actionable answers with a high degree of reliability, it would be a major step forward. There are many who say such curated answers could make search obsolete. But we have to ask ourselves: is this curation something we can trust?

With Google, this will probably start as unintentional curation – giving information meaning through a process of elimination. Given how people scan search listings (something I know a fair bit about), it’s reasonable to assume that many searchers will scan no further than the AI Overview, which sits at the top of the results page. In that case, you will be spoon-fed whatever meaning happens to be the product of the AI compilation, without bothering to qualify it by scanning any further down the results page. This conveyed meaning may well be unintentional, a distillation of the context from whatever sources provided the information. But given that we are lazy information foragers who will only expend enough effort to get an answer that seems reasonable, we will become trained to accept anything that is presented to us “top of page” at face value.

From there it’s not that big a step to intentional curation – presenting information to support a predetermined meaning. Given that pretty much every tech company folded like a cheap suit the minute Trump assumed office, slashing DEI initiatives and aligning their ethics – or lack thereof – with those of the White House, is it far-fetched to assume that they could start wrapping the information they provide in a “Trump Approved” context, providing us with messaged meaning that supports specific political beliefs? One would hate to think so, but based on Facebook’s recent firing of its fact checkers, I’m not sure it’s wise to trust Big Tech to be the arbiters of meaning.

They don’t have a great track record.

Can OpenAI Make Searching More Useful?

As you may have heard, OpenAI is testing a prototype of a new search engine called SearchGPT. A press release from July 25 notes: “Getting answers on the web can take a lot of effort, often requiring multiple attempts to get relevant results. We believe that by enhancing the conversational capabilities of our models with real-time information from the web, finding what you’re looking for can be faster and easier.”

I’ve been waiting for this for a long time: search that moves beyond relevance to usefulness. It was 14 years ago that I said this in an interview with Aaron Goldman regarding his book “Everything I Know About Marketing I Learned from Google”: “Search providers have to replace relevancy with usefulness. Relevancy is a great measure if we’re judging information, but not so great if we’re measuring usefulness. That’s why I believe apps are the next flavor of search, little dedicated helpers that allow us to do something with the information. The information itself will become less and less important and the app that allows utilization of the information will become more and more important.”

I’ve felt for almost two decades that the days of search as a destination were numbered. For over 30 years now (Archie, the first internet search engine, was created in 1990), when we’re looking for something online, we search, and then we have to do something with what we find on the results page. Sometimes, a single search is enough — but often, it isn’t. For many of our intended end goals, we still have to do a lot of wading through the Internet’s deep end, filtering out the garbage, picking up the nuggets we need and then assembling those into something useful.

I’ve spent much of those past two decades pondering what the future of search might be. In fact, my previous company wrote a paper on it back in 2007. We were looking forward to what we thought might be the future of search, but we didn’t look too far forward. We set 2010 as our crystal ball horizon. Then we assembled an all-star panel of search design and usability experts, including Marissa Mayer, who was then Google’s vice president of search user experience and interface design, and Jakob Nielsen, principal of the Nielsen Norman Group and the web’s best known usability expert. We asked them what they thought search would look like in three years’ time.

Even back then, almost 20 years ago, I felt the linear presentation of a results page — the 10 blue links concept that started search — was limiting. Since then, we have moved beyond the 10 blue links. A Google search today for the latest iPhone model (one of our test queries in the white paper) actually looks eerily similar to the mock-up we did of what a Google search might look like in the year 2010. It just took Google 14 extra years to get there.

But the basic original premise of search is still there: Do a query, and Google will try to return the most relevant results. If you’re looking to buy an iPhone, it’s probably more useful, mainly due to sponsored content. But it’s still well short of the usefulness I was hoping for.

It’s also interesting to see what directions search has (and hasn’t) taken since then. Mayer talked a lot about interacting with search results. She envisioned an interface where you could annotate and filter your results: “I think that people will be annotating search results pages and web pages a lot. They’re going to be rating them, they’re going to be reviewing them. They’re going to be marking them up, saying ‘I want to come back to this one later.’”

That never really happened. The idea of search as a sticky and interactive interface for the web sort of materialized, but never to the extent that Mayer envisioned.

From our panel, it was Nielsen’s crystal ball that seemed to offer the clearest view of the future: “I think if you look very far ahead, you know 10, 20, 30 years or whatever, then I think there can be a lot of things happening in terms of natural language understanding and making the computer more clever than it is now. If we get to that level then it may be possible to have the computer better guess at what each person needs without the person having to say anything, but I think right now, it is very difficult.”

Nielsen was spot-on in 2007. It’s exactly those advances in natural language processing and artificial intelligence that could allow ChatGPT to now move beyond the paradigm of the search results page and move searching the web into something more useful.

A decade and a half ago, I envisioned an ecosystem of apps that could bridge the gap between what we intended to do and the information and functionality that could be found online.  That’s exactly what’s happening at OpenAI — a number of functional engines powered by AI, all beneath a natural language “chat” interface.

At this point, we still have to “say” what we want in the form of a prompt, but the more we use ChatGPT (or any AI interface) the better it will get to know us. In 2007, when we wrote our white paper on the future of search, personalization was what we were all talking about. Now, with ChatGPT, personalization could come back to the fore, helping AI know what we want even if we can’t put it into words.

As I mentioned in a previous post, we’ll have to wait to see if SearchGPT can make search more useful, especially for complex tasks like planning a vacation, making a major purchase or planning a big event.

But I think all the pieces are there. The monetization siloes that dominate the online landscape will still prove a challenge to getting all the way to our final destination, but SearchGPT could make the journey faster and a little less taxing.

Note: I still have a copy of our 2007 white paper if anyone is interested. Just email me (email in the contact us page), give me your email and I’ll send you a copy.

Google Leak? What Google Leak?

If this were 15 years ago, I might have cared about the supposed Google Leak that broke in late May.

But it’s not, and I don’t. And I’m guessing you don’t either. In fact, you could well be saying “what Google leak?” Unless you’re an SEO, there is nothing of interest here. Even if you are an SEO, that might be true.

I happen to know Rand Fishkin, the person who publicly broke the leak last week. Neither Rand nor I are in the SEO biz anymore, but obviously his level of interest in the leak far exceeded mine. He devoted almost 6000 words to it in the post where he first unveiled the leaked documents, passed on to him by Erfan Azimi, CEO and director of SEO of EA Eagle Digital.

Rand and I spoke at many of the same conferences before I left the industry in 2012. Even at that time, our interests were diverging. He was developing what would become the Moz SEO tool suite, so he was definitely more versed in the technical side of SEO. I had already focused my attention on the user side of search, looking at how people interacted with a search engine page. Still, I always enjoyed my chats with Rand.

Back then, SEO was an intensely tactical industry. Conference sessions that delved into the nitty gritty of ranking factors and shared ways to tweak sites up the SERP were the ones booked into the biggest conference rooms, because organizers knew they’d be jammed to the rafters.

I always felt a bit like a fish out of water at these conferences. I tried to take a more holistic view, looking at search as just one touchpoint in the entire online journey. To me, what was most interesting was what happened both before the search click and after it. That was far more intriguing to me than what Google might be hiding under their algorithmic hood.

Over time, my sessions developed their own audience. Thanks to mentors like Danny Sullivan, Chris Sherman and Brett Tabke, conference organizers carved out space for me on their agendas. Ken Fadner and the MediaPost team even let me build a conference that did its best to deal with search at a more holistic level, the Search Insider Summit. We broadened the search conversation to include more strategic topics like multipoint branding, user experience and customer journeys.

So, when the Google leak story blipped on my radar, I was immediately taken back to the old days of SEO. Here, again, there was what appeared to be a dump of documents that might give some insights into the nuts and bolts of Google’s ranking factors. MediaPost’s own post said that “leaked Google documents has given the search industry proprietary insight into Google Search, revealing very important elements that the company uses to rank content.” Predictably, SEOs swarmed over it like a flock of seagulls attacking a half-eaten hot dog on a beach. They were still looking for some magic bullet that might move them higher in the organic results.

They didn’t come up with much. Brett Tabke, who I consider one of the founders of SEO (he coined the term SERP), spent five hours combing through the documents and said it wasn’t a leak and the documents contained no algorithm-related information. To mash up my metaphors, the half-eaten hotdog was actually a nothingburger.

But Oh My SEOs – you still love diving into the nitty gritty, don’t you?

What is more interesting to me is how the actual search experience has changed in the past decade or two. In doing the research for this piece, I happened to run into a great clip about tech monopolies from Last Week Tonight with John Oliver. He shows how much of the top of the Google SERP is now dominated by information and links from Google. Again quoting a study from Rand Fishkin’s new company, SparkToro, Oliver showed that “64.82% of searches on Google…ended…without clicking to another web property.”

That little tidbit has some massive implications for marketers. The days of relying on a high organic ranking are long gone, because even if you achieve it, you’ll be pushed well down the page.

And on that, Rand Fishkin and I seem to agree. In his post, he does say, “If there was one universal piece of advice I had for marketers seeking to broadly improve their organic search rankings and traffic, it would be: ‘Build a notable, popular, well-recognized brand in your space, outside of Google search.’”

Amen.

Climbing the Slippery Slopes of Mount White Hat

First published August 30, 2012 in Mediapost’s Search Insider

On Monday of this week, fellow Search Insider Ryan DeShazer bravely threw his hat back in the ring regarding this question: Is Google better or worse off because of SEO?

DeShazer confessed to being vilified after a previous column indicated that Google owed us something. I admit I penned (but never submitted) a column that Ryan could have added to the “vilify” side of that particular tally. But in his Monday column, Ryan touches on a very relevant point: “What is the thin line between White Hat and Black Hat SEO?” For as long as I’ve been in this industry (which is pushing 17 years now), I’ve heard that same debate. I’ve been at conference sessions where white hats and black hats went head to head on the question. It’s one of those discussions that most sane people in the world couldn’t care less about, but we in the search biz can’t seem to let go of it.

Ryan stirs the pot again by indicating that Google may be working on an SEO “Penalty Box”: a temporary holding pen for sites flagged as “rank modifying spammers,” where results will fluctuate more than they do in the standard index. The high degree of flux should prompt further modifications by the “spammers,” which will help Google identify them and, theoretically, penalize them. DeShazer’s concern is the use of the word “spammers” in the wording of the patent application, which seems to include any “webmasters who attempt to modify their search engine ranking.”

I personally think it’s dangerous to try to apply wording used in a patent application (the source for this speculation) arbitrarily against what will become a business practice. Wording in a patent is intended to help convey the concept of the intellectual property as quickly and concisely as possible to a patent review bureaucrat. The wording deals in concepts that are (ironically) pretty black and white. It has little to no relationship to how that IP will be used in the real world, which tends to be colored in various shades of gray. But let’s put that aside for a moment.

Alan Perkins, an SEO I would call vociferously “white hat,” came up some years ago with what I believe is the quintessential distinction. Black hats optimize for a search engine. White hats optimize for humans. When I make site recommendations, they are meant to help people find better content faster and act on it. I believe, along with Perkins, that this approach will also do good things for your search visibility.

But that also runs the danger of being an oversimplification. The picture is muddied by clients who measure our success as SEO agencies by their position relative to their competitors on a keyword-by-keyword basis. This is the bed the SEO industry has built for itself, and now we’re forced to sleep in it. I’m as guilty as the next guy of cranking out the competitive ranking reports that have conditioned this behavior over the past decade and a half.

The big problem, and one continually pointed out by vocal grey/black hats, is that you can’t keep up with competitors who are using methods more black than white by sticking to white-hat tactics alone. The fact is, black hat works, for a while. And if I’m the snow-white SEO practitioner whose clients are repeatedly trounced by those using a black hat consultant, I’d better expect some client churn. Ethics and profitability don’t always go together in this industry.

To be honest, over the past five years, I’ve largely stopped worrying about the whole white hat/black hat thing. We’ve lost some clients because we weren’t aggressive enough, but the ones who stayed were largely untouched by the string of recent Google updates targeting spammers. Most benefited from the house cleaning of the index. I’ve also spent the last five years focused a lot more on people and good experiences than on algorithms and link juice, or whatever the SEO flavor du jour is.

I think Alan Perkins nailed it way back in 2007. Optimize for humans. Aim for the long haul. And try to be ethical. Follow those principles, and I find it hard to imagine that Google would ever tag you with the label of “spammer.”

The Virtuous Cycle of SEO

First published August 9, 2012 in Mediapost’s Search Insider

Virtuous cycles are anomalies. They fight the universal law of entropy, and for that reason alone they are worth investigating. Rather than sliding gradually toward dissipation and equilibrium, virtuous cycles build upon themselves, yielding self-sustaining returns cycle after cycle.

In marketing, there are not a lot of virtuous cycles. Most marketing efforts need to be constantly fueled by a steady stream of dollars. The minute the budget tap is closed, so is the marketing program. But there are a few, and SEO is one of them, if done correctly. Let’s take a quick look at the elements required to build a truly virtuous cycle.

The Power of Positive Feedback

Positive feedback is the engine of a virtuous cycle. It’s what drives sustainable growth. Think of it as the compound interest paid on your marketing efforts.

In an SEO program, positive feedback comes in the form of the algorithmic love shown to you by the search engines, dragging in an ever-increasing number of eyeballs. Those eyeballs also contribute to the feedback loop, creating new links, new user-generated content and new activity, all of which continue to drive rankings up, which drives new eyeballs, which… well, you get the idea. And the cycle continues.
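To make the compound-interest analogy concrete, here is a minimal toy sketch of how that compounding plays out. All of the numbers are hypothetical (a made-up starting audience and an assumed feedback rate), chosen purely for illustration and not drawn from any study or from this column:

# Toy model of a positive feedback loop in organic traffic.
# All figures are hypothetical and exist only to illustrate compounding.

visitors = 10_000      # assumed starting monthly organic visitors
feedback_rate = 0.05   # assumed share of this month's audience whose links,
                       # shares and content lift next month's traffic

for month in range(1, 13):
    new_signals = visitors * feedback_rate  # links, UGC and activity created this month
    visitors += new_signals                 # those signals pull in more visitors next month
    print(f"Month {month:2d}: {visitors:,.0f} visitors")

# After 12 cycles the audience has grown by roughly 80% with no added spend,
# which is the "compound interest" described above. A fixed-budget campaign
# with no feedback loop would simply stay flat at about 10,000 per month.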

Investment Required

Virtuous cycles require an upfront investment, and it’s usually a significant one. You can’t collect compound interest on a zero balance. Cycles don’t start from scratch.

In SEO, the investments required come in the form of content and an engaging user experience. You have to give a user a reason to come, to engage and to evangelize to really leverage the benefits of SEO. You can evaluate if you have the makings of a virtuous cycle by asking yourself the following questions:

– What are my users coming for?
– What will they do?
– How can they engage?
– Why will they care?
– Will their expectations be exceeded?

If you have a less than satisfactory answer to any of these questions, you don’t have what it takes to create a virtuous cycle.

Appealing to Human Nature

If your cycle depends on human behavior, as most do, you have to appeal to one of the basic tenets of human nature. As complicated as we can be, we are generally driven by a surprisingly small number of basic needs. Harvard professors Nitin Nohria and Paul Lawrence, in their book “Driven,” identified four fundamental human drives: We need to acquire, to learn, to bond and to defend. Examine any virtuous cycle, and you’ll always find at least one of these drives at the heart of it.

Ask yourself how your online presence contributes to these drives. Remember, for a cycle to begin, positive feedback is required. And positive feedback depends on engagement from your visitors.

Universally Beneficial

Finally, a virtuous cycle needs to benefit all parties in order for it to be sustainable. It needs to be a win/win/win. If, somewhere along the line, someone gets screwed, the cycle will ultimately fall apart.

In SEO, this means you must play along with the algorithm rather than try to beat it. Short-term thinking and virtuous cycles never go well together. One algorithmic update that cracks down on an SEO loophole will shut down your cycle in a heartbeat. But if you work with a search engine to make a great user experience discoverable, the cycle will begin.