In Search of B2B Landmarks

First published September 27, 2007 in Mediapost’s Search Insider

This week (actually, right about the time you’ll be reading this column) I’ll be talking to the American Business Media Publishers Summit in Chicago about online opportunities, from a user’s perspective. As I was getting ready for the address, I realized there’s a substantial piece of the B2B market that’s missing online. I call it a market enabler.

Looking for Landmarks

Think of our typical progression when we begin researching something online. If it’s new territory, the first thing we need to do is find a landmark, and then we navigate out from it. This is true in both the online and the real world. Think of Google as everybody’s favorite landmark. It’s the starting point of nearly all our online navigation, because we know we can always get back to it if we’re lost. In fact, it becomes the vehicle of our online navigation in almost all cases. The only time we deviate from it is when we have enough familiarity with a certain section of the online landscape to find other landmarks without it. For example, if I’m planning a trip somewhere, I usually don’t start at Google. I either start at one of the travel tools I have bookmarked (Farecompare.com, Kayak, Sidestep) or at my favorite travel community, Tripadvisor.com. I’ve been down this path before, so I’ve memorized other familiar landmarks. Otherwise, I always start at Google.

But there are some things we look for in our landmarks. We want them to be recognizable. We want them to be authoritative. We want them to be comprehensive. And usually, we want them to be relatively agnostic. We don’t want to be pushed in any particular direction. We want to choose our own paths. We want a neutral marketplace that allows us to compile our own consideration set, not have it built for us.

Making Life Easier

It also helps if our landmarks incorporate some strong navigational and comparison functionality. One of the best things about the travel sites and tools I’ve mentioned is their sophisticated search and filtering capabilities. They beat Google at this particular game. They’re a more useful landmark to navigate from. And increasingly, they’re incorporating authentic community dialogue and reviews with the search functionality. I can search, sort and qualify, all in one place. They make the difficult job of planning a trip easier. They’re market enablers, because they allow us to compare alternatives more effectively. If we look at the two best examples of market enablers, eBay and Amazon, they share all of the above characteristics.

So, let’s return to the B2B marketplace. In our B2B survey, we found that almost everyone starts with Google, because most of the time when we research B2B purchases, we’re starting in unfamiliar territory. We have no landmarks. And while we usually end up going fairly quickly to vendor sites, the survey found a strong desire for an unbiased landmark as the market’s middle ground. Yet no enablers have firmly established themselves in this position. There is no eBay or Amazon, or even a TripAdvisor, of B2B. There are vertical engines, including Business.com, Knowledgestorm, KellySearch, ThomasNet and others, but none has dominated the landscape to this point.

Sorting through the Haystack

In a recent B2B panel I moderated, consultant Karen Breen Vogel noted that these vertical properties do restrict the scope of the search: rather than looking for a needle in a haystack, you’re looking for a needle in a needlestack. While this is true, it can still be a pretty painful process if you’re looking for the right needle. The problem is that the B2B marketplace is vast and fragmented. Also, there are no obvious affiliation or revenue opportunities, as there are in the travel business. There isn’t an obvious money trail to follow in the B2B world to make enabling the marketplace a potentially lucrative proposition. Most of the players have morphed over from being directory publishers in the offline world, and they still follow the paid-listing model. Unfortunately, this doesn’t lend itself to the neutral marketplace favored by researching buyers.

There are few purchase processes more difficult or taxing than a complicated B2B one. Sorting out potential vendors can be a long, tedious and frustrating process. First, there’s no emotional investment. This isn’t planning a vacation; this is your job. Second, the risk level is extremely high. Screw up, and your job may evaporate. While the potential to make money may be obscured by the challenges, the buyer’s need is painfully obvious. And I can’t help thinking: if eBay could do it, given the immense diversity of its marketplace, there must be a way.

Personalization Catches the User’s Eye

First published September 13, 2007 in Mediapost’s Search Insider

Last week, I looked at the impact the inclusion of graphics on the search results page might have on user behavior, based on our most recent eye tracking report. This week, we look at the impact that personalization might bring.

One of the biggest hurdles is that personalization, as currently implemented by Google, is a pretty tentative representation of what personalization will become. It only impacts a few listings on a few searches, and the signals driving personalization are limited at this point. Personalization is currently a test bed that Google is working on, but Sep Kamvar and his team have the full weight of Google behind them, so expect some significant advances in a hurry. In fact, my suspicion is that Google is holding a lot in reserve, waiting for user sensitivity around the privacy issue to lessen a bit. We didn’t really expect the current flavor of personalization to alter user behavior that much, because it’s not making much of a difference to the relevance of the results for most users.

But if we look forward a year or so, it’s safe to assume that personalization will become a more powerful influencer of user behavior. So, for our test, we manually pushed the envelope of personalization a bit. We divided the study into two separate sessions around one task (an unrestricted opportunity to find out more about the iPhone) and used the click data from the first session to help us personalize the results for the search experience in the second session. We used past sites visited first to determine what the intent of the user might be (research, looking for news, looking to buy), and second to tailor the personalized results to provide the natural next step in their online research. We showed these results in organic positions 3, 4 and 5 on the page, leaving base Google results in the top two organic spots so we could compare.
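The approach described above, inferring a likely intent from past sites visited and then picking the natural next step, can be sketched roughly like this. To be clear, this is an illustrative sketch only: the study did this mapping by hand, and the category names, rules and function names here are invented for illustration.

```python
# Illustrative sketch (not the study's actual method): map categories of
# previously visited sites to a likely intent, then pick the "next step"
# content to surface in the personalized results. All names are invented.

INTENT_RULES = {
    "review_sites": "research",
    "news_sites": "news",
    "retail_sites": "buy",
}

NEXT_STEP = {
    "research": "in-depth reviews and comparisons",
    "news": "latest iPhone news coverage",
    "buy": "retailer and pricing pages",
}

def infer_intent(visited_categories):
    """Pick the most frequent matching intent; default to 'research'."""
    counts = {}
    for category in visited_categories:
        intent = INTENT_RULES.get(category)
        if intent:
            counts[intent] = counts.get(intent, 0) + 1
    return max(counts, key=counts.get) if counts else "research"

history = ["retail_sites", "review_sites", "retail_sites"]
intent = infer_intent(history)
print(intent, "->", NEXT_STEP[intent])  # buy -> retailer and pricing pages
```

A session heavy on retail visits would be steered toward purchase-oriented results, which is the kind of intent-to-content connection the test tried to make manually.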

Stronger Scent

The results were quite interesting. On the non-personalized results pages, taken straight from Google (in signed-out mode), 18.91% of the time spent looking at the page went to these three results, 20.57% of the eye fixations landed there, and 15% of the clicks were on organic listings 3, 4 and 5. The majority of the activity was much further up the page, in the typical top-heavy Golden Triangle configuration.

But on our personalized results, participants spent 40.4% of their time on these three results, 40.95% of the fixations were on them, and they captured a full 55.56% of the clicks. Obviously, from the user’s point of view, we did a successful job of connecting intent and content with these listings, providing greater relevance and stronger information scent. We manually accomplished exactly what Google wants to do with the personalization algorithm.

Scanning Heading South

Something else happened that was quite interesting. Last week I shared how the inclusion of a graphic changed our “F” shaped scanning patterns into more of an “E” shape, with the middle arm of the “E” aligned with the graphic. We scan that first, and then scan above and below. When we created our personalized test results pages, we (being unaware of this behavioral variation at the time) coincidentally included a universal graphic result in the number 2 organic position, as this is what we were finding on Google.

Users started their scan at the graphic, then looked above and below to decide where to scan next. Combine that with the greater relevance and information scent of the personalized results, and we saw a very significant relocation of scanning activity, moving down from the top of the Golden Triangle.

One of the things that distinguished Google in our previous eye tracking comparisons with Yahoo and Microsoft was its success in keeping the majority of scanning activity high on the page, whether those top results were organic or sponsored.

Top-of-page relevance has been a religion at Google. More aggressive presentation of sponsored ads (Yahoo) or lower quality and relevance thresholds for those ads (Microsoft) meant that on those engines (at least as of early 2006) users scanned deeper and were more likely to move past the top of the page in their quest for the most relevant results. Google always kept scan activity high and to the left.

But ironically, as Google experiments with improving the organic results set, both through the inclusion of universal results and through more personalization, its biggest challenge may be making sure sponsored results aren’t left in the dust. Top-of-page scanning is ideal user behavior that also happens to offer a big win for advertisers. As results pages are increasingly in flux, it will be important to ensure that scanning doesn’t move too far from the upper left corner, at least as long as we still have a linear, one-dimensional, top-to-bottom list of results.

Search Engine Results: 2010 – Marissa Mayer Interview

Just getting back in the groove after SES San Jose. You may have caught some of my sessions, or heard that we’ve released a white paper looking at the future of search, with some eye tracking on personalized and universal search results. We don’t have the final version up yet, but it should be available later this week. The sneak preview got rave reviews in San Jose.

Anyway, I interviewed a number of influencers in the space, and I’ll be posting the full transcripts here on my blog over the next week. I already posted Jakob Nielsen’s interview. Today I’ll be posting Marissa Mayer’s; she gave a keynote at SES San Jose. It makes for interesting reading. Also, I’ll be running excerpts and additional commentary on Just Behave on Search Engine Land. The first half ran a couple of weeks ago. Look for more (and a more regular blog schedule) over the next few weeks. Summer’s over and it’s back to work.

Here’s my chat with Marissa:

Gord: I guess I have one big question that will probably break out into a few smaller questions. What I wanted to do for Search Engine Land is speculate on what the search engine results page might look like to the user in three years’ time. With some of the emerging things like personalization and universal search results, and some of the things happening with the other engines (Ask with their 3D Search, which is their flavor of universal), it seems to me that, for the first time in a long time, the results we’re seeing may be in a significant amount of flux over the next three years. I wanted to talk to a few people in the industry about their thoughts on what we might be seeing three years down the road. So that’s the big overarching question I’m posing.

Marissa: Sure, Minority Report on search result pages… Well, I’d like to say it’s going to be like that, but I think that’s a little further out. There are some really fascinating technologies. I don’t know if you’ve seen some of the work being done by a guy named Jeff Han?

Gord: No.

Marissa: So I ran into Jeff Han at TED both of the past two years. Basically he was doing multi-touch before they did it on the iPhone, on a giant wall-sized screen, so it actually does look a lot like Minority Report. It was this big space where you could interact, you could annotate, you could do all those things. But let me talk first about some trends I see happening that are going to drive change.

One is that we are seeing more and more broadband usage, and I think in three years everyone will be on very fast connections, so there will be a lot more to choose from and a lot more data without taking a large latency hit. The other thing we’re seeing is different mediums: audio, video. They used to not work. If you remember, going back a year ago, every time you clicked on an audio file or a movie file, it would be, like, ‘thunk’, it needs a plug-in, or ‘thunk’, it doesn’t work. Now we’re coming into some standardized formats and players that are either browser- or technology-independent enough, or are integrated enough, that they are actually going to work. And we’re also seeing users having more and more storage on their end. Those are the three computer science trends that are going to change things. I also think that people are becoming more and more inclined to annotate and interact with the web. It started with bloggers, and then it moved to mash-ups, and now people are really starting to take a lot more ownership over their participation on the web. They want to annotate things; they want to mark it up.

So I think when you add these things together it means a couple of things. One, we will be able to have much richer interaction with the search results pages. There might be layers of search results pages: take my results and show them on a map, take my results and show them on a timeline. It’s basically the ability to interact in a really fast way, and take the results you have and see them in a new light. So I think that kind of interaction will be possible pretty easily and pretty likely. I think it will be, hopefully, a layout that’s a little bit less linear and text-based, even than our search results today, that ultimately uses what I call the ‘sea of whiteness’ more in the middle of the page, and lays out in a more information-dense way all the information, from videos to audio reels to text, and so on and so forth. So imagine the results page going from being long and linear, with ten results you can scroll through, to having ten very heterogeneous results, where we show each of those results in a form that really suits its medium, and in a more condensed format. A couple of years ago we did a very interesting experiment here on the UI team where we took three or four different designs where the problem was artificially constrained. It was above-the-fold Google. If you needed to say everything that Google needed to say above the fold, how would you lay it out? Some came in with two columns, but I think two columns are really hard when the page is linear and text-based. When you start seeing some diagrams, some video, some news, some charts, you might actually have a page that looks and feels more like an interactive encyclopedia.

Gord: So, we’re almost going from a more linear presentation of results, very text based, to almost more of a portal presentation, but a personalized portal presentation.

Marissa: Right. And I think as people, one, are getting more bandwidth and, two, are more savvy with how they look at more information… think of it this way: it’s serial access versus random access. One of my pet peeves is broadcast news; I really don’t like televised news anymore. I like newspapers, and I like reading online, because when I’m online or with newspapers, I have random access. I can jump to whatever I’m most interested in. When you’re sitting there watching broadcast news, you have to take it in the order, at the pace and at the speed that they are feeding it to you. And yes, they try to make it better by having the little tickers at the bottom, but you can’t just jump in to what you’re interested in. You can only read one piece of text at a time, and it’s hard to survey and scan and hone in on one type of medium or another when it’s all one medium. So certainly there is some random access happening with the search results today. I think as the results format becomes much more heterogeneous, we’re going to have a more condensed presentation that allows for better random access. Above the fold will be really full of content: some text, some audio, some video, maybe even playing in place, and you see what grabs your attention and pulls you in. It’s almost like random access on the front page of the New York Times: am I more drawn to the picture, or the chart, or this piece of content down here? What am I drawn to?

Gord: Right. If you’re looking at different types of stimuli across the page, I guess what you’re saying is, as long as all that content is relevant to the query, you can scan it more efficiently than you could with the standardized linear, text-based scanning that we’re seeing now.

Marissa: That’s right.

Gord: Ok.

Marissa: So the eyes follow and they just read and scan in a linear order, where when you start interweaving charts and pictures and text, people’s eyes can jump around more, and they can gravitate towards the medium that they understand best.

Gord: So, this is where Ask is going right now with their 3D search, where it’s broken it into 3 columns and they’re mixing images and text and different things.  So I guess what we’re looking at is taking it to the next extreme, making it a richer, more interactive experience, right?

Marissa: Rather than having three rote columns, it would actually be more organic.

Gord: So more dynamic.  And it mixes and matches the format based on the types of material it’s bringing back.

Marissa: Well, to keep harping on the analogy of the front page of the New York Times: it’s not like the New York Times… I mean, they have basically the same layout each time, but it’s not like they have a column that only has one kind of content, and if the content doesn’t fill the column, too bad. They have a basic format that they change as it suits the information.

Gord: So in that kind of format, how much control does the user have? How much functionality do you put in the hands of the user?

Marissa: I think that, back to my third point, people will be annotating search results pages and web pages a lot. They’re going to be rating them, they’re going to be reviewing them. They’re going to be marking them up, saying “I want to come back to this one later.” We have some remedial forms of this in Notebook now, but I imagine that we’re going to make notes right on the pages later. People are going to be able to say, I want to add a note here, I want to scribble something there, and you’ll be able to do that. So I think the presentation is going to be largely based on our perceived notion of relevance, which of course leverages the user: the ways they interact with the page, and what they do, help inform us as to what we should do. So there is some UI user interaction, but the majority of user interaction will be about keeping that information and making it consumable in the best possible way.

Gord: Ok, and then, like you said, you go one step further and provide multiple layers, so you could say, ok, if it’s a local search, plot my search results on a map. There are different ways, at the user’s request, to present that information, and different layers they can superimpose the results on.

Marissa: So what I’m sort of imagining is that in the first basic search, you’re presented with a really rich general overview page, that interweaves all these different mediums, and on that page you have a few basic controls, so you could say, look, what really matters to me is the time dimension, or what really matters to me is the location dimension.  So do you want to see it on a timeline, do you want to see it on a map?

Gord: Ok, so taking it a step further than what you do with your news results or your blog search results, where you can sort them a couple of different ways, and increasing the functionality so it’s a richer experience.

Marissa: It’s a richer experience. What’s nice about timeline and date, as we’re currently experimenting with them on Google Experimental, is that they not only allow you to sort differently, they allow you to visualize your results differently. So if you see your results on a map, you can see the loci: you can see this location is important to this query, and that location is really important to that query. And when you look at a timeline you can see, “wow, this is a really hot topic for that decade.” They help you visualize the nut of information across all the results in fundamentally different ways that simple sorts only begin to get at. It’s really that richer presentation and that overview of results at the meta level that helps you see it.

Gord: Ok. I had a chance to talk to Jakob Nielsen about this on Friday, and he doesn’t believe we’re going to see much of a difference in the search results in three years. He just doesn’t think that can be accomplished in that time period. What you’re talking about is a pretty drastic change from what we’re seeing today, and the search results we’re seeing today haven’t changed that much in the last 10 years, as far as what the user sees. You really feel this is possible?

Marissa: It’s interesting. You know, I pay a lot of attention to how the results look. And I do think that change happens slowly over time, with little spurts of acceleration. We at Google certainly saw a little accelerated push in May, when we launched Universal Search. I’m of the view that maybe it’s 3 years out, maybe it’s 5 years out, maybe it’s 10 years out. I’m a big subscriber to the slogan that people tend to overestimate the short term and underestimate the long term. My analogy is that when I was 5, I remember watching the Jetsons and thinking, this rocks! When I’m thirty there will be flying cars! Right? And here I am, 32, and we don’t even have a good flying car prototype, and yet the world has totally changed in ways that nobody expected because of the internet and computing. In ways that in the 1980s no one even saw coming, because personal computers were barely out, let alone the internet. It’s interesting. We do our offsite in August. I do an offsite with my team where we do Google two years out, and it’s really interesting to see how people think about it. I take all the prime members on my team, the senior engineers, and everybody has homework. They have to do a homepage and a results page of Google, and this year it’ll be Google 2009.

Gord: Oh Cool!

Marissa: Six months out, it’s really easy, because if it’s going to launch in 6 months and it’s big enough that you would notice, we’re working on it right now and we know it’s coming. And five or ten years out we start getting into the bigger-picture things like what I’m talking to you about; the little precursors that get us ready for those advances happen between now and then, and that’s what’s shifting. So I’m giving you the big picture so you can start understanding what some of the mini steps that might happen in the next 3 years, to get us ready for that, would be. The two-to-three-year timeframe is painful. Everybody at my offsite said, “this timeframe sucks!” It’s just far enough out that we don’t have great visibility. Will mobile devices be a really big new factor in three years? Maybe, maybe not. Some of the things that are making fast progress now may even take a big leap, right, like the internet did from 1994 to 97. Or if you think about Gmail and Maps, like AJAX applications… you wouldn’t have foreseen those in 2002 or 2003. So two or three years is a really painful timeframe, because some things will be radically different, but probably in different ways than you would expect. You have very low visibility in our industry at that timeframe. So I actually find it easier to talk about the six-month timeframe, or the ten-year timeframe. I’m giving you the ten-year picture, knowing that it’s not like the unveiling of a statue, where you can just snatch the sheet off and go, “Voila, there it is.” If you look at the changes we’ve made over time to Google search, they’ve always been “getting this ready, getting this ready.” The changes are very slow and feel very incremental. But then you look at them in summation over 18 months or two years, and you’re like, “you know, nothing felt really big along the way, but the results are fundamentally different today.”

Gord: One last question.  So we’re looking at this much richer search experience where it’s more dynamic and fluid and there are different types of content being presented on the page.  Does advertising or the marketing message get mixed into that overall bucket, and does this open the door to significantly different types of presentation of the advertising message on the search results page?

Marissa: I think there will be different types of advertising on the search results page. As you know, my theory is always that the ads should match the search results. So if you have text results, you have text ads, and if you have image results, you have image ads. As the page becomes richer, the ads also need to become richer, just so that they look alive and match the page. That said, trust is a fundamental premise of search. Search is a learning activity. You think of Google and Ask and these other search engines as teachers. As an end user, the only way learning and teaching works is when you trust your teacher. You know you’re getting the best information because it’s the best information, not because they have an agenda to mislead you, or to make more money, or to push you somewhere for their own purposes. So while I do think the ads will look different, in format or in placement, I think our commitment to calling out very strongly where we have a monetary incentive, and may be biased, will remain. Our one promise on our search results page, and I think it will stand, is that we clearly mark the ads. It’s very important to us that users know what the ads are, because it’s the disclosure of that bias that ultimately builds the trust which is paramount to search.

Gord: Ok. Great to see you’re keynoting at San Jose in August.

Marissa: Should be fun.  This whole topic has me kind of jazzed up so maybe I’ll talk about that.

Breaking “Auction Order” Explained

One of the things that raised eyebrows in my interview with Diane Tang and Nick Fox was the following section, regarding how Google determines which ads rank first and climb into the all-important top sponsored locations:

Nick: Yes, it’s based on two things. The primary element is the quality of the ad. The highest-quality ads get shown on the top. The lower-quality ads get shown on the right-hand side. We block off the top ads from the top of the auction, if we really believe those are truly excellent ads…

Diane: It’s worth pointing out that we never break auction order…

Nick: One of the things that’s sacred here is making sure that the advertisers have the right incentives. In an auction, you want to make sure that the folks who win the auction are the ones who actually did win the auction. You can’t give the prize away to the person who didn’t win. The primary element in that function is the quality of the ad. Another element of the function is what the advertiser is going to pay for that ad, which, in some ways, is also a measure of quality. We’ve seen that in most cases where the advertiser is willing to pay more, it’s more of a commercial topic. The query itself is more commercial, and therefore users are more likely to be interested in ads. So we typically see that queries with high-revenue ads, ads that are likely to generate a lot of revenue for Google, are also the queries where the ads are most relevant to the user, so the user is more likely to be happy as well. So it’s those two factors that go into it. But it is a very high threshold. I don’t want to get into specific numbers, but the fraction of queries that actually show these promoted ads is very small.

This seemed a little odd to me in the interview, and I made a note to ask further about it, but what can I say, I forgot and went on to other things. But when the article got posted on Search Engine Land, Danny jumped on it at Sphinn:

“Seriously? I mean, it’s not an auction. If it were an auction, highest amount would win. They break it all the time by factoring in clickrate, quality score, etc. Not saying that’s bad, but it’s not an auction.”

This reminded me to follow up with Nick and Diane. Diana Adair, on the Google PR team, responded with this clarification:

We wanted to follow up with you regarding your question below.  We wanted to clarify that we rank ads based on both quality score and by bid.  Auction order, therefore, is based on the combination of both of those factors.  So that means that it’s entirely possible that an ad with a lower bid could rank higher than an ad with a higher bid if the quality score for the less expensive ad is high enough.

So, it seems it’s the use of the word “auction” that’s throwing everyone off here. Google’s use of the term includes ad quality; the rest of the world thinks of an auction as a place where the highest bid (exclusively) determines the winner. Otherwise, as Danny said, “it’s not an auction.” So, with that interpretation, I assume that Nick and Diane’s comment (which sounds vaguely like the title of a John Mellencamp song) means that Google won’t arbitrarily hijack these positions for other types of packages that may include presence on the SERP, as in the current Bourne Ultimatum promotion.
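Read that way, the clarification boils down to a ranking function that combines bid and quality score. Here’s a minimal sketch of that idea; the simple bid-times-quality formula, the function names and the numbers are my own illustrative assumptions, not Google’s actual auction math:

```python
# Illustrative sketch: rank ads by a combination of bid and quality score,
# so a lower bid with a high enough quality score can outrank a higher bid.
# The bid * quality formula here is an invented stand-in, not Google's.

def ad_rank(bid, quality_score):
    """Illustrative rank value: the higher value wins the auction."""
    return bid * quality_score

ads = [
    {"name": "A", "bid": 2.00, "quality": 3},  # higher bid, lower quality
    {"name": "B", "bid": 1.50, "quality": 5},  # lower bid, higher quality
]

ranked = sorted(ads, key=lambda ad: ad_rank(ad["bid"], ad["quality"]),
                reverse=True)
print([ad["name"] for ad in ranked])  # B outranks A despite the lower bid
```

Under any function of this shape, "auction order" is never broken, because the order is defined by the combined score rather than by bid alone.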

Interview with Google’s Nick Fox and Diane Tang on Ad Quality Scoring

I had the chance to talk to Nick Fox and Diane Tang from Google’s Ad Quality team about quality scoring and how it impacts the user experience. Excerpts from the article along with additional commentary will be in Friday’s Just Behave column, but here is the full transcript.

Gord: What I wanted to talk about a little bit is how quality, particularly in the top sponsored ads, impacts user experience, and to talk a little about relevancy. Just to set the stage, one of the things I talked about at SES in Toronto was the fact that, as far as a Canadian user goes, because the Canadian ad market isn’t as mature as the American one, we’re not seeing the same acceptance of those sponsored ads at the top. You’re just not seeing the brands that you would expect to see for a lot of the queries. You’re not seeing a lot of trusted vendors in that space; they just have not adopted search the same way they have in the States. What we’ve seen in some of our user studies is a greater tendency to avoid that real estate, or at least to quickly scan it and then move down. So that’s the angle I really want to take here: how important ad quality and ad relevance are to that user experience. And then there’s one thing I’ve always noticed in a number of our user studies: of all the engines, Google seems to be the most stringent on what it takes to be a qualified ad, to get promoted from the right rail to the top sponsored ads. So that sets a broad framework for what I wanted to talk about today.

Nick: Let me give you a quick overview of who I am and who Diane is and what we work on, and then we’ll jump into the topics that you’ve raised. What Diane and I work on is called Ad Quality, and it is essentially everything about how we decide which ads to show on Google and our partners’ sites, what they should look like, how much we charge for them, how the auction works… everything from soup to nuts. If you ask us what our goal is, it’s to make sure our users love our ads. If you ask Larry Page what our goal is, it’s to make our ads as good as our search results. So it’s a heavy focus on making sure that our users are happy and that our users are getting what they want out of our ads. We think of ourselves as representing the user, making sure the user is getting what they really need. It’s very similar to what we do on the search quality side, making sure that the search results are very good.

I think a lot of the things you’ve picked up on are very accurate. In terms of the focus on top ad quality, and in general the focus on quality, I think what you picked up on in your various reports, as well as in the study in Canada, is pretty accurate and pretty much what drives what we are working on here. The main motivation for why I think ad quality is important is that, as a company, we need to make sure users continue to trust our ads. If users don’t trust our ads, they will stop looking at the ads; once they stop looking at the ads they’ll stop clicking on the ads, and all is lost. So what we need to make sure of, in the long run, is that users believe the ads will provide them what they are looking for, that they continue to treat the ads as valuable real estate, and that they continue to trust them.
So that is what we are going for. As we look at the competitive landscape, we see a lot of what you see. We certainly have had historically, and continue to have, much more of a focus on the quality of the ads: making sure we’re not doing things where we trade off the user experience against revenue. We all have the ability to show more ads or worse ads, but we take a very stringent approach, as you’ve noticed, to making sure we only show the best ads, the ones we believe the user will actually get something out of. If the user’s not going to get something out of the ad, we don’t show the ad; otherwise the user is going to be less likely to consider ads in the future.

Diane: It’s worth pointing out that basically what we’re saying is that we are taking a very long term view toward making sure our users are happy with our ads, and it’s really about making sure they trust what we give them.

Gord: One thing I’ve noticed in all my conversations whether they’re with Marissa or Matt or you, the first thing that everyone always says at Google is the focus around the user experience. The fact that the user needs to walk away satisfied with their experience. When we’re talking about the search results page, that focuses very specifically on what we’ve called in our reports the “area of greatest promise”. That upper left orientation on the search results page and making sure that whatever is appearing in that area had better be the most relevant result possible for the user.  In conversations with other engines I hear things like balanced ecosystems and communities that include both users and advertisers. I’ve always been struck by the focus at Google and I’ve always been a strong believer that corporations need sacred cows, these untouchable driving principles that everyone can rally around.  Is that what we’re talking about here with Google?

Nick: I think it is. I think it comes from the top and it comes from the roots. If we were making a proposal to Larry and Sergey and Eric where we said, “Hey, let’s show a bunch of low quality ads,” the first question they’re going to ask is, “Is this the right thing for the user?” And if the answer is no, we get kicked out of the room and that’s the end of the conversation. So you get that from the top and it permeates all the way through. You hear it when you speak to Marissa and Matt and us, and it permeates the conversations we have here as well. It’s not just external when we talk about the user; it’s what the conversation is internally as well. It just exudes through the company because it’s part of how we think. I wouldn’t say that there isn’t a focus on the advertiser too; it’s just that our belief is that the way you get that balance is by focusing on the user. As long as the user’s happy, the user’s clicking on the ads, and as long as the user’s clicking on the ads, the advertiser’s getting leads and everything works. If you focus on the advertisers in the short term, maybe the advertisers will be happy in the short term, but in the long term that doesn’t work. That used to be a hard message to get across; it used to be the case that advertisers didn’t really get it. One of the most rewarding things for me is that advertisers now see that, they get it. Some of the stuff we do in the world of ad quality is frustrating to advertisers, because in some cases we’re preventing their ads from running where they’d like them to run. We’ve seen that the advertiser community has actually become more receptive to that recently, because they understand why we’re doing it and they understand that in the long term they’re benefiting from it as well. So I think you are seeing that there is a difference in approach between us and our competitors: we believe the ecosystem thrives if you focus on the users first.

Gord: I’d like to focus on what, to me, is a pretty significant performance delta between the right rail and top sponsored. We’ve seen the scan patterns put top sponsored directly in the primary scanning path of users, where the right rail is more of a sidebar that may be considered after the primary results are scanned. With whatever you can share, can you tell me a little about what’s behind that promotion from right rail to top sponsored?

Nick: Yes, it’s based on two things. The primary element is the quality of the ad. The highest quality ads get shown on the top; the lower quality ads get shown on the right hand side. We only break off the top ads from the top of the auction if we really believe those are truly excellent ads…

Diane: It’s worth pointing out that we never break auction order…

Nick: One of the things that’s sacred here is making sure that the advertisers have the right incentives. In an auction, you want to make sure that the folks who win the auction are the ones who actually did win the auction; you can’t give the prize away to the person who didn’t win. The primary element in that function is the quality of the ad. Another element of the function is what the advertiser’s going to pay for that ad, which, in some ways, is also a measure of quality. We’ve seen that in most cases where the advertiser’s willing to pay more, it’s a more commercial topic: the query itself is more commercial, therefore users are more likely to be interested in ads. So we typically see that the queries that have high revenue ads, ads that are likely to generate a lot of revenue for Google, are also the queries where the ads are most relevant to the user, so the user is more likely to be happy as well. So it’s those two factors that go into it. But it is a very high threshold. I don’t want to get into specific numbers, but the fraction of queries that actually show these promoted ads is very small.
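To make the mechanics Nick is describing concrete, here’s how I’d sketch them in Python. Everything here is my own invention for illustration: the bid-times-quality scoring, the threshold value and the slot count are stand-ins, since Google’s actual formula isn’t public. The point it demonstrates is the one Diane flagged: ads are ranked once, and promotion to the top block never reorders the auction.

```python
# Hypothetical sketch of top-of-page promotion: rank ads by a
# quality-weighted bid, then fill the top slots only while each ad
# in auction order clears a high quality bar. Promotion stops at the
# first ad that misses the bar, so auction order is never broken.

QUALITY_THRESHOLD = 0.8   # fictional bar for top-of-page promotion
MAX_TOP_SLOTS = 3

def lay_out(ads):
    """Split ads into a top-of-page block and a right-rail block."""
    ranked = sorted(ads, key=lambda a: a["bid"] * a["quality"], reverse=True)
    top = []
    for ad in ranked:
        # The top block is always a prefix of the auction order.
        if len(top) < MAX_TOP_SLOTS and ad["quality"] >= QUALITY_THRESHOLD:
            top.append(ad)
        else:
            break
    return top, ranked[len(top):]
```

One consequence worth noticing: a high-bid, low-quality ad can win the auction outright and still keep every ad on the right rail, because promoting anything past it would break auction order.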

Gord: One thing we’ve noticed, actually, in an eye tracking study we did on Google China, where the search market is far less mature, is that you very, very seldom see those ads being promoted to top sponsored. So I would imagine that’s got to be a factor. Is the same threshold applied across all markets, or does the quality threshold vary from market to market?

Nick: I don’t want to get too much into the specifics of that kind of detail. We certainly take the approach in each market that we believe is most effective for that market. Handling everything at a global level doesn’t really make a lot of sense, because in some cases you have micro markets, or, in the case of China, a large market, where it makes sense to tailor our approach to what makes sense for that market: what users from that market are looking for, what the maturity of that market is. In a market that has a different level of search quality, for example, it might make sense to take a different approach to how we think about ads as well. So that’s all I want to say there. But you’re right: in a market like China that’s less mature and at an early stage of its development, you do see fewer ads at the top of the page; there are just fewer ads there that we believe are good enough to show at the top of the page. Contrast that with a country like the U.S. or the U.K., where the markets are very mature and have the high quality ads we feel comfortable showing at the top, and we show top ads.

Diane: But market maturity is just one factor we look at. There’s also user sophistication with the internet, and other key factors. We have to take all of this into account to decide what the approach is on a per-market basis.

Gord: One of the questions that always comes up, every time I sit on a panel that has anything to do with quality scoring, is that what in an ad might generate a click-through is not necessarily what will generate a quality visitor when you carry it forward into conversion. For instance, you can entice someone to click through, but they may not convert; and of course, if you’re enticing them to click through, you’re going to benefit from the quality scoring algorithm. How do we correct for that in the future?

Nick: I think there are two things. One is that, in general, an ad that’s being honest is essentially a very relevant ad and therefore gets a high click through rate. We’ll typically see that that ad also has a high conversion rate. In cases where the advertiser’s not being dishonest, a high click through rate is generally correlated with a high conversion rate, and it’s simply because that ad is more relevant: it’s more relevant in terms of getting the user to click on the ad in the first place, and it’s also more relevant in delivering what the user is looking for once they actually get to the landing page. So you see a good correlation there.

There are cases where advertisers can be misleading in their ad text and create an incentive for a user to click on their ad, and then not be able to deliver. The advertiser could say “great deals on iPods” and then sell iPod cases or something. In that case, the high click through rate is unlikely to be correlated with a high conversion rate, because the users are going to be disappointed when they actually end up on the page. The good thing for us is that the conversion rate typically gets reflected in the amount that the advertiser’s actually willing to pay; that’s one of the reasons why the advertiser’s bid is a relatively decent metric of quality. In this iPod cases case, because the conversion rate’s likely to be low, the advertiser’s not likely to bid as much: the click just isn’t worth as much to them, therefore they’ll bid less and end up getting a lower rank as a result. So, in many cases, this doesn’t end up being a problem, because it just sort of falls out of the ranking formula. It’s a little bit convoluted.

Gord: Just to restate it to make sure I’ve got it: you’re saying that if somebody is being dishonest, ultimately the return they’re getting will dictate that they have to drop their bid amount, so it will correct itself. If they’re not getting the returns on the back end, they’re not going to pay the same on the front end, and ultimately it will just find its proper place.

Nick: What an advertiser should probably be thinking most about is not ROI per click… it’s actually ROI per impression. The ad that’s likely to generate the most value for the user, and therefore the most value to Google as well as the most value to the advertiser, all aligned in a very nice way, is the ad that’s most likely to generate the most ROI per impression. And because of our ranking formula, those are the ads that are most likely to show up at the top of the auction, and the ones that aren’t fall out. So the advertiser should care about click through rate, but they shouldn’t care about click through rate exclusively, to the extent that it results in a low conversion rate and a low ROI per click for them.
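Nick’s “ROI per impression” framing is worth writing out as arithmetic. This little sketch uses invented numbers (none of them come from Google), assuming profit per impression = CTR × (conversion rate × profit per conversion − CPC). It shows how a misleading, high-CTR ad like the “great deals on iPods” case can be worth less per impression to the advertiser than an honest ad with a lower CTR.

```python
def roi_per_impression(ctr, conv_rate, profit_per_conversion, cpc):
    """Expected advertiser profit each time the ad is shown."""
    return ctr * (conv_rate * profit_per_conversion - cpc)

# Misleading "great deals on iPods" ad: lots of clicks, almost no conversions.
teaser = roi_per_impression(ctr=0.08, conv_rate=0.005,
                            profit_per_conversion=40.0, cpc=0.50)
# Honest ad: half the clicks, ten times the conversion rate.
honest = roi_per_impression(ctr=0.04, conv_rate=0.05,
                            profit_per_conversion=40.0, cpc=0.50)
# With these numbers the teaser loses money on every impression,
# while the honest ad earns it.
```

The same alignment Nick describes falls out of the numbers: the ad that’s better for the user is also the one the advertiser can rationally bid more for.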

Gord: We talked a little bit about ads being promoted to top sponsored. Over the past three or four years, you have experimented a little with the number of ads that you show up there. When we did our first eye tracking study, we usually didn’t see any more than two ads, and that increased to three shortly after. Have you found the right balance in what appears above the organic results?

Diane: I would say that it’s one of those things where the user base is constantly shifting, and the market is constantly shifting. It’s something that we definitely reevaluate frequently. It was definitely a very thought-through decision to move to three, and we actually show three very rarely. When we show three, we seriously consider whether it is in the best interest of the user. There’s a lot of evaluation of the entire page at that point, not just the ads, to decide whether or not it was the right thing. We’re very careful to make sure that we’re constantly at the right balance. It’s definitely something that we look at.

Gord: One of the things we’ve noticed in our eye tracking studies is that there’s a tendency on the part of users to “break off” results into consideration sets, and the magic number seems to be around four. What we’ve seen is that even if they’re open to looking at sponsored ads, they want to include at least the number one organic result as well, as a kind of baseline for reference. They want to be able to flip back and forth and say, “Okay, that’s the organic result; that’s how relevant I feel that is. If one of the sponsored ads is more relevant, then fine, I’ll click on it.” Four seems to be a good number for the user to concentrate on at one time: they quickly build a consideration set that would usually include one or two sponsored ads and at least one organic listing, and then make their decision based on where the highest relevancy is. Does that match what you guys have found as well?

Nick: I don’t think we’ve looked at it in terms of consideration sets, along those lines, but I think that’s consistent with the outcomes that we’ve had, and maybe with some of the thought process that led us to our outcome. The net effect is the same. One of the things that we are careful about is making sure we don’t create an experience where we show no organic results on the page, or at least above the fold. You want to make sure that the user is going to be able to make a decision about what they want to click on, and if you just serve the user one type of result, you’re not really helping the user make that decision. What we care more about is what the user sees in the above-the-fold real estate, not so much the full results page, and that’s probably relatively consistent across the common screen resolutions.

Gord: One of the things that Marissa said when I talked to her a few days ago was that as Google moves into Universal Search results and we’re starting to see different types of results appear on the page, including in some cases images or videos, that opens the door to potentially looking at different presentations of advertising content as well. How does that impact your quality scoring and ultimately how does that impact the user?

Nick: We’ll need to see. I don’t think we know yet. Ultimately it would be our team deciding whether to do that or not, so fortunately we don’t have to worry too much about hooking up the quality score, because we would design a quality score that would make sense for it. There’s a team that focuses on what we call Ad UI, a sub-group within our group, and that’s the team that essentially thinks about what the ads should actually look like.

Diane: And what information can we present that’s most useful to the user?

Nick: So in some cases that information may be an image; in some cases that information may be a video. We need to make sure, in doing this, that we’re not just showing video ads because video happens to be catchy. We want to make sure that we’re showing video ads because the video actually contains the content that’s useful for the user. With Universal Search we found that video search results, for example, can contain that information, so it’s likely that the paid result set could be the same as well. Again, just as with text ads, we’d need to make sure that whatever we do there is user driven rather than anything else, and that users are actually happy with it. There would be a lot of user experimentation before anything was launched along those lines.

Diane: You can track our blogs as well. All of our experiments show up at some point there.

Gord: Right. Talking a little bit about personalization, you started off by saying that Larry and Sergey have dictated that the ads should be more relevant than the organic results in an ideal situation and just as a point of interest, in our second eye tracking study, when we looked at the success rate of click throughs as far as people actually clicking through to a site that appeared to deliver what they were looking for, for commercial tasks, it was in fact the top sponsored ads that had the highest average success rate of all the links on the page. When we’re looking at Personalization, one of the things that, again, Marissa said is we don’t want our organic results and our sponsored results to be too far out of sync. Although personalization is rolling out on the organic side right now, it would make sense, if that can significantly improve the relevancy to the user, for that to eventually fold into the sponsored results as well. So again, that might be something that would potentially impact quality scoring in the future, right?

Nick: Yes. So we have been looking at some… I’m not sure if the right word is personalization, or some sort of user-based or task-based… whatever the right word is… changes to how we think about ads. We have made changes to try to get a sense of what the user’s trying to do right now: whether they’re, for example, in a commercial mind set, and alter how we do ads somewhat based on that type of understanding of the user’s current task. We’ve done much less with trying to… we’ve done nothing, really… with trying to build profiles of the user, trying to understand who the user is and whether the user is a man or a woman, a 45 year old or a 25 year old. We haven’t seen that that’s particularly useful for us. You don’t want to personalize users into a corner; you don’t want to create a profile of them that’s not actually reflective of who they are. We don’t want to freak the user out: if you have a qualified user, you could risk alienating that user. So we’ve been very hesitant to move in that direction, and in general we think that there’s a lot more we can do that doesn’t require going down the profile path.

Diane: You can think of personalization in a couple of different ways, right? It can manifest itself in the results you actually show. It can also be more about how many ads you show, or even the presentation of those ads with regard to the actual information. Those sorts of things. There are many possible directions that can be more fruitful than, as Nick points out, profiling.

Gord: Right, right.

Nick: For example, one of the things that you could theoretically do is… as you know, we changed the background color of our top ads from blue to yellow, because we found that yellow works better in general. You might find that for certain users green is better; you might find that for certain users blue is actually better. Those types of things, where you’re able to change things based on what users are responding to, are more appealing to us than these broad user classification schemes, which seem somewhat sketchy.

Gord: It was funny. Just before this interview, I was actually talking to Michael Ferguson at Ask.com, and one of the things he mentioned that I thought was quite interesting was a different take on personalization. It may get to the point where it’s not just using personalization for the sake of disambiguating intent and improving relevancy; it might actually be using personalization to present results or advertising messages in the form that’s most preferred by the user. So some may prefer video ads. Some may prefer text ads, and they may prefer shorter text ads or longer text ads. I just thought that was really interesting: looking at personalization to actually customize how the results are presented to you, and in what format.

Nick: Yes.

Gord: One last question. You’ve talked before about quality scoring and how it impacts two different things: the minimum bid price, and the actual position on the page. And you’ve said that there are more factors, generally, in the “softer” or “fuzzier” minimum bid algorithm than in the “hard” algorithm that determines position on the page, and that ideally you would like to see more factors included in all of it. Where is Google on that right now?

Nick: There are probably two things. One is that when setting the minimum bid, we have much less information available to us. We don’t know what the specific query is that the user issued. We don’t know what time of day it is. We don’t know what property the user is on. We know very little about the context of what the user is actually trying to do; there’s a whole lot that we don’t know. So what we do when we set a minimum bid has to be much coarser. We just need to be able to say: what do we think this keyword is, what do we think the quality of the ad is, does the keyword meet the objective of the landing page, and make a judgment based on that. We don’t have the ability to be more nuanced in terms of actually taking into account the context in which the ad is likely to show up. So there’s always going to be a difference between what we can use when we set the minimum bid and what we use at auction time to set the position. The other piece of it, though, is that there are certain signals that only affect the minimum bid. Let me give you an example: landing page quality currently impacts the minimum bid, but it doesn’t impact your ranking. The reason for that comes mostly from our decision about how to launch the product, and what we thought was the most expedient way to improve the landing page quality of our ads, rather than from what we think will be the long term design of the system. So I’d expect that signals like landing page quality should eventually impact not only the minimum CPC but also the rank, which ads show at the top of the page, and things like that. That’s where you’ll see more convergence. But there’s always going to be more context that we can get at query time to use in the auction than we can for the minimum CPC.

Search Engines Innovate, Why Not SEMs?

First published July 26, 2007 in Mediapost’s Search Insider

The future of search has been on my mind a lot lately. I’ve just done a series of interviews with some of the top influencers and observers in the space — Marissa Mayer, Danny Sullivan, Greg Sterling, Michael Ferguson, Steven Marder, Jakob Nielsen and others — talking about what the search results page may look like in 2010. I’ll try to corral this into a white paper this fall. I’ve also chatted with a few people about the future of search marketing. And here’s the sum of it all. “Hang on, because you ain’t seen nothing yet!”

Change is the Constant

I have remarked to a number of people in the last week or two that I’ve seen more change in the search results page in the past six months than I have in the last 10 years. And all my interviewees seem to agree: we’re just at the beginning of that change. Whether it’s personalization, universal results, Web 2.0 functionality or mobile, our search experience is about to change drastically. Search will become more relevant, more functional, more ubiquitous and more integrated. It will come with us (via our mobile devices) more often and in more useful ways. It will expand our entertainment options. It will change forever our local shopping trips. And it will all happen quickly.

As Search Goes, So Goes SEM

The question is, what does this do for search marketing? In a recent conversation, I was asked where the major innovation in the search marketing space was coming from. This was prefaced by the remark that when a well-known industry analyst was asked the same question, they (I’ll keep this gender-neutral, as there really aren’t that many industry analysts out there) said there was almost no innovation coming from search marketers; they were “living off the fat.” My first inclination was to jump to the defense of the industry, but this proved harder than I thought.

I realized I haven’t seen a lot of innovation lately. Certainly, the engines themselves are innovating. And I’m seeing innovation in adjacent areas (Web analytics, competitive intelligence). But I’m not seeing a lot happen in the search-marketing space. After a raft of proprietary bid management tools hit a few years ago, there’s been little happening to move the industry forward. In fact, I’ve noticed a lot of SEM heads buried in the sand. We are not encouraging change; we are actively fighting it.

There are probably a lot of reasons why. First and foremost, I think a number of companies that have been in the space for a while are tired. I’ve touched on this in a previous three-part series in Search Insider. Secondly, it’s tough to develop new tools or technologies when you’re completely dependent on APIs or (worse still) scraping information from the search engines.

It’s a very risky call to spend time and resources developing new tools or technologies that can be rendered useless by an arbitrary change at Google or Yahoo — or made obsolete by the rapidly increasing pace of innovation.

Either Help Push Or Get Off!

Whatever the reason (and I’m sure the Search Insider blog will be getting a number of posts refuting my observation), the fact is that if search marketers are indeed riding the wave, it’s coming to a crashing halt very soon. The need for innovation, and for changing your strategic paradigm, is greater than ever. As the search engines change the rules, the search marketers that want to survive must change with them. Innovation will become a necessity.

And, in the end, this will be a good thing.

The change that’s happening in the search space is reflective of the change that is happening throughout marketing and advertising. It’s the continuing evolution of a much more efficient marketplace, where connections between customers and vendors are made tremendously more effective through access to information on both sides.

The traditional uncertainty of advertising is being leached out of the system, due in large part to the tremendous effectiveness of search. And as search becomes more relevant and useful, it will make those connections more reliable, less intrusive and more successful for both parties. The opportunity is there for search marketers to help advertisers successfully negotiate this more efficient marketplace. It remains to be seen if we’re up for the challenge.

Ask: The Reasoning Behind Ask 3D

Last week in my interview with Jakob Nielsen, he called Ask’s 3D label “stupid”. Just to refresh your memory, here’s how the exchange went:

Gord: Like Ask is experimenting with right now with their 3D search. They’re actually breaking it up into 3 columns, and using the right rail and the left rail to show non-web based results.

Jakob: Exactly, except I really want to say that it’s 2 dimensional, it’s not 3 dimensional.

Gord: But that’s what they’re calling it.

Jakob: Yes I know, but that’s a stupid word. I don’t want to give them any credit for that. It’s 2 dimensional. It’s evolutionary in the sense that search results have been 1 dimensional, which is linear: you just scroll down the page. So going to 2 dimensions (they can call it three, but it is two) is the big step, doing something differently. That may take off, and more search engines may do it if it turns out to work well.

My friend Michael Ferguson at Ask (who has his own interview coming up soon) sent me a quick email with the reasoning behind the label:

The 3D label came from the 3 dimensions of search we folded onto one page: query expression in the left rail, results in the center, and content on the right (vs. the one dimension of returning solely results).

So You Really Want to Integrate Search?

The Ontario tourism board and I have been butting heads a little bit in the blogosphere of late. It all came about from an article I wrote a few weeks ago saying that perhaps Canadian advertisers have their “heads up their ass” with search marketing. I used the Ontario Tourism Board as an example of a major organization that was not doing search, and was quickly corrected by Nick Pedota from the board, who indicated that they were in fact running a search campaign. My problem was that I couldn’t find them for any of the keywords I thought they would typically appear for. It seemed to me that there was a disconnect here. This week I published a follow-up column suggesting that perhaps there was a mismatch between the objectives and the allocation of budget in the Ontario Tourism strategy. In a follow-up comment to the column, Nick graciously complimented me on my research and admitted that perhaps there was room for improvement in their integrated search strategy. My suspicion is that the cracks in the strategy don’t lie exclusively with either the Ontario Tourism board or their agency, but likely fall somewhere in between. And it’s not uncommon to find these cracks when major advertisers move toward integrating search into their overall campaign strategies. Kudos are in order for Ontario Tourism recognizing search at all.

In the spirit of improvement, I’d like to offer Nick and other marketers a few tips for successfully integrating search into an overall marketing campaign.

Search should be your first dollars in

Typically, search is added as an afterthought in most marketing campaigns. In fact, search should be the foundation of the campaign, and the first allocation of funds. Searchers are often your best prospects: they’re the ones actively trying to find you. In the case of the Ontario Tourism Board, the entire campaign objective was to drive people to their website. Therefore, it didn’t make much sense to underutilize search as a channel and steer dollars instead to less efficient branding channels such as print and television.

In this case, the first thing that should have been done was to accurately assess the size of the potential search market. This would’ve been done during the keyword analysis, when the prime keywords were identified and the corresponding search volumes were discovered. A smart search marketer would determine the key phrases most likely to convert and start with these, then work outwards to determine the total size of the keyword basket. Going hand in hand with this is determining the average click cost for these keywords. The search marketer has to decide whether the cost per click is justified, given the likelihood to convert.

Once the total available search inventory that meets the quality threshold is established, it should form the core of your marketing budget. These prospects are raising their hands, indicating that they’re looking to find you. They should be the first ones captured in your marketing strategy. Then you can extend the campaign with other branding initiatives.

Realize that branding dollars will drive search volume

Even after you extend your budget into areas other than search, the dollars you spend there will often translate into search activity. Ontario Tourism ran its website address in all its ads, but much of that activity would have translated into searches on the primary search engines. You therefore need top-of-page presence to capture these navigational searches. Ontario Tourism also did some national advertising, primarily on television, and in that case in particular there is a high likelihood that search would be used to find the site. Unfortunately, with Ontario Tourism’s geo-targeting and other limits on their search presence, it’s unlikely that those searchers would find an ad to click through to. So, in effect, you lose two ways: you’re spending money on branding to drive traffic, and then you’re not capturing that traffic because you lack an adequate search presence.

Bid on the head words if budget allows

A common mistake with many first-time search marketers is to compare click costs on different keywords against each other, rather than against other lead generation channels. Head words, the high-traffic key phrases that generally form the bulk of the potential traffic, typically cost much more than long tail phrases. The neophyte search marketer, trying to be price conscious, often drops the head words from consideration because of the expense relative to more niche phrases. But this is usually the wrong comparison. What the marketer should do is compare the cost per acquisition in search against the typical cost per acquisition of other channels. In the case of Ontario Tourism, even their most expensive potential phrases would’ve cost under two dollars per click. Even under the most optimistic of conversion scenarios, much of their print advertising would have been costing 15 times that. It was a false economy to drop the head words from budget consideration: keeping them would’ve closed the loop on the search strategy and brought in highly qualified prospects at a much lower cost per conversion than their other channels. If you’ve truly allocated as much as possible to your search budget and the head words are still not within reach, then bidding on long tail phrases is really your only option. But until you’ve made sure you’re putting your first dollars into search, don’t eliminate the head words from consideration.
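To make the right comparison concrete, here’s a sketch of head-word clicks versus a print ad, framed on cost per acquisition rather than raw click cost. The conversion rates are illustrative assumptions; the $2 click, $54,000 ad cost, 350,000 circulation and 0.5% response figures come from the columns themselves:

```python
# Compare channels on cost per acquisition (CPA), not raw click cost.
# Conversion rates below are illustrative assumptions.

def search_cpa(cpc: float, conversion_rate: float) -> float:
    """CPA for paid search: you pay per click, and a fraction convert."""
    return cpc / conversion_rate

def print_cpa(ad_cost: float, circulation: int, response_rate: float,
              conversion_rate: float) -> float:
    """CPA for a print ad: fixed cost, paid whether or not anyone responds."""
    visitors = circulation * response_rate
    return ad_cost / (visitors * conversion_rate)

head_word = search_cpa(cpc=2.00, conversion_rate=0.03)
newspaper = print_cpa(ad_cost=54_000, circulation=350_000,
                      response_rate=0.005, conversion_rate=0.03)

print(f"Head-word CPA: ${head_word:.2f}")
print(f"Newspaper CPA: ${newspaper:.2f}")
print(f"Print costs about {newspaper / head_word:.0f}x more per conversion")
```

Note that the per-conversion ratio matches the per-visitor ratio (roughly 15x), because the same hypothetical conversion rate applies to visitors from either channel.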

Remember, if it isn’t clicked you don’t pay

It’s typical to use targeting extensively to make sure that traditional marketing is aimed at the right prospects; you pick your media channels based on their target demographics. This thinking often gets transferred into search, but again it can prove to be a false economy. Ontario Tourism decided to use geo-targeting to focus on their primary markets: the province of Ontario and the neighboring US border states. They also put budget caps in place and put time parameters on their campaign. All of these moves would have made sense if budget were extremely limited. It always makes sense to buy your best clicks first. But as I mentioned above, in this case search should have been the first dollars in, and this would’ve allowed Ontario Tourism to extend their search campaign to capture all of the potential traffic. One of the beauties of search is that you can gain visibility with relatively little risk. Unlike television or newspaper advertising, in search you only pay if the ad is clicked. This eliminates much of the risk and allows you to relax your targeting to ensure that you’re capturing all the potential search traffic.

Understand the visibility dynamics of the search results page

Another source of false economy is the position you choose to occupy on the search results page. There is generally five times as much interaction with ads in the top sponsored position as with ads on the right rail. And in the case of Ontario Tourism, the official tourism site for the province of Ontario, this is a site you would expect to see in the top sponsored ads for searches like “Ontario vacations.” By reinforcing that inherent trust with eye catchers like “official site,” the Ontario Tourism Board would have been able to take advantage of quality scoring to reduce their bid price and maintain their top position. They would also want to use a close variation of the actual key phrase in the title to reinforce information scent. This relevance should, of course, carry through to the landing pages as well, an area that could have substantially increased conversions for the Ontario Tourism Board.

I embarked down this path in order to wake up Canadian advertisers and hopefully make them smarter about integrating search into their strategies. It’s in that spirit that I offer these suggestions for those who are looking to seriously tap into the potential of search.

“Doing Search” Online Counts If You’re Seen

First published June 28, 2007 in Mediapost’s Search Insider

I’m not making any friends with Ontario Tourism. Two weeks ago I said in this column they weren’t using search. I was quickly corrected by the tourist bureau’s Nick Pedota, who told me my claim was “wildly inaccurate” and that Ontario Tourism in fact has “an extensive search program.” But in the following searches I did while in Toronto, Ontario Tourism didn’t show: Ontario vacations, Ontario resorts, Toronto vacations, Ontario getaways and Ontario holidays. According to Google Trends’ keyword research tool, these are the most common searches for Ontario, by a substantial margin.

If You’re Not Seen, You’re Not Doing Search

Here’s the reality of search marketing. It’s one thing to say “we’re doing search” internally — and it’s a totally different thing to have the searcher realize that yes, you’re doing search. The smart thing to do here would be to give Pedota and Ontario Tourism the retraction they’re looking for and say I made a mistake (which I did). But this proves too good an example of the disconnect I see all the time; managing a search campaign to budgets, not objectives. I stand by my original claim: Canadian advertisers aren’t clueing into the power of search.

Nick wasn’t really in a mood to share many details of the bureau’s campaign, but he did say they were bidding on thousands of “targeted keyphrases” and using heavy geo-targeting to focus on their prime markets (Ontario and the border states). He said that’s simply “smart marketing.” I can’t disagree. It makes sense to target your best clicks first, especially if budgets are limited.

Where’s the Money Going?

But in this case, are budgets really limited? Let me share some things I was able to dig up on Ontario Tourism’s site. First of all, the tourist bureau is doing print (lots of print) and TV (lots of TV). The goal? To drive people to its Web site. Full-page 4-color ads are running multiple times in over 70 daily and weekly newspapers and 9 magazines. One 4-color full-page ad in the Toronto Star would run about $54,000 (there’s a certain amount of guessing here, as print rate cards are really a mathematical exercise in confusion and frustration). Circulation of the Toronto Star is 350,000 on an average day. An excellent conversion rate for a newspaper ad would be 0.5%. That means, ideally, 1,750 people would actually visit the Ontario Tourism website. Now, I have never in my life seen a newspaper ad convert this well, but even if it did, that would be a cost per visitor of $30.86. If the ad doesn’t work that well, the average cost climbs dramatically. And you pay whether or not the ad works.
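The arithmetic behind that best-case figure is quick to check, keeping in mind the rate-card guesswork acknowledged above:

```python
# Best-case cost per visitor for the newspaper ad, using the column's
# estimates (the ad cost itself involves some rate-card guesswork).
ad_cost = 54_000        # one full-page 4-color ad in the Toronto Star (est.)
circulation = 350_000   # average daily circulation
response_rate = 0.005   # 0.5% -- an unusually generous rate for print

visitors = circulation * response_rate   # 1,750 site visits, at best
cost_per_visitor = ad_cost / visitors    # about $30.86 each

print(f"{visitors:.0f} visitors at ${cost_per_visitor:.2f} each")
# And unlike search, the $54,000 is spent whether anyone visits or not.
```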

What People Actually Use

Now, courtesy of Yahoo Canada, let’s look at what actual travelers cite as the most important influencers in making travel plans. Search and Web sites are tied for number one and two, each used by 51% of respondents in a recent survey. Newspapers and print? Only used by 7%. Yet only 2.1% of Canadian ad budgets gets spent on search, while 42% goes to newspapers and magazines. I couldn’t get any specific percentages for Ontario Tourism, but one only has to look at their campaign page to see that search is very likely getting only a fraction of what’s going to newspapers and magazines. And don’t even get me started on the TV buys.

The Search Story

So, where is Ontario Tourism in the search results? As Pedota shared, they’re only geo-targeting the prime markets, and then only for a 3-month period (April through June). Only 1 of the 7 highest traffic key phrases I found (using an Ontario IP) returned an ad or an organic listing for Ontario Travel (the site also hasn’t been organically optimized). More specific phrases, like Ontario Summer Vacations or Ontario Wine Getaways, did return more ads.

But by bidding on specific phrases (even thousands of “long tail” ones) and not on the more popular ones, Ontario Tourism is catching less than 10% of all the people using search to plan a vacation in Ontario. And unless you’re in the top-sponsored ad locations (which few of the ads I saw were), you’re only being seen by a small percentage of those searchers (usually 10% to 30%) on the results pages you do appear on. So, as far as 97 out of 100 people using search to find the official site for Ontario Tourism are concerned, the tourism bureau is not “doing search.” By the way, you could maintain top spot in Google and Yahoo for all the top traffic phrases for less than $2 per visitor. Remember, that ad in the Toronto Star cost, at a minimum, 15 times that!
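The “97 out of 100” figure comes from multiplying the two estimates above, taking the generous end of each range:

```python
# Estimated effective reach: keyword coverage times on-page visibility.
# Both figures are the column's estimates, taken at their generous ends.
keyword_coverage = 0.10   # long-tail-only bidding catches <10% of searchers
ad_visibility = 0.30      # ads outside the top spots are seen by 10-30%

effective_reach = keyword_coverage * ad_visibility   # 3% at the very best
missed = 1 - effective_reach                         # the other 97%

print(f"Seen by at most {effective_reach:.0%} of searchers; "
      f"{missed:.0%} never see the campaign at all")
```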

Again, let’s recap. What’s the purpose of the campaign? To drive people to the Web site. And not just any one — THE official Web site of Ontario Tourism, the site most people are looking for on these key phrases.

And You’re Spending Your Money Where?

Is it really “smarter” to ignore 97% of the people who are actively searching online to find you, so you can spend more money running ads in newspapers for the 99.5% of people who have no interest in your site at all? And the real irony here is that if people don’t click on a search ad, you don’t pay! Take a fraction of that budget from the Toronto Star and blow out the geo-targeting and time parameters and go for the high-traffic phrases. After all, there might be people in Saskatchewan or Nova Scotia that are planning a trip to Ontario. Or, perhaps they’re planning their trip in September, or February. If not, it’s not costing you anything. Try getting the Toronto Star to offer the same pricing model!

Is this really smarter marketing? You decide. The readership of this column includes some of the smartest marketers on the planet. Blog about this and give me your opinion. Maybe I’m missing something, but I’ve decided I shouldn’t apologize for trying to get advertisers to spend money more effectively. After all, in this case, it’s really our money they’re spending. At least, it would be if I were an Ontario taxpayer. Something tells me after this column, it might be a good thing I live 2000 miles away. As I said, I’m not making any friends in Ontario.

The Cranky Canadian is Back from Toronto

Apparently I stirred the pot a little bit when I was in Toronto. Yahoo invited me to give a breakfast talk to a handful of Canadian advertisers, and I managed to hijack the session for a 10- to 15-minute rant about how Canadians don’t get search. I quickly followed this up with a column in the Search Insider to the same effect. I did make one mistake: I said that the Ontario government doesn’t do search for their official tourism information site, and I was quickly corrected. There is in fact a search campaign going on; it just wasn’t registering for any of the searches I did. I think I’ll follow up on this a little more for next week’s Search Insider column.

I apologize to show chair Andrew Goodman for breaking the cardinal Canadian rule of politeness. Andrew is shipping me a case of generic cola with a Canadian politeness serum cleverly mixed in, to try to return me to the accepted norms of Canadian behavior. Another blogger who picked up on my rant indicated that, as a Canadian living in the US, I would be well advised to escape back south of the border. I don’t know if this is good news for Canadian advertisers or not, but I actually am a resident Canadian. I call Kelowna, B.C. home.

You know, the funny thing is, other than poor Nick at the Ontario Tourism Board, who I mistakenly said had his head up his ass, most everyone else has agreed with me. Perhaps being a cranky Canadian pays off. To my knowledge nobody is really filling this role currently, although Canadians have a long tradition of being cranky. Notable cranky Canadians of the past include Gordon Sinclair, Pierre Berton and Jack Webster.

If it makes you feel any better, Canadian advertisers weren’t the only ones I turned my sights on in the past week. I also took a few shots at Yahoo during an interview on Bloomberg TV. Maybe it’s the fact that I’ve been traveling for the past 2 1/2 months; I think the last time I actually got seven hours of uninterrupted sleep was back in March. This weekend I think I’ll have a stiff shot of Canadian whiskey (we call it rye up here), get a good night’s sleep, and maybe come back next week kinder, gentler and more polite. Or not.