Shari Thurow Talking Smack about Eye Tracking

You know, if I didn’t know better I’d say that Shari Thurow had issues with me and eye tracking. I ran across a column a couple of weeks ago where she was talking about the niches that SEOs are carving out for themselves, and she mentioned eye tracking specifically. In fact, she devoted a whole section to it. Now, it’s pretty hard not to take it personally when Enquiro is the only search marketing company I know that does extensive eye tracking. We’re the only ones I’m aware of that have eye tracking equipment in-house. So when Shari singles out eye tracking and warns about using the results in isolation…

That brings me to my favorite group of SEO specialists: search usability professionals. As much as I read and admire their research, they, too, often don’t focus on the big picture.

…I’m not sure who else she might be talking about.

I’ve been meaning to post on this for a while but I just didn’t get around to it. I’m on the road today and feeling a little cranky, so what the heck. It’s time to respond in kind. First, here’s Shari’s take on eye tracking and SEO.

Eye-tracking data is always fascinating to observe on a wide variety of Web pages, including SERPs. As a Web developer, I love eye-tracking data to let me know how well I’m drawing visitors’ attention to the appropriate calls to action for each page type.

Nonetheless, eye-tracking data can be deceiving. Most search marketers understand the SERP’s prime viewing area, which is in the shape of an “F.” Organic or natural search results are viewed far more often than search engine ads are, and (as expected) top, above-the-fold results are viewed more often than the lower, below-the-fold results. Viewing a top listing in a SERP isn’t the same as clicking that link and taking the Web site owner’s desired call to action.

Remember, usability testing isn’t the same as focus groups and eye tracking. Focus groups measure people’s opinions about a product or service. Eye-tracking data provide information about where people focus their visual attention. Usability testing is task-oriented. It measures whether participants complete a desired task. If the desired task isn’t completed, the tests often reveal the many roadblocks to task completion.

Eye-tracking tests used in conjunction with usability tests and Web analytics analysis can reveal a plethora of accurate information about search behavior. But eye-tracking tests used in isolation yield limited information, just as Web analytics and Web positioning data yield limited (and often erroneous) information.

Okay Shari, you didn’t mention me or Enquiro by name but again, who else would you be talking about?

Actually, Shari and I agree more than we disagree here. I agree that no single data source or research or testing approach provides all the answers, including eye tracking. However, eye tracking adds an extraordinarily rich layer of data to common usability testing. When Shari says eye tracking is not the same as usability testing, she’s only half right. Eye tracking combines very well with usability testing, but in many cases running them as separate studies is overkill. Usability testing is task oriented, and there’s no reason why eye tracking studies can’t be task oriented as well (most of ours are). The eye tracking equipment we use is very unobtrusive. It’s virtually like interacting with any computer in a usability lab. In usability testing, you put someone in front of a computer with a task and ask them to complete it. Typically you record the entire interaction with software such as TechSmith’s Morae. Afterwards, you can replay the session and watch where the cursor goes. Eye tracking can capture all that, plus capture where the eyes went. It’s like taking a two-dimensional test and suddenly making it three-dimensional. Everything you do in usability testing can also be done with eye tracking.
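To make that extra dimension concrete, here’s a minimal sketch of the idea. It’s written in Python with invented data structures (this is not the output format of Morae or any real eye tracker): a gaze stream is simply layered onto an ordinary session recording by pairing each cursor sample with the most recent eye fixation.

```python
from bisect import bisect_right

# Hypothetical session data: (timestamp_ms, x, y) tuples.
# cursor_log is what ordinary screen recording gives you;
# gaze_log is the extra stream the eye tracker adds.
cursor_log = [(0, 100, 200), (500, 180, 220), (1000, 420, 310)]
gaze_log = [(0, 350, 90), (250, 360, 95), (600, 420, 305), (900, 425, 310)]

def gaze_at(timestamp_ms, gaze_samples):
    """Return the (x, y) of the latest gaze fixation at or before a time."""
    times = [t for t, _, _ in gaze_samples]
    i = bisect_right(times, timestamp_ms) - 1
    return gaze_samples[i][1:] if i >= 0 else None

# Replaying the session: the usual cursor track, plus the eye dimension.
for t, cx, cy in cursor_log:
    print(f"t={t}ms  cursor=({cx},{cy})  gaze={gaze_at(t, gaze_log)}")
```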

The fact is, the understanding we currently have of interaction with the search results would be impossible without eye tracking. I’d like to think that much of that understanding comes from the extensive eye tracking testing we’ve done on the search results page. The facts that Shari says are common knowledge among search marketers come, in large part, from our work with eye tracking. And we’re not the only ones. Cornell and Microsoft have done their own eye tracking studies, as has Jakob Nielsen, and the findings have been remarkably similar. I’ve actually talked to the groups responsible for these other eye tracking tests, and we’ve all learned from each other.

When Enquiro produced our studies, we took a deep dive into the data we collected. I think we did an excellent job of not just presenting the top-level findings; we really tried to create an understanding of what interaction with the search results page looks like. Over the course of the last two years I’ve talked to Google, Microsoft and Yahoo. I’ve shared the findings of our research and learned a little bit more about the findings of their own internal research. I think, on the whole, we know a lot more about how people interact with search than we did two years ago, thanks in large part to eye tracking technology. The big picture Shari keeps alluding to has broadened and been colored in much more extensively thanks to those studies. And Enquiro has tried to share that information as much as possible. I don’t know of anyone else in the search marketing world who’s done more to help marketers understand how people interact with search. When we released our first study, Shari wrote a column that basically said, “Duh, who didn’t know this before?” Well, based on my discussions with hundreds, actually thousands, of people: almost everyone, save for a few usability people at each of the main engines.

There are some dangers with eye tracking. Perhaps the biggest is that heat maps are so visually compelling that people tend not to go any further. The Golden Triangle image has been displayed hundreds, if not thousands, of times since we first released it. It’s one aggregate snapshot of search activity. And perhaps this is what Shari’s referring to. If so, I agree with her completely. This one snapshot can be deceiving. You need to do a really deep dive into the data to understand all the variations that can take place. But it’s not the methodology of eye tracking that’s at fault here. It’s people’s unwillingness to roll up their sleeves and wade through the amount of data that comes with eye tracking, preferring instead to stop at those colorful heat maps. Conclusions based on limited data can be dangerous, no matter the methodology behind them. I actually said the same about an eye tracking study Microsoft did that had a few people drawing overly simplified conclusions. The same is true for usability testing, focus groups, quantitative analysis, you name it. I really don’t believe Enquiro is guilty of this. That’s why we released reports a couple hundred pages in length, trying to do justice to the data we collected.

Look, eye tracking is a tool, a very powerful one. And I don’t think there’s any other tool I’ve run across that can provide more insight into the search experience, when it’s used with a well-designed study. Personally, if you want to learn more about how people interact with engines, I don’t think there’s any better place to start than our reports. And it’s not just me saying so. I’ve heard as much from hundreds of people who have bought them, including representatives at every major search engine (they all have corporate licenses), as well as a few companies you might have heard of: IBM, HP and Xerox, to name a few. I know the results pages you see at each of the major engines look the way they do in part because of our studies.

Shari says we don’t focus on the big picture. Shari, you should know that you can’t see the big picture until you fill in the individual pieces of the puzzle. That’s what we’ve been trying to do. I only wish more people out there followed our example.

Jimbo Wales and People-Powered Search: A Long Shot

First published March 15, 2007 in MediaPost’s Search Insider

Jimmy Wales, the co-founder of Wikipedia, is placing a fairly large bet that people can trump technology in the search engine game. According to a recent report in Yahoo, he’s putting $4 million (of other people’s money) plus an undisclosed “large amount” from Amazon on the line, betting that he can steal 5% of the total search market away from Google with his new project, Wikia.

Wales has called both Google and Yahoo the “black boxes” of the internet, criticizing them for the secrecy maintained around their ranking algorithms, but details on exactly how Wikia will work have been equally scarce. All we’ve heard so far is that an online community with “a distinct and clear purpose — a moral purpose — that unites people and brings them together to do something useful” will work to make Web search a better experience for us all. But Wikia is likely to follow a path similar to Wikipedia’s. The online community will act as an army of human editors, ensuring the quality of the results by collectively agreeing on them in some fashion. The theory here is that there is no better filter for results aimed at humans than those same humans.

Human “Signal Noise”

But the minute you put people into the equation, you introduce “signal noise”: in engineering parlance, you add friction between the end user and the desired content. Automated algorithms are relatively friction-free. Results are ranked with mathematical objectivity, based on universally applicable principles. Queries flow through this channel to connect with the content as determined by the algorithms.

People are smarter and more intuitive than the smartest algorithm, but they’re also political. And the reality is, the very segment that Wikia (and Wikipedia) depends on most is the one most prone to politics.

Anytime you depend on people to do things out of the “goodness of their hearts,” you attract a certain kind of person. They’re community-minded, true, but it’s very much their definition of community. They can also be elitist, obstinate, territorial and dismissive of those “outside the circle.” These people tend to show up in the same places: condo strata councils, nonprofit organizations, PTAs, church groups, and, online, in forums and on wikis. They have the time to contribute, probably because no one can stand them, so they don’t have an active social life outside their chosen cause.

I’m not saying everyone who contributes falls into this category, but come on, admit it: everyone reading this now has someone firmly in mind who fits the above description. They get possessive about their online community, which is both a good and a bad thing. With possessiveness comes politics, and signal noise.

Good Intentions, Bad Results

If you need more evidence, look at what is currently happening in the best-known communities that depend on online “Good Samaritans.” On Digg, the Bury Brigade has been publicly acknowledged by Digg founder Kevin Rose: Any story that doesn’t meet their criteria for what is interesting gets quickly buried, never to rise to the surface again. That’s censorship, and it’s just some of the signal noise you can expect when you introduce people to the equation.

Wikipedia has come under frequent criticism for the same issue, a handful of community elite (with a decidedly left-wing bent) dictating what should and shouldn’t be included as entries.

A Growth Bottleneck

But perhaps the biggest challenge for Wikia is scalability. If you put your faith in people as your competitive advantage, you have to be prepared to accept the restrictions that come with that. If Wales is counting on people to help compile the index and rank it, that introduces a potentially significant bottleneck.

Search engines are different from encyclopedias. Encyclopedias are much less dynamic, even when you have an encyclopedia as fluid and ever-growing as Wikipedia. Search engines have to be much more sensitive to new content. A lower-traffic entry on Wikipedia could probably go untouched for months at a time without significantly impacting the value of that entry. But users of a search engine expect even long-tail queries to bring back fresh and timely results. Given this, Wikia would likely need a two-stage approach to including new content: an automated spider and simple index, to be later augmented and edited by humans. This would create a significant divide in the quality of the results between the edited and unedited entries, especially in newer, less popular segments of the index. And, as Wales himself admits, if the algorithms that power the automated portion are open source, the door is wide open to spammers.
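To make that divide concrete, here’s a toy sketch of such a two-stage index. Since Wikia’s actual design hasn’t been disclosed, every name, field and score below is invented for illustration: the spider assigns each page an algorithmic score, and human editors later overlay judgments on whatever subset they get to.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IndexEntry:
    url: str
    algo_score: float                     # stage 1: automated spider/ranker
    editor_score: Optional[float] = None  # stage 2: human review, often absent

def rank(entries):
    # Human-edited entries outrank unedited ones; within each group, fall
    # back on the best available score. Fresh long-tail pages sit in the
    # second tier until an editor gets to them: that's the quality divide.
    return sorted(
        entries,
        key=lambda e: (
            e.editor_score is not None,
            e.editor_score if e.editor_score is not None else e.algo_score,
        ),
        reverse=True,
    )

index = [
    IndexEntry("example.org/popular-topic", algo_score=0.90, editor_score=0.95),
    IndexEntry("example.org/fresh-long-tail-page", algo_score=0.80),
]
for entry in rank(index):
    print(entry.url, entry.editor_score or entry.algo_score)
```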

What’s In It For Me?

Finally, we have to look at why people contribute to Wikipedia, and ask ourselves whether that motivation would translate to a search engine. When you contribute to Wikipedia, you’ve staked your claim in online intellectual territory. You’ve left a mark, speaking to your expertise in a particular area, on a place on the Web where you can point and say, “See, that’s me. I did that!” It may not have your name on it, but it’s visible.

In a search engine, your contribution would be lost in a background process that would leave virtually no trace that you ever trod there. There are no bragging rights. And that’s essential to appeal to the segment of the online community that Wikia needs to survive. If we’re going to take even a few seconds out of our busy days to tag, vote, nominate or whatever else Wales needs us to do, there’d better be something in it for us, or it just won’t fly.

I applaud Jimmy Wales’s ideal of open access to technology and unlocking the “black box” for the masses, but I just can’t see how it will work for search. Much as I love humans, having been one on occasion, I’m not sure they’re the competitive advantage a search engine needs.

User-centricity is More than Just a Word

Ever since Time Magazine made you and me the person of the year, user experience has been the two words on the tip of everyone’s tongue. We’re all saying that the user is king and that we’re building everything around them. But I fear that user-centricity is quickly becoming one of those corporate clichés that’s easy to say, but much, much harder to do. All too often I see internal fighting in a lot of companies between those who truly get user-centricity and have become the internal user champions, and those who continue to push the corporate agenda at the expense of the user experience. The tough part of user-centricity is seeing things through the user’s eyes. We can do user testing, but if we truly put the user first, it requires tremendous courage and fortitude to make the user the primary stakeholder. All too often, I see user considerations being just one of several factors being balanced in the overall design. And often, it takes a backseat to other considerations, such as monetization.

This is the trap that Yahoo currently finds themselves in. They talk about user experience all the time. But the fact is, over the last two years it’s really been the advertiser who’s owned their search results page. I’ve recently seen signs of the balance tipping more towards the user’s favor with the rollout of Panama and a more judicious presentation of top sponsored ads. But I’m still not sure the user is winning the battle at Yahoo!

It’s not easy to step inside your user’s head when it comes to designing interfaces. It’s very tough to toggle the user perspective on and off when you’re going through a design cycle. The feedback we get from usability testing tends to be too far removed from the actual implementation of the design. By that time, the meat of the findings has been watered down and diluted to the point where the user’s voice is barely heard. That’s why I like personas as a design vehicle. A well-formulated persona keeps you on track. It keeps you in the mindset of the user. It gives you a mental framework you can step into quickly to readjust your perspective to that of the user, not the designer.

If you’re truly going to be user-centric, be prepared to take a lot of flak from a lot of people. This is not a promise to be made lightly. You have to commit to it and not let anything dissuade you from delivering the best possible end-user experience, defined in the user’s own terms. This can’t be a corporate feel-good thing. It has to be a corporate commitment that requires balls the size of Texas. And if you’re going to make that commitment, you’d better be damn sure the entire company is also willing to make it. The user experience group can’t be a lone bastion for the user, fighting a huge sea of corporate momentum going in the opposite direction. This isn’t about balancing the user in the grand scheme of things; it’s about committing wholeheartedly to them and getting everyone else in the organization to make the same commitment. If you can do so, I think the potential wins are huge. There are a lot of people talking about user-centricity, but not a lot of people delivering on it consistently and wholeheartedly.

A Lesson Learned from the Pasternack SEO Contest

First published March 8, 2007 in MediaPost’s Search Insider

Why is search engine marketing defined by diametric opposition?  It seems like for every question there are two extreme answers.  And these polar opposite viewpoints are held with a tremendous amount of passion.  The smallest questioning of our position can unleash a firestorm of retribution.  Blogs kick themselves into high gear as aspersions are cast without a second thought.  We rise passionately to defend our position, questioning the pedigrees and mental capacities of our opponents. How could someone be so incredibly dense as to not see it our way?

Tempest in a Teapot

In the past few months, little has raised such a passion of opposing viewpoints as the questioning of the value of organic optimization.  The verbal feud that took place in the blogosphere is well-known to most of us within the industry.  If you’ve been one of the few that has remained blissfully ignorant of the David Pasternack (co-founder of Did-It) “Is SEO rocket science” debate, count yourself fortunate.  It’s not so much the debate I want to focus on, but the fallout of that debate because I think there’s a valuable lesson that we can all learn from it.  As the organic community rose to defend its collective value, Threadwatch.org had the idea of launching an SEO contest.  The premise of the contest was simple.  Whoever ranked highest for the phrase David Pasternack by noon on March 1 was the winner.  A Who’s Who of SEOs rose to take the challenge, using every trick in the book to try to propel their page to No. 1 in the Google data centers.

One Set of Results, Two Interpretations

Predictably, the tactics ranged from the white to the dark gray.  The winner, when all was said and done, was the page that had been ranking previously for a chef in New York also called, coincidentally, David Pasternack.  There was a post on Dave Pasternack’s Did-It corporate blog that said, with a decidedly sarcastic tongue-in-cheek approach, “See? We told you so! SEO isn’t rocket science and after you guys threw the best you could at the algorithms, the page that was there before the contest was still the one ranking number one on Google.”  That’s one viewpoint.

Ironically, when you look at that same page of search results, the opposing side also claims victory.  Their contention? They dramatically changed the appearance of a search results page, which shows that SEO does have tremendous value and that it’s not a “set and forget,” one-time endeavor.  Search results pages are dynamic environments and if you hope to do well on them, you have to be prepared to take a long-term view. That’s the other viewpoint.

See?  The same set of search results — but two dramatically different opinions of what happened in the contest.  And both sides swear they’re right.  In my opinion, they’re both right — or, at least, a little bit of each argument rings true.  The fact is, the page for David Pasternack (the chef, not the co-founder of Did-It) has been around a long time, and this Pasternack is a well-known guy.  Google is doing what it should be doing: putting the site first that most people would expect to find at the top of the listings.

The SEO side is also correct.  They did dramatically change the look of the page.  Other than the top-ranking page, the rest of the results looked decidedly different than they did a few weeks before the contest.  So rather than quibble about who’s right and who’s wrong in this debate, let’s look at the takeaways and see what we can learn.

The (Web) Guerilla Approach

One of the most interesting entries was a late one by Greg Boser.  Greg’s approach leveraged the existing notoriety of David Pasternack, the chef.  It was not so much based on technical tricks (although they did play a role) as on a very clever strategy aligned with the inherent nature of people who frequent the Web.  Greg didn’t win the contest, but he came within a whisker’s width of doing so.  The fact was, Greg reluctantly entered the contest late (more irony: both Greg and Dave Pasternack called SEO contests stupid, but both entered) and he wanted to time his entry so that it climbed the search engine ranks and claimed the top spot within 12 hours of the closing of the contest.  He wanted to show that not only could you control your organic visibility, but you could do it with a fair amount of predictability.  His timing was a little bit off, due more than anything to variations in the various Google data centers, but he definitely showed that with the right approach, you can influence search results.

To me, the interesting thing in this was not the technique Greg used but the approach he decided to follow.  He played the innocent bystander card.  He appealed to human nature and understood how people would react.  The genius of Greg’s approach was not in how he used redirects or turned on the “link juice.”  Those were all techniques that were part of the execution and yes, they had to be done right, but they only mattered because they were aligned to a strategy that was very clever.  He outthought his competition rather than hammering them to death with a bag of black-hat tricks.  He knew that if he drew attention to the real Dave Pasternack, the one who was having his rightful visibility usurped by a temporary blip on the online “buzz” horizon, he would have a better chance of gaining support, because he was appealing to an inherent human value that we all generally share.

We like to protect people, especially if they’ve been wronged in some way.  Our best instincts rise to the surface and we want to rush to the aid of the victim.  In this particular instance, the way to do that was by sending a little “link love.”  Greg threw out some irresistible link bait.  And what was particularly impressive about his entry was the way he almost timed it down to the hour, letting the momentum of his entry roll right up to the final moment and coming within one data center of actually winning the contest.  Did he usurp the original Dave Pasternack page?  No, but he really shouldn’t have.  That page had already earned its link love and should have been right where it was, on top of the listings.

The Value of People Smarts

Recently, I wrote a column about the future of SEO and SEM agencies.  And I said that the time may soon be coming when the technical wizardry that SEOs tend to rely on may have limited value.  One thing, however, that will never have limited value is the ability to understand how people think and work — and then to be able to translate that into an online reality.

That’s what Greg Boser showed in this contest.  He understood what makes people tick and then anticipated how that might play out online.  That type of approach will always have value in the online world.  Over time it may translate itself from gaining results on a search engine to building buzz on Digg, creating more presence on blogs, or any one of the other 1,001 places where we would like to gain visibility online.  But the ability to take an understanding of human nature and then to be able to translate that into anticipated online behavior is an incredibly valuable commodity.

Greg, there are many things that we might not agree on, but in this particular contest, you showed that SEO may not be rocket science, but it can certainly be a social science.


Post Mortem on Ten and a Half Months of Posts

Well, my interview with Matt Cutts certainly seems to be causing ripples in the SEO world. At this point, it’s well on its way to becoming the most-read blog post I’ve ever made. The fact is, I have to thank Matt for my two highest-traffic days ever. The first came when I launched my blog and Matt “Matt-dotted” it. That had been the record up until now, when my interview with Matt drove more daily visits and page views on Monday.

In a more analytical vein, it was interesting to see how the traffic ramped up. On Friday, when the interview was posted, the majority of traffic was coming from the predictable sources. There were links from Search Engine Land and Search Engine Watch, and WebProNews also picked up the post and ran it in a couple of stories. This drove the majority of the traffic over the weekend. But as time went on (through Monday and today), the long tail kicked in and links to the post showed up in a number of blogs and forums, both here and overseas. While my referral base broadened out dramatically, the traffic kept rolling. Obviously, the long tail phenomenon occurs everywhere. In the last 24 hours, it’s been these widely dispersed links that have driven the majority of the traffic.

It’s also interesting to note the contrast in the pickup between Matt’s interview and the previous interview with Marissa Mayer. While Marissa’s interview actually contained more hard data on how personalization works on Google, Matt speculated on what personalization might do for the future of SEO. That was obviously a hot button and generated a number of pickups. Something about putting the name Matt Cutts and the letters SEO in the same title almost guarantees that you’re going to capture attention in this industry.

I always find it fascinating to see which blog posts pick up steam and which ones seem to linger forever with hardly anyone reading them. In many cases, the posts I’m most proud of are the ones that seem to limp along, capturing a handful of page views every so often. Anything that touches on controversy seems to strike a chord.

Looking back at my blog records, my most read posts to date are:

Usability and Asinine Comments from the Bay

Controversy stirred up at a Jakob Nielsen Usability Summit in San Francisco where I discussed brand experience online and the use of graphics on websites

Relevancy Rules in Top Sponsored

A sneak preview of our eye tracking study that showed how important relevance in those top sponsored ads was for attracting attention and clicks

The Matt Cutts Interview

Matt talks about personalization and its impact on SEO

The Personalized Results are Coming, The Personalized Results are Coming

My follow-up post when Google made its announcement in early February that pushed more people toward signing up for personalized results

The Marissa Mayer Interview

Chatting with Marissa about personalization and its impact on user experience

For interest’s sake, I also looked back at my main referrer sources. Google was the biggest referrer source, driving about 14% of my traffic, with Matt’s blog second (a testament to its popularity, considering he’s only linked to my blog a couple of times) at 12.5%. After that it’s MediaPost, Search Engine Land and Search Engine Guide.

For those of us always looking to build buzz on our blogs, it’s helpful to take a look back to see what our hits and misses were. For myself, I want to keep a balance between getting the posts out that I think are important, whether or not they attract a ton of links, and obviously giving my readers what they want. It looks like more sneak previews of our internal research and more interviews with the people that are shaping the search experience at Yahoo, Google and Microsoft are where I have to look in the future.

Debating with Myself about whether or not Google can Change Advertising

Ari Rosenberg, a media buying consultant, had an interesting column last week about Google’s plans to enter the cable TV market, just as they’ve made inroads in the radio and print markets. Google’s approach in all these markets is consistent: they apply technology to open up the marketplace, removing the middleman and basically automating the purchase of media. Ari argues that while Google may understand technology, they have a lot to learn about how advertising works. This is a huge, complex question, and there are a lot of different shades of gray to the argument. It’s not a simple yes or no. But there are some very interesting aspects, both pro and con, that he touches on in his column. So I’d like to present two differing viewpoints about why Google may, or may not, actually change how advertising is done.

The Pro Side: Making the Marketplace More Efficient

There is no doubt that there’s a lot of room for efficiency in most media buying markets. There is layer upon layer of friction in the marketplace, caused by entrenched consultants, reps, buyers and other “filler” between the ultimate buyer and seller. This is where Google can excel. Their theory is that they can remove the friction by using their technology to enable marketplaces where buyers and sellers can connect directly. More than this, they introduce the notion of relevancy. Ultimately, Google wants to achieve their end marketing goal of always showing the right ad to the right person at the right time. They would take the idea of keyword relevancy, pioneered so effectively on the Web, and apply it to other channels. Of course this depends on a more interactive version of print, radio, or cable than we currently see. But as all media converge, Google’s initial inroads into each of these channels will secure them a foothold at the time when relevancy starts to matter.

In this regard, Google is definitely dealing from two areas of strength. They understand technology and have been successful in developing clean, efficient interfaces to help streamline the flow of commerce. There is definitely a change that is needed in the media buying marketplace and Google has the engineering chops to clean it up dramatically. Also, they have a clear and deep understanding of consumer intent, expressed in the consumer’s own terms. And as it begins to matter more in advertising, Google is well-placed to make those consumer initiated connections happen.

The Con Side: Understanding Marketing

In the last few years, I’ve had enough interaction with Google to understand that for them, marketing is considered a necessary evil. There are a lot of “soft,” undefinable aspects to marketing that can’t be distilled into a simple, clean algorithm. This is thinking that is largely foreign to the Google frame of mind. Google loves mathematical simplicity and definition. Two plus two should always equal four. The question shouldn’t be up for debate. But marketing is not that simple, not that clean, not that black-and-white. There’s a lot of gray in marketing.

Ari makes the point that Google doesn’t understand advertising. This is largely right. Google is an engineering company. It exists to apply technology to solve problems. If you look at the makeup of the Google organization, their own marketing department is a small, under-resourced afterthought. Because they didn’t need to use advertising, the philosophy is that it really isn’t necessary for anyone. As Google steps into advertising, think of them as Mr. Spock, reluctantly doing a stint as a Madison Avenue ad exec (now that’s an idea for a sitcom).

The Wild Card: the Consumer

Ultimately, it’s not Google or Madison Avenue that will have the last word in this debate. It’s you and me and 6 billion (and counting) other consumers. There is an old world and a new world in marketing, and the former is rapidly giving way to the latter. The wild card in all this is the changing game of marketing. Sure, Google may not understand the “warm fuzzies” of marketing, those undefinable aspects of brand engagement, but what Google does understand is connecting users with what they’re looking for. And do we really need advertising that hits us at a visceral and emotional level when it’s exactly the advertising we’re looking for anyway? It doesn’t have to hammer us over the head with its message, because we’re openly receptive to that message; we’re seeking it. As Google moves into print, cable, and radio, it may not be their lack of understanding of the current reality of marketing that holds them back from making it successful. It may be the fact that those channels just don’t lend themselves very well to this new idea of consumer empowerment. Consumer empowerment is expressed much more easily over the interactive platform of the Internet. The Internet is the next evolution of marketing. The question will be whether Google can make significant inroads into these more traditional channels before those channels become integrated within an interactive, Web-driven platform. Or will there be just too much friction to overcome?

Matt Cutts: Personalization and the Future of SEO

I had the chance to interview Matt Cutts this week about personalization and its impact on the SEO industry. Excerpts from the interview and some additional commentary are in my Just Behave column on Search Engine Land today. As promised, here is the full transcript of the interview:

Gord: We’ve been awhile setting this up, and actually, this came from a discussion we had some time ago about geo-targeting of results in Canada, and we’re going to get to that a bit later. With this recent move by Google to move towards more personalization of the search results page, there’s some negative feedback and, to me, it seems to be coming from the SEO community. What’s your take on that?

Matt: I think that it’s natural that some people would be worried about change, but some of the best SEOs are the SEOs that are able to adapt, that are able to look down the road 4 or 5 years and say, “What are the big trends going to be?” and adjust for those trends in advance, so that when a search engine does make a change which you think is inevitable or will eventually happen, they’ll be in a good position. Personalization is one of those things where if you look down the road a few years, having a search engine that is willing to give you better results because it can know a little bit more about what your interests are, that’s a clear win for users, and so it’s something that SEOs can probably predict that they’ll need to prepare for. At the same time, any time there’s a change, I understand that people need some time to adjust to that and need some time to think, “How is this going to affect me? How is this going to affect the industry? And what can I do to benefit from it?”

Gord: It seems to me, having a background in SEO, that the single biggest thing with personalization is the lack of a “test bed”, the lack of something to refer to when you’re doing your reverse engineering. You can’t look at a page of search results any more and say “that’s going to be the same page of search results that everyone’s seeing“. Given that, more and more, we’re going to be seeing less of a universal set of search results, is this the nail in the coffin for shady SEO tactics?

Matt: I wouldn’t say that it’s necessarily the nail in the coffin, but it’s clearly a call to action, where there’s a fork in the road and people can think hard about whether they’re optimizing for users or whether they’re optimizing primarily for search engines. And the sort of people who have been doing “new” SEO, or whatever you want to call it, that’s social media optimization, link bait, things that are interesting to people and attract word of mouth and buzz, those sorts of sites naturally attract visitors, attract repeat visitors, attract back links, attract lots of discussion, those sorts of sites are going to benefit as the world goes forward. At the same time, if you do choose to go to the other fork, towards the black hat side of things, you know you’re going to be working harder and the return is going to be a little less. And so over time, I think, the balance of what to work on does shift toward working for the user, taking these white hat techniques and looking for the sites and changes you can implement that will be of the most benefit to your users.

Gord: It would seem to be that there’s one sector of the industry that’s going to be hit harder by this, and I think it was Greg Boser or Todd Friesen who said, “You don’t take a knife to a gun fight.” So when you’re looking at the competitive categories, like the affiliates, where you don’t have that same site equity, you don’t have that same presence on the web to work with, that’s where it’s going to get hit, right?

Matt: I think one area that will change a lot, for example, is local stuff. Already, you don’t do a search for football and get the same results in the U.K. as you do in the U.S. So there are already a lot of things that return different search results based on country, and expect that trend to continue. It is, however, also the case that in highly commercial or highly spammed areas, if you are able to return more relevant, more personalized results, it gets a little harder to optimize, because the obstacles are such that you’re trying to show up on a lot of different searches rather than just one set of search engine result pages, so it does tilt the balance a little bit, yes.

Gord: I had a question about localization of search results, and I think being from Canada we’re perhaps a little bit more aware of it. How aware are American SEOs that this is the case, that if they’re targeting markets outside the U.S., those markets may not be seeing the same results they’re seeing in the U.S.?

Matt: I think that many SEO’s are relatively aware, but I’ve certainly talked to a few people who didn’t realize that if you do a search from the U.K., or from Canada, or from India, or from almost any country, you can get different results, instead of just the standard American results. And it’s definitely something that’s a huge benefit. If you’re in the United Kingdom and you type the query newspapers, you don’t want to get, necessarily, the L.A. Times or a local paper in Seattle, the Post-Intelligencer. Something like that. So I think it’s definitely started down that trend, and, over time, personalization will help a lot of people realize that it’s not just a generic set of results, or a vanilla set of results. You have to be thinking about how you’re going to show up in all of these markets, and personalization and localization complement each other in that regard.

Gord: Now one difference between localization and personalization is that personalization has the option of a toggle, you can toggle it on and off. Localization doesn’t have that same toggle, so as a Canadian, sometimes I may not want my results localized. Where does that put the user?

Matt: It’s interesting, because you have to gauge…and you talked to Marissa a couple times already, and from that you probably got a feel for the difficulty in making those decisions about just how much functionality to expose, in terms of toggles and advanced user preferences and stuff like that. So what we try to do is tackle the most common case and make that very simple. And a lot of the times, the functionality is such that you don’t even necessarily want someone that’s coming in from the U.K. to be able to search as if they’re coming in from Africa, because it just makes things a lot more complicated. So, over time, I’d say we’re probably open to lots of different ways of allowing people to search. For example, you can select different countries for the advertisements. There’s a GL parameter, I believe, where you can actually say, “now, show the ads as if I were searching from Canada. Okay, now I’m going to switch to Mexico”. Stuff like that. And that’s been very helpful, because if you’re giving Google money to buy ads, you want to be able to check and see what those ads would look like in different regions. For search we haven’t historically made that as easy. It’s something that we’d probably be open to, but again, it’s one of those things where probably SEOs are a lot more interested, but your regular user isn’t quite as interested.

Gord: And that gets to the ongoing problem. SEOs have one perspective, users have another, and arguably, yes, localization is good for the user. But for an SEO that deals with a lot of Canadian companies where the U.S. is their primary market, they’re looking at hitting that U.S. market. I guess this restricts them to making it look like their sites actually reside in the U.S. to get around it. So again, we’re trying to poke holes in the functionality, rather than live with it.

Matt: Well, one thing that should be possible is to indicate some sort of preference, or some sort of origin of location where you can indicate where you are. Historically Google has been ahead of the other search engines at the time by not just using the top level domain, so .ca, but also the I.P. address. So you can have .com hosted in Canada and that’s worked very well for many, many years. But we do continue to get feedback that people would like more flexibility, more options, so it’s a matter of deciding how many people that would help and just allocating resources on those types of things.

Gord: So we talked about personalization, we talked about localization. Are there other factors that are coloring the search results we should be aware of as we’re trying to consider all these aspects?

Matt: Once you’ve sort of “broken the mould” with different results for different countries, after that it’s good for people to move beyond the idea of a monolithic set of search results. If we had the ability to say someone is searching for Palo Alto or someone is searching for Kirkland or Redmond and give them local newspapers, truly local newspapers, that would be a good win for users as well.  So over time, I would expect search results to serve a broader and broader array of services.  The idea of a monolithic set of search results for a generic term will probably start to fade away, and you already see people expect that if I do a search and somebody else does the search, they can get slightly different answers. I expect that over time people will expect that more and more, and they’ll have that in the back of their heads.

Gord: Let’s take that “over time” and drill down a little more.  One of the things that was interesting for me when I was talking to Marissa was the fact that the Kaltix acquisition was made four years ago, and it’s really taken four years for that technology to show up in the search results.  Obviously a cautious approach to it.  And even with that, we’re talking a couple of results being lifted into the top 10, and we’re talking one in five searches.  Also, Marissa wasn’t exactly sure about this, so I’ll clarify it with you.  She believed that it would never replace the number one organic result.

Matt: I believe that’s correct.  I’d have to double check to make sure.

Gord: So that’s a fairly tentative step in the direction of personalization, and you said over time we can expect this to continue to shift to be more of an individual experience.  Are we talking months, are we talking years, are we talking tomorrow?

Matt (chuckling): It’s usually not our policy to comment on exactly when stuff might roll out in the future, but personalization is an important trend and the ability to make search results better through personalization is really exciting to us here at Google.  I think if you look backwards over time, a lot of the reason why we might not have been able to personalize before was because Google was very much a “you come to the front page, you do a search, you get the results and you’re gone” type of model.  And there really weren’t that many opportunities to have a single sign on or some sort of Google account, where we could actually learn or know a little bit more about you to make your results more relevant.  So I think part of it involves getting all of the different ways of having an account together, so you can have personalized news, which rolled out a while ago, you could have a personalized homepage and those things give people a reason to sign in to Google.  Once you’re signed in to Google that helps us a lot more, by having your search history and the ability to offer personalization.  So at least looking backwards, I think some of the amount of time was just getting people ready to have a Google account and not just show up in Google, do a search and leave.

Gord: So part of it is that transition from a tool you use to more of a community you are engaged in.

Matt: Yes.

Gord: That’s moving closer to your competition, notably Yahoo and Microsoft.  Google’s done very well as a tool.  Is this just the inevitable progression?

Matt: I think one nice thing is that Google adapts very well to what users want, and also the industry marketplace.  And so when our primary competition was a pure search engine, whether it be AltaVista or AlltheWeb or HotBot or Inktomi, then pure search mattered very much.  Search is still a part of everything we do.  It’s at the core of all the information that we organize and yet competing against sites like Yahoo and Microsoft involves a different set of strategies than competing against just a search engine for example.  So I think competition is very good for users, because it makes all of us work hard and it keeps us on our toes.  The one strength that Google has is that we do adapt and we look at the marketplace and we say, “What do we need to deliver next for our users to help them out and to encourage them to be more loyal to Google?”

Gord: So for your job, where you’re looking at the quality of the index and policing it, how does personalization change your job?

Matt: To some degree, it makes it easier, because it’s not one monolithic set of search results anymore.  But let me flip that around and say how we can make it easier for SEOs as well.  I’m a computer graphics person, so if you go back to a concept called digital half-toning, it’s this process where you have nothing but black and white, yet you are able to approximate different shades of gray. And if you look at the existing set of search results, a lot of people before had a very black or white mentality: either I’m ranking number one, or I’m not in the search results at all.  And that’s a very harsh step function, in terms of you not ranking where you think you should be, and maybe you’re not getting very much traffic at all.  If you are ranking number one, or very highly, you’re a very happy person.  And yet that monolithic set of search results may not serve users the best.  So now as we see that spread and soften, more people can show up at number one, but for a smaller volume of queries.  And so individual users are happier because they’re getting more relevant search results, and yet it’s not a winner-take-all mentality for SEOs anymore.  You can be the number one ranking set of results for your niche, whether it be a certain demographic or a certain locality, or something like that.  And I think that’s healthier overall; rather than having just a few people that are doing very well, you end up with a lot more SEOs, and a lot more users who are happy, and that softens the effect quite a bit.

Gord: What you’re talking about is a pretty fundamental shift in thinking on the part of a lot of SEOs…

Matt: Yes.

Gord: … a lot of SEOs are almost more engineers right now, where they’re looking at the algorithm and trying to figure out how to best it.  You’re asking them to become a lot of things: marketers, PR people, content developers, and to know more about the user, more about user behavior online.  These are very different skill sets and often don’t reside in the same body.  What is this going to do to the SEO industry?

Matt: I think the SEOs that adapt well to change and optimize for users are going to be in relatively good shape, because they’re trying to produce sites that are really pleasing and helpful to users.  It’s definitely the case that if all you care about is an algorithm, then the situation grows more complicated for you with personalization.  But it’s also an opportunity for people to take a fresh look at how they do SEO.  So to give you a quick example: we always say, don’t just chase after a trophy phrase.  There are so many people who think if I ranked number one for my trophy phrase, I win and my life will be good.  When, in fact, numerous people have demonstrated that if you chase after the long tail and make a good site that can match many, many different users’ queries, you might end up with more traffic than if you had that trophy phrase.  So already the smart SEO, looking down the road, knows that it’s not just the head of the tail, it’s the long part of the tail, and with personalization and the changes in how SEO will work, it will just push people further along the spectrum, towards looking at “it’s not just a number one result for one query, how do we make it across a lot of queries?  What value do I deliver?  Am I looking at my server logs to find queries that I should be targeting?  And not just search engines, how do I target different parts of the search engine?  Like the local part of Google, the maps part of Google.  How do I target Google Notebook and the other properties, and how do I show up well across the entire portfolio of search properties?”  And that’s a healthy transition period that will push people towards delivering better value for their users, and that’s better for everybody.

Gord: I get that, and I’m an SEO.  My challenge comes in getting my clients, who in a lot of cases did their own SEO or worked with another SEO firm before they came to us, and are used to that trophy phrase ranking.  How do we get them to get it?  Because I see that being a challenge for a lot of SEOs. They will understand it, but getting the client to understand it could be a different matter.

Matt: Sometimes I think you might have to do a demonstration: sign them into personalized search, do a query, sign them out, do the query again and show them, these are very different sets of results.  And sometimes the demonstration can be very visceral, you know, it can drive home the point that it’s not just going to be this one trophy phrase. People are going to have to think and look at the entire horizon of the space.

Gord: In Google there’s a very definite church-versus-state divide, and traditionally the relationship with the advertiser was almost exclusively on one side of that divide.  But this could mark a fairly fundamental shift, and it will impact your advertisers. So, as part of that community, will Google be doing anything to help those advertisers understand the organic part of their visibility on Google?  Will you be doing the same demonstration you were just telling us we should be doing?

Matt: I think Google is always trying to communicate with the outside community, both with webmasters and advertisers.  So it’s really exciting to see some of the different techniques that we’ve used, everything from webinars to training materials to making videos available.  I would definitely say that every part of Google is going to keep their eyes open on how to best communicate how to stay on top of changes, because nobody wants anybody outside of Google to be unprepared for personalization or improvements in any of our technologies.

Katie (Katie Watson, Google PR representative who was sitting in on the interview): Something to actually cite there is that I know we recently just opened up our webmaster blog to outside comments, so that’s a good example of gradually moving forward to communicate even better.

Matt: You were couching the question in terms of advertisers, but if you look at the general story of webmaster communication and assume that that’s the leading edge, it’s pretty safe to assume that those smart ideas are percolating throughout the company and we’re trying to figure out all the different ways to communicate more.

Gord: So that’s the canary in the coal mine. Whatever’s happening in the webmaster community will act as a testbed for communication?

Matt: Exactly.

Gord: There is a debate raging right now about “Is SEO rocket science?”  (Matt begins laughing) So what does personalization mean for that debate?  Does it become more complicated?  You said it becomes easier in some ways, and I countered that by saying that may be, but it’s also spreading out in a lot of different directions. Is there still a place for the pure SEO consultant out there?

Matt: I think there still is a place for a pure SEO consultant, but it’s also true that over time those consultants have to keep adding to their skill set.  A few years ago no one would have even thought about the word Ajax, and now people have to think about Ajax or Flash and how do I handle some of these new interfaces to still make sites crawlable?  So I definitely think there will still be places for consulting and improving crawlability of sites and advice on keywords, and personalization will add some wrinkles to that, but I have faith that, over time, we’ll see the benefit to users, and if you make a good site for your users, you will naturally benefit as a result.  Some people spend a lot of time looking at data centers and data center IP addresses, and if people want to have that as a hobby they’re welcome to it, but a lot of people don’t do that anymore and they’re just worried about making good results, and yet everything still comes out pretty well for them.

Gord: Some time ago I wrote a column along that line and said that, in many ways, the white hat SEO has helped clean up the black hat side of the street, because they enabled those good sites to be crawled, to show up in the index and to assume their rightful place in the results.  It would seem to me that personalization is just going to drive that process faster.

Matt: I think it will.  It’s making black hat tougher to do.  I think it’s interesting: it was designed primarily to improve the relevance for users, but as a side effect, it definitely changes the game a lot more if you’re on the black hat side of things than if you’re on the white hat side of things.

Gord: Matt, I think that wraps things up for me…

Matt: Thanks, that was fun.

Should We Believe Google’s Click Fraud Numbers?

Today, Google finally came out with some solid numbers around the click fraud issue. The number of invalid clicks across the Google network? Less than 10%. The total amount of undetected click fraud that advertisers have reported and asked for a refund for? 0.02%. I was briefed by Google a little while ago about their plans around click fraud, so I’ve had some time to digest the numbers and think about them. Google also passed my name along as an expert third party that the media could contact for more commentary about the numbers and Google’s product roadmap for dealing with click fraud. If you’re interested in what the numbers actually mean, I would suggest going to Danny Sullivan’s post this morning on Search Engine Land. Danny does his usual thorough job of making sense of the announcement.

One question that I got from a couple of reporters yesterday was: did I believe Google’s numbers? Although I should have anticipated this question, I was somewhat surprised. So last night I thought about it. What would Google gain by fudging the numbers at this point? I think there are a few points you have to consider when looking for the answer. Based on the fact that I’ve already been asked it three times, by three different reporters, I believe it is a valid question and one that a number of people will probably be asking.

Maybe I’m naïve, after all, I am an Alberta farm boy at heart, but in all my interactions with Google I have to say, Google just doesn’t work this way. Google is a very cautious company when it comes to divulging information. I would think one of the biggest frustrations that Shuman Ghosemajumder has had in the past is having to keep his mouth shut while various inflated numbers around click fraud were thrown about. My belief is that it’s been Shuman lobbying inside of Google that finally convinced them to open the box a little bit on the scope of click fraud in the Google advertising network. Maybe the “don’t be evil” motto of Google sounds trite to some, but people at Google believe it and take it to heart.

What would Google have to gain by releasing false numbers about click fraud? The only possible motivations would be: one, to artificially inflate their stock price; and two, to encourage more advertising revenue by falsely reducing the sensitivity around the click fraud issue.

Let’s deal with the first point. I talk to financial analysts all the time and frankly, it’s been a long time since any of them asked me about click fraud. As sensitive issues go, there are a lot of other factors that financial analysts are looking at much more closely when it comes to making recommendations on buying or selling Google stock. I believe click fraud has already been factored into the valuation and analysts have moved on.

When it comes to advertisers, there is still sensitivity around the click fraud issue, but it has lessened in the last year. The recent SEMPO study shows that, as a concern for advertisers, it actually trended down from 2005 to 2006. Certainly it’s something we should be aware of and keep our eye on, but I really don’t believe it’s preventing advertising revenue from flowing into Google at this particular point. So any short-term gain that might come to Google from falsely announcing numbers would potentially be a bit of a spike in their stock price. But within a day or a week other factors would smooth that out and it would basically become a nonissue. I really don’t believe it would have any impact on advertisers at all. The short-term gain would be minimal at best.

But, the long-term cost to Google could be tremendous if they were caught releasing false numbers around click fraud. It would just be a really, really dumb thing to do, and you can say what you want about Google, but one thing they’re not is dumb. So do I believe the numbers? Yes, I have no reason not to.

The other question the reporters asked me was what I believed the number to be for undetected click fraud. One of the reporters was actually from BusinessWeek, and if you’ve read my blog you know that I’ve taken some exception to BusinessWeek’s reporting around click fraud and search in the past. I did happen to mention that to the reporter I was talking to. The way I answered the question for BusinessWeek was that, obviously, we don’t know what we don’t know. Potentially there could be a lot of click fraud that slips through all of Google’s filters and slips past the advertiser as well. But for it to do so, it would have to be click fraud at an extremely sophisticated level. Let me explain why it’s highly unlikely that there’s a large percentage of undetected click fraud in Google’s advertising network.

First of all, Google has a number of signals they can watch to determine if click fraud is happening. Shuman mentioned that there are well over a hundred data points they look at, including overall ROI rates, impression rates, click stream activity, click patterns and IP detection, and those are just a few of them. Also, Google is very, very good at building systems. Their engineers are the best in the world. So if they throw their collective brain power at a problem, you can be pretty confident they’re going to come up with a robust solution. Click fraud was one of the biggest threats Google has faced in the last few years. They knew they had to restore advertiser confidence around the issue, so they threw their full engineering horsepower at the problem to build the filters they currently have in place. This is the first line of defense against click fraud. The vast majority of the invalid click activity happening in the Google network is caught by these filters. That’s the first screen this activity would have to pass through.

The second screen is Google’s post-click review. This is where they look at questionable activity that made it through the proactive filters, do further investigation and, if they feel it’s warranted, go back and make a refund to the advertiser without the advertiser having to take any action at all. Again, this is a very robust program that Google has put in place, and it represents the second screen that fraudulent activity would have to get through.

The third screen is the advertisers themselves. Think about this: we have a lot of very sophisticated advertisers who have put robust analytics in place and have a deep, inherent understanding of what their website traffic patterns should look like. These advertisers have also been exposed to the so-called reporting of click fraud, like the BusinessWeek exposé. They have a heightened sensitivity to click fraud, so they would be very vigilant, particularly about any traffic coming from Google. So undetected fraudulent activity would have to get past this screen as well.

Finally, as an overall metric, Google aggregates the conversion data from advertisers who have Google Analytics in place and uses it as a baseline of what typical behavior across the network should look like. In aggregate form, the data allows them to check for anomalies in the dataset that may indicate fraudulent activity. This level of detection is over and above all the other fraud detection I previously mentioned. It acts as a monitor on the overall activity that could potentially indicate undetected click fraud in the network. So the likelihood of there being a significant amount of undetected click fraud is very, very low. Once again, so low it’s probably not worth spending much time worrying about.
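To make the three screens (plus the aggregate monitor) a little more concrete, here’s a minimal sketch in Python of how a layered screening pipeline like the one described above might hang together. To be clear, this is my own illustration: every signal name and threshold in it is invented, since Google has never published its actual filter logic.

```python
# A hypothetical, simplified sketch of layered click screening.
# All signal names and thresholds are invented for illustration;
# Google's real (unpublished) filters are far more sophisticated.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Click:
    ip: str
    is_duplicate: bool = False    # e.g., a double click on the same ad
    bot_signature: bool = False   # matched a known automated pattern

def screen_one_proactive_filter(clicks):
    """Screen 1: discard invalid clicks before anyone is billed."""
    clicks_per_ip = Counter(c.ip for c in clicks)
    valid, invalid = [], []
    for c in clicks:
        # The per-IP cap is an arbitrary stand-in for the
        # "questionable activity from a single IP address" signal.
        if c.is_duplicate or c.bot_signature or clicks_per_ip[c.ip] > 100:
            invalid.append(c)     # the advertiser is never charged
        else:
            valid.append(c)
    return valid, invalid

def screen_two_post_click_review(conversion_rate, network_baseline):
    """Screen 2: flag already-billed traffic for review (and a possible
    proactive refund) when it diverges sharply from the baseline."""
    return conversion_rate < 0.25 * network_baseline  # invented threshold

# Screen 3 is the advertisers' own analytics, and above all three sits
# the aggregate monitor: conversion data pooled across the network,
# watched for anomalies that none of the individual screens caught.
```

The point of the sketch is the architecture, not the thresholds: each successive screen only sees the traffic that survived the one before it, which is why a large volume of fraud would have to be extraordinarily sophisticated to make it all the way through undetected.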

The gist of my column today in Search Insider is advising advertisers to look at a much bigger picture than just click fraud. I realize Google had to release these numbers because everyone was asking for them. If we can accept those numbers, then perhaps we can get on with looking at our overall campaign performance and really spend some time on the things that would have a much greater impact on our overall return on investment. For example, the drum that I will continue to beat as long as anyone is willing to listen is for advertisers to focus on their own conversion rates. Time after time, I see landing pages that aren’t optimized and aren’t aligned to the intent of the potential visitor. I see sloppiness in advertising messages, with a lack of relevancy to the queries that are used. If advertisers paid more attention to these things, they’d realize far greater benefit than they would by fretting over click fraud.

Don’t Think Click Fraud, Think Negative ROI

First published March 1, 2007 in Mediapost’s Search Insider

The search engines have a dilemma on their hands when it comes to click fraud. We’re all clamoring for more information on the issue. We all want solid numbers to help us define its scope. The very fact that we refer to it as click fraud is confusing: a lot of things get thrown into the click fraud “basket” that are in no way fraudulent. Thanks to sensationalist reporting by publications like BusinessWeek, click fraud is portrayed as the biggest scourge to threaten the Nirvana that is search marketing. A tremendous amount of resources has been dedicated to fighting click fraud by the engines themselves, in response to advertisers’ demands that the problem be stamped out.

But when you do an honest appraisal of the issue, the search engines would rather we get over our preoccupation with click fraud and start thinking of it as part of a much bigger whole: the return we get on our search marketing investment. This in no way negates the importance of click fraud as an issue. I don’t think there’s anyone more aware of click fraud than Shuman Ghosemajumder (Google), John Slade (Yahoo), and Brendan Kitts (Microsoft). They’re the first to say that click fraud does exist and that they’re each, in their own way, actively policing it.

It’s more a question of proportional response: an appropriate amount of attention, given the actual scope of the issue. And today, for the first time, Google is giving us concrete numbers on what that scope might be, at least for its network. Google is announcing a multiphase approach and product road map to handle the click fraud question, accompanied by hard numbers about how much of Google’s traffic could actually be considered fraudulent. I’ll talk more about the numbers in a moment, but first, let’s explore the dilemma that presents itself to the engines.

Caught Between an Over-Hyped Threat and an Ignored Danger

The engines know that, as a factor that negatively impacts return on search marketing investment, click fraud represents a tiny percentage. There are far bigger drains on the performance of a campaign that advertisers should be paying significantly more attention to, but thanks to doom-and-gloom “exposés,” there’s a disproportionate amount of attention focused on click fraud. So, although the engines would rather advertisers focus on the big picture and consider all the factors, including fraudulent traffic, that are negatively impacting their return on investment, they’re playing the game they have to and are keeping the focus on click fraud. Google’s announcement today may allay some of the “sky is falling” concerns being whipped up by journalists, but in the long run it may do advertisers a disservice by diverting attention from more pressing campaign optimization issues.

I’ve talked about some of this before, but here are some of the issues I have with the current click fraud situation:

Just Because We Call It Fraud Doesn’t Make It Fraud

Click fraud seems to be the label that has stuck with this particular issue. There have been calls to try to put numbers around the occurrence of click fraud in search marketing. In reality, it’s not that cut-and-dried. First of all, fraud implies that someone loses money through the deliberate actions of someone else. For a click to be fraudulent, at least in the way that BusinessWeek tried to define it, advertisers have to lose money. They have to be paying for traffic that has no value.

Less Than 10% Are Invalid Clicks

The fact is, there are a number of factors that may result in traffic that the advertiser would probably prefer not to pay for. Fraudulent traffic is just one of them. Google puts all this traffic into a basket they call invalid clicks. This includes double clicks on ads, questionable activity from a single IP address, automated clicks, and yes, clicks from the nefarious click fraud perpetrator. In today’s release, Google said invalid clicks accounted for less than 10% of its total network traffic. The company didn’t want to get more specific than this, because the actual percentage can rise and fall with a fair amount of volatility, based on spikes in clickbot attacks and other factors. Google works to filter this traffic out proactively, so it’s as if the clicks never happened. The advertiser is never charged for this traffic. In most cases, the publisher of the site from which the traffic is generated is never paid for the traffic. No money changes hands, so no fraud has been committed. If anyone is out of pocket, it’s Google, not the advertiser.

The Bottom Line for Advertisers? .02%!

The traffic that the advertiser should be concerned about is the fraudulent traffic that slips through the cracks. This is truly click fraud. It’s not caught by the Google filters, and it’s up to the advertiser to come back, report it and request a refund. In this case, money has changed hands and fraud has been perpetrated. Today, Google announced that this represents .02% of its total traffic. Some time ago I did a column after a talk with Shuman at Google, and after making some assumptions and extrapolating the numbers, I came out with a “worst case” estimate of .2%. It appears that my worst case was much higher than reality, by a factor of 10.

I don’t know about you, but frankly, if something is only making a .02% impact on my advertising campaign, I’ve probably got better places to be spending my time. One place you might want to look? The conversion rates of your landing page. If you can bump your conversion rates by .5%, you’ve just made 25 times more impact on your overall campaign performance than by continuing to fret about click fraud on Google.
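For what it’s worth, the 25-times figure is just the ratio of the two percentages, treating both as effects on overall campaign performance the way the column does. A quick back-of-the-envelope check, with a hypothetical spend figure thrown in purely for illustration:

```python
# Back-of-the-envelope check on the 25x claim.
fraud_rate = 0.02       # undetected click fraud, as a % of traffic
conversion_bump = 0.50  # landing page conversion lift, in points

print(conversion_bump / fraud_rate)   # 25.0 -- 25 times the impact

# On a hypothetical $100,000 campaign, that .02% works out to about
# $20 of wasted spend.
print(100_000 * fraud_rate / 100)     # 20.0 (dollars)
```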

Google’s announcement today was about more than just releasing numbers on the occurrence of click fraud. The company is also announcing the creation of a Click Fraud Resource Center, a streamlined reporting process, the ability for advertisers to filter out questionable IPs, more detail in its invalid click reporting and some other initiatives. I believe all these things are good and are needed by advertisers, if only to put to bed the perception of click fraud as a major issue. But do me a favor, will you? Take some of the time you may be spending worrying about click fraud, and start looking at all the other places where your return on investment may be slipping through the cracks. My guess is there are a lot bigger cracks you should be looking at than the click fraud one.

Webpronews Video: Who Said What?

I happened to be browsing through Webpronews on the weekend and saw one of their new video news updates. The clips are well produced, professional looking and even have their own attractive newscaster, Nicole Eggers. The one I happened to pick, however, left a little to be desired on the accuracy front. As you’re probably aware, I just did a series of interviews with the top usability people at each of the three engines for Search Engine Land, and a couple of weeks ago I did a recap talking about the differences I saw between their philosophical approaches. The blurb on the video appeared to be on the same topic, so I decided to give it a watch. In it, Webpronews indicated that search expert Danny Sullivan had talked to each of the three usability people at the engines and had come to the following conclusions:

  • That relevancy was almost a religion for Google
  • Yahoo had a heightened sensitivity to the needs of their advertising community
  • Microsoft was still finding their competitive niche

Huh? That’s exactly what my recap said. They even pulled a few quotes from it and attributed them to Danny. I quickly e-mailed Danny to see if we were doing some kind of weird Cyrano de Bergerac thing, but Danny was apparently as out of the loop on this as I was. Anyway, a quick e-mail to Webpronews seems to have gotten it straightened out. They’ve pulled the clip and apparently they’re redoing it.

Not that I mind being mistaken for Danny, but I just hate to be putting words in his mouth. By the way, does anyone else feel like they’re being scolded by Nicole? Again, not that I mind.