Wikia Search: Should Google be Scared?

Wikipedia co-founder Jimmy Wales is creating a bit of a stir with his new project, a wiki based search engine temporarily called Search Wikia. Of course, the media are jumping at the chance to postulate whether this will be a Google killer. Wired.com has an interview with Wales that looks at the concept in a little more depth.

Wales definitely comes out swinging on the project website. First, he calls it “a project to create the search engine that changes everything”. Bold, if nothing else.

He then comments:

“Search is part of the fundamental infrastructure of the Internet. And, it is currently broken.

Why is it broken? It is broken for the same reason that proprietary software is always broken: lack of freedom, lack of community, lack of accountability, lack of transparency. Here, we will change all that.”

A community-based search property is an intriguing concept, but not a terribly new one. Open Directory, GoGuides and others have gone down this road before. The difference here is the total opening of control, and the use of a wiki platform for collaborative development of the search index.

Here is the challenge, as I see it.

It’s dependent on the involvement of the community. Wales has no idea how the community will be managed, and doesn’t really plan on figuring this out: “I don’t think it makes sense to manage a community.” He’s right. You can’t manage a community; you incentivize it. You give people a reason to participate. Wales seems to think altruism will be the motivating factor:

“It’s about building a space where good people can come in and manage themselves and manage each other. They can have a distinct and clear purpose — a moral purpose — that unites people and brings them together to do something useful.”

I disagree. There are a few altruistic netizens out there, but not nearly enough to help make this dream a reality. It’s not a scalable concept. But, you say, what about Wikipedia? Isn’t that based on the same premise? Yes, and no.

I believe that the main motivation for contributing to Wikipedia is to spout off on something you know about, and to leave a semi-permanent footprint on the shifting sands of the Web. Whether it’s attributed to you or not, when you contribute to Wikipedia, you can point to something and say, “I did that. That’s me… and this is how smart I am on this particular topic.” I don’t think altruism has much to do with it.

In a search engine wiki, your contribution would be hidden in a human powered algorithm. You wouldn’t leave a footprint. There’s no sense of you, no bragging rights. There’s no compelling incentive to participate. And to bite off something at the scope Wales is envisioning, you would need huge numbers of people participating.

Personally, I would like to see Search Wikia work. But there’s a reason why community-based search hasn’t worked until now. It’s the same reason social tagging has questionable viability. Anytime you’re asking people to be involved, you’ve introduced a limiting factor. There is only a handful of people who will get involved in something like this. They’re enough to get a concept off the ground, and they’ll do it because they’re motivated by different things than the other 99% of society. But once you burn off that resource, you’re stuck. Wales’ dream depends on tapping into the other 99%, and I just don’t see that happening. We don’t want to work that hard. There’s no “what’s in it for me?” reward for doing the heavy lifting.

But hey, maybe I’m wrong. Go ahead and take a few minutes to post a comment telling me why the average Net user is a more giving person than I believe them to be. I’ll even give you credit for helping contribute to the betterment of the online world 😉

US Statistical Abstract: Time Well Spent?

The U.S. Census Bureau just released its new Statistical Abstract for 2007. In it, the Bureau predicts the amount of time adults and teens will spend consuming media in various forms:

  • 65 days in front of the TV
  • 41 days listening to radio
  • A little over a week on the Internet in 2007
  • Adults will spend about a week reading a daily newspaper
  • Teens and adults will spend another week listening to recorded music
  • Consumer spending for media is forecast to be $936.75 per person

What was interesting about this was the gap that still exists between TV and radio consumption and time spent on the Internet. To me, it’s indicative of the nature of engagement, at least for now.

According to these stats, we will spend roughly ten times as much time in front of a television as in front of a computer cruising the Internet. The media release didn’t elaborate on the nature of time spent on the Internet. Does this include work time as well?
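As a quick sanity check on that multiplier (treating “a little over a week” as roughly seven days, which is my approximation, not the Bureau’s), the arithmetic lands at nine to ten times:

```python
# Census Bureau figures quoted above, in days per year.
tv_days = 65        # time spent in front of the TV
internet_days = 7   # "a little over a week" online, taken here as ~7 days

ratio = tv_days / internet_days
print(f"TV time is roughly {ratio:.1f}x Internet time")  # ~9.3x
```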

Given these numbers, one can understand why the lion’s share of ad budgets still goes to television, and I expect that TV sales execs will gleefully quote these figures at every possible opportunity. But consider the following:

  • The consumption of entertainment content online is in its infancy. Strike that, it’s actually embryonic. If YouTube is the barometer of where we’re at, we have an immense way to go. All the hype about online video is still largely centered around viral growth amongst very early adopters who are watching amateur videos less than 3 minutes in length. It’s not the actual current impact of online video that’s creating buzz, it’s the paradigm shifting that we have to do when we consider the democratization of content creation, the searchability of the digital format and the interactive possibilities that come with the online distribution channel. All these things speak to a totally new experience. We’re just not there yet.
  • Think about the difference in your engagement level when you’re interacting with the Internet, as opposed to passively watching TV or listening to the radio. Think about how you respond to advertising messaging, especially when it’s relevant to the task you’re pursuing. The influence of this difference in engagement on consumers hasn’t been quantified yet, but at a gut level, we know it should be significant, probably a quantum leap in effectiveness. Actually, the numbers drive this home. In the research that’s been done on the impact of various channels on consumers, the Internet consistently ranks near the top, usually right after word of mouth, and much higher than television. And it has this impact with one tenth of the exposure time.
  • We need time to change our habits. Television watching has been ingrained in our daily routine for decades. Radio for a bit longer. Newspapers for centuries. The Internet is just celebrating its first decade as a widely accessible channel, and high speed access is less than 5 years old. Given that, the one week number is actually quite remarkable.

I’m sure these numbers will be quoted often, and spun in drastically different directions, depending on who’s doing the spinning. At first glance, my thought was, “Only one week?” But as I thought about it, the numbers just emphasized the vast potential of online. What will be fascinating is to revisit this in a year’s time and see how these numbers change. In Internet terms, 12 months is an eternity.

Should Google Stick to the Knitting or See What Works?

I was just doing some year-end cleaning of my “to be blogged about” folder and found a couple of lingering items from a few months back. Most of the time, that would make them hopelessly outdated, but these two touch on a bigger theme that is still relevant, and one that aligns strategically with a book I just finished re-reading.

First, the heretofore neglected articles. Did-It’s Bill Wise wrote a Search Insider column on how Google wins by losing, and John Markoff at the NY Times talked about the concern over “Google Sprawl”. Both talk about Google’s strategy of pushing into new businesses at a frantic rate, seemingly trying to reinvent everything at the same time. But they take slightly different approaches. Bill’s opinion is that the strategy works because the string of new challenges, and the many subsequent failures, continually generates buzz for Google that keeps driving its main revenue channel, search. The NY Times reports on recently voiced concerns that the myriad of new initiatives will confuse users and erode trust in the Google brand. It also touches on the implied conundrum between Google’s goal to integrate functionality into a simple and elegant interface, making it the online Swiss Army knife, and its desire to keep user data open, steering away from the Microsoft approach that landed that company in hot water with the Department of Justice. The timing of both pieces was right around the Google acquisition of YouTube.

There’s a bigger piece here that seems to be missing from both viewpoints. Let’s look at Wise’s assertion first:

“By continually announcing that it’s expanding beyond search, Google gains tremendous buzz, which translates into higher stock prices, which translates into still more buzz. All that attention keeps Google top-of-mind; by being top-of-mind, Google draws more users and more loyalty towards the Google brand–which means more searchers flock to Google Search, and more searchers stick with it. And it’s through Google Search that Google actually makes its money.

All that buzz is only beneficial if the new launches don’t succeed. If Google were to successfully expand past search, users would mistrust it as a corporate giant bent on empire-building–a problem that’s certainly familiar to Microsoft. Because Google fails at really getting a hold beyond search, users don’t see any effects of Google’s empire-building, and instead only see Google as a company that’s continually on the rise.”

The problem here is that Wise is confusing strategy with a byproduct of an approach that’s baked right into Google’s corporate DNA. I really don’t believe Google is purposely trying to fuel the buzz machine by venturing into areas with low odds of success. I believe Google does this because it doesn’t know any other way. It’s part of the company’s genetic code.

Next, John Markoff starts to uncover the clues that point to the bigger picture:

“Google executives generally answer questions about acquisitions by saying that the company is still experimenting with business plans, or by arguing that a program like Sketch-Up — a simple computer-aided design program — will have an indirect revenue impact by making the entire Google service more valuable.”

To be sure, the culture of grassroots innovation that has been scrupulously nurtured at Google is at the same time its greatest strength and its greatest challenge. And despite the fact that Google is being hailed as a pioneer, this is ground that has been trodden before. Google is hardly the first to go down this path. Which brings me to my renewed acquaintance with Jim Collins and Jerry Porras’s book Built to Last.

The Google mandate that a percentage of their engineers’ time be set aside to work on new, cool and cutting-edge products is a chapter stolen right out of 3M’s playbook. And 3M, like HP, like Sony, like Motorola and like many of the other visionary companies profiled in Built to Last, started without a business plan. These companies worried first about the who, and then worried about the what. Google is clearly following in the same footsteps.

In fact, in the book, Collins and Porras show how visionary companies often “try a lot of stuff and keep what works”. Here is a pertinent quote from the book:

“Visionary companies make some of their best moves by experimentation, trial and error, opportunism, and – quite literally – accident. What looks in retrospect like brilliant foresight and preplanning was often the result of “Let’s just try a lot of stuff and keep what works.”

Collins and Porras devote a whole chapter to the topic. They show how many iconic corporations struggled, often for years, before they found the right business model. Google has a leg up on these companies, as it already has a very successful cash cow driving its ability to “try a lot of stuff”. And it’s one notable area where Collins and Porras offer a different viewpoint from previous seminal works, including Tom Peters’ and Bob Waterman’s In Search of Excellence. Peters and Waterman advocate “sticking to the knitting”, warning “the odds for excellent performance seem strongly to favor those companies that stay reasonably close to the businesses they know.” Collins and Porras counter that if that were always the case, 3M would still be trying to run mines in Minnesota, HP would be selling nothing but audio oscillators and American Express would still be a delivery service.

The challenge for Google, as Markoff identified in his article, is to do all this without degrading the user experience. Ironically, the challenge comes from Google’s initial success in search. If Google search weren’t as successful as it is, Google would have free rein to experiment. But they have to pay scrupulous attention to the user experience. I’ve commented before that Google’s biggest obstacle as a visionary company is its early success.

Here, Google is faced with the yin and yang challenge that faces all visionary companies: how to preserve the core while at the same time stimulating progress? And this gets down to a fundamental place where Google might be veering off track. Google’s core purpose, and the one that Google search succeeds very well at, is to organize the world’s information and make it universally accessible and useful. This is what the company should scrupulously protect. All of Google’s free-time initiatives should be aligned to that core purpose. But Google seems to be trying to pursue a number of core aims at the same time. Redefining how advertising is bought and sold (recent forays into print and radio) seems to have little to do with Google’s stated core purpose. Controlling the main intersections of the new online global community (the purchase of YouTube) might be tangentially related, but clear alignment is not apparent. If Google stuck to its initial core purpose, it would still have scads of room for growth and innovation.

If Google is going to pursue a grassroots culture of innovation, that’s admirable. If they want to try experimenting in a number of areas and see what succeeds, while at the same time pruning out the failures, they can take comfort in knowing that strategy worked well in the past, notably for 3M. But to go down this path, it’s essential that an overarching core purpose be defined and communicated clearly to each and every Google employee. Innovation has to be aligned with a common goal. And when companies try to identify more than one core purpose, they can lose direction. Google might be well advised to see how other trailblazers have handled this in the past. For example, the core purpose of 3M is to solve problems through technology. While it’s broad and all encompassing, it does provide a sense of direction for 3M employees.

If I were to identify one challenge for Google to face in 2007, it would not be fragmenting its business model, or even defining one. It would not be nailing another surefire revenue channel. It would be deciding, clearly and unequivocally, what the company wants to do, communicating the hell out of that internally and, by doing so, pointing all that formidable brainpower in one direction.

What We Searched for in 2006

First published December 28, 2006 in Mediapost’s Search Insider

Right about this time of year, you’ll see two things coming into your inbox in the way of search-related columns: first, predictions for 2007, accompanied by scorecards of success for last year’s predictions; second, recaps of the top searches of 2006. I didn’t make any predictions last year, so I figure it’s too late to jump on that particular bandwagon, but as to the second, I’m fully on board! Last year, I took a look across the major engines and was somewhat disheartened by the lack of intellectual depth shown in our collective quest for knowledge. So, how did we fare this year?

Google Coming Clean?

Let’s start with Google. Unfortunately, one has to read between the lines on these various reports. The list isn’t actually the real list for any of them. These lists are heavily filtered, and in Google’s case, seemingly altered to a substantial degree. Here is its reported top 10:
Bebo
Myspace
World Cup
Metacafe
Radioblog
Wikipedia
Video
Rebelde
Mininova
Wiki

A little investigative work at Google Trends (thanks to Danny Sullivan) soon uncovered the inconsistencies. Google’s reported No. 1 term, “bebo,” actually has nowhere near the volume of “myspace” and “world cup.” In fact, “bebo” is almost flat-lined at the bottom. I suppose there are internal excuses Google might have for the inconsistencies, including aggregation of misspellings, but just how many ways can you misspell bebo anyway?
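That kind of cross-check is easy to sketch in code. The volume numbers below are invented placeholders, not actual Trends data; the point is simply that a reported ranking can be tested against measured relative volumes:

```python
# Hypothetical relative search volumes (invented numbers, not real Trends data).
measured_volume = {"bebo": 2, "myspace": 100, "world cup": 60}

# The top-of-list ordering as reported by the engine.
reported_order = ["bebo", "myspace", "world cup"]

# Ranking implied by the measured volumes, highest first.
implied_order = sorted(measured_volume, key=measured_volume.get, reverse=True)

inconsistent = reported_order != implied_order
print(implied_order)  # "bebo" falls to the bottom on volume alone
print(inconsistent)   # True: the reported No. 1 doesn't match the data
```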

The list actually becomes more interesting when you include some of the terms that got filtered out. A quick look shows that Google is often used for navigation. Terms like myspace and wikipedia are not queries for information, but a quick way to get to a site. Google has already deleted many navigational terms from the list, so let’s add the big ones, Yahoo, Google, MSN and YouTube, and see what the trend chart looks like. Now we see the true search volumes, and that a lot of people are using Google to get from point A to point B. What is a little disturbing is that searches for “Google” on Google hold the No. 2 spot, just behind Yahoo. This drips with irony, and not a little stupidity. “Hey… how do I get to Google? Oh… wait a minute, I’ll just search on Google.” Duh!

Yahoo on the Red Carpet

Meanwhile, Yahoo seems to be turning into the “Entertainment Tonight” of search engines. Once you navigate through the incredibly annoying user interface they slammed on it (please, Yahoo, take two Jakob Nielsens and call me in the morning), you find that the top 10 on Yahoo are:
Britney Spears
WWE
Shakira
Jessica Simpson
Paris Hilton
American Idol
Beyonce Knowles
Chris Brown
Pamela Anderson
Lindsay Lohan

This is almost too sad to comment on. Almost. If these are the best things that searchers can throw at Yahoo, no wonder they’re struggling in the search engine showdown. It’s the equivalent of the tabloid rack at the grocery checkout counter.

Yahoo also allows a peek at other countries’ top-ten lists. Last year, the Germans showed a blend of Teutonic practicality and pure kinkiness, and nothing seems to have changed this year. The loosely translated top ten are as follows:
Weather
Route Planner
Erotica
Telephone Directory
Chat
Greeting Cards
Horoscopes
Games
Web
Paris Hilton

Well, at least wife swapping didn’t make the list this year.

The Brit Top Ten shows they love their dirt:
Heather Mills McCartney
Pete Burns
Big Brother
The Ordinary Boys
World Cup
Steve Irwin
Borat
Notting Hill Carnival
Zidane
Kate Moss

And my fellow Canadians? Well, at least we’re consistent, if not terribly exciting. NHL (The National Hockey League) tops the list once again.

The Search Engine formerly known as MSN

The MSN (now Live) list also shows a bias towards the entertainment side, but it also showed how out of touch I was with pop culture:
Ronaldinho
Shakira
Paris Hilton
Britney Spears
Harry Potter
Eminem
Pamela Anderson
Hilary Duff
Rebelde
Angelina Jolie

Okay, Britney I know, Pam I know, Paris I know. Who the heck is Ronaldinho–or what’s a Rebelde? I’ve since been clued in by soccer fans and a quick check on Wikipedia. Ronaldinho was FIFA World Player of the Year in 2004, and “Rebelde” is a Mexican TV series, for those of you equally pop-cult ignorant.

In the final analysis, what’s striking about these lists is what the search engines seem to be used for. Google has become the main intersection of the Web. Its top searches make clear its role as a traffic clearinghouse, routing millions of users through the results page as they navigate from point A to B. It’s infrastructural and essential. The top searches on Yahoo and MSN tell a different story–one of idle curiosity, no pressing plans and killing time. In a nutshell, this story crystallizes the fundamental problem Yahoo and Microsoft face if they hope to challenge Google as the king of the search hill. They have to become essential.

Happy New Year!

A Sign of Things to Come: eShopping at a Store near You

A small article in the Wall Street Journal (a subscription is needed to read the whole article) is a precursor of a big shake-up to come. It’s something I’ve been predicting for some time now, and while it will take a while to gain traction, it will turn local retail upside down.

Three malls in California and one in Arizona have agreed to allow shoppers to check prices on actual inventory through text messages from their cell phones with a service called NearbyNow. According to their site, NearbyNow plans to add another 17 malls throughout the US to their network by April. Another service called Slifter is focusing on national chains like Best Buy, CompUSA and Foot Locker.

Here’s why this is revolutionary and why you’ll be hearing more.

  • For shopping, this represents discontinuous innovation. It’s a big win for the user, allowing them to shop smarter than ever before. Consumer demand will drive adoption of this new approach.
  • For retailers, this is scary as hell. By allowing their inventory to be captured in real time, they’re agreeing to be compared side by side with everyone else, including online retailers with no physical overhead to drive up prices. It completely levels the playing field.
  • As a number of technologies improve and converge, this will become substantially more useful and powerful. Mobile computing, GPS and search functionality will make this a must-have for consumers.
  • It completely fuses the online and offline worlds, making the transition seamless.

This is one of those ideas you just know will take off, but there are going to be some significant hurdles to overcome. These services are only as good as their success at signing up merchants: the more stores in the network, the more successful the service. If only a few are included, consumers will always wonder if there’s a better deal that isn’t part of the service, defeating the purpose. And a number of retailers will resist this trend until the bitter end. Ultimately, it will be consumer insistence that forces the laggards to join.

Another challenge will be the user interface. Right now, both services run on cell phones, meaning you have to deal with an awkward keypad and stripped down display. But this problem will rectify itself with advances in mobile technology.

In the world of shopping, this changes everything.

No Real Surprises in the Latest iCrossing Study

iCrossing released the results of a new study conducted by Harris Interactive just before the holidays. The study looked at the role of online in the CPG market. A media release outlines the key findings, including:

  • Consumers look for CPGs online, with 39% of US adults confirming they’ve conducted a search for CPGs.
  • Women do this more than men. Footwear and apparel lead the categories searched for.
  • Online CPG searches often result in offline sales. Much of this activity is looking for sales or special offers at traditional bricks-and-mortar retail locations.
  • Activity is spread pretty evenly over search engines, retailer websites and manufacturers’ sites. Shopping engines and consumer information sites have substantially less traffic.

There are a few notable takeaways here that speak to the future use of online. Most CPGs have been slow to move to online as a marketing channel. The more commoditized the product, the less the online research activity, or so traditional marketing wisdom has told us. Certainly, CPGs have been very slow to enter the search arena, yet the iCrossing study tells us that a significant portion of the consumer population is turning online to research these everyday purchases.

To be honest, I think the study is probably underreporting the frequency of this. At Enquiro, we’re steering away from self-reported surveys as the sole vehicle for looking at search behavior, because we find that people have trouble recalling how often they use search and what they use it for. It’s become second nature for us to turn to the Web, and that in turn usually means search. So in a survey like iCrossing’s, memory lapses usually mean overly conservative numbers.

Another notable trend that would influence the findings is the increasing spread of high-speed Internet access. The likelihood of this CPG online activity happening is directly related to how handy a computer with an Internet connection is. The more ubiquitous access is, the more we’ll do a quick look-up on everything. About the only purchases I make now without some form of online research are groceries. And as local search becomes more robust, that will probably change too.

I’ve been predicting another surge of advertising dollars migrating into search over the next year or two. As we understand more how universal online research truly is, and how many major advertisers are completely missing this very important touchpoint, more budget will find its way into search. While there are no real surprises in the iCrossing study, it’s good that major advertisers are continually reminded that they’re missing a rather large boat.

Year End Lists and the Stories They Tell

I was just putting a Search Insider column in the can for next week (it will run next Thursday) about the year-end lists coming out of the various search engines, and it brought up a few observations, together with a story that hit my desk about Google capturing 63% of searches.

First of all, the top ten lists. Here are the reported lists from each of the engines:

Google:
  1. Bebo
  2. Myspace
  3. World Cup
  4. Metacafe
  5. Radioblog
  6. Wikipedia
  7. Video
  8. Rebelde
  9. Mininova
  10. Wiki

Yahoo:
  1. Britney Spears
  2. WWE
  3. Shakira
  4. Jessica Simpson
  5. Paris Hilton
  6. American Idol
  7. Beyonce Knowles
  8. Chris Brown
  9. Pamela Anderson
  10. Lindsay Lohan

Microsoft:
  1. Ronaldinho
  2. Shakira
  3. Paris Hilton
  4. Britney Spears
  5. Harry Potter
  6. Eminem
  7. Pamela Anderson
  8. Hilary Duff
  9. Rebelde
  10. Angelina Jolie

First of all, I say reported because these aren’t actually the real top searches. Danny Sullivan had a good post pointing out the inconsistencies. These are filtered, sanitized and in Google’s case, apparently manipulated. The same could be true for the others, but unfortunately, they haven’t provided a tool like Google Trends that we can use to trip them up.

Be that as it may, it’s the comparison between them that holds the story that I touched on in the column, but would like to explore in greater depth.

Look at Google’s list. It’s obvious that people are using Google to interact with the web. They’re using it like a tool, to get to where they’re going. This becomes more apparent when we add the real top searches, the navigational queries that were filtered from the list.

[Google Trends chart: navigational query volumes]

People use Google to get to Yahoo, MSN… and even Google (okay, I’m still trying to figure that one out).

Look at Yahoo and Microsoft’s list. It’s the online equivalent of the trash tabloid section of the local magazine rack. These aren’t essential searches, these are fluff. It’s the searching you would do if you had time to kill. It’s the searching you would do if you had nothing better to do. It’s the searching you would do if you weren’t using Google for something useful.

I’m sure part of this comes from Yahoo and Microsoft’s portal roots. It speaks to a different philosophy towards search. Google aims to be the Web’s Swiss Army knife. It appears that Yahoo and Microsoft aspire to be the Entertainment Tonight of the Internet. When it comes to the Internet, Google is infrastructure, Yahoo and Microsoft are superstructure.

And that’s a fundamental issue for Yahoo and Microsoft. To win, or even hold their own in search, they have to offer tool-like utility. They have to live, breathe and eat usability. They have to beat Google at Google’s own game. It’s not an easy task, and it’s getting harder every day. The latest numbers from Hitwise show they’re losing ground, not gaining it. According to the just-released report, Google has a 62.79% share of searches for the four-week period ending Dec. 16, compared to 21.9% for Yahoo and 9.28% for Microsoft. The number has been consistently trending up for Google, and trending down for the competition.
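For context, those three shares leave only about six points for every other engine combined. A quick tally, using the percentages as reported:

```python
# Hitwise share figures quoted above, four weeks ending Dec. 16.
shares = {"Google": 62.79, "Yahoo": 21.9, "Microsoft": 9.28}

big_three = sum(shares.values())    # 93.97% of all searches
everyone_else = 100 - big_three     # ~6% left for every other engine
print(f"Big three: {big_three:.2f}%, everyone else: {everyone_else:.2f}%")
```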

One last thing. Yahoo can say they focus on usability, but take a tour of the interface they put on their top 10. This would be enough to make Jakob Nielsen go postal. It’s one of the most irritating interfaces I’ve run into in a long time. It’s completely in Flash, launches with an irritating video clip, and makes you hunt around for the plain HTML version. I just know somewhere in Sunnyvale, there’s a team patting each other on the back for putting this thing together.

Stepping into the Did It/Web Guerrilla/Searchengineland Fray

I came in this morning, and what did I find? Another tempest stirring up in the blogosphere! Danny Sullivan, Kevin Lee and Greg Boser have all waded in, so what the hell, I’ll dive in too.

First, a little history. Did-It President David Pasternack started the whole deal some time ago when he took a swipe at SEO, predicting its imminent death. I’m not going to elaborate, but for those of you interested, here are links to the original article and a follow-up article.

Now, Kevin Lee from Did It has written a ClickZ column, adding some clarity, but also predicting organic results being pushed below the fold because sponsored ads are more relevant. I’m going to set aside for a moment the SEO spamming question that Kevin raises. Greg and Danny do a pretty passionate job of defending SEO.

I’d like to speak from another perspective, the search user. There are a couple things that should be considered here.

First of all, contrary to Kevin’s point, just paying for an ad doesn’t make it relevant. That’s because the vast majority of marketers don’t consider the intent of the search user. They assume that everyone is ready to buy right now. That assumption is at least 85% wrong. Go ahead, do a search for any popular consumer product. I’ll bet the ads you see are talking about lowest prices, free shipping, guarantees and other hot button items that are aimed at a purchaser. But study after study shows that search engines are used primarily for product research, not purchase. The problem is that marketers have a very biased set of metrics they use to measure return. They measure ROI based on purchase, so when they test, these types of ads tend to pull the numbers they’re looking for. But the metrics aren’t capturing the full story. The 85% of users that are researching are basically ignored. No value is assigned to them. Until PPC marketers figure this out, they’re not doing the user any favors.

Our research shows that a very interesting interaction takes place between the researcher and the purchaser in that Golden Triangle real estate. Both groups look at the top sponsored ads when they appear. They both look at the organic listings. Frankly, there’s not a lot of difference between the scan patterns. But it’s where they click that makes the difference. When they were ready to buy, based on a recent eye tracking study, about 45% clicked on top sponsored, and about 55% clicked on the top one or two organic links. Almost a 50/50 split, FOR THOSE WHO WERE READY TO PURCHASE. But when we look at the other 85%, the ones doing research, EVERY ONE OF THEM clicked on an organic link. And in the test, the same site appeared in both spots, so relevancy of the destination was equal. As long as users want organic links, organic optimization continues to be important.

Look, David Pasternack can ring the funeral bell for organic all he wants, but the fact is, it’s not his call. It’s the user’s. Yahoo has actually done exactly what he and Kevin are predicting. They’ve moved organic down the page, jamming more sponsored on the top. Based on Did It’s comments, this should be good for the user, right? It should be more relevant, pushing the “spam” down below the fold. Wrong. Google kicked Yahoo’s ass in user experience in our latest study by every metric we looked at. And they’re definitely winning in the big picture, including stock prices. The difference? About 14% of Yahoo’s screen real estate (at 1024 by 768 pixels) was reserved for top organic. 33% of Google’s real estate went for top organic. You want more proof? Ask, back in the Ask Jeeves days, pushed organic totally off the page, doing exactly what Kevin and David call for and filling the top with sponsored. Take a look at Ask now. Organic is back above the fold. Spend some time talking to Ask usability lead Michael Ferguson about how the absence of organic worked out for them.

And it’s not that sponsored links provide a bad experience. Our study proves Kevin somewhat right. Top sponsored links, for commercial queries, delivered the highest success rates. But those were in highly structured and commercially oriented scenarios. That doesn’t represent all searches. It’s not that we avoid sponsored links, but we do want a choice and we want relevance, ALIGNED TO OUR CURRENT INTENT. Google has recognized that to a much greater extent than its competitors, and it’s eating their lunch.

There’s a reason why 70% of users choose organic. We’ve done a number of studies over the past 3 years, and that number has remained fairly constant. It can’t be because those results are filled with spam. I actually just chatted with Marissa Mayer at Google, and she continually emphasized the importance of organic on the page. It’s a cardinal rule there that at least one organic result will always appear above the fold at an 800-by-600 resolution. It’s mandated by Larry and Sergey. And that’s because they know it’s important to the user. We want alternatives. And we will be the judge of relevancy. That’s why Google has stringent click-through measures on their top sponsored ads. If they don’t get clicked, they don’t show. The top of the Golden Triangle is reserved for the most relevant results, period, and in more than 50% of the cases, those are organic (either through OneBox or traditional organic).

So we in this industry can debate sponsored versus organic. We can make predictions. We can post in blogs til the cows (or frogs) come home. But it’s not our call. It’s not even the engine’s call. It’s the user’s.

Yahoo’s Quiet Guy is Moving On

First published December 21, 2006 in Mediapost’s Search Insider

Last spring, I attended the Pubcon conference in Boston put on by Webmaster World. During one of the breaks between the sessions, I was tucked away in an empty room trying to keep up with the inevitable flood of e-mails.

Well, truth be told, the room wasn’t quite empty. There was another person, also hunched over a laptop, working at the table next to me. We were both pretty absorbed and quiet. It was one of those situations where you’re wondering whether it’s better to not introduce yourself and run the risk of looking like you’re ignoring the person, or break the silence and acknowledge the other person by way of a quick nod and hi. I eventually opted for the latter, and I’m glad I did. This was the way I met Yahoo’s Tim Converse.

Tim posted on his blog earlier this week that he’s moving on from Yahoo. Knowing Tim, albeit not that well, I sat and thought about this for awhile. It brought up a number of interesting questions about our industry. I thought them worthy of comment.

Don’t Judge a Book…

Tim is a pretty quiet guy. In fact, many readers of this column probably don’t know who Tim is. He’s the head of Yahoo’s anti-spam patrol, so he’s a bit like the Matt Cutts of Yahoo. But not quite. While Tim came to Yahoo through the acquisition of Inktomi way back in 2003, he has generally let the spotlight shine on his counterpart at Yahoo, the more vocal Tim Mayer. In an industry notorious for flocking around the algorithmic cops at the major engines (see my first encounter with Mr. Cutts), Tim Converse can walk through most shows unscathed and unrecognized. In fact, it’s only very recently that I’ve seen Tim participate on a panel at a show, at this fall’s Pubcon in Vegas. Tim is not as comfortable in the public eye as his counterparts; he doesn’t have the practiced ease of the other Tim or the open charm of Matt, but it quickly becomes apparent that his brain is packed with algorithmic gold. This guy knows his stuff. And if you’ve ever had the chance to chat with Tim or read his blog, you’ll find a razor-sharp wit and some pretty deep thinking below that deceptively calm exterior.

Moving On

The post went live on Tim’s blog on Monday. The well-wishers that commented made it clear that while many may not know Tim, those that do have a great deal of respect for him. Posters included Cutts and Danny Sullivan. While Tim’s announcement didn’t elicit the same type of response that Sullivan did when he dropped his bombshell that he was moving on from the Search Engine Strategies franchise (imagine what would happen if Cutts posted that he was leaving Google), one can’t help but wonder what the impact on an already battle-sore Yahoo might be. Certainly no one is irreplaceable, but Tim was a definite asset in the relevancy war staged by the big three. He’s looking forward to seeing the showdown continue, albeit from a distance: “Who is going to have the highest-quality general web search a year from now? I think it’s still going to be a brutal battle between the current top three (including MSN), and the winner will be whoever can innovate and execute the fastest. I’m sorry I’m going to watch that particular game from the sidelines, because it’s definitely not even halftime yet.”

Hot Property

My other thought that came from Tim’s departure is more a precursor to what will inevitably happen at all the major engines. The people who serve on the algorithmic side of the engines, like Tim, are privy to an extraordinary amount of proprietary information. Now, with the introduction of quality scoring on the sponsored side, the same is true for those teams as well. At some point, all that knowledge is apt to walk out the front door and never return. I’m sure Yahoo’s corporate legal department has a sheaf of nondisclosure and noncompete agreements, but those have always been relatively hard to enforce when put to the test.

This concern over trade secrets is certainly not unique to search, but with the importance of search for millions of marketers, it does put a universally recognized premium on the value of that knowledge. It’s inevitable that others from all three engines will follow Tim’s lead and move along. When they do, they will suddenly find themselves hot properties–even if they tend to be quiet guys, a little on the shy side.

The Elusive Click Fraud Issue: Google’s Side of the Story

First published December 14, 2006 in Mediapost’s Search Insider

There are few issues in search marketing more thorny and convoluted than click fraud. It’s the elusive problem, the industry scourge that seems to defy definition. Everyone wants to know the extent of click fraud, but to date, there seems to be no credible numbers to attach to the problem. A recent BusinessWeek “investigation” called it the “dark underground” of the Internet, “a dizzying collection of scams and deceptions that inflate advertising bills for thousands of companies of all sizes.” The article pegged the occurrence of click fraud at “10% to 15% of ad clicks… representing roughly $1 billion in annual billings.” Unfortunately, the reporter used some questionable sources and math to come up with this number.

Even experienced search marketers can sometimes jump to wrong conclusions. Noted search marketer Andy Beal thought he had a scoop earlier this week when he did a little rough calculation on a presentation made by Google click fraud point person Shuman Ghosemajumder and pegged the actual occurrence of click fraud at 2% on Google. There was actually a little miscommunication between Beal and Ghosemajumder (since corrected on Andy’s blog). I chatted with Ghosemajumder this week and here’s Google’s side of the story, largely ignored by the mainstream press.

Where Do These Numbers Come From?

BusinessWeek‘s article said “most academics and consultants who study online advertising” agree with the 10% to 15% number. Yet there has been no independent study done with reliable methodology to accurately scope the size of the issue. The study most often cited is a particularly damning one done by Outsell in May of 2006. In the study, 407 companies were asked what percentage of their search buy they believed to be fraudulent. They then averaged the responses and extrapolated it across the industry. Many of these advertisers weren’t even tracking ROI, definitely a prerequisite for accurate identification of actual fraudulent behavior. As Ghosemajumder pointed out, “it’s like asking a random group of people what they estimate the average salary in the U.S. to be, when they have no numbers to judge it on, and they don’t even know what their own salary is.” Yet, this is the number that seems to be accepted as fact by reporters determined to blow the issue into cover story status.

What’s Fraud, and What’s Attempted Fraud?

One fact that seems to be easily overlooked is what actually qualifies as click fraud. Fraud is only perpetrated when damage is done–in this case, if money changes hands. If no money changes hands, it’s attempted fraud. Yet this simple distinction seems to be overlooked by many “investigators” into the question of click fraud. Everything tends to be included in the same bucket, usually accompanied by a whopping percentage designed to scare the hell out of online advertisers.

The 2% number quoted on marketingpilgrim.com came from Andy Beal, not from Google. It was computed by looking at the relative size of some graphics on a slide deck that was prepared to show Google’s click fraud filtering systems.

Google has coined the term “invalid clicks” to refer to all those that advertisers are not charged for. This category also includes more benign examples, such as multiple clicks on the same ad that can happen when a visitor “pogo sticks,” or clicks on an ad, hits the back button, and then clicks through on the ad again. Ghosemajumder does confirm that the number of invalid clicks represents a “single digit” percentage of all clicks across the network.

The “vast majority” of these clicks are proactively filtered out by Google in real time before any money changes hands, he says. It’s as if the clicks didn’t happen. The advertisers don’t pay, and the publisher where the click originated doesn’t get paid. The invalid clicks that slip through the real-time filter then go for offline analysis, primarily focused on the AdSense network. Advertisers here are affected, but get refunds from Google without having to take any action. In this case, Google does have a procedure for going back to the sites where the clicks originated. If anyone is out of pocket for these clicks, it’s Google, not the advertiser.

Now we get to the 2% number. It refers to the clicks that make it through the proactive filters that the advertiser has to bring to Google’s attention. The official word from Google is that this number is a “negligible percentage” of the total number of invalid clicks. My sense is that it’s probably much less than 2%. Remember, this isn’t a negligible percentage of all clicks, but a negligible percentage of “invalid” clicks, which in turn is less than 10% of all the clicks happening on Google.

The Impact in Dollars and Cents

So, let’s talk about actual fraud, where the advertiser is the one out of pocket. Let’s assume there is an advertiser with a $100,000-per-month budget. Let’s further assume that the clicks this advertiser receives are representative of the total Google network.

Using the assumed 9% number as the invalid click rate, about $9,000 of the budget falls into this category. From this, the “vast majority” are filtered out in real time, so there is no impact to the advertiser. A smaller percentage is refunded to the client without its having to take any action. Finally, there’s the percentage that slips through the proactive filters. Even if we go with 2%, that would make the amount that actually impacts the advertiser $180. If you’re doing your math, that’s 0.18% of the total monthly spend, a far cry from 10% to 15%.
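The back-of-envelope math above can be sketched in a few lines of code. Note that the 9% invalid-click rate and the 2% slip-through rate are the column’s illustrative assumptions, not official Google figures:

```python
def fraud_impact(monthly_spend, invalid_rate=0.09, slip_through_rate=0.02):
    """Estimate the dollars an advertiser is actually out of pocket.

    invalid_rate: assumed share of all clicks that are "invalid"
    slip_through_rate: assumed share of invalid clicks that get past
    the proactive filters and must be challenged by the advertiser
    """
    invalid_dollars = monthly_spend * invalid_rate        # e.g., $9,000 on $100k
    out_of_pocket = invalid_dollars * slip_through_rate   # what actually hits the advertiser
    return out_of_pocket, out_of_pocket / monthly_spend

impact, share = fraud_impact(100_000)
print(f"${impact:,.0f} out of pocket, {share:.2%} of monthly spend")
# → $180 out of pocket, 0.18% of monthly spend
```

Plug in your own invalid-click and slip-through assumptions and the conclusion holds across a wide range: the advertiser-borne impact is two orders of magnitude below the 10% to 15% headline number.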

But It’s Not that Simple

These are the estimates from Google, which has invested heavily in fighting click fraud. The same diligence in policing click fraud is probably not present in all advertising networks. Click fraud is definitely more prevalent in some sectors and on some networks than others. Finally, everyone acknowledges that we don’t know what we don’t know. If click fraud goes undetected through Google’s filters and the advertiser never challenges it, it won’t be identified. Google uses the ROI and conversion data that some of its advertisers share with it as an overall indicator of click fraud activity throughout its network. Its executives feel confident that there’s very little slipping through all of these cracks.

Yes, this is Google’s side of the story, but as the mainstream press seems to be more interested in focusing on a couple of egregious cases rather than providing a realistic picture of the issue across the entire network, I think it’s important to pass it along. In the absence of real numbers for the short term, shouldn’t you at least balance the numbers being touted by the press with those coming from the people fighting click fraud on a daily basis?