The ZMOT Continued: More from Jim Lecinski

First published July 28, 2011 in Mediapost’s Search Insider

Last week, I started my conversation with Jim Lecinski, author of the new ebook from Google: “ZMOT, Winning the Zero Moment of Truth.” Yesterday, fellow Search Insider Aaron Goldman gave us his take on ZMOT. Today, I’ll wrap up by exploring with Jim the challenge that the ZMOT presents to organizations and some of the tips for success he covers in the book.

First of all, if we’re talking about what happens between stimulus and transaction, search has to play a big part in the activities of the consumer. Lecinski agreed, but was quick to point out that the online ZMOT extends well beyond search.

Jim Lecinski: Yes, Google or a search engine is a good place to look. But sometimes it’s a video, because I want to see [something] in use…Then [there’s] your social network. I might say, “Saw an ad for Bobby Flay’s new restaurant in Las Vegas. Anybody tried it?” That’s in between seeing the stimulus, but before… making a reservation or walking in the door.

We see consumers using… a broad set of things. In fact, 10.7 sources on average are what people are using to make these decisions between stimulus and shelf.

A few columns back, I shared the pinball model of marketing, where marketers have to be aware of the multiple touchpoints a buyer can pass through, potentially heading off in a new and unexpected direction at each point. This muddies the marketing waters to a significant degree, but it really lies at the heart of the ZMOT concept:

Lecinski: It is not intended to say, “Here’s how you can take control,” but you need to know what those touch points are. We quote the great marketer Woody Allen: “Eighty percent of success in life is just showing up.”

So if you’re in the makeup business, people are still seeing your ads in Cosmo and Modern Bride and Elle magazine, and they know where to buy your makeup. But if Makeupalley is now that place between stimulus and shelf where people are researching, learning, reading, reviewing, making decisions about your $5 makeup, you need to show up there.

Herein lies an inherent challenge for the organization looking to win the ZMOT: whose job is that? Our corporate org chart reflects marketplace realities that are at least a generation out of date. The ZMOT is virgin territory, which typically means it lies outside of one person’s job description. Even more challenging, it typically cuts across several departments.

Lecinski: We offer seven recommendations in the book, and the first one is “Who’s in charge?” If you and I were to go ask our marketer clients, “Okay, stimulus — the ad campaigns. Who’s in charge of that? Give me a name,” they could do that, right? “Here’s our VP of National Advertising.”

Shelf — if I say, “Who’s in charge of winning at the shelf?” “Oh. Well, that’s our VP of Sales” or “Shopper Marketing.” And if I say, “Product delivery,” – “well that’s our VP of Product Development” or “R&D” or whatever. So there’s someone in charge of those classic three moments. Obviously the brand manager’s job is to coordinate those. But when I say, “Who’s in charge of winning the ZMOT?” Well, usually I get blank stares back.

If you’re intent on winning the ZMOT, the first thing you have to do is make it somebody’s job. But you can’t stop there. Here are Jim’s other suggestions:

The second thing is, you need to identify what are those zero moments of truth in your category… Start to catalogue what those are and then you can start to say, “Alright. This is a place where we need to start to show up.”

The next is to ask, “Do we show up and answer the questions that people are asking?”

Then we talk about being fast and being alert, because up to now, stimulus has been characterized as an ad you control. But sometimes it’s not. Sometimes it’s a study that’s released by an interest group. Sometimes it’s a product recall that you don’t control. Sometimes it’s a competitor’s move. Sometimes it’s Colbert on his show poking a little fun at Miracle Whip from Kraft. That wasn’t in your annual plan, but now there’s a ZMOT because, guess what happens — everybody types in “Colbert Miracle Whip video.” Are you there, and what do people see? Because that’s how they’re going to start making up their mind before they get to Shoppers Drug Mart to pick up their Miracle Whip.

Winning the ZMOT is not a cakewalk. But it lies at the crux of the new marketing reality. We’ve begun to incorporate the ZMOT into the analysis we do for clients. If you don’t, you’re leaving a huge gap between the stimulus and shelf — and literally anything could happen in that gap.

Marketing in the ZMOT: An Interview with Jim Lecinski

First published July 21, 2011 in Mediapost’s Search Insider

A few columns back, I mentioned the new book from Google, “ZMOT, Winning the Zero Moment of Truth.” But, in true Google fashion, it isn’t really a book, at least, not in the traditional sense. It’s all digital, it’s free, and there’s even a multimedia app (a Vook) for the iPad.

Regardless of the “book’s” format, I recently caught up with its author, Jim Lecinski, and we had a chance to chat about the ZMOT concept. Jim started by explaining what the ZMOT is: “The traditional model of marketing is stimulus – you put out a great ad campaign to make people aware of your product. Then you win the FMOT (the First Moment of Truth, a label coined by Procter and Gamble) — the moment of truth, the purchase point, the shelf. Then the target takes home the product and hopefully it will live up to its promises. It makes whites whiter, brights brighter, the package actually gets there by 10:30 the next morning.

What we came out with here in the book is this notion that there’s actually a fourth node in the model, of equal importance. We gave the umbrella name to that new fourth moment that happens in between stimulus and shelf: if it’s prior to FMOT (first minus one is zero), it’s the ‘Zero Moment of Truth.'”

Google didn’t invent the ZMOT, just as Procter & Gamble didn’t invent the FMOT. These are just labels applied to consumer behaviours. But Google, and online in general, have had a profound effect on a consumer’s ability to interact in the Zero Moment of Truth.

Lecinski: “There were always elements of a zero moment of truth. It could happen via word of mouth. And in certain categories, of course — washing machines, automotive, certain consumer electronics — the zero moment of truth was won or lost in print publications like Consumer Reports, the Zagat restaurant guide or the Mobil Travel Guide.

But those things had obvious limitations. One: there was friction — you had to actually get in the car and go to the library. The second is timeliness — the last time they reviewed washing machines might have been nine months ago. And then the third is accuracy: ‘Well, the model that they reviewed nine months ago isn’t exactly the one I saw on the commercial last night that’s on sale this holiday weekend at Sears.'”

The friction, the timeliness and the simple lack of information all led to an imbalance in the marketplace that was identified by economist George Akerlof in 1970 as information asymmetry. In most cases, the seller knew more about the product than the buyer. But the Web has driven out this imbalance in many product categories.

Lecinski: “The means are available to everybody to remove that sort of information asymmetry and move us into a post-Akerlof world of information symmetry. I was on the ad agency side for a long time, and we made the TV commercial assuming information asymmetry. We would say, ‘Ask your dealer to explain more about X, Y, and Z.’

Well, now that kind of a call to action in a TV commercial sounds almost silly, because you go into the dealer and there’s people with all the printouts and their smartphones and everything… So in many ways we are in a post-Akerlof world. Even his classic example of lemons for cars, well, I can be standing on the lot and pull up the CARFAX history report off my iPhone right there in the car lot.”

Lecinski also believes that our current cash flow issues drive more intense consumer research. “Forty-seven percent of U.S. households say that they cannot come up with $2,000 in a 30-day period without having to sell some possessions,” he says. “This is how paycheck to paycheck life is.”

When money is tight, we’re more careful with how we part with it. That means we spend more time in the ZMOT.

Next week, I’ll continue my conversation with Jim, touching on what the online ZMOT landscape looks like, the challenge ZMOT presents marketers and the seven suggestions Jim offers about how to win the Zero Moment of Truth.

Interview with Stefan Weitz posted at SNL

Apologies for my brief hiatus from blogging last week. I was in Santa Cruz for an extended weekend with my wife, which was fabulous…thanks for asking. Also got a chance to catch Wicked in SF. It was a great way to kick off the weekend.

In between Defying Gravity and bird watching on the California coast, I did get a chance to post Part One of an interview with Microsoft’s Stefan Weitz on Search Engine Land. It was the kickoff of a series I’m doing on where search goes from here. Stefan and I talked mainly about Microsoft’s “Decision Engine” strategy and what Microsoft currently thinks is “broken” about search. An interview with Stefan can’t help but be interesting, so I encourage you to check it out over at Just Behave.

In the meanwhile, I’m still hopping across the country, but am hoping to get a few new posts done on the Psychology of Entertainment in between plane rides and racking up Hilton HHonors points. Why do I feel a compelling kinship to George Clooney’s character in Up in the Air?

Search Insider Sneak Peek: The Three-for-One Keynote

First published November 19, 2009 in Mediapost’s Search Insider

Avinash Kaushik, Google’s Analytics Evangelist, will be kicking off the Search Insider Summit in just two weeks. I had the opportunity to chat with Avinash last week about what might be in store. As anyone who has heard him before would agree, it won’t be sugar-coated, it will be colorful and it will probably wrench your perspective on things you took for granted at least 180 degrees. Here are the three basic themes he’ll be covering:

The Gold in the Long Tail

Avinash believes there is unmined search gold lying in the long tail of many campaigns. The secret is how to find it in an effective manner. I’ve talked before about how long-tail strategies must factor in the cost of administering the campaign, which can be a challenge as you expand into large numbers of low-traffic phrases. Chris Anderson’s Long Tail theory assumes frictionless markets where there are no or very low “inventory management” costs, such as digital music (iTunes) or print-on-demand bookstores (Amazon). In theory, this should apply to search but, in practice, effective management of search campaigns requires significant investments of time. You have to create copy, manage bid caps and, optimally, tweak landing pages, all of which quickly erode the ROI of long-tail phrases, so I’ll be very interested to see how Avinash recommends getting around this challenge. I’m sure if anyone can find the efficiencies of long-tail management, Avinash Kaushik can.
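To see the erosion concretely, here is a toy sketch of how a fixed per-phrase management cost (copywriting, bid management, landing-page tweaks) can turn a paper-profitable long-tail phrase into a net loss. All the numbers are invented for illustration:

```python
# Toy illustration: per-phrase revenue in the long tail can be positive
# on paper, yet negative once each phrase is charged its share of the
# time spent managing it. All figures below are invented.

MGMT_COST_PER_PHRASE = 4.00  # assumed monthly management cost per phrase

phrases = [
    # (phrase, monthly ad spend, monthly revenue)
    ("head term",   500.00, 900.00),
    ("long tail A",   3.00,   6.00),
    ("long tail B",   2.00,   5.00),
]

for name, spend, revenue in phrases:
    gross = revenue - spend                  # profit before management cost
    net = gross - MGMT_COST_PER_PHRASE       # profit after management cost
    print(f"{name}: gross={gross:+.2f}, net={net:+.2f}")
```

With these made-up numbers, both long-tail phrases are profitable before management cost and unprofitable after it, which is exactly the friction the frictionless-market assumption ignores.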

Attribution Redefined

For the past three Search Insider Summits, attribution has been high on the list of discussion topics. Avinash thinks much of the thinking around attribution is askew (his term was not nearly as polite). All search marketers are struggling with attribution models for clients with longer sales cycles; often these models are little more than a marginally educated guess.  I believe simply crunching numbers cannot solve the convoluted challenge of attribution. The solution lies in a combination of qualitative and quantitative approaches. This, by the way, is the topic for another panel later in the day, “Balancing Hard Data & Real People.”  Avinash, despite his reputation as the analytics expert, always drops the numbers into a context that keeps human behavior firmly in focus.

Search Data Insights

The third topic that Avinash will be covering is how to take the massive set of consumer intent signals that lie within the search data and leverage it to not only improve your search strategies, but every aspect of your business. We chatted briefly on the phone about how unfortunate it is that search teams are often separated from much of the day-to-day running of a company. Typically, search marketers and their vast resources of campaign and competitive intelligence are not even connected to the other marketing teams. Avinash will show how the “database of intentions” can be effectively mined to provide unprecedented insight into the hearts, minds and needs of your market.

Any one of these topics is worthy of a keynote slot, but at the Search Insider Summit, you’ll be getting all three! See you there in just two weeks!

Talking Search with Dr. Jim Jansen at Penn State

This is the full transcript of an interview with Professor Jim Jansen at Penn State University. Excerpts from the interview are running in two parts (Part One ran a few weeks ago) on Search Engine Land. I wrote a column that provided a little background on Dr. Jansen on Search Engine Land.

Jim, we’ll start by laying out some of the research you’ve been doing over the past year and a half and then we’ll dig deeper into each of those as we find interesting things. Just give me the quick 30- or 60-second summary of what you’ve been working on in the last little while.

I have several research projects going on. One that I really find interesting is analyzing a five calendar year search engine marketing campaign from a major online retailer and brick-and-mortar retailer. It’s about 7 million interactions over that time, multi-million dollar accounts and sales and stuff. A fascinating temporal analysis of a search engine marketing effort.
I’ve been looking at that at several different levels – the buying funnel being one, aspect of branding being another, and then the aspect of some type of personalization, specifically along gender issues. And so that’s been very, very exciting and interesting and (has offered) some great insights.

I’m familiar with the buying funnel one because you were kind enough to share that with me and ask for my feedback, so let’s start there. I know you went in to prove out some assumptions, for example, is there a correlation between the nature of the query and where people would be in the buying funnel? Are there identifiable search behaviours that map to where they might be in their purchase process? What did you find?

I looked at it at several different levels. One goal was to verify whether the buying funnel was really a workable model for online e-commerce searching or was it just a paradigm for advertisers to, you know, get their handle around this chaos. And if it’s an effective model, what can it tell us in terms of how advertisers should respond?
In terms of the first question, we had mixed results. At the individual query level you can classify individual queries into different levels of this buying funnel model. There are unique characteristics that correspond very nicely to each of those levels. So in that respect, I think the model is valid.

Where it may not be valid is specifying this process that online consumers go through. We found that, no, it didn’t happen quite like we assume.  There was a lot of drop-out and they would do a very broad query and that might be all.
So we looked at the academic literature – you know, what theoretically could deal with that or explain that? – and the idea of satisficing seemed to fit. If it’s a low-cost item, they won’t spend a lot of time; they will just go ahead and buy it.
In terms of classifying queries in terms of what advertisers’ payoff is, I think the most interesting finding was that the purchase queries – the last stage of the buying funnel – were the most expensive and had no higher payoff than the awareness queries, the very broad, relatively cheaper ones. From talking to practitioners, that is a phenomenon that they have noted also … which is why a lot of people still bid on very broad terms, to snatch these potential customers at an early stage.
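The stage-by-stage cost/payoff comparison Jansen describes can be sketched as a simple tally over a click log. The stage labels, costs, and revenue figures below are invented for illustration; they are not from Jansen’s dataset:

```python
# Hypothetical sketch: compare cost and payoff by buying-funnel stage.
# All records below are invented; the point is the shape of the analysis,
# not the numbers.

from collections import defaultdict

# Each record: (funnel_stage, cost of the click, revenue attributed to it)
click_log = [
    ("awareness", 0.40, 12.00),
    ("awareness", 0.35,  0.00),
    ("research",  0.90,  8.00),
    ("purchase",  2.10, 11.00),
    ("purchase",  1.95,  0.00),
]

totals = defaultdict(lambda: [0.0, 0.0])  # stage -> [total cost, total revenue]
for stage, cost, revenue in click_log:
    totals[stage][0] += cost
    totals[stage][1] += revenue

for stage, (cost, revenue) in totals.items():
    roas = revenue / cost  # return on ad spend for this stage
    print(f"{stage}: cost={cost:.2f}, revenue={revenue:.2f}, ROAS={roas:.1f}")
```

In this made-up log, the cheap awareness clicks end up with a higher return per dollar than the expensive purchase clicks, mirroring the counterintuitive finding in the study.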

Based on what you’ve seen, there are a couple of really interesting things. You and I have talked a little bit about this, but we similarly have found that you can’t assume a search funnel is happening because people use search at different stages and they’ll come in and then they’ll drop out of the process, and they may come in later or they may not, they may pursue other channels. But the other thing we found is sometimes there’s a remarkable consistency in the query used all the way through the process and that quite often can be navigational behaviour. It can be people who say, “Okay, the last time I did this, I searched on Google for so-and-so and I remember the site I found was the third or fourth level down,” and they just use the same route to navigate the online space over and over again. If you’re looking at it from a pure query level, it’s a bit of a head-scratcher because you’re going, “Well, why did they use the same query over and over?” but again, it’s one of those nuances of online behaviour. Did that seem to be one of the possible factors of some of the anomalies in the data?

Well, that trend or something similar to it has been appearing in a lot of different domains and researchers are attributing it to “When I do a query, I expect a certain result.” So, you know, a query that may be very informational, what we’re finding is that searchers expect a Wikipedia entry. So in other words, a very navigational intent behind that very informational query. And I think the phenomenon you’re describing is very similar. We have a transactional-type query and users are expecting a certain web page, a navigational aspect, and that “Okay, I have an anchor point here that I’m going to go to.” And then off search engine, maybe they do more searching and actually do some type of buying funnel process. But at the search engine, yes, we’re seeing a lot of that navigational aspect. I just looked at a query log from a major search engine and an unbelievable amount of queries were just navigational in nature.

We’ve certainly seen that. A lot of our recent research has been in the B2B space, so it’s a little bit different but certainly it follows those same lines. When we looked at queries that people would use, a large percentage of them were either very specific or navigational in nature.

You know, the idea of satisficing, of taking a heuristic shortcut with their level of research is also interesting. It seems like if the risk is fairly low, the online paths are shorter. Is that what you were finding?

Yes, and the principle of least effort is how it’s also presented. We see it in web searching itself generally in how people interact with search engines and how they interact with sites on the web. They may not get an optimal solution, but if it’s something that’s reasonable and if it’s good enough, they’ll go for it. That seems to be occurring in the e-commerce area also: “I want to buy something relatively cheap. This particular vendor may not have the best price, but it’s close to what I’m thinking it should be. Just go and get it done, get it over with, buy it.”

I would suspect that that would also be true in product categories where you have mentally a good idea of what an acceptable price range would be, right?


So if it’s a question of making a trade-off for $2 but saving yourself a half hour of time, as long as you’re aware of what those price ranges would be, you’re more apt to make that shortcut call, correct?

Yes. It does assume some knowledge and risk mitigation – if it’s a small purchase, and that varies a little bit for each of us, you’re willing to cut your costs of searching and trying to find the best deal just to get it done.

I suspect part of this would also  be your level of personal engagement with the product category you’re shopping in. So I’ll spend way too much time researching a purchase of a new gadget or something that I’m interested in just because I have that level of engagement. But if it’s a purchase that’s on my to-do list, if it’s just one task I have to get done and then move on to the next thing, I suspect that that’s where that satisficing behaviour would be more common.

Yes. Now you bring up a really good point. If it becomes entertainment – like a gadget that you enjoy researching – it’s no longer work, it’s no longer something you get done. The process of doing it makes it enjoyable so you don’t mind spending a lot of time. In those kind of cases, the goal really is not the purchase, the goal is the looking.

We found that alters the behaviour on the search page as well. So if it’s a task-type purchase where I just have to go and get there, you see that satisficing play out on the search page too. Typically when we look at engagement with the search page, you see people scan the top three or four listings. If it’s that satisficing type of intent where they’re saying, “I just want to buy this thing,” you’ll see people scan those first three or four and pick what they feel is the path of least effort. They go down and say, “Okay. It’s a book. Amazon’s there. I know Amazon’s price. I’m just going to click through and order this,” but if it’s entertainment, then suddenly they start treating the search page more like a catalogue where they’re paying more attention to the brands and they’re using that as a navigational hub to branch off to three or four different sites. Again, it can really impact the nature of engagement with the web… or with the search page.

Absolutely, and I really like your analogy of a catalogue. You know, there are some people that love just looking at a catalog – flipping through it, looking at the dresses and shirts or gadgets or sporting gear or whatever. And so that’s a much different engagement than flipping through the classified ads trying to find some practical thing you need. The whole level of engagement is at totally opposite ends of the spectrum, really.

As an extreme example of that, we did some eye-tracking with Chinese search engines and we found that with Baidu in particular, people were using it to look for MP3 files to download. So when we first saw the heat maps – and of course it was all in Chinese, so I couldn’t understand what the content on the page was without having it translated – I saw these heat maps going way deeper and much longer than we ever saw in typical North American behaviour. We saw a level of engagement unlike anything we had ever seen before. And that was exactly it. It was a free task – they were looking for MP3 files to download and they were treating the search page like a catalogue of MP3 files. They were reading everything on the page. I think that’s just one extreme example of this catalogue browsing behaviour that we were talking about.

Let’s go to one of the other findings on the buying funnel: that quite often the more general, broader categories can, from an ROI perspective, perform just as well as what traditional wisdom tells us are your higher-return terms – those closer to the end of the funnel, the ones that are more specific, longer, more transactionally oriented. What’s behind that?

Like a lot of these questions there’s no simple answer because there are plenty of exceptions to the rule you just described. There are some very broad terms that are very cheap, others that are very expensive. On the purchase side, there are some key phrases that are very cheap because they’re so focussed and others are expensive. But in this particular analysis – and again, this was 7 million transactions over 33 months, from mid-2005 to mid-2008 – the awareness terms were cheaper than the purchase terms and they generated just as much revenue.

I think a lot of it is that perhaps the items this particular retailer was selling fell into that satisficing behaviour: gifts, fairly low-cost items – there was just no need to progress all the way to that particular purchase phase.

To me it was really very unexpected. I really expected those purchase terms to actually be cheaper because they were more narrowly focussed and to generate more revenue, but it didn’t turn out that way.

That brings up an interesting point we’ve seen with client behavior, especially given the current economic conditions. We found that a lot of clients are tending to optimize down the funnel – they look at the keyword lists they’re bidding on and move further and further down to more and more specific phrases, because the theory is – and generally they do have analytics to back this up – that there’s greater ROI there, because these are usually people searching for a specific model or something, which is a pretty good indicator that they’re close to purchase. But I think one of the by-products is that as people optimize their campaigns, those long-tail phrases are getting more and more expensive because there’s more and more competition around them, and as people move some of their keyword baskets away from those awareness terms, maybe the prices on those, it all being based on an auction model, are starting to drop. Do you think that could be one of the factors happening here?

That very well could be. The whole online auction is designed around (the concept that) as competition increases, cost-per-clicks will increase also. It also may be that those particular customers don’t mind clicking on a few links to do some comparison-shopping and may end up going somewhere else. They may have a higher aspect of intent to purchase, but the competition among where they’re going to buy is more intense.
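The mechanism Jansen points to – more competition driving cost-per-click up – is easy to see in a toy second-price auction, the general shape of the model keyword auctions are built on. The bids below are invented:

```python
# Toy second-price keyword auction: the winner pays just above the
# second-highest bid, so new competitors entering the auction raise
# the winner's cost-per-click. All bids are invented.

def second_price_cpc(bids):
    """Return the winner's CPC: a penny above the second-highest bid."""
    ranked = sorted(bids, reverse=True)
    return ranked[1] + 0.01 if len(ranked) > 1 else ranked[0]

few_bidders  = [1.50, 0.60]
more_bidders = [1.50, 0.60, 1.20, 1.40]  # two new competitors enter

print(second_price_cpc(few_bidders))   # winner pays just above 0.60
print(second_price_cpc(more_bidders))  # winner now pays just above 1.40
```

The winning bid never changed, yet the winner’s cost more than doubled simply because the runner-up bid rose; the reverse happens when bidders leave an awareness term, which is the price drop speculated about above.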

You know, compare that to this satisficing shopper: you just have to get that person’s attention first with a reasonably priced product and you will make the sale. That is the one issue with analytics in terms of transaction log analysis – we can analyze behaviours and we can make some conjectures about what happened, but you need lab studies and surveys to pan all that data for the “why” part.

That’s a great comment and obviously something that people have heard from me over and over again, because we do tend to focus more on the quantitative approach. I think this goes back to what we were talking about originally –online information gathering is a natural extension of where we are in our actual lives so it’s not like a distinct, contained activity. It’s not like we set aside an hour each day to go through all our online research. More and more, we always have an outlet to the internet close by and as we’re talking or as we’re thinking about something, it’s a natural reaction just to go and use a search engine to find out more information. And I think because it’s such a natural extension of what’s happening in our day-to-day lives, that the idea of this one linear progression through an online research session isn’t the way people act. I think it’s just an extension of whatever’s happening in our real world. So we may do a search, we may find something, it may be an awareness search, and then we may pursue other paths to the eventual purchase. It’s not like we keep going back and forth between a search engine with this nicely refined search funnel. It’s not that neat and simple, just like our lives aren’t that neat and simple.

Yes, all models strip out the noise that reflects reality. So the neater they are, the less accurate they are, and the buying funnel is obviously very neat, so I think it’s reasonable that it represents a very small number of searches that actually progress exactly like that. We’re very nonlinear in the things we do and so I assume our purchase behaviors are too.

I want to move on to the question of branding a little bit, because you mentioned that that was one of the areas you were looking at. And at Enquiro, we’ve done our own lab-based studies on branding, so I’d be fascinated to hear what came out as far as the impact of branded search.

This year, I’ve really got into this whole idea of branding in terms of information seeking. That’s really my background, web searching and how people find and assemble information. One of my first studies was to look at what a search engine brand would do to how searchers interpreted the results. So I ran a little experiment where I switched the labels among Google, Yahoo, and MSN while the results stayed the same. Certainly the search engine brand has a major lift to it.
In this particular study using the search engine marketing data, we did multiple comparisons of brand or product name and the keyword in the title, in the snippet, in the URL to see if there was a correlation with higher sales. And without a doubt the correlation between a query with a brand term and an advertisement with a brand term is extremely, extremely positive. That particular tightness seems to resonate with online consumers.

So just to repeat, so if somebody’s using a branded query and they see that brand appear in the advertising, there’s obviously a statistical correlation between the success of that, right?

Yes. In that particular case, one, that the click will happen, and two, that the click will result in a sale – yes, very positive. It really relates to the whole idea of dynamic keyword insertion in advertisements…

So to follow that thread a little bit further, obviously if people have a brand in mind and they see that brand appear, then that’s an immediate reinforcement of relevancy. But what happens if the query is generic in nature – it’s for a product category – but a brand appears that people recognize as a trusted brand within that category? Did you do any analysis on that side of things?

Not specifically. No, I did not. That’s a real good question though, but no, I did not do that type of correlation.

The last thing I want to ask you about today, Jim, is this idea of personalization by gender. I believe from our initial discussions that you’re just in the process of looking at the data from this portion. Is that right?

Well, we finished the analysis. Now we’re just writing it up.

So is there anything that you can share with us at this point?

Again, the results to me were counterintuitive from what I expected. Usually, the idea of personalization is that the more personalized you get, the higher the payoff, the efficiency and effectiveness is. We took queries from this particular search engine marketing campaign and classified them based on gender probability using Microsoft’s demographic tool, which will classify a query by its probability of being male or female. We looked at it this way: not whether the searcher was male or female, but did the particular query fit a gender stereotype – did it have, for example, a kind of a male feel to it, or stereotypical implications.

So more women would search for “Oprah,” and more men would search for “NASCAR”?


What did you find?

In terms of sales, far and away the most profitable were the set of queries that were totally gender-neutral. We took the queries and divided them into seven categories: “very strongly male,” “generally male,” “slightly male,” “gender neutral,” “slightly female,” “strongly female,” “very female.” By two orders of magnitude, the most profitable were the ones that were totally gender-neutral.


Yes, as a researcher who does personalization research, my guess would be “Ah, the more targeted they are, the more profitable.” But no, the means were two orders of magnitude different.

So give us an example of a gender-neutral query.

We defined gender-neutral queries to be those that the Microsoft tool classified somewhere between exactly gender-neutral – which is zero – and about 59% either side. So we had a fairly big spread here. And there was a trend that was somewhat expected – the queries that were more female-targeted generated higher sales than the corresponding male counterparts.
So here’s some examples of queries based off the Microsoft tool:  “Electronic chess,” 100%. You know, the Microsoft tool classified that 100% male. For a gender-neutral query, I’ll just randomly pick up a couple here: “Atomic desk clock.” “Water purifier.”
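The seven-bucket classification described here can be sketched as a simple binning function. The cut points below are illustrative assumptions – the interview gives only the wide neutral band (“up to like 59% either side”), not the study’s actual thresholds:

```python
def gender_bucket(score: float) -> str:
    """Map a signed gender-probability score to one of the seven buckets
    from the study. Assumption: -1.0 means entirely female, +1.0 entirely
    male, and 0.0 is exactly gender-neutral. Cut points are illustrative."""
    if score <= -0.8:
        return "very female"
    if score <= -0.5:
        return "strongly female"
    if score <= -0.2:
        return "slightly female"
    if score < 0.2:
        return "gender neutral"
    if score < 0.5:
        return "slightly male"
    if score < 0.8:
        return "generally male"
    return "very strongly male"
```

With buckets like these, each query in a campaign could be labeled and then profitability compared per bucket, which is the shape of the analysis being described.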

I know you’re just writing this up now, but any ideas as to why that might be?

One thing that is coming out in the personalization research is that at a certain level, we have totally unique differences. You can personalize to a general category and to a certain level, but beyond that, it’s either not doing much good or may actually get in the way. And that may be something that is happening here – that these particular, very targeted gender keyword phrases are just not attracting the audience that the more gender-neutral queries and keywords are.

Again, it’s a “why” thing. We spend a lot of time in web search trying to personalize to the individual level and really haven’t gotten very far. But now people are trying to do things like personalize to the task rather than to the individual person, and there are some interesting things happening there. Spell checks and query reformulations and things like that are very task-oriented rather than individual-searcher-oriented.

I remember Marissa Mayer from Google saying that when Google was looking at personalization, they found by far the best signal was the query string – what immediately preceded the search, or a series of search iterations. They found that a much better signal to follow than trying to do any person-level personalization, which is what you’re saying. If you can look at the context of the task someone is engaged in and get some idea of what they’re trying to accomplish, that’s probably a better application of personalization than trying to get to know me as an individual and anticipate what I might query for any given objective.

Yes, it’s just so hard to do. You know, Gord is different than Jim, and Gord today is different than Gord was five years ago. Personalizing at the individual level is just very difficult, and may not even be a fruitful area to pursue.

I remember when Google first came out talking about personalization, there was this flurry around personalization in search. That was probably two, two and a half years ago, and it really seems to have died down; you just don’t hear about it as much. At the time, I remember saying that personalization is a great thing to think of in ideal terms – it certainly would make the search experience better if you could get it right, or even halfway right – but the challenge is doing just that. It’s a tough problem to tackle.

Yes, and as you mentioned earlier, we’re nonlinear creatures, we’re changing all the time. I can’t even keep up with all my changes and I can’t imagine some technology trying to do it. It just seems an unbelievably challenging, hard task to do.

I think the other thing is – and certainly in my writings and readings this becomes clearer and clearer – that we don’t even know what we’re doing most of the time. We think we have one intent but there’s much that’s hidden below the rational surface that’s actually driving us. And for an algorithm to try to read something that we can’t even read ourselves is a task of large magnitude to take on.

That’s a really good way of looking at it. I’ve commented on that before in terms of recommending a movie or book to me. I don’t even know what books and movies I like until I see them. Sometimes I pick up a book and say, “Oh, I’m going to really love this,” only to get a chapter into it and realize, “Okay, this is horrible.” And I think you see that in the Netflix challenge. So many organizations have laboured on it for years now, and finally it looks like perhaps this year someone may win by combining 30 different approaches simultaneously on the very simple problem of “Recommend a movie.” It’s just amazing the computational variations that are going on.

Amazon has obviously been trying to do this. They were one of the first to look at collaborative filtering and personalization engines, and they probably do it about as well as anyone. But even then, when I log on to Amazon – it’s not that they’re far off base in their recommendations, but given what I buy there, it’s like they’re dealing with this weird fragmented personality. One time I’m ordering a psychology textbook because it has to do with research I’m doing; the next I’m turning around and ordering a DVD box set of The Office – or even worse, the British version of The Office, which really throws it for a loop.

Jim: [laughs]

Then I’m ordering a book for my daughter like Twilight.  Amazon is going, “I don’t know who this Gord Hotchkiss is, but he’s one strange individual.”

From my interactions with Amazon, the recommendations I have found most effective are “You bought this book; other people that bought this book bought these books,” which I view as a very task-oriented personalization. The other is a very broad, contextual one – “Here’s what other people in your area are buying” – which fascinates me. It’s almost like a Twitter, Facebook, social networking thing: “Oh, wow, I like that book,” you know? These task-oriented, contextual personalizations, at least in my interactions, have been the most effective.
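The “people who bought this also bought” pattern discussed here is classic item-to-item collaborative filtering. A toy sketch based on simple co-occurrence counts – not Amazon’s actual algorithm – looks like this:

```python
from collections import Counter

def also_bought(orders, item, top_n=3):
    """'Customers who bought this also bought': count how often other
    items appear in the same order as `item`, and return the most
    frequent co-purchases. A minimal co-occurrence sketch."""
    co = Counter()
    for basket in orders:
        if item in basket:
            co.update(i for i in basket if i != item)
    return [i for i, _ in co.most_common(top_n)]
```

Because the recommendation keys off the item just bought rather than a model of the whole person, it is exactly the kind of task-oriented personalization described above: the current purchase supplies the context.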

You bring up that intersection between social and search, which is getting a lot of buzz with the explosion of Twitter and the fact that there’s now real-time search that allows you to identify patterns within the complexity of real-time searches. We’ve known in other areas that, generally, those patterns can be pretty accurate as they emerge, so that opens up a whole new area for improving the relevancy of search.

Jim, one last question while we’re talking about personalization. This is something I wrote about in an article a little while ago, and I’d love to get your take on it as the last word of this interview. We were talking about personalization and getting it right more often, and the fact is, the way we search now, engines can be somewhat lax about getting it right. There’s a lot of real estate there; we scroll up and down. The average search page has somewhere between 18 and 20 links on it when you include the sponsored ones. It’s more like a buffet: “We’re hoping one of these things might prove interesting to you or whet your appetite.” But when we move to a mobile device, the real estate becomes a lot more restrictive, and it becomes incumbent on the engines to get it right more often. We can’t afford a buffet anymore; we just need that waiter who knows what we like and can recommend it. What happens with personalization as the searches we’re launching come from a mobile device?

That’s a great question. I think it’s one of those areas that has gotten a lot of talk – everybody is saying (again), “This is the year mobile search is going to take off.” That’s been going on for four or five years now, and really, at least here in the US, it hasn’t happened yet. But what I think is going to make it hit the mainstream is the combination with localized search.
When you have a mobile device, the technology has so much more information about you: it’s got your location to within a couple of feet, the context you’re in can really start entering the picture, and information gets pushed to you – I’m thinking tagged buildings and restaurants and cultural events and so on. And with my mobile device, where I can talk into it, I don’t even have to type anything. I ask, “What’s going on in the area?” and it automatically knows my location and the time, and perhaps something about me and the things I’ve searched on before. “Oh, you like coffee shops where there’s some music playing. Guess what? Boom. There are five right near you that have live entertainment right now.” So in that respect it’ll be a little more narrowed search, but the technology will have so much more information about us that in a way it makes the job easier. The problem is going to be the interface and the presentation of the results.

We’re talking about, you know, subvocalization commands and heads-up display. You start looking at that and say, “Wow, that would be pretty cool,” but…

Yes. Imagine being able to walk through a town… I live in Charlottesville, Virginia. There’s tons of history here, from 400 years ago when Europeans first settled here – Thomas Jefferson, James Madison, etc. Being able just to walk down Main Street and have tagged buildings interface with my mobile device… I’m a big history buff, so getting that particular information pushed to me, or at least available when I ask for it, is a wonderful, wonderful area of personalization. This idea of localized search plus mobile devices may be the thing that brings it all together and makes mobile search happen.

It’s fascinating to contemplate. And I know I promised that was going to be my last question, but I’m going to cheat and squeak one more in, and it’s really a continuation. You remember the old days of Longhorn with Microsoft, when they were working on what eventually became Vista. They were talking about building search more integrally into everything they did, and they had this whole idea of Implicit Query – which really excited me, because if anyone knows what you’re working on at any given time, it should be Microsoft, at least on the desktop. They control your e-mail, they control your word processing, they control your calendar. If you could combine all those as signals – the document you’re writing, the next appointment you’ve got coming up, the trip you’re taking tomorrow – imagine how that could intersect with search and really turn into a powerful, powerful thing. I remember saying… this was years ago… “That could kill Google. If Microsoft can pull this off, that could be the Google killer.” Of course, we know now that never happened. But if we take all that integration and all that knowledge about what you’re doing, what you’re doing next, and where you are, and move that to a mobile device, that’s really interesting. Looking at where Google is going, introducing more and more things that compete directly against Microsoft… is that where Google’s heading – to become the big brother that sits in our pocket and continually tells us what we might be interested in?

You know, the “Big Brother” label has certain negative connotations, so I don’t want to say they are Big Brother-ish in that regard. But certainly, with their movement into free voice services and free directory assistance, they will soon have a voice data archive that will allow them to do some amazing things with voice search, which would be an awesome feature for mobile devices – being able to talk into a mobile device, have it recognize you nearly 100% of the time, and execute the search.
Google, of course, is the one that knows what they’re doing, but certainly I think it would be naive of them not to be exploring that particular area. And in contrast to what you said about Microsoft and the desktop: the desktop is just so busy. You’re getting so many different signals – business, personal things; my kids use my computer sometimes. So the context is very large on the desktop, but on the mobile device it’s narrower. You have some telephone calls, you can do some GPS things – the context is narrower but very, very rich in that narrow domain. I think it’s a really hot area of search.

Interview: Branton Kenton-Dau of VortexDNA

In this week’s Just Behave Column, I had the opportunity to interview Branton Kenton-Dau from Vortex DNA. I’ve posted the complete transcript here for those that are interested:

So I think what we’ll do in this interview is cover off on a basic level what VortexDNA does, and then we’ll get into a little bit more about the potential I think it has for users.  So, it’s obviously an interesting idea using core values to try to determine intent. Maybe I’ll let you just walk through a little bit about how VortexDNA works, and why your approach is unique.

Branton Kenton-Dau: 
Yeah, thanks Gordon. I think that is a great place for us to start, and for us it probably goes back to the Human Genome Project itself, where we initiated as a society this project to map our human DNA. The great vision around that was that once we knew what our physical DNA was like, we would be able to define the characteristics of our world and, in particular, help prevent serious illnesses. One of the outcomes of that project was that we actually found there weren’t enough genes – too few genes to map the 100,000 chemical pathways of our body. Since then, where science has taken us is that it’s demonstrated that our physical DNA actually doesn’t determine who we are; the whole science of epigenetics is saying it’s actually our environment – what we eat and particularly what we believe about ourselves – which determines our propensity to be ill, to be healthy, to be successful or not. So our belief system is a major factor in determining who we are.

What is exciting about that is that it represents a shift for us as a society from a very deterministic view of ourselves – that we are basically physical machines, either broken or not depending on what our parents gave us – to the idea that we are actually beings creating our own lives, built much more out of what we believe about ourselves at any moment. That’s exciting because what we believe about ourselves can obviously be changed.

And basically, what VortexDNA is is a technology that came out of the insight that the way we structure our beliefs is governed by the mathematics of complex systems. What that means is that we know the structure of our beliefs, and because of that we can map out the structure of our intentional DNA – the intentions behind the world we create. That’s basically the breakthrough, the technology.

It provides a map of the way people organize who they are, literally who they are, through their belief systems. And out of that comes the opportunity to create a better world for yourself, whether that’s finding your best life partner, finding better research results, or finding better car insurance rates because your particular belief system has a low propensity for accidents. It actually touches every part of your life, because we are actually mapping human characteristics – the true genome, if you like, based upon the new science.

Okay, this is an interesting approach, and undoubtedly a unique one – I don’t know anyone else who is doing this. But you know, I approach it with a fair degree of cynicism. Obviously, if you learn more about my belief system, you can try to map that against the content of the internet. But how well does that actually work? My beliefs are the foundation, but on top of that there are a lot of layers of intent for a lot of different things. How granular can your belief system be in disambiguating intent? In some searches I could see it working very well, where it points to sites that resonate with my belief framework; but in others, where it’s a much more practical “looking for information” search, will trying to anticipate what my beliefs might indicate would be a good site really be a good indicator of relevance?

Branton Kenton-Dau: 
That’s a really good question, Gordon. I think the answer is that we don’t know yet. We are at the very early stages of what is really the science of human intention – that’s really what it’s about. What I can share with you is that we validated the technology last year against Google search results, and that’s where we were able to show that we can improve Google’s page rank by up to 14%, which translated into a 3% improvement in click rates. What that seems to be saying to us is that it does help, and that’s across the board – people searching for anything and everything possible on the web; we were analyzing that data. We haven’t been able to break it down to say whether, if you’re hunting for a job, we provide better recommendations than when you’re looking for a recipe for custard or something; we just don’t know enough about it yet. One interesting thing is that when we had an expert review done on that data by a Rhode Island consultancy firm, they said that because the technology is iterative – i.e., it learns as it goes – we probably have no idea yet how efficient the system is. We are dealing with a very small sample, and as more and more users come on and more DNA is collected on links across the net, there is no reason it can’t actually be more effective than we’ve demonstrated; we just don’t know yet. It’s just early days.

You made the comment that this is iterative and it learns as it goes. From going through your site, I see you answer an initial questionnaire – and I’ll get to the whole privacy question, or the perception-of-privacy question is probably more accurate, in a minute. But you answer the questionnaire, which creates an ideological or value-based profile of you that then gets mapped against different sites. Then, at any time, you can go back and answer more surveys to further refine what that profile looks like. How much of this refinement or learning process is incumbent on the user, as opposed to happening transparently in the background, with VortexDNA just watching what you are doing and how you are interacting with different sites?

Branton Kenton-Dau: 
The answer is that the system actually learns every time you click on something. If you have downloaded the extension – the MyWebDNA extension – every time you click somewhere, it’s a statement of your intentionality, so if other people have also clicked on it, it helps build up a map for that link of the intentionality that has been focused on it. We can then feed that intentionality back into your own profile, and therefore you don’t have to do anything. Actually, this year we will probably do away with the survey, so you won’t even need it to get started; that was just a pre-heat process. So all you have to do is surf as normal, and we will be monitoring the state of your intentionality moment by moment with each click you make.

Okay, so let’s deal with that a little bit. If you are monitoring my activity, in some ways this overlaps with what Google is doing with their web history and search history, where they are tracking your usage, trying to learn more about you as an individual – theoretically – and then altering the results on the fly based on the personal signals they pick up. What you are doing is layering this outline of our core values and belief systems over and above that, to say, “Okay, not only are we watching what you are doing; we are trying to understand what’s important to you as an individual.” Now, say I am in a business where, at any given hour I’m working, I may be doing research. I might be writing a column on hate literature in North America, so I am going to be going to the sites of neo-Nazi groups, trying to find information… that’s just part of my job. How do you know that doesn’t reflect my belief system – how do you know this is just something I happen to need information on right now?

Branton Kenton-Dau: 
There are two parts to that. One is that we are very respectful of what Google and Yahoo are doing, and of the whole semantic web push, in terms of trying to make the web more relevant to people, and we believe our technology is complementary to those approaches. We don’t believe we are competing with any of them; as you say, it’s an overlay, it’s additive. Having said that, we are really completely different, because it’s actually the structure – the pattern, the way your beliefs are organized – that we are interested in. What that means is that we turn your answers to the questionnaire, and what you click on, into just a set of numbers. There are seven numbers that correspond to different aspects of that pattern of organization that makes up your intentionality. So really, as you go around and click on something, we compare your number – it might be 7632416, say – with the numbers held against that link. We are comparing numbers; we don’t know whether you’ve gone to a Nazi site or whether you are looking for apple pie recipes. We have absolutely no idea, and maintain no record of where you’ve been. When your genome is updated because you’ve been to a site, we might change one of your digits by one point or so, and that digit could be changed by any site – the news or Yahoo! or whatever. So what makes us truly unique with this approach, which we think is really important, is that we absolutely protect your privacy, because we do not track your searches in any shape or form; we are just adding or subtracting numbers from that seven-digit identifier.
So it makes no difference to us, and I think that’s a really powerful thing, because there is concern among people about what information others hold on them, and all we hold is a seven-digit number. You could have been anywhere – it could have been Walt Disney movies, it could have been finding out about the players on your favorite football team – it makes no difference to us. So even if a law enforcement agency came to us and asked, “Hey, where has Gord been?”, we would have no idea; we could not tell them.
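The number-comparison scheme described here can be sketched in a few lines: a profile is seven digits, a click compares the user’s digits against the link’s digits, and the user’s profile is nudged slightly toward what was clicked. The function names, step size, and 0-9 digit range are illustrative assumptions, not VortexDNA’s actual algorithm:

```python
def similarity(user, link):
    """Compare two seven-digit profiles (lists of ints 0-9) by total
    absolute difference; smaller means a closer match."""
    return sum(abs(u - l) for u, l in zip(user, link))

def nudge(user, link, step=1):
    """Move each digit of the user's profile one point toward the
    clicked link's profile, clamped to the 0-9 range. This mirrors the
    'change one of your digits by one point or so' idea above."""
    out = []
    for u, l in zip(user, link):
        if l > u:
            u = min(9, u + step)
        elif l < u:
            u = max(0, u - step)
        out.append(u)
    return out
```

The privacy property he describes falls out of this design: only the digit vector is stored and updated, so the history of which links produced the nudges is never retained.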

So what you do, in your identification of all the sites, is look at each site and assign it a profile, and then your profile is altered on the fly based on the site profiles you are matching up against, right?

Branton Kenton-Dau:  That’s correct, and they’re all numbers – those profiles are numbers, so…

Right. So there is no history retained; it’s just a constantly updated value which, in turn, every time you go out, is compared against all the values of the sites that come up in a search engine, for instance, and the best possible matches are highlighted in the search results.

Branton Kenton-Dau: 
Thank you, that’s absolutely perfect.

That’s interesting. Obviously that’s a totally different approach, and one that should put some fears to rest on the privacy side. But I’ve got to tell you, when I checked out VortexDNA and went through the download process, the whole idea of filling out a questionnaire identifying my belief system gave me cause for concern. It was funny, because as far as identifying me as an individual goes, the demographic information I fill out here and there across the web is potentially much more of a privacy concern, since there is identifiable information in it, and I don’t usually give it a second thought. But something about putting my beliefs down and sharing them with somebody else was very hard for me to do. Are you finding – and you said you are probably going to drop the questionnaire – are you finding that a sticking point for people signing up for VortexDNA?

Branton Kenton-Dau: 
I think some people never think about it. We get up in the morning, brush our teeth, go to work and make our daily bread. Sometimes we don’t have time to think at all, “Why am I here?” – and when you ask the question, “What’s your purpose in life?”, well, that’s a pretty profound question. What we found is that some people don’t fill in the questionnaire correctly, or fill it in too quickly, or just answer anything; they don’t really think about it deeply, and therefore, because noise goes in, they just get noise out. That’s where, over the course of last year, we developed what we call this idealized genome: we can just infer your genome from what you click on. We think that’s a much better way, and we can do that, for instance, by just playing a game – we show some images, you pick some of your favorites, and we can infer your genome that way. So there are lots of less mentally taxing and more fun ways we can get you started in creating your genome, your profile, which we think will work a lot better.

Well, you mentioned this whole approach may limit your potential market just because a lot of people haven’t thought about what their core values are or what their beliefs are, so the whole appeal of VortexDNA might be lost on them.  Unless you are a fan of Stephen Covey or you’ve read Built to Last, or Good to Great, you may not get it.  What are your thoughts about that?

Branton Kenton-Dau: 
I think you are right, and I think the technology is broad enough that it can serve anyone, whatever their focus in life, and that it’s our responsibility to make sure it’s that easy to use. We’d like to be able to do it transparently – say, if you want to play Pac-Man, you are building up your profile that way – and we should be able to do that shortly. At the same time, I think one of the really key things about the technology, and certainly, from my point of view, what gets me up in the morning, is the fact that it really is empowering for people to understand that the lives they create, they literally do create; their lives are not given to them. Who you are is not determined by your upbringing, or your life experiences, or by the genes your parents gave you; it is actually created by you, moment by moment. It’s my hope that the technology will help with that. It’s really a very American thing, in the sense that it’s all about human freedom ultimately, and it gives people more freedom as they realize, “Well, I am the creator of my life, and if I am going to stay stuck in this rut, then it is what I believe about myself that is keeping me there.” That’s what I find exciting about the technology. I hope that may dawn on some people faster than others, and that’s okay; there is no problem with that. But I believe the technology has the ability to make a contribution to human freedom ultimately.

So, you are climbing up Maslow’s hierarchy to the top level?

Branton Kenton-Dau: 
Yes, I absolutely believe that and I think that  we as human beings are always trying to really understand our true ability to create reality, and that our intentionality is, if you like, part of ourselves that we probably put less effort into training than anything else.  We spend a lot of time on the fitness machines or jogging to keep our bodies in shape, but we haven’t spent a lot of time in what actually seems to be a real key factor in determining the success of our life, which is our intentionality; and so I am hoping the technology will help focus us on that.

We’ve talked about some pretty lofty ideals for a technology here: helping people with self-actualization, becoming better people, becoming more aware of what’s important to them both internally and externally. All of which is great for any fans of Collins or Covey who might be reading or listening to this. We are getting to the hedgehog concept here: you are obviously passionate about this, and you’ve got something different that you can be unique in, possibly best in the world at. Now comes the money question: how does this drive your economic engine? What’s the business model for VortexDNA, and how do you see that playing out over the next two to three years?

Branton Kenton-Dau: 
We have given a lot of thought to that, and made a lot of mistakes as well – and, I think, U-turns on it. But basically, the company I represent has a technology for which we issue licenses to other organizations, and we participate with them as strategic partners in rolling out the technology. There are two key aspects of the technology. One is that it can be used by any ecommerce site, whether that’s an e-tail or social site, in order to provide better recommendations using their algorithms. That’s a pure B2B solution, and we have the company incorporated in the United States at the moment in order to do that; we would be interested in anyone who would like to partner with us to roll that out. The other side is that we feel we can create a better web by harnessing the power of mass collaboration, just like Wikipedia, to map the genome of the web. Out of that will come better search engines, a better ability to find people like you wherever you are, ways to enhance your blog – pretty much a holistic upgrade, if you like, of the web itself. And that, we believe, like search engines themselves, is a pure advertising-based free service to users. So we see there is an application there, and in fact within the next 30 days we will be launching the Web Genome Project with its own website, and that would be an advertising-based play again. We believe that has potential in every country in the world, and we are open to issuing licenses to partners that would like to take up the opportunity.

This territory has been somewhat explored in the past; the one example I am thinking of is the Music Genome Project, where Pandora tried to use the songs you’ve liked in the past to recommend more songs you might like. So is this somewhat similar to that, except with an obviously much broader scope – anything that could be on the web, right?
Branton Kenton-Dau:  That’s correct. I guess the difference would be that what’s great about the Web Genome Project is that while you sleep, millions of other people will be clicking on things that will make the web better for you in the morning.


Branton Kenton-Dau: 
That’s what’s so cool about mass collaboration. You do your clicking, you click on whatever, a thousand links in your day, but while you are sleeping well there are millions of links that have been updated and have better DNA against them, so that you can find what you want better when you wake up in the morning, and that’s really exciting.
Gord: It’s definitely one of those big idea things.


To flip this on its side: as a community we are all clicking away, and this DNA matching is going on, so it’s making the web a better place collaboratively, as you say. But on the flip side, once a profile has been built and refined over time, based on the sites you’ve found interesting or spent time with, that’s a unique identifier that says something about you. So theoretically, down the road, if somebody comes to a site and the owner of that site can identify what’s important to that person based on the profile, it could serve up content on the fly matching those beliefs. Is that another possible application for this?

Branton Kenton-Dau: 
Thank you – that’s what I was attempting to describe in the first application, the B2B solution. An e-tailer or search engine can now take out a license to run the application. The user visits the site and won’t see anything different in their Google search or their Amazon book recommendations, but it’s all being added to the recommendation engine behind the scenes, so you just say, “Hey, for some reason I feel I’m getting better recommendations now.” It’s our way of making the web more efficient, and that technology is available right now. We’ve got three installations in the United States currently progressing. That’s the business-to-business model, and we believe it has applications around the world as well.

So, for any business application the big question is critical mass: how many people will download the plug-in and use it? This is fairly new; how long has the Firefox plug-in been available?

Branton Kenton-Dau: 
About a year now.

What has the adoption been like to this point?

Branton Kenton-Dau: 
At this point it's been slow, because that plug-in was actually built initially to validate the technology. That's what we had to do last year, and then we spent the rest of the year building enterprise-grade technology that clients could use. So we're really starting the year now; as I said, the next 30 days will see the Web Genome Project being launched, so we are only just at the start of the technology coming online.

Are there any plans to accelerate adoption, either through partnerships or bundling or other ways, to actually get people to download the plug-in and start using it?

Branton Kenton-Dau: 
There are. For each of the partners and e-tailers that would like to use the technology, we'll be producing custom versions of the extension for their users, and they will encourage their users to download it, because that will help map the DNA of their links more quickly. So that will speed the application. We also have plans to provide custom versions of the extension to different social networks, so they can enhance the experience within social networks as well. And if you're logged in to any service, if you have an account with Amazon or some other e-tailer, you actually don't need to use the plug-in, because when you log in they already know who you are, so you'll already be able to get better recommendations without using the plug-in at all. You won't see it. It will be completely seamless and invisible to you, and you'll just get a better web experience.

So if you log into Amazon, for instance, it just keeps the profile, so that profile would not be portable; it would stay with Amazon. And to get to the broader context of what you're talking about, the portability of that profile, the fact that it goes with you from site to site, would be an important part of that, right?

Branton Kenton-Dau: 
That’s why we see the extension would be great if people did use it or it just became embedded in web browsers generally.  Because it will give you a more universal better experience. 

Okay, so looking forward: you've done a lot of development on the backend to build the infrastructure, and the theory is there. Now it's just a matter of having it proven out in real-world situations, right?

Branton Kenton-Dau: 
That’s exactly what we are about to see. That’s why these installations are taking place at the moment in the United States to validate that, and we are getting started with the Web Genome Project, it is all about delivery this year.

Well, it’s fascinating, like I said it’s one of those big idea things that is fascinating to contemplate.  Is there anything else you wanted to add before we wrapped up the interview?

Branton Kenton-Dau: 
We’ve worked on this  pretty much, well, it’s been an 8-year project, so it’s not been a fast thing for us, but I just thought I might share  a couple of books that have been really important to me which you probably know about anyway. One of them that I just finished reading is The Intention Experiment by Lynne McTaggart who is also the author of The Field. What is nice about that is she just documents all the rigorous science that is basically saying that we are shifting our paradigm, to understand we are more like any energy fields, if you like, than physical bodies, that’s the definition of us.  And just the science has come out of Stanford and other places that validates that, is just awe inspiring.  And then, the other one is The Biology of Belief by Bruce Lipton, which gives that whole transition process from us believing we are physical genes to the whole science of extra genetics, if it’s actually our environment including our beliefs which is a key factor in determining who we are; and I just wanted to share those because I found those two books very inspiring.  They happened after the fact. We didn’t build the technologies because we read the books, but with the books now, we say “oh yeah, that’s why our technology works.

Well, it’s interesting you mentioned that because it seems like anytime I ‘m talking to people about really interesting things there has been this almost renaissance of understanding about what makes us as humans tick, starting in areas like psychology, neurology, and evolutionary psychology and a whole bunch of different areas.  And it all started to happen in the early 90’s, and just for the last 10 years to 15 years, it seems like so many paradigms have shifted.  We’re just looking at things in a whole new way and I agree with you, it’s very inspiring and exciting to know that everything seems to be in such flux right now.

Branton Kenton-Dau: 
I absolutely agree with that, and that's been our sense as well. It's just such a privilege to be part of that process. I know your comments and what you are doing are aligned with that as well. We are all doing it, but when we are creating together, we are creating something new and exciting, I think, and we all have our parts to play in it. I find it a privilege to participate in this; really, it's a human movement taking us forward at such a rapid pace. So we are absolutely aligned with what you were saying there.

Highlights from the Search: 2010 Webinar

Yesterday, I had the tremendous privilege of moderating a Webinar with our Search 2010 panel: Marissa Mayer from Google, Larry Cornett from Yahoo, Justin Osmer from Microsoft, Daniel Read from Ask, Jakob Nielsen from the Nielsen Norman Group, Chris Sherman from Search Engine Land and Greg Sterling from Sterling Market Intelligence. It was a great conversation, and the full one-hour Webinar is now available.

I won’t steal the panelists thunder, but the first question I posed to them was what they see as the biggest change to search in the coming year. Most pointed to the continued emergence of blended search results on the page, as well as more advances in disambiguating intent. A few panelists looked at the promise of mobile, driven by advances in mobile technology such as multi touch displays, embodied in the iPhone. After listening again to the various comments, I’ve put them together into 4 major driving forces for Search in 2008 and beyond:


Disambiguating Intent

The quest to understand what we want when we launch our search is nothing new. How do you deal with the complexities and ambiguity of the English language (or any language, for that matter) when you're trying to make the connection between the vagaries of unexpressed intent and billions of possible matches? All we have to go by is a word or two, which may have multiple meanings. While this has always been the holy grail of search, expect to see some new approaches tested in 2008. We've already seen some of this with the search refinement and assist features on Yahoo, Live and Ask. Google also has a query refinement tool (at the bottom of the results page), but as Marissa Mayer pointed out in the Webinar, Google believes that as much disambiguation as possible should happen behind the scenes, transparent to the user.

The challenge with this, as Marissa also acknowledged in the Webinar, is that there are no big innovations on the horizon to help with untangling intent in the background. Personalization probably holds the biggest promise in this regard, and although it was regarded with varying degrees of optimism in the Webinar, no one believes personalization will make much of a difference to the user in the next year or so. All the engines are still just dipping their toes in the murky waters of personalization. Using the social graph, or tracking the behavior of communities, is another potential signal for disambiguation, but again, we're at the earliest stages of this. And, as Jakob Nielsen pointed out, looking at community patterns might offer some help for the head phrases, but the numbers get too small to offer much guidance as we move into the long tail.

For the foreseeable future, disambiguation seems to rest with the user, through tools that help refine and focus queries, with perhaps some behind-the-scenes disambiguation on the most popular and least ambiguous queries, where the engines can be reasonably confident of the user's intent. The example we used in the Webinar was Feist, a very popular Canadian recording artist. But "feist" is also a breed of dog. If there's a search for "Feist," the engines can be fairly confident, based on the popularity of the artist, that the user is probably looking for information on her, not the dog.
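The Feist example amounts to a popularity prior over query senses: disambiguate silently only when one interpretation dominates, and otherwise hand the user refinement tools. The sketch below illustrates that idea; the sense names, click counts, and threshold are all invented for illustration, not any engine's actual logic.

```python
# A minimal sketch of popularity-based query disambiguation: pick the
# interpretation of an ambiguous query with the highest historical click
# share, but only when the engine can be confident enough.
# All senses and numbers here are invented for illustration.

def best_sense(senses, confidence_threshold=0.8):
    """senses maps each candidate interpretation to its observed click count.
    Returns the dominant sense, or None when no sense is dominant enough."""
    total = sum(senses.values())
    sense, clicks = max(senses.items(), key=lambda kv: kv[1])
    share = clicks / total
    # Only disambiguate behind the scenes when one sense dominates;
    # otherwise fall back to showing refinement tools to the user.
    return sense if share >= confidence_threshold else None

feist = {"musician": 9200, "dog breed": 800}
print(best_sense(feist))  # dominant sense: "musician"
```

As Nielsen's long-tail caveat suggests, this only works where click volumes are large enough for the shares to be meaningful; for tail queries, `best_sense` would return None almost every time.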

More Useful Results

The second of the four major areas goes to the nature of the results themselves, and what is returned with our query. Universal (federated, blended, etc.) results are the first step in this direction; expect to see more of them. Daniel Read from Ask led the charge here with their much-lauded 3D interface. As engines crawl more sources of information, including videos, audio, news stories, books and local directories, they can match more of this information to users' interpreted intent. This will drive the biggest visible changes in search over the short term. For the head phrases, those high-volume, less ambiguous queries, engines will become increasingly confident in providing a richer, more functional result set. This will mean media results for entertainment queries, maps and directory information for local queries, and news results for topics of interest.

But Marissa Mayer feels we're still a long way from maximizing the potential of plain old traditional web results. She pointed out examples where Google's teams had been working on pulling more relevant and informative snippets, and showing fresher results for time-sensitive topics. Jakob Nielsen chimed in by saying that none of the examples shown during the Webinar were particularly useful. And here is the crux of a search engine's job: using relevance as the sole criterion isn't good enough. For someone looking for when the iPhone might be available in Canada, a number of pages could be equally relevant based on content alone, but some of those pages could be far more useful than others. Usefulness as a ranking factor hasn't really been explored by any of the algorithms, and it's a far more subtle and nuanced factor than pure relevance: it depends on gathering the interactions of users with the pages themselves. And, in this case, we're again reliant on the popularity of a page. It will be much easier to gather data and accurately determine "usefulness" for popular queries than for long-tail queries.
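One way to picture "usefulness" as distinct from relevance is as a blended score: content relevance plus an interaction-derived signal. The sketch below is a hypothetical illustration; the weight, the dwell-time proxy, and the field names are my assumptions, not any engine's actual ranking formula.

```python
# Hypothetical sketch: blending content relevance with an interaction-based
# "usefulness" signal. Weights and signals are assumptions for illustration.

def combined_score(relevance, clicks, long_dwells, w_useful=0.3):
    """relevance: content-based score in [0, 1].
    clicks: times the result was clicked; long_dwells: clicks where the
    user stayed on the page (a crude proxy for having found it useful)."""
    usefulness = long_dwells / clicks if clicks else 0.0
    return (1 - w_useful) * relevance + w_useful * usefulness

# A page that is slightly less relevant on content alone can outrank a
# more relevant one if users consistently find it more useful.
page_a = combined_score(relevance=0.90, clicks=1000, long_dwells=200)
page_b = combined_score(relevance=0.85, clicks=1000, long_dwells=900)
print(page_b > page_a)  # True
```

The long-tail problem from the paragraph above shows up directly here: with few clicks, the `long_dwells / clicks` ratio is too noisy to trust, which is why usefulness is easiest to measure on popular queries.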

By the way, the concept of usefulness extends to advertising as well. A good portion of the Webinar was devoted to how advertising might remain in sync with organic results, whatever their form. Increasingly, as long as usefulness is the criterion, I see the line blurring between what is editorial content and what is advertising on the page. If it gets users closer to their intent, then it's served its purpose.


When we’re talking innovation, the panel seems to see only incremental innovation in the near term on the desktop. But as a few panelists pointed out in the interview, mobile is in the midst of disruptive innovation right now. The iPhone marked a significant upping of the bar, with its multitouch capabilities and smoother user experience. What the iPhone did in the mobile world is move the user experience up to a whole new level. With that, there’s suddenly a competitive storm brewing to meet and exceed the iPhone’s capabilities. As the hardware and operating systems queue up for a series of dramatic improvements, it can only bode well for the mobile online experience, including search.

Remember, there’s a pent up flood of functionality just waiting in the mobile space for the hardware to handle it. The triad of bottlenecks that have restricted mobile innovation – speed of connectivity, processing power and limitations of the user interface – all appear that they could break loose at the same time. When those give way, all the players are ready to significantly up the ante in what the mobile search experience could look like.

Mash Ups

One area that we were only able to touch on tangentially (an hour was far too short a time with this group!) is how search functionality will start showing up in more and more places. Already, we're seeing search as a key component in many mash-ups. The ability to put this functionality under the hood and have it power more and more functional interfaces, combined with other 2.0 and 3.0 capabilities, will drive the web forward.

But it’s not only on the desktop that we’ll see search go undercover. We’ve already touched on mobile, but also expect to see search functionality built into smarter appliances (a fridge that scans for recipes and specials at the grocery store) and entertainment centers (on the fly searching for a video or audio file). Microsoft’s surface computing technology will bring smart interfaces to every corner of our home, and connectivity and searchability goes hand in hand with these interfaces between our physical and virtual worlds.

That touches on just some of the topics we covered in our hour with the panelists. The full Webinar is available online. We'll be following up in 2008 with more topics, so stay tuned!

Search Engine Results: 2010 – Marissa Mayer Interview

Just getting back in the groove after SES San Jose. You may have caught some of my sessions, or heard we have released a white paper looking at the future of search, with some eye tracking on personalized and universal search results. We don't have the final version up yet, but it should be available later this week. The sneak preview got rave reviews in San Jose.

Anyway, I interviewed a number of influencers in the space, and I'll be posting the full transcripts here on my blog over the next week. I've already posted Jakob Nielsen's interview. Today I'll be posting Marissa Mayer's; she did a keynote at SES San Jose, and it makes for interesting reading. Also, I'll be running excerpts and additional commentary in Just Behave on Search Engine Land; the first half ran a couple of weeks ago. Look for more (and a more regular blog schedule) over the next few weeks. Summer's over and it's back to work.

Here’s my chat with Marissa:

Gord: I guess I have one big question that will probably break out into a few smaller questions. What I wanted to do for Search Engine Land is speculate on what the search engine results page might look like to the user in three years' time. With some of the emerging things like personalization and universal search results, and some of the things happening with the other engines (Ask with their 3D Search, which is their flavor of universal), it seems to me that, for the first time in a long time, the results we're seeing may have a significant amount of flux over the next three years. I wanted to talk to a few people in the industry about their thoughts on what we might be seeing three years down the road. So that's the big over-arching question I'm posing.

Marissa: Sure, Minority Report on search results pages… Well, I'd like to say it's going to be like that, but I think that's a little further out. There are some really fascinating technologies; I don't know if you've seen some work being done by a guy named Jeff Han?

Gord: No.

Marissa: So I ran into Jeff Han both of the past years at TED. Basically he was doing multi-touch, before they did it on the iPhone, on a giant wall-sized screen, so it actually does look a lot like Minority Report. It was this big space where you could interact, you could annotate, you could do all those things. But let me talk first about some trends that I see driving change.

One is that we are seeing more and more broadband usage, and I think in three years everyone will be on very fast connections, so there's a lot more to choose from and a lot more data without taking a large latency hit. The other thing we're seeing is different mediums: audio, video. They used to not work. If you remember, as recently as a year ago, every time you clicked on an audio file or a movie file it would be, like, "thunk": it needs a plug-in; "thunk": it doesn't work. Now we're coming into standardized formats and players that are either browser- and technology-independent enough, or integrated enough, that they are actually going to work. We're also seeing users with more and more storage on their end. Those are the three computer science trends that are going to change things. I also think people are becoming more and more inclined to annotate and interact with the web. It started with bloggers, then it moved to mash-ups, and now people are really starting to take a lot more ownership of their participation on the web. They want to annotate things; they want to mark them up.

So I think when you add these things together, it means a couple of things. One, we will be able to have much richer interaction with the search results page. There might be layers of search results pages: take my results and show them on a map; take my results and show them on a timeline. It's basically the ability to interact in a really fast way, and take the results you have and see them in a new light. So I think that kind of interaction will be possible pretty easily and is pretty likely. I think it will be, hopefully, a layout that's a little less linear and text-based than even our search results today, one that ultimately uses what I call the 'sea of whiteness' in the middle of the page, and lays out all the information, from videos to audio reels to text, in a more information-dense way. So imagine the results page going from being long and linear, with ten results you scroll through, to having ten very heterogeneous results, where we show each result in a form that really suits its medium, and in a more condensed format. A couple of years ago we did a very interesting experiment here on the UI team, where we took three or four different designs where the problem was artificially constrained. It was above-the-fold Google: if you needed to say everything that Google needed to say above the fold, how would you lay it out? Some came in with two columns, but I think two columns are really hard when the content is linear and text-based. When you start seeing some diagrams, some video, some news, some charts, you might actually have a page that looks and feels more like an interactive encyclopedia.

Gord: So, we’re almost going from a more linear presentation of results, very text based, to almost more of a portal presentation, but a personalized portal presentation.

Marissa: Right. And I think as people, one, are getting more bandwidth, and two, are more savvy with how they look at information… think of it as serial access versus random access. One of my pet peeves is broadcast news; I really don't like televised news anymore. I like newspapers, and I like reading online, because when I'm online or with newspapers I have random access: I can jump to whatever I'm most interested in. When you're sitting there watching broadcast news, you have to take it in the order, at the pace and at the speed that they are feeding it to you. And yes, they try to make it better by having the little tickers at the bottom, but you can't just jump to what you're interested in. You can only read one piece of text at a time, and it's hard to survey and scan and hone in on one type of medium or another when it's all one medium. So certainly there is some random access happening with search results today. I think as the results format becomes much more heterogeneous, we're going to have a more condensed presentation that allows for better random access. Above the fold will be really full of content, some text, some audio, some video, maybe even playing in place, and you see what grabs your attention and pulls you in. It's almost like random access on the front page of the New York Times: am I more drawn to the picture, or the chart, or this piece of content down here? What am I drawn to?

Gord: Right. If you're looking at different types of stimuli across the page, I guess what you're saying is that as long as all that content is relevant to the query, you can scan it more efficiently than you could with the standardized, linear, text-based scanning we're seeing now.

Marissa: That’s right.

Gord: Ok.

Marissa: So the eyes follow and they just read and scan in a linear order, where when you start interweaving charts and pictures and text, people’s eyes can jump around more, and they can gravitate towards the medium that they understand best.

Gord: So, this is where Ask is going right now with their 3D search, where they've broken it into three columns and they're mixing images and text and different things. So I guess what we're looking at is taking that to the next level, making it a richer, more interactive experience, right?

Marissa: Rather than having three rote columns, it would actually be more organic.

Gord: So more dynamic.  And it mixes and matches the format based on the types of material it’s bringing back.

Marissa: Well, to keep hounding on the analogy of the front page of the New York Times: they have basically the same layout each time, but it's not like they have a column that only carries one kind of content, and if it doesn't fill the column, too bad. They have a basic format that they change as it suits the information.

Gord: So in that kind of format, how much control does the user have? How much functionality do you put in the hands of the user?

Marissa: I think that, back to my third point, people will be annotating search results pages and web pages a lot. They're going to be rating them, reviewing them, marking them up, saying, "I want to come back to this one later." We have some rudimentary forms of this in Notebook now, but I imagine we're going to make notes right on the pages later. People are going to be able to say, "I want to add a note here; I want to scribble something there," and you'll be able to do that. So I think the presentation is going to be largely based on our perceived notion of relevance, which of course leverages the user: we look at the ways they interact with the page, and what they do helps inform us as to what we should do. So there is some UI user interaction, but the majority of user interaction will be about keeping that information and making it consumable in the best possible way.

Gord: Okay, and then, like you said, if you go one step further and provide multiple layers, you could say, okay, if it's a local search, plot my search results on a map. There are different ways to present that information at the user's request, and different layers they can superimpose the results on.

Marissa: So what I’m sort of imagining is that in the first basic search, you’re presented with a really rich general overview page, that interweaves all these different mediums, and on that page you have a few basic controls, so you could say, look, what really matters to me is the time dimension, or what really matters to me is the location dimension.  So do you want to see it on a timeline, do you want to see it on a map?

Gord: Ok, so taking a step further than what you do with your news results, or your blog search results, so you can sort them a couple of different ways, but then taking that and increasing the functionality so it’s a richer experience.

Marissa: It’s a richer experience. What’s nice about timeline and date as we’re currently experimenting with them on Google Experimental is not only do they allow you to sort differently, they allow you to visualize your results differently.  So if you see your results on a map, you can see the loci, so you can see this location is important to this query, and this location is really important to that query.  And when you look at it in time line you can see, “wow, this is a really hot topic for that decade”.  They just help you visualize the nut of information across all the results in these fundamentally different ways that ‘sorts’ kind of get at. But it’s really allowing that richer presentation and that overview of results on the meta level that helps you see it.

Gord: Okay. I had a chance to talk to Jakob Nielsen about this on Friday, and he doesn't believe we're going to see much of a difference in search results in three years. He just doesn't think that can be accomplished in that time period. What you're talking about is a pretty drastic change from what we're seeing today, and the search results we see today haven't changed that much in the last 10 years, as far as what the user sees. You really feel this is possible?

Marissa: It’s interesting, you know, I pay am lot of attention to how the results look.  And I do think that change happens slowly over time and that there are little spurts of acceleration.  We at Google certainly saw a little accelerated push during May when we launched Universal Search.  I’m of the view that maybe its 3 years out, maybe it’s 5 years out, maybe it’s 10 years out.  I’m a big subscriber to the slogan that people tend to overestimate the short term and underestimate the long term.  My analogy to this is that when I was 5, I remember watching the Jetson’s and being, like, this rocks!  When I’m thirty there are flying cars!  Right?  And here I am, I’m 32 and we don’t even have a good flying car prototype, and yet the world has totally changed in ways that nobody expected because of the internet and computing.  In ways that in the 1980s no one even saw it coming.  Because personal computers were barely out, let alone the internet.  It’s interesting.  We do our off site in August. I do an offsite with my team where we do Google two years out. There it’s really interesting to see how people think about it.  I take all the prime members on my team, so they’re the senior engineers, and everybody has homework.  They have to do a homepage and a results page of Google, and this year it’ll be Google 2009.

Gord: Oh Cool!

Marissa: Six months out is really easy, because if it's going to launch in six months and it's big enough that you would notice, we're working on it right now and we know it's coming. And five or ten years out, we start getting into the bigger-picture things like what I'm talking to you about; the little precursors that get us ready for those advances happen between now and then, and that's what's shifting. So I'm giving you the big picture so you can start understanding what some of the mini-steps over the next three years, the ones that get us ready for that, might be. The two-to-three-year timeframe is painful. Everybody at my offsite said, "This timeframe sucks!" It's just far enough out that we don't have great visibility. Will mobile devices be a really big new factor in three years? Maybe, maybe not. Some of the things making fast progress now may even take a big leap, like the internet did from 1994 to 1997. Or think about Gmail and Maps: you wouldn't have foreseen those AJAX applications in 2002 or 2003. So two or three years is a really painful timeframe, because some things will be radically different, but probably in different ways than you would expect; you have very low visibility in our industry at that range. I actually find it easier to talk about the six-month timeframe or the ten-year timeframe. So I'm giving you the ten-year picture, knowing that it's not like the unveiling of a statue, where you can just snatch the sheet off and go, "Voila, there it is." If you look at the changes we've made over time to Google search, they've always been "getting this ready, getting that ready." The changes are very slow and feel very incremental. But then you look at them in summation over 18 months or two years, and you're like, "Nothing felt really big along the way, but the results are fundamentally different today."

Gord: One last question.  So we’re looking at this much richer search experience where it’s more dynamic and fluid and there are different types of content being presented on the page.  Does advertising or the marketing message get mixed into that overall bucket, and does this open the door to significantly different types of presentation of the advertising message on the search results page?

Marissa: I think there will be different types of advertising on the search results page. As you know, my theory is always that the ads should match the search results. So if you have text results, you have text ads, and if you have image results, you have image ads. As the page becomes richer, the ads also need to become richer, just so they look alive and match the page. That said, trust is a fundamental premise of search. Search is a learning activity. You can think of Google and Ask and the other search engines as teachers, and as an end user, the only way learning and teaching works is when you trust your teacher. You know you're getting the best information because it's the best information, not because they have an agenda to mislead you, or to make more money, or to push you somewhere for their own reasons. So while I do think the ads will look different, in format or in placement, our commitment to calling out very strongly where we have a monetary incentive and may be biased will remain. Our one promise on the search results page, and I think it will stand, is that we clearly mark the ads. It's very important to us that users know what the ads are, because it's the disclosure of that bias that ultimately builds the trust which is paramount to search.

Gord: Okay. Great to see you're keynoting at San Jose in August.

Marissa: Should be fun.  This whole topic has me kind of jazzed up so maybe I’ll talk about that.

Search Engine Results: 2010 – Interview with Danny Sullivan

Here's another in the series of the Search: 2010 transcripts, this one of my chat with Search Engine Land Editor Danny Sullivan:

Gord: The big question I'm asking is: how much change are we going to see on the search engine results page over the next three years? How much will things like universal search and personalization, and some of the other things we're seeing come out, impact the actual interface the user sees? Maybe let's just start there.

Danny: I love the whole series to begin with because then I thought, Gosh, I never really sat down and tried to plot out how I would do it, and I wish I had had the time to do that before we talked (laughs).  But it would be nice to have a contest or something for the people who are in the space to say I think this is the way we should do it or where it should go.
But the thing at the top of my head that I expect or I assume that we’re going to get is… I think they’re going to get a lot more intelligent at giving you more from a particular database when they know you’re doing a specific a kind of search.  It’s not necessarily an interface change, but then again it is.  This is the thing I talked about when I was saying about when the London Car Bombing attempts happened, and I’m searching for “London Bombings”.  When you see a spike in certain words you ought to know that there’s a reason behind that spike.  It’s going to be news driven probably, so why are you giving me 10 search results? Why don’t you give me 10 news results?  And saying I’ve also got stuff from across the web, or I’ve got other things that are showing up in that regard.  And that hasn’t changed.  I‘d like to see them get that.   I’d like to see them figure out some intelligent manner to maybe get to that point.  Part of what could come along with that too is that as we start displaying more vertical results the search interface itself could change.  So I think the most dramatic change in how we present search results, really, has come off of local.  And people go “wow, these maps are really cool!” Well of course they’re really cool, they’re presenting information on a map which makes sense when we’re talking about local information.  You want things displayed in that kind of manner.  It doesn’t make sense to take all web search results and put them on a map. You could do it, but it doesn’t communicate additional information for you that’s probably irrelevant and that needs to be presented in a visual manner.  If you think about the other kinds of search that you tend to do, Blog search for instance, it may be that there’s going to be a more chronological display. We saw them do with news archive where they would do a search and they would tell you this happened within these years at this time.  
Right now, when I do a Google blog search, by default it shows me 'most relevant'. But sometimes I want to know what the most recent thing is, and what's the most recent thing that's also the most relevant thing, right? So perhaps when I do a Google blog search, I can see something running down the left hand side that says "last hour," and within the last hour you show me the most relevant things, then the last 4 hours, and then the last day. You could present it that way, almost as a timeline metaphor. I'm sure there are things you could do with shading and other stuff to go along with that. Image search… Live has done some interesting things now where they've made it much less textual, much more stuff that you're hovering over and can interact with. And I don't know, it might be that with book search and those other kinds of things, there'll be other metaphors that come into place when you know you're going to present most of the information from those sorts of resources. With video search… I think a lot of what we've already seen is just giving you the display and being able to play the videos directly, rather than having to leave the site, because it just doesn't make sense to have to leave the site in that regard.

Gord: When I was talking to Marissa, she saw a lot more mashups with search functionality. You talked about maps making sense with local search, but it's almost like you take the search functionality and layer it over different types of interfaces that make sense, given the type of information you're interacting with.

Danny: Right.

Gord: One thing I talked about with a few different people is: how much functionality do you put in the hands of the user? How much needs to be transparent? How hard are we willing to work with a page of search results?

Danny: By default, not a lot. You know, if you're just doing a general search, I don't think putting in a whole lot of functionality is going to help you. You could put a lot of options there, but historically we haven't seen people use those things, and I think that's because they just want to do their searches. They want you to just naturally get the right kind of information, and a lot of the time, if you give them that direct answer, they don't need to do a lot of manipulation. It's a different thing, I think, when you get into some very vertical, very task-oriented kinds of searches, where you're saying, 'I don't just need the quick answer, I don't just need to browse and see all the things that are out there; actually, I'm trying to drill down on this subject in a particular way.' Local tends to be a great example: 'Now you've given me all the results that match the zip code, but really I would like to narrow it down to a neighborhood, so how can I do that?' Or a shopping search: 'I have a lot of results, but now I want to buy something, so now I need to know who has it in inventory. Who has it cheapest? Who's the most trusted merchant?' Then I think the searcher is going to be willing to do more work on the search and make use of more of the options that you give them.

Gord: Like you say, if you're putting users directly into an experience where they're closer to the information they were looking for, there's probably a greater likelihood that they're willing to meet you halfway by doing a little extra work to refine it, if you give them tools that are appropriate to the types of results they're seeing. So if it's shopping search, filtering by price or by brand; that's common functionality with a shopping search engine, and maybe we'll see that get into some of the other verticals. But I guess the big question is: in the next three years, are the major engines going to gain enough confidence that they'll provide a deeper vertical experience as the default, rather than as an invisible tab or a visible tab?

Danny: I still tend to think that the way they're going to give a deeper vertical experience is the invisible tab idea: you're not going to be overtly asked to do it, it's just going to do it for you, and then give you options to get out of it if it was the wrong choice. So, both Ask and Google are getting all the attention right now for universal search, or blended search, if you want a generic term for it that doesn't favor one service over the other. The other term is federated search, and I've always hated that because it always felt like something that came out of the Star Trek Enterprise (laugh). No, I want Klingon search! (laugh) I think that in both of those cases, you do the search and the default is still web. Ask will say, over here on the side we have some other results. Yes, universal search is inserting an item here or an item there, but in most cases it still looks like web search, right? They still really feel like OneBoxes. I haven't had a universal search happen to me yet where I've come along and thought, 'that really was something I couldn't have got just from searching the web,' except when I've gotten a map. That's come in when they've shown the map, and that is that kind of dramatic change. I think at some point they will get to that kind of dramatic change, where you just search for "plumbers" and a zip code, and the engine says, 'I'm so confident of it, I'm just going to give you Google Local. I'm not just going to insert a map and give you 7 more web listings down there. I'm going to give you a whole bunch of listings and change the whole interface on you, and if you're going, "well, this isn't what I want," then I'm going to give you some options to escape out of it.'
I like what Ask does, in the sense that it's easy to escape out of it: you just look off to the side and there's web search over here, there's other stuff over there. I think it's harder for Google to do that when they try to blend it all together. The difficulty remains whether people will actually notice that stuff off to the side and make use of it.

Gord: That was actually something that Jakob Nielsen brought up. He said the whole paradigm of the linear scan down the page is such a dominant user behavior, one we've gotten so used to, that engines like Ask can experiment with a different layout where they're going two-dimensional, but will users be able to scan that efficiently?

Danny: I've been using this Boeing versus Airbus analogy when I'm trying to explain to people the differences between what Google is doing and what Ask is doing. Boeing is going, 'Well, we'll build small, fast, energy-efficient jets,' and Airbus is saying, 'We'll build big, huge jets, and we'll move more people, so you'll be able to make fewer flights.' When I look at blended search, Google's approach is: we've got to stay linear, we've got to keep it all in there; that's where people are expecting the stuff, so we're going to go that way. Ask's approach is: we're going to put it all over the place on the page, and we've got this really nice split interface. And I agree with them. Of course, Walt Mossberg wrote that review where he said, 'oh, they're so much nicer, they look so much cleaner,' and that's great, except that he's a sophisticated person, I'm a sophisticated person, you're a sophisticated person; we search all the time. We look at that sort of stuff. A typical person might just ignore it; it might continue to be eye candy they don't even notice. That's the big huge gamble going on between these two sorts of players. And then again, it might not be a gamble, because when you talk to Jim Lanzone, he says, 'My testing tells me this is what our people do.' Well, his people might be different from the Google people. Google has a lot more new people coming over who are like, 'I just want to do a search, show me some things, where are the text links? I'm done.' So I tend to look perhaps more kindly on what Google is doing than some people who try to measure them up against Ask, because I understand that they deal with a lot more people than Ask, and they have to be much more conservative than Ask is. And I think what's going to happen is those two are going to move closer together.
The advantage, of course, that Jim has over at Ask is that he doesn't have to put ads in that column, so he's got a whole column he can make use of, and it is useful; it's a nice sort of place to tuck things in. If you really want to talk about search interfaces, what will be really fun to envision is what happens when Ajax starts coming along and doing other things. Can I start putting the sponsored search results where they're hovering above other results? Are there other issues that come with that? There may be some confusion as to why I'm getting this and why I'm getting that. Can I pop up a map as I hover over a result? I could deliver you a standard set of search results, and I could also deliver you local results on top of a particular type of picture. If I move my mouse along it, I could show you a preview of what you get in local, and you might go, 'Oh wow, there's a whole map there. I want to jump off in that direction.' That would be really fun to see come along, but I'm just not seeing anything come out of it. What we've typically had when people have played with the interface is these gee-whiz things like, 'well, we'll fly you through the results, or we'll group them,' none of which really added to the choice of 'do I want to go vertical, do I not want to go vertical?'

Gord: When we start talking about the search results page becoming a lot more dynamic and interactive, of course the big question is: what does that do for monetization of the page? One of the things that Jakob Nielsen talked about was banner blindness. Do people start cutting out sections of the page? We talked a little about that. How do you make sure the advertising doesn't get lost on the page when there's a lot more visual information to assimilate?

Danny: Well, I think a variety of things are going to start happening there. For example, Google doesn't do paid inclusion, right? But Google has partnerships with YouTube, and they have these channels, and they're going to be sharing revenue from those channels with other people. So when they start including that stuff, perhaps they're getting paid off of it. They didn't pay to put it in the index, but because Google is better able to promote its video channels, more people are going over there, and they're making money off of that as a destination. So in some ways, they can afford to have their video results become more prominent, because they don't have to worry that if you didn't click on the ad from the initial search result, they've lost you. In terms of how the other ads might go, I guess the concern might be: if the natural results are getting better and better, why would anyone click on the ads anyway? Maybe people will reassess the paid results, and some will come through and say that paid search results are a form of search database as well. So we're going to call them classifieds, or we're going to call them ads, and we're going to move them right into the linear display. There'll be issues, because at least in the US, you have the FTC guidelines that say you should really keep them segregated. So if you don't highlight them, or you blend them in some way, you might run into some regulatory problems. But then again, those rules might start to change as search innovation changes. I don't know; the search engines might come up with other things. We're getting toolbars appearing on more and more of our things, and Google might start thinking, 'Well, let's put ads back onto that toolbar.'
We used to have those sorts of things, and everyone seemed to catch on, but they might come back, and that might be another way some of the players, especially somebody like Google, make money beyond just putting ads on the search results page.

Gord: In the next three years, are we going to get to the point where search becomes less of a destination activity than it is now, and the functionality sits underneath more of Web 2.0, or the semantic web, or whatever you want to call it? It almost becomes a mashup of functionality that underlies other types of sites. Are we going to stop going to a Google or a Yahoo as much to launch a distinct search?

Danny: You know, people have been saying that for at least three or four years now, especially with Microsoft: 'Oh, you're not even going to go there, you're going to do it from your desktop.' Vista, which I have yet to actually use (I've got the laptop and I'm about to start playing with it!), is apparently supposed to be even more integrated than XP was. But I still tend to think: we do stuff in our browsers. I know widgets are growing, and I know there's more stuff that's drawing things onto your computer as well, but we still tend to do stuff in our browser. I still see search as something where I'm going to go to a search engine and do the search, with the exception of toolbars. I think we're going to do a lot more searching through toolbars. Toolbars are everywhere; it's really rare for me to start a search where I'm not doing it from the toolbar. I just have a toolbar that sits up there, and I don't need to be at the search engine itself. But I still want the results displayed in my browser, because most of the stuff I'm going to have to deal with is going to be in my browser as well. So it doesn't really help to be able to search from Microsoft Word, right? I don't want all these sites in a little window within Word; I'm probably going to have to read what they say, so I'm probably going to have to go there. I think that changes, though, if I have a media player. Then it makes much more sense, and you can already do this with some media players, to do searches and have the results flow back in. iTunes is a classic example: iTunes is basically a music search engine. Sure, it's limited to the music and the podcasts within iTunes, but it doesn't really make any sense for me to go to the Apple website. Although, interestingly, here's an example where Apple is just a terrible failure.
They've got all this stuff out there, stuff you might be interested in even if you don't use their software, and there's just no way to get to it on the web. The last time I looked, you really had to do the searches in iTunes. So they're missing out on being a destination for people who say, 'I'm not going to use iTunes,' or 'I don't have iTunes,' or 'I'm on a different version.' I don't know if you've downloaded it recently, but it takes forever and it's just a pain.

Gord: I think that covers the main questions I wanted to get through. Is there anything else, as far as search in the next three years, that you wanted to comment on?

Danny: You know, it's hard, because if you'd asked me that three years ago, would I have told you, 'watch for the growth of verticals and watch for the growth of blended search'? (laughs) Right? I've been thinking really hard, because I'm like, 'Gosh, now what am I going to talk about? They're doing both of those things.' I think personalized search is going to continue to get stronger. I do think Google is onto something with their personalized search results. I don't think they're going to put you in an Amazon situation, where you keep being recommended stuff you're no longer interested in; I think people are misunderstanding how sophisticated it can be. I think the next big trend, ironically given what I just said to you, is that search is going to start jumping into devices. Everything is going to have a search box, but it will be appropriate. My iPod itself will have a search capability within it. And the iPhone, to some degree, is maybe a look at how that's happening already. I'll be able to search, access, and get information appropriate to that device within it. Windows Media Center, when I first got that in 2005, I said, this is amazing, because it's basically got TV search built into it. I do the search, and then of course it allows me to subscribe to the program, and it records the program and knows when the next ones are coming up. It makes so much more sense for that search to be in that device than for me to have it elsewhere. I use it all the time; when I want to know when a program's on, I don't have to find the TV listings on the web, I just walk over to my computer and do a search from within the Media Center player. So I think we're going to have many more devices that are internet enabled, and there are going to be reasons why you want to do searches with them, to find stuff for them in particular.
That's going to be the new future of search, and search growth will come from it. In terms of what that means to the search marketer, I think it's going to be crucial to understand that these are going to be new growth areas, because those searches, when they start, are going to be fairly rudimentary. It's going to be back to the days of: OK, they're probably going to be driven off of metadata, so you've got to make sure you have your title and your description, and make sure the item that you're searching for is relevant.

Gord: So obviously all that leads to the question of mobile search. Will mobile search be more useful by 2010?

Danny: Sure, but it's going to be more useful because it's not going to be 'mobile search'; the device is just going to catch up and be more desktop-like. I have a Windows Mobile phone at the moment, and I've downloaded some of the applets like Live Search and Google Maps, and those can be handy, but for the most part, if I want to do a search, I fire up the web browser, I look for what I'm looking for, the screen is fairly large, and I can see what I wanted to find. I think you're going to find that the devices are going to continue to be small and yet gain larger screens, and give you the ability to do direct input better. So if you want to do a search, you can do a search. It's not like you're going to need something that's designed for the mobile device and only shows mobile pages. I think that's going to change. You're going to have some mobile devices that specifically aren't going to be able to do that, and those people, in the end, are going to find that no one is trying to support them.

Gord: Thanks Danny.

Breaking “Auction Order” Explained

One of the things that raised eyebrows in my interview with Diane Tang and Nick Fox was the following section regarding how Google determines which ads rank first and climb into the all-important top sponsored locations:

Nick: Yes, it's based on two things. The primary element is the quality of the ad. The highest quality ads get shown at the top; the lower quality ads get shown on the right hand side. We only promote ads from the top of the auction if we really believe those are truly excellent ads…

Diane: It’s worth pointing out that we never break auction order…

Nick: One of the things that's sacred here is making sure the advertisers have the incentive. In an auction, you want to make sure that the folks who win the auction are the ones who actually did win the auction; you can't give the prize away to the person who didn't win. The primary element in that function is the quality of the ad. Another element of the function is what the advertiser's going to pay for that ad, which, in some ways, is also a measure of quality. We've seen that in most cases, where the advertiser's willing to pay more, it's more of a commercial topic; the query itself is more commercial, and therefore users are more likely to be interested in ads. So we typically see that the queries with high revenue ads, ads that are likely to generate a lot of revenue for Google, are also the queries where the ads are most relevant to the user, so the user is more likely to be happy as well. So it's those two factors that go into it. But it is a very high threshold. I don't want to get into specific numbers, but the fraction of queries that actually show these promoted ads is very small.

This seemed a little odd to me in the interview, and I made a note to ask further about it, but what can I say, I forgot and went on to other things. But when the article got posted on Search Engine Land, Danny jumped on it at Sphinn:

“Seriously? I mean, it’s not an auction. If it were an auction, highest amount would win. They break it all the time by factoring in clickrate, quality score, etc. Not saying that’s bad, but it’s not an auction.”

This reminded me to follow up with Nick and Diane. Diana Adair, on the Google PR team, responded with this clarification:

We wanted to follow up with you regarding your question below.  We wanted to clarify that we rank ads based on both quality score and by bid.  Auction order, therefore, is based on the combination of both of those factors.  So that means that it’s entirely possible that an ad with a lower bid could rank higher than an ad with a higher bid if the quality score for the less expensive ad is high enough.

So, it seems it's the use of the word "auction" that's throwing everyone off here. Google's use of the term includes ad quality; the rest of the world thinks of an auction as one where the highest bid exclusively determines the winner. Otherwise, as Danny said, "it's not an auction." With that interpretation, I then assume that Nick and Diane's comment (which sounds vaguely like the title of a John Mellencamp song) means that Google won't arbitrarily hijack these positions for other types of packages that may include presence on the SERP, as in the current Bourne Ultimatum promotion.
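To make the clarification concrete, here is a minimal sketch of the ranking Google describes: ads ordered by the combination of bid and quality score, so a cheaper, higher-quality ad can outrank a pricier one. The simple bid-times-quality product and all the numbers here are illustrative assumptions on my part; Google's actual quality score formula and thresholds are not public.

```python
# Illustrative ad ranking sketch: ad rank = bid * quality score.
# The product formula and every number below are hypothetical,
# not Google's actual (unpublished) implementation.

def rank_ads(ads):
    """Sort ads by bid * quality, highest ad rank first."""
    return sorted(ads, key=lambda ad: ad["bid"] * ad["quality"], reverse=True)

ads = [
    {"name": "high bid, low quality", "bid": 2.00, "quality": 3},  # ad rank 6.0
    {"name": "low bid, high quality", "bid": 1.00, "quality": 9},  # ad rank 9.0
]

for ad in rank_ads(ads):
    print(ad["name"])
```

Run this and the $1.00 ad comes out on top: its higher quality score outweighs the other ad's higher bid, which is exactly the scenario Diana Adair's clarification describes.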