The ZMOT Continued: More from Jim Lecinski

First published July 28, 2011 in Mediapost’s Search Insider

Last week, I started my conversation with Jim Lecinski, author of the new ebook from Google: “ZMOT, Winning the Zero Moment of Truth.” Yesterday, fellow Search Insider Aaron Goldman gave us his take on ZMOT. Today, I’ll wrap up by exploring with Jim the challenge that the ZMOT presents to organizations and some of the tips for success he covers in the book.

First of all, if we’re talking about what happens between stimulus and transaction, search has to play a big part in the activities of the consumer. Lecinski agreed, but was quick to point out that the online ZMOT extends well beyond search.

Jim Lecinski: Yes, Google or a search engine is a good place to look. But sometimes it’s a video, because I want to see [something] in use…Then [there’s] your social network. I might say, “Saw an ad for Bobby Flay’s new restaurant in Las Vegas. Anybody tried it?” That’s in between seeing the stimulus, but before… making a reservation or walking in the door.

We see consumers using… a broad set of things. In fact, 10.7 sources on average are what people are using to make these decisions between stimulus and shelf.

A few columns back, I shared the pinball model of marketing, where marketers have to be aware of the multiple touchpoints a buyer can pass through, potentially heading off in a new and unexpected direction at each point. This muddies the marketing waters to a significant degree, but it really lies at the heart of the ZMOT concept:

Lecinski: It is not intended to say, “Here’s how you can take control,” but you need to know what those touch points are. We quote the great marketer Woody Allen: “Eighty percent of success in life is just showing up.”

So if you’re in the makeup business, people are still seeing your ads in Cosmo and Modern Bride and Elle magazine, and they know where to buy your makeup. But if Makeupalley is now that place between stimulus and shelf where people are researching, learning, reading, reviewing, making decisions about your $5 makeup, you need to show up there.

Herein lies an inherent challenge for the organization looking to win the ZMOT: whose job is that? Our corporate org chart reflects marketplace realities that are at least a generation out of date. The ZMOT is virgin territory, which typically means it lies outside of one person’s job description. Even more challenging, it typically cuts across several departments.

Lecinski: We offer seven recommendations in the book, and the first one is “Who’s in charge?” If you and I were to go ask our marketer clients, “Okay, stimulus — the ad campaigns. Who’s in charge of that? Give me a name,” they could do that, right? “Here’s our VP of National Advertising.”

Shelf — if I say, “Who’s in charge of winning at the shelf?” “Oh. Well, that’s our VP of Sales” or “Shopper Marketing.” And if I say, “Product delivery,” – “well that’s our VP of Product Development” or “R&D” or whatever. So there’s someone in charge of those classic three moments. Obviously the brand manager’s job is to coordinate those. But when I say, “Who’s in charge of winning the ZMOT?” Well, usually I get blank stares back.

If you’re intent on winning the ZMOT, the first thing you have to do is make it somebody’s job. But you can’t stop there. Here are Jim’s other suggestions:

The second thing is, you need to identify what are those zero moments of truth in your category… Start to catalogue what those are and then you can start to say, “Alright. This is a place where we need to start to show up.”

The next is to ask, “Do we show up and answer the questions that people are asking?”

Then we talk about being fast and being alert, because up to now, stimulus has been characterized as an ad you control. But sometimes it’s not. Sometimes it’s a study that’s released by an interest group. Sometimes it’s a product recall that you don’t control. Sometimes it’s a competitor’s move. Sometimes it’s Colbert on his show poking a little fun at Miracle Whip from Kraft. That wasn’t in your annual plan, but now there’s a ZMOT because, guess what happens — everybody types in “Colbert Miracle Whip video.” Are you there, and what do people see? Because that’s how they’re going to start making up their mind before they get to Shoppers Drug Mart to pick up their Miracle Whip.

Winning the ZMOT is not a cakewalk. But it lies at the crux of the new marketing reality. We’ve begun to incorporate the ZMOT into the analysis we do for clients. If you don’t, you’re leaving a huge gap between the stimulus and shelf — and literally anything could happen in that gap.

Marketing in the ZMOT: An Interview with Jim Lecinski

First published July 21, 2011 in Mediapost’s Search Insider

A few columns back, I mentioned the new book from Google, “ZMOT, Winning the Zero Moment of Truth.” But, in true Google fashion, it isn’t really a book, at least, not in the traditional sense. It’s all digital, it’s free, and there’s even a multimedia app (a Vook) for the iPad.

Regardless of the “book’s” format, I recently caught up with its author, Jim Lecinski, and we had a chance to chat about the ZMOT concept. Jim started by explaining what the ZMOT is: “The traditional model of marketing is stimulus – you put out a great ad campaign to make people aware of your product, then you win the FMOT (a label coined by Procter and Gamble) — the moment of truth, the purchase point, the shelf. Then the target takes home the product and hopefully it will live up to its promises. It makes whites whiter, brights brighter, the package actually gets there by 10:30 the next morning.

What we came out with here in the book is this notion that there’s actually a fourth node in the model, of equal importance. We gave an umbrella name to that new fourth moment that happens in between stimulus and shelf: since it comes before the First Moment of Truth, one minus one is zero, the ‘Zero Moment of Truth.'”

Google didn’t invent the ZMOT, just as Procter & Gamble didn’t invent the FMOT. These are just labels applied to consumer behaviours. But Google, and online in general, have had a profound effect on a consumer’s ability to interact in the Zero Moment of Truth.

Lecinski: “There were always elements of a zero moment of truth. It could happen via word of mouth. And in certain categories, of course — washing machines, automotive, certain consumer electronics — the zero moment of truth was won or lost in print publications like Consumer Reports or Zagat restaurant guide or Mobil Travel Guide.

But those things had obvious limitations. One: there was friction — you had to actually get in the car and go to the library. The second is timeliness — the last time they reviewed washing machines might have been nine months ago. And then the third is accuracy: ‘Well, the model that they reviewed nine months ago isn’t exactly the one I saw on the commercial last night that’s on sale this holiday weekend at Sears.'”

The friction, the timeliness and the simple lack of information all lead to an imbalance in the marketplace that was identified by economist George Akerlof in 1970 as information asymmetry. In most cases, the seller knew more about the product than the buyer. But the Web has driven out this imbalance in many product categories.

Lecinski: “The means are available to everybody to remove that sort of information asymmetry and move us into a post-Akerlof world of information symmetry. I was on the ad agency side for a long time, and we made the TV commercial assuming information asymmetry. We would say, ‘Ask your dealer to explain more about X, Y, and Z.’

Well, now that kind of a call to action in a TV commercial sounds almost silly, because you go into the dealer and there’s people with all the printouts and their smartphones and everything… So in many ways we are in a post-Akerlof world. Even his classic example of lemons for cars, well, I can be standing on the lot and pull up the CARFAX history report off my iPhone right there in the car lot.”

Lecinski also believes that our current cash flow issues drive more intense consumer research. “Forty-seven percent of U.S. households say that they cannot come up with $2,000 in a 30-day period without having to sell some possessions,” he says. “This is how paycheck to paycheck life is.”

When money is tight, we’re more careful with how we part with it. That means we spend more time in the ZMOT.

Next week, I’ll continue my conversation with Jim, touching on what the online ZMOT landscape looks like, the challenge ZMOT presents marketers and the seven suggestions Jim offers about how to win the Zero Moment of Truth.

Interview with Stefan Weitz posted at SNL

Apologies for my brief hiatus from blogging last week. I was in Santa Cruz for an extended weekend with my wife, which was fabulous…thanks for asking. Also got a chance to catch Wicked in SF. It was a great way to kick off the weekend.

In between Defying Gravity and bird watching on the California coast, I did get a chance to post Part One of an interview with Microsoft’s Stefan Weitz on Search Engine Land. It was the kickoff of a series I’m doing on where search goes from here. Stefan and I talked mainly about Microsoft’s “Decision Engine” strategy and what Microsoft currently thinks is “broken” about search. An interview with Stefan can’t help but be interesting, so I encourage you to check it out over at Just Behave.

In the meantime, I’m still hopping across the country, but am hoping to get a few new posts done on the Psychology of Entertainment in between plane rides and racking up Hilton HHonors points. Why do I feel a compelling kinship to George Clooney’s character in Up in the Air?

Search Insider Sneak Peek: The Three-for-One Keynote

First published November 19, 2009 in Mediapost’s Search Insider

Avinash Kaushik, Google’s Analytics Evangelist, will be kicking off the Search Insider Summit in just two weeks. I had the opportunity to chat with Avinash last week about what might be in store. As anyone who has heard him before would agree, it won’t be sugar-coated, it will be colorful and it will probably wrench your perspective on things you took for granted at least 180 degrees. Here are the three basic themes he’ll be covering:

The Gold in the Long Tail

Avinash believes there is unmined search gold lying in the long tail of many campaigns. The secret is how to find it in an effective manner. I’ve talked before about how long-tail strategies must factor in the cost of administering the campaign, which can be a challenge as you expand into large numbers of low-traffic phrases. Chris Anderson’s Long Tail theory assumes frictionless markets where there are no or very low “inventory management” costs, such as digital music (iTunes) or print-on-demand bookstores (Amazon). In theory, this should apply to search but, in practice, effective management of search campaigns requires significant investments of time. You have to create copy, manage bid caps and, optimally, tweak landing pages, all of which quickly erode the ROI of long-tail phrases, so I’ll be very interested to see how Avinash recommends getting around this challenge. I’m sure if anyone can find the efficiencies of long tail management, Avinash Kaushik can.
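To make that erosion concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is invented for illustration; the only point is that a fixed per-phrase management cost, which amortizes easily on a head term, can swamp the thin margins of a low-traffic phrase.

```python
def net_return(monthly_revenue, media_cost, minutes_managed, hourly_rate=50.0):
    """Revenue minus media spend and the labor cost of managing one phrase.

    All inputs are hypothetical; hourly_rate is an assumed fully loaded
    cost of an account manager's time.
    """
    labor = (minutes_managed / 60.0) * hourly_rate
    return monthly_revenue - media_cost - labor

# A head term: high spend, high revenue; management time amortizes well.
head = net_return(monthly_revenue=2000.0, media_cost=1200.0, minutes_managed=30)

# A long-tail phrase: tiny spend and revenue, but copy, bid caps and
# landing-page tweaks still take a fixed chunk of someone's time.
tail = net_return(monthly_revenue=12.0, media_cost=3.0, minutes_managed=15)

print(head)  # positive: 775.0
print(tail)  # negative: the labor cost alone pushes it underwater
```

The auction cost of a long-tail click may be pennies, but the management overhead is roughly constant per phrase, which is exactly the friction Anderson's frictionless-market assumption leaves out.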

Attribution Redefined

For the past three Search Insider Summits, attribution has been high on the list of discussion topics. Avinash thinks much of the thinking around attribution is askew (his term was not nearly as polite). All search marketers are struggling with attribution models for clients with longer sales cycles; often these models are little more than a marginally educated guess. I believe simply crunching numbers cannot solve the convoluted challenge of attribution. The solution lies in a combination of qualitative and quantitative approaches. This, by the way, is the topic for another panel later in the day, “Balancing Hard Data & Real People.” Avinash, despite his reputation as the analytics expert, always drops the numbers into a context that keeps human behavior firmly in focus.

Search Data Insights

The third topic that Avinash will be covering is how to take the massive set of consumer intent signals that lie within the search data and leverage it to not only improve your search strategies, but every aspect of your business. We chatted briefly on the phone about how unfortunate it is that search teams are often separated from much of the day-to-day running of a company. Typically, search marketers and their vast resources of campaign and competitive intelligence are not even connected to the other marketing teams. Avinash will show how the “database of intentions” can be effectively mined to provide unprecedented insight into the hearts, minds and needs of your market.

Any one of these topics is worthy of a keynote slot, but at the Search Insider Summit, you’ll be getting all three! See you there in just two weeks!

Talking Search with Dr. Jim Jansen at Penn State

This is the full transcript of an interview with Professor Jim Jansen at Penn State University. Excerpts from the interview are running in two parts (Part One ran a few weeks ago) on Search Engine Land. I wrote a column that provided a little background on Dr. Jansen on Search Engine Land.

Gord:
Jim, we’ll start by laying out some of the research you’ve been doing over the past year and a half and then we’ll dig deeper into each of those as we find interesting things. Just give me the quick 30- or 60-second summary of what you’ve been working on in the last little while.

Jim:
I have several research projects going on. One that I really find interesting is analyzing a five calendar year search engine marketing campaign from a major online retailer and brick-and-mortar retailer. It’s about 7 million interactions over that time, multi-million dollar accounts and sales and stuff. A fascinating temporal analysis of a search engine marketing effort.
I’ve been looking at that at several different levels – the buying funnel being one, aspect of branding being another, and then the aspect of some type of personalization, specifically along gender issues. And so that’s been very, very exciting and interesting and (has offered) some great insights.

Gord:
I’m familiar with the buying funnel one because you were kind enough to share that with me and ask for my feedback, so let’s start there. I know you went in to prove out some assumptions, for example, is there a correlation between the nature of the query and where people would be in the buying funnel? Are there identifiable search behaviours that map to where they might be in their purchase process? What did you find?

Jim:
I looked at it at several different levels. One goal was to verify whether the buying funnel was really a workable model for online e-commerce searching or was it just a paradigm for advertisers to, you know, get their handle around this chaos. And if it’s an effective model, what can it tell us in terms of how advertisers should respond?
In terms of the first question, we had mixed results. At the individual query level you can classify individual queries into different levels of this buying funnel model. There are unique characteristics that correspond very nicely to each of those levels. So in that respect, I think the model is valid.

Where it may not be valid is specifying this process that online consumers go through. We found that, no, it didn’t happen quite like we assume. There was a lot of drop-out and they would do a very broad query and that might be all.
So we looked at the academic literature – you know, what theoretically could deal with that or explain that? – and the idea of sufficing seemed to fit. If it is a low cost, they won’t spend a lot of time, they will just purchase it and buy it.
In terms of classifying queries in terms of what advertisers’ payoff is, I think the most interesting finding was that the purchase queries – the last stage of the buying funnel – were the most expensive and had no higher payoff than the awareness or the very broad, relatively cheaper queries. From talking to practitioners, that is a phenomenon that they have noted also … which is why a lot of people still bid on very broad terms, to snatch these potential customers at an early stage.

Gord:
Based on what you’ve seen, there are a couple of really interesting things. You and I have talked a little bit about this, but we similarly have found that you can’t assume a search funnel is happening because people use search at different stages and they’ll come in and then they’ll drop out of the process, and they may come in later or they may not, they may pursue other channels. But the other thing we found is sometimes there’s a remarkable consistency in the query used all the way through the process and that quite often can be navigational behaviour. It can be people who say, “Okay, the last time I did this, I searched on Google for so-and-so and I remember the site I found was the third or fourth level down,” and they just use the same route to navigate the online space over and over again. If you’re looking at it from a pure query level, it’s a bit of a head-scratcher because you’re going, “Well, why did they use the same query over and over?” but again, it’s one of those nuances of online behaviour. Did that seem to be one of the possible factors of some of the anomalies in the data?

Jim:
Well, that trend or something similar to it has been appearing in a lot of different domains and researchers are attributing it to “When I do a query, I expect a certain result.” So, you know, a query that may be very informational, what we’re finding is that searchers expect a Wikipedia entry. So in other words, a very navigational intent behind that very informational query. And I think the phenomenon you’re describing is very similar. We have a transactional-type query and users are expecting a certain web page, a navigational aspect, and that “Okay, I have an anchor point here that I’m going to go to.” And then off search engine, maybe they do more searching and actually do some type of buying funnel process. But at the search engine, yes, we’re seeing a lot of that navigational aspect. I just looked at a query log from a major search engine and an unbelievable number of queries were just navigational in nature.

Gord:
We’ve certainly seen that. A lot of our recent research has been in the B2B space, so it’s a little bit different but certainly it follows those same lines. When we looked at queries that people would use, a large percentage of them were either very specific or navigational in nature.

You know, the idea of satisficing, of taking a heuristic shortcut with their level of research is also interesting. It seems like if the risk is fairly low, the online paths are shorter. Is that what you were finding?

Jim:
Yes, and the principle of least effort is how it’s also presented. We see it in web searching itself generally in how people interact with search engines and how they interact with sites on the web. They may not get an optimal solution, but if it’s something that’s reasonable and if it’s good enough, they’ll go for it. That seems to be occurring in the e-commerce area also: “I want to buy something relatively cheap. This particular vendor may not have the best price, but it’s close to what I’m thinking it should be. Just go and get it done, get it over with, buy it.”

Gord:
I would suspect that that would also be true in product categories where you have mentally a good idea of what an acceptable price range would be, right?

Jim:
Yes.

Gord:
So if it’s a question of making a trade-off for $2 but saving yourself a half hour of time, as long as you’re aware of what those price ranges would be, you’re more apt to make that shortcut call, correct?

Jim:
Yes. It does assume some knowledge and risk mitigation –if it’s a small purchase and that varies a little bit for each of us, but you’re willing to cut your costs of searching and trying to find the best deal just to get it done.

Gord:
I suspect part of this would also  be your level of personal engagement with the product category you’re shopping in. So I’ll spend way too much time researching a purchase of a new gadget or something that I’m interested in just because I have that level of engagement. But if it’s a purchase that’s on my to-do list, if it’s just one task I have to get done and then move on to the next thing, I suspect that that’s where that satisficing behaviour would be more common.

Jim:
Yes. Now you bring up a really good point. If it becomes entertainment – like a gadget that you enjoy researching – it’s no longer work, it’s no longer something you get done. The process of doing it makes it enjoyable so you don’t mind spending a lot of time. In those kind of cases, the goal really is not the purchase, the goal is the looking.

Gord:
We found that alters the behaviour on the search page as well. So if it’s a task-type purchase where I just have to go and get there, you see that satisficing play out on the search page too. Typically when we look at engagement with the search page, you see people scan the top three or four listings. If it’s that satisficing type of intent where they’re saying, “I just want to buy this thing,” you’ll see people scan those first three or four and pick what they feel is the path of least effort. They go down and say, “Okay. It’s a book. Amazon’s there. I know Amazon’s price. I’m just going to click through and order this,” but if it’s entertainment, then suddenly they start treating the search page more like a catalogue where they’re paying more attention to the brands and they’re using that as a navigational hub to branch off to three or four different sites. Again, it can really impact the nature of engagement with the web… or with the search page.

Jim:
Absolutely, and I really like your analogy of a catalogue. You know, there are some people that love just looking at a catalog – flipping through it, looking at the dresses and shirts or gadgets or sporting gear or whatever. And so that’s a much different engagement than flipping through the classified ads trying to find some practical thing you need. The whole level of engagement is at totally opposite ends of the spectrum, really.

Gord:
As an extreme example of that, we did some eye-tracking with Chinese search engines and we found that with Baidu in particular, people were using it to look for MP3 files to download. So when we first saw the heat maps – and of course it was all in Chinese, so I couldn’t understand what the content on the page was without having it translated – I saw these heat maps going way deeper and much longer than we ever saw in typical North American behaviour. We saw a level of engagement unlike anything we had ever seen before. And it was exactly that. It was a free task – they were looking for MP3 files to download and they were treating the search page like a catalogue of MP3 files. They were reading everything on the page. I think that’s just one extreme example of this catalogue browsing behaviour that we were talking about.

Let’s go to one of the other findings on the buying funnel: that quite often the more general, broader categories from an ROI perspective can perform just as well as what traditional wisdom tells us are your higher-return terms: those closer to the end of the funnel, the ones that are more specific, longer, more transactionally oriented. What’s behind that?

Jim:
Like a lot of these questions there’s no simple answer because there are plenty of exceptions to the rule you just described. There are some very broad terms that are very cheap, others that are very expensive. On the purchase side, there are some key phrases that are very cheap because they’re so focussed and others are expensive. But in this particular analysis – and again, this was 7 million transactions over 33 months, from mid-2005 to mid-2008 – the awareness terms were cheaper than the purchase terms and they generated just as much revenue.

I think a lot of it is that perhaps the items this particular retailer was selling fell into that sufficing behaviour: gifts, fairly low-cost items – there was just no need to progress all the way to that particular purchase phase.

To me it was really very unexpected. I really expected those purchase terms to actually be cheaper because they were more narrowly focussed and to generate more revenue, but it didn’t turn out that way.
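The tally Jim describes, comparing average cost per click and revenue across funnel stages, can be sketched in a few lines of Python. The stage labels and every figure below are invented for illustration; only the shape of the aggregation mirrors the kind of analysis discussed.

```python
from collections import defaultdict

# Hypothetical query-level records: (funnel_stage, cost, revenue).
# None of these numbers come from Jim's study.
records = [
    ("awareness", 0.25, 4.00),
    ("awareness", 0.20, 0.00),
    ("research",  0.60, 3.50),
    ("purchase",  1.40, 4.20),
    ("purchase",  1.10, 0.00),
]

# Accumulate spend, revenue and click counts per funnel stage.
totals = defaultdict(lambda: {"cost": 0.0, "revenue": 0.0, "clicks": 0})
for stage, cost, revenue in records:
    totals[stage]["cost"] += cost
    totals[stage]["revenue"] += revenue
    totals[stage]["clicks"] += 1

for stage, t in totals.items():
    cpc = t["cost"] / t["clicks"]
    roi = t["revenue"] / t["cost"]
    print(f"{stage}: avg CPC ${cpc:.2f}, revenue/cost {roi:.1f}x")
```

With these toy numbers the awareness bucket has a lower average CPC and a higher revenue-to-cost ratio than the purchase bucket, which is the counterintuitive pattern the interview reports, though of course real campaign data is what makes the finding interesting.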

Gord:
That brings up an interesting point we’ve seen with client behavior, especially given the current economic condition. We found that a lot of clients are tending to optimize down the funnel – they are tending to look at the keyword lists they’re bidding on and move further and further down to more and more specific phrases, because the theory is – and generally they do have analytics to back this up – that there’s greater ROI on that because these are usually people searching for a specific model or something, which is a pretty good indicator that they’re close to purchase. But I think one of the by-products of that is as people optimize their campaigns, those long tail phrases are getting more and more expensive because there’s more and more competition around them, and as people move some of their keyword baskets away from those awareness terms, maybe the prices on those, it all being based on an auction model, are starting to drop. Do you think that could be one of the factors happening here?

Jim:
That very well could be. The whole online auction is designed around (the concept that) as competition increases, cost-per-clicks will increase also. It also may be that those particular customers don’t mind clicking on a few links to do some comparison-shopping and may end up going somewhere else. They may have a higher aspect of intent to purchase, but the competition among where they’re going to buy is more intense.

You know, compare that to this sufficing shopper: you just have to get that person’s attention first with a reasonably priced product and you will make the sale. That is the one issue with analytics in terms of transaction log analysis – we can analyze behaviours and we can make some conjectures about what happened, but you need lab studies and surveys to complement the data, to get the why part.

Gord:
That’s a great comment and obviously something that people have heard from me over and over again, because we do tend to focus more on the quantitative approach. I think this goes back to what we were talking about originally – online information gathering is a natural extension of where we are in our actual lives, so it’s not like a distinct, contained activity. It’s not like we set aside an hour each day to go through all our online research. More and more, we always have an outlet to the internet close by and as we’re talking or as we’re thinking about something, it’s a natural reaction just to go and use a search engine to find out more information. And I think because it’s such a natural extension of what’s happening in our day-to-day lives, that the idea of this one linear progression through an online research session isn’t the way people act. I think it’s just an extension of whatever’s happening in our real world. So we may do a search, we may find something, it may be an awareness search, and then we may pursue other paths to the eventual purchase. It’s not like we keep going back and forth between a search engine with this nicely refined search funnel. It’s not that neat and simple, just like our lives aren’t that neat and simple.

Jim:
Yes, all models get rid of the noise that reflects reality. So the neater they are, the less accurate they are, and the buying funnel is obviously very neat, so I think it’s reasonable that it represents a very small number of searches that actually progress exactly like that. We’re very nonlinear in the things we do and so I assume our purchase behaviors are too.

Gord:
I want to move on to the question of branding a little bit, because you mentioned that that was one of the areas you were looking at. And at Enquiro, we’ve done our own lab-based studies on branding, so I’d be fascinated to hear what came out as far as the impact of branded search.

Jim:
This year, I’ve really got into this whole idea of branding in terms of information seeking. That’s really my background, web searching and how people find and assemble information. One of my first studies was to look at what a search engine brand would do to how searchers interpreted the results. So I ran a little experiment where I switched the labels among Google, Yahoo, and MSN while the results stayed the same. Certainly the search engine brand has a major lift to it.
In this particular study using the search engine marketing data, we did multiple comparisons of brand or product name and the keyword in the title, in the snippet, in the URL to see if there was a correlation with higher sales. And without a doubt the correlation between a query with a brand term and an advertisement with a brand term is extremely, extremely positive. That particular tightness seems to resonate with online consumers.

Gord:
So just to repeat: if somebody’s using a branded query and they see that brand appear in the advertising, there’s obviously a statistical correlation with success there, right?

Jim:
Yes. In that particular case, the correlation was very positive, both for the click happening and for the click resulting in a sale. It really relates to the whole idea of dynamic keyword insertion in advertisements…

Gord:
So to follow that thread a little bit further, obviously if people have a brand in mind and they see that brand appear, then that’s an immediate reinforcement of relevancy. But what happens if the query is generic in nature, it’s for a product category, but a brand appears that people recognize as being a recognized and trusted brand within that product category? Did you do any analysis on that side of things?

Jim:
Not specifically. No, I did not. That’s a real good question though, but no, I did not do that type of correlation.

Gord:
The last thing I want to ask you about today, Jim, is this idea of personalization by gender. I believe from our initial discussions that you’re just in the process of looking at the data from this portion. Is that right?

Jim:
Well, we finished the analysis. Now we’re just writing it up.

Gord:
So is there anything that you can share with us at this point?

Jim:
Again, the results to me were counterintuitive from what I expected. Usually, the idea of personalization is that the more personalized you get, the higher the payoff, the efficiency and effectiveness. We took queries from this particular search engine marketing campaign and classified them based on gender probability using Microsoft’s demographic tool, which will classify a query by its probability of being male or female. We looked at it this way: not whether the searcher was male or female, but whether the particular query fit a gender stereotype. Did it have, for example, a kind of a male feel to it or stereotypical implications?

Gord:
So more women would search for “Oprah,” and more men would search for “NASCAR”?

Jim:
Exactly.

Gord:
What did you find?

Jim:
In terms of sales, far and away the most profitable were the set of queries that were totally gender-neutral. We took the queries and divided them into seven categories: “very strongly male,” “generally male,” “slightly male,” “gender neutral,” “slightly female,” “strongly female,” “very female.” By two orders of magnitude, the most profitable were the ones that were totally gender-neutral.

Gord:
Fascinating.

Jim:
Yes, as a researcher who does personalization research, my guess would be “Ah, the more targeted they are, the more profitable.” But no, the means were two orders of magnitude different.

Gord:
So give us an example of a gender-neutral query.

Jim:
We defined gender-neutral queries as those the Microsoft tool classified anywhere from exactly gender-neutral – which is zero – up to about 59% either side, so we had a fairly big spread there. And there was a trend that was somewhat expected: queries that were more female-targeted generated higher sales than their male counterparts.
Here are some examples of queries based on the Microsoft tool: "electronic chess," 100% – the tool classified that as 100% male. For gender-neutral queries, I'll just randomly pick a couple here: "atomic desk clock," "water purifier."
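To make the bucketing concrete, here's a minimal sketch of how queries might be sorted into those seven categories. The thresholds, the function name, and the example scores are all invented for illustration; Microsoft's actual demographic tool and the study's exact cut-offs aren't specified here.

```python
def gender_bucket(male_prob: float) -> str:
    """Map a query's probability of being male (0.0-1.0) to one of
    seven buckets. 0.5 is perfectly gender-neutral; the neutral band
    is deliberately wide, echoing the "fairly big spread" Jim describes.
    Thresholds are hypothetical.
    """
    buckets = [
        (0.90, "very strongly male"),
        (0.75, "strongly male"),
        (0.60, "slightly male"),
        (0.40, "gender neutral"),
        (0.25, "slightly female"),
        (0.10, "strongly female"),
    ]
    for threshold, label in buckets:
        if male_prob >= threshold:
            return label
    return "very strongly female"

# Hypothetical scores for the example queries from the interview:
print(gender_bucket(1.00))  # "electronic chess" -> very strongly male
print(gender_bucket(0.50))  # "water purifier"   -> gender neutral
```

In a study like the one described, each query's bucket would then be joined against its sales data to compare profitability across the seven groups.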

Gord:
I know you’re just writing this up now, but any ideas as to why that might be?

Jim:
One thing coming out of the personalization research is that, at a certain level, we are all totally unique. You can personalize to a general category up to a certain level, but beyond that it's either not doing much good or may actually get in the way. And that may be what's happening here: these very targeted gender keyword phrases are just not attracting the audience that the more gender-neutral queries and keywords are.

Again, it's a "why" thing. We've spent a lot of time in web search trying to personalize at the individual level and really haven't gotten very far. But now people are trying to do things like personalize to the task rather than the individual person, and there are some interesting things happening there. Spell checks, query reformulations and things like that are very task-oriented rather than oriented to the individual searcher.

Gord:
I remember Marissa Mayer from Google saying that when Google was looking at personalization, they found by far the best signal was the query string – what immediately preceded the search, or a series of search iterations. They found that a much better signal to follow than trying to do any person-level personalization, which is what you're saying. If you can look at the context of the task someone is engaged in and get some idea of what they're trying to accomplish, that's probably a better application of personalization than trying to get to know me as an individual and anticipate what I might query for any given objective.

Jim:
Yes, it's just so hard to do. You know, Gord is different than Jim, and Gord today is different than Gord was five years ago. Personalizing at the individual level is just very difficult, and may not even be a fruitful area to pursue.

Gord:
I remember when Google first started talking about personalization, there was this flurry of interest around personalization in search. That was probably two, two and a half years ago, and it really seems to have died down; you just don't hear about it as much. At the time, I remember saying that personalization is a great thing to think of in ideal terms – it certainly would make the search experience better if you could get it right, or even halfway right – but the challenge is doing just that. It's a tough problem to tackle.

Jim:
Yes, and as you mentioned earlier, we’re nonlinear creatures, we’re changing all the time. I can’t even keep up with all my changes and I can’t imagine some technology trying to do it. It just seems an unbelievably challenging, hard task to do.

Gord:
I think the other thing is – and certainly in my writings and readings this becomes clearer and clearer – that we don’t even know what we’re doing most of the time. We think we have one intent but there’s much that’s hidden below the rational surface that’s actually driving us. And for an algorithm to try to read something that we can’t even read ourselves is a task of large magnitude to take on.

Jim:
That's a really good way of looking at it. I've commented on that before in terms of recommending a movie or book to me. I don't even know what books and movies I like until I see them. Sometimes I pick up a book and say, "Oh, I'm going to really love this," only to get a chapter into it and realize, "Okay, this is horrible." And I think you see that in the Netflix challenge. So many organizations have laboured over it, and finally it looks like perhaps this year someone may win by combining 30 different approaches simultaneously to the very simple problem of "recommend a movie." It's just amazing the computational variations that are going on.

Gord:
Amazon has obviously been trying to do this. They were one of the first to look at collaborative filtering and personalization engines, and they probably do it about as well as anyone. But even then, when I log on to Amazon, it's not that their recommendations are that far off base; it's that, given what I buy on Amazon, they're dealing with this weird fragmented personality. One time I'm ordering a psychology textbook because it has to do with research I'm doing, and the next time I'm turning around and ordering a DVD box set of The Office – or even worse, the British version of The Office, which really throws it for a loop.

Jim:    [laughs]

Gord:
Then I’m ordering a book for my daughter like Twilight.  Amazon is going, “I don’t know who this Gord Hotchkiss is, but he’s one strange individual.”

Jim:
From my interaction with Amazon, the recommendations I have found most effective are “You bought this book. Other people that bought this book bought these books” which I view as a very task-oriented personalization. And the other is a very broad, contextual one, “Here’s what other people in your area are buying,” which fascinates me. It’s almost like a Twitter, Facebook, social networking thing: “Oh, wow. I like that book,” you know? These task-oriented context personalizations, at least in my interactions, have been the most effective.
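Jim's first example is classic item-to-item co-occurrence. As an illustrative aside – with invented purchase data and names, not Amazon's actual method – the core idea can be sketched in a few lines of Python:

```python
# Sketch of "people who bought this book also bought these books":
# count how often pairs of items appear in the same basket, then
# recommend the items most often co-purchased with a given item.
# The baskets below are entirely made up for illustration.
from collections import Counter
from itertools import combinations

baskets = [
    {"zmot", "moneyball"},
    {"zmot", "moneyball", "good to great"},
    {"zmot", "good to great"},
    {"moneyball", "freakonomics"},
]

co_counts: Counter = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1  # count the pair in both directions
        co_counts[(b, a)] += 1

def also_bought(item: str, n: int = 2) -> list:
    """Items most often co-purchased with `item`."""
    scores = Counter({b: c for (a, b), c in co_counts.items() if a == item})
    return [b for b, _ in scores.most_common(n)]

print(also_bought("zmot"))
```

The appeal of this style of recommendation, as Jim notes, is that it keys off the task at hand (the book in front of you) rather than a model of you as a person.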

Gord:
You obviously bring up that intersection between social and search, which is getting a lot of buzz with the explosion of Twitter and the fact that there’s now real-time search that allows you to identify patterns within the complexity of the real-time searches. We’ve known in the past in other areas that generally those patterns as they emerge can be pretty accurate, so that opens up a whole new area for improving the relevancy of search.

Jim, one last question while we’re talking about personalization. This is something I wrote about in an article a little while ago and I’d love to get your take on it as the last word of this interview. We were talking about personalization and getting it right more often, and the fact is the way we search now, engines can be somewhat lax in getting it right. There’s a lot of real estate there, we scroll up and down. The average search page has something between 18 and 20 links on it when you include the sponsored ones. It’s more like a buffet: “We’re hoping one of these things might prove interesting to you or whet your appetite.” But when we move to a mobile device, the real estate becomes a lot more restrictive and it becomes incumbent on the engines to get it right more often. We can’t afford a buffet anymore, we just need that waiter who knows what it is we like and can recommend it. What happens with personalization as the searches we’re launching are coming from a mobile device?

Jim:
That's a great question. I think it's one of those areas that has gotten a lot of talk – everybody keeps saying, again, "This is the year mobile search is going to take off." That's been going on for four or five years now and, at least here in the US, it hasn't really happened yet. But what I think is going to make it hit the mainstream is this combination with localized search.
When you have a mobile device, the technology has so much more information about you: it's got your location to within a couple of feet, the context you're in can really start entering the picture, and information gets pushed to you – I'm thinking of tagged buildings and restaurants and cultural events and so on. And with my mobile device, where I can talk into it, I don't even have to type anything. I ask, "What's going on in the area?" and it automatically knows my location and the time, and perhaps something about me and the things I've searched on before. "Oh, you like coffee shops where there's music playing. Guess what? Boom. There are five right near you that have live entertainment right now." So in that respect it'll be a little more narrowed search, but the technology will have so much more information about us that in a way it makes the job easier. The problem is going to be the interface and the presentation of the results.

Gord:
We're talking about, you know, subvocalization commands and heads-up displays. You start looking at that and say, "Wow, that would be pretty cool," but…

Jim:
Yes. Imagine being able to walk through a town… I live in Charlottesville, Virginia. Tons of history here from 400 years ago, when Europeans first settled here – Thomas Jefferson, James Madison, etc., etc. Being able just to walk down Main Street and have tagged buildings interface with my mobile device… I'm a big history buff, so getting that information pushed to me, or at least available when I ask for it, is a wonderful, wonderful area of personalization. This idea of localized search on mobile devices may be the thing that brings it all together and makes mobile search happen.

Gord:
It's fascinating to contemplate. And I know I promised that was going to be my last question, but I'm going to cheat and squeak one more in – it's really a continuation. You remember the old days of Longhorn at Microsoft, when they were working on what eventually became Vista. They were talking about building search more integrally into everything they did, and they had this whole idea of Implicit Query, which really excited me, because if anyone knows what you're working on at any given time, it should be Microsoft, at least on the desktop. They control your e-mail, your word processing, your calendar. If you could combine all those signals – the document you're writing, the next appointment you've got coming up, the trip you're taking tomorrow – imagine how that could intersect with search and really turn into a powerful, powerful thing. I remember saying, years ago, "That could kill Google. If Microsoft can pull this off, that could be the Google killer." Of course, we know now that that never happened. But if we take all that integration and all that knowledge about what you're doing, what you're doing next and where you are, and move it to a mobile device, that's really interesting. Looking at where Google is going, introducing more and more things that compete directly against Microsoft… is that where Google's heading – to become our big brother that sits in our pocket and continually tells us what we might be interested in?

Jim:
You know, the "Big Brother" label has certain negative connotations, so I don't want to say that they are Big Brother-ish in that regard. But certainly, with their movement into free voice and free directory assistance, they will soon have a voice data archive that will allow them to do some amazing things with voice search, which would be an awesome feature for mobile devices: being able to talk into a mobile device, have it recognize you nearly 100% of the time, and execute the search.
Google, of course, is the one that knows what they're doing, but certainly I think it would be naive of them not to be exploring that particular area. And in contrast to what you said about Microsoft and the desktop: the desktop is just so busy. You're getting so many different signals – business, personal things; my kids use my computer sometimes. The context is so large on the desktop, but on a mobile device it's narrower. You have some telephone calls, you can do some GPS things, so the context is narrower but very, very rich in that narrow domain. I think it's a really hot area of search.

Interview: Branton Kenton-Dau of VortexDNA

In this week's Just Behave column, I had the opportunity to interview Branton Kenton-Dau from VortexDNA. I've posted the complete transcript here for those who are interested:

Gord: 
So I think what we'll do in this interview is cover, on a basic level, what VortexDNA does, and then get into a little more about the potential I think it has for users. It's obviously an interesting idea, using core values to try to determine intent. Maybe I'll let you walk through a little of how VortexDNA works, and why your approach is unique.

Branton Kenton-Dau: 
Yeah, thanks, Gord. I think that's a great place for us to start. For us, it probably goes back to the Human Genome Project itself, where as a society we initiated a project to map our human DNA. The great vision was that once we knew what our physical DNA was like, we would be able to define the characteristics of our world and, in particular, help prevent serious illnesses. One of the outcomes of that project was that we actually found there weren't enough genes – too few genes to map the 100,000 chemical pathways of our body. Since then, science has demonstrated that our physical DNA doesn't actually determine who we are. The whole science of epigenetics is saying it's our environment – what we eat and, particularly, what we believe about ourselves – that determines our propensity to be ill, to be healthy, to be successful or not. So our belief system is a major factor in determining who we are. What's exciting about that is that it represents a shift for us as a society: from the very deterministic view of ourselves as basically physical machines, either broken or not depending on what our parents gave us, to the idea that we are beings creating our own lives, built much more out of what we believe about ourselves at any moment. That's exciting, because what we believe about ourselves can obviously be changed. Basically, VortexDNA is a technology that came out of the insight that the way we structure our beliefs is governed by the mathematics of complex systems. What that means is that we know the structure of our beliefs, and because of that we can map out the structure of our intentional DNA – the intentions behind the world we create. That's basically the breakthrough, the technology.
It provides a map of the way people organize who they are – literally who they are – through their belief systems. Out of that comes the opportunity to create a better world for yourself, whether that's finding your best life partner, finding better research results, or finding better car insurance rates because your particular belief system has a low propensity for accidents. It touches every part of your life, because we are actually mapping human characteristics – the true genome, if you like, based upon the new science.

Gord: 
Okay, this is an interesting and undoubtedly unique approach – I don't know anyone else who is doing this. But you know, I approach it with a fair degree of cynicism. Obviously, if you learn more about my belief system, you can try to map that against the content of the internet. But how well does that actually work? My beliefs are the foundation, but on top of that there are a lot of layers of intent for a lot of different things. How granular can a belief system be in disambiguating intent? In some searches I can see it working very well, where it points to sites that resonate with my belief framework. But in others, where it's a much more practical "looking for information" search, will anticipating what my beliefs might indicate would be a good site really be a good indicator of relevance?

Branton Kenton-Dau: 
That's a really good question, Gordon. The answer is that we don't know yet. We are at the very early stages of what is really the science of human intention – that's really what it's about. What I can share with you is that we validated the technology last year against Google search results, where we were able to show that we can improve Google's page ranking by up to 14%, which translated into a 3% improvement in click rates. What that seems to be saying is that it does help, and that's across the board – people searching for anything and everything possible on the web; we were analyzing all that data. We haven't been able to break it down to say whether, if you're hunting for a job, we can provide better recommendations than when you're looking for a custard recipe or something; we just don't know enough about it yet. One interesting thing: when we had an expert review of that data done by a Rhode Island consultancy firm, they noted that because the technology is iterative – it learns as it goes – we probably have no idea yet how efficient the system is. We were dealing with a very small sample, and as more users come on and more DNA is collected on links across the net, there's no reason it can't be even more effective than we've demonstrated. It's just early days.

Gord: 
You made the comment that this is iterative and learns as it goes. Going through your site, I see you answer an initial questionnaire – and I'll get to the whole privacy question, or more accurately the perception-of-privacy question, in a minute. You answer the questionnaire, which creates an ideological or value-based profile of you that then gets mapped against different sites. But at any time, you can go back and answer more surveys to further refine that profile. How much of this refinement or learning process is incumbent on the user, as opposed to happening transparently in the background, with VortexDNA just watching what you are doing and how you are interacting with different sites?

Branton Kenton-Dau: 
The answer is that the system actually learns every time you click on something. If you have downloaded the MyWebDNA extension, every click is a statement of your intentionality, so if other people have also clicked on the same thing, it helps build up a map of the intentionality that has been focused on that link. We can feed that intentionality back into your own profile, so you don't have to do anything. Actually, this year we will probably do away with the survey entirely – you won't even need it to get started; it was just a pre-heat process. So all you have to do is surf as normal, and we will be monitoring the state of your intentionality moment by moment, with each click you make.

Gord: 
Okay, so let's deal with that a little bit. If you are monitoring my activity, in some ways this overlaps with what Google is doing with their web history and search history, where they track your usage, try to learn more about you as an individual – theoretically – and then alter the results on the fly based on the personal signals they pick up. What you are doing is layering this outline of core values and belief systems over and above that, to say, "Okay, not only are we watching what you are doing; we are trying to understand what's important to you as an individual." Now, say I'm in a business where, at any given hour, I may be doing research. I might be writing a column on hate literature in North America, so I'm going to be going to the sites of neo-Nazi groups, trying to find information… that's just part of my job. How do you know that doesn't reflect my belief system – how do you know it's just something I happen to need information on right now?

Branton Kenton-Dau: 
There are two parts to that. One is that we are very respectful of what Google and Yahoo are doing – their analysis, the whole semantic web push – in terms of trying to make the web more relevant to people, and we believe our technology is complementary to those approaches. We don't believe we are competing with any of them; as you say, it's an overlay, additive to those. Having said that, we are really completely different, because it's actually the structure – the pattern, the way your beliefs are organized – that we are interested in. What that means is that we turn your answers to the questionnaire, and what you click on, into just a set of numbers. There are seven numbers that correspond to different aspects of that pattern of organization that makes up your intentionality. So as you go around and click on something, we compare your number – it might be 7632416, say – with the numbers held against that link. We are comparing numbers; we don't know whether you've gone to a Nazi site or whether you're looking for apple pie recipes. We have absolutely no idea, and maintain no record, of where you've been. When your genome is updated because you've gone to a site, we might change one of your digits by a point or so, but that digit could be changed from any site – the news or Yahoo! or whatever. So what makes us truly unique with this approach, which we think is really important, is that we absolutely protect your privacy: we do not track your searches in any shape or form, because we are just adding or subtracting numbers from that seven-digit identifier.
So it makes no difference to us, and I think that's a really powerful thing, because there is concern among people about what information is held on them, and all we hold is a seven-digit number. You could have been anywhere – it could have been Walt Disney movies, it could have been finding out about the players on your favorite football team; it makes no difference to us. Even if a law enforcement agency came to us and asked, "Hey, where has Gord been?", we would have no idea; we could not tell them.

Gord: 
So what you do, in your identification of all the sites, is look at each site and assign it a profile, and then your profile is altered on the fly based on the profiles of the sites you are matching up against, right?

Branton Kenton-Dau:  That's correct, and they're all numbers – those profiles are numbers, so…

Gord: 
Right. So there is no history retained – just a constantly updated value which, every time you go out, is compared against the values of the sites that come up in a search engine, for instance, and the best possible matches are highlighted in the search results.

Branton Kenton-Dau: 
Thank you, that’s absolutely perfect.
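The mechanics Gord just summarized can be sketched in a few lines. To be clear, VortexDNA's actual algorithm is proprietary and not described in detail here; the digit values, update rule, and distance measure below are invented purely to illustrate the idea of a seven-number profile that is nudged with each click and compared against site profiles, with no browsing history retained.

```python
def nudge(profile, site_profile, step=1, lo=0, hi=9):
    """Move each digit of the user profile one step toward the site's.
    Only the resulting digits are kept -- not which site caused the change."""
    out = []
    for u, s in zip(profile, site_profile):
        if s > u:
            u = min(u + step, hi)
        elif s < u:
            u = max(u - step, lo)
        out.append(u)
    return out

def distance(a, b):
    """Simple closeness measure between two seven-number profiles."""
    return sum(abs(x - y) for x, y in zip(a, b))

user = [7, 6, 3, 2, 4, 1, 6]          # e.g. the "7632416" from the interview
sites = {
    "site-a": [7, 6, 3, 2, 4, 1, 5],  # hypothetical site profiles
    "site-b": [1, 1, 9, 9, 0, 8, 2],
}

# Re-rank candidate results by profile closeness; best match first.
ranked = sorted(sites, key=lambda s: distance(user, sites[s]))
print(ranked)  # site-a is the closer match

# Clicking a result nudges the profile -- no record of *where* is kept.
user = nudge(user, sites["site-a"])
```

The privacy argument follows from the design: the only state that persists is the updated digit vector, which by itself says nothing about which sites produced it.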

Gord: 
That's interesting. Obviously, that's a totally different approach, and one that should put some fears to rest on the privacy side. But I've got to tell you, when I checked out VortexDNA and went through the download process, the whole idea of filling out a questionnaire identifying my belief system gave me cause for concern. It was funny, because as far as identifying me as an individual goes, the demographic information I fill out here and there across the web is potentially much more of a privacy concern – there's identifiable information in there – and I don't usually give it a second thought. But something about putting my beliefs down and sharing them with somebody else was very hard for me to do. Are you finding – and you said you're probably going to drop the questionnaire – but are you finding that a sticking point for people signing up for VortexDNA?

Branton Kenton-Dau: 
I think some people never think about it. We get up in the morning, brush our teeth, go to work and make our daily bread. Sometimes we don't have time to think at all about "Why am I here?" – and when you ask, "What's your purpose in life?", well, that's a pretty profound question. What we found is that some people don't fill in the questionnaire correctly, or fill it in too quickly, or just answer anything without really thinking about it deeply, and because noise goes in, they just get noise out. That's why, over the course of last year, we developed what we call the idealized genome: we can simply infer your genome from what you click on. We think that's a much better way, and we can do it, for instance, by just playing a game – we show some images, you pick your favorites, and we infer your genome that way. So there are lots of less mentally taxing and more fun ways to get you started in creating your genome, your profile, which we think will work a lot better.

Gord: 
Well, as you mentioned, this whole approach may limit your potential market, just because a lot of people haven't thought about what their core values or beliefs are, so the whole appeal of VortexDNA might be lost on them. Unless you're a fan of Stephen Covey, or you've read Built to Last or Good to Great, you may not get it. What are your thoughts on that?

Branton Kenton-Dau: 
I think you're right, and I think the technology is broad enough that it can serve anyone, whatever their focus in life, and that it's our responsibility to make sure it's that easy to use. We'd like to be able to do it transparently – say, if you want to play Pac-Man, you're building up your profile that way, and we can enable you to do that; we should be able to shortly. At the same time, one of the really key things about the technology – certainly from my point of view, what gets me up in the morning – is that I think it really is empowering for people to understand that they literally do create their own lives; their lives are not given to them. Who you are is not determined by your upbringing, your life experiences, or the genes your parents gave you; it is actually created by you, moment by moment. It's my hope that the technology will help with that. It's really a very American thing, in the sense that it's ultimately all about human freedom, and it gives people more freedom as they realize, "Well, I am the creator of my life, and if I'm stuck in this rut, then it's what I believe about myself that is keeping me there." That's what I find exciting about the technology. It may dawn on some people faster than others, and that's okay – there's no problem with that. But I believe the technology ultimately has the ability to make a contribution to human freedom.

Gord: 
So, you are climbing up Maslow’s hierarchy to the top level?

Branton Kenton-Dau: 
Yes, I absolutely believe that. I think that we as human beings are always trying to really understand our true ability to create reality, and that our intentionality is, if you like, the part of ourselves we probably put less effort into training than anything else. We spend a lot of time on fitness machines or jogging to keep our bodies in shape, but we haven't spent much time on what actually seems to be a real key factor in determining the success of our lives, which is our intentionality. I'm hoping the technology will help focus us on that.

Gord: 
We've talked about some pretty lofty ideals for a technology here: helping people with self-actualization, becoming better people, becoming more aware of what's important to them both internally and externally. All of which is great for any fans of Collins or Covey who might be reading or listening to this. We're getting to the hedgehog concept here: you're obviously passionate about this, and you've got something different that you can be unique in, possibly best in the world at. Now comes the money question. How does this drive your economic engine? What's the business model for VortexDNA, and how do you see that playing out over the next two to three years?

Branton Kenton-Dau
We've given a lot of thought to that, and made a lot of mistakes and U-turns on it as well. But basically, the company I represent has a technology that we license to other organizations, participating with them as strategic partners in rolling it out. There are two key aspects of the technology. One is that it can be used by any ecommerce site, whether an e-tail or social site, to provide better recommendations using their algorithms. That's a pure B2B solution, and we have the company incorporated in the United States at the moment in order to do that; we would be interested in anyone who would like to partner with us to roll it out. The other side is that we feel we can create a better web by harnessing the power of mass collaboration – just like Wikipedia – to map the genome of the web. Out of that will come better search engines, a better ability to find people like you wherever you are, enhancements to your blog – pretty much a holistic upgrade, if you like, of the web itself. And that, we believe, like search engines themselves, is a pure advertising-based free service to users. So we see an application there, and in fact, within the next 30 days, we will be launching the Web Genome Project with its own website, which will be an advertising-based play again. We believe it has potential in every country in the world, and we are open to issuing licenses to partners who would like to take up the opportunity.

Gord: 
This territory has been somewhat explored in the past. The one example I'm thinking of is the Music Genome Project, where Pandora tried to use the songs you've liked in the past to recommend more songs you might like. Is this somewhat similar to that, except with an obviously much broader scope – anything that could be on the web, right?

Branton Kenton-Dau: 
That's correct. I guess the difference is that what's great about the Web Genome Project is that while you sleep, millions of other people will be clicking on things that will make the web better for you in the morning.

Gord: 
Right.

Branton Kenton-Dau: 
That's what's so cool about mass collaboration. You do your clicking – you click on, whatever, a thousand links in your day – but while you're sleeping, millions of links have been updated and have better DNA against them, so you can find what you want more easily when you wake up in the morning. That's really exciting.

Gord: 
It's definitely one of those big idea things.

Branton Kenton-Dau
Yes.

Gord
To flip this on its side: as a community we're all clicking away, and this DNA matching is going on, so it's making the web a better place collaboratively, as you say. But on the flip side, once a profile has been built – refined over time based on the sites you've found interesting or spent time with – that's a unique identifier that says something about you. So, theoretically, down the road, if somebody comes to a site and the owner of that site can identify what's important to that person based on the profile, it could serve up content matching those beliefs on the fly. Is that another possible application for this?

Branton Kenton-Dau: 
Thank you; that's what I was attempting to describe in the first application: that's the B2B solution. An e-tailer or search engine can now take out a license to run the application. The user visits the site and won't see anything different through their Google search or their Amazon book recommendations, but it's all being added to the recommendation engine behind the scenes, so they'll just say, "Hey, for some reason I feel I'm getting better recommendations now." So it's our way of making the web more efficient, and that technology is available right now. We've got three installations currently progressing in the United States. That's the business-to-business model, and we believe that has applications around the world as well.
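The recommendation side Kenton-Dau describes can be pictured as a simple item-based collaborative filter: aggregate anonymous click histories across many users, then score candidate items by how often they co-occur with what the current visitor has already viewed. The sketch below is purely illustrative, with made-up click data and function names; it is not the interviewee's actual licensed technology.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical click histories: user -> set of item IDs they clicked.
# In the scenario described, millions of users contribute these overnight.
click_histories = {
    "u1": {"bookA", "bookB", "bookC"},
    "u2": {"bookA", "bookB"},
    "u3": {"bookB", "bookC", "bookD"},
}

def build_cooccurrence(histories):
    """Count how often each pair of items appears in the same user's history."""
    counts = defaultdict(int)
    for items in histories.values():
        for a, b in combinations(sorted(items), 2):
            counts[(a, b)] += 1
            counts[(b, a)] += 1
    return counts

def recommend(seen, histories, top_n=3):
    """Score unseen items by co-occurrence with what the visitor already viewed."""
    counts = build_cooccurrence(histories)
    scores = defaultdict(int)
    for item in seen:
        for (a, b), c in counts.items():
            if a == item and b not in seen:
                scores[b] += c
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend({"bookA"}, click_histories))  # → ['bookB', 'bookC']
```

The key property, as in the interview, is that every user's clicks improve the co-occurrence counts for everyone else, without the visitor seeing anything other than better recommendations.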

Gord: 
So, for any business application, the big question is critical mass: how many people will download the plug-in and use it? This is fairly new; how long has the Firefox plug-in been available?

Branton Kenton-Dau: 
About a year now.

Gord: 
What has the adoption been like to this point?

Branton Kenton-Dau: 
At this point it's been slow, because that plug-in was actually built initially to validate the technology. That's what we had to do last year, and then we spent the rest of the year building enterprise-grade technology that could be used by clients. So we are really just starting now. As I said, the next 30 days will see the Web Genome Project being launched, so we are only just at the start of the technology coming online.

Gord: 
Are there any plans to accelerate adoption, either through partnerships or bundling or other ways to actually get people to download the plug-in and start using it?

Branton Kenton-Dau: 
There are. For each of the partners and e-tailers that would like to use the technology, we'll be producing custom versions of the extension for their users, and they will encourage their users to download it, because that will help map the DNA of their links more quickly. So that will speed the application. We also have plans to provide custom versions of the extension to different social networks, so that they can enhance the experience within those networks as well. And then, if you're logged in to any service, if you have an account with Amazon or some other e-tailer, you actually don't need to use the plug-in, because when you log in they already know who you are, so you will already be able to get better recommendations from that service without using the plug-in at all. You won't see it. It will be completely seamless and invisible to you, and you'll just get a better web experience.

Gord: 
So, if you log into Amazon, for instance, I guess it just keeps the profile, so that profile would not be portable; it would stay with Amazon. And to get to the broader context of what you are talking about, the portability of that profile, the fact that it goes with you from site to site, would be an important part of that, right?

Branton Kenton-Dau: 
That's why we see the extension being great, if people did use it or if it just became embedded in web browsers generally, because it would give you a more universal, better experience.

Gord: 
Okay so looking forward, you’ve done a lot of development on the backend to build the infrastructure, and the theory is there.  Now, it’s just a matter of having it proven out in real world situations, right?

Branton Kenton-Dau: 
That's exactly what we are about to see. That's why these installations are taking place in the United States at the moment, to validate that, and we are getting started with the Web Genome Project. It's all about delivery this year.

Gord: 
Well, it’s fascinating, like I said it’s one of those big idea things that is fascinating to contemplate.  Is there anything else you wanted to add before we wrapped up the interview?

Branton Kenton-Dau: 
We've worked on this for pretty much, well, it's been an 8-year project, so it's not been a fast thing for us. But I thought I might share a couple of books that have been really important to me, which you probably know about anyway. One that I just finished reading is The Intention Experiment by Lynne McTaggart, who is also the author of The Field. What is nice about it is that she documents all the rigorous science that is basically saying we are shifting our paradigm, to understand that we are more like energy fields, if you like, than physical bodies. The science that has come out of Stanford and other places to validate that is just awe-inspiring. And the other is The Biology of Belief by Bruce Lipton, which covers that whole transition from us believing we are our physical genes to the science of epigenetics, where it's actually our environment, including our beliefs, that is a key factor in determining who we are. I just wanted to share those because I found those two books very inspiring. They happened after the fact; we didn't build the technologies because we read the books, but with the books now, we say, "Oh yeah, that's why our technology works."

Gord: 
Well, it's interesting you mention that, because it seems like anytime I'm talking to people about really interesting things, there has been this almost renaissance of understanding about what makes us tick as humans, starting in areas like psychology, neurology, evolutionary psychology and a whole bunch of other areas. It all started to happen in the early '90s, and over the last 10 to 15 years it seems like so many paradigms have shifted. We're just looking at things in a whole new way, and I agree with you, it's very inspiring and exciting to know that everything seems to be in such flux right now.

Branton Kenton-Dau: 
I absolutely agree with that, and that's been our sense as well. It's just such a privilege to be part of that process. I know your comments and what you are doing are aligned with that as well. We are all doing it, but when we are creating together, we are creating something new and exciting, I think, and we all have our parts to play in it. I find it a privilege to participate in this; it's really a human movement taking us forward at a rapid pace. So we are absolutely aligned with what you were saying there.

Highlights from the Search: 2010 Webinar

Yesterday, I had the tremendous privilege of moderating a Webinar with our Search 2010 Panel: Marissa Mayer from Google, Larry Cornett from Yahoo, Justin Osmer from Microsoft, Daniel Read from Ask, Jakob Nielsen from the Nielsen Norman Group, Chris Sherman from Search Engine Land and Greg Sterling from Sterling Market Intelligence. It was a great conversation, and the full one-hour Webinar is now available.

I won't steal the panelists' thunder, but the first question I posed to them was what they see as the biggest change to search in the coming year. Most pointed to the continued emergence of blended search results on the page, as well as more advances in disambiguating intent. A few panelists looked at the promise of mobile, driven by advances in mobile technology such as multitouch displays, embodied in the iPhone. After listening again to the various comments, I've put them together into four major driving forces for search in 2008 and beyond:

Disambiguation

The quest to understand what we want when we launch our search is nothing new. How do you deal with the complexities and ambiguity of the English language (or any language, for that matter) when you’re trying to make the connection between the vagaries of unexpressed intent and billions of possible matches? All we have to go by is a word or two, which may have multiple meanings. While this has always been the holy grail of search, expect to see some new approaches tested in 2008. We’ve already seen some of this with the search refinement and assist features seen on Yahoo, Live and Ask. Google also has their query refinement tool (at the bottom of the results page), but as Marissa Mayer pointed out in the Webinar, Google believes that as much disambiguation as possible should happen behind the scenes, transparent to the user.

The challenge with this, as Marissa also acknowledged in the Webinar, is that there are no big innovations on the horizon to help with untangling intent in the background. Personalization probably holds the biggest promise in this regard, and although it was regarded with varying degrees of optimism in the Webinar, no one believes personalization will make too much of a difference to the user in the next year or so. All the engines are still just dipping their toes in the murky waters of personalization. Using the social graph, or tracking the behavior of communities is another potential signal to use for disambiguation, but again, we’re at the earliest stages of this. And, as Jakob Nielsen pointed out, looking at community patterns might offer some help for the head phrases, but the numbers get too small as we move into the long tail to offer much guidance.

For the foreseeable future, disambiguation seems to rest with the user, through offering tools to help refine and focus queries, and possibly doing some behind the scenes disambiguation on the most popular and least ambiguous of queries, where the engines can be reasonably confident in the intent of the user. The example we used in the Webinar was Feist, a very popular Canadian recording artist. But “Feist” is also a breed of dog. If there’s a search for Feist, the engines can be fairly confident, based on the popularity of the artist, that the user is probably looking for information on her, not the dog.
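The Feist example boils down to a simple prior over senses: silently pick the dominant meaning only when its share of observed queries clears a confidence threshold, and otherwise fall back to refinement tools. A minimal sketch with made-up popularity figures (the shares and threshold here are hypothetical, not anything a real engine disclosed):

```python
# Hypothetical query-log shares for each sense of an ambiguous term.
# A real engine would estimate these from aggregated search behavior.
sense_shares = {
    "feist": {"recording artist": 0.93, "dog breed": 0.07},
    "jaguar": {"car": 0.48, "animal": 0.41, "os": 0.11},
}

CONFIDENCE = 0.80  # only disambiguate silently when one sense dominates

def pick_sense(query, shares=sense_shares, threshold=CONFIDENCE):
    """Return the dominant sense if it clears the threshold, else None
    (meaning: show refinement tools and let the user disambiguate)."""
    senses = shares.get(query)
    if not senses:
        return None
    best, share = max(senses.items(), key=lambda kv: kv[1])
    return best if share >= threshold else None

print(pick_sense("feist"))   # confident: the recording artist
print(pick_sense("jaguar"))  # ambiguous: None, defer to the user
```

This also illustrates Nielsen's long-tail caveat: for rare queries the share estimates are built from too few observations to clear any sensible threshold, so the decision stays with the user.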

More Useful Results

The second of the four major areas goes to the nature of the results themselves, and what is returned to us with our query. Universal (federated, blended, etc.) results are the first step in this direction. Expect to see more of this. Daniel Read from Ask led the charge in this direction, with their much-lauded 3D interface. As engines crawl more sources of information, including videos, audio, news stories, books and local directories, they can match more of this information to users' interpreted intent. This will drive the biggest visible changes in search over the short term. For the head phrases, those high-volume, less ambiguous queries, engines will become increasingly confident in providing us a richer, more functional result set. This will mean media results for entertainment queries, maps and directory information for local queries, and news results for topics of interest.

But Marissa Mayer feels we're still a long way from maximizing the potential of the plain old traditional web results. She pointed out some examples of results where Google's teams had been working on pulling more relevant and informative snippets, and showing fresher results for time-sensitive topics. Jakob Nielsen chimed in by saying that none of the examples shown during the Webinar were particularly useful. And here lies the crux of a search engine's job. Using relevance as the sole criterion isn't good enough. For someone looking for when the iPhone might be available in Canada, there are a number of pages that could be equally relevant, based on content alone, but some of those pages could be far more useful than others. The concept of usefulness as a ranking factor hasn't really been explored by any of the algorithms, and it's a far more subtle and nuanced factor than pure relevance. It depends on gathering the interactions of users with the pages themselves. And, in this case, we're again reliant on the popularity of a page. It will be much easier to gather data and accurately determine "usefulness" for popular queries than it will be for long-tail queries.
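One way to make the relevance-versus-usefulness distinction concrete is to treat usefulness as a second score, derived from interaction signals such as click-through rate and dwell time, and blend it with content relevance. The metrics, weights, and URLs below are entirely hypothetical; this is a sketch of the idea, not any engine's actual ranking function.

```python
# Hypothetical candidate pages for a query, each with a content-relevance
# score and aggregated interaction data (click-through rate, dwell seconds).
pages = [
    {"url": "a.example/iphone-canada", "relevance": 0.90, "ctr": 0.02, "dwell": 8},
    {"url": "b.example/iphone-canada", "relevance": 0.85, "ctr": 0.21, "dwell": 95},
]

def usefulness(page, max_dwell=120):
    """Combine interaction signals into a 0-1 usefulness estimate."""
    return 0.5 * page["ctr"] + 0.5 * min(page["dwell"] / max_dwell, 1.0)

def blended_score(page, w_useful=0.4):
    """Blend content relevance with behavioral usefulness."""
    return (1 - w_useful) * page["relevance"] + w_useful * usefulness(page)

ranked = sorted(pages, key=blended_score, reverse=True)
print([p["url"] for p in ranked])
```

In this toy example the slightly less "relevant" page wins the blended ranking because users actually engage with it, which is exactly the gap between relevance and usefulness the discussion points at. It also shows why the signal is popularity-bound: the ctr and dwell estimates are only trustworthy when enough users have interacted with the page.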

By the way, the concept of usefulness extends to advertising as well. A good portion of the Webinar was devoted to how advertising might remain in sync with organic results, whatever their form. Increasingly, as long as usefulness is the criterion, I see the line blurring between what is editorial content and what is advertising on the page. If it gets a user closer to their intent, then it's served its purpose.

Mobile

When we're talking about innovation, the panel seems to see only incremental innovation in the near term on the desktop. But as a few panelists pointed out in the interview, mobile is in the midst of disruptive innovation right now. The iPhone marked a significant upping of the bar, with its multitouch capabilities and smoother user experience. What the iPhone did in the mobile world was move the user experience up to a whole new level. With that, there's suddenly a competitive storm brewing to meet and exceed the iPhone's capabilities. As the hardware and operating systems queue up for a series of dramatic improvements, it can only bode well for the mobile online experience, including search.

Remember, there's a pent-up flood of functionality just waiting in the mobile space for the hardware to handle it. The triad of bottlenecks that have restricted mobile innovation – speed of connectivity, processing power and limitations of the user interface – all appear poised to break loose at the same time. When they give way, all the players are ready to significantly up the ante in what the mobile search experience could look like.

Mash Ups

One area that we were only able to touch on tangentially (an hour was far too short a time with this group!) is how search functionality will start showing up in more and more places. Already, we’re seeing search being a key component in many mash ups. The ability to put this functionality under the hood and have it power more and more functional interfaces, combined with other 2.0 and 3.0 capabilities, will drive the web forward.

But it's not only on the desktop that we'll see search go undercover. We've already touched on mobile, but also expect to see search functionality built into smarter appliances (a fridge that scans for recipes and specials at the grocery store) and entertainment centers (on-the-fly searching for a video or audio file). Microsoft's surface computing technology will bring smart interfaces to every corner of our homes, and connectivity and searchability go hand in hand with these interfaces between our physical and virtual worlds.

That touches on just some of the topics we covered in our one hour with the panelists. You can access the full Webinar at http://www.enquiroresearch.com/future-of-search-2010.aspx. We’ll be following up in 2008 with more topics, so stay tuned!