Interview with Ask’s Michael Ferguson

I recently had the opportunity to chat with one of my favorite usability people, Michael Ferguson at Ask.com. You can find excerpts of the interview, along with commentary, on Search Engine Land in this week’s Just Behave column. Some of Michael’s comments are particularly timely now, given Google’s announcement of Universal search.

Gord: How does Ask.com approach the search user experience and, in broad terms, what is your general philosophy?

Michael: A lot of what we do is, to some extent, informed by core search needs but also by our relevant market share, understanding that people have often experienced other engines before they come to us, not necessarily in that session but generally on the web. People have at least done a few searches on Google and Yahoo, so they have some context coming from those search experiences. So often, we’re taking what we’ve learned from best practices from competitors and others and then, on top of that, trying to add a lot of product experience and relevance experiences that are differentiated. Of course, we’re coming from this longer history of the company where we’ve had various user experiences over the time that we’ve been around. We marketed around natural language in the late ’90s and answered people’s questions at the top of the page, but in the last year and a half or so, we’ve rebranded and really focused on getting the word out to the end users that we are a keyword search engine, an everyday search engine.

A lot of the things that we’ve done with users have been to try to, implicitly, if not explicitly, inform users that are coming to the site that you can use it very much like you can use any other kind of search engine you’ve been on before. Or, if they’re current users and people are coming back to the site, to let them know that the range of experiences and the type of information we bring back to them has greatly expanded. So that’s pretty much it. It’s informed by the context of not just a sense of pure search and information retrieval and all the research that’s gone on around that in the last 35 or 40 years but also the dynamics of the experiences that we’ve had before and people’s previous experiences with Ask. Then, an acknowledgement that they’ve often searched on other sites and looked for information.

Gord: You brought up a number of topics that I’d like to touch on, each in sequence. You mentioned that in a lot of cases, they’re coming to Ask and they’ve used Google or Yahoo or they’ve used another engine as one of their primary search tools. Does Ask’s role as a supplemental engine or an alternative engine give you a little more latitude? You can add things from a functionality point of view to really differentiate yourselves. I actually just did a search and see that you, at least on my computer here, have made the move to incorporate some of the things that you were testing on AskX into the main site. Maybe we’ll start there. Is that an ongoing test? Am I just part of a beta test on that, or is this rollover complete now?

Michael: We’re still in testing with that and it will roll out. We have decided because of a lot of the user experience metrics that we’re getting from the beta test that we’re going to go for it. We have decided to move the full experience over to the AskX experience. Of course, there are variants to that, but the basic theme is bringing together, in a smart way, results from different search verticals and wrapping those around the core organic results, as well as a sponsored experience. So that will happen sometime this year. We don’t know exactly when, but just a couple of days ago, we really decided we’ve seen enough and we’re pretty excited about that.

Google has a really great user experience going, and Yahoo does too, but they have so many different levers that move so much revenue and traffic and experience metrics that I think it’s harder for them to take chances and to move things around and get buy-offs at a bureaucratic level. To some extent, we see ourselves as having permission and a responsibility to really innovate on the user experience. It’s definitely a good time for us because we have such great support from IAC and they’re very much invested in us improving the user experience and getting more traffic and getting frequency and taking market share and they’re ready to very much invest in that. So we don’t need to cram the page with sponsored links and things like that. It’s mostly a transitional time when we’re getting people to reconsider the brand and the search engine as a full keyword based, everyday search engine that has lots to offer. I’m talking to people all the time about Ask and there’s definitely still people that say, “Hey, last night, it came up with my buddies at the bar, this trivia question about the Los Angeles Lakers, 1966 to 1972 (and I went to Ask and asked a question)”. Then there are other people that see us as evolving beyond that but still really surprised that we haven’t had image search.  Now with AskX we’ll have preview search and there’s lots of other stuff coming along now. So yes, it’s a great place to be. I love working with it. There are so many things that, in an informed way, we can take chances on, relative to our competitors.

Gord: So does this mean that the main site becomes more of an active site? Are you being more upfront with the testing on Ask.com rather than on AskX.com?

Michael: Well, I think the general sense of what we’re going to do is that, at some point this year, the AskX experience will, at least at a wireframe level, become the default experience and, of course, we have a lot of next generation “after that” stuff queued up that we’re thinking about and we’re actively testing right now but not in any live sense.  So potentially, things will slide in behind the move of the full interface going out and then AskX will remain a sandbox for another instance of, hopefully, new and really useful and differentiated search experience coming after that. A general thing that we’re going to try to do, instead of having 15 or 18 different product managers and engineering teams working on all these different facets of information retrieval and services, we’re going to stay search focused and just have one sandbox area where people go in and see multiple facets of what we’re thinking about.

Gord: Let’s talk about the sponsored ads for a bit. I notice that for a couple of searches that I’ve done while we’ve been talking that they’ve definitely been dialed down as far as the presence of sponsored on the page. I’m only seeing top sponsored appear, so you’re using the right rail to add additional search value or information value, whether it be suggested searches or on a local search, where it brought me back the current weather and time. So what’s the current strategy on Ask as far as presentation of sponsored results and the amount of real estate devoted to them?

Michael: Just to fit along with the logic of Eye Tracking II (Enquiro’s second eye tracking study), those ads are not a delineated part of the user experience for the end user and their relevance and their frequency can color the perception of the rest of the page and especially the organic listings below them. Right now, as I said, we’re very much focusing on improved user experience and building frequency and retention of customers, which all the companies are, I’m sure. But we’re really being, basically, cautious with the ads and getting them there when they’re appropriate and, as best we can, adjusting them over time, so that when they’re there, they’re going to be valuable for the user and for the vendor.

Gord: That’s a fairly significant evolution in thinking about what the results page looks like from say, two years ago, with Ask. Is that purely a function of IAC knowing that this is a long term game and it begins with market share and after that comes the monetization opportunities?

Michael: Actually, I think way before we got acquired by IAC we knew that. We test like other engines would. We test lots of different ad configurations and presentations and things like that but definitely you want to balance that. Way before we got acquired we realized that there’s one thing that’s kind of fun about making the quarter and blowing through it a little bit and then there’s another thing about eroding customers. And definitely there’s a lifetime value that can be gained by giving people what you know is a better user experience over time, so once we did become part of the IAC family, we brought them up to speed with the results that we were finding that were pointing to taking that road and they’ve very much been in support of it. And, of course, their revenue is spread amongst a lot of different pieces of online and offline business so their ability to absorb it is probably more flexible than ours was as a stand alone company.

Gord: That brings me to my next question, which is, with all the different properties that IAC has and their deep penetration into some of the vertical areas, you had talked about the opportunity to bring some of that value to the search results page. What are we looking at as far as that goes? Are we going to see more and more information pulled from other IAC properties into the main AskX interface?

Michael: Maybe the most powerful thing about the internet is that you as an individual now have a very empowered position relative to other producers of information, other businesses where you can consume a bunch of different points of view. You have a bunch of different opportunities to do business and get the lowest price and read reviews that the company itself hasn’t sanctioned, or anything like that.  You have access to your peer network and to your social networks. Search, like the internet, becomes, and it necessarily needs to be, a proxy for that neutral, unbiased view of all the information that’s available. This probably gets a little bit into what may or may not work with something like Google’s search history. Users over time have said again and again, “Don’t hide anything from me or don’t over think what you may think I might want. Give me all of the best stuff, use your algorithms to rank all that, but if I get the sense that anything’s biased or people are paying for this, then I’m not going to trust you and I’m going to go somewhere else where I can get that sense of empowerment again.”

As I’ve sat in user experience research over time, I’ve seen people… and I know this isn’t true of Google and I know it isn’t true of Ask right now with the retraction from paid inclusion… but you ask users why they think this came up first on Google, maybe with a navigational query like Honda or Honda Civic and Honda comes up first. They’ll say, “Oh, Honda paid for that.” So even with the engines that aren’t doing paid inclusion, there’s still this kind of wariness that consumers have of just generally somebody on the internet, somewhere, behind the curtains, trying to take advantage of them or steer them in some way. So as soon as we got acquired by IAC, we made it very much part of their perception of this and their culture. Their product management point of view is that you can’t sacrifice that neutrality. You can’t load a bunch of IAC stuff all over the place. The relationship with IAC does give us access to proprietary databases that we can do lots of deep dives in and get lots of rich information out that can help the user in their instance of their search needs that other companies wouldn’t be able to get access to, while maintaining access to everything else.

The way we approached AskCity was a great example of this. We had leveraged a lot of CitySearch data but at the same time, we know that when people go out and want to see reviews, they want to see reviews from AOL Neighborhoods, they want to see reviews from Yelp, they want to see reviews from all these other points of view too. So we go and scrape all those and fold them into the CitySearch stuff. We give access to all those results that come up on AskCity. If they’re, for instance, at a restaurant, you can get Open Table reviews and you can get movie reservations through Fandango and other stuff like that. Those companies have nothing to do with IAC. Those decisions were borne from user needs and from us looking as individuals in particular urban areas, and saying “Hey, what would I want to come up?” We know from previous experience from AOL that the walled garden thing doesn’t work. It’s just not what people expect from search and not what they expect from the internet, so that lesson’s been learned. I don’t know how much it would be different if we had some dominant market share over search, but that’s even more reason for us to be appealing to as wide a population as possible. That’s my philosophy right now.

Gord: I guess the other thing that every major engine is struggling with right now is in this quest to disambiguate intent, where is the trade-off with user control? Like you said, just show me a lot of the best stuff and I’ll decide where I want to drill down and I’ll change the query based on what I’m seeing to filter down to what I want. In talking to Marissa at Google and their moves towards personalization and introducing web history, I  think for anyone who understands how search engines work, it’s not that hard to see the benefits of personalization but from a user perspective there does seem to be some significant push back against that. Some users are saying, “I don’t want a lot of things happening in the background that are not transparent to me. I want to stay in control.” How is Ask approaching that?

Michael: The other major thing that’s going on right now is that we have fully revamped how we’re taking this. We developed the Direct Hit technology in the late ’90s. And then the Teoma technology we acquired. And really, it’s not that we’re taking those to the next level, we got all of that stuff together and over the past three years, we’ve been saying, “Okay, what do we have and what’s unique and differentiated?” There’s a lot of great user behavior data that Direct Hit understands.  We have a whole variety of things there and that’s unlocked, that’s across all the people coming in and out over time but not any personally identifiable type of stuff. And then there’s Teoma, which is good at seeing communities on the web, expertise within the communities and how communities relate. So right now, even though we have personalization stuff and My Stuff and other things that are coming up, we’re investing a lot more in the next version of the algorithm and the infrastructure for us to grow, called Edison. We started talking about that a week ago, when A.G. (Apostolos Gerasoulis) mentioned it. Across a lot of user data it understands a lot about the context from the user intention side and because we’re constantly capturing the topology of the web and its communities and how they’re related, we then match the intention and the map of the web as it stands and the blogosphere as it stands and other domains as they stand. Our Zoom product, which is now on the left under the search box in the AskX experience and on the right on the live site, is the big area where we’re going to more passively offer people different paths.

For example, just like with AskX, you search for U2, it’s going to bring up news, and product results, and video results and images, and a Smart Answer at the top of the page. It’s also going to know that there’s U2 as the entity, the music band and therefore search the blogosphere but just search within music blogs. So what it’s doing, over time, is trying to give a personalized experience that’s informed by lots of behavior and trying to capture the structure of the web, basically. So that’s where we are there.

There’s a book that came out in early 1999 called Net Worth, which you might want to read. I almost want to revisit it myself now. It’s a Harvard Business School book that Marc Singer and John Hagel came out with. It talked about infomediaries and it imagined this future where there’d be these trusted brands and companies. They were thinking along the lines of American Express or some other concurrent banking entity at the time, but these infomediaries would have outside vendors come to them and they would entrust all their information, as much as they wanted to, they could control that, both online and offline.  You were talking in your latest blog post about understanding in the consideration phase where somebody is and presenting, potentially, websites that they hadn’t seen yet or ones that they might like at that point in the car purchase behavior. But the way that they were imagining it was that there would be a credit card that might show that someone had been taking trips from the San Francisco Bay area to the Tahoe region at a certain time of year and had maybe met with real estate agents up there and things like that. But these infomediaries, on top of not just web history but even offline stuff, would be a broker for all that information and there would be this nice marketplace where someone could come and say, “I want to pay $250 to talk to this person right now with this specific message”. So it seems that Google is doing a lot of that, especially with the DoubleClick acquisition. But I’m just wondering about the other side of it, keeping the end user aware of and empowered over that information and where it’s at. So Net Worth is a neat book to check out because the way they were describing it, the end user, even to the broker, would seep out exactly what they wanted to seep out at any given time. It wouldn’t be this passive recording device thing that’s silently taping. 
My experience so far of using the Google Toolbar that’s allowing the collection of history is that it’s ambiguous to me how much of my behavior is getting taken up by that system and used. We’ll see where it goes but right now we don’t have strong plans to do anything with that for search.

Gord: It’s going to be really interesting because, up to now, the toolbar was collecting data but there was no transparency into what it was collecting. Now that they’ve done that, we’ll see what the user response is once people go into their web history and have that initial shock of realizing how much Google actually does know about them.

One other question, and this is kind of a sidelight, but it’s always something that I’ve been interested in. Now that you have the search box along the left side there and it gives search suggestions as you’re typing, have you done any tracking to see how that’s altered your query logs? Have you noticed any trends in people searching differently now that you’re suggesting possible searches to them as they’re typing?

Michael: There are two broad things that are encouraging to us. One is that over time, the natural language queries are down tremendously. Our queries, because we promoted in the late nineties this “ask a question” thing, tended to be longer and more phrase based, more natural language based.  That’s really gone down and is approaching what we would consider normal for an everyday search engine profile as far as the queries. And we really think that this zooming stuff has really helped that because it’s often keyword based. You will sometimes see some natural language stuff in there. There are communities on the web that are informing us that there’s an interest in this topic that’s related to the basic topic so it is helping change the user behavior on Ask.

And the other result of that is as people use it more for everyday keyword based search engine, the topics or the different categories of queries that people see are normalizing out too. Less and less they’re reference type stuff and more and more they’re transactional type queries, so that’s a good thing. And that’s just been happening as we rebranded and we presented Zoom.

And then with the AskX experience, we are definitely seeing that even more because of the fact that they’re just in proximity to the search box. We always knew that these suggestions should ideally be close to the search box so that people understand fully what we’re trying to offer them. For instance, on the current site, we do see users that will sometimes type a query in the search box on top and, because they’re used to seeing ads on the right rail on so many other sites and because they don’t necessarily know what “narrow and expand your search” is, they think those are just titles to other results or websites. It’s a relatively small portion. Most people get what it is, but there was that liability there. Now in the AskX experience, it’s close and visually grouped with the search box. It’s definitely getting used more and guiding queries and people seem even more comfortable putting general terms in. We’ve made it that you can just arrow down to the one you want and hit return. It’s definitely driving the queries differently.

Gord: I’ve always liked what you guys have done on the search page. I think it’s some of the most innovative stuff with a major search property that I see out there and I think that there’s definitely a good place for that kind of initiative. So let me wrap up by asking, if you had your way, in two years, what part would Ask be playing in the total search landscape?

Michael: We’d definitely have significantly more than 10% market share. My point of view, from dealing with the user experience, is that I’ve been proud of the work that we’ve done and I really think that we’ve been very focused and innovative with a very talented team here and we’re really hoping that as we look at the rest of the year and we put out Edison and the AskX experience, that we become recognized for taking chances and presenting the user experience in a differentiated way, so that people have to respond to us in the market and start adopting some of the things that we’re doing. Because of the amount of revenue that Microsoft, Yahoo and Google are dealing with on the search side, they often get a lot of press, but our hope is really to take share and to have a user experience that informs and improves the user experience of our competitors.

Gord: Thank you for your time Michael.

Personalization: Google’s Defensible Trump Card?

A thought that came up in a conversation with Michael Ferguson, Ask’s usability guy (which is probably why I like talking to him; he always greases the mental machinery), was the defensible position that personalization offers Google.

Google is betting the farm on personalization. And really, they’re possibly the only search engine that can make this work. Here are the required components:

  • A high enough degree of additional user value to convince people to opt in to personalization. As I’ve talked about before, that’s why it’s being rolled out in organic search first. Expect a slew of other value adds in the near future, all powered by personalization and all aimed at getting you to hit the opt-in box.
  • An extensive network so you can maintain multiple touch points for the delivery of targeted advertising. Nobody has a bigger network than Google’s AdSense network.
  • Critical mass amongst users. With Google’s almost 65% market share and the highest penetration of installed toolbars (42%-plus in a recent B2B study we did), Google also has the required components to tap into a significant slice of the available market. And future Gadgets and tools will likely either require personalization to be turned on, or will provide an enhanced level of functionality when it is. Expect Google to get aggressive with forcing adoption in the next year or so.

It came to light when I was talking to Michael about Ask’s algo and whether personalization will play a part (by the way, this is part of an interview that will be on Search Engine Land next week). After the interview, I realized it’s not an option for Ask, at least not at the level that Google’s contemplating. Even if they did move to personalization, they just don’t own enough of the total online user experience to push users to opt into personalization. They’d never gain the critical mass needed to make it work.

Microsoft has an outside chance through Messenger, but it would be a long shot. Yahoo also has a long shot at it (although better than Microsoft’s) but they’d have to start gaining market share, and there are a number of huge obstacles in their way. Google is by far the best bet to force personalization on the market and have it be adopted at significant rates.

So what are the options for the other engines? Well, again, there’s an interesting twist there as well. One thing that’s touted heavily by the contenders is social search. I have severe doubts about the scalability of anything that requires a human element, and I’ve written about this in the past. But then I realized that personalization gives Google social search in a way that others just can’t touch.

If Google is collecting both web and search history, they’re collecting implicit votes for the quality of every property on the web. They create their own community, and with every click, that community votes for the quality and relevance of every site they visit. It’s social search in a very powerful and completely transparent form. In this form, social search requires no additional action on the part of the user (one of the critical risk areas of social search) and is completely scalable, because there’s no human bottleneck (the other critical risk area).
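The implicit-voting idea is simple enough to sketch. Here is a toy version (purely illustrative; the class, the smoothing-free scoring, and the example queries are my own invention, not anything Google or Direct Hit published): every click from a results page counts as one vote for a query-URL pair, and URLs are ranked by their share of the total vote.

```python
from collections import defaultdict

class ImplicitVoteIndex:
    """Toy click-vote aggregator: every click is an implicit relevance vote."""

    def __init__(self):
        # query -> url -> click count
        self.votes = defaultdict(lambda: defaultdict(int))

    def record_click(self, query, url):
        """Log one implicit vote for this url, given this query."""
        self.votes[query][url] += 1

    def rank(self, query):
        """Rank URLs for a query by their share of all clicks on that query."""
        urls = self.votes[query]
        total = sum(urls.values())
        if total == 0:
            return []
        return sorted(((url, count / total) for url, count in urls.items()),
                      key=lambda pair: pair[1], reverse=True)

idx = ImplicitVoteIndex()
for _ in range(3):
    idx.record_click("jaguar", "jaguar.com")
idx.record_click("jaguar", "en.wikipedia.org/wiki/Jaguar")
print(idx.rank("jaguar"))  # jaguar.com leads with a 0.75 click share
```

Notice that no user ever explicitly rates anything, which is exactly the point: the “vote” is a side effect of normal searching, so it scales with traffic rather than with volunteer effort.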

The more I think about personalization, the more I think that Google has just trumped the entire search space…again.

Thoughts on Yahoo and Microsoft Merging

Note: This was actually written on Friday, but I haven’t had a chance to post it until now. I’ve been travelling and access has been an issue. But I just came back from the opening reception at the MediaPost Search Insider Summit and the latest seems to be that the hype of this deal is far ahead of any actual discussions. That said, I think my comments are still valid, because as we’ve learned, things can happen fast in this industry.

Friday, May 4

The big news this morning as I was burning off some calories on the stair climber was the possible acquisition of Yahoo by Microsoft. I was actually in New York when I heard the story break, and one of my meetings today was at the Microsoft New York office, so I thought it would be interesting to ask my contact there what she thought. She indicated that this story has been going on for years now, but apparently they’re going back to the table. As we were chatting in a conference room, someone walked by outside asking somebody else if they had bought Yahoo stock. The media speculation was good news for Yahoo stock, not so for Microsoft.

Obviously, there’s a lot to mull over here. Rumor has it that Steve Ballmer is not taking Google’s DoubleClick scoop lightly. In fact, he’s downright pissed. And he may be preparing to make Terry Semel an offer he can’t refuse. Semel’s played hard to get before, but this time the shotgun marriage just might take.

The obvious question is how the two search properties will combine. In this case, it might be a case of two wrongs not making a right. Yahoo has managed to keep their search share from eroding too badly despite Google’s domination, but Microsoft has been sputtering out of the starting gate from day one. The problem is that Yahoo and Live search duplicate each other in many ways, rather than complement each other. The biggest problem with both engines is too much focus on revenue generation and not enough on user experience. They each have their different flavors, but the combined Microhoo (or YahSoft) is in no way a Google killer. In fact, with the turmoil of a merger and the inevitable awkwardness of combining search teams, I see the focus on the user suffering even more. Both engines desperately need a clearly focused user champion to revamp the search experience (à la Google’s power usability troika: Larry, Sergey and Marissa) and this deal just doesn’t produce that.

I think the rationale of the deal has much less to do with search and more to do with a rather petulant online land grab. Yahoo does bring some interesting assets into the Microsoft fold. Microsoft is definitely eyeing the Asian market, and Yahoo dominates in most of these markets, with the exception of China, and that’s a whole other story. Yahoo also brings a lot of users and online real estate as well, with roughly double Microsoft’s user base. This move looks like a strategy to bolster the front line for a head to head confrontation with Google in the ad serving space. Of course, it could just be that Ballmer has a lot of cash burning a hole in his pocket and every time he goes to spend it, Google snatches the acquisition away from him. Steve wants to buy a ball he can actually take home.

One really interesting aspect of this is what it will do in the search space. While I really don’t think Yahoo’s search assets are the impetus for the deal, the potential combining of Live Search and Yahoo cleans up the search landscape a bit, and my guess is there will be significant user fallout from this. This will not be good news for the users of these two engines in the short term. But it could be extremely good news for Ask.

I just did an interview with Michael Ferguson, Ask’s usability point person (coming in Search Engine Land next week) and the IAC team is doing some really smart and relatively innovative things with their engine. And they’re probably the least aggressive in jamming ads on the page right now. Diller has provided a big enough bankroll to allow Jim Lanzone and his team to take a long run at capturing market share and this just may be the break they need. Based on what I’ve seen, Ask is paying a lot of attention to the user experience, and they may well pick up some converts and some pretty significant market share lift because of that. Perhaps Microsoft employees should be eyeing IAC stock. Or perhaps Steve Ballmer is starting to jot them down on his shopping list. After all, Google will probably scoop Yahoo out from underneath him at the last minute anyway!

Does Online Video Give Us a New User Interface?

In Wednesday’s SearchInsider, Aaron Goldman looked at video search and what’s going to be required for it to truly become an interesting advertising vehicle.  Some of the speculation comes from Aaron’s musing about what might happen if Google purchased Blinkx.

To me, video search is one of the more interesting growth areas for search in the future.  Currently, there are some restrictions on video search that are imposed by the current state of technology.  Our ability to index video is restricted to the addition of metadata.  For each video clip, someone must take the time to include the tags indicating what the video is about.  As long as video search relies on this, the opportunities for advancement are extremely limited.  But right now we’re advancing on several technical fronts to be able to index content and not rely on metadata.  Several organizations, including Microsoft, are working on visual recognition algorithms that allow for true indexing of video content.  Advancements in computing horsepower will soon give us the sheer muscle required for the gargantuan indexing task.  Once we remove humans from the equation, allowing for automated indexing of video content, the world of video search suddenly becomes much more promising.
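To see why metadata-only indexing is so limiting, consider what a tag-based video index actually does. In this minimal sketch (the clips, tags, and function names are invented for illustration), a clip is findable only through the words a human happened to attach to it; everything inside the footage itself is invisible to the engine.

```python
from collections import defaultdict

def build_tag_index(clips):
    """Invert human-supplied tags into a map of tag -> set of clip IDs."""
    index = defaultdict(set)
    for clip_id, tags in clips.items():
        for tag in tags:
            index[tag.lower()].add(clip_id)
    return index

def search(index, query):
    """Return clips tagged with every query word; untagged content can't match."""
    words = [w.lower() for w in query.split()]
    if not words:
        return set()
    results = set(index.get(words[0], set()))
    for word in words[1:]:
        results &= index.get(word, set())
    return results

clips = {
    "clip1": ["prague", "travel", "castle"],
    "clip2": ["prague", "food"],
    "clip3": ["travel", "beach"],
}
index = build_tag_index(clips)
print(search(index, "prague travel"))    # {'clip1'}
print(search(index, "castle interior"))  # set(): nobody tagged "interior"
```

The second query illustrates the bottleneck: the castle interior may well appear on screen, but if no human typed that tag, the clip simply doesn’t exist as far as the engine is concerned. Visual recognition algorithms would, in effect, populate this same index automatically from the frames themselves.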

When this happens, we move accessing information in a video from being a linear experience to being a nonlinear experience.  Suddenly we have random access to information embedded within the video.  As mentioned, the technology is being developed to enable this, but the question is, will we as viewers be able to adapt to this paradigm shift?  The evolution of video has been one that is coming from a linear, storytelling experience.  Every video is generally a self-contained story with a distinct beginning, middle and end.  This is how we’re used to looking at video.

But when video search makes it possible to access information at any point in the video, how will that impact our engagement with that video?  In the last 10 years, we’ve seen some fairly dramatic shifts in how we assimilate written information.  We have moved from our past experience, where information was presented in very much a linear fashion in novels or books, to the way we currently assimilate information on websites.  When we interact with websites, we “berry pick”, hunting in various places on the page for information cues that seem to offer what we are looking for.  Assimilation of the written word is a much more erratic experience right now.  We move in a nonlinear fashion through websites, picking up information and navigating based solely on our intent and the paths we choose for ourselves.  One of the greatest revelations in website design was that we cannot restrict users to a linear progression through our site, much as we might want to control their experience.

This adaptation has happened fairly quickly on websites, but will it happen as quickly with video?  When we can search for and access information anywhere in the video, what does that do for the nature of our engagement with that video?  Certainly it opens the door to some very interesting marketing opportunities, with what I’ve previously described as “product placement on steroids”.  The ability to click on any item in a video and instantly be connected to more information about that item creates a tremendous opportunity for advertisers.  But it also opens the potential for multiple paths through a video.  Does watching a video become more like playing a video game, where we can pursue different paths and have different experiences depending on the path we choose?  Does a travel video on Prague become an interactive virtual tour, where we choose our own path through Prague?  And is that interactive virtual tour assembled on-the-fly from dozens of different video clips?  Do we assemble content based on our intent with the help of our video search tool?  Do video producers take a dramatically more granular approach to producing content, leaving you to assemble the storyline from these individual bits of content, based on what you want to see?

This promises an extraordinarily rich user experience.  Consider how this might play out for an individual user.  We go to Google’s video search tool and search for the Loreta, one of the top tourist attractions in Prague.  We find a clip that takes us on a quick virtual tour and within the clip we could click on other things of interest.  For instance, we could climb to the top of the bell tower and take a look over Prague.  We could click on any building and if there was a video available we would be instantly transported to that building.  Or, if we choose, we could search for the nearest hotel and find the corresponding video clip.  The entire video has been indexed so no matter what we click on, our video search engine can use that to initiate a query and bring us back the resulting clips.  The clips are assembled into a virtual montage that we can navigate through depending on our interest areas.  We create a virtual version of Prague, assembled from all the video content that’s available, and we can access just what we’re interested in and search for any content that might be embedded into any of those individual video files.  Underneath this layer of video content there could be additional layers of functionality.  For instance, you could tie it in with mapping functionality, à la Google Earth.  You could tie in Web search functionality so that you could easily click through to the relevant websites.  This could also provide access to booking engines and a number of other potential actions that we could take.

Such an experience is not that great a stretch from where we currently are.  To see how it might play out, take a look at Microsoft’s PhotoSynth.


PhotoSynth View of Piazza San Marco in Venice

It does just what I’m describing with video, only with pictures.  It creates a 3-D world from the thousands of pictures that have been publicly shared.  I highly recommend taking it for a spin, as it provides a fascinating look at what human-computer interfaces can be.

As we start considering the possibilities for video, the problem is we’re still stuck in our current paradigm of how we interact with video.  My feeling is that once indexing technology allows us to truly index the content of the video, the nature of our interaction with video will completely change.  We’ll take the sensory input we expect from video and extend that into our typical user experience with more types of content.  Our interfaces will be more satisfying because they will become more like real life.  They will engage more of our senses and put us into a deeper and richer virtual world.  More and more, as technology progresses, our interface with technology will start to look more like our experience with the physical world.  As this happens, we will have the ability to step from an interface that engages our senses of sight and sound into a more abstract world where we interact with written text.  The transition between these two interfaces will be seamless and we can step back and forth as we wish.

The promise of video lies not so much in taking video as we know it and bringing it online.  The promise of video is that it provides a distinctly different user experience which could prove to be the new interface to technology.  But to make this happen we have to be able to index and search for the content that lies embedded within video.  We have to be able to take that video content and manipulate and mold it into a virtual world that we can interact with.  And that is the promise that lies within the next-generation video search.

Improving the Odds of Connecting with Your Target Market

Kim Krause Berg had an interesting additional thought to my post about eye tracking. Her question: “What happens when your target market gets up on the wrong side of the bed?”

This got me to thinking about the validity of market research and understanding more about your target customer. Kim’s point, which she makes quite clearly, is that people are people and all the research in the world won’t be able to tell you if your target customer is having a bad day, or for that matter, an extraordinarily good day, when they are interacting with your site. How much of a role does emotion play in predicting behavior?

In marketing and user centered design circles, we often talk about our targeted users and customers. Companies with money to blow will run studies on who their target consumers are, or run focus groups on what people love and hate about their products. The human factors industry studies human-computer behavior. Usability companies try to understand what ticks off end users. Conversion experts look for all the reasons behind failed sales. Search engine marketers dig deep for keywords used by the perfect end user who knows exactly what they’re looking for.

Once all this data is gathered, white papers are written, case studies are published and articles are run that inform us about what our site visitors and product users want, what they like, how they make choices and why. We may think we’re very cool and savvy to have found the holy grail of ROI.

What if your product, service, internet application or website is humming along, primed for the perfect targeted end user and that person is suddenly different?

Perhaps they are emotionally upset. PMS. Menopausal. Facing surgery. Sleepless parents. Overworked wage earners. Out of work. On medication. Depressed. Drunk. Suffers a sudden loss of eyesight or use of their hands. There are a zillion reasons why someone has an “off” day, is feeling emotionally or mentally out of whack or drastically changes in some way. This can last for a day, or longer.

Either way, what they are dealing with, at the moment they are accessing your website, service, product or application, may have an impact on how successful they are at completing a task.

Marketing is a game of percentages. It’s all about increasing your odds of hitting that perfect combination: putting the right message in front of the right person at the right time. Will you get it right 100% of the time? Of course not. But then again, if you can improve your odds of success from 50% to 60 or 70% you’ve just scored a huge marketing coup.

When you reduce marketing to one to one communication, you’re completely dependent on the receptiveness of your intended target. Unless you’re in front of the person when you communicate with them, there’s no way for you to pick up their mood or emotion. You can’t alter your message according to the signals that you’re picking up. But the interesting thing is, as variable as people are on an individual basis, if you put enough of them together they start reacting in predictable patterns. While it might be impossible to predict the success of your message on an individual basis, the greater the size of the group, the more confident you can be in predicting what the aggregate patterns will look like. And that’s where understanding more about your target market can dramatically improve your odds. If Kim is in my target market, I might not know what her mood might be on any given day. If I have 10,000 Kims in my target market, I can be fairly sure that on any given day a certain percentage of them will be in a good mood, a certain percentage will be in a bad mood, and a certain percentage will be relatively ambivalent. I don’t have to be precise on a one-to-one level, because the law of averages works in my favor. I’ll get more right than wrong. What is important, however, is that I have a good understanding of what all those Kims generally like, what motivates them, and what their intent is when they interact with my brand.
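The law-of-averages point can be demonstrated with a quick, hypothetical simulation: any one person’s mood on a given day is effectively a coin flip we can’t call, but the fraction of a 10,000-person segment in a good mood is remarkably stable. The 60% “good mood” rate here is an assumed number, picked purely for illustration.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

GOOD_MOOD_RATE = 0.6  # assumed: 60% of the segment wakes up receptive

def in_good_mood():
    """One individual's mood today -- effectively unpredictable."""
    return random.random() < GOOD_MOOD_RATE

def good_mood_fraction(n):
    """Fraction of an n-person segment in a good mood today."""
    return sum(in_good_mood() for _ in range(n)) / n

# One person: the outcome is anyone's guess.
kim_today = in_good_mood()

# Ten thousand people: the aggregate hugs the underlying rate.
fraction = good_mood_fraction(10_000)
```

Across repeated runs, the 10,000-person fraction stays within a point or two of 60%, even though any individual Kim remains a mystery, which is why segment-level research pays off.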

There’s a lot of talk about personas as a tool to help you understand your target market better. One of the reasons people hesitate to use personas is that it feels odd, when your target market could be made up of thousands or millions of individuals, to build a conceptual framework that represents just one individual. Again, it seems like you’re oversimplifying the collective needs and wants of your segment. But the power of a persona is the way it forces you to shift your paradigm, the way it forces you to look at things from a customer’s point of view and interact with your brand through their eyes, not yours. It’s this fundamental shift in thinking that has to happen to close the communication gap. Once you build your persona framework, you can start dropping in the individual pieces of research intelligence you might have on your target market. It helps to create a profile, complete with a much greater understanding of what motivates that target, relative to your offering. It’s very difficult to start a conversation with someone when you have no idea who you’re talking to.

The whole point of communication is to effectively connect and transfer information back and forth. The greater the understanding, the greater the odds of making that connection. Ideally, we should all be able to sit in front of each individual we’re communicating with and be able to read their body language, be able to pick up their signals, be able to interpret their moods and emotions. This being impossible (my track record with my wife is pretty abysmal and I live with her every day) the next best thing is to understand more about the group as a whole and what motivates them, and then to be able to craft your messaging in a way that resonates with them. Again, it’s all about improving your odds for success. If Kim gets up on the wrong side of the bed today, I might totally blow my chances of getting the right message to her, simply because she’s not in the mood to receive it. But for every one I get wrong, there will be several more that I get right.

Shari Thurow Talking Smack about Eye Tracking

You know, if I didn’t know better I’d say that Shari Thurow had issues with me and eye tracking. I ran across a column a couple of weeks ago where she was talking about the niches that SEOs are carving out for themselves and she mentioned eye tracking specifically. In fact she devoted a whole section to eye tracking. Now, it’s pretty hard not to take it personally when Enquiro is the only search marketing company I know that does extensive eye tracking. We’re the only ones I’m aware of that have eye tracking equipment in-house. So when Shari singles out eye tracking and warns about using the results in isolation…

That brings me to my favorite group of SEO specialists: search usability professionals. As much as I read and admire their research, they, too, often don’t focus on the big picture.

…I’m not sure who else she might be talking about.

I’ve been meaning to post on this for a while but I just didn’t get around to it. I’m on the road today and feeling a little cranky so what the heck. It’s time to respond in kind. First, here’s Shari’s take on eye tracking and SEO.

Eye-tracking data is always fascinating to observe on a wide variety of Web pages, including SERPs (define). As a Web developer, I love eye-tracking data to let me know how well I’m drawing visitors’ attention to the appropriate calls to action for each page type.

Nonetheless, eye-tracking data can be deceiving. Most search marketers understand the SERP’s prime viewing area, which is in the shape of an “F.” Organic or natural search results are viewed far more often than search engine ads are, and (as expected) top, above-the-fold results are viewed more often than the lower, below-the-fold results. Viewing a top listing in a SERP isn’t the same as clicking that link and taking the Web site owner’s desired call to action.

Remember, usability testing isn’t the same as focus groups and eye tracking. Focus groups measure peoples’ opinions about a product or service. Eye-tracking data provide information about where people focus their visual attention. Usability testing is task-oriented. It measures whether participants complete a desired task. If the desired task isn’t completed, the tests often reveal the many roadblocks to task completion.

Eye-tracking tests used in conjunction with usability tests and Web analytics analysis can reveal a plethora of accurate information about search behavior. But eye-tracking tests used in isolation yield limited information, just as Web analytics and Web positioning data yield limited (and often erroneous) information.

Okay Shari, you didn’t mention me or Enquiro by name but again, who else would you be talking about?

Actually, Shari and I agree more than we disagree here. I agree that no single data source or research or testing approach provides all the answers, including eye tracking. However, eye tracking data adds an extraordinarily rich layer of data to common usability testing. When Shari says eye tracking is not the same as usability testing, she’s only half right. As Shari points out, eye tracking combines very well with usability testing but, in many cases, can be overkill. Usability testing is task oriented. There’s no reason why eye tracking studies can’t be task oriented as well (most of ours are). The eye tracking equipment we use is very unobtrusive. It’s virtually like interacting with any computer in a usability lab. In usability testing you put someone in front of the computer with a task and ask them to complete it. Typically you record the entire interaction with software such as TechSmith’s Morae. Afterward, you can replay the session and watch where the cursor goes. Eye tracking can capture all that, plus capture where the eyes went. It’s like taking a two-dimensional test and suddenly making it three-dimensional. Everything you do in usability can also be done with eye tracking.

The fact is, the understanding we currently have of interaction with the search results would be impossible to know without eye tracking. I’d like to think that a lot of our current understanding of interaction with search results comes from the extensive eye tracking testing we’ve done on the search results page. The facts that Shari says are common knowledge among search marketers come, in large part, from our work with eye tracking. And we’re not the only ones. Cornell and Microsoft have done their own eye tracking studies, as has Jakob Nielsen, and the findings have been remarkably similar. I’ve actually talked to the groups responsible for these other eye tracking tests and we’ve all learned from each other.

When Enquiro produced our studies we took a deep dive into the data that we collected. I think we did an excellent job of not presenting just the top level findings but really trying to create an understanding of what the interaction with the search results page looks like. Over the course of the last two years I’ve talked to Google, Microsoft and Yahoo. I’ve shared the findings of our research and learned a little bit more about the findings of their own internal research. I think, on the whole, we know a lot more about how people interact with search than we did two years ago, thanks in large part to eye tracking technology. The big picture Shari keeps alluding to has broadened and been colored much more extensively thanks to those studies. And Enquiro has tried to share that information as much as possible. I don’t know of anyone else in the search marketing world who’s done more to help marketers understand how people interact with search. When we released our first study, Shari wrote a previous column that basically said, “Duh, who didn’t know this before?” Well, based on my discussions with hundreds, actually thousands, of people, the answer is almost everyone, save for a few usability people at each of the main engines.

There are some dangers with eye tracking. Perhaps the biggest danger is that heat maps are so compelling visually. People tend not to go any further. The Golden Triangle image has been displayed hundreds, if not thousands, of times since we first released it. It’s one aggregate snapshot of search activity. And perhaps this is what Shari’s referring to. If so, I agree with her completely. This one snapshot can be deceiving. You need to do a really deep dive into the data to understand all the variations that can take place. But it’s not the methodology of eye tracking that’s at fault here. It’s people’s unwillingness to roll up their sleeves and wade through the amount of data that comes with eye tracking, preferring instead to stop at those colorful heat maps and not go any further. Conclusions drawn on limited data can be dangerous, no matter the methodology behind them. I actually said the same about an eye tracking study Microsoft did that had a few people drawing overly simplified conclusions. The same is true for usability testing, focus groups, quantitative analysis, you name it. I really don’t believe Enquiro is guilty of doing this. That’s why we released reports that are a couple hundred pages in length, trying to do justice to the data we collected.

Look, eye tracking is a tool, a very powerful one. And I don’t think there’s any other tool I’ve run across that can provide more insight into the search experience, when it’s used with a well-designed study. Personally, if you want to learn more about how people interact with engines, I don’t think there’s any better place to start than our reports. And it’s not just me saying so. I’ve heard as much from hundreds of people who have bought them, including representatives at every major search engine (they all have corporate licenses, as do a few companies you might have heard of: IBM, HP, Xerox, to name a few). I know the results pages you see at each of the major engines look the way they do in part because of our studies.

Shari says we don’t focus on the big picture. Shari, you should know that you can’t see the big picture until you fill in the individual pieces of the puzzle. That’s what we’ve been trying to do. I only wish more people out there followed our example.

User-centricity is More than Just a Word

Ever since Time Magazine made you and me the person of the year, user experience has been the two words on the tip of everyone’s tongue. We’re all saying that the user is king and that we’re building everything around them. But I fear that user-centricity is quickly becoming one of those corporate clichés that’s easy to say, but much, much harder to do. All too often I see internal fighting in a lot of companies between those who truly get user centricity and have become the internal user champions and those who are continuing to push the corporate agenda, at the expense of the user experience. The tough part of user centricity is seeing things through the user’s eyes. We can do user testing but if we truly put the user first, it requires tremendous courage and fortitude to make the user the primary stakeholder. All too often, I see user considerations being one of several factors that are being balanced in the overall design. And often, it takes a backseat to other considerations, such as monetization. This is the trap that Yahoo currently finds themselves in. They talk about user experience all the time. But the fact is, over the last two years it’s really been the advertiser who’s owned their search results page. I’ve recently seen signs of the balance tipping more in the user’s favor with the rollout of Panama and a more judicious presentation of top sponsored ads. But I’m still not sure the user is winning the battle at Yahoo!

It’s not easy to step inside your user’s head when it comes to designing interfaces. It’s very tough to toggle the user perspective on and off when you’re going through a design cycle. The feedback we get from usability testing tends to be too far removed from the actual implementation of the design. By that time the meat of the findings has been watered down and diluted to the point where the user’s voice is barely heard. That’s why I like personas as a design vehicle. A well formulated persona keeps you on track. It keeps you in the mindset of the user. It gives you a mental framework you can step into quickly and readjust your perspective to that of the user, not the designer.

If you’re truly going to be user centric, be prepared to take a lot of flak from a lot of people. This is not a promise to be made lightly. You have to commit to it and not let anything dissuade you from delivering the best possible end-user experience, defined in the user’s own terms. This can’t be a corporate feel good thing. It has to be a corporate commitment that requires balls the size of Texas. And if you’re going to make a commitment, you better be damn sure that the entire company is also willing to make the same commitment. The user experience group can’t be a lone bastion for the user, fighting a huge sea of corporate momentum going in the opposite direction. This isn’t about balancing the user in the grand scheme of things, it’s about committing wholeheartedly to them and getting everyone else in the organization to make the same commitment. If you can do so, I think the potential wins are huge. There are a lot of people talking about user centricity, but not a lot of people delivering on it consistently and wholeheartedly.

A Caffeine Fueled Vision of the Future

This week, for some reason (largely to do with thinking I could still handle caffeine and being horribly wrong), a number of pieces fell into place for me when it came to looking at how we might interact with computers and the Internet in the future.  I began to sketch that out in my SearchInsider column today (more details about the caffeine episode are in it), but quickly found that I was at the end of my editorial limit and there were a lot of pieces of the vision that I wasn’t able to draw together.  So I promised to put a post on this blog going into a little more detail.

The ironic thing about this vision was that although I’d never seen it fully described before, as I thought about it I realized a lot of the pieces to make this happen are already in development.  So obviously, somewhere out there, somebody else has also seen the same vision, or at least pieces of it.  The other thing that struck me was that it all made sense as a logical extension of how I interact with computers today.  Obviously there’s a lot of technology being developed but if you take each of those vectors and follow it forward into the future, they all seem to converge into a similar picture.

Actually, the most commonly referenced rendering of the future that I’ve seen is the world that Spielberg imagined in his movie Minority Report.  Although anchored in pop culture, the way that Spielberg arrived at his vision is interesting to note. He took the original short story by Philip K. Dick and fleshed it out by assembling a group of futurists, including philosophers, scientists and artists, and putting them together in a think tank.  Together they came up with a vision of the future that was both chilling and intriguing.

I mention Minority Report because there are certain aspects of what I saw the future to be that seem to mirror what Spielberg came up with for his future.  So, let me flesh out the individual components and provide links to technology currently under development that seems to point this way.

The Cloud

First of all, what will the web become?  There’s been a lot of talk about Web 2.0 and Web 3.0, or the Semantic Web envisioned by Tim Berners Lee.  Seth Godin had a particularly interesting post (referenced in my column) that he called Web4.  All these visions of the Web’s future share common elements. In Godin’s version, “Web4 is about making connections, about serendipity and about the network taking initiative”. This Web knows what we’re doing, knows what we have to do in the future, knows where we are at any given time, knows what we want and works as our personal assistant to tie all those pieces together and make our lives easier.  More than that, it connects us in new ways, creating the ad hoc communities that I talked about in my earlier post, Brain Numbing Ideas on Friday afternoon.

For the sake of this post, I’m calling my version of the new Web “the Cloud”, borrowing some language from Microsoft. For me the Cloud is all about universal access, functionality, connection and information.  The Cloud becomes the repository where we put all our information, both that which we want to make publicly accessible and that which we want to keep private.  Initially this will cause some concern, as we wrestle with the change of thinking required to understand that physical ownership of data does not always equal security of that same data.  We’ll have to gain a sense of comfort that data stored in online repositories can still remain private. 

Another challenge will be understanding where we, ourselves, draw the line between the data we choose to make publicly accessible and the data we want to keep for our own personal use.  There will be inevitable mistakes of an embarrassing nature as we learn where to put up our own firewalls.  But the fascinating part about the Cloud is that it completely frees us physically. We can take all the data we need to keep our lives on track, stored in the Cloud, and have it accessible to us anywhere we are. What’s more, everyone else is doing the same thing.  So within the Cloud, we’ll be able to find anything that anyone chooses to share with us. This could include the music they create, the stories they write, or on a more practical level, what our favorite store currently has in stock, or what our favorite restaurant has on for its special tonight.  Flight schedules, user manuals, technical documentation, travel journals…the list is endless.  And it all resides in the Cloud, accessible to us if we choose.

The other really interesting aspect of the Cloud is the functionality it can offer as we begin to build true applications into the web, through Web 2.0 technology. We start to imagine a world where any functionality we could wish for is available when we need it, and where we can buy access as required.  The Cloud becomes a rich source of all the functionality we could ever want.  Some of that functionality we use daily, to create our own schedules, to communicate, to connect with others and to manage our finances.  Some of that functionality we may use once or twice in a lifetime.  It really doesn’t matter because it’s always there for us when we need it.

The functionality of the Cloud is already under development.  The two most notable examples can be found in Microsoft’s new Office Live Suite and in the collection of applications that Google is assembling.  Although both are early in their development cycles, one can already see where they could go in the future.

The final noteworthy aspect of the Cloud is that it will create the basic foundation for all communication in the future.  Our entertainment options will be delivered through the Cloud.  We will communicate with each other through the Cloud, either by talking, writing or seeing each other.  We will access all our information through the Cloud.

For the Cloud to work, it has to be ubiquitous.  This represents possibly the single greatest challenge at the current time.  The Cloud is already being built, but our ability to access the Cloud still depends on the speed of our connection, and the fact is that, right now, our wireless infrastructure doesn’t allow for a robust enough connection to really leverage what the Cloud has to offer.  But universal wireless access is currently being rolled out in more and more locations, so the day is drawing near when access will cease to be a problem.

So, when the Cloud exists, the next question is how do we access it?  Let’s start with the two access points that are most common today: at home and at work.

The Home Box

The Home Box becomes the nerve center of our home.  It acts as a control point for all the functionality and communication we need when we’re not at work.  The Home Box consists of a central unit, which doubles as our main entertainment center, and a number of “smart pods” located throughout the home, each connected to a touch screen.

So, what would the Home Box do?  Well first of all, it would inform and entertain us.  The pipeline that funnels our entertainment options to us would be directly connected to the Cloud.  We would choose what we want to see, so the idea of channels becomes obsolete.  All entertainment options exist in the Cloud and we pick and choose what we want, when we want.

Also, the Home Box makes each one of those entertainment options totally interactive.  We can engage with the programming and shape it as we see fit.  We can manipulate the content to match our preferences.  The Home Box can watch four or five sporting events and assemble a customized highlight reel based on what we want to see.  The Home Box can scan the Cloud for new works by artists, whether they be visual artists, music artists or video artists, and notify us when new content is ready for us to enjoy.  If an interest suddenly develops in one particular area, for instance a location that we want to visit on an upcoming vacation, the Home Box assembles all the information that exists, sorted by our preferences, and brings it back to us.  And at any time, while watching a video about a particular destination, we can tag items of interest within the video for further reference.  As soon as they’re tagged, a background application can start compiling information on whatever we indicated we were interested in.  Advertising, in this manifestation, becomes totally interwoven into the experience.  We indicate when we’re interested in something and the connection to the advertiser is initiated by us with a quick click.

But the Home Box is much more than just a smarter TV set or stereo.  It also runs our home.  It monitors energy consumption levels and adjusts them as required.  It monitors what’s currently in our fridge and our pantry (by the way, computers are already being built into fridges) and notifies us when we’re out of something.  Or, if there’s a particular recipe we want to make, it will let us know what we currently have and what we need to go shopping for.

Microsoft already has the vision firmly in mind.  Many of the components are already here.  The limited success of Microsoft’s Windows Media Center has not dissuaded them from this vision of the future.  Windows Media Center is now built into premium versions of the Vista operating system.  And the Smart Pods I refer to?  Each Xbox 360 can tap right into Windows XP Media Center.  The technology is already in place.

The Work Box

Probably the least amount of change that I see in the future is in how we access the Internet at work.  For those of us who work in an office environment, we’re already fairly well connected to the Internet.  The primary difference in this case would be where the data resides.  Eventually, as we gain comfort with the security protocols that exist within the Cloud, we will realize the benefits that come with hosting our corporate data where it’s accessible to all members of the organization, no matter where they are physically located.

But consider what happens for the workers who don’t work in an office environment.  Access to the Cloud now allows them to substantially increase their connectivity and functionality while they’re mobile.  You can instantly access the inventory of any retail location within the chain.  You can see if a part is in stock at the warehouse.  You can access files and documents from anywhere, at any time.  And you can tap into the core functionality of your office applications as you wish, wherever you happen to be.

Once again, much of the functionality that would enable this is already in place or being developed.  In the last year we at Enquiro have started to realize the capabilities of Microsoft Exchange Server and SharePoint services.  Just today, Google announced that new enterprise-level apps would be available on the web.  Increasingly, more and more collaborative tools that use the Internet as their common ground are being developed.  The logical next step is to allow these to reside within the Cloud and to free them from the constraints of our own internal hardware and software infrastructure.

The Mobile Device

When we talk about tangible technology that will enable this future, hardware that we can see and touch, the mobile piece of the equation is the most critical.  For us to truly realize the full functionality of the Cloud, we have to have universal access to it.  It has to come with us as we live our lives.  The new mobile device becomes a constant connection to the Cloud.  Small, sleek, GPS-enabled, with extended communication capabilities, the new handheld device will become our computing device of choice.  All the data and functionality that we could require at any time exists in the Cloud.  The handheld device acts as our primary connection to the Cloud.  We pull down the information that we need, we rent functionality as required, we do what we have to do and then we move on with our lives.

Our mobile device comes with us and plugs into any environment that we’re in.  When we’re at work, we plug it into a small docking station and all the files that we require are interchanged automatically.  Work we did at home is automatically uploaded to the corporate section of the Cloud, our address books and appointment calendars are instantly updated, new communications are downloaded, and an accurate snapshot of our lives is captured and is available to us.  When we get home again we dock our mobile device and the personal half of our lives is likewise updated.

Consider some practical applications of this:

When we go to the gym, our exercise equipment is now “Cloud” enabled.  Our entire exercise program is recorded on our mobile device.  As we move from station to station we quickly plug it into a docking station, the weights are automatically adjusted, the number of reps is uploaded, and as we do our exercises, appropriate motivating music and messages are heard in our ear. At the same time, our heart rate and other biological signals are being monitored and are being fed back to the exercise equipment, maximizing our workout.

When we’re at home, we quickly plug our mobile device into the Smart Pod in the kitchen, and everything we need to get on our upcoming shopping trip is instantly uploaded.  What’s more, with the functionality built into the Cloud, the best specials on each of the items are instantly determined, the best route to pick up all the items is sent to our GPS navigation module, and our shopping trip is efficiently laid out for us.  While we’re there, the built-in bar code scanner allows us to comparison shop on any item, in the geographic radius we choose.

As I fly back from San Francisco, a flight delay means that I may miss my connecting flight in Seattle.  My mobile device notes this, adjusts my schedule accordingly, automatically notifies my wife and scans airline schedules to see if an alternative flight might still get me home without an unexpected layover near SeaTac Airport.  If there’s no way I can make it back, it books me a room at my preferred hotel.

The Missing Pieces

I happen to think this is a pretty compelling vision of the future.  And as it started to come together for me, I was surprised by how many of the components already exist or are currently being developed.  As I said in the beginning, it seems like a puzzle with a lot of the pieces already in place.  There are some things, however, that still need to come together for this vision to become real.  Here are the challenges as I see them.

Computing Horsepower

For the mobile device that I envisioned to become a reality, we have to substantially up the ante of computing horsepower.  The story that led to my writing of the Search Insider column was one about the new research chip that is currently under development at Intel.  Right now the super chips are being developed for a new breed of supercomputer, but the trickle-down effects are inevitable.  Just to give you an idea of the quantum leap in performance we’re talking about, the chip is designed to deliver teraflops performance.  A teraflop is a trillion calculations per second.  The first time teraflops performance was achieved was in 1997 on a supercomputer that took up more than 2000 square feet, powered by 10,000 Pentium Pro processors.  With the new development, that same performance is achieved on a single multi-core chip about the size of a fingernail.  This opens the door to dramatic new performance capabilities, including a new level of artificial intelligence, instant video communications, photorealistic games, multimedia data mining and real-time speech recognition.

A descendant of this prototype chip could make our mobile device several orders of magnitude more powerful than our most powerful desktop box today.  And when implanted in our Home Box, this new super chip allows us to scan any video file and pick up specific items of interest.  You could scan the top 100 movies of any year to see how many of them reference the city of Cleveland, Ohio (not exactly sure why you’d want to do this), or include a product placement for Apple.

Better Speech Recognition

One of the biggest challenges with mobile computing is the input/output part of the problem.  Small just does not lend itself to being user-friendly when it comes to getting information in and out of the device.  We struggle with tiny keyboards and small screens.  But simply talking has proven to be a remarkably efficient communication tool for us for thousands of years.  The keyboard was a necessary evil because speech recognition wasn’t an option for us in the past.  We can talk much faster than we can type.

I was recently introduced to Dragon Naturally Speaking for the first time.  I’ve been trying it for about three weeks now.  Although it’s still getting to know me and I’m still getting to know it, when it works it works very well.  I found it a much more efficient way to interact with my computer.  It would certainly make interacting with a mobile device infinitely more satisfying.  The challenge right now is that speech recognition requires a fairly quiet environment, means you’re constantly talking to yourself, and demands more computing power than mobile devices currently have.

We’ve already dealt with the computing horsepower problem above.  So how do we deal with the challenge of getting our vocal commands recognized by our mobile device?  Let me introduce you to the subvocalization mic.  The mic actually picks up the vibrations from our vocal cords, even if we’re only whispering, and renders recognizable speech without all the background noise.  New prototype sensors can detect subvocal or silent speech.  We can speak quietly (even silently) to ourselves, no matter how noisy the environment, and our mobile device would be able to understand what we’re saying.

Better Visual Displays

The other challenge with a mobile device is in freeing ourselves from the tiny little 2.5″ x 2.5″ screen.  It just does not produce a very satisfying user experience.  One of the biggest frustrations I hear about the lack of functionality of many mobile apps comes simply because we don’t have enough screen real estate.  This is where a heads-up display could make our lives much, much easier.  Right now they’re still pretty cumbersome and make us look like cyborgs, but you just know we’re not far from the day when they could easily be built into a pair of non-intrusive eyeglasses.  Then the output from our mobile device can be as large as we want it to be.

Going this one step further, let’s borrow a scene from Spielberg’s Minority Report.  We have the heads-up display which creates a virtual 3-D representation of the interface.  We could also have sensors on our hands that would turn that display into a virtual 3-D touchscreen experience.  We could “touch” different things within the display and interact with our computing device in this way.  Combined with subvocalization speech commands, this could create the ultimate user interface.  Does this sound far-fetched?  Microsoft has already developed much of the technology and has licensed it to a company called EON Reality.  Like I said, no matter what the mind can envision, it’s probably already under development.  As I started down this path, it particularly struck me how many of the components under development had the Microsoft brand on them.

If you can fill in other pieces of the puzzle, or you have your own vision of the future, make sure you take a few moments to comment.

Marissa Mayer Interview on Personalization

Below is the full transcript of the interview with Marissa Mayer on personalization of search results.  For commentary, see the Just Behave column on Search Engine Land.

Gord: It’s a little more than two weeks ago since Google made the announcement that personalization would become more of a default standard for more users on Google.  Why did you move towards making that call?

Marissa: We’ve had a very impressive suite of personalized products for awhile now: the personalized homepage, search history, personalized web search, and we haven’t had them integrated, which I think has made it somewhat confusing for users.  A lot of people didn’t know if they had signed up for search history or personalized search, or whether or not it was on.  What we really wanted to do was move to a signed-in version of Google and a signed-out version of Google.  So if you’re signed in you have access to the personalized homepage, the personalized search results and search history.  You know all three of those are working for you when you’re signed in.  And if you’re signed out, meaning that you don’t see an email address in the upper right-hand corner, personalized search isn’t turned on.  If anything, it’s a cleaning up of the user model, to make it clearer to users what services they’re using and when they’re using them.

Gord: But some of the criticism actually runs counter to that.  One of the criticisms is that it used to be clearer, as far as the user went, when you were signed in and when you were signed out.  There were more indicators on the Google results page whether you were getting personalized results or not.  Some of those seem to have disappeared, so personalized results have become more of a default now, rather than an option that’s available to the user.

Marissa: If you think about it as default-on when you’re signed in, I think that it’s still as clear on the search results page.  We removed the “turn off the personalized search results” link, but you still see very clearly up in the upper right-hand corner whether or not you’re signed in, your e-mail address appears, and that’s your clue that Google has personalized your results.  I do think, based on our user studies and our own usage at Google, that we’ve made the model clearer.  We actually ended up at a stage with our personalized product earlier this year where, at one point, Eric (Schmidt) asked “am I using personalized search?”  And the team’s answer as to whether or not he was currently using it was so complicated that even he couldn’t follow it.  He’d have to go to “my account”, see whether or not he was signed up for personalized search, make sure that the toggle hadn’t been turned off or on, and there was no way to just glance at the search results page and easily tell whether or not it was invoked.  So now it’s very easy: if you see your username and e-mail address up in the upper right-hand corner, you’re getting personalized results, and if you don’t, you’re not.  So effectively there are two parallel universes of Google, per se.  One if you’re signed out, where you see the classic homepage and the classic search results, and one where you’re signed in, where you get the personalized homepage and…you’ll be able to toggle back and forth, of course…and then the personalized search results page, and the search history becomes coupled with all that because that’s how we personalize your search.

Gord: So, to sum up, it’s fair to say that really the search experience hasn’t changed that dramatically, it’s just cleaning up the user experience about whether you’re signed in or signed out and that’s been the primary change.

Marissa: That’s right.  Before you could be signed in and be using one of the three products or two of the three products but not all and, of course, because people like to experiment with a new product, they forget whether they signed up for personalized search.  Had they signed up for search history?  This just makes it cleaner.  If you’re signed in you’re using and/or have access to all three, if you’re signed out, you’re on the anonymous version of Google that doesn’t have personalization.

Gord: We can say that it cleans up the user experience because it makes it easier to know when you’re signed in or signed out, but having done the eye tracking studies, we know that the e-mail address shows in a location that’s not prominently scanned as part of the page.  Do the changes mean that more people are going to be looking at personalized search results, just because we’ve made that more of a default opt-in and we’ve moved the signals that you’re signed in a little bit out of the scanned area of the page?  Once people fixate on their task they are looking further down the page.  This should mean that a lot more people are looking at personalized search results than previously.

Marissa: Actually, I don’t think it will change the volume of personalized search all that much, not based on what we’ve seen in our logs and usage.  It makes it cleaner to understand whether or not you’re using it and I do think that over time, what it does is it pushes the envelope of search more, such that you expect personalized results by default.  And we think that the search engines of the future will become better for a lot of different reasons, but one of the reasons will be that we understand the user better.  And so when we think about how we can advance towards that search engine of the future that we’re building, part of that will be personalization.  I do think that when we look five years out, 10 years out, users will have an expectation of better results.  One of the reasons they have that expectation is that search engines will have become more personalized.  I think that in the future, working with a search engine that understands something about you will become the expectation.  But you’re right in that we believe that for users who are signed in, who find value in the personalized search results, over time those users will know they are signed in, that their search history is being kept track of, and that their search results are being personalized, and they won’t need to look at every single search task to see whether or not they are signed in, because that’s their expectation: they’re expecting personalized results.  So I do think we won’t see a drastic increase in the volume of personalized search right now, but that it will hopefully change the user’s disposition over time to become more comfortable that personalization is a benefit for them and it’s something they come to expect.

Gord: There are a number of aspects of that question that I’d like to get into, and leave behind the question of whether you’re signed in or signed out of personalized search, but I have one question before we move on.  We’ve been talking a lot about existing users.  The other change was that people creating a new Google account now get personalized search and search history by default.  The opt-out box is tucked into an area where most users would go right past it.  The placement of that opt-out box seems to indicate that Google would much rather have people opting into personalized search.

Marissa: I think that falls in with the philosophy that I just outlined. We believe that the search engines of the future will be personalized and that it will offer users better results.  And the way for us to get that benefit to our users is to try and have as many users signed up for personalized search as possible.  And so certainly we’re offering it to all of our users, and we’re going to be reasonably aggressive about getting them to try it out. Of course, we try to make sure they’re well-educated about how to turn it off if that’s what they prefer to do.

Gord: When this announcement came out I saw it as a pretty significant announcement for Google because it lays the foundation for the future.  I would think from Google’s perspective the challenge would be knowing what personalized search could be 5 to 10 years down the road,  what it would mean for the user experience and how do you start adding that incrementally to the user experience in the meantime?  From Google’s side, you have invested in algorithmic work to categorize content online. I would think the challenge would be just as significant to introduce the technology required to disambiguate intent and get to know more about users. You’re not going to hit that out of the park on the first pitch. That’s going to be a continuing trial and error process.  How do you maintain a fairly consistent user experience as you start to introduce personalization without negatively impacting that user experience?

Marissa: I will say that there are a lot of challenges there and a lot of this is something that’s going to be a pragmatic evolution for us.  You have to know that this is not a new development for us.  We’ve been working on personalized search now for almost 4 years; it goes back to the Kaltix acquisition.  So we’ve been working on it for awhile and our standards are really high.  We only want to offer personalized search if it offers a huge amount of end user benefit.  So we’re very comfortable and confident in the relevance seen from those technologies in order to offer them at all, let alone have them veered more towards the results, as we’re doing today.  We acquired a very talented team in March of 2003 from Kaltix.  It was a group of three Stanford Ph.D. students, headed up by a guy named Sep Kamvar, who is the fellow who co-signed the blog post with me.  Sep and his team did a lot of PageRank-style work at Stanford.  Interestingly enough, one of the papers they produced was on how to compute PageRank faster, and it caused a huge media roil around the web because everyone said there are these students at Stanford who created an even faster version of Google.  That’s because the press obviously doesn’t understand search engines and thinks that we actually do the PageRank calculation on the fly on each query, as opposed to pre-computing it.  Their advance was actually significant, but not because it helps you prepare an index faster, which is what the press thought was significant.  Interestingly enough, the reason they were interested in building a faster version of PageRank was because what they wanted to do was build a PageRank for each user.  So, based on seed data on which pages were important to you, and what pages you seemed to visit often, re-computing PageRank values based on that.  PageRank as an algorithm is very sensitive to the seed pages.
And so, what they were doing was that they had figured out a way to sort by host and, as a result of sorting by host, be able to compute PageRank in a much more computationally efficient way, making it feasible to compute a PageRank per user, or a vector of values that are different from the base PageRank.  The reason we were really interested in them was: one, because they really grasped and grokked all of Google’s technology really easily; and, two, because we really felt they were on the cutting edge of how personalization would be done on the web, and they were capable of looking at things like a searcher’s history, their past clicks, their past searches, the websites that matter to them, and ultimately building a vector of PageRank that can be used to enhance the search results.
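The per-user PageRank idea Marissa describes can be sketched in a few lines.  What follows is my own toy illustration (not Google’s or Kaltix’s actual code): instead of restarting the random walk uniformly across all pages, we concentrate the restart distribution on the user’s “seed” pages, so each user gets their own rank vector over the same link graph.

```python
def personalized_pagerank(links, seeds, damping=0.85, iters=50):
    """links: {page: [pages it links to]}; seeds: pages important to this user.

    Returns a per-user PageRank vector: the random surfer teleports back
    to the seed pages rather than to a uniform distribution.
    """
    pages = list(links)
    n = len(pages)
    # Restart (teleport) distribution concentrated on the seed pages.
    restart = {p: (1.0 / len(seeds) if p in seeds else 0.0) for p in pages}
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        # Each page gets its share of the teleport mass...
        new = {p: (1 - damping) * restart[p] for p in pages}
        for p, outs in links.items():
            if outs:
                # ...plus an equal share of rank from each page linking to it.
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:
                # Dangling page: redistribute its rank via the restart vector.
                for q in pages:
                    new[q] += damping * rank[p] * restart[q]
        rank = new
    return rank

# Toy graph: page "d" is the user's seed page.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = personalized_pagerank(graph, seeds=["d"])
```

Running the same graph with different seed sets yields different rank vectors, which is exactly why a fast recomputation method mattered for doing this per user.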

We acquired them in 2003 and we’ve worked for some time since to outfit our production system to be capable of doing that computation and holding a vector for each user in parallel to the base computation.  We’ve been very responsible in the way that we’ve rolled out personalized search on Labs, and we also did what we called Site Flavored Search on Labs, where you can put a search box on your page that is geared towards a set of interests that you’ve selected.  So if you have a site about baseball you can say you want to base it on three of your favorite baseball sites and have a search box with a PageRank that’s veered in that direction for baseball queries.

So, the Kaltix team has been really successful at integrating all these Google technologies and taking this piece of theoretical research and ultimately bringing it to life on the Web.  And as it grew stronger and stronger and our confidence around the Kaltix technology grew, we’ve been putting it forward more and more.  We started off on Labs through a sign-up process, then we transitioned it over to Google.com, and now we are in effect leaning towards a model where people who use Google.com and have a Google account get personalized search basically by default.  If you look at the historical reviews of the Kaltix work, it’s gotten pretty rave reviews.  The users that have noticed it and have been using it for a long time, like Danny (Sullivan), will say that they think it’s one of the biggest advances to relevance that they’ve seen in the past three years.

Gord: So when you have the Kaltix technology working over and above the base algorithm, obviously that’s only going to be as good as the signals you’re picking up on the individual.  And right now the signals are past sites they visited, perhaps what they put on their personalized homepage and sites that they’ve bookmarked.  But obviously the data that you can include to help create that on-the-fly individual index improves as you get more signals to watch.  In our previous interview you said one thing that was really interesting to you was looking at the context of the task you are engaged in, for example, if you’re composing an e-mail in Gmail.  So is contextual relevance another factor to look at?  Are those things that could potentially be rolled into this in the future?

Marissa: I think so.  I think that overall, we really feel that personalized search is something that holds a lot of promise, and we’re not exactly sure of the signals that will yield the best results.  We know that search history, your clicks and your searches together, provides a really rich set of signals, but it’s possible that some of the other data that Google gathers could also be useful.  It’s a matter of understanding how.  There’s an interesting trade-off around personalized search for the user, which is, as you point out, the more signals that you have and the more data you have about the user, the better it gets.  It’s a hard sell sometimes: we’re asking them to sign up for a service where we begin to collect data in the form of search history, yet they don’t see the benefits of that, at least in its fullest form, for some time.  It’s one of those things that we think about and struggle with.  And that’s one reason why we’re trying to move to a model where search history and personalized search are, in fact, more expected.  And I should also note that as we look at reading some of the signals across different services, we will obviously abide by the posted privacy policies.  So there are certain services where we’ve made it very clear we won’t cross-correlate data.  For example on Gmail, we’ve made it very clear that we won’t cross-correlate that data with searches without being very, very explicit with the end user.  You don’t have to worry about things like that.

Gord: One of the points of concern seems to be how smart will that algorithm get and do we lose control?  For example, when we’re exploring new territory online and we’re trying to find answers, we refine our results based on our search experience.  So, at the beginning, we use very generic terms that cast a very wide net, and then we narrow our search queries as we go.  Somebody said to me, “Well, if we become better searchers, does that decrease the need for personalization?”  Do we lose some control in that?  Do we lose the ability to say “No, I want to see everything, and I will decide how I narrow or filter that query.  I don’t want Google filtering that query on the front end”?

Marissa: I think it really depends on how forcefully we’re putting forth personalization.  And right now we might be very forceful in getting people to sign up for it, or at least more forceful than we were.  The actual implementation of personalized search is that as many as two pages of content that are personalized to you could be lifted onto the first page, and I believe they never displace the first result, in our current instantiation, because that’s a level of relevance that we feel comfortable with.  So right now, at least eight of the results on your first page will be generic, vanilla Google results for that query and only up to two of them will be results from the personalized algorithm.  We’re introducing it in a fairly limited form for exactly the reason that you point out.  And I think if we tend to veer towards a model where there are more results that are personalized, we would have ways of making it clearer: “Do you want to explore this topic as a novice or with the personalization in place?”  So the user will be able to toggle in a different filter form.  I think the other thing to remember is, even when personalization happens and lifts those two results onto the page, for most users it happens one out of every five times.  When you think about it, 20% of the queries are made much better by doing that, but for 80% of the queries, people are, in fact, exploring topics that are unknown to them, and we can tell from their search history that they haven’t searched for anything in this sphere before.  There’s no other search like it.  They’ve never clicked on any results that are related to this topic, and, as a result, we actually don’t change their result set at all because we know that they need the basic Google results.  The search history is valuable not only because it can help personalize the results but also because we can tell when not to.
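The blending rule Marissa describes, at most two personalized results promoted onto page one and the top organic result never displaced, can be sketched roughly as follows.  This is a hypothetical illustration of the stated behavior only, not Google’s actual implementation; all names and data here are mine.

```python
def blend_results(generic, personalized, max_promoted=2, page_size=10):
    """Build page one: promote up to max_promoted personalized results,
    but never displace the top generic result from position 1."""
    # Only promote personalized results not already in the generic set.
    promoted = [r for r in personalized if r not in generic][:max_promoted]
    page = [generic[0]]        # position 1 is always the top generic result
    page.extend(promoted)      # personalized results slot in below it
    for r in generic[1:]:      # fill the rest of the page with generic results
        if len(page) >= page_size:
            break
        page.append(r)
    return page

# Hypothetical data: ten generic results, three personalized candidates.
generic = [f"g{i}" for i in range(1, 11)]
personal = ["p1", "p2", "p3"]
page_one = blend_results(generic, personal)
```

With these inputs, position 1 stays generic, two personalized results are promoted, and the page is topped back up with generic results, matching the “at least eight generic, up to two personalized” split described above.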

Gord: There’s two parts to that: one is the intelligence of the algorithm to know when to push personalization and when not to push personalization, and two, as you said, right now this is only impacting one out of five searches where you may have a couple of new results being introduced into the top 10 as a result of personalization.  But that’s got to be a moving target.  As you become more confident in the technology and that it’s adding to the user experience, personalization will creep higher and higher up the fold and increasingly take over more of the search results page, right?

Marissa: Possibly.  I think that’s one of many things that could possibly happen, and I think that’s a pretty aggressive stance.  I look at our evolution and our foray into personalization, where we’re sitting here three or four years in, with some base technology that’s several years old already, and the way we have it interact with the user experience has still been very slight.  Mostly because we think that base Google is pretty good.  As it becomes more aggressive, certainly I would be pushing for the ability of the user to know that these results are, in fact, coming from my personalization and not the background, and, if I want to filter them out and get back to basics, that that would be possible.  One thing that we’ve struggled with is whether we should actually mark the results that are entering the page as a result of personalization, but because the team is currently and frequently doing experiments, we didn’t want to settle on a particular model or marker at this exact moment.

Gord: The challenge there is, as you roll more personal results into the results page, get feedback from some users that they want more control over what on the page is personalized and the degree of personalization, and introduce more filters or more sophisticated toggles, it complicates the user experience.  And as we know, that user experience needs to be very simple.  Is it a delicate balance of how much control you give the user versus how much you impact the 95% of the searches that are just a few seconds in duration and have to be really simple to do?

Marissa: There are two thoughts there.  One, even if we introduce filtering on the results page, it wouldn’t be any more complicated than what you had two weeks ago; we already had that filter.  Two, we put the user first, and people have varying opinions about whether the search results page is too complicated, but the same people who designed that user experience will be the people who will be tackling this for Google, so I think you can expect results of a similar style and direction.

Gord: In the last few weeks, Google has introduced some new functionality, related searches and refine search suggestions, that are appearing at the bottom of the page for a number of searches.  To me that would seem to be a prime area that could be impacted by the personalization opportunities that are coming.  As you make suggestions about other queries that users could be trying, you could use that personalization data to refine those suggestions.  Is that something you’re considering?  And how long before personalization starts impacting the ads that are being presented to you on a search results page?

Marissa: Refinement is an interesting but neophyte technology from our perspective.  We are only now beginning to develop some refining technologies that we believe in enough to use on the search results page.  A lot of people have been doing it for a lot longer.  When you look at the overall utility, probably 1 to 5% of people will click those query refinements on any given search, whereas most users, probably more than two thirds of users, end up using one of our results.  So in terms of utility and value delivered to the end user, personalizing the search results themselves is an order of magnitude more impactful than personalizing a query refinement.  So part of it is a question of it being such a new technology that we really haven’t looked at how we can use personalization to make it work more effectively.  But the other thing is, on a “bang for the buck” basis, personalizing the search results gets us a lot more.

And as to ads, I think there are some easy ways to personalize ads that we’ve known for some time, but we’ve chosen at this point to focus on personalizing the search results because we wanted to make sure to deliver the end-user value on that, because that’s our focus, before we look at personalizing ads.

Gord: So, no immediate plans for the personalization of ads?

Marissa: That’s right.

Gord: Thank you so much for your time Marissa.

I Have Seen the Future (Thanks to Regular Coffee)

First published February 22, 2007 in Mediapost’s Search Insider

Why do epiphanies always happen in the middle of the night? Why can’t they be more conveniently scheduled during regular business hours, say between 10 and 11 in the morning or right after afternoon break at around 3:30? But no, they usually occur somewhere between 2 and 4 in the morning. The fact that I was in a semiconscious state for this particular epiphany has everything to do with the fact that we ran out of decaf at the office yesterday, and I figured I could squeeze in just one cup of regular coffee without serious side effects. I was wrong.

Intel’s New Super Chip

This particular epiphany was catalyzed by a short news story about the new research processor chip that Intel is working on. It promises to be a performance breakthrough of breathtaking proportions and while it’s destined for supercomputers, the trickle-down effect to our everyday computing requirements is inevitable. Moore’s Law just keeps rolling along.

So, I asked myself, sometime between 2:45 and 2:49 a.m., with processing power set to take another leap forward, where would this new technology change our lives the most? The answer: mobile computing.

More Horsepower for Mobile

Some time ago I wrote a column about my frustrations with the limitations of mobile computing as it currently sits. But if you can pack enough horsepower into your average mobile device to facilitate things like speech recognition and more robust support for virtual displays, the mobile computing experience becomes much less frustrating. And when that happens, our entire interaction with the Web changes with it.

Right now, the majority of our access probably happens in two places: at work or at home. Mobile access is generally limited to checking e-mails, and even that is a truncated experience where we’re scanning subject lines to see if there are any fires we have to put out.

Godin’s Web4

Another thread that went into the weaving of this epiphany was a post I read on Seth Godin’s blog about a month ago, a post he called Web4. In it, Seth talked about the Web as our personal assistant that helps shuffle our schedule, introduces us to new interests and businesses, and generally makes our lives better in a number of helpful ways. For the Web4 that Godin envisions to happen, our computers have to know where we are, always be connected to the Internet, have a quick and easy way for us to communicate with it, and generally fit our lifestyle much better than the current boxes on our desks, whether they be at home or at work.

Living in the Wireless “Clouds”

Here’s another thread. Microsoft’s Live suite has one purpose: to put the functionality of Microsoft apps at your fingertips no matter where you are, no matter what your connection to online is. It “unhooks” you from the desktop and lets you move around and live your life with wireless freedom.

Computing and online access have to fit us, not the other way around. There are times during the day when we tend to stick in one spot for a while. When that happens, it makes sense for us to have a static access point and computing platform with some of the advantages that a little more elbow room could offer. Two places that come to mind immediately: our workplace, and when we sit down at home to be entertained. The rest of the time our computer should move with us.

The Home Box

At home, our computers could become the oft-predicted convergent box that provides our entertainment options, but does more than that. It plugs into our home-based activities and keeps them organized for us. It becomes a communications center, our security system, an energy usage monitor, a recipe book and shopping list. But most important, it’s our primary link to all our information and entertainment alternatives, allowing us to interact with those alternatives in ways never previously possible.

The Work Box

If we tend to stay in one place at work, it also makes sense to have a static access point to our corporate networks and the Internet. But the minute we get up from our seat, a mobile device would become the access point and computing platform of choice. All the data and functionality that defines us, the things we want at our fingertips, have to travel with us. When you get home, you plug the device into your home system, the required information is quickly transferred, and the necessary updates are done. When you get to work, you plug it into your corporate network and again the required work-related information is seamlessly transferred. The rest of the time, this little engineering marvel that knows where you are, what you like and what you have to do today would become your primary connection to the wired world.

Search as the Common Thread

Looking at this always-on, always-wired lifestyle, one can only imagine the dramatic uptick that would happen in all types of search activity. Once again, search becomes the common thread that runs through all of it. It’s what allows us at home to search through all our entertainment options and find precisely what we would like to watch or listen to right now. At work, it’s what allows us to sift through the mountain of corporate data that resides either on our internal network or in vast online data repositories to find the file we need right now. And when we’re out there, interacting with the real world, it’s our trusted shortcut to the relevant content on the Web.

I happen to think this vision of the future is pretty darn cool. Unfortunately, I’m already pushing the editorial boundaries of this column. There still seems to be a fair amount of regular coffee coursing through my veins, so check out my blog for some additional posts on the topic.