Five Years Later – An Answer to Lance’s Question (kind of)

It never ceases to amaze me how writing can take you down the most unexpected paths, if you let it. Over 5 years ago now, I wrote a post called “Chasing Digital Fluff – Who Cares about What’s Hot?” It was a rant, and it was aimed at marketers’ preoccupation with whatever the latest bright shiny object was. At the time, it was social. My point was that true loyalty needs stable habits to emerge. If you’re constantly chasing the latest thing, your audience will be in a constant state of churn. You’d be practicing “drive-by” marketing. If you want to find stability, target what your audience finds useful.

This post prompted my friend Lance Loveday to ask a very valid question: “What about entertainment?” Do we develop loyalty to things that are entertaining? So I started a series of posts on the Psychology of Entertainment. What types of things do we find entertaining? How do we react to stories, or humor, or violence? And how do audiences build around entertainment? As I explored the research on the topic, I came to the conclusion that entertainment is a by-product of several human needs – the need to bond socially, the need to be special, our appreciation for others whom we believe to be special, a quest for social status, and artificially stimulated tweaks to our oldest instincts – to survive and to procreate. In other words, after a long and exhausting journey, I concluded that entertainment lives in our phenotype, not our genotype. Entertainment serves no direct evolutionary purpose, but it lives in the shadows of many things that do.

So, what does this mean for stability of an audience for entertainment? Here, there is good news, and bad news. The good news is that the raw elements of entertainment haven’t really changed that much in the last several thousand years. We can still be entertained by a story that the ancient Romans might have told. Shakespeare still plays well to a modern audience. Dickens is my favorite author and it’s been 144 years since his last novel was published. We haven’t lost our evolved tastes for the basic building blocks of entertainment. But, on the bad news side, we do have a pretty fickle history when it comes to the platforms we use to consume our entertainment.

This then introduces a conundrum for the marketer. Typically, our marketing channels are linked to platforms, not content. And technology has made this an increasingly difficult challenge. While we may connect to, and develop a loyalty for, specific entertainment content, it’s hard for marketers to know which platform we may consume that content on. Take Dickens, for example. Even if you, the marketer, know there’s a high likelihood that I may enjoy something by Dickens in the next year, you won’t know whether I’ll read a book on my iPad, pick up an actual book or watch a movie on any one of several screens. I’m loyal to Dickens, but I’m agnostic as to which platform I use to connect with his work. As long as marketing is tied to entertainment channels, and not entertainment content, we are restricted to targeting our audience in an ad hoc and transitory manner. This is one reason why brands have rushed to use product placement and other types of embedded advertising, where the message is set free from the fickleness of platform delivery. If you happen to be a fan of American Idol, you’re going to see the Coke and Ford brands displayed prominently whether you watch on TV, your laptop, your tablet or your smartphone.

It’s interesting to reflect on the evolution of electronic media advertising and how it’s come full circle in this one regard. In the beginning, brands sponsored specific shows. Advertising messages were embedded in the content. Soon, however, networks, which controlled the only consumption choice available, realized it was far more profitable to decouple advertising from the content and run it in freestanding blocks during breaks in their programming. This decoupling was fine as long as there was no fragmentation in the channels available to consume the content, but obviously this is no longer the case. We now watch TV on our schedule, at our convenience, through the device of our choice. Content has been decoupled from the platform, leaving the owners of those platforms scrambling to evolve their revenue models.

So – we’re back to the beginning. If we want to stabilize our audience to allow for longer-term relationship building, what are our options? Obviously, entertainment offers some significant challenges in this regard, due mainly to the fragmentation of the platforms we use to consume that content. If we use usefulness as a measure, the main factors in determining loyalty are frequency and stability. If you provide a platform that becomes a habit, as Google has, then you’ll have a fairly stable audience. It won’t destabilize until there is a significant enough resetting of user expectations, forcing the audience to abandon habits (always very tough to do) and start searching for another useful tool that is a better match for the reset expectations. If that happens, you’ll be continually following your audience through multiple technology adoption curves. Still, it seems that usefulness offers a better shot at a stable audience than entertainment.

But there’s still one factor we haven’t explored – what part does social connection play? Obviously, this is a huge question that the revenue models of Facebook, Twitter, Snapchat and others will depend on. So, with entertainment and usefulness explored ad nauseam, in the next series of posts I’ll start tracking down the Psychology of Social Connection.

The Psychology of Usefulness: How We Made Google a Habit

In the last two posts, I looked first at the difference between autotelic and exotelic activities, then how our brain judges the promise of usefulness. In today’s post, I want to return to the original question: How does this impact user loyalty? As we use more and more apps and destinations that rely on advertising for their revenues, this question becomes more critical for those apps and destinations.

The obvious example here is search engines, the original functional destination. Google is the king of search, but also the company most reliant on those ads. For Google, user loyalty is the difference between life and death. In 2012, Google made a shade over 50 billion dollars (give or take a few hundred million). Of this, over $43 billion came from advertising revenue (about 86%), and of that revenue, 62% came from Google’s own search destinations. That’s a big chunk of revenue to come from one place, so user loyalty is something that Google is paying pretty close attention to.
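Taking the column’s round numbers at face value (they’re approximations, not audited figures), the arithmetic behind that claim is easy to run – a quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope math using the round figures quoted above (approximate)
total_revenue = 50e9      # Google's 2012 revenue, give or take
ad_revenue = 43e9         # the portion that came from advertising
own_sites_share = 0.62    # share of ad revenue from Google's own search destinations

print(f"Advertising share of total revenue: {ad_revenue / total_revenue:.0%}")
print(f"Revenue through Google's own search pages: ${ad_revenue * own_sites_share / 1e9:.1f} billion")
# => roughly 86%, and about $26-27 billion - more than half of everything Google earned that year
```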

Now, let’s look at how durable Google’s hold on our brains really is. Let’s revisit the evaluation cascade that happens in our brains each time we contemplate a task:

  • If very familiar and highly stable, we do it by habit
  • If fairly familiar but less stable, we do it by a memorized procedure with some conscious guidance
  • If new and unfamiliar, we forage for alternatives, balancing the effort required against the expected utility
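To make that cascade concrete, here’s a minimal sketch of how you might model it in code. The scores, thresholds and names (familiarity, stability, and so on) are my own illustrative inventions, not anything drawn from the research:

```python
def choose_strategy(familiarity, stability):
    """Toy model of the brain's evaluation cascade for a contemplated task.

    familiarity and stability are gut-feel scores from 0.0 to 1.0; the cutoffs
    below are arbitrary, chosen only to illustrate the three branches.
    """
    if familiarity > 0.8 and stability > 0.8:
        return "habit"        # plays out with almost no conscious involvement
    if familiarity > 0.5:
        return "procedure"    # memorized steps with some conscious guidance
    return "forage"           # weigh effort required against expected utility

tasks = [("daily Google search", 0.95, 0.9),
         ("booking a flight on a new site", 0.6, 0.5),
         ("trying an unfamiliar research tool", 0.2, 0.3)]
for name, fam, stab in tasks:
    print(f"{name}: {choose_strategy(fam, stab)}")
```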

Not surprisingly, the more our brain has to be involved in judging usefulness, the less loyal we are. If you can become a habit, you are rewarded with a fairly high degree of loyalty. Luckily for Google, they fall into this category – for now. Let’s look a little more at how Google became a habit and what might have to happen for us to break this habit.

Habits depend on three things: high repetition, a stable execution environment and consistently acceptable outcomes. Google was fortunate enough to have all three factors present.

First – repetition. How many times a day do you use a search engine? For me, it’s probably somewhere between 10 and 20 times per day. And usage of search is increasing. We search more now than we did 5 years ago. If you do something that often throughout the day, it wouldn’t make much sense to force your brain to actively think its way through that task each and every time – especially if the steps required to complete that task don’t really change that much. So the brain, which is always looking for ways to save energy, records a “habit script” (or, to use the terminology of Ann Graybiel, “chunks”) that can play out without a lot of guidance. Searching definitely meets the requirements for the first step of forming a habit.

Second – stability. How many search engines do you use? If you’re like the majority of North Americans, you probably use Google for almost all your searches. This is what we would call a stable environment. You know where to go, you know how to use it and you know how to use the output. There is a reason why Google is very cautious about changing its layout and only does so after a lot of testing. What you expect and what you get shouldn’t be too far apart. If they are, that’s disruption, and disruption breaks habits. This is the last thing that Google wants.

Third – acceptable outcomes. So, if stability preserves habits, why would Google change anything? Why doesn’t Google’s search experience look exactly like it did in 1998 (fun fact – if you search Google for “Google in 1998,” it will show you exactly what the results page looked like)? That would truly be stable, which should keep those all-important habits glued in place. Well, because expectations change. Here’s the thing about expected utility, which I talked about in the last post: it doesn’t go away when we form a habit, it just moves downstream in the process. When we do a task for the first time, or in an unstable environment, expected utility precedes our choice of alternatives. When a “habit script” or “chunk” plays out, we still need to do a quick assessment of whether we got what we expected. Habits only stay in place if the “habit script” passes this test. If we searched for “Las Vegas hotels” and Google returned results for Russian borscht, that habit wouldn’t last very long. So Google constantly has to maintain this delicate balance – meeting expectations without disrupting the user’s experience too much. And expectations are constantly changing.
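One hedged way to picture expected utility “moving downstream” is as a post-run check on the habit script: the script fires first, and only afterward does the brain ask whether the outcome met expectations. The sketch below is purely illustrative – the names and the two-strikes rule are my own assumptions, not anything Graybiel (or Google) has described:

```python
def habit_survives(outcomes, meets_expectations, max_misses=2):
    """Replay a habit script's outcomes and report whether the habit stays intact.

    outcomes: results the script produced on successive runs
    meets_expectations: the quick downstream utility check applied to each outcome
    max_misses: consecutive misses tolerated before the habit breaks (arbitrary)
    """
    misses = 0
    for outcome in outcomes:
        if meets_expectations(outcome):   # quick post-run assessment, not a full re-evaluation
            misses = 0
        else:
            misses += 1
            if misses >= max_misses:
                return False              # habit broken; back to consciously foraging
    return True

# A search habit survives relevant results, but not repeated nonsense
results = ["Vegas hotel listings", "Vegas hotel listings",
           "Russian borscht recipes", "Russian borscht recipes"]
print(habit_survives(results, lambda r: "hotel" in r))   # False - expectations missed twice in a row
```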

[Chart: Internet adoption over time]

When Google was introduced in 1998, it created a perfect storm of habit-building potential. The introduction coincided with a dramatic uptick in adoption of the internet and usage of web search in particular. In 1998, 36% of American adults were using the Internet (according to Pew). In 2000, that had climbed to 46%, and by 2001 it was up to 59%. More of us were going online, and if we were going online, we were also searching. The average searches per day on Google exploded from under 10,000 in 1998 to 60 million in 2000 and 1.2 billion in 2007. Obviously, we were searching – a lot – so the repetition prerequisite was well in hand.

Now – stability. In the early days of the Internet, there was little stability in our search patterns. We tended to bounce back and forth between a number of different search engines. In fact, the search engines themselves encouraged this by providing “Try your search on…” links for their competitors (an example from Google’s original page is shown below). Because our searches were spread across a number of different engines, there was no environmental stability, so no chance for the creation of a true habit. The best our brains could do at this point was store a procedure that required a fair amount of conscious oversight (choosing engines and evaluating outcomes). Stability was further eroded by the fact that some engines were better at some types of searches than others. Some, like Infoseek, were better for timely searches due to their fast indexing cycles and large indexes. Some, like Yahoo, were better at canonical searches that benefited from a hierarchical directory approach. When searching in the pre-Google days, we tended to match our choice of engine to the search we were doing. This required a fairly significant degree of rational neural processing on our part, precluding the formation of a habit.

[Image: the bottom of Google’s 1998 results page, with “Try your search on…” links to competing engines]

But Google’s use of PageRank changed the search ballgame dramatically. Their new way of determining relevancy rankings was consistently better, for all types of searches, than anything their competitors offered. As we started to use Google for more types of searches because of their superior results, we stopped using their competitors. This finally created the stability required for habit formation.
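PageRank itself is well documented, so a toy version is easy to sketch. What follows is the textbook power-iteration formulation, not Google’s production system, and the three-page web is my own made-up example:

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-9, max_iter=100):
    """Textbook power-iteration PageRank over a small link graph (adjacency matrix)."""
    n = len(adj)
    M = np.zeros((n, n))
    out_degree = adj.sum(axis=1)
    for j in range(n):
        if out_degree[j] == 0:
            M[:, j] = 1.0 / n                    # dangling page: spread its vote evenly
        else:
            M[:, j] = adj[j] / out_degree[j]     # each page splits its vote among its outlinks
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_rank = (1 - damping) / n + damping * (M @ rank)
        converged = np.abs(new_rank - rank).sum() < tol
        rank = new_rank
        if converged:
            break
    return rank

# Toy web: page 0 links to pages 1 and 2, page 1 links to page 2, page 2 links back to page 0
links = np.array([[0., 1., 1.],
                  [0., 0., 1.],
                  [1., 0., 0.]])
print(pagerank(links))   # page 2 collects the most "votes" in this tiny graph
```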

Finally, acceptable outcomes. As mentioned above, Google came out of the gate with outcomes that generally exceeded our expectations, which had been set by the spotty results of their competitors. Now, all Google had to do to keep the newly formed habit in place was to continue to meet the user’s expectations of relevancy. Thanks to the truly disruptive leap Google took with the introduction of PageRank, they had a huge advantage when it came to search results quality. Google has also done an admirable job of maintaining that quality over the past 15 years. While the gap has narrowed significantly (today, one could argue that Bing comes close on many searches and may even have a slight advantage on certain types of searches), Google has never seriously undershot the user’s expectations when it comes to providing relevant search results. Therefore, Google has never given us a reason to break our habits. This has resulted in a market share that has hovered over 60% for several years now.

When it comes to online loyalty, it’s hard to beat Google’s death grip on search traffic. But that grip may start to loosen in the near future. In my next post, I’ll look at the conditions that can break habitual loyalty, again using Google as an example. I’ll also look at how our brains decide to accept or reject new useful technologies.

Google Holds the Right Cards for a Horizontal Market

First published January 9, 2014 in Mediapost’s Search Insider

Functionality builds up, then across. That was the principle of emerging markets that I talked about in last week’s column. Up – then across – breaking down silos into a more open, competitive and transparent market. I’ll come back to this in a moment.

I also talked about how Google+ might be defining a new way of thinking about social networking, one free of dependence on destinations. It could create a social lens through which all our online activity passes, adding functionality and enriching information.

Finally, this week, I read that Google is pushing hard to extend Android as the default operating system in the Open Automotive Alliance – turning cars into really big mobile devices. This builds on Android’s dominance in the smartphone market (with an 82% market share).

See a theme here?

For years, I’ve been talking about the day when search transitions from being a destination to a utility, powering apps that provide very specific functionality that far outstrips anything you could do on a “one size fits all” search portal. This is a good news/bad news scenario for Google, the obvious choice to provide this search grid. But in doing so, they lose their sole right to monetize search traffic, a serious challenge to their primary income source. However, if you piggyback that search functionality onto the de facto operating system that powers all those apps, and then add a highly functional social graph, you have all the makings of a foundation that will support the “horizontalization” of the mobile connected market. Put this in place, and revenue opportunities will begin falling into your lap.

The writing is plainly on the wall here. The future is all about mobile connections. It is the foundation of the Web of Things, wearable technology, mobile commerce – anything and everything we see coming down the pipe.  The stakes are massive. And, as markets turn horizontal in the inevitable maturation phase to come, Google seems to be well on their way to creating the required foundations for that market.

Let’s spend a little time looking at how powerful this position might be for Google. Microsoft is still coasting on their success in creating a foundation for the desktop, 30 years later.  The fact that they still exist at all is testament to the power of Windows. But the desktop expansion that happened was reliant on just one device – the PC. And, the adoption curve for the PC took two decades to materialize, due to two things: the prerequisite of a fairly hefty investment in hardware and a relatively steep learning curve. The mobile adoption curve, already the fastest in history, has no such hurdles to clear. Relative entry price points are a fraction of what was required for PCs. Also, the learning curve is minimal. Mobile connectivity will leave the adoption curve of PCs in the dust.

In addition, an explosion of connected devices will propel the spread of mobile connectivity. This is not just about smart phones. Two of the biggest disruptive waves in the next 10 years will be wearable technologies and the Web of Things. Both of these will rely on the same foundations, an open and standardized operating system and the ability to access and share data. At the user interface level, the enhancements of powerful search technologies and social-graph enabled filters will significantly improve the functionality of these devices as they interface with the “cloud.”

In the hand that will have to inevitably be played, it seems that Google is currently holding all the right cards.

Revisiting Entertainment vs Usefulness

Some time ago, I did an extensive series of posts on the psychology of entertainment. My original goal, however, was to compare entertainment and usefulness in terms of how effective each is at engendering long-term loyalty. How do our brains process both? And, to return to my original intent in that first post almost 4 years ago, how does this impact digital trends and their staying power?

My goal is to find out why some types of entertainment have more staying power than others. And then, once we discover the psychological underpinnings of entertainment, let’s look at how that applies to some of the digital trends I disparaged: things like social networks, micro-blogging, mobile apps and online video. What role does entertainment play in online loyalty? How does it overlap with usefulness? How can digital entertainment fads survive the novelty curse and jump the chasm to become mainstream trends with legs?

In the previous set of posts, I explored the psychology of entertainment extensively, ending up with a discussion of the evolutionary purpose of entertainment. My conclusion was that entertainment lived more in the phenotype than the genotype. To save you going back to that post, I’ll quickly summarize here: the genotype refers to traits actually encoded in our genes through evolution – the hardwired blueprint of our DNA. The phenotype is the “shadow” of these genes – behaviors caused by our genetic blueprints. Genotypes are directly honed by evolution for adaptability and gene survival. Phenotypes are by-products of this process and may confer no evolutionary advantage. Our taste for high-fat foods lives in the genotype – the explosion of obesity in our society lives in the phenotype.

This brings us to the difference between entertainment and usefulness – usefulness relies on mechanisms that predominantly live in the genotype. In the most general terms, it’s the stuff we have to do to get through the day. And to understand how we approach these things on our to-do list, it’s important to understand the difference between autotelic and exotelic activities.

Autotelic activities are the things we do for the sheer pleasure of it. The activity is its own reward. The word autotelic comes from the Greek for “self” + “goal” – “having a purpose in and not apart from itself.” We look forward to doing autotelic things. All things that we find entertaining are autotelic by nature.

Exotelic activities are simply a necessary means to an end. They have no value in and of themselves. They’re simply tasks – stuff on our to-do list.

The brain, when approaching these two types of activities, treats them very differently. Autotelic activities fire our reward center – the nucleus accumbens. They come with a corresponding hit of dopamine, building repetitive patterns. We look forward to them because of the anticipation of the reward. They typically also engage the medial prefrontal cortex, which orchestrates complex cognitive behaviors and helps define our sense of self. When we engage in an autotelic activity, there’s a lot happening in our skulls.

Exotelic activities tend to flip the brain into its energy-saving mode. Because there is little or no neurological reward in these types of activities (other than a sense of relief once they’re done), they tend to rely on the brain’s ability to store and retrieve procedures. With enough repetition, they often become habits, skipping the brain’s rational loop altogether.

In the next post, we’ll look at how the brain tends to process exotelic activities, as it provides some clues about the loyalty building abilities of useful sites or tools. We’ll also look at what happens when something is both exotelic and autotelic.

Google’s Etymological Dream Come True

First published November 14, 2013 in Mediapost’s Search Insider

Yesterday’s Search Insider column caught my eye. Aaron Goldman explained how search ads were the original native ads. He also explained why native ads work. This is backed up by research we did about 5 years ago, showing how contextual relevance substantially boosted ad effectiveness (but not, ironically, ad awareness). I did a fairly long blog post on the concept of “aligned” intent, if you really want to roll up your sleeves and dive in.

The funny thing was, I was struck by the use of the word “native” itself. For some reason, the use of the term in today’s more politically charged world struck a note of immediate uneasiness. On a gut level, it reminded me of the insensitivity of Daniel Snyder, owner of the Washington Redskins. There’s nothing immoral about the term itself, but it is currently tied to an emotionally charged issue.

As I often do, I decided to check the etymological roots of “native” and immediately noticed something different on the Google search page. There, at the top, was an etymological timeline, showing that the root of “native” is the Latin “nasci,” meaning born. So it was entirely appropriate, given Aaron’s assertion that “native” advertising was “born” on the search page. But it was at the bottom, where a downwards arrow promised “more,” that I hit etymological pay dirt.

Google showed me the typical dictionary entries, but at the bottom, it gave me a chart from its nGram viewer showing usage of “native” in books and publications over the past 200 years. Interestingly, the term has been in slow decline over those 200 years, with a bit of a resurgence over the last 25. When I clicked on the graph, it broke the data down further, showing that small-n “native” has been used less and less, but big-N “Native” took a jump in popularity in the mid-’80s, accounting for the mild bump.

Google’s nGram viewer isn’t new, but its capabilities have recently been beefed up, providing a fascinating visual tool for us “wordies” out there. With it, you can plot the popularity of words over 500 years in a body of over 5 million books. For example, a blog post at Informationisbeautiful.net shows several fascinating word-trend charts in the English corpus, including drug trends (cocaine was a popular topic in Victorian times, slowed down in the ’20s and exploded again in the ’80s), the battle of religion vs. science (the popularity crossover was in 1930, but the trend has reversed and we’re heading for another one) and interest in sex vs. marriage (sex was barely mentioned prior to 1800, stayed relatively constant until 1910 and grew dramatically in the ’70s, but lately it’s dropped off a cliff; marriage has had a spikier history but has remained fairly constant over the last 200 years).
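If you want to pull the same kind of trend data yourself, the viewer’s charts are driven by a simple shareable URL. Here’s a hedged sketch of building one – the parameter names are the ones that show up in the viewer’s own share links, and I’ve left out the optional corpus parameter, whose codes have changed over time:

```python
from urllib.parse import urlencode

# Rough sketch of an Ngram Viewer query URL. Parameter names are taken from the
# viewer's shareable links, so treat them as observed rather than documented.
params = {
    "content": "radio,television,internet",   # comma-separated terms to compare
    "year_start": 1900,
    "year_end": 2008,
    "smoothing": 3,                           # rolling average to soften year-to-year noise
}
print("https://books.google.com/ngrams/graph?" + urlencode(params))
```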

I tried a few charts of my own. Since 1885, “evolution” has beaten “creation,” though it took a noticeable drop during the ’30s; since 1960, both have been on the rise. In 1980, Apple got off to an initial head start, but Microsoft passed it in 1992, never to look back (although it’s had a precipitous decline since 2000). Perhaps the most interesting chart compares “radio,” “television” and “internet” since 1900. Radio started growing in the ’20s and hit its popularity peak around 1945, but the crossover with television would take another 40 years (about 1982). Television would only enjoy a brief period of dominance. In 1990, the meteoric rise of the Internet started, and it surpassed both radio and television around 1997.

[Chart: nGram usage of “radio,” “television” and “internet” since 1900]

My final chart was to see how Google fared in its own tool. Not surprisingly, Google has dominated the search space since 2001, and done so quite handily. Currently, it’s 6 times more popular than its rivals, Yahoo and Bing. One caveat here, though – Bing’s popularity started to climb in 1830, so I think they’re talking about either the cherry, Chinese people named Bing or a German company that used to make kitchen utensils. Either that, or Microsoft has had their search engine in development a lot longer than anyone guessed.

[Chart: nGram usage of “Google,” “Yahoo” and “Bing”]

Yahoo Under the Mayer Regime

First published November 7, 2013 in Mediapost’s Search Insider

OK, it has a new logo. The mail interface has been redesigned. But according to a recent New York Times piece, Yahoo still doesn’t know what it wants to be when it grows up. Marissa Mayer seems to be busy, with a robust hiring spree, eight new acquisitions, 15 new product updates, a nice 20% bump in traffic and a stock price that’s been consistently heading north. But all this activity hasn’t seemed to coalesce into a discernible strategy — from the outside, anyway.

It’s probably because Mayer is busy rebuilding the guts of the organization. Cultures are notoriously difficult things to change. In any organization where a major change in direction is required, you will have to deal with several layers of inertia — and, even more challenging, momentum heading the wrong way. In the blog post, design guru Don Norman agrees: “The major changes she has made are not what the logo looks like or a new Yahoo Mail. The major changes are what the company looks like internally. She’s revitalizing the inside of the company, and what everyone sees on the surface are just little ripples.”

To be fair, Yahoo has been an organization lacking a clear direction for a long, long time. I remember speaking at the Sunnyvale campus years ago, when Yahoo was still being remade into a media property, under the direction of Terry Semel. There were entire departments (including the core search team) that felt cut adrift. Since then, the strategic direction of Yahoo has resembled that of a Roomba vacuum, plowing forward until it senses an obstacle, then heading off in an entirely new direction.

What was interesting about the recent Times piece was the marked contrast to the rumors and kvetching coming from Mayer’s old digs: Google. There, the big news seems to be the ultra-secret party barge anchored in San Francisco Bay. And a Quora thread entitled “What’s the Worst Part about Working at Google?” paints a picture of a frat house that has yet to wake up and realize the party’s over:

  • Overqualified people working at menial jobs.
  • Frustration at not being able to contribute anything meaningful in an increasingly bureaucratic environment.
  • Engineers with egos outstripping their skills.
  • Bottlenecks preventing promotion.
  • A permanent “party” atmosphere that makes it difficult to get any actual work done.

But perhaps the most telling comment came from someone who spent seven years at Google, who said that all the meaningful innovation comes from an exceedingly small group, headed by Larry and Sergey. The rest of the Googlers are just along for the ride:

Here’s something to ponder.  The only meaningful organic products to come out of Google were Search and then AdSense.  (Android — awesome, purchased.  YouTube — awesome, purchased, etc. Larry and/or Sergey were obviously intimately involved in both.  Maps – awesome, purchased. Google Plus is a flop for all non-Googlers globally, Chrome browser is great, but no direct monetization (indirectly protects search), the world has passed the Chrome OS by… etc. ) Fast-forward 14 years, and the next big thing from Google, I bet, will be Google Glass, and guess who PMd it.  Sergey Brin.  Tiny number of wave creators, huge number of surfers.

So we have Google, still surfing a wave that started 15 years ago, and Yahoo struggling to get in position to catch the next one. For both, the challenge is a fundamental one: How do you effect change in a massive organization and get thousands of employees contributing in a meaningful way? Ironically, it may turn out that Marissa Mayer has a significant advantage here. If you’re bright, ambitious and looking to do something meaningful with your career, what would be more appealing: trying to shoehorn your way into an already overcrowded house party, or the opportunity to roll up your sleeves and resurrect one of the Web’s great brands?

What is this “Online” You Speak Of?

First published September 12, 2013 in Mediapost’s Search Insider.

I was in an airport yesterday, and I was eavesdropping. That’s what I do in airports. It’s much more entertaining than watching the monitors. In this particular case, I was listening to a conversation between a well-dressed elderly gentleman, probably in his late ’80s, and what appeared to be his son. They were waiting for pre-boarding. The son was making that awkward small talk — you know, the conversation you have when you don’t really know your parent well enough anymore to be able to talk about what they’re really interested in, but you still feel the need to fill the silence. In this case, the son was talking to his dad about a magazine: “I used to get a copy every time I flew to London,” he said. “But they don’t publish it anymore. It’s all done online.”

The father, who had the look and appearance of a retired university professor, looked at his son quizzically for a few minutes. It was as if the son had suddenly switched from English to Swahili midstream in his conversation.

“What’s ‘online’?”

“Online — on the Internet. It’s published electronically. There’s no print version anymore?”

The father grappled with the impact of this statement, then shook his head slowly and sadly. “That’s very sad. I suppose the mail service’s days are numbered too.”

The son replied, “Oh yes, I’m sure. No one mails things anymore.”

“But what will I do? I still buy things from catalogs.” It was as if the entire weight of the last two-and-a-half decades had suddenly settled on the frail gentleman’s shoulders.

At first, I couldn’t believe that anyone still alive didn’t know what “online” was. Isn’t that pretty much equivalent to oxygen or gravity now? Hasn’t it reached the point of ubiquity at which we all just take it for granted, no longer needing to think about it?

But then, because in the big countdown of life, I’m also on the downhill slope, closer to the end than to the beginning, I started thinking about how wrenching technological change has become. If you don’t keep up, the world you know is swept away, to be replaced with a world where your mail carrier’s days are numbered, the catalogs you depend on are within a few years of disappearing, and everything seems to be headed for the mysterious destination known as “online.”

As luck would have it, my seat on the airplane was close enough to this gentleman’s that I was able to continue my eavesdropping (if you see me at an airport, I advise you to move well out of earshot). You might have thought, as I first did, that he was in danger of losing his marbles. I assure you, nothing could be further from the truth. For over four hours, he carried on intelligent, informed conversations on multiple topics, made some amazing sketches in pencil, and generally showed every sign of being the man I hope to be when I’m approaching 90. This was not a man who had lost touch with reality; this was a man who is continually surprised (and, I would assume, somewhat frustrated) to find that reality seems to be a moving target.

We, the innovatively smug, may currently feel secure in our own technophilia, but our ability to keep up with the times may slip a little in the coming years. It’s human to feel secure with the world we grew up and functioned in. Our evolutionary environment was substantially more stable than the one we know today. As we step back from the hectic pace, don’t be surprised if we lose a little ground. Someday, when our children speak to us of the realities of their world, don’t be surprised if some of the terms they use sound a little foreign to our ears.

Google Glass and the Sixth Dimension of Diffusion

First published August 29, 2013 in Mediapost’s Search Insider

Tech stock analyst and blogger Henry Blodget has declared Google Glass dead on arrival. I’m not going to spend any time talking about whether or not I agree with Mr. Blodget (for the record, I do – Google Glass isn’t an adoptable product as it sits – and I don’t – wearable technology is the next great paradigm shifter), but rather dig into the reason he feels Google Glass is stillborn.

They make you look stupid.

The input for Google Glass is your voice, which means you have to walk around saying things like, “Glass, take a video” or “Glass, what is the temperature?” The fact is, to use Google Glass, you either have to accept the fact that you’ll look like a moron or the biggest jerk in the world. Either way, the vast majority of us aren’t ready to step into that particular spotlight.

Last week, I talked about Everett Rogers’ Diffusion of Innovations and shared five variables that determine the rate of adoption. There is actually an additional factor that Rogers also mentioned: “the status-conferring aspects of innovations emerged as the sixth dimension predicting rate of adoption.”

If you look at Rogers’ diffusion curve, you’ll find the segmentation of the adoption population is as follows: Innovators (2.5% of the population), Early Adopters (13.5%), Early Majority (34%), Late Majority (34%) and Laggards (16%). But there’s another breed that probably hides out somewhere between Innovators and Early Adopters. I call them the PAs (for Pompous Asses). They love gadgets, they love spending way too much for gadgets, and they love being seen in public sporting gadgets that scream “PA.” Previously, they were the ones seen guffawing loudly into Bluetooth headsets while sitting next to you on an airplane, carrying on their conversation long after the flight attendant told them to wrap it up. Today, they’d be the ones wearing Google Glass.


This sixth dimension is critical to consider when the balance between the other five is still a little out of whack. Essentially, the first dimension, Relative Advantage, has to overcome the friction of #2, Compatibility, and #3, Complexity (#4, Trialability, and #5, Observability, are more factors of the actual mechanics of diffusion, rather than individual decision criteria). If the advantage of an innovation does not outweigh its complexity or compatibility friction, it will probably die somewhere on the far left slopes of Rogers’ bell curve. The deciding factor will be the Sixth Dimension.

This is the territory that Google Glass currently finds itself in. While I have no doubt that the advantages of wearable technology (as determined by the user) will eventually far outweigh the corresponding “friction” of adoption, we’re not there yet. And so Google Glass depends on the Sixth Dimension. Does adoption make you look innovative, securely balanced on the leading edge? Or does it make you look like a dork? Does it confer social status or strip it away? After the initial buzz about Glass, social opinion seems to be falling into the second camp.
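One admittedly crude way to picture that trade-off is as a net score: relative advantage minus the friction of compatibility and complexity, with the status effect of the sixth dimension tipping marginal cases one way or the other. The weights, cutoffs and numbers below are invented for illustration and are not part of Rogers’ model:

```python
def adoption_outlook(relative_advantage, compatibility_friction, complexity_friction, status_effect):
    """Crude toy score for whether an innovation clears the left slope of the curve.

    All inputs are 0-10 gut-feel ratings (status_effect may be negative); the
    formula and cutoffs are invented for illustration, not taken from Rogers.
    """
    net = relative_advantage - (compatibility_friction + complexity_friction) + status_effect
    if net > 2:
        return "likely to diffuse"
    if net > -2:
        return "hangs in the balance - the sixth dimension decides"
    return "probably dies on the far left slope of the bell curve"

# Google Glass, circa 2013, with status working against it (numbers are pure guesswork)
print(adoption_outlook(relative_advantage=6, compatibility_friction=4,
                       complexity_friction=3, status_effect=-2))
```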

This brings us to another important factor to consider when trying to cash in on a social adoption wave: timing. Google is falling into the classic Microsoft trap of playing its hand too soon through beta release. New is cool among the early adopter set, which makes timing critical. If you can get strategic distribution and build up required critical mass fast enough, you can lessen the “pariah” factor. It’s one thing to be among a select clique of technological PAs, but you don’t want to be the only idiot in the room. Right now, with only 8,000 pairs distributed, if you’re wearing a pair, you’re probably the one that everyone else is whispering about.

Of course, you might not be able to hear them over the sound of your own voice, as you stand in front of the mirror and ask Google Glass to “take a picture.”


Why I – And Mark Zuckerberg – are Bullish on Google Glass

First published February 28, 2013 in Mediapost’s Search Insider

Call it a Tipping Point. Call it an Inflection Point. Call it Epochal (whatever that means). The gist is, things are going to change — and they’re going to change in a big, big way!

First, with due deference to the brilliant Kevin Kelly, let’s look at how technology moves. In his book “What Technology Wants,” Kelly shows that technology is not dependent on a single invention or inventor. Rather, it’s the sum of multiple, incremental discoveries that move technology to a point where it can breach any resistance in its way and move into a new era of possibility. So, even if Edison had never lived, we’d still have electric lights in our homes. If he weren’t there, somebody else would have discovered it (or, more correctly, perfected it). The momentum of technology would not have been denied.

Several recent developments indicate that we’re on the cusp of another technological wave of advancement. These developments have little to do with online technologies or capabilities. They’re centered on how humans and hardware connect — and it’s impossible to overstate their importance.

The Bottleneck of Our Brains

Over the past two decades, there has been a massive build-up of online capabilities. In this case, what technology has wanted is the digitization of all information. That was Step One. Step Two is to render all that information functional. Step Three will be to make all the functionality personalized. And we’re progressing quite nicely down that path, thank you very much. The rapidly expanding capabilities of online far surpass what we are able to assimilate and use at any one time. All this functionality is still fragmented and is in the process of being developed (one of the reasons I think Facebook is in danger of becoming irrelevant) but it’s there. It’s just a pain in the butt for us to utilize it.

The problem is one of cognition. The brain has two ways to process information, one fast and one slow. The slow way (using our conscious parts of the brain) is tremendously flexible but inefficient. This is the system we’ve largely used to connect online. Everything has to be processed in the form of text, both in terms of output and input, generally through a keyboard and a screen display. It’s the easiest way for us to connect with information, but it’s far from the most efficient way.

The second way is much, much faster. It’s the subconscious processing of our environment that we do every day. It’s what causes us to duck when a ball is thrown at our head, jump out of the way of an oncoming bus, fiercely protect our children and judge the trustworthiness of a complete stranger. If our brains were icebergs, this would be the 90% hidden beneath the water. But we’ve been unable to access most of this inherent efficiency and apply it to our online interactions — until now.

The Importance of Siri and Glass

Say what you want about Mark Zuckerberg, he’s damned smart. That’s why he knew immediately that Google Glass is important.

I don’t know if Google Glass will be a home run for Google. I also don’t know if Siri will ever pay back Apple’s investment in it. But I do know that 30 years from now, they’ll both be considered important milestones. And they’ll be important because they were representative of a sea change in how we connect with information. Both have the potential to unlock the efficiency of the subconscious brain. Siri does it by utilizing our inherent communication abilities and breaking the inefficient link that requires us not only to process our thoughts as language, but also to laboriously translate them into keystrokes. In neural terms, this is one of the most inefficient paths imaginable.

But if Siri teases us with a potentially more efficient path, Google Glass introduces a new, mind-blowing scenario of what might be possible. To parse environment cues and stream information directly into our visual cortex in real time, creating a direct link with all that pent-up functionality that lives “in the cloud,” wipes away most of the inefficiency of our current connection paradigm.

Don’t think of the current implementation that Google is publicizing. Think beyond that to a much more elegant link between the vast capabilities of a digitized world and our own inner consciousness. Whatever Glass and Siri (and their competitors) eventually evolve into in the next decade or so, they will be far beyond what we’re considering today.

With the humanization of these interfaces, a potentially dark side effect will take place. These interfaces will become hardwired into our behavior strategies. Now, because our online interactions are largely processed at a conscious level, the brain tends to maintain maximum flexibility regarding the routines it uses. But as we access subconscious levels of processing with new interface opportunities, the brain will embed these at a similarly subconscious level. They will become habitual, playing out without conscious intervention. It’s the only way the brain can maximize its efficiency. When this happens, we will become dependent on these technological interfaces. It’s the price we’ll pay for the increased efficiency.

Pursuing the Unlaunched Search

First published November 29, 2012 in Mediapost’s Search Insider

Google’s doing an experiment. Eight times a day, at random, 150 people get an alert from their smartphones, and Google asks them this question: “What did you want to know recently?” The goal? To find out all the things you never thought to ask Google about.

This is a big step for Google. It moves search into a whole new arena. It’s shifting the paradigm from explicit searching to implicit searching. And that’s important for all of the following reasons:

Search is becoming more contextually sensitive. Mobile search is contextually sensitive search. If you have your calendar, your to-do list, your past activities and a host of other information all stored on a device that knows where you are, it becomes much easier to guess what you might be interested in. Let’s say, for example, that your calendar has “Date with Julie” entered at 7 p.m., and you’re downtown. In the past year, 57% of your “dates with Julie” have generally involved dinner and a movie. You usually spend between $50 and $85 on dinner, and your movies of choice generally vacillate between rom-coms and action-adventures (depending on who gets to choose).

In this scenario, without waiting for you to ask, Google could probably be reasonably safe in suggesting local restaurants that match your preferences and price ranges, showing you any relevant specials or coupons, and giving you the line-up of suggested movies playing at local theatres. Oh, and by the way, you’re out of milk and it’s on sale at the grocery store on the way home.
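None of this requires anything exotic on the engineering side. A rough sketch of that rule-of-thumb logic might look like the code below – the event format, the preference history and the suggestion rules are all hypothetical illustrations, not Google’s actual system:

```python
from datetime import datetime

# Hypothetical signals a phone might already hold (illustrative data, not a real API)
calendar_event = {"title": "Date with Julie", "time": datetime(2012, 11, 29, 19, 0)}
location = "downtown"
history = {
    "dinner_and_movie_rate": 0.57,        # share of past "dates with Julie" involving dinner + movie
    "dinner_budget": (50, 85),            # typical spend range, in dollars
    "movie_genres": ["rom-com", "action-adventure"],
}
shopping_list = ["milk"]

def implicit_suggestions(event, where, prefs, needs):
    """Toy rule-based version of answering before the user asks."""
    suggestions = []
    if "date" in event["title"].lower() and prefs["dinner_and_movie_rate"] > 0.5:
        low, high = prefs["dinner_budget"]
        genres = " and ".join(prefs["movie_genres"])
        suggestions.append(f"Restaurants near {where} in the ${low}-${high} range, with current specials")
        suggestions.append(f"Tonight's showtimes for {genres} films playing nearby")
    if "milk" in needs:
        suggestions.append("Milk is on sale at the grocery store on your way home")
    return suggestions

for s in implicit_suggestions(calendar_event, location, history, shopping_list):
    print("-", s)
```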

Can Googling become implicit? “We’ve often said the perfect search engine will provide you with exactly what you need to know at exactly the right moment, potentially without you having to ask for it,” says Google Lead Experience Designer Jon Wiley, one of the leads of the research experiment.

As our devices know more about us, the act of Googling may move from a conscious act to a subliminal suggestion. The advantage, for Google and us, is that it can provide us with information we never thought to ask for.  In the ideal state envisioned by Google, it can read the cues of our current state and scour its index of information to provide relevant options. Let’s say we just bought a bookcase from Ikea. Without asking, Google can download the user’s manual and pull relevant posts from user support forums.

It ingrains the Google habit. Google is currently in the enviable position of having become a habit. We don’t think to use Google, we just do. Of course, habits can be broken. Habits are a subconscious script that plays out in a familiar environment, delivering an expected outcome without conscious intervention. To break a habit, you usually look at disrupting the environment, stopping the script before it has a chance to play out.

The environment of search is currently changing dramatically. This raises the possibility of the breaking of the Google habit. If our habits suddenly find themselves in unfamiliar territory, the regular scripts are blocked and we’re forced to think our way through the situation.

But if Google can adapt to unfamiliar environments and prompt us with relevant information without us having to give it any thought, the company not only preserves the Google habit but ingrains it even more deeply. Good news for Google, bad news for Bing and other competitors.

It expands Google’s online landscape. Finally, at this point, Google’s best opportunity for a sustainable revenue channel is to monetize search. As long as Google controls our primary engagement point with online information, it has no shortage of monetization opportunities. By moving away from waiting for a query and toward proactive serving of information, Google can exponentially expand the number of potential touch points with users. Each of these  touch points comes with another advertising opportunity.

All this is potentially ground-breaking, but it’s not new. Microsoft was talking about Implicit Querying a decade ago. It was supposed to be built into Windows Vista. At that time, it was bound to the desktop. But now, in a more mobile world, the implications of implicit searching are potentially massive.