The Inevitable Wearable Technology Backlash

First published January 16, 2014 in Mediapost’s Search Insider

Okay, I’ve gone on record – I think wearable technology is a huge disruptive wave currently bearing down on us. Accept it.

And I’ve also said that stupid wearable technology is inevitable. Accept that as well.

It appears that this dam is beginning to burst.

Catharine Taylor had a humorous and totally on-point reaction to the “tech-togs” that were unveiled at CES. Her take: “Thanks, but no thanks.”

Maarten Albarda had a similar reaction to his first go-around with Google Glass – “Huh?”

Look – don’t get me wrong. Wearable technology, together with the “web of everything,” will eventually change our lives, but most of us won’t be going willingly. We’re going to have to get through the “bubble of silliness” first. Some of this stuff will make sense and elicit a well-earned “Cool” (or “Dope” or “Sick” or whatever generational thumbs-up is appropriate). Other things will garner an equally well-earned WTF? And some will be eminently sensible but will still end up being tossed out with the bathwater anyway.

Rob Garner always says “adoption follows function.” This is true, but each of us has a different threshold for what we deem to be functional. If technology starts moving that bar, we know, thanks to the work of Everett Rogers and others, that the audience’s acceptance will follow the inevitable bell curve. Functionality is not equal in the eyes of all beholders.

The other problem with these new technology interfaces is that function is currently scattered around like a handful of grass clippings in the wind. Sure, there are shards of usefulness, but unless you’re willing to wear more layers of wearable tech than your average early-adopting Eskimo (or, as we say here in the politically correct north – Inuit), it’s difficult to see how this can significantly improve our day-to-day lives.

The other thing we have to grapple with is what I would call the WACF – the Weird and Creepy Factor. How exactly do we feel about having the frequency of our butt imprinting our sofa, our bank balance, our blood pressure and our body fat percentage beamed up to the data center of a start-up we’d never heard of before last Friday? I’m an admitted early adopter and I have to confess – I’m not ready to make that leap right now.

It’s not just the privacy of my personal data that’s holding me back, although that is certainly a concern. Part of this goes back to something I talked about a few columns back – the redefinition of what it means to “be” online rather than “go” online. With wearable technology, we’re always “on” – plugged into the network and sharing data whether we’re aware of it or not. This presents us with a philosophical loss of control. Chances are that we haven’t given this a lot of rational consideration, but it contributes to that niggling WACF that may be keeping us from donning the latest piece of wearable tech.

Eventually, the accumulated functionality of all this new technology will overcome all these barriers to adoption, but we will all have differing thresholds marking our surrender to the inevitable.  Garner’s assertion that adoption follows function is true, but it’s true of the functional wave as a whole and in that wave there will be winners and losers. Not all functional improvements get adopted. If all adoption followed all functional improvements, I’d be using a Dvorak keyboard right now. Betamax would have become the standard for videocassettes. And we’d be conversing in Esperanto. All functional improvements – all casualties to an audience not quite ready to embrace them.

Expect more to come.

The Psychology of Usefulness: How We Made Google a Habit

In the last two posts, I looked first at the difference between autotelic and exotelic activities, then how our brain judges the promise of usefulness. In today’s post, I want to return to the original question: How does this impact user loyalty? As we use more and more apps and destinations that rely on advertising for their revenues, this question becomes more critical for those apps and destinations.

The obvious example here is search engines, the original functional destination. Google is the king of search, but also the company most reliant on these ads. For Google, user loyalty is the difference between life and death. In 2012, Google made a shade over 50 billion dollars (give or take a few hundred million). Of this, over $43 billion came from advertising revenue (about 86%), and of that revenue, 62% came from Google’s own search destinations. That’s a big chunk of revenue to come from one place, so user loyalty is something that Google is paying pretty close attention to.
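
To make that concentration concrete, here’s a quick back-of-the-envelope calculation using the round figures above – a sketch based on the post’s approximations, not audited financials:

```python
# Back-of-the-envelope math using the round 2012 figures cited above
# (the post's approximations, not audited numbers).
total_revenue = 50.0           # billions of dollars, "a shade over 50 billion"
ad_revenue = 43.0              # billions from advertising
own_sites_share_of_ads = 0.62  # share of ad revenue from Google's own search destinations

ad_share_of_total = ad_revenue / total_revenue
own_sites_revenue = ad_revenue * own_sites_share_of_ads
own_sites_share_of_total = own_sites_revenue / total_revenue

print(f"Ads as a share of total revenue: {ad_share_of_total:.0%}")                 # ~86%
print(f"Revenue from Google's own search pages: ${own_sites_revenue:.1f}B")        # ~$26.7B
print(f"...which is {own_sites_share_of_total:.0%} of everything Google earns")    # ~53%
```

In other words, by this rough math, more than half of everything Google earns flows through its own search results pages – which is why habitual loyalty matters so much to the company.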

Now, let’s look at how durable Google’s hold on our brains really is. Let’s revisit the evaluation cascade that happens in our brain each time we contemplate a task:

  • If very familiar and highly stable, we do it by habit
  • If fairly familiar but less stable, we do it by a memorized procedure with some conscious guidance
  • If new and unfamiliar, we forage for alternatives by balancing the effort required against the expected utility of each option (a rough code sketch of this cascade follows the list)
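
Here’s a minimal sketch of that cascade expressed as code. It’s purely illustrative: the thresholds, scores and function names are my own invention, not anything drawn from Rogers, Graybiel or Google.

```python
# Purely illustrative model of the evaluation cascade described above.
# The thresholds and scores are invented for the sketch, not taken from the research.

def choose_approach(familiarity: float, stability: float) -> str:
    """Decide how much conscious effort a task gets, on 0-1 scales."""
    if familiarity > 0.8 and stability > 0.8:
        return "habit"              # runs with almost no conscious oversight
    if familiarity > 0.5:
        return "guided procedure"   # memorized steps, some conscious guidance
    return "forage"                 # weigh effort against expected utility

def forage(alternatives):
    """Pick the alternative with the best expected utility per unit of effort."""
    return max(alternatives, key=lambda a: alternatives[a][0] / alternatives[a][1])

# Example: a daily search is familiar and stable, so it runs on habit...
print(choose_approach(familiarity=0.95, stability=0.9))    # habit

# ...while an unfamiliar task sends us foraging: (expected utility, effort) per option
options = {"search engine": (0.8, 0.2), "ask a colleague": (0.9, 0.6)}
print(forage(options))                                      # search engine
```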

Not surprisingly, the more our brain has to be involved in judging usefulness, the less loyal we are. If you can become a habit, you are rewarded with a fairly high degree of loyalty. Luckily for Google, they fall into this category – for now. Let’s look a little more at how Google became a habit and what might have to happen for us to break this habit.

Habits depend on three things: high repetition, a stable execution environment and consistently acceptable outcomes. Google was fortunate enough to have all three factors present.

First – repetition. How many times a day do you use a search engine? For me, it’s probably somewhere between 10 and 20 times per day. And usage of search is increasing. We search more now than we did 5 years ago. If you do something that often throughout the day, it wouldn’t make much sense to force your brain to actively think its way through that task each and every time – especially if the steps required to complete that task don’t really change that much. So the brain, which is always looking for ways to save energy, records a “habit script” (or, to use the terminology of Ann Graybiel – “chunks”) that can play out without a lot of guidance. Searching definitely meets the requirements for the first step of forming a habit.

Second – stability. How many search engines do you use? If you’re like the majority of North Americans, you probably use Google for almost all your searches. This introduces what we would call a stable environment. You know where to go, you know how to use it and you know how to use the output. There is a reason why Google is very cautious about changing their layout and only does so after a lot of testing. What you expect and what you get shouldn’t be too far apart. If they are, that’s disruptive, and disruption breaks habits. This is the last thing that Google wants.

Third – acceptable outcomes. So, if stability preserves habits, why would Google change anything? Why doesn’t Google’s search experience look exactly like it did in 1998 (fun fact – if you search Google for “Google in 1998” it will show you exactly what the results page looked like)? That would truly be stable, which should keep those all-important habits glued in place. Well, because expectations change. Here’s the thing about expected utility, which I talked about in the last post: it doesn’t go away when we form a habit, it just moves downstream in the process. When we do a task for the first time, or in an unstable environment, expected utility precedes our choice of alternatives. When a “habit script” or “chunk” plays out, we still need to do a quick assessment of whether we got what we expected. Habits only stay in place if the “habit script” passes this test. If we searched for “Las Vegas hotels” and Google returned results for Russian borscht, that habit wouldn’t last very long. So Google constantly has to maintain this delicate balance – meeting expectations without disrupting the user’s experience too much. And expectations are constantly changing.
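
Continuing the illustrative sketch above, that downstream check might look something like this; the relevance scores and tolerance are invented for illustration:

```python
# Sketch of the downstream check described above: the habit script runs first,
# and expected utility only shows up afterwards as a pass/fail test on the outcome.
# The "relevance" scores and the tolerance are invented for illustration.

def habit_survives(expected_relevance: float, observed_relevance: float,
                   tolerance: float = 0.2) -> bool:
    """Return True if the outcome was close enough to expectations to keep the habit."""
    return observed_relevance >= expected_relevance - tolerance

print(habit_survives(expected_relevance=0.9, observed_relevance=0.85))  # True: habit holds
print(habit_survives(expected_relevance=0.9, observed_relevance=0.3))   # False: "Russian borscht"
```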

When Google was introduced in 1998, it created a perfect storm of habit-building potential. The introduction coincided with a dramatic uptick in adoption of the internet and usage of web search in particular. In 1998, 36% of American adults were using the Internet (according to Pew). In 2000, that had climbed to 46%, and by 2001 it was up to 59%. More of us were going online, and if we were going online we were also searching. The average searches per day on Google exploded from under 10,000 in 1998 to 60 million in 2000 and 1.2 billion in 2007. Obviously, we were searching – a lot – so the repetition prerequisite was well in hand.

Now – stability. In the early days of the Internet, there was little stability in our search patterns. We tended to bounce back and forth between a number of different search engines. In fact, the search engines themselves encouraged this by providing “Try your search on…” links for their competitors (an example from Google’s original page is shown below). Because our search tasks were spread across a number of different engines, there was no environmental stability, so no chance for the creation of a true habit. The best our brains could do at this point was store a procedure that required a fair amount of conscious oversight (choosing engines and evaluating outcomes). Stability was further eroded by the fact that some engines were better at some types of searches than others. Some, like Infoseek, were better for timely searches due to their fast indexing cycles and large indexes. Some, like Yahoo, were better at canonical searches that benefited from a hierarchical directory approach. When searching in the pre-Google days, we tended to match our choice of engine to the search we were doing. This required a fairly significant degree of rational neural processing on our part, precluding the formation of a habit.

The bottom of Google’s 1998 results page, with “Try your search on…” links to competing engines.

But Google’s use of PageRank changed the search ballgame dramatically. Their new way of determining relevancy rankings was consistently better, for all types of searches, than any of their competitors’. As we started to use Google for more types of searches because of their superior results, we stopped using their competitors. This finally created the stability required for habit formation.
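
For readers who want to see the intuition, here’s a toy power-iteration version of the idea behind PageRank: a page inherits authority from the pages that link to it. This is the textbook formulation run on a made-up three-page web, not Google’s production algorithm.

```python
# A toy power-iteration sketch of the idea behind PageRank: a page's score
# depends on the scores of the pages linking to it. Textbook formulation only.

def pagerank(links, damping=0.85, iters=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# Toy web: A and B both link to C, which links back to each of them.
toy_web = {"A": ["C"], "B": ["C"], "C": ["A", "B"]}
print(pagerank(toy_web))
```

In this toy web, C ends up with roughly twice the score of A or B because both of them point at it – the core insight being that relevance is judged from the link structure of the web itself, not just the words on a page.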

Finally, acceptable outcomes. As mentioned above, Google came out of the gate with outcomes that generally exceeded our expectations, set by the spotty results of their competitors. Now, all Google had to do to keep the newly formed habit in place was to continue to meet the user’s expectations of relevancy. Thanks to the truly disruptive leap Google took with the introduction of PageRank, they had a huge advantage when it came to search results quality. Google has also done an admirable job of maintaining that quality over the past 15 years. While the gap has narrowed significantly (today, one could argue that Bing comes close on many searches and may even have a slight advantage on certain types of searches), Google has never seriously undershot the user’s expectations when it comes to providing relevant search results. Therefore, Google has never given us a reason to break our habits. This has resulted in a market share that has hovered over 60% for several years now.

When it comes to online loyalty, it’s hard to beat Google’s death grip on search traffic. But that grip may start to loosen in the near future. In my next post, I’ll look at the conditions that can break habitual loyalty, again using Google as an example. I’ll also look at how our brains decide to accept or reject new useful technologies.

Google Holds the Right Cards for a Horizontal Market

First published January 9, 2014 in Mediapost’s Search Insider

Functionality builds up, then across. That was the principle of emerging markets that I talked about in last week’s column. Up – then across – breaking down silos into a more open, competitive and transparent market. I’ll come back to this in a moment.

I also talked about how Google+ might be defining a new way of thinking about social networking, one free of dependence on destinations. It could create a social lens through which all our online activity passes, adding functionality and enriching information.

Finally, this week, I read that Google is pushing hard to extend Android as the default operating system in the Open Automotive Alliance – turning cars into really big mobile devices. This builds on Android’s dominance in the smartphone market (with an 82% market share).

See a theme here?

For years, I’ve been talking about the day when search transitions from being a destination to a utility, powering apps which provide very specific functionality that far outstrips anything you could do on a “one size fits all” search portal. This was a good news/bad news scenario for Google, who was the obvious choice to provide this search grid. But, in doing so, they lose their sole right to monetize search traffic, a serious challenge to their primary income source. However, if you piggyback that search functionality onto the de facto operating system that powers all those apps, and then add a highly functional social graph, you have all the makings of a foundation that will support the “horizontalization” of the mobile connected market. Put this in place, and revenue opportunities will begin falling into your lap.

The writing is plainly on the wall here. The future is all about mobile connections. It is the foundation of the Web of Things, wearable technology, mobile commerce – anything and everything we see coming down the pipe.  The stakes are massive. And, as markets turn horizontal in the inevitable maturation phase to come, Google seems to be well on their way to creating the required foundations for that market.

Let’s spend a little time looking at how powerful this position might be for Google. Microsoft is still coasting on their success in creating a foundation for the desktop, 30 years later.  The fact that they still exist at all is testament to the power of Windows. But the desktop expansion that happened was reliant on just one device – the PC. And, the adoption curve for the PC took two decades to materialize, due to two things: the prerequisite of a fairly hefty investment in hardware and a relatively steep learning curve. The mobile adoption curve, already the fastest in history, has no such hurdles to clear. Relative entry price points are a fraction of what was required for PCs. Also, the learning curve is minimal. Mobile connectivity will leave the adoption curve of PCs in the dust.

In addition, an explosion of connected devices will propel the spread of mobile connectivity. This is not just about smartphones. Two of the biggest disruptive waves in the next 10 years will be wearable technologies and the Web of Things. Both of these will rely on the same foundations: an open and standardized operating system and the ability to access and share data. At the user interface level, the enhancements of powerful search technologies and social-graph-enabled filters will significantly improve the functionality of these devices as they interface with the “cloud.”

In the hand that will have to inevitably be played, it seems that Google is currently holding all the right cards.

Revisiting Entertainment vs Usefulness

Some time ago, I did an extensive series of posts on the psychology of entertainment. My original goal, however, was to compare entertainment and usefulness in how effective they were in engendering long-term loyalty. How do our brains process both? And, to return to my original intent, in that first post almost 4 years ago, how does this impact digital trends and their staying power?

My goal is to find out why some types of entertainment have more staying power than other types. And then, once we discover the psychological underpinnings of entertainment, let’s look at how that applies to some of the digital trends I disparaged: things like social networks, micro-blogging, mobile apps and online video. What role does entertainment play in online loyalty? How does it overlap with usefulness? How can digital entertainment fads survive the novelty curse and jump the chasm to mainstream trends with legs?

In the previous set of posts, I explored the psychology of entertainment extensively, ending up with a discussion of the evolutionary purpose of entertainment. My conclusion was that entertainment lived more in the phenotype than the genotype. To save you going back to that post, I’ll quickly summarize here: the genotype refers to traits actually encoded in our genes through evolution – the hardwired blueprint of our DNA. The phenotype is the “shadow” of these genes – behaviors caused by our genetic blueprints. Genotypes are directly honed by evolution for adaptability and gene survival. Phenotypes are by-products of this process and may confer no evolutionary advantage. Our taste for high-fat foods lives in the genotype – the explosion of obesity in our society lives in the phenotype.

This brings us to the difference between entertainment and usefulness – usefulness relies on mechanisms that predominantly live in the genotype. In the most general terms, it’s the stuff we have to do to get through the day. And to understand how we approach these things on our to-do list, it’s important to understand the difference between autotelic and exotelic activities.

Autotelic activities are the things we do for the sheer pleasure of it. The activity is its own reward. The word autotelic is Greek for “self + goal” – or “having a purpose in and not apart from itself.” We look forward to doing autotelic things. All things that we find entertaining are autotelic by nature.

Exotelic activities are simply a necessary means to an end. They have no value in and of themselves. They’re simply tasks – stuff on our to-do list.

The brain, when approaching these two types of activities, treats them very differently. Autotelic activities fire our reward center – the nucleus accumbens. They come with a corresponding hit of dopamine, building repetitive patterns. We look forward to them because of the anticipation of the reward. They typically also engage the medial prefrontal cortex, orchestrating complex cognitive behaviors and helping define our sense of self. When we engage in an autotelic activity, there’s a lot happening in our skulls.

Exotelic activities tend to flip the brain into its energy-saving mode. Because there is little or no neurological reward in these types of activities (other than a sense of relief once they’re done), they tend to rely on the brain’s ability to store and retrieve procedures. With enough repetition, they often become habits, skipping the brain’s rational loop altogether.

In the next post, we’ll look at how the brain tends to process exotelic activities, as it provides some clues about the loyalty building abilities of useful sites or tools. We’ll also look at what happens when something is both exotelic and autotelic.

The Death and Rebirth of Google+

Google Executive Chairman Eric Schmidt has come out with his predictions for 2014 for Bloomberg TV. Don’t expect any earth-shaking revelations here. Schmidt plays it pretty safe with his prognostications:

Mobile has won – Schmidt says everyone will have a smartphone. “The trend has been mobile was winning… it’s now won.” Less a prediction than stating the obvious.

Big Data and Machine Intelligence will be the Biggest Disruptor – Again, hardly a leap of intuitive insight. Schmidt foresees the evolution of an entirely new data marketplace and corresponding value chain. Agreed.

Gene Sequencing Has Promise in Cancer Treatments – While a little fuzzier than his other predictions, Schmidt again pounces on the obvious. If you’re looking for someone willing to bet the house on gene sequencing, try LA billionaire Patrick Soon-Shiong.

The one thing that was interesting to me was an admission of failure with Google+:

The biggest mistake that I made was not anticipating the rise of the social networking phenomenon.  Not a mistake we’re going to make again. I guess in our defense we were busy working on many other things, but we should have been in that area and I take responsibility for that.

I always called Google+ a non-starter, despite a deceptively encouraging start. But I think it’s important to point out that we tend to judge Google+ against Facebook or other social destinations. As Google+ Vice President of Product Bradley Horowitz made clear in an interview last year with Dailytech.com, Google never saw this as a “Facebook killer.”

“I think in the early going there was a lot of looking for an alternative [to Facebook, Twitter, etc.],” said Horowitz. “But I think increasingly the people who are using Google+ are the people using Google. They’re not looking for an alternative to anything, they’re looking for a better experience on Google.”

And this highlights a fundamental change in how we think about online social activity – one that I think is more indicative of what the future holds. Social is not a destination, social is a paradigm. It’s a layer of connectedness and shared values that acts as a filter, a lens – a way we view reality. That’s what social is in our physical world. It shapes how we view that world. And Horowitz is telling us that that’s how Google looks at social too. With the layering of social signals into our online experience, Google+ gives us an enhanced version of our online experience. It’s not about a single destination, no matter how big that destination might be. It’s about adding richness to everything we do online.

Because humans are social animals, our connections and our perception of ourselves as part of an extended network literally shape every decision we make and everything we do, whether we’re conscious of the fact or not. We are, by design, part of a greater whole. But because social originated online as a set of distinct destinations, it was unable to impact our entire online experience. Facebook or Pinterest acts as a social gathering place – a type of virtual town square – but social is more than that. Google+ is closer to this more holistic definition of “social.”

I’m not sure Google+ will succeed in becoming our virtual social lens, but I do agree that as our virtual sense of social evolves, it will become less about distinct destinations and more about a dynamic paradigm that stays with us constantly, helping to shape, sharpen, enhance and define what we do online. As such, it becomes part of the new way of thinking about being online – not going to a destination but being plugged into a network.

What’s Apple’s Plan for 2014?

First published January 2, 2014 in Mediapost’s Search Insider

When new markets open, value chains first build up, then across. Someone first creates a vertically integrated experience, and then the market opens up as free competition drives efficiency. This is the challenge that currently lies ahead of Apple.

Apple has been the acknowledged master at creating seamless vertically integrated experiences. They did it with the personal computer. They did it with music. They did it with mobile. They did it with tablets. The advantage of working within a closed value chain is that you control every aspect of the experience. You can make sure that everyone plays nice with each other.

The challenge is that at some point, as adoption heats up, you simply cannot scale fast enough to meet market demand. Open markets drive horizontal competition, which drives down prices. The lack of control up and down the chain introduces some short-term user pain, but eventually the dynamics of an open market overcome this, and the advantages of having several companies working on an opportunity outweigh the disadvantages.

Apple loves early markets. Or, at least, they have in the past. Under Jobs, they had a knack for creating an elegantly integrated experience that was carefully crafted from top to bottom within the walls of Cupertino. The vision and obsession with detail that defined the Jobs era was a potent combination when it came to building vertical experiences. Somehow, Apple was able to open new markets over and over again, seemingly at will. They were able to bridge Geoffrey Moore’s “Chasm” by making new experiences painless enough for the front end of the adoption bell curve. As markets rode up the curve, they turned from vertical to horizontal, driving a decline in margins and prices. This is where Apple tended to kick out and look for the next wave to catch.

But that was then, and this is now. As mentioned, Apple doesn’t do very well when markets turn horizontal. They depend on high margins. Only once, with the Mac, were they able to come back and stake out a respectable claim in a horizontal market. And they almost disappeared in the process. The number of dependent circumstances that would be required to repeat that trick is such that I doubt they’re eager to go down the same path with the iPhone or iPad.

In the year end summaries, many are talking about a seeming anomaly –  that despite Android’s massive market share dominance over iOS (81% vs 12.9%, according to a recent Forbes article) it’s Apple that’s ringing up the holiday sales with mobile shoppers (23% vs Android’s paltry 5%).  This becomes more understandable when you put it in the context of a vertical market that is becoming horizontal. Shopping experiences are still much less painful on iOS. And, you have a user base that is much more comfortable with mobile ecommerce because they’re on the leading edge of the adoption curve. They’ve had a mobile device for a number of years now. Android users, in general, tend to be further back on the curve. As the benefits of Darwinian competition redefine the mobile marketplace along more horizontal lines, those ecommerce numbers will revert to a more natural balance, but it will take some time.

As this inevitable change in the marketplace happens, the question then becomes, “What does Apple do next?” Can they find the next wave? And, if they do, does an Apple without Jobs still have what it takes to create the vertical experience that can open up a new market? There are plenty of opportunities – the two most notable ones being connected entertainment devices (the much-rumored new generation of Apple TV) and wearable technology (iWatches, etc).

Apple has always been known for keeping their cards glued against their chest. In 2014, it remains to be seen if they have anything amazing up their sleeve.

Google’s Etymological Dream Come True

First published November 14, 2013 in Mediapost’s Search Insider

Yesterday’s Search Insider column caught my eye. Aaron Goldman explained how search ads were the original native ads. He also explained why native ads work. This is backed up by research we did about 5 years ago, showing how contextual relevance substantially boosted ad effectiveness (but not, ironically, ad awareness). I did a fairly long blog post on the concept of “aligned” intent, if you really want to roll up your sleeves and dive in.

The funny thing was, I was struck by the use of the word “native” itself. For some reason, the use of the term in today’s more politically charged world struck a note of immediate uneasiness. On a gut level, it reminded me of the insensitivity of Daniel Snyder, owner of the Washington Redskins. There’s nothing immoral about the term itself, but it is currently tied to an emotionally charged issue.

As I often do, I decided to check the etymological roots of “native” and immediately noticed something different on the Google search page.  There, at the top, was an etymological time line, showing the root of “native” is the Latin “nasci” – meaning born. So, it was entirely appropriate, given Aaron’s assertion that “native” advertising was “born” on the search page. But it was at the bottom, where a downwards arrow promised “more,” that I hit etymological pay dirt.

Google showed me the typical dictionary entries, but at the bottom, it gave me a chart from its nGram viewer showing usage of “native” in books and publications over the past 200 years. Interestingly, the term has been in slow decline over those 200 years, with a bit of a resurgence over the last 25. When I clicked on the graph it broke the data down further, showing that small-n “native” has been used less and less, but big-N “Native” took a jump in popularity in the mid-’80s, accounting for the mild bump.

Google’s nGram viewer isn’t new, but its capabilities have recently been beefed up, providing a fascinating visual tool for us “wordies” out there. With it, you can plot the popularity of words over 500 years in a body of over 5 million books. For example, a blog post at Informationisbeautiful.net shows several fascinating word trend charts in the English corpus, including drug trends (cocaine was a popular topic in Victorian times, slowed down in the 20’s and exploded again in the 80’s), the battle of religion vs. science (the popularity crossover was in 1930, but the trend has reversed and we’re heading for another one) and interest in sex vs. marriage (sex was barely mentioned prior to 1800, stayed relatively constant until 1910 and grew dramatically in the 70s, but lately it’s dropped off a cliff; marriage has had a spikier history but has remained fairly constant over the last 200 years).

I tried a few charts of my own. Since 1885, “Evolution” has beaten “Creation,” but it took a noticeable drop during the 30’s. Since 1960 both have been on the rise. In 1980, Apple got off to an initial head start, but Microsoft passed it in 1992, never to look back (although it’s had a precipitous decline since 2000). Perhaps the most interesting chart is comparing “radio,” “television” and “internet” since 1900. Radio started growing in the 20’s and hit its popularity peak around 1945, but the crossover with television would take another 40 years (about 1982). Television would only enjoy a brief period of dominance. In 1990, the meteoric rise of the Internet started, and it surpassed both radio and television around 1997.
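
If you want to play with the same data outside the browser, here’s a rough sketch of how you might reproduce that last comparison yourself. The CSV file name and its columns (word, year, frequency) are hypothetical; you’d build the export from the nGram viewer or from Google’s downloadable Books Ngram count files.

```python
# A rough sketch of reproducing this kind of comparison outside the browser.
# Assumes you've exported yearly relative frequencies for each word to a CSV
# with columns word, year, frequency -- the file name and columns are hypothetical.
import csv
from collections import defaultdict

import matplotlib.pyplot as plt

series = defaultdict(list)
with open("ngram_export.csv", newline="") as f:          # hypothetical export
    for row in csv.DictReader(f):
        series[row["word"]].append((int(row["year"]), float(row["frequency"])))

for word in ("radio", "television", "internet"):
    points = sorted(series[word])
    plt.plot([year for year, _ in points], [freq for _, freq in points], label=word)

plt.legend()
plt.ylabel("relative frequency in the English corpus")
plt.show()
```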

nGram comparison of “radio,” “television” and “internet”

My final chart was to see how Google fared in its own tool. Not surprisingly, Google has dominated the search space since 2001, and done so quite handily. Currently, it’s 6 times more popular than its rivals, Yahoo and Bing. One caveat here, though – Bing’s popularity started to climb in 1830, so I think they’re talking about either the cherry, Chinese people named Bing, or a German company that used to make kitchen utensils. Either that, or Microsoft has had their search engine in development a lot longer than anyone guessed.

nGram comparison of “Google,” “Yahoo” and “Bing”

Yahoo Under the Mayer Regime

First published November 7, 2013 in Mediapost’s Search Insider

OK, it has a new logo. The mail interface has been redesigned. But according to a recent New York Times piece, Yahoo still doesn’t know what it wants to be when it grows up. Marissa Mayer seems to be busy, with a robust hiring spree, eight new acquisitions, 15 new product updates, a nice 20% bump in traffic and a stock price that’s been consistently heading north. But all this activity hasn’t seemed to coalesce into a discernible strategy — from the outside, anyway.

It’s probably because Mayer is busy rebuilding the guts of the organization. Cultures are notoriously difficult things to change. In any organization where a major change in direction is required, you will have to deal with several layers of inertia — and, even more challenging, momentum heading the wrong way. In the blog post, design guru Don Norman agrees: “The major changes she has made are not what the logo looks like or a new Yahoo Mail. The major changes are what the company looks like internally. She’s revitalizing the inside of the company, and what everyone sees on the surface are just little ripples.”

To be fair, Yahoo has been an organization lacking a clear direction for a long, long time. I remember speaking at the Sunnyvale campus years ago, when Yahoo was still being remade into a media property, under the direction of Terry Semel. There were entire departments (including the core search team) that felt cut adrift. Since then, the strategic direction of Yahoo has resembled that of a Roomba vacuum, plowing forward until it senses an obstacle, then heading off in an entirely new direction.

What was interesting about the recent Times piece was the marked contrast to the rumors and kvetching coming from Mayer’s old digs: Google. There, the big news seems to be the ultra-secret party barge anchored in San Francisco Bay. And a Quora thread entitled “What’s the Worst Part about Working at Google?” paints a picture of a frat house that has yet to wake up and realize the party’s over:

  • Overqualified people working at menial jobs.
  • Frustration at not being able to contribute anything meaningful in an increasingly bureaucratic environment.
  • Engineers with egos outstripping their skills.
  • Bottlenecks preventing promotion.
  • A permanent “party” atmosphere that makes it difficult to get any actual work done.

But perhaps the most telling comment came from someone who spent seven years at Google, who said that all the meaningful innovation comes from an exceedingly small group, headed by Larry and Sergey. The rest of the Googlers are just along for the ride:

Here’s something to ponder.  The only meaningful organic products to come out of Google were Search and then AdSense.  (Android — awesome, purchased.  YouTube — awesome, purchased, etc. Larry and/or Sergey were obviously intimately involved in both.  Maps – awesome, purchased. Google Plus is a flop for all non-Googlers globally, Chrome browser is great, but no direct monetization (indirectly protects search), the world has passed the Chrome OS by… etc. ) Fast-forward 14 years, and the next big thing from Google, I bet, will be Google Glass, and guess who PMd it.  Sergey Brin.  Tiny number of wave creators, huge number of surfers.

So we have Google, still surfing a wave that started 15 years ago, and Yahoo struggling to get in position to catch the next one. For both, the challenge is a fundamental one: How do you effect change in a massive organization and get thousands of employees contributing in a meaningful way? Ironically, it may turn out that Marissa Mayer has a significant advantage here. If you’re bright, ambitious and looking to do something meaningful with your career, what would be more appealing: trying to shoehorn your way into an already overcrowded house party, or the opportunity to roll up your sleeves and resurrect one of the Web’s great brands?

What Does Being “Online” Mean?

First published October 24, 2013 in Mediapost’s Search Insider

If readers’ responses to my few columns about Google’s Glass can be considered a representative sample (which, for many reasons, it can’t, but let’s put that aside for the moment), it appears we’re circling the concept warily. There’s good reason for this. Privacy concerns aside, we’re breaking virgin territory here that may shift what it means to be online.

Up until now, the concept of online had a lot in common with our understanding of physical travel and acquisition. As Peter Pirolli and Stuart Card discovered, our virtual travels tapped into our evolved strategies for hunting and gathering. The analogy, which holds up in most instances, is that we traveled to a destination. We “went” online, to “go” to a website, where we “got” information. It was, in our minds, much like a virtual shopping trip. Our vehicle just happened to be whatever piece of technology we were using to navigate the virtual landscape of “online.”

As long as we framed our online experiences in this way, we had the comfort of knowing we were somewhat separate from whatever “online” was. Yes, it was morphing faster than we could keep up with, but it was under our control, subject to our intent. We chose when we stepped from our real lives into our virtual ones, and the boundaries between the two were fairly distinct.

There’s a certain peace of mind in this. We don’t mind the idea of online as long as it’s a resource subject to our whims. Ultimately, it’s been our choice whether we “go” online or not, just as it’s our choice to “go” to the grocery store, or the library, or our cousin’s wedding. The sphere of our lives, as defined by our consciousness, and the sphere of “online” only intersected when we decided to open the door.

As I said last week, even the act of “going” online required a number of deliberate steps on our part. We had to choose a connected device, frame our intent and set a navigation path (often through a search engine). Each of these steps reinforced our sense that we were at the wheel in this particular journey. Consider it our security blanket against a technological loss of control.

But, as our technology becomes more intimate, whether it’s Google Glass, wearable devices or implanted chips, being “online” will cease to be about “going” and will become more about “being.”  As our interface with the virtual world becomes less deliberate, the paradigm becomes less about navigating a space that’s under our control and more about being an activated node in a vast network.

Being “online” will mean being “plugged in.” The lines between “online” and “ourselves” will become blurred, perhaps invisible, as technology moves at the speed of unconscious thought. We won’t be rationally choosing destinations, applications or devices. We won’t be keying in commands or queries. We won’t even be clicking on links. All the comforting steps that currently reinforce our sense of movement through a virtual space at our pace and according to our intent will fade away. Just as a light bulb doesn’t “go” to electricity, we won’t “go” online.  We will just be plugged in.

Now, I’m not suggesting a Matrix-like loss of control. I really don’t believe we’ll become feed sacs plugged into the mother of all networks. What I am suggesting is a switch from a rather slow, deliberate interface that operates at the speed of conscious thought to a much faster interface that taps into the speed of our subconscious cognitive processing. The impulses that will control the gateway of information, communication and functionality will still come from us, but it will be operating below the threshold of our conscious awareness. The Internet will be constantly reading our minds and serving up stuff before we even “know” we want it.

That may seem like neurological semantics, but it’s a vital point to consider. Humans have been struggling for centuries with the idea that we may not be as rational as we think we are. Unless you’re a neuroscientist, psychologist or philosopher, you may not have spent a lot of time pondering the nature of consciousness, but whether we actively think about it or not, it does provide a mental underpinning to our concept of who we are.  We need to believe that we’re in constant control of our circumstances.

The newly emerging definition of what it means to be “online” may force us to explore the nature of our control at a level many of us may not be comfortable with.

Losing My Google Glass Virginity

Originally published October 17, 2013 in Mediapost’s Search Insider

Rob, I took your advice.

A few columns back, when I said Google’s Glass might not be ready for mass adoption, fellow Search Insider Rob Garner gave me this advice: “Don’t knock it until you try it.” So, when a fellow presenter at a conference I was at last week brought along his Glass and offered me a chance to try them (or “it”? Does anyone else find Google’s messing around with plural forms confusing and irritating?), I took him up on it. To say I jumped at it may be overstating the case – let’s just say I enthusiastically ambled to it.

I get Google Glass. I truly do. To be honest, the actual experience of using them came up a little short of my expectations, but not much. It’s impressive technology.

But here’s the problem. I’m a classic early adopter. I always look at what things will be, overlooking the limitations of what currently “is.” I can see the dots of potential extending toward a horizon of unlimited possibility, and don’t sweat the fact that those dots still have to be connected.

On that level, Google Glass is tremendously exciting, for two reasons that I’ll get to in a second. For many technologies, I’ll even connect a few dots myself, willing to trade off pain for gain. That’s what early adopters do. But not everyone is an early adopter. Even given my proclivity for nerdiness, I felt a bit like a jerk standing in a hotel lobby, wearing Glass, staring into space, my hand cupped over the built-in mike, repeating instructions until Glass understood me. I learned there’s a new label for this; for a few minutes I became a “Glasshole.”

Sorry Rob, I still can’t see the mainstream going down this road in the near future.

But there are two massive reasons why I’m still tremendously bullish on wearable technology as a concept. One, it leverages the importance of use case in a way no previous technology has ever done. And two, it has the potential to overcome what I’ll call “rational lag time.”

The importance of use case in technology can be summed up in one word: iPad. There is absolutely no technological reason why tablets, and iPads in particular, should be as popular as they are. There is nothing in an iPad that did not exist in another form before. It’s a big iPhone, without the phone. The magic of an iPad lies in the fact that it’s a brilliant compromise: the functionality of a smartphone in a form factor that makes it just a little bit more user-friendly. And because of that, it introduced a new use case and became the “lounge” device. Unlike a smartphone, where size limits the user experience in some critical ways (primarily in input and output), tablets offer acceptable functionality in a more enjoyable form. And that is why almost 120 million tablets were sold last year, a number projected (by Gartner) to triple by 2016.

The use case of wearable technology still needs to be refined by the market, but the potential to create addictive user experiences is exceptional. Even with Glass’ current quirks, it’s a very cool interface. Use case alone leads me to think the recent $19 billion by 2018 estimate of the size of the wearable technology market is, if anything, a bit on the conservative side.

But it’s the “rational lag time” factor that truly makes wearable technology a game changer. Currently, all our connected technologies can’t keep up with our brains. When we decide to do something, our brains register subconscious activity in about 100 milliseconds, or about one tenth of a second. However, it takes another 500 milliseconds (half a second) before our conscious brain catches up and we become aware of our decision to act. In more complex actions, a further lag happens when we rationalize our decision and think through our possible alternatives. Finally, there’s the action lag, where we have to physically do something to act on our intention. At each stage, our brain can shut down impulses if it feels they require too much effort. Humans are, neurologically speaking, rather lazy (or energy-efficient, depending on how you look at it).

So we have a sequence of potential lags before we act on our intent: Unconscious Stimulation > Conscious Awareness > Rational Deliberation > Possible Action. Our current interactions with technology live at the end of this chain. Even if we have a smartphone in our pocket, it takes several seconds before we’re actively engaging with it. While that might not seem like much, when the brain measures action in split seconds, that’s an eternity of time.
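
To put rough numbers on that chain, here’s a small illustrative calculation. The first two figures are the ones cited above; the deliberation and device-action values are my own placeholder guesses, standing in for “a further lag” and “several seconds.”

```python
# Rough arithmetic on the "rational lag" chain. The first two figures come from
# the post (about 100 ms of subconscious activity, another 500 ms to conscious
# awareness); the deliberation and device-action numbers are illustrative guesses.
lags_ms = {
    "unconscious stimulation": 100,     # cited above
    "conscious awareness": 500,         # cited above
    "rational deliberation": 2000,      # illustrative placeholder
    "pull out phone and act": 5000,     # illustrative: "several seconds"
}

def time_to_engage(intercept_after: str) -> int:
    """Total lag if technology steps in right after the named stage."""
    stages = list(lags_ms)
    return sum(lags_ms[stage] for stage in stages[: stages.index(intercept_after) + 1])

print(time_to_engage("pull out phone and act"))   # today's interfaces: ~7,600 ms
print(time_to_engage("rational deliberation"))    # an always-on wearable: ~2,600 ms
print(time_to_engage("conscious awareness"))      # implantable, pre-deliberation: ~600 ms
```

Even with these placeholder numbers, the point stands: pulling the interface back one step along the chain shaves seconds off the delay, and intercepting at conscious awareness brings it down to well under a second.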

But technology has the potential to work backward along this chain. Let’s move just one step back, to rational deliberation. If we had an “always on” link where we could engage in less than one second, we could utilize technology to help us deliberate. We still have to go through the messiness of framing a request and interpreting results, but it’s a quantum step forward from where we currently are.

The greatest potential (and the greatest fear) lies one step further back – at conscious awareness. Now we’re moving from wearable technology to implantable technology. Imagine if technology could be activated at the speed of conscious thought, so the unconscious stimulation is detected and parsed, and by the time our conscious brain kicks into gear, relevant information and potential actions are already gathered and waiting for us. At this point, any artifice of the interface is gone, and technology has eliminated the rational lag. This is the beginning of Kurzweil’s Singularity: the destination on a path that devices like Google Glass are starting down.

As I said, I like to look at the dots. Someone else can worry about how to connect them.