Google’s Etymological Dream Come True

First published November 14, 2013 in Mediapost’s Search Insider

Yesterday’s Search Insider column caught my eye. Aaron Goldman explained how search ads were the original native ads. He also explained why native ads work. This is backed up by research we did about 5 years ago, showing how contextual relevance substantially boosted ad effectiveness (but not, ironically, ad awareness). I did a fairly long blog post on the concept of “aligned” intent, if you really want to roll up your sleeves and dive in.

The funny thing was that I found myself snagged on the word “native” itself. In today’s more politically charged world, the term hit a note of immediate unease. On a gut level, it reminded me of the insensitivity of Daniel Snyder, owner of the Washington Redskins. There’s nothing immoral about the term itself, but it is currently tied to an emotionally charged issue.

As I often do, I decided to check the etymological roots of “native” and immediately noticed something different on the Google search page. There, at the top, was an etymological timeline showing that the root of “native” is the Latin “nasci,” meaning born. So it was entirely appropriate, given Aaron’s assertion that “native” advertising was “born” on the search page. But it was at the bottom, where a downward arrow promised “more,” that I hit etymological pay dirt.

Google showed me the typical dictionary entries, but at the bottom, it gave me a chart from its Ngram Viewer showing usage of “native” in books and publications over the past 200 years. Interestingly, the term has been in slow decline over those two centuries, with a bit of a resurgence over the last 25 years. When I clicked on the graph, it broke the usage down further, showing that small-n “native” has been used less and less, but big-N “Native” took a jump in popularity in the mid-’80s, accounting for the mild bump.

Google’s Ngram Viewer isn’t new, but its capabilities have recently been beefed up, providing a fascinating visual tool for us “wordies” out there. With it, you can plot the popularity of words over 500 years in a body of over 5 million books. For example, a blog post at Informationisbeautiful.net shows several fascinating word trend charts in the English corpus, including drug trends (cocaine was a popular topic in Victorian times, slowed down in the ’20s and exploded again in the ’80s), the battle of religion vs. science (the popularity crossover came in 1930, but the trend has since reversed and we’re heading for another one) and interest in sex vs. marriage (sex was barely mentioned prior to 1800, stayed relatively constant until 1910 and grew dramatically in the ’70s, but lately it’s dropped off a cliff; marriage has had a spikier history but has remained fairly constant over the last 200 years).
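
If you’d rather pull the numbers than squint at the chart, the viewer can be coaxed into returning raw data. A hedged sketch in Python: Google doesn’t publish an official Ngram API, so the endpoint and parameter names below are inferred from the viewer’s own URLs (treat them as assumptions, not a documented contract):

```python
# A hedged sketch: fetch Ngram Viewer data via the (unofficial) JSON
# endpoint that appears to back the chart page. Parameters mirror the
# viewer's URL query string and may change without notice.
import requests

def ngram_frequencies(phrases, year_start=1800, year_end=2008):
    resp = requests.get(
        "https://books.google.com/ngrams/json",
        params={
            "content": phrases,      # e.g. "native,Native" (case-sensitive)
            "year_start": year_start,
            "year_end": year_end,
            "corpus": "en-2019",     # English corpus label (an assumption)
            "smoothing": 3,
        },
        timeout=30,
    )
    resp.raise_for_status()
    # Each series looks like {"ngram": ..., "timeseries": [f, f, ...]}
    return {s["ngram"]: s["timeseries"] for s in resp.json()}

# Compare small-n "native" with big-N "Native," as in the chart above
series = ngram_frequencies("native,Native")
```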

I tried a few charts of my own. Since 1885, “Evolution” has beaten “Creation,” but it took a noticeable drop during the ’30s. Since 1960, both have been on the rise. In 1980, Apple got off to an initial head start, but Microsoft passed it in 1992, never to look back (although it’s had a precipitous decline since 2000). Perhaps the most interesting chart compares “radio,” “television” and “internet” since 1900. Radio started growing in the ’20s and hit its popularity peak around 1945, but the crossover with television would take another 40 years (about 1982). Television would enjoy only a brief period of dominance. In 1990, the meteoric rise of the Internet began, and it surpassed both radio and television around 1997.

[Chart: “radio” vs. “television” vs. “internet”]

My final chart was to see how Google fared in its own tool. Not surprisingly, Google has dominated the search space since 2001, and done so quite handily. Currently, it’s six times more popular than its rivals, Yahoo and Bing. One caveat here, though: Bing’s popularity started to climb in 1830, so I think those mentions refer to either the cherry, Chinese people named Bing, or a German company that used to make kitchen utensils. Either that, or Microsoft has had its search engine in development a lot longer than anyone guessed.

[Chart: “Google” vs. “Yahoo” vs. “Bing”]

Yahoo Under the Mayer Regime

First published November 7, 2013 in Mediapost’s Search Insider

OK, it has a new logo. The mail interface has been redesigned. But according to a recent New York Times piece, Yahoo still doesn’t know what it wants to be when it grows up. Marissa Mayer seems to be busy, with a robust hiring spree, eight new acquisitions, 15 new product updates, a nice 20% bump in traffic and a stock price that’s been consistently heading north. But all this activity hasn’t seemed to coalesce into a discernible strategy — from the outside, anyway.

It’s probably because Mayer is busy rebuilding the guts of the organization. Cultures are notoriously difficult things to change. In any organization where a major change in direction is required, you have to deal with several layers of inertia — and, even more challenging, momentum heading the wrong way. In the piece, design guru Don Norman agrees: “The major changes she has made are not what the logo looks like or a new Yahoo Mail. The major changes are what the company looks like internally. She’s revitalizing the inside of the company, and what everyone sees on the surface are just little ripples.”

To be fair, Yahoo has been an organization lacking a clear direction for a long, long time. I remember speaking at the Sunnyvale campus years ago, when Yahoo was still being remade into a media property, under the direction of Terry Semel. There were entire departments (including the core search team) that felt cut adrift. Since then, the strategic direction of Yahoo has resembled that of a Roomba vacuum, plowing forward until it senses an obstacle, then heading off in an entirely new direction.

What was interesting about the recent Times piece was the marked contrast to the rumors and kvetching coming from Mayer’s old digs: Google. There, the big news seems to be the ultra-secret party barge anchored in San Francisco Bay. And a Quora thread entitled “What’s the Worst Part about Working at Google?” paints a picture of a frat house that has yet to wake up and realize the party’s over:

  • Overqualified people working at menial jobs.
  • Frustration at not being able to contribute anything meaningful in an increasingly bureaucratic environment.
  • Engineers with egos outstripping their skills.
  • Bottlenecks preventing promotion.
  • A permanent “party” atmosphere that makes it difficult to get any actual work done.

But perhaps the most telling comment came from someone who spent seven years at Google, who said that all the meaningful innovation comes from an exceedingly small group, headed by Larry and Sergey. The rest of the Googlers are just along for the ride:

Here’s something to ponder.  The only meaningful organic products to come out of Google were Search and then AdSense.  (Android — awesome, purchased.  YouTube — awesome, purchased, etc. Larry and/or Sergey were obviously intimately involved in both.  Maps – awesome, purchased. Google Plus is a flop for all non-Googlers globally, Chrome browser is great, but no direct monetization (indirectly protects search), the world has passed the Chrome OS by… etc. ) Fast-forward 14 years, and the next big thing from Google, I bet, will be Google Glass, and guess who PMd it.  Sergey Brin.  Tiny number of wave creators, huge number of surfers.

So we have Google, still surfing a wave that started 15 years ago, and Yahoo struggling to get in position to catch the next one. For both, the challenge is a fundamental one: How do you effect change in a massive organization and get thousands of employees contributing in a meaningful way? Ironically, it may turn out that Marissa Mayer has a significant advantage here. If you’re bright, ambitious and looking to do something meaningful with your career, what would be more appealing: trying to shoehorn your way into an already overcrowded house party, or the opportunity to roll up your sleeves and resurrect one of the Web’s great brands?

Whom Would You Trust: A Human or an Algorithm?

First published October 31, 2013 in Mediapost’s Search Insider

I’ve been struggling with a dilemma.

Almost a year ago, I wrote a column asking if Big Data would replace strategy. That kicked off a months-long journey in which I’ve been looking for a more informed answer to that query. It’s a massively important question that’s playing out in many arenas today, including medicine, education, government and, of course, finance.

In marketing, we’re well into the era of big data. Of course, it’s not just data we’re talking about. We’re talking about algorithms that use that data to make automated decisions and take action. Some time ago, MediaPost’s Steve Smith introduced us to a company called Persado, which takes an algorithmic approach to copy testing and optimization. As an ex-copywriter turned performance marketer, I wasn’t sure how I felt about that. I understand the science of continuous testing, but I have an emotional stake in the art of crafting an effective message. And therein lies the dilemma. Our comfort with algorithms seems to depend on the context in which we encounter them and the degree of automation involved.

Let me give you an example from Ian Ayres’ book “Super Crunchers.” There’s a company called Epagogix that uses an algorithm to predict the box-office appeal of unproduced movie scripts. Producers can retain the service to help them decide which projects to fund. Epagogix will also help producers optimize their chosen scripts to improve box-office performance. The question here is: do we want an algorithm controlling the creative output of the movie industry? Would we be comfortable taking humans out of the loop completely and seeing where the algorithm eventually takes us?

Now, you may counter that we could include feedback from audience responses. We could use social signals to continually improve the algorithm, a collaborative filtering approach that uses the power of Big Data to guide the film industry’s creative process. Humans are still in the loop in this approach, but only as an aggregated sounding board. We have removed the essentially human elements of creativity, emotion and intuition. Even with the most robust system imaginable, are you comfortable with us humans taking our hands off the wheel?
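
For the record, collaborative filtering is less exotic than it sounds. A bare-bones sketch (the ratings matrix and all numbers are invented for illustration): estimate a viewer’s reaction to an unseen film by weighting the votes of viewers with similar taste.

```python
# Toy collaborative filtering: predict a viewer's rating for an unseen
# film from the ratings of similar viewers. Data invented for illustration.
import numpy as np

ratings = np.array([   # rows: viewers, columns: films (0 = not yet seen)
    [5, 4, 0, 1],
    [4, 5, 4, 1],
    [1, 1, 5, 4],
])

def predict(viewer, film):
    raters = ratings[ratings[:, film] > 0]        # viewers who saw the film
    v = ratings[viewer]
    # cosine similarity between our viewer and each rater
    sims = raters @ v / (np.linalg.norm(raters, axis=1) * np.linalg.norm(v))
    return sims @ raters[:, film] / sims.sum()    # similarity-weighted mean

print(f"Predicted rating: {predict(0, 2):.1f}")   # viewer 0, film 2
```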

Here’s another example from Ayres’ book. There is substantial empirical evidence showing that algorithms are better at diagnosing medical conditions than clinical practitioners. In a 1989 study by Dawes, Faust and Meehl, an algorithmic diagnostic rule set was consistently more reliable than the clinicians themselves. The researchers then tried a combination, where doctors were made aware of the algorithm’s outcomes but remained the final judges. Again, the doctors would have been better off going with the algorithm’s results. Their second-guessing increased their margin of error significantly.
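
For what it’s worth, the “algorithmic rule set” in that literature is usually nothing fancier than a fixed, weighted checklist. A toy sketch (the features, weights and threshold are all invented for illustration), which also shows where the reliability comes from: the same inputs always produce the same answer.

```python
# A toy "actuarial" rule in the Dawes/Faust/Meehl mold: a fixed weighted
# checklist, applied mechanically. All values invented for illustration;
# real rules are fit to outcome data before being used this way.
WEIGHTS = {"fever": 2.0, "elevated_wbc": 3.0, "localized_pain": 2.5}
THRESHOLD = 5.0

def flag_for_condition(findings):
    """Sum the weights of the findings present; flag if at/over threshold."""
    score = sum(WEIGHTS.get(f, 0.0) for f in findings)
    return score >= THRESHOLD

print(flag_for_condition({"fever", "elevated_wbc"}))    # True  (score 5.0)
print(flag_for_condition({"fever", "localized_pain"}))  # False (score 4.5)
```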

But, even knowing this, would you be willing to rely completely on an automated algorithm the next time you need medical attention? What if there was no doctor involved at all, and you were diagnosed and treated by an algo-driven robot?

There is also mounting (albeit highly controversial) evidence showing that direct instruction produces better learning outcomes than traditional exploratory teaching methods. In direct instruction, scripted automatons could easily replace the teacher’s role. Test scores could provide self-optimizing feedback loops. Learning could be driven by algorithms and delivered at a distance. Classrooms, along with teachers, could disappear completely. Is this a school you’d sign your kid up for?

Let’s stoke the fires of this dilemma a little. In a frightening TED talk, Kevin Slavin talks about how algorithms rule the world and offers a few examples of how algorithms have gotten it wrong in the past. Amazon’s pricing algorithms priced an out-of-print book called “The Making of a Fly” at a whopping $23.6 million. Surprisingly, there were no sales. And in financial markets, where we’ve largely abdicated control to algorithms, those same algorithms spun out of control no fewer than 18,000 times in 2012. So far, these instances have been identified and corrected in milliseconds, but there’s always a Black Swan chance that one time, they’ll crash the economy just for the hell of it.
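
The widely reported culprit in the “Making of a Fly” case was a pair of third-party sellers’ repricing bots, each keyed to the other’s price: one undercutting slightly, the other marking up by roughly 27%. Since the product of the two multipliers is greater than 1, the price compounds on every cycle. A toy simulation (the starting price is my own guess; the multipliers approximate those reported at the time):

```python
# Toy simulation of two repricing bots chasing each other's price.
# Multipliers approximate the reported ones; starting price is a guess.
price_a = price_b = 35.00          # plausible used-textbook prices (assumed)
UNDERCUT, MARKUP = 0.9983, 1.270589

cycles = 0
while price_b < 23_000_000:
    price_a = UNDERCUT * price_b   # bot A prices just under bot B
    price_b = MARKUP * price_a     # bot B prices well above bot A
    cycles += 1

print(f"~${price_b:,.0f} after {cycles} repricing cycles")
```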

But lest we humans feel too smug, let’s remember this sobering fact: 20% of all fatal diseases are misdiagnosed. In fact, misdiagnosis accounts for about one-third of all medical error. And we have no one to blame for that but ourselves.

As I said – it’s a dilemma.

What Does Being “Online” Mean?

First published October 24, 2013 in Mediapost’s Search Insider

If readers’ responses to my last few columns about Google Glass can be considered a representative sample (which, for many reasons, they can’t, but let’s put that aside for the moment), it appears we’re circling the concept warily. There’s good reason for this. Privacy concerns aside, we’re entering virgin territory here that may shift what it means to be online.

Up until now, the concept of online had a lot in common with our understanding of physical travel and acquisition. As Peter Pirolli and Stuart Card discovered, our virtual travels tapped into our evolved strategies for hunting and gathering. The analogy, which holds up in most instances, is that we traveled to a destination. We “went” online, to “go” to a website, where we “got” information. It was, in our minds, much like a virtual shopping trip. Our vehicle just happened to be whatever piece of technology we were using to navigate the virtual landscape of “online.”

As long as we framed our online experiences in this way, we had the comfort of knowing we were somewhat separate from whatever “online” was. Yes, it was morphing faster than we could keep up with, but it was under our control, subject to our intent. We chose when we stepped from our real lives into our virtual ones, and the boundaries between the two were fairly distinct.

There’s a certain peace of mind in this. We don’t mind the idea of online as long as it’s a resource subject to our whims. Ultimately, it’s been our choice whether we “go” online or not, just as it’s our choice to “go” to the grocery store, or the library, or our cousin’s wedding. The sphere of our lives, as defined by our consciousness, and the sphere of “online” only intersected when we decided to open the door.

As I said last week, even the act of “going” online required a number of deliberate steps on our part. We had to choose a connected device, frame our intent and set a navigation path (often through a search engine). Each of these steps reinforced our sense that we were at the wheel in this particular journey. Consider it our security blanket against a technological loss of control.

But, as our technology becomes more intimate, whether it’s Google Glass, wearable devices or implanted chips, being “online” will cease to be about “going” and will become more about “being.”  As our interface with the virtual world becomes less deliberate, the paradigm becomes less about navigating a space that’s under our control and more about being an activated node in a vast network.

Being “online” will mean being “plugged in.” The lines between “online” and “ourselves” will become blurred, perhaps invisible, as technology moves at the speed of unconscious thought. We won’t be rationally choosing destinations, applications or devices. We won’t be keying in commands or queries. We won’t even be clicking on links. All the comforting steps that currently reinforce our sense of movement through a virtual space at our pace and according to our intent will fade away. Just as a light bulb doesn’t “go” to electricity, we won’t “go” online.  We will just be plugged in.

Now, I’m not suggesting a Matrix-like loss of control. I really don’t believe we’ll become feed sacs plugged into the mother of all networks. What I am suggesting is a switch from a rather slow, deliberate interface that operates at the speed of conscious thought to a much faster interface that taps into the speed of our subconscious cognitive processing. The impulses that control the gateway of information, communication and functionality will still come from us, but they will operate below the threshold of our conscious awareness. The Internet will be constantly reading our minds and serving up stuff before we even “know” we want it.

That may seem like neurological semantics, but it’s a vital point to consider. Humans have been struggling for centuries with the idea that we may not be as rational as we think we are. Unless you’re a neuroscientist, psychologist or philosopher, you may not have spent a lot of time pondering the nature of consciousness, but whether we actively think about it or not, it does provide a mental underpinning to our concept of who we are.  We need to believe that we’re in constant control of our circumstances.

The newly emerging definition of what it means to be “online” may force us to explore the nature of our control at a level many of us may not be comfortable with.

Losing My Google Glass Virginity

Originally published October 17, 2013 in Mediapost’s Search Insider

Rob, I took your advice.

A few columns back, when I said Google’s Glass might not be ready for mass adoption, fellow Search Insider Rob Garner gave me this advice: “Don’t knock it until you try it.” So, when a fellow presenter at a conference last week brought along his Glass and offered me a chance to try them (or “it”? Does anyone else find Google’s messing around with plural forms confusing and irritating?), I took him up on it. To say I jumped at it may be overstating the case – let’s just say I enthusiastically ambled to it.

I get Google Glass. I truly do. To be honest, the actual experience of using them came up a little short of my expectations, but not much. It’s impressive technology.

But here’s the problem. I’m a classic early adopter. I always look at what things will be, overlooking the limitations of what currently “is.” I can see the dots of potential extending toward a horizon of unlimited possibility, and don’t sweat the fact that those dots still have to be connected.

On that level, Google Glass is tremendously exciting, for two reasons that I’ll get to in a second. For many technologies, I’ll even connect a few dots myself, willing to trade off pain for gain. That’s what early adopters do. But not everyone is an early adopter. Even given my proclivity for nerdiness, I felt a bit like a jerk standing in a hotel lobby, wearing Glass, staring into space, my hand cupped over the built-in mike, repeating instructions until Glass understood me. I learned there’s a new label for this; for a few minutes I became a “Glasshole.”

Sorry Rob, I still can’t see the mainstream going down this road in the near future.

But there are two massive reasons why I’m still tremendously bullish on wearable technology as a concept. One, it leverages the importance of use case in a way no previous technology has ever done. And two, it has the potential to overcome what I’ll call “rational lag time.”

The importance of use case in technology can be summed up in one word: iPad. There is absolutely no technological reason why tablets, and iPads in particular, should be as popular as they are. There is nothing in an iPad that did not exist in another form before. It’s a big iPhone, without the phone. The magic of an iPad lies in the fact that it’s a brilliant compromise: the functionality of a smartphone in a form factor that makes it just a little bit more user-friendly. And because of that, it introduced a new use case and became the “lounge” device. Unlike a smartphone, where size limits the user experience in some critical ways (primarily in input and output), tablets offer acceptable functionality in a more enjoyable form. And that is why almost 120 million tablets were sold last year, a number projected (by Gartner) to triple by 2016.

The use case of wearable technology still needs to be refined by the market, but the potential to create an addictive user experience is exceptional. Even with Glass’ current quirks, it’s a very cool interface. Use case alone leads me to think the recent estimate that the wearable technology market will hit $19 billion by 2018 is, if anything, a bit on the conservative side.

But it’s the “rational lag time” factor that truly makes wearable technology a game changer. Currently, none of our connected technologies can keep up with our brains. When we decide to do something, our brains register subconscious activity in about 100 milliseconds, or about one tenth of a second. However, it takes another 500 milliseconds (half a second) before our conscious brain catches up and we become aware of our decision to act. In more complex actions, a further lag happens when we rationalize our decision and think through our possible alternatives. Finally, there’s the action lag, where we have to physically do something to act on our intention. At each stage, our brain can shut down impulses that seem to require too much effort. Humans are, neurologically speaking, rather lazy (or energy-efficient, depending on how you look at it).

So we have a sequence of potential lags before we act on our intent: Unconscious Stimulation > Conscious Awareness > Rational Deliberation > Possible Action. Our current interactions with technology live at the end of this chain. Even if we have a smartphone in our pocket, it takes several seconds before we’re actively engaging with it. While that might not seem like much, when the brain measures action in split seconds, that’s an eternity of time.

But technology has the potential to work backward along this chain. Let’s move just one step back, to rational deliberation. If we had an “always on” link where we could engage in less than one second, we could utilize technology to help us deliberate. We still have to go through the messiness of framing a request and interpreting results, but it’s a quantum step forward from where we currently are.

The greatest potential (and the greatest fear) lies one step further back – at conscious awareness. Now we’re moving from wearable technology to implantable technology. Imagine if technology could be activated at the speed of conscious thought, so that the unconscious stimulation is detected and parsed, and by the time our conscious brain kicks into gear, relevant information and potential actions are already gathered and waiting for us. At this point, any artifice of the interface is gone, and technology has eliminated the rational lag. This is the beginning of Kurzweil’s Singularity: the destination on a path that devices like Google Glass are starting down.

As I said, I like to look at the dots. Someone else can worry about how to connect them.

Bounded Rationality in a World of Information

First published October 11, 2013 in Mediapost’s Search Insider.  

Humans are not good data crunchers. In fact, we pretty much suck at it. There are variations to this rule, of course. We all fall somewhere on a bell curve when it comes to our sheer rational processing power. But, in general, we would all fall to the far left of even an underpowered laptop.

Herbert Simon recognized this more than a half-century ago, when he coined the term “bounded rationality.” In a nutshell, we can only process so much information before we become overloaded; at that point, we fall back on much more human approaches, typically known as emotion and gut instinct.

Even when we think we’re being rational, logic-driven beings, our decision frameworks are built on the foundations of emotion and intuition. This is not bad. Intuition tends to be a masterful way to synthesize inputs quickly and efficiently, allowing us generally to make remarkably good decisions with a minimum of deliberation. Emotion acts to amplify this process, inserting caution where required and accelerating when necessary. Add to this the finely honed pattern recognition instincts we humans have, and it turns out the cogs of our evolutionary machinery work pretty well, allowing us to adequately function in very demanding, often overwhelming environments.

We’re pretty efficient; we’re just not that rational. There is a limit to how much information we can “crunch.”

So when information explodes around us, it raises a question: if we’re not very good at processing data, what happens when we’re inundated with the stuff? Yes, Google is doing its part by helpfully “organizing the world’s information,” allowing us to narrow our search down to the most relevant sources, but still, how much time are we willing to devote to wading through mounds of data? It’s as if we were all born to be dancers, and now we’re stuck being insurance actuaries. Unlike Heisenberg (sorry, couldn’t resist the “Breaking Bad” reference) – we don’t like it, we’re not very good at it, and it doesn’t make us feel alive.

To make things worse, we feel guilty if we don’t use the data. Now, thanks to the Web, we know it’s there. It used to be much easier to feign ignorance and trust our guts. There are few excuses now. For every decision we have to make, we know that there is information which, carefully analyzed, should lead us to a rational, logical conclusion. Or, we could just throw a dart and then go grab a beer. Life is too short as it is.

When Simon coined the term “bounded rationality,” he knew that the “bounds” were not just the limits on the information available but also the limits of our own cognitive processing power and the limits on our available time. Even if you removed the boundaries on the information available (as is now happening) those limits to cognition and time would remain.

I suspect we humans are developing the ability to fool ourselves that we are highly rational. For the decisions that count, we do the research, but often we filter that information through a very irrational web of biases, beliefs and emotions. We cherry-pick information that confirms our views, ignore contradictory data and blunder our way to what we believe is an informed decision.

But even if we are stuck with the same brain and the same limitations, I have to admit that the explosion of available information has moved us all a couple of notches to the right on Simon’s “satisficing” curve. We may not crunch all the information available, but we are crunching more than we used to, simply because it’s available. I guess this is a good thing, even if we’re a little delusional about our own logical abilities.
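
Incidentally, Simon’s “satisficing” has a crisp algorithmic reading: instead of scoring every option and taking the maximum, you walk through the options as they arrive and stop at the first one that clears your aspiration level. A minimal sketch (the names and numbers are mine):

```python
# Satisficing vs. maximizing, in miniature. score() stands in for
# whatever effortful evaluation a real decision requires.
def satisfice(options, score, aspiration):
    """Return the first option that clears the aspiration level."""
    for option in options:
        if score(option) >= aspiration:
            return option   # good enough; stop crunching
    return None             # nothing cleared the bar

def maximize(options, score):
    """Score everything and take the best: the 'fully rational' ideal."""
    return max(options, key=score)

restaurants = {"cheap diner": 6, "decent bistro": 7, "hidden gem": 9}
score = restaurants.get
print(satisfice(restaurants, score, aspiration=7))  # "decent bistro"
print(maximize(restaurants, score))                 # "hidden gem"
```

The satisficer pays the evaluation cost only until something clears the bar – which, given bounded time and cognition, is usually the rational trade.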

What a Social Media “Like” Should Really Mean

Originally posted in Mediapost’s Search Insider on October 3, 2013

Italy’s Agriturismo program has been a success by any measure you might want to use. Since the initial legislation was passed in 1985, thousands of small farms across Italy, teetering on the edge of extinction, have been thrown a financial lifeline: operators can supplement their income by welcoming tourists to “stay on the farm.” The program includes one-time renovation grants and an ongoing marketing program. Today, there are almost 3,500 agriturismos throughout Italy. Many of these have sprung up just in the past decade. The program brings the market directly to the farm, allowing onsite sales of products to guests and showcasing the homegrown produce in the agriturismo’s restaurant.

The program’s success, however, has superheated the competition for tourists among the operators. In Tuscany, where I stayed at one such farm, there are 1,000 agriturismos, almost one third of the total number in Italy. You literally can’t throw a Tuscan stone without hitting some type of tourist-targeted operation. The competition is made even fiercer when you consider that almost every restaurant in Italy is also an independent operation. There are no big chains. All these businesses are literally mom and pop (sorry, Momma and Poppa) operations. They run on a shoestring. There is little to no money for advertising. If ever there was a test bed for guerrilla marketing, this is it.

Here, online ratings are the currency of choice. A top spot in an online directory is the difference between life and death for these businesses. In this almost perfect but unflinchingly brutal adaptive environment, if you’re terrible, you die quickly. If you’re mediocre, you die slowly. If you’re good, you stumble along. And if you’re excellent, as only a very few are, you may do OK and even prosper, relatively speaking. I would put Fausto and Susanna in this last category. They run a small agriturismo just outside San Gimignano.

When it comes to the directories that matter, one towers above the rest. TripAdvisor wields the same power in this market that Google wields in our world of search. It is the ultimate arbiter of life and death. And the smartest of the operators have taken this to heart. They “get” social media at a level that is humbling to this particular North American online marketing “expert.” It’s not just about asking for a “like” or a good review. They know that the best way to get a glowing review is to utterly, undeniably, completely deserve it. There’s no faint praise here; you have to blow your customers’ socks off.

It’s this intimate, person-to-person exchange that makes this the most efficient market possible. No money or marketing efforts are wasted on inefficient channels.  There are no middlemen. It all takes place directly between the host and the guest. It’s completely genuine. How many marketing campaigns can you say that about? They give you the experience of a lifetime, and you say a heartfelt thank you. TripAdvisor (and Facebook, and Yelp, etc.) is just there to make sure the world hears about it.

If Fausto and Susanna have understood the power of social media, Marina Pasquino is teaching a master’s class in it. In all my years of staying in hotels and consulting for businesses, I’m not sure I’ve ever seen a better-run business than Signora Pasquino’s small hotel on the Adriatic coast. My jaw dropped during check-in, and didn’t manage to snap back into place until we left seven awestruck days later.

The Hotel Belvedere, a tiny hotel in Riccione with fewer than 50 rooms, has blown TripAdvisor’s review algorithm to smithereens. It doesn’t just top the ratings for hotels in its area – it’s TripAdvisor’s number-one hotel in all of Italy, and one of the top 25 hotels in the world! Of the over 800 reviews it’s collected, 97% are effusive, over-the-top odes to the hotel, its staff and the complete Belvedere experience. The feedback is so overwhelmingly positive that posts sometimes get flagged for manual review to ensure they’re not fraudulent. They’re not, by the way. I mean, how many hotel staff actually hug you when you check in? Seriously.

Business is almost completely generated by word of mouth (both traditionally and digitally). Guests come back every single year. And they bring their friends. During our week, several groups (many from Canada, where I’m from) were at the hotel. And all this is fueled by a warm contact through social media after you leave. With the Belvedere, when you talk about friending and liking, you don’t have to put quotes around the words. In this case, those labels match your intention.

I’ve talked before about how rugged adaptive environments drive the evolution of new breeds of marketers. I can’t think of any environment more rugged than the tourism industry in today’s Italy. And here, the Faustos, the Susannas and the Marinas are showing that if you work your ass off to be amazing, we’ll return the favor by letting people know. I’m not sure what you would call this particular species, but I hope it prospers. We could certainly use more of them in the world.

What is this “Online” You Speak Of?

First published September 12, 2013 in Mediapost’s Search Insider.

I was in an airport yesterday, and I was eavesdropping. That’s what I do in airports. It’s much more entertaining than watching the monitors. In this particular case, I was listening to a conversation between a well-dressed elderly gentleman, probably in his late 80s, and what appeared to be his son. They were waiting for pre-boarding. The son was making that awkward small talk — you know, the conversation you have when you don’t really know your parent well enough anymore to be able to talk about what they’re really interested in, but you still feel the need to fill the silence. In this case, the son was talking to his dad about a magazine: “I used to get a copy every time I flew to London,” he said. “But they don’t publish it anymore. It’s all done online.”

The father, who had the look and appearance of a retired university professor, looked at his son quizzically for a few moments. It was as if the son had suddenly switched from English to Swahili in mid-conversation.

“What’s ‘online’?”

“Online — on the Internet. It’s published electronically. There’s no print version anymore.”

The father grappled with the impact of this statement, then shook his head slowly and sadly. “That’s very sad. I suppose the mail service’s days are numbered too.”

The son replied, “Oh yes, I’m sure. No one mails things anymore.”

“But what will I do? I still buy things from catalogs.” It was as if the entire weight of the last two-and-a-half decades had suddenly settled on the frail gentleman’s shoulders.

At first, I couldn’t believe that anyone still alive didn’t know what “online” was. Isn’t that pretty much equivalent to oxygen or gravity now? Hasn’t it reached the point of ubiquity at which we all just take it for granted, no longer needing to think about it?

But then, because in the big countdown of life, I’m also on the downhill slope, closer to the end than to the beginning, I started thinking about how wrenching technological change has become. If you don’t keep up, the world you know is swept away, to be replaced with a world where your mail carrier’s days are numbered, the catalogs you depend on are within a few years of disappearing, and everything seems to be headed for the mysterious destination known as “online.”

As luck would have it, my seat on the airplane was close enough to this gentleman’s that I was able to continue my eavesdropping (if you see me at an airport, I advise you to move well out of earshot). You might have thought, as I first did, that he was in danger of losing his marbles. I assure you, nothing could be further from the truth. For over four hours, he carried on intelligent, informed conversations on multiple topics, made some amazing sketches in pencil, and generally showed every sign of being the man I hope to be when I’m approaching 90. This was not a man who had lost touch with reality; this was a man who is continually surprised (and, I would assume, somewhat frustrated) to find that reality seems to be a moving target.

We, the innovatively smug, may currently feel secure in our own technophilia, but our ability to keep up with the times may slip a little in the coming years. It’s human to feel secure with the world we grew up and functioned in. Our evolutionary environment was substantially more stable than the one we know today. As we step back from the hectic pace, don’t be surprised if we lose a little ground. Someday, when our children speak to us of the realities of their world, don’t be surprised if some of the terms they use sound a little foreign to our ears.

Beware Confirmation Bias

First published September 5, 2013 in Mediapost’s Search Insider

Most marketing testing is disproportionately biased toward the positive. We test to find winners. But in the process, we often cut losers off without a second glance. And this can be dangerously myopic.

I’ve talked in the past about taking a Bayesian approach to strategy. The more I explore this idea, the better I like it. But it comes with some challenges – the biggest being that we’re not Bayesian by nature. In fact, there’s a cognitive bias roughly the size of a good-sized cow barn that often leaves us blind to the true state of affairs. In psychological circles, it’s called Confirmation Bias, and in a comprehensive academic review in 1998, Raymond Nickerson summed up its potential negative impact: “If one were to attempt to identify a single problematic aspect of human reasoning that deserves attention above all others, the confirmation bias would have to be among the candidates for consideration.”

Here’s the thing. We love to be right. We hate to be wrong. So we will go to extraordinary lengths to make sure that we’re proven correct. And we won’t even know we’re doing it. Our brain, working surreptitiously in the background, doesn’t alert us to how biased we actually are. The many tricks that go along with Confirmation Bias usually play out subconsciously.

If we try to be good little Bayesians, we have to embrace alternative ideas of all shapes and sizes, whether or not they agree with our current view of things. In fact, we should be prepared to rip our current view apart, as it’s in the disproving and rebuilding of hypotheses that the truth is eventually found.

Here’s where things go wrong in most market testing. We usually test to prove our hunches right. We go in with a favored option and try to build a case for it. We may deny it, but we all do it. That means the less-favored alternatives usually get short shrift. And it’s often in one of these alternatives that the optimal choice may be found. The more there is at stake in the test, the more susceptible we are to Confirmation Bias.

Here is the rogue’s gallery of typical Confirmation Bias tricks:

Favored Hypothesis Information Seeking and Interpretation – As I said, we tend to seek information that supports our favored hypothesis, and avoid information that would contradict it. In the Bayesian view, this is equivalent to ignoring likelihood ratios (there’s a numeric sketch of this after the list).

Preferential Treatment of Evidence Supporting Existing Beliefs – Even if we somehow collect unbiased information, we will tend to focus on the information that supports our favored view. It gets “over-weighted” in analysis.

Looking for Positive Cases – This is the classic trap of testing only for winners and ignoring the losers. Often, the losers can tell us more about the true state of affairs.

The Primacy Effect – We tend to pay more attention to the first information we look at, which can bias analysis of any subsequent information.

Belief Persistence – Even when the evidence mounts that our original hunch is wrong, we can be incredibly inventive in twisting evidentiary frameworks to provide continuing support. Along with this comes another bias, the “Sunk Cost Fallacy”: the more we have invested in our original hunch (e.g., a major multimillion-dollar campaign launched on the strength of it), the more tenacious we are in holding on to it.
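
Since I keep invoking Bayes, here’s the numeric sketch promised above. All the numbers are invented for illustration: a proper Bayesian multiplies prior odds by the likelihood ratio of every piece of evidence, while the confirmation-biased tester quietly drops the disconfirming one.

```python
# A minimal sketch (hypothetical numbers): Bayesian updating with
# likelihood ratios, versus a "confirmation-biased" update that
# only counts the supporting evidence.

def update_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds by each piece of evidence's likelihood ratio:
    P(evidence | hypothesis) / P(evidence | not hypothesis)."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def to_probability(odds):
    return odds / (1 + odds)

prior = 1.0             # even odds that our favored ad variant is better
supporting_lr = 3.0     # evidence that fits our hunch
contradicting_lr = 0.4  # evidence that cuts against it

honest = update_odds(prior, [supporting_lr, contradicting_lr])
biased = update_odds(prior, [supporting_lr])  # disconfirming data ignored

print(f"Honest posterior: {to_probability(honest):.0%}")  # ~55%
print(f"Biased posterior: {to_probability(biased):.0%}")  # 75%
```

Same data, very different confidence – and the biased number always moves in the direction we were hoping for.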

Going back a few columns to Philip Tetlock’s Hedgehogs and Foxes: Tetlock found that Foxes make much better natural Bayesians. They are more open to updating their beliefs. The big takeaway here? Keep an open mind.

Google Glass and the Sixth Dimension of Diffusion

First published August 29, 2013 in Mediapost’s Search Insider

Tech stock analyst and blogger Henry Blodget has declared Google Glass dead on arrival. I’m not going to spend any time talking about whether I agree with Mr. Blodget (for the record, I do in one sense – Google Glass isn’t an adoptable product as it sits – and don’t in another – wearable technology is the next great paradigm shifter), but rather dig into the reason he feels Google Glass is stillborn.

They make you look stupid.

The input for Google Glass is your voice, which means you have to walk around saying things like, “Glass, take a video” or “Glass, what is the temperature?” The fact is, to use Google Glass, you have to accept that you’ll look like either a moron or the biggest jerk in the world. Either way, the vast majority of us aren’t ready to step into that particular spotlight.

Last week, I talked about Everett Rogers’ diffusion of innovations theory and shared five variables that determine the rate of adoption. But Rogers also mentioned an additional factor: “the status-conferring aspects of innovations emerged as the sixth dimension predicting rate of adoption.”

If you look at Rogers’ diffusion curve, you’ll find the segmentation of the adoption population is as follows: Innovators (2.5% of the population), Early Adopters (13.5%), Early Majority (34%), Late Majority (34%) and Laggards (16%). But there’s another breed that probably hides out somewhere between Innovators and Early Adopters. I call them the PAs (for Pompous Asses). They love gadgets, they love spending way too much for gadgets, and they love being seen in public sporting gadgets that scream “PA.” Previously, they were the ones seen guffawing loudly into Bluetooth headsets while sitting next to you on an airplane, carrying on their conversation long after the flight attendant told them to wrap it up. Today, they’d be the ones wearing Google Glass.
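
Those segment sizes aren’t arbitrary, by the way: Rogers got them by slicing a normal distribution of adoption times at one and two standard deviations from the mean, then rounding. A quick sketch:

```python
# Rogers' adopter segments are slices of a standard normal curve,
# cut at 1 and 2 standard deviations from the mean adoption time.
from math import erf, sqrt

def cdf(x):  # standard normal CDF
    return 0.5 * (1 + erf(x / sqrt(2)))

segments = {
    "Innovators":     cdf(-2),            # ~2.3%, rounded up to 2.5%
    "Early Adopters": cdf(-1) - cdf(-2),  # ~13.6%
    "Early Majority": cdf(0) - cdf(-1),   # ~34.1%
    "Late Majority":  cdf(1) - cdf(0),    # ~34.1%
    "Laggards":       1 - cdf(1),         # ~15.9%, rounded to 16%
}
for name, share in segments.items():
    print(f"{name:<14} {share:6.1%}")
```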

This sixth dimension is critical to consider when the balance between the other five is still a little out of whack. Essentially, the first dimension, Relative Advantage, has to overcome the friction of #2, Compatibility, and #3, Complexity (#4, Trialability, and #5, Observability, are more factors of the actual mechanics of diffusion, rather than individual decision criteria). If the advantage of an innovation does not outweigh its complexity or compatibility, it will probably die somewhere on the far left slopes of Rogers’ bell curve. The deciding factor will be the Sixth Dimension.

This is the territory that Google Glass currently finds itself in. While I have no doubt that the advantages of wearable technology (as determined by the user) will eventually far outweigh the corresponding “friction” of adoption, we’re not there yet. And so Google Glass depends on the Sixth Dimension. Does adoption make you look innovative, securely balanced on the leading edge? Or does it make you look like a dork? Does it confer social status or strip it away? After the initial buzz about Glass, social opinion seems to be falling into the second camp.

This brings us to another important factor to consider when trying to cash in on a social adoption wave: timing. Google is falling into the classic Microsoft trap of playing its hand too soon through beta release. New is cool among the early adopter set, which makes timing critical. If you can get strategic distribution and build up required critical mass fast enough, you can lessen the “pariah” factor. It’s one thing to be among a select clique of technological PAs, but you don’t want to be the only idiot in the room. Right now, with only 8,000 pairs distributed, if you’re wearing a pair, you’re probably the one that everyone else is whispering about.

Of course, you might not be able to hear them over the sound of your own voice, as you stand in front of the mirror and ask Google Glass to “take a picture.”