The Inevitable Wearable Technology Backlash

First published January 16, 2014 in Mediapost’s Search Insider

Okay, I’ve gone on record – I think wearable technology is a huge disruptive wave currently bearing down on us. Accept it.

And I’ve also said that stupid wearable technology is inevitable. Accept that as well.

It appears that this dam is beginning to burst.

Catharine Taylor had a humorous and totally on-point reaction to the “tech-togs” unveiled at CES. Her take: “Thanks, but no thanks.”

Maarten Albarda had a similar reaction to his first go-around with Google Glass: “Huh?”

Look – don’t get me wrong. Wearable technology, together with the “web of everything,” will eventually change our lives, but most of us won’t be going willingly. We’re going to have to get through the “bubble of silliness” first. Some of this stuff will make sense and elicit a well-earned “Cool” (or “Dope” or “Sick” or whatever generational thumbs-up is appropriate). Other things will garner an equally well-earned WTF? And some will be eminently sensible but will still end up being tossed out with the bathwater anyway.

Rob Garner always says “adoption follows function.” This is true, but each of us has a different threshold for what we deem functional. If technology starts moving that bar, we know, thanks to the work of Everett Rogers and others, that the audience’s acceptance will follow the inevitable bell curve. Functionality is not equal in the eyes of all beholders.

The other problem with these new interfaces is that function is currently scattered around like a handful of grass clippings in the wind. Sure, there are shards of usefulness, but unless you’re willing to wear more layers of wearable tech than your average early-adopting Eskimo (or, as we say here in the politically correct north, Inuit), it’s difficult to see how this can significantly improve our day-to-day lives.

The other thing we have to grapple with is what I would call the WACF – the Weird and Creepy Factor. How exactly do we feel about having the frequency of our butt imprinting our sofa, our bank balance, our blood pressure and our body fat percentage beamed up to the data center of a start-up we’d never heard of before last Friday? I’m an admitted early adopter, and I have to confess – I’m not ready to make that leap right now.

It’s not just the privacy of my personal data that’s holding me back, although that is certainly a concern. Part of this goes back to something I talked about a few columns back – the redefinition of what it means to “be” online rather than “go” online. With wearable technology, we’re always “on” – plugged into the network and sharing data whether we’re aware of it or not. This presents us with a philosophical loss of control. Chances are we haven’t given this a lot of rational consideration, but it contributes to that niggling WACF that may be keeping us from donning the latest piece of wearable tech.

Eventually, the accumulated functionality of all this new technology will overcome all these barriers to adoption, but we will all have differing thresholds marking our surrender to the inevitable.  Garner’s assertion that adoption follows function is true, but it’s true of the functional wave as a whole and in that wave there will be winners and losers. Not all functional improvements get adopted. If all adoption followed all functional improvements, I’d be using a Dvorak keyboard right now. Betamax would have become the standard for videocassettes. And we’d be conversing in Esperanto. All functional improvements – all casualties to an audience not quite ready to embrace them.

Expect more to come.

The Psychology of Usefulness: How We Made Google a Habit

In the last two posts, I looked first at the difference between autotelic and exotelic activities, then at how our brain judges the promise of usefulness. In today’s post, I want to return to the original question: how does this impact user loyalty? As more and more of the apps and destinations we use rely on advertising for their revenue, the question becomes increasingly critical.

The obvious example here is search engines, the original functional destination. Google is the king of search, but it is also the company most reliant on advertising. For Google, user loyalty is the difference between life and death. In 2012, Google made a shade over 50 billion dollars (give or take a few hundred million). Of this, over $43 billion came from advertising revenue (about 86%), and of that revenue, 62% came from Google’s own search destinations. That’s a big chunk of revenue to come from one place, so user loyalty is something Google pays very close attention to.
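For those who like to see the arithmetic, here’s that split worked through – a quick sketch in Python using the rounded figures cited above, not Google’s exact reported numbers:

```python
# Back-of-the-envelope check of the revenue split described above.
total_revenue = 50e9   # "a shade over 50 billion dollars" in 2012
ad_revenue = 43e9      # "over $43 billion came from advertising revenue"

ad_share = ad_revenue / total_revenue   # -> 0.86, the "about 86%"
own_sites = 0.62 * ad_revenue           # 62% of ad revenue from Google's own destinations

print(f"ads as a share of revenue: {ad_share:.0%}")               # 86%
print(f"from Google's own search sites: ${own_sites / 1e9:.1f}B")  # ~$26.7B
```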

Now, let’s look at how durable Google’s hold on our brains really is. Let’s revisit the evaluation cascade that happens in our brain each time we contemplate a task:

  • If the task is very familiar and the environment highly stable, we do it by habit
  • If it’s fairly familiar but less stable, we follow a memorized procedure with some conscious guidance
  • If it’s new and unfamiliar, we forage for alternatives, balancing the effort required against the expected reward
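If it helps to see that cascade as logic, here’s a minimal sketch in Python. The thresholds and names are purely illustrative assumptions on my part, not a model from the literature:

```python
# Hypothetical sketch of the evaluation cascade described above.
def choose_strategy(familiarity: float, stability: float) -> str:
    """Pick an execution strategy for a task, mirroring the cascade."""
    if familiarity > 0.8 and stability > 0.8:
        return "habit"              # runs on autopilot, little rational thought
    if familiarity > 0.4:
        return "stored procedure"   # memorized steps plus conscious checkpoints
    return "forage"                 # weigh effort against expected reward

print(choose_strategy(0.9, 0.9))  # -> habit (e.g., a routine Google search)
print(choose_strategy(0.5, 0.3))  # -> stored procedure
print(choose_strategy(0.1, 0.2))  # -> forage
```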

Not surprisingly, the more our brain has to be involved in judging usefulness, the less loyal we are. If you can become a habit, you are rewarded with a fairly high degree of loyalty. Luckily for Google, it falls into this category – for now. Let’s look a little more at how Google became a habit and what might have to happen for us to break it.

Habits depend on three things: high repetition, a stable execution environment and consistently acceptable outcomes. Google was fortunate enough to have all three factors present.

First – repetition. How many times a day do you use a search engine? For me, it’s probably somewhere between 10 and 20 times per day. And usage of search is increasing: we search more now than we did five years ago. If you do something that often throughout the day, it wouldn’t make much sense to force your brain to actively think its way through that task each and every time – especially if the steps required to complete it don’t really change that much. So the brain, which is always looking for ways to save energy, records a “habit script” (or, to use Ann Graybiel’s terminology, “chunks”) that can play out without a lot of guidance. Searching definitely meets the repetition requirement – the first step of forming a habit.

Second – stability. How many search engines do you use? If you’re like the majority of North Americans, you probably use Google for almost all your searches. This is what we’d call a stable environment: you know where to go, you know how to use it and you know how to use the output. There is a reason Google is very cautious about changing its layout and only does so after a lot of testing. What you expect and what you get shouldn’t be too far apart. If they are, that’s disruptive, and disruption breaks habits – the last thing Google wants.

Third – acceptable outcomes. So, if stability preserves habits, why would Google change anything? Why doesn’t Google’s search experience look exactly like it did in 1998 (fun fact – if you search Google for “Google in 1998,” it will show you exactly what the results page looked like)? That would truly be stable, which should keep those all-important habits glued in place. Well, because expectations change. Here’s the thing about expected utility, which I talked about in the last post: it doesn’t go away when we form a habit; it just moves downstream in the process. When we do a task for the first time, or in an unstable environment, expected utility precedes our choice of alternatives. When a “habit script” or “chunk” plays out, we still need to do a quick assessment of whether we got what we expected. Habits only stay in place if the “habit script” passes this test. If we searched for “Las Vegas hotels” and Google returned results for Russian borscht, that habit wouldn’t last very long. So Google constantly has to maintain a delicate balance – meeting expectations without disrupting the user’s experience too much. And expectations are constantly changing.
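That downstream check is easy to picture in code. Here’s an illustrative toy in Python – the function names are my own invention, not terms from the habit research:

```python
# A habit "script" plays out with little guidance, but a quick post-hoc
# check still compares the outcome against expectations. A failed check
# is what breaks the habit and sends us back to foraging.
def run_habit(habit_script, expectation_check):
    outcome = habit_script()           # the chunk executes on autopilot
    if expectation_check(outcome):
        return outcome                 # expectations met: habit reinforced
    raise RuntimeError("expectation violated - habit broken, back to foraging")

# The post's example: search "Las Vegas hotels" and expect hotel results.
results = run_habit(
    habit_script=lambda: "10 relevant results for Las Vegas hotels",
    expectation_check=lambda r: "Las Vegas" in r,
)
print(results)
```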

[Chart: Internet adoption over time]

When Google was introduced in 1998, it created a perfect storm of habit-building potential. Its introduction coincided with a dramatic uptick in adoption of the Internet, and of web search in particular. In 1998, 36% of American adults were using the Internet (according to Pew). By 2000, that had climbed to 46%, and by 2001 it was up to 59%. More of us were going online, and if we were going online, we were also searching. Average searches per day on Google exploded from under 10,000 in 1998 to 60 million in 2000 and 1.2 billion in 2007. Obviously, we were searching – a lot – so the repetition prerequisite was well in hand.

Now – stability. In the early days of the Internet, there was little stability in our search patterns. We tended to bounce back and forth between a number of different search engines. In fact, the search engines themselves encouraged this by providing “Try your search on…” links to their competitors (an example from Google’s original page is shown below). Because our search tasks were spread across a number of different engines, there was no environmental stability, and so no chance for a true habit to form. The best our brains could do at this point was store a procedure that required a fair amount of conscious oversight (choosing engines and evaluating outcomes). Stability was further eroded by the fact that some engines were better at some types of searches than others. Some, like Infoseek, were better for timely searches due to their fast indexing cycles and large indexes. Some, like Yahoo, were better at canonical searches that benefited from a hierarchical directory approach. When searching in the pre-Google days, we tended to match our choice of engine to the search we were doing. This required a fairly significant degree of rational neural processing on our part, precluding the formation of a habit.

[Image: the bottom of Google’s 1998 results page, with “Try your search on…” links to competing engines]

But Google’s use of PageRank changed the search ballgame dramatically. Its new way of determining relevancy rankings was consistently better, for all types of searches, than any of its competitors’. As we started using Google for more types of searches because of its superior results, we stopped using the competition. This finally created the stability required for habit formation.
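For the curious, the core idea of PageRank fits in a few lines. This is the standard textbook power-iteration form – a sketch on a toy graph, not anything resembling Google’s production system:

```python
# PageRank in miniature: a page's score is the chance a "random surfer"
# lands on it, which rewards pages linked to by other strong pages.
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}  # toy web graph
damping, n = 0.85, len(links)
rank = {page: 1 / n for page in links}

for _ in range(50):  # power iteration converges quickly on a graph this small
    new = {page: (1 - damping) / n for page in links}
    for page, outlinks in links.items():
        for target in outlinks:
            new[target] += damping * rank[page] / len(outlinks)
    rank = new

print(rank)  # "c" collects the most rank: both "a" and "b" link to it
```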

Finally, acceptable outcomes. As mentioned above, Google came out of the gate with outcomes that generally exceeded our expectations, which had been set by the spotty results of its competitors. Now, all Google had to do to keep the newly formed habit in place was continue to meet the user’s expectations of relevancy. Thanks to the truly disruptive leap Google took with the introduction of PageRank, it had a huge advantage in search results quality, and it has done an admirable job of maintaining that quality over the past 15 years. While the gap has narrowed significantly (today, one could argue that Bing comes close on many searches and may even have a slight advantage on certain types), Google has never seriously undershot the user’s expectations when it comes to providing relevant search results. It has never given us a reason to break our habit. The result is a market share that has hovered over 60% for several years now.

When it comes to online loyalty, it’s hard to beat Google’s death grip on search traffic. But that grip may start to loosen in the near future. In my next post, I’ll look at the conditions that can break habitual loyalty, again using Google as an example. I’ll also look at how our brains decide to accept or reject new useful technologies.

The Psychology of Usefulness: How Our Brains Judge What is Useful

Did you know that “task” and “tax” have the same linguistic roots? They both come from the Latin “taxare” – meaning to appraise. This could explain the lack of enthusiasm we have for both.

Tasks are what I referred to in the last post as an exotelic activity – something we have to do to reach an objective that carries no inherent reward. We do them because we have to do them, not because we want to do them.

When we undertake a task, we want to find the most efficient way to get it done. Usefulness becomes a key criterion. And when we judge usefulness, there are some time-tested procedures the brain uses.

Stored Procedures and Habits

The first question our brain asks when undertaking a task is – have we done this before? Let’s first deal with what happens if the answer is yes:

If we’ve done something before, our brain – very quickly and at a subconscious level – asks a number of qualifying questions:

  • How often have we done this?
  • Does the context in which the task plays out remain fairly consistent (i.e., are we dealing with a stable environment)?
  • How successful have we been in carrying out this task in the past?

If we’ve done a task a number of times in a stable environment with successful outcomes, it’s probably become a habit. The habit chunk is retrieved from the basal ganglia and plays out without much in the way of rational mediation. Our brain handles the task on autopilot.

If we’ve done the task before but have less familiarity with it, or there’s less stability in the environment, we probably have stored procedures – set procedural alternatives. These require more in the way of conscious guidance and often have decision points where we determine what to do next based on the results of the previous action.

If we’re entering new territory and can’t draw on past experience, our brains have to get ready to go to work. This is the route least preferred by our brain. It only goes here when there’s no alternative.

Judging Expected Utility and Perceived Risk

If a task requires us to go into unfamiliar territory, there are new routines that the brain must perform. Basically, the brain must place a mental bet on the best path to take, balancing a prediction of a satisfactory outcome against the resources required to complete the task. Psychologists call this “Expected Utility.”

Expected Utility is the brain’s attempt to forecast scenarios that require the balancing of risks and rewards where the outcomes are not known.  The amount of processing invested by the brain is usually tied to the size of the potential risk and reward. Low risk/reward scenarios require less rationalization. The brain drives this balance by using either positive or negative emotional valences, interpreted by us as either anticipation or anxiety. Our emotional balance correlates with the degree of risk or reward.
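For reference, here’s the textbook formulation of that idea – my addition, since the post itself doesn’t spell out the math: each outcome’s utility is weighted by its probability, and the action chosen is the one whose expected utility best repays the effort involved.

```latex
% Standard expected-utility form; Effort(a) stands in for the time and
% energy described above as the resources we put at risk.
EU(a) = \sum_{o} P(o \mid a)\, U(o)
\qquad
a^{*} = \arg\max_{a}\,\bigl[\, EU(a) - \mathrm{Effort}(a) \,\bigr]
```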

Expected utility is more commonly applied in financial decision-making and game theory. In the case of conducting a task, there is usually no monetary element to risk and reward; what we’re risking is our own resources – time and effort. Because managing these resources is a long-evolved capability, it’s reasonable to assume we have developed subconscious routines to determine how much effort to expend in return for a possible gain. These cognitive evaluations and calculations may happen at a largely subconscious level – or at least at a more subconscious level than the processing involved in evaluating financial gambles or other high-stakes decisions. In that context, it makes sense to look at how we approach another required task – finding food.

Optimal Foraging and Marginal Value

Where we balance gain against expenditure of time and effort, the brain has some highly evolved routines that have developed over our evolutionary history. The oldest of these would be how we forage for food. But we also have a knack for borrowing strategies developed for other purposes and using them in new situations.

Pirolli and Card (1999) found, for instance, that we use our food-foraging strategies to navigate digital information. Like food, information online tends to be “patchy” and of varying value to us. Often, just as when looking for a food source, we have to forage for information by judging the quality of the hyperlinks that may take us to those information sources or “patches.” Pirolli and Card called these clues to the quality of the information that may lie on the other end of a link “information scent.”
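As a toy illustration of information scent – my own simplification; Pirolli and Card’s actual model uses spreading activation, not bare word overlap – you could score each link’s cue words against the searcher’s goal:

```python
# Crude stand-in for "information scent": how well does a link's text
# overlap with the words in the searcher's goal?
def scent(goal: str, link_text: str) -> float:
    goal_words = set(goal.lower().split())
    link_words = set(link_text.lower().split())
    return len(goal_words & link_words) / len(goal_words)

goal = "las vegas hotels"
links = ["Cheap hotels in Las Vegas", "History of Russian borscht"]
best = max(links, key=lambda text: scent(goal, text))
print(best)  # -> "Cheap hotels in Las Vegas" (scent 1.0 vs. 0.0)
```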

Tied to this foraging strategy is the concept of Marginal Value, first proposed by Eric Charnov in 1976 as an evolved strategy for determining how much time to spend in a food patch before deciding to move on. In a situation with diminishing returns (i.e., a depleting food supply), the brain must balance effort expended against return. If you happen on a berry bush in the wild, with a reasonable certainty that there are other bushes nearby (perhaps you can see them just a few steps away), you have to mentally solve the following equation: how many berries can be gathered here with a reasonable expenditure of effort, versus how much effort it would take to walk to the next bush and how many berries would be available there?
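Charnov’s result has a compact standard form – again my addition, not spelled out in the post: leave the patch the moment its marginal yield drops to the average rate you could earn across the whole environment, travel time included.

```latex
% Charnov's marginal value theorem: with cumulative gain g(t) after time t
% in a patch and average travel time \tau between patches, the optimal
% leaving time t* is where the patch's marginal rate of return falls to
% the average rate for the environment as a whole:
g'(t^{*}) = \frac{g(t^{*})}{t^{*} + \tau}
```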

This is somewhat analogous to information foraging, with one key difference: information isn’t depleted as you consume it, so the rule of diminishing returns is less relevant. But if, as I suspect, we’ve borrowed these subconscious strategies for judging usefulness – both in terms of information and functionality – in online environments, our brains may not know or care about the subtle differences between environments.

The reason we may not be that rational in applying these strategies online is that they play out below the threshold of consciousness. We are not constantly and consciously adjusting our marginal value algorithm or quantifiably assessing the value of an information patch. No, our brains use a quicker, more heuristic method to mediate our output of effort: emotions. Frustration and anxiety tell us it’s time to move on to the next site or application. Feelings of reward and satisfaction indicate we should stay right where we are. The remarkable thing is that, as quick and dirty as these emotional guidelines are, if you went to the trouble of rationally quantifying the potential of all possible alternatives – using a Bayesian approach, for instance – you’d probably end up in pretty much the same place. These strategies, simmering below the surface of our consciousness, are pretty damn accurate!

So, to sum up this post, when judging the most useful way to get a task done, we have an evaluation cascade that happens very quickly in our brain:

  • If a very familiar task needs to be done in a stable environment, our habits will take over and it will be executed with little or no rational thought.
  • If the task is fairly familiar but requires some conscious guidance, we’ll retrieve a stored procedure and look for successful feedback as we work through it.
  • If a task is relatively new to us, we’ll forage through alternatives for the best way to do it, using evolved biological strategies to help balance risk (in terms of expended effort) against reward.

Now, to return to our original question: how does this evaluation cascade impact long- and short-term user loyalty? I’ll return to this question in my next post.

Google Holds the Right Cards for a Horizontal Market

First published January 9, 2014 in Mediapost’s Search Insider

Functionality builds up, then across. That was the principle of emerging markets I talked about in last week’s column. Up, then across – breaking down silos into a more open, competitive and transparent market. I’ll come back to this in a moment.

I also talked about how Google+ might be defining a new way of thinking about social networking, one free of dependence on destinations. It could create a social lens through which all our online activity passes, adding functionality and enriching information.

Finally, this week I read that Google is pushing hard to extend Android as the default operating system of the Open Automotive Alliance – turning cars into really big mobile devices. This builds on Android’s dominance in the smartphone market (an 82% share).

See a theme here?

For years, I’ve been talking about the day when search transitions from a destination to a utility, powering apps that provide very specific functionality far outstripping anything you could do on a one-size-fits-all search portal. This was a good news/bad news scenario for Google, which was the obvious choice to provide this search grid – but in doing so, it would lose its sole right to monetize search traffic, a serious challenge to its primary income source. However, if you piggyback that search functionality onto the de facto operating system that powers all those apps, and then add a highly functional social graph, you have all the makings of a foundation that will support the “horizontalization” of the mobile connected market. Put this in place, and revenue opportunities will begin falling into your lap.

The writing is plainly on the wall here. The future is all about mobile connections. They are the foundation of the Web of Things, wearable technology, mobile commerce – anything and everything we see coming down the pipe. The stakes are massive. And as markets turn horizontal in the inevitable maturation phase to come, Google seems well on its way to creating the required foundations for that market.

Let’s spend a little time looking at how powerful this position might be for Google. Microsoft is still coasting on its success in creating a foundation for the desktop, 30 years later. The fact that it still exists at all is testament to the power of Windows. But the desktop expansion relied on just one device – the PC – and the PC’s adoption curve took two decades to materialize, thanks to two hurdles: the prerequisite of a fairly hefty investment in hardware and a relatively steep learning curve. The mobile adoption curve, already the fastest in history, has no such hurdles to clear. Relative entry price points are a fraction of what PCs required, and the learning curve is minimal. Mobile connectivity will leave the adoption curve of PCs in the dust.

In addition, an explosion of connected devices will propel the spread of mobile connectivity. This is not just about smartphones. Two of the biggest disruptive waves of the next 10 years will be wearable technologies and the Web of Things. Both will rely on the same foundations: an open, standardized operating system and the ability to access and share data. At the user-interface level, powerful search technologies and social-graph-enabled filters will significantly improve the functionality of these devices as they interface with the cloud.

In the hand that will inevitably have to be played, it seems that Google is currently holding all the right cards.

Revisiting Entertainment vs Usefulness

Some time ago, I did an extensive series of posts on the psychology of entertainment. My original goal, however, was to compare entertainment and usefulness in terms of how effective each is at engendering long-term loyalty. How do our brains process both? And, to return to my original intent in that first post almost four years ago, how does this impact digital trends and their staying power?

My goal is to find out why some types of entertainment have more staying power than others. And once we discover the psychological underpinnings of entertainment, let’s look at how they apply to some of the digital trends I disparaged: things like social networks, micro-blogging, mobile apps and online video. What role does entertainment play in online loyalty? How does it overlap with usefulness? How can digital entertainment fads survive the novelty curse and jump the chasm to become mainstream trends with legs?

In the previous set of posts, I explored the psychology of entertainment extensively, ending with a discussion of its evolutionary purpose. My conclusion was that entertainment lives more in the phenotype than the genotype. To save you going back to that post, I’ll quickly summarize here: the genotype refers to traits actually encoded in our genes through evolution – the hardwired blueprint of our DNA. The phenotype is the “shadow” of these genes – behaviors caused by our genetic blueprints. Genotypes are directly honed by evolution for adaptability and gene survival. Phenotypes are by-products of this process and may confer no evolutionary advantage. Our taste for high-fat foods lives in the genotype; the explosion of obesity in our society lives in the phenotype.

This brings us to the difference between entertainment and usefulness: usefulness relies on mechanisms that live predominantly in the genotype. In the most general terms, it’s the stuff we have to do to get through the day. And to understand how we approach the things on our to-do list, it’s important to understand the difference between autotelic and exotelic activities.

Autotelic activities are the things we do for the sheer pleasure of it; the activity is its own reward. “Autotelic” comes from Greek roots meaning “self” and “goal” – “having a purpose in and not apart from itself.” We look forward to doing autotelic things. All things we find entertaining are autotelic by nature.

Exotelic activities are simply a necessary means to an end. They have no value in and of themselves. They’re simply tasks – stuff on our to-do list.

The brain treats these two types of activities very differently. Autotelic activities fire our reward center, the nucleus accumbens, and come with a corresponding hit of dopamine, building repetitive patterns. We look forward to them because we anticipate the reward. They typically also engage the medial prefrontal cortex, which orchestrates complex cognitive behaviors and helps define our sense of self. When we engage in an autotelic activity, there’s a lot happening in our skulls.

Exotelic activities tend to flip the brain into energy-saving mode. Because there is little or no neurological reward in these activities (other than a sense of relief once they’re done), they tend to rely on the brain’s ability to store and retrieve procedures. With enough repetition, they often become habits, skipping the brain’s rational loop altogether.

In the next post, we’ll look at how the brain processes exotelic activities, as this provides some clues about the loyalty-building abilities of useful sites and tools. We’ll also look at what happens when something is both exotelic and autotelic.

The Death and Rebirth of Google+

Google Executive Chairman Eric Schmidt has come out with his predictions for 2014 for Bloomberg TV. Don’t expect any earth-shaking revelations here. Schmidt plays it pretty safe with his prognostications:

Mobile has won – Schmidt says everyone will have a smartphone. “The trend has been mobile was winning… it’s now won.” Less a prediction than a statement of the obvious.

Big Data and Machine Intelligence will be the Biggest Disruptor – Again, hardly a leap of intuitive insight. Schmidt foresees the evolution of an entirely new data marketplace and corresponding value chain. Agreed.

Gene Sequencing Has Promise in Cancer Treatments – While a little fuzzier than his other predictions, Schmidt again pounces on the obvious. If you’re looking for someone willing to bet the house on gene sequencing, try LA billionaire Patrick Soon-Shiong.


The one thing that was interesting to me was an admission of failure with Google+:

The biggest mistake that I made was not anticipating the rise of the social networking phenomenon.  Not a mistake we’re going to make again. I guess in our defense we were busy working on many other things, but we should have been in that area and I take responsibility for that.

I always called Google+ a non-starter, despite a deceptively encouraging start. But I think it’s important to point out that we tend to judge Google+ against Facebook or other social destinations. As Google+ Vice President of Product Bradley Horowitz made clear in an interview last year with Dailytech.com, Google never saw this as a “Facebook killer.”

“I think in the early going there was a lot of looking for an alternative [to Facebook, Twitter, etc.],” said Horowitz. “But I think increasingly the people who are using Google+ are the people using Google. They’re not looking for an alternative to anything, they’re looking for a better experience on Google.”

And this highlights a fundamental change in how we think about online social activity – one that I think is more indicative of what the future holds. Social is not a destination; social is a paradigm. It’s a layer of connectedness and shared values that acts as a filter, a lens – a way we view reality. That’s what social is in our physical world: it shapes how we view that world. And Horowitz is telling us that that’s how Google looks at social too. By layering social signals into our online experience, Google+ gives us an enhanced version of that experience. It’s not about a single destination, no matter how big that destination might be. It’s about adding richness to everything we do online.

Because humans are social animals, our connections and our perception of ourselves as part of an extended network literally shape every decision we make and everything we do, whether we’re conscious of it or not. We are, by design, part of a greater whole. But because social originated online as distinct destinations, it was unable to touch our entire online experience. Facebook or Pinterest acts as a social gathering place – a kind of virtual town square – but social is more than that. Google+ is closer to this more holistic definition of “social.”

I’m not sure Google+ will succeed in becoming our virtual social lens, but I do agree that as our virtual sense of social evolves, it will become less about distinct destinations and more about a dynamic paradigm that stays with us constantly, helping to shape, sharpen, enhance and define what we do online. As such, it becomes part of the new way of thinking about being online – not going to a destination, but being plugged into a network.

360 Degrees of Separation

First published December 5, 2013 in Mediapost’s Search Insider

In the past two decades or so, a lot of marketers have talked about gaining a 360-degree view of their customers. I wasn’t exactly sure what this means, so I looked it up. Apparently, for most marketers, it means having a comprehensive record of every touch point a customer has had with a company. Originally, it was the promise of CRM vendors: anyone in an organization, at any time, could pull up a complete customer history.

So far, so good.

But like many phrases, it’s been appropriated by marketers and its meaning has become blurred. Today, it’s bandied about in marketing meetings, where everyone nods knowingly, confident in the fact that they are firmly ensconced in the customer’s cranium and have all things completely under control. “We have a 360-degree view of our customers,” the marketing manager beams, and woe to anyone who dares question it.

But there are no standard criteria you have to meet before using the term. There is no rubber-meets-the-road threshold you have to clear. No one knows exactly what the hell it means. It sure sounds good, though!

If a company is truly striving to build as complete a picture of their customers as possible, they probably define 360 degrees as the total scope of a customer’s interaction with their company. This would follow the original CRM definition. In marketing terms, it would mean every marketing touch point and would hopefully extend through the customer’s entire relationship with that company. This would be 360-degrees as defined by Big Data.

But is it actually 360 degrees? If we envision this as a Venn diagram, we have one 360-degree sphere representing the mental model of customers, including all the things they care about. We have another 360-degree sphere representing the footprint of the company and all the things they do. What we’re actually looking at then, even in an ideal world, is where those two spheres intersect. At best, we’re looking at a relatively small chunk of each sphere.

So let’s flip this idea on its head. What if we redefine 360 degrees as understanding the customer’s decision space? I call this the Buyersphere. The traditional 360-degree view is from the inside looking out – the company’s perspective. The Buyersphere moves the perspective to that of the customer, looking from the outside in. It expands the scope to include the events that lead to consideration, the competitive comparisons, the balancing of buying factors, interactions with all potential candidates and the branches of the buying path itself. What if you decide to become the best at mapping that mental space? I still wouldn’t call it a 360-degree view, but it would be a view very few of your competitors would have.

One of the things I believe is holding Big Data back is that we don’t have a frame within which to use it. Peter Norvig, chief researcher for Google, outlined 17 warning signs in experimental design and interpretation. Two of them: the lack of a specific hypothesis and the lack of a theory. You need a conceptual frame from which to construct a theory, and then, from that theory, you can decide on a specific hypothesis for validation. It’s this construct that helps you separate signal from noise. Without the construct, you’re relying on serendipity to identify meaningful patterns, and we humans have a nasty tendency to mistake noise for patterns.

If we look at opportunities for establishing a competitive advantage, redefining what we mean by understanding our customers is a pretty compelling one. This is a construct that can provide a robust and testable space within which to use Big Data and other, more qualitative, approaches. It’s relatively doable for any organization to consolidate its data to provide a fairly comprehensive “inside-out” view of customers’ touch points. Essentially, it’s a logistical exercise. I won’t say it’s easy, but it is doable. But if we set our goal a little differently, working to achieve a true “outside-in” view of our company, that sets the bar substantially higher.

360 degrees? Maybe not. But it’s a much broader view than most marketers have.

Google’s Etymological Dream Come True

First published November 14, 2013 in Mediapost’s Search Insider

Yesterday’s Search Insider column caught my eye. Aaron Goldman explained how search ads were the original native ads, and why native ads work. This is backed up by research we did about five years ago showing that contextual relevance substantially boosts ad effectiveness (but not, ironically, ad awareness). I did a fairly long blog post on the concept of “aligned” intent, if you really want to roll up your sleeves and dive in.

The funny thing was, I was struck by the word “native” itself. For some reason, the term struck a note of immediate uneasiness in today’s more politically charged world. On a gut level, it reminded me of the insensitivity of Daniel Snyder, owner of the Washington Redskins. There’s nothing immoral about the term itself, but it is currently tied to an emotionally charged issue.

As I often do, I decided to check the etymological roots of “native,” and immediately noticed something different on the Google search page. There, at the top, was an etymological timeline showing that the root of “native” is the Latin “nasci” – meaning born. Entirely appropriate, given Aaron’s assertion that native advertising was “born” on the search page. But it was at the bottom, where a downward arrow promised “more,” that I hit etymological pay dirt.

Google showed me the typical dictionary entries, but at the bottom, it gave me a chart from its Ngram Viewer showing usage of “native” in books and publications over the past 200 years. Interestingly, the term has been in slow decline over those two centuries, with a bit of a resurgence over the last 25. When I clicked on the graph, it broke things down further, showing that small-n “native” has been used less and less, while big-N “Native” took a jump in popularity in the mid-’80s, accounting for the mild bump.

Google’s Ngram Viewer isn’t new, but its capabilities have recently been beefed up, providing a fascinating visual tool for us “wordies” out there. With it, you can plot the popularity of words over 500 years in a body of more than 5 million books. For example, a blog post at Informationisbeautiful.net shows several fascinating word-trend charts in the English corpus, including drug trends (cocaine was a popular topic in Victorian times, slowed down in the ’20s and exploded again in the ’80s), the battle of religion vs. science (the popularity crossover was in 1930, but the trend has reversed and we’re heading for another one) and interest in sex vs. marriage (sex was barely mentioned prior to 1800, stayed relatively constant until 1910 and grew dramatically in the ’70s, but lately it’s dropped off a cliff; marriage has had a spikier history but has remained fairly constant over the last 200 years).

I tried a few charts of my own. Since 1885, “evolution” has beaten “creation,” though it took a noticeable drop during the ’30s; since 1960, both have been on the rise. In 1980, Apple got off to an initial head start, but Microsoft passed it in 1992, never to look back (although it’s had a precipitous decline since 2000). Perhaps the most interesting chart compares “radio,” “television” and “internet” since 1900. Radio started growing in the ’20s and hit its popularity peak around 1945, but the crossover with television took another 40 years (about 1982). Television would enjoy only a brief period of dominance: in 1990, the meteoric rise of the Internet began, and it surpassed both radio and television around 1997.

[Chart: “radio” vs. “television” vs. “internet,” 1900–2008]
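If you want to pull the same numbers programmatically, the viewer’s web page fetches its data from an unofficial JSON endpoint. Here’s a hedged sketch in Python – the endpoint is undocumented, the corpus identifier is my assumption, and Google could change or remove it at any time:

```python
# Hedged sketch: query the (unofficial, undocumented) JSON endpoint that
# the Ngram Viewer page itself calls. It may change or vanish without notice.
import requests

resp = requests.get(
    "https://books.google.com/ngrams/json",
    params={
        "content": "radio,television,internet",
        "year_start": 1900,
        "year_end": 2008,
        "corpus": "en-2019",  # assumption: current English corpus identifier
        "smoothing": 3,
    },
)
resp.raise_for_status()
for series in resp.json():  # one entry per phrase: {"ngram": ..., "timeseries": [...]}
    print(series["ngram"], series["timeseries"][-1])
```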

My final chart was to see how Google fared in its own tool. Not surprisingly, Google has dominated the search space since 2001, and done so quite handily: currently, it’s six times more popular than its rivals Yahoo and Bing. One caveat, though – “Bing’s” popularity started to climb in 1830, so I think we’re talking about either the cherry, Chinese people named Bing, or a German company that used to make kitchen utensils. Either that, or Microsoft has had its search engine in development a lot longer than anyone guessed.

[Chart: “Google” vs. “Yahoo” vs. “Bing”]

Whom Would You Trust: A Human or an Algorithm?

First published October 31, 2013 in Mediapost’s Search Insider

I’ve been struggling with a dilemma.

Almost a year ago, I wrote a column asking if Big Data would replace strategy. That started a several-month journey for me, during which I’ve been looking for a more informed answer to that query. It’s a massively important question that’s playing out in many arenas today, including medicine, education, government and, of course, finance.

In marketing, we’re well into the era of Big Data. Of course, it’s not just data we’re talking about; it’s algorithms that use that data to make automated decisions and take action. Some time ago, MediaPost’s Steve Smith introduced us to a company called Persado, which takes an algorithmic approach to copy testing and optimization. As an ex-copywriter turned performance marketer, I wasn’t sure how I felt about that. I understand the science of continuous testing, but I have an emotional stake in the art of crafting an effective message. And therein lies the dilemma: our comfort with algorithms seems to depend on the context in which we encounter them and the degree of automation involved.

Let me give you an example from Ian Ayres’ book “Super Crunchers.” A company called Epagogix uses an algorithm to predict the box-office appeal of unproduced movie scripts. Producers can retain the service to help them decide which projects to fund, and Epagogix will also help producers optimize their chosen scripts to improve box-office performance. The question here is: do we want an algorithm controlling the creative output of the movie industry? Would we be comfortable taking humans out of the loop completely and seeing where the algorithm eventually takes us?

Now, you may counter that we could include feedback from audience responses. We could use social signals to continually improve the algorithm, a collaborative filtering approach that uses the power of Big Data to guide the film industry’s creative process. Humans are still in the loop in this approach, but only as an aggregated sounding board. We have removed the essentially human elements of creativity, emotion and intuition. Even with the most robust system imaginable, are you comfortable with us humans taking our hands off the wheel?

Here’s another example from Ayres’ book. There is substantial empirical evidence showing that algorithms are better at diagnosing medical conditions than clinical practitioners. In a 1989 study by Dawes, Faust and Meehl, an algorithmic diagnostic rule set was consistently more reliable than actual clinicians. The researchers then tried a combination, where doctors were made aware of the algorithm’s outcomes but remained the final judges. Again, the doctors would have been better off going with the algorithm’s results: their second-guessing increased their margin of error significantly.

But, even knowing this, would you be willing to rely completely on an automated algorithm the next time you need medical attention? What if there was no doctor involved at all, and you were diagnosed and treated by an algo-driven robot?

There is also mounting (albeit highly controversial) evidence showing that direct instruction produces better learning outcomes than traditional exploratory teaching methods. In direct instruction, scripted automatons could easily replace the teacher’s role. Test scores could provide self-optimizing feedback loops. Learning could be driven by algorithms and delivered at a distance. Classrooms, along with teachers, could disappear completely. Is this a school you’d sign your kid up for?

Let’s stoke the fires of this dilemma a little. In a frightening TED talk, Kevin Slavin talks about how algorithms rule the world and offers a few examples of how algorithms have gotten it wrong in the past. Amazon’s pricing algorithms priced an out-of-print book called “The Making of a Fly” at a whopping $23.6 million. Surprisingly, there were no sales. And in financial markets, where we’ve largely abdicated control to algorithms, those same algorithms spun out of control no fewer than 18,000 times in 2012. So far, these instances have been identified and corrected in milliseconds, but there’s always a Black Swan chance that one time, they’ll crash the economy just for the hell of it.

But before we humans feel too smug, let’s remember this sobering fact: 20% of all fatal diseases are misdiagnosed. In fact, misdiagnosis accounts for about one-third of all medical error. And we have no one to blame for that but ourselves.

As I said – it’s a dilemma.

What Does Being “Online” Mean?

First published October 24, 2013 in Mediapost’s Search Insider

If readers’ responses to my few columns about Google Glass can be considered a representative sample (which, for many reasons, they can’t, but let’s put that aside for the moment), it appears we’re circling the concept warily. There’s good reason for this. Privacy concerns aside, we’re breaking virgin territory here that may shift what it means to be online.

Up until now, the concept of online had a lot in common with our understanding of physical travel and acquisition. As Peter Pirolli and Stuart Card discovered, our virtual travels tapped into our evolved strategies for hunting and gathering. The analogy, which holds up in most instances, is that we traveled to a destination. We “went” online, to “go” to a website, where we “got” information. It was, in our minds, much like a virtual shopping trip. Our vehicle just happened to be whatever piece of technology we were using to navigate the virtual landscape of “online.”

As long as we framed our online experiences in this way, we had the comfort of knowing we were somewhat separate from whatever “online” was. Yes, it was morphing faster than we could keep up with, but it was under our control, subject to our intent. We chose when we stepped from our real lives into our virtual ones, and the boundaries between the two were fairly distinct.

There’s a certain peace of mind in this. We don’t mind the idea of online as long as it’s a resource subject to our whims. Ultimately, it’s been our choice whether we “go” online or not, just as it’s our choice to “go” to the grocery store, or the library, or our cousin’s wedding. The sphere of our lives, as defined by our consciousness, and the sphere of “online” only intersected when we decided to open the door.

As I said last week, even the act of “going” online required a number of deliberate steps on our part. We had to choose a connected device, frame our intent and set a navigation path (often through a search engine). Each of these steps reinforced our sense that we were at the wheel in this particular journey. Consider it our security blanket against a technological loss of control.

But, as our technology becomes more intimate, whether it’s Google Glass, wearable devices or implanted chips, being “online” will cease to be about “going” and will become more about “being.”  As our interface with the virtual world becomes less deliberate, the paradigm becomes less about navigating a space that’s under our control and more about being an activated node in a vast network.

Being “online” will mean being “plugged in.” The lines between “online” and “ourselves” will become blurred, perhaps invisible, as technology moves at the speed of unconscious thought. We won’t be rationally choosing destinations, applications or devices. We won’t be keying in commands or queries. We won’t even be clicking on links. All the comforting steps that currently reinforce our sense of movement through a virtual space at our pace and according to our intent will fade away. Just as a light bulb doesn’t “go” to electricity, we won’t “go” online.  We will just be plugged in.

Now, I’m not suggesting a Matrix-like loss of control. I really don’t believe we’ll become feed sacs plugged into the mother of all networks. What I am suggesting is a switch from a rather slow, deliberate interface that operates at the speed of conscious thought to a much faster interface that taps into the speed of our subconscious cognitive processing. The impulses that will control the gateway of information, communication and functionality will still come from us, but it will be operating below the threshold of our conscious awareness. The Internet will be constantly reading our minds and serving up stuff before we even “know” we want it.

That may seem like neurological semantics, but it’s a vital point to consider. Humans have been struggling for centuries with the idea that we may not be as rational as we think we are. Unless you’re a neuroscientist, psychologist or philosopher, you may not have spent a lot of time pondering the nature of consciousness, but whether we actively think about it or not, it does provide a mental underpinning to our concept of who we are.  We need to believe that we’re in constant control of our circumstances.

The newly emerging definition of what it means to be “online” may force us to explore the nature of our control at a level many of us may not be comfortable with.