The Power of Meta

First published April 24, 2014 in Mediapost's Search Insider

To the best of our knowledge, humans are the only species capable of thinking about thinking, even though most of us don't do it very often. We use the Greek word "meta" to talk about this ability. Basically, "meta" refers to a concept that is an abstraction of another concept: an instruction sheet for whatever the original thing is.

Because humans can grasp this concept, it can be a powerful way to overcome the limits of our genetic programming. Daniel Kahneman's book, Thinking, Fast and Slow, is essentially a meta-guide to the act of thinking, an owner's manual for our minds. In it, he catalogs evolution's extensive list of cognitive "gotchas" that can waylay our rational reasoning.

In our digital world, we use the word "metadata" a lot. Essentially, metadata is a guide to the subject data. It sits above the data in question, providing essential information about it: sources, structure, indexing guides and so on. Increasingly, as we get data from more and more disparate sources, metadata will be required to use it. Ideally, it will provide a universally understood implementation guide. This, of course, requires a common schema for metadata, something that organizations like schema.org are currently working on.
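To make the idea concrete, here is a minimal sketch of what such a shared-schema metadata record might look like. The property names ("Dataset," "variableMeasured") are real schema.org vocabulary, but the dataset itself and its values are invented for illustration:

```python
import json

# A hypothetical metadata record describing a sales dataset, using
# schema.org's Dataset vocabulary. The vocabulary is real; the
# dataset and values are made up.
metadata = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Quarterly Sales 2014",
    "description": "Aggregated sales figures by region",
    "dateModified": "2014-04-24",
    "variableMeasured": ["region", "quarter", "revenue"],
}

# Because the schema is shared, any consumer can interpret the record
# the same way without knowing anything about who produced the data.
serialized = json.dumps(metadata)
parsed = json.loads(serialized)
print(parsed["variableMeasured"])  # ['region', 'quarter', 'revenue']
```

The point is not the particular fields, but that producer and consumer agree on what the fields mean before any data changes hands.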

Meta is a relatively new concept that has exploded in the last few decades. It's one of those words we throw around but probably don't stop to think about. Its power lies in its ability both to "mark up" the complexity of the real world, giving us another functional layer in which to operate, and to let us examine ourselves and overcome some of the mental foibles we're subject to.

According to Wikipedia, there are over 160 cognitive biases that can impact our ability to rationally choose the optimal path. They include the Cheerleader Effect, where individuals appear more attractive in a group; the IKEA Effect, where we overvalue something we assemble ourselves; and the Google Effect, where we tend to forget information we know we can look up on Google. These are like little bugs in our operating software, and most times they impact our rational performance without us even being aware of them. But if we have a meta-awareness of them, we can mitigate them to a large degree. We can step back from our decision process and see where biases may be clouding our judgment.

Meta also allows us to model and categorize complexity. It allows us to append data to data, exponentially increasing the value of the aggregated data set. This becomes increasingly important in the new era of Big Data. The challenge with Big Data is that it's not just more data; in this case, more is different. Big Data typically comes from multiple structured sources, and when it's removed from the guidance of its native contextual schema, it becomes unwieldy. A metadata layer gives us a Rosetta Stone with which we can integrate these various data sources. And it's in combining data in new ways that the value of Big Data is found.
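That Rosetta Stone role can be sketched in a few lines: each source ships a field map describing its own column names, and the map translates records into one common schema before they're combined. All source names, field names and values here are invented for illustration:

```python
# Hypothetical records from two sources that describe the same customer
# with different field names.
crm_records = [{"cust_id": 1, "yearly_spend": 1200}]
web_records = [{"visitorId": 1, "pageviews": 87}]

# Per-source metadata: local field name -> common schema name.
field_maps = {
    "crm": {"cust_id": "customer_id", "yearly_spend": "revenue"},
    "web": {"visitorId": "customer_id", "pageviews": "visits"},
}

def normalize(source, records):
    """Translate one source's records into the common schema."""
    fmap = field_maps[source]
    return [{fmap[k]: v for k, v in rec.items()} for rec in records]

# Once both sources speak the common schema, merging is trivial.
merged = {}
for source, records in [("crm", crm_records), ("web", web_records)]:
    for rec in normalize(source, records):
        merged.setdefault(rec["customer_id"], {}).update(rec)

print(merged[1])  # {'customer_id': 1, 'revenue': 1200, 'visits': 87}
```

The heavy lifting lives in the metadata (the field maps), not in the merge logic, which is exactly the division of labor a common metadata schema buys you.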

Perhaps the most interesting potential of meta is in how we might create a meta-model of ourselves. I've talked about this before in the context of social media. Increasingly, our interactions with technology will gain value from personalization, and each of us will be generating reams of personal data. There needs to be an efficient connection between the two. We can't invest the time required to train all these platforms, tools and apps to know us better. It makes sense to consolidate the most universally applicable data about us into a meta-profile of our goals, preferences and requirements. In effect, it will be a technologically friendly abstraction of who we are. If we can agree on a common schema for these meta-profiles, the developers of technology can build their various tools to recognize them and reconfigure their functionality to suit us.

As our world becomes more complex, the power of meta will become more and more important.

Five Years Later – An Answer to Lance’s Question (kind of)

It never ceases to amaze me how writing can take you down the most unexpected paths, if you let it. Over five years ago now, I wrote a post called "Chasing Digital Fluff – Who Cares about What's Hot?" It was a rant, aimed at marketers' preoccupation with the latest bright shiny object. At the time, it was social. My point was that true loyalty needs stable habits to emerge. If you're constantly chasing the latest thing, your audience will be in a constant state of churn. You'd be practicing "drive-by" marketing. If you want to find stability, target what your audience finds useful.

This post caused my friend Lance Loveday to ask a very valid question: "What about entertainment?" Do we develop loyalty to things that are entertaining? So, I started a series of posts on the Psychology of Entertainment. What types of things do we find entertaining? How do we react to stories, or humor, or violence? And how do audiences build around entertainment? As I explored the research on the topic, I came to the conclusion that entertainment is a by-product of several human needs: the need to bond socially, the need to be special, our appreciation for others whom we believe to be special, a quest for social status, and artificially stimulated tweaks to our oldest instincts – to survive and to procreate. In other words, after a long and exhausting journey, I concluded that entertainment lives in our phenotype, not our genotype. Entertainment serves no direct evolutionary purpose, but it lives in the shadows of many things that do.

So, what does this mean for stability of an audience for entertainment? Here, there is good news, and bad news. The good news is that the raw elements of entertainment haven’t really changed that much in the last several thousand years. We can still be entertained by a story that the ancient Romans might have told. Shakespeare still plays well to a modern audience. Dickens is my favorite author and it’s been 144 years since his last novel was published. We haven’t lost our evolved tastes for the basic building blocks of entertainment. But, on the bad news side, we do have a pretty fickle history when it comes to the platforms we use to consume our entertainment.

This then introduces a conundrum for the marketer. Typically, our marketing channels are linked to platforms, not content, and technology has made this an increasingly difficult challenge. While we may connect to, and develop a loyalty for, specific entertainment content, it's hard for marketers to know which platform we may consume that content on. Take Dickens, for example. Even if you, the marketer, know there's a high likelihood that I may enjoy something by Dickens in the next year, you won't know if I'll read a book on my iPad, pick up an actual book or watch a movie on any one of several screens. I'm loyal to Dickens, but I'm agnostic as to which platform I use to connect with his work. As long as marketing is tied to entertainment channels, and not entertainment content, we are restricted to targeting our audience in an ad hoc and transitory manner. This is one reason why brands have rushed to use product placement and other types of embedded advertising, where the message is set free from the fickleness of platform delivery challenges. If you happen to be a fan of American Idol, you're going to see the Coke and Ford brands displayed prominently whether you watch on TV, your laptop, your tablet or your smartphone.

It’s interesting to reflect on the evolution of electronic media advertising and how it’s come full circle in this one regard. In the beginning, brands sponsored specific shows. Advertising messages were embedded in the content. Soon, however, networks, which controlled the only consumption choice available, realized it was far more profitable to decouple advertising from the content and run it in freestanding blocks during breaks in their programming. This decoupling was fine as long as there was no fragmentation in the channels available to consume the content, but obviously this is no longer the case. We now watch TV on our schedule, at our convenience, through the device of our choice. Content has been decoupled from the platform, leaving the owners of those platforms scrambling to evolve their revenue models.

So – we’re back to the beginning. If we want to stabilize our audience to allow for longer-term relationship building, what are our options? Obviously, entertainment offers some significant challenges in this regard, due mainly to the fragmentation of platforms we use to consume that content. If we use usefulness as a measure, the main factor in determining loyalty is frequency and stability. If you provide a platform that becomes a habit, as Google has, then you’ll have a fairly stable audience. It won’t destabilize until there is a significant enough resetting of user expectations, forcing the audience to abandon habits (always very tough to do) and start searching for another useful tool that is a better match for the reset expectations. If this happens, you’ll be continually following your audience through multiple technology adoption curves. Still, it seems that usefulness offers a better shot at a stable audience than entertainment.

But there's still one factor we haven't explored: what part does social connection play? Obviously, this is a huge question that the revenue models of Facebook, Twitter, Snapchat and others will depend on. So, with entertainment and usefulness explored ad nauseam in this series of posts, I'll start tracking down the psychology of social connection.

Letting the Foxes into Journalism's Hen (Hedgehog) House

First published March 27, 2014 in Mediapost’s Search Insider

I am rooting for Nate Silver and fivethirtyeight.com, his latest attempt to introduce a little data-driven veracity into the murky and anecdotal world of journalism. But I may be one of the few, at least if we take the current backlash as a non-scientific, non-quantitative sample:

I have long been a fan of Nate Silver, but so far I don’t think this is working. – Tyler Cowen, Marginal Revolution

Nate Silver's new venture may become yet another outlet for misinformation when it comes to the issue of human-caused climate change. – Michael Mann, director of the Earth System Science Center at Pennsylvania State University

Here’s hoping that Nate Silver and company up their game, soon. – Paul Krugman, NY Times

Krugman also states:

You can’t be an effective fox just by letting the data speak for itself — because it never does. You use data to inform your analysis, you let it tell you that your pet hypothesis is wrong, but data are never a substitute for hard thinking.

Now, Nate Silver doesn't disagree with this. In fact, he says pretty much the same thing in his book, The Signal and the Noise:

The numbers have no way of speaking for themselves. We speak for them. We imbue them with meaning.

But he goes on,

Like Caesar, we may construe them in self-serving ways that are detached from their objective reality.

And it’s this construal that Silver is hoping to nip in the bud with FiveThirtyEight. In essence, he wants to do it by being a Fox, to borrow from Isaiah Berlin’s analogy.

‘The fox knows many things, but the hedgehog knows one big thing.’ We take a pluralistic approach and we hope to contribute to your understanding of the news in a variety of ways.

Silver thinks the media’s preoccupation with punditry is a dangerous thing. Pundits, whether they’re coming from the right or left, are Hedgehogs. They get paid for their expertise on “one big thing.” And the more controversial their stand, the more attention they get. This can lead to a dangerous spiral, as researcher Philip Tetlock found out:

What experts think matters far less than how they think. If we want realistic odds on what will happen next, coupled to a willingness to admit mistakes, we are better off turning to experts who embody the intellectual traits of Isaiah Berlin’s prototypical fox—those who “know many little things,” draw from an eclectic array of traditions, and accept ambiguity and contradiction as inevitable features of life.

Tetlock was researching how expertise correlated with the ability to make good predictions. What he found was actually an inverse relationship: the higher the degree of expertise, the more likely the person in question was a hedgehog. Media pundits are usually extreme versions of hedgehogs, who not only have one worldview but also love to talk about it. Nate Silver believes that to get an objective view of world events, you need to be a fox first; but second, you should be a fox that's good at sifting through data:

Conventional news organizations on the whole are lacking in data journalism skills, in my view. Some of this is a matter of self-selection. Students who enter college with the intent to major in journalism or communications have above-average test scores in reading and writing, but below-average scores in mathematics.

So, all this makes sense. The problem with Silver's approach is that journalism is the way it is because that's the way humans want it. While I applaud Silver's determination to change it, he may be trying to push water uphill. Pundits exist not just because the media keeps pushing them in front of us; they exist because we keep listening. Humans like opinions and anecdotes. We're not hardwired to process data and objectively rationalize. We connect with stories, and we're drawn to decisive opinion leaders. Silver will have to find some middle ground here, and that seems to be where the problems arise. The minute writers add commentary to data, they have to impose an ideological viewpoint. It's impossible not to. And when you do that, you introduce a degree of abstraction.

The backlash against FiveThirtyEight.com generally falls into two camps: foxes like Silver who have no problem with the approach but disagree with the specific data put forward, and hedgehogs who just don't like the entire concept. The first camp may come onside as Silver and his team work out the inevitable hiccups in their approach. The second, which, it should be noted, has a large number of pundits in its midst, will never become fans of Silver and his foxlike approach.

In the end though, it really doesn’t matter what columnists and journalists think. It’s up to the consumers of news media. We’ll decide what we like better – hedgehogs or foxes.

The Bug in Google’s Flu Trend Data

First published March 20, 2014 in Mediapost’s Search Insider

Last year, Google Flu Trends blew it. Even Google admitted it. It overpredicted the occurrence of flu by a factor of almost 2:1, which is a good thing for the health care system, because if Google's predictions had been right, we would have had the worst flu season in 10 years.

Here's how Google Flu Trends works. It monitors a set of approximately 50 million flu-related terms for query volume. It then compares this against data collected from health care providers where influenza-like illnesses (ILI) are mentioned during a doctor's visit. Since the tracking service was first introduced, there has been a remarkably close correlation between the two, with Google's predictions typically coming within 1 to 2 percent of the number of doctor's visits where the flu bug is actually mentioned. The advantage of Google Flu Trends is that it is available about two weeks prior to the ILI data, giving a much-needed head start for responsiveness during the height of flu season.

But last year, Google's estimates overshot actual ILI data by a substantial margin, effectively doubling the size of the predicted flu season.

Correlation is not Causation

This highlights a typical trap with big data: we tend to start following the numbers without remembering what is generating them. Google measures what's on people's minds. ILI data measures what people are actually going to the doctor about. The two are highly correlated, but one doesn't necessarily cause the other. In 2013, for instance, Google speculated that increased media coverage might have been the cause of the inflated predictions. More news coverage would have spiked interest, but not actual occurrences of the flu.
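The trap is easy to demonstrate with a toy model. In the sketch below (all numbers invented), searches and doctor visits are both partly driven by a third factor, media coverage, so the two series correlate perfectly even though searching never causes a single case of flu:

```python
from statistics import mean, stdev

# Invented weekly data: both series are driven by the same hidden
# factor (media coverage), not by each other.
media_coverage = [1, 2, 4, 8, 9, 5, 2]
searches = [10 + 30 * m for m in media_coverage]      # tracks the news
doctor_visits = [5 + 3 * m for m in media_coverage]   # also tracks the news

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# By construction, the correlation is perfect: both series are linear
# functions of the same common cause.
r = pearson(searches, doctor_visits)
print(round(r, 2))  # 1.0
```

A model trained on the search series will track visits beautifully right up until the common cause shifts, say, when a news cycle spikes searches without spiking illness, which is exactly the failure mode Google described.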

Allowing for the Human Variable

In the case of Google Flu Trends, because it’s using a human behavior as a signal – in this case online searching for information – it’s particularly susceptible to network effects and information cascades. The problem with this is that these social signals are difficult to rope into an algorithm. Once they reach a tipping point, they can break out on their own with no sign of a rational foundation. Because Google tracks the human generated network effect data and not the underlying foundational data, it is vulnerable to these weird variables in human behavior.

Predicting the Unexpected

A recent article in Scientific American pointed out another issue with an overreliance on data models: Google Flu Trends completely missed the non-seasonal H1N1 pandemic in 2009. Why? Algorithmically, Google wasn't expecting it. In trying to eliminate noise from the model, it actually eliminated signal that arrived at an unexpected time. Models don't do very well at predicting the unexpected.

Big Data Hubris

The author of the Scientific American piece, associate editor Larry Greenemeier, nailed another common symptom of our emerging crush on data analytics: big data hubris. We somehow think the quantitative black box will eliminate the need for more mundane data collection – say, actually tracking doctor's visits for the flu. As I mentioned before, the biggest problem with this is that the more we rely on data, which often takes the form of arm's-length correlated data, the further we get from exploring causality. We start focusing on "what" and forget to ask "why."

We should absolutely use all the data we have available. The fact is, Google Flu Trends is a very valuable tool for health care management. It provides a lot of answers to very pertinent questions. We just have to remember that it’s not the only answer.

Can Facebook Maintain High Ground?

 First published March 13, 2014 in Mediapost’s Search Insider

As I said in my last column, Facebook's recent acquisition spree seems to indicate that they're trying to evolve from being our Social Landmark to being a virtual map that guides us through our social activity. But as Facebook rolls out new features or acquires one-time competitors in order to complete this map of the social landscape, will we use it? Snapchat CEO Evan Spiegel apparently doesn't think so. That's part of the reason he turned down $3 billion from Facebook.

At the end of 2012, Mark Zuckerberg paid Spiegel and his team a visit. The purpose of the visit was to scare the bejeezus out of Snapchat by threatening to crush it with the rollout of Poke. Of course, we now know that Poke was a monumental flop while Snapchat rolled along quite nicely, thank you. Several months later, Zuck flew out to meet with the Snapchat team again, taking a decidedly different tone this time. He also brought along a very big checkbook. Snapchat said thanks, but no thanks.

So, how can a brash startup like Snapchat beat the 800-pound gorilla in its own backyard? Why was Poke DOA? Was it a one-of-a-kind miscue on the part of Facebook, or part of a trend?

Part of the answer may lie in how we feel about novelty vs. familiarity in the things we deal with. As I said in the last column, we go through three stages when we explore new landscapes. We move from navigating by landmarks to memorizing routes, and finally we create our own mental maps of the space, allowing us to plot our own routes as needed. If we apply this to navigating a virtual space like the online social sphere, we should move from relying on landmarks (like Facebook) to using routes (single-purpose apps like Snapchat) and, finally, to creating our own map that allows us to switch back and forth between apps as required. Facebook wants to jump from the first stage to the last in order to remain dominant in the social market, maintaining our map for us by becoming a hub for all required social functionality. But if the Poke story is any indication, we may not be willing to go along for the ride.

But there's a subtle psychological point to how we learn to navigate new landscapes: we gain mastery over our environment. With this increased confidence comes a reluctance to feel we're moving backward. We tend to discard the familiar and embrace novelty as we gain confidence. This squares with research done on familiarity and novelty seeking in humans. We look for familiarity in things that carry high degrees of risk, in the faces of others around us, or when we're operating on autopilot. But when we're actively considering and judging options and looking for new opportunities, we are drawn to new things.

Humans are natural foragers. We have built-in rules of conduct when we go out seeking things that will improve our lot, whether it be food, shelter or tools. Ideally, we look for things that will offer us a distinct advantage over the status quo with a reasonable investment of effort. We balance the two: advantage against effort. If the new options come from an overly familiar place, we tend to mentally discount the potential advantage because we no longer feel we're moving forward. Over time, this builds into a general feeling of malaise toward the overly familiar.

Time will tell if Evan Spiegel was prescient or just plain stupid in turning down Facebook's offer. The question is not so much whether Facebook will prevail, but rather whether Snapchat will end up emerging as a key part of the social landscape on a continuing basis. That particular landscape is notoriously unstable, and it's been known to swallow up many, many other companies with nary a burp. Perhaps Spiegel should have taken the money and run.

But then I wouldn’t be betting the farm on Facebook’s chances of permanence either.

Finding Our Way in the Social Landscape

First published March 6, 2014 in Mediapost’s Search Insider

Last month, Om Malik (of GigaOM fame) wrote an article in Fast Company about the user backlash against Facebook. To be fair, it seems that what's happening to Facebook is not so much a backlash as apathy. You have to care to lash back. This is more of a wholesale abandonment, as millions of users are going elsewhere, using single-purpose apps to get their social media fix. According to the article,

“we cycle between periods in which we want all of our Internet activity consolidated and other times in which we want a bunch of elegant monotaskers. Clearly we have reentered a simplification phase.”

There's a reason why Facebook has been desperately trying to acquire Snapchat for a reported $3 billion. There's also a reason why it picked up Instagram for a billion last year. It's because these simple little apps are leaving the homegrown Facebook alternatives in the dust. Snapchat is killing Facebook's Poke, as Mashable pointed out in this comparison. Snapchat has consistently stayed near the top of App Annie's most popular download chart for the past 18 months, which coincides exactly with Facebook's release of Poke.


Download rates of Facebook Poke


Download rates of Snapchat

Malik indicates it’s because we want a simpler, streamlined experience. A recent article in Business Insider goes one step further – Facebook is just not cool anymore. The mere name induces extended eye rolling in teenagers. It’s like parking the family mini-van in the high school parking lot.  “I hate Facebook. It’s just so boring,” said one of the teens interviewed. Hate! That’s a pretty strong word. What did the Zuck ever do to garner such contempt? Maybe it’s because he’s turning 30 in a few months. Maybe it’s because he’s an old married man.

Or maybe it’s just that we have a better alternative. Malik has a good point. He indicates that we tend to oscillate between consolidation and specialization. I take a bit different view. What’s happening in social media is that we’re getting to know the landscape better. We’re finding our way. This isn’t so much about changing tastes as it is about increased familiarity and a resetting of expectations.

If you look at how humans navigate new environments, you'll notice some striking similarities. When we encounter a new landscape, we go through three phases of wayfinding. We begin by relying on landmarks. These are the "highest ground" in a new, unfamiliar landscape, and we navigate relative to them. They become our reference points, and we don't stray far from them. Facebook is, you guessed it, a landmark.

The next phase is called "Route Knowledge." Here, we memorize the routes we use to get from landmark to landmark. We come to recognize the paths we take all the time. In the world of online landscapes, you could substitute the word "app" for "route." Instagram, Snapchat, Vine and the rest are routes we use to get where we need to go quickly and easily. They're our virtual shortcuts.

The last stage of wayfinding is "Survey Knowledge." Here, we are familiar enough with a landscape that we've acquired a mental "map" of it and can mentally calculate alternative routes to get to our destination. This is how you navigate in your hometown.

What's happening to Facebook is not so much that our tastes are swinging. It's just that we're confident enough in our routes/apps that we're no longer solely reliant on landmarks. We know what we want to do, and we know the right tool to use. The next stage of wayfinding, Survey Knowledge, will require some help, however. I've talked in the past about the eventual emergence of meta-apps. These will sit between us and the dynamic universe of tools available. They may be largely or even completely transparent to us. What they will do is learn about us and our requirements while maintaining an inventory of all the apps at our disposal. Then, as our needs arise, they will serve up the right app for the job. These meta-apps will maintain our survey knowledge for us, keeping a virtual map of the online landscape to allow us to navigate at will.

As Facebook tries to gobble up the Instagrams and Snapchats of the world, they’re trying to become both a landmark and a meta-app. Will they succeed? I have my thoughts, but those will have to wait until a future column.

How Can Humans Co-Exist with Data?

First published February 6, 2014 in Mediapost’s Search Insider

Last week, I talked about our ability to ignore data. I positioned this as a bad thing. But Pete Austin called me on it, with an excellent counterpoint:

"Ignoring data is the most important thing we do. Only the people who could ignore the trees and see the tiger, in real time, survived to become our ancestors."

Too true. We're built to subconsciously filter and ignore vast amounts of input data in order to maintain focus on critical tasks, such as avoiding hungry tigers. If you really want to dive into this, I would highly recommend Daniel Simons and Christopher Chabris's "The Invisible Gorilla." But, as Simons and Chabris point out with example after example of how our intuitions (which we use as filters) can mislead us, this "inattentional blindness" is not always a good thing. In the adaptive environment in which we evolved, it was pretty effective at keeping us alive. But in a modern, rational environment, it can severely inhibit our ability to maintain an objective view of the world.

But Pete also had a second, even more valid point:

“What you need to concentrate on now is “curated data”, where the junk has already been ignored for you.”

And this brought to mind an excellent example from a recent interview I did as background for an upcoming book I’m working on.  This idea of pre-filtered, curated data becomes a key consideration in this new world of Big Data.

Nowhere are the stakes higher for the use of data than in healthcare. That's what led to the publication of a manifesto in 1992 calling for a revolution in how doctors made life-and-death decisions. One of the authors, Dr. Gordon Guyatt, coined the term "evidence-based medicine." The rationale is simple. By taking an empirical approach not just to diagnosis but also to the best prescriptive path, doctors can rise above the limitations of their own intuition and achieve higher accuracy. It's data-driven decision-making, applied to health care. Makes perfect sense, right? But even though evidence-based medicine is now over 20 years old, it's still difficult to consistently apply at the level of the individual doctor and patient.

I had the chance to ask Dr. Guyatt why this was:

“Essentially after medical school, learning the practice of medicine is an apprenticeship exercise and people adopt practice patterns according to the physicians who are teaching them and their role models and there is still a relatively small number of physicians who really do good evidence-based practice themselves in terms of knowing the evidence behind what they’re doing and being able to look at it critically.”

The fact is, a data driven approach to any decision-making domain that previously used to rely on intuition just doesn’t feel – well – very intuitive. It’s hard work. It’s time consuming. It, to Mr. Austin’s point, runs directly counter to our tiger-avoidance instincts.

Dr. Guyatt confirms that physicians are not immune to this human reliance on instinct:

“Even the best folks are not going to do it – maybe the best folks – but most folks are not going to be able to do that very often.”

The answer in healthcare, and likely the answer everywhere else where data should back up intuition, is the creation of solid data based resources, which adhere to empirical best practices without requiring every single practitioner to do the necessary heavy lifting. Dr. Guyatt has seen exactly this trend emerge in the last decade:

“What you need is preprocessed information. People have to be able to identify good preprocessed evidence-based resources where the people producing the resources have gone through that process well.”

The promise of curated, preprocessed data is looming large in the world of marketing. The challenge is that, unlike medicine, where data is commonly shared and archived, in the world of marketing much of the most important data stays proprietary. What we have to start thinking about is a truly empirical, scientific way to curate, analyze and filter our own data for internal consumption, so it can be readily applied in real world situations without falling victim to human bias.

Never Underestimate the Human Ability to Ignore Data

First published January 30, 2014 in Mediapost’s Search Insider

It's one thing to have data. It's another to pay attention to it.

We marketers are stumbling over ourselves to move to data-driven marketing. No one would say that's a bad thing. But here's the catch: data-driven marketing is all well and good when it's a small-stakes game – optimizing spend, targeting, conversion rates, etc. If we gain a point or two on the topside, so much the better. And if we screw up and lose a point or two – well, mistakes happen, and as long as we fix them quickly, no permanent harm done.

But what if the data is telling us something we don’t want to know? I mean – something we really don’t want to know. For instance, our brand messaging is complete BS in the eyes of our target market, or they feel our products suck, or our primary revenue source appears to be drying up or our entire strategic direction looks to be heading over a cliff? What then?

This reminds me of a certain CMO of my acquaintance who was a “Numbers Guy.” In actual fact, he was a numbers guy only if the numbers said what he wanted them to say. If not, then he’d ask for a different set of numbers that confirmed his view of the world. This data hypocrisy generated a tremendous amount of bogus activity in his team, as they ran around grabbing numbers out of the air and massaging them to keep their boss happy. I call this quantifiable bullshit.

I think this is why data tends to be used to optimize tactics, but why it’s much more difficult to use data to inform strategy. The stakes are much higher and even if the data is providing clear predictive signals, it may be predicting a future we’d rather not accept. Then we fall back on our default human defense: ignore, ignore, ignore.

Let me give you an example. Any human who functions even slightly above the level of brain dead has to accept the data that says our climate is changing. The signals couldn’t be clearer. And if we choose to pay attention to the data, the future looks pretty damn scary. Best-case scenario – we’re probably screwing up the planet for our children and grandchildren. Worst-case scenario – we’re definitely screwing up the planet and it will happen in our lifetime. And we’re not talking about an increased risk of sunburn. We’re talking about the potential end of our species. So what do we do? We ignore it. Even when flooding, drought and ice storms without historic precedent are happening in our backyards. Even when Atlanta is paralyzed by a freak winter storm. Nothing about what is happening is good news, and it’s going to get worse. So, damn the data, let’s just look the other way.

In a recent poll by the Wall Street Journal, out of a list of 15 things that Americans believed should be top priorities for President Obama and Congress, climate change came out dead last – behind pension reform, Iran’s nuclear program and immigration legislation. Yet, if we look at the data that the UN and the World Economic Forum collect, quantifying the biggest threats to our existence, climate change is consistently near the top, both in terms of likelihood and impact. But it’s really hard to do something about it. It’s a story we don’t want to hear, so we just ignore the data, like the aforementioned CMO.

As we get access to more and more data, it will be harder and harder to remain uninformed, but I suspect it will have little impact on our ability to be ignorant. If we don’t know something, we don’t know it. But if we can know something, and we choose not to, that’s a completely different matter. That’s embracing ignorance. And that’s dangerous. In fact, it could be deadly.

The Psychology of Usefulness: How Online Habits are Broken

Last post, I talked about how Google became a habit – Google being the most extreme case of online loyalty based on functionality I could think of. But here’s the thing with functionality-based loyalty – it’s very fickle. In the last post I explained how Charnov’s Marginal Value Theorem dictates how long animals spend foraging in a patch before moving on to the next one. I suspect the same principles apply to our judgments of usefulness. We only stay loyal to functionality as long as we believe there are no more functional alternatives available to us for an acceptable investment of effort. If that functionality has become automated in the form of a habit, we may stick with it a little longer, simply because it takes our rational brain a while to figure out there may be better options, but sooner or later it will blow the whistle and we’ll start exploring our options. Charnov’s internal algorithm will tell us it’s time to move on to the next functional “patch.”
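Charnov’s rule can be sketched numerically: a forager should leave a patch the moment the patch’s marginal gain rate drops below the average rate achievable across the whole environment. Here is a toy illustration – the gain curve, rates and travel time are all made-up numbers of my own, not anything from Charnov’s paper:

```python
import math

# Toy illustration of Charnov's Marginal Value Theorem.
# A patch yields diminishing returns: gain(t) = G_MAX * (1 - e^(-RATE * t)).
# The forager should leave when the marginal gain rate falls below the
# average rate for the environment, gain(t) / (t + TRAVEL).
G_MAX = 100.0   # total resources obtainable from the patch (hypothetical)
RATE = 0.5      # how quickly the patch depletes (hypothetical)
TRAVEL = 2.0    # time cost of reaching the next patch (hypothetical)

def gain(t):
    return G_MAX * (1 - math.exp(-RATE * t))

def marginal(t, dt=1e-6):
    # Numerical derivative: the instantaneous rate of return right now.
    return (gain(t + dt) - gain(t)) / dt

def average_rate(t):
    # Overall rate of return if we leave at time t, counting travel time.
    return gain(t) / (t + TRAVEL)

# Scan forward for the time when staying stops paying better than leaving.
t = 0.01
while marginal(t) > average_rate(t):
    t += 0.01
print(f"leave the patch after ~{t:.2f} time units")
```

The same stay-or-go logic is what I’m suggesting our brains run against websites: keep “foraging” a tool only while its marginal payoff beats what we expect from switching.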

Habits break down when there’s a shift in one of the three prerequisites: frequency, stability or acceptable outcomes.

If we stop doing something on a frequent basis, the habit will slowly decay. But because habits tend to be stored at the limbic level (in the basal ganglia), they prove to be remarkably durable. There’s a reason we say old habits die hard. Even after a long hiatus we find that habits can easily kick back in. Reduction of frequency is probably the least effective way to break a habit.

A more common cause of habitual disruption is a change in stability. If something significant suddenly changes in our task environment, our “habit scripts” start running into obstacles. Think about the last time you did a significant upgrade to a program or application you use all the time. If menu options or paths to common functions change, you find yourself constantly getting frustrated because things aren’t where you expect them to be. Your habit scripts aren’t working for you anymore and you are being forced to think. That feeling of frustration is how the brain protects habits, and it shows how powerful our neural energy-saving mode is. But even if the task environment becomes unstable for a time, chances are the instability is temporary. The brain will soon reset its habits and we’ll be back plugging subconsciously away at our tasks. Instability does break a habit, but the brain just builds a new one to take its place.

A more permanent form of habit disruption comes when outcomes are no longer acceptable. The brain hates these types of disruptions, because it knows that finding an alternative could require a significant investment of effort. It basically puts us back at square one. The amount of investment required depends on a number of things:

  • The scope of change required (is it just one aspect of a multi-step task, or the entire procedure?)
  • Current awareness of acceptable alternatives (is a better solution near at hand, or do we have to find it?)
  • The learning curve involved (how different is the alternative from what we’re used to using?)
  • Other adoption requirements (do we have to make an investment of resources, including time and/or money?)
  • How much downtime will be involved in adopting the alternative

All these questions are the complexities that can factor into the Marginal Value Theorem.
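One way to picture this calculus is as a back-of-envelope tally: the expected benefit of switching has to outweigh the summed switching costs. The scores, weights and 0–10 scales below are purely my own illustrative assumptions, not a real model:

```python
# Hypothetical switching-cost tally for abandoning a habitual tool.
# Each factor is scored 0-10; all values here are illustrative guesses.
SWITCHING_COSTS = {
    "scope_of_change": 7,        # the whole procedure changes, not one step
    "finding_alternative": 3,    # a well-known alternative is near at hand
    "learning_curve": 5,         # moderately different from the current tool
    "adoption_requirements": 2,  # little time or money needed up front
    "downtime": 4,               # some disruption during the changeover
}

def worth_switching(expected_benefit, costs):
    """Return True if the expected benefit outweighs the summed costs."""
    return expected_benefit > sum(costs.values())

print(worth_switching(30, SWITCHING_COSTS))  # → True (30 beats a cost of 21)
```

A small expected benefit loses to the same costs – which is exactly why mildly better alternatives rarely dislodge an entrenched habit.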

Now, let’s look at how each of these potential habit breakers applies to Google. First of all, frequency probably won’t be a factor because we will search more, not less, in the future.

Stability may be a more likely cause. The fact is, the act of online searching hasn’t really changed that much in the last 20 years. We still type in a query and get a list of results. If you look at Google circa 1998, it looks a little clunky and amateurish next to today’s results page, but given that 16 years have come and gone, the biggest surprise is that the search interface hasn’t changed more than it has.

Google now and then

A big reason for this is to maintain stability in the interface, so habits aren’t disrupted. The search page relies on ease of information foraging, so it’s probably the most tested piece of online real estate in history. Every pixel of what you see on Google, and, to a lesser extent, its competitors, has been exhaustively tested.

That has been true in the past but because of the third factor, acceptability of outcomes, it’s not likely to remain true in the future. We are now in the age of the app. Searching used to be a discrete function that was just one step of many required to complete a task. We were content to go to a search engine, retrieve information and then use that information elsewhere with other tools or applications. In our minds, we had separate chunks of online functionality that we would assemble as required to meet our end goal.

Let me give you an example. Let’s imagine we’re going to London for a vacation. In order to complete the end goal – booking flights, hotels and whatever else is required – we know we will probably have to go to many different travel sites, look up different types of information and undertake a number of actions. We expect that this will be the best path to take to our end goal. Each chunk of this “master task” may in turn be broken down into separate subtasks. Along the way, we’ll be relying on those tools that we’re aware of and a number of stored procedures that have proven successful in the past. At the subtask level, it’s entirely possible that some of those actions have been encoded as habits. For an example of how these tasks and stored procedures would play out in a typical search, see my previous post, A Cognitive Walkthrough of Searching.

But we have to remember that the only reason the brain is willing to go to all this work is that it believes it’s the most efficient route available to it. If there were a better alternative that would produce an acceptable outcome, the brain would take it. Our expectation of an acceptable outcome would be altered, and our Marginal Value algorithm would be reset.

Up to now, functionality and information didn’t intersect too often online. There were places we went to get information, and there were places we went to do things. But from this point forward, expect those two aspects of online to overlap more and more often. Apps will retrieve information and integrate it with usefulness. The travel aggregator sites like Kayak and Expedia are an early example of this. They retrieve pricing information from vendors, user content from review sites and even some destination related information from travel sites. This ups the game in terms of what we expect from online functionality when we book a trip. Our expectation has been reset because Kayak offers a more efficient way to book travel than using search engines and independent vendor sites. That’s why we don’t immediately go to Google when we’re planning a trip.

Let’s fast-forward a few years to see how our expectations could be reset in the future. I suspect we’re not too far away from having an app where our travel preferences have been preset. This proposed app would know how we like to travel and the things we like to do when we’re on vacation. It would know the types of restaurants we like, the attractions we visit, the activities we typically do, the types of accommodation we tend to book, etc. It would also know the sources we tend to use when qualifying our options (e.g., TripAdvisor). If we had such an app, we would simply put in the bare details of our proposed trip: departure and return dates, proposed destinations and an approximate itinerary. It would then go and assemble suggestions based on our preferences, all in one location. Booking would require a simple click, because our payment and personal information would be stored in the app. There would be no discrete steps, no hopping back and forth between sites, no cutting and pasting of information, no filling out forms with the same information multiple times. After confirmation, the entire trip and all required information would be made available on your mobile device. And even after the initial booking, the app would continue to comb the internet for new suggestions, reviews or events that you might be interested in attending.

This “mega-app” would take the best of Kayak, TripAdvisor, Yelp, TripIt and many other sites and combine it all in one place. If you love travel as much as I do, you couldn’t wait to get your hands on such an app. And the minute you did, your brain would have reset its idea of what an acceptable outcome would be. There would be a cascade of broken habits and discarded procedures.

This integration of functionality and information foraging is where the web will go next. Over the next 10 years, usefulness will become the new benchmark for online loyalty. As this happens, our expectation set points will be changed over and over again. And this, more than anything, will be what impacts user loyalty in the future. This changing of expectations is the single biggest threat that Google faces.

In the next post I’ll look at what happens when our expectations get reset and we have to look at adopting a new technology.

The Psychology of Usefulness: How We Made Google a Habit

In the last two posts, I looked first at the difference between autotelic and exotelic activities, then how our brain judges the promise of usefulness. In today’s post, I want to return to the original question: How does this impact user loyalty? As we use more and more apps and destinations that rely on advertising for their revenues, this question becomes more critical for those apps and destinations.

The obvious example here is search engines, the original functional destination. Google is the king of search, but also the company most reliant on these ads. For Google, user loyalty is the difference between life and death. In 2012, Google made a shade over 50 billion dollars (give or take a few hundred million). Of this, over $43 billion came from advertising revenue (about 86%), and of that revenue, 62% came from Google’s own search destinations. That’s a big chunk of revenue to come from one place, so user loyalty is something that Google is paying pretty close attention to.

Now, let’s look at how durable that hold Google has on our brains is. Let’s revisit the evaluation cascade that happens in our brain each time we contemplate a task:

  • If very familiar and highly stable, we do it by habit
  • If fairly familiar but less stable, we do it by a memorized procedure with some conscious guidance
  • If new and unfamiliar, we forage for alternatives by balancing the effort required against the expected utility of each option
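The cascade above can be sketched as a simple decision function. The thresholds and the 0–1 familiarity/stability scores are my own illustration of the idea, not anything drawn from the research:

```python
# Illustrative sketch of the task-evaluation cascade described above.
# familiarity and stability are hypothetical scores between 0 and 1.

def choose_mode(familiarity, stability):
    if familiarity > 0.8 and stability > 0.8:
        return "habit"                # plays out with no conscious guidance
    if familiarity > 0.5:
        return "memorized procedure"  # stored steps, some conscious oversight
    return "forage"                   # weigh effort against expected utility

print(choose_mode(0.9, 0.9))  # → habit
```

The point of the sketch is the ordering: the brain only escalates to effortful foraging when neither a habit nor a stored procedure will do.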

Not surprisingly, the more our brain has to be involved in judging usefulness, the less loyal we are. If you can become a habit, you are rewarded with a fairly high degree of loyalty. Luckily for Google, they fall into this category – for now. Let’s look a little more at how Google became a habit and what might have to happen for us to break this habit.

Habits depend on three things: high repetition, a stable execution environment and consistently acceptable outcomes. Google was fortunate enough to have all three factors present.

First – repetition. How many times a day do you use a search engine? For me, it’s probably somewhere between 10 and 20 times per day. And usage of search is increasing. We search more now than we did 5 years ago. If you do something that often throughout the day, it wouldn’t make much sense to force your brain to actively think its way through that task each and every time – especially if the steps required to complete that task don’t really change that much. So the brain, which is always looking for ways to save energy, records a “habit script” (or, to use the terminology of Ann Graybiel, “chunks”) that can play out without a lot of guidance. Searching definitely meets the requirements for the first step of forming a habit.

Second – stability. How many search engines do you use? If you’re like the majority of North Americans, you probably use Google for almost all your searches. This introduces what we would call a stable environment. You know where to go, you know how to use it and you know how to use the output. There is a reason why Google is very cautious about changing its layout and only does so after a lot of testing. What you expect and what you get shouldn’t be too far apart. If they are, we call it disruptive, and disruption breaks habits. This is the last thing that Google wants.

Third – acceptable outcomes. So, if stability preserves habits, why would Google change anything? Why doesn’t Google’s search experience look exactly like it did in 1998 (fun fact – if you search Google for “Google in 1998” it will show you exactly what the results page looked like)? That would truly be stable, which should keep those all-important habits glued in place. Well, because expectations change. Here’s the thing about expected utility – which I talked about in the last post. Expected utility doesn’t go away when we form a habit; it just moves downstream in the process. When we do a task for the first time, or in an unstable environment, expected utility precedes our choice of alternatives. When a “habit script” or “chunk” plays out, we still need to do a quick assessment of whether we got what we expected. Habits only stay in place if the “habit script” passes this test. If we searched for “Las Vegas hotels” and Google returned results for Russian borscht, that habit wouldn’t last very long. So Google constantly has to maintain this delicate balance – meeting expectations without disrupting the user’s experience too much. And expectations are constantly changing.

Internet adoption over time

When Google was introduced in 1998, it created a perfect storm of habit-building potential. The introduction coincided with a dramatic uptick in adoption of the internet and usage of web search in particular. In 1998, 36% of American adults were using the Internet (according to Pew). In 2000, that had climbed to 46%, and by 2001 that was up to 59%. More of us were going online, and if we were going online we were also searching. The average searches per day on Google exploded from under 10,000 in 1998 to 60 million in 2000 and 1.2 billion in 2007. Obviously, we were searching – a lot – so the frequency prerequisite was well in hand.

Now – stability. In the early days of the Internet, there was little stability in our search patterns. We tended to bounce back and forth between a number of different search engines. In fact, the search engines themselves encouraged this by providing “Try your search on…” links for their competitors (an example from Google’s original page is shown below). Because our search tasks were spread across a number of different engines, there was no environmental stability, so no chance for the creation of a true habit. The best our brains could do at this point was store a procedure that required a fair amount of conscious oversight (choosing engines and evaluating outcomes). Stability was further eroded by the fact that some engines were better at some types of searches than others. Some, like Infoseek, were better for timely searches due to their fast indexing cycles and large indexes. Some, like Yahoo, were better at canonical searches that benefited from a hierarchical directory approach. When searching in the pre-Google days, we tended to match our choice of engine to the search we were doing. This required a fairly significant degree of rational neural processing on our part, precluding the formation of a habit.

The bottom of Google’s results page, 1998

But Google’s use of PageRank changed the search ballgame dramatically. Their new way of determining relevancy rankings was consistently better for all types of searches than any of their competitors. As we started to use Google for more types of searches because of their superior results, we stopped using their competitors. This finally created the stability required for habit formation.

Finally, acceptable outcomes. As mentioned above, Google came out of the gate with outcomes that generally exceeded our expectations, which had been set by the spotty results of its competitors. Now, all Google had to do to keep the newly formed habit in place was to continue to meet the user’s expectations of relevancy. Thanks to the truly disruptive leap Google took with the introduction of PageRank, they had a huge advantage when it came to search results quality. Google has also done an admirable job of maintaining that quality over the past 15 years. While the gap has narrowed significantly (today, one could argue that Bing comes close on many searches and may even have a slight advantage on certain types of searches), Google has never seriously undershot the user’s expectations when it comes to providing relevant search results. Therefore, Google has never given us a reason to break our habits. This has resulted in a market share that has hovered over 60% for several years now.

When it comes to online loyalty, it’s hard to beat Google’s death grip on search traffic. But that grip may start to loosen in the near future. In my next post, I’ll look at the conditions that can break habitual loyalty, again using Google as an example. I’ll also look at how our brains decide to accept or reject new useful technologies.