Are Our Brains Trading Breadth for Depth?

In last week’s column, I looked at how efficient our brains are. Essentially, if the brain can identify a shortcut to an end goal, it will find it. I explained how Google is eliminating the need for us to remember easily retrievable information. I also speculated about how our brains may be defaulting to easier forms of communication, such as texting rather than talking face-to-face.

Personally, I am not entirely pessimistic about the “Google Effect,” where we put less effort into memorizing information that can be easily retrieved on demand. This is an extension of Daniel Wegner’s “transactive memory,” and I would put it in the category of coping mechanisms. It makes no sense to expend brainpower on something that technology can do more easily, quickly and reliably. As John Mallin commented, this is like using a calculator rather than memorizing times tables.

Reams of research have shown that our memories can be notoriously inaccurate. In this case, I partially disagree with Nicholas Carr. I don’t think Google is necessarily making us stupid. It may be freeing up the incredibly flexible power of our minds, giving us the opportunity to redefine what it means to be knowledgeable. Rather than a storehouse of random information, our minds may have the opportunity to become more creative integrators of available information. We may be able to expand our “meta-memory,” Wegner’s term for the layer of memory that keeps track of where to turn for certain kinds of knowledge. Our memory could become an index of interesting concepts and useful resources, rather than ad-hoc scraps of knowledge.

Of course, this positive evolution of our brains is far from a given. And here Carr may have a point. There is a difference between “lazy” and “efficient.” Technology’s freeing up of the processing power of our brains is only a good thing if that power is then put to a higher purpose. Carr’s title, “The Shallows,” is a warning that rather than freeing up our brains to dive deeper into new territory, technology may just give us the ability to skip across the surface of the titillating. Will we waste our extra time and cognitive power going from one piece of brain candy to the next, or will we invest it by sinking our teeth into something important and meaningful?

A historical perspective gives us little reason to be optimistic. We evolved to balance the efforts required to find food with the nutritional value we got from that food. It used to be damned hard to feed ourselves, so we developed preferences for high-calorie, high-fat foods that would go a long way once we found them. Thanks to technology, the only effort required today to get these foods is to pick them off the shelf and pay for them. We could have used technology to produce healthier and more nutritious foods, but market demands determined that we’d become an obese nation of junk food eaters. Will the same thing happen to our brains?

I am even more concerned with the shortcuts that seem to be developing in our social networking activities. Typically, our social networks are built from both strong ties and weak ties. Mark Granovetter identified these two types of social ties in the 1970s. Strong ties bind us to family and close friends. Weak ties connect us with acquaintances. When we hit rough patches, as we inevitably do, we treat those ties very differently. Strong ties are typically much more resilient to adversity. When we hit the lowest points in our lives, it’s the strong ties we depend on to pull us through. Our lifelines are made up of strong ties. If we have a disagreement with someone with whom we have a strong tie, we work harder to resolve it. We have made large investments in these relationships, so we are reluctant to let them go. When there are disruptions in our strong-tie network, there is a strong motivation to eliminate the disruption, rather than sacrifice the network.

Weak ties are a whole different matter. We have minimal emotional investments in these relationships. Typically, we connect with these either through serendipity or when we need something that only they can offer. For example, we typically reinstate our weak tie network when we’re on the hunt for a job. LinkedIn is the virtual embodiment of a weak tie network. And if we have a difference of opinion with someone to whom we’re weakly tied, we just shut down the connection. We have plenty of them so one more or less won’t make that much of a difference. When there are disruptions in our weak tie network, we just change the network, deactivating parts of it and reactivating others.

Weak ties are easily built. All we need is just one thing in common at one point in our lives. It could be working in the same company, serving on the same committee, living in the same neighborhood or attending the same convention. Then, we just need some way to remember them in the future. Strong ties are different. Strong ties develop over time, which means they evolve through shared experiences, both positive and negative. They also demand consistent communication, including painful communication that sometimes requires us to say we were wrong and we’re sorry. The conversations that leave you either emotionally drained or supercharged are the stuff of strong ties. And a healthy percentage of these conversations should happen face-to-face. Could you build a strong-tie relationship without ever meeting face-to-face? We’ve all heard examples, but I’d place my bets on face-to-face every time.

It’s the hard work of building strong ties that I fear we may miss as we build our relationships through online channels. I worry that the brain, given an easy choice and a hard choice, will naturally opt for the easy one. Online, our network of weak ties can grow beyond the inherent limits of our social inventory, known as Dunbar’s Number (which is 150, by the way). We could always find someone with whom to spend a few minutes texting or chatting online. Then we can run off to the next one. We will skip across the surface of our social network, rather than invest the effort and time required to build strong ties. Just like our brains, our social connections may trade breadth for depth.

The Pros and Cons of a Fuel Efficient Brain

Your brain will only work as hard as it has to. And if it makes you feel any better, my brain is exactly the same. That’s the way brains work. They conserve horsepower until it’s absolutely needed. In the background, the brain is doing a constant calculation: “What do I want to achieve and, based on everything I know, what is the easiest way to get there?” You could call it lazy, but I prefer the term “efficient.”

The brain has a number of tricks to do this that require relatively little thinking. In most cases, they involve swapping in something that’s easy for your brain to do in place of something difficult. For instance, consider when you vote. It would be extraordinarily difficult to weigh all the factors involved to make a truly informed vote. It would require a ton of brainpower. But it’s very easy to vote for whom you like. We have a number of tricks we use to immediately assess whether we like and trust another individual. They require next to no brainpower. Guess how most people vote? Even those of us who pride ourselves on being informed voters rely on these brain shortcuts more than we would like to admit.

Here’s another example that’s just emerging, thanks to search engines. It’s called the Google Effect, and it’s an extension of a concept called transactive memory. Researchers Betsy Sparrow, Jenny Liu and Daniel Wegner identified the Google Effect in 2011. Essentially, it means that we won’t bother to remember something that we can easily reference when we need it. When Wegner first explained transactive memory back in the 1980s, he used the example of a husband and wife. The wife was good at remembering important dates, such as anniversaries and birthdays. The husband was good at remembering financial information, such as bank balances and when bills were due. The wife didn’t have to remember financial details, and the husband didn’t have to worry about dates. All each had to remember was what the other was good at remembering. Wegner called this “chunking” of our memory requirements “metamemory.”

If we fast-forward 30 years from Wegner’s original paper, we find a whole new relevance for transactive memory, because we now have the mother of all “metamemories,” called Google. If we hear a fact but know that it’s something that can easily be looked up on Google, our brains automatically decide to expend little to no effort in trying to memorize it. Subconsciously, the brain goes into power-saver mode. All we remember is that when we do need to retrieve the fact, it will be a few clicks away on Google. Nicholas Carr fretted about whether this and other cognitive shortcuts were making us stupid in his book “The Shallows.”

But there are other side effects that come from the brain’s tendency to look for shortcuts without our awareness. I suspect the same thing is happening with social connections. Which would you think requires more cognitive effort: a face-to-face conversation with someone, or texting them on a smartphone?

Face-to-face conversation can put a huge cognitive load on our brains. We’re receiving communication at a much greater bandwidth than with text. When we’re across from a person, we not only hear what they’re saying, we’re reading emotional cues, watching facial expressions, interpreting body language and monitoring vocal tones. It’s a much richer communication experience, but it’s also much more work. It demands our full attention. Texting, on the other hand, can easily be done along with other tasks. It’s asynchronous – we can pause and pick up whenever we want. I suspect it’s no coincidence that younger generations are moving more and more to text-based digital communication. Their brains are pushing them in that direction because it’s less work.

One of the great things about technology is that it makes our lives easier. But is that also a bad thing? If we know that our brains will always opt for the easiest path, are we putting ourselves in a long, technology-aided death spiral? That was Nicholas Carr’s contention. Or are we freeing up our brains for more important work?

More on this to come next week.

When Are Crowds Not So Wise?

Since James Surowiecki published his book “The Wisdom of Crowds”, the common wisdom is – well – that we are commonly wise. In other words, if we average the knowledge of many people, we’ll be smarter than any of us would be individually. And that is true – to an extent. But new research suggests that there are group decision dynamics at play where bigger crowds may not always be better.

A recent study by Iain Couzin and Albert Kao at Princeton suggests that in real-world situations, where information is more complex and spotty, the benefits of crowd wisdom peak in groups of 5 to 20 participants and then decrease after that. The difference comes in how the group processes the information available to it.

In Surowiecki’s book, he uses the famous example of Sir Francis Galton’s 1907 observation of a contest where villagers were asked to guess the weight of an ox. While no individual correctly guessed the weight, the average of all the guesses came in just one pound short of the correct number. But this example has one unique characteristic that would be rare in the real world – every guesser had access to the same information. They could all see the ox and make their guess. Unless you’re guessing the number of jellybeans in a jar, this is almost never the case in actual decision scenarios.
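Galton’s averaging effect is easy to see in a toy simulation (my own illustrative numbers, not Galton’s data): when every guess is the truth plus an independent personal error, the errors cancel out as the crowd grows.

```python
import numpy as np

rng = np.random.default_rng(42)

TRUE_WEIGHT = 1198      # the ox's actual weight, in pounds
N_GUESSERS = 10_000

# Each guess is the truth plus an independent personal error.
guesses = TRUE_WEIGHT + rng.normal(0, 100, N_GUESSERS)

# No individual is reliable, but the average lands very close to the truth.
crowd_estimate = guesses.mean()
print(round(crowd_estimate))
```

The catch, of course, is that this cancellation depends on the errors being independent, which is exactly what breaks down when everyone is working from the same patchy information.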

Couzin and Kao say this information “patchiness” is the reason why accuracy tends to diminish as the crowd gets bigger. In most situations, there is commonly understood and known information, which the researchers refer to as “correlated information.” But there is also information that only some of the members of the group have, which is “uncorrelated information.” To make matters even more complex, the nature of uncorrelated information will be unique to each individual member. In real life, this would be our own experience, expertise and beliefs.  To use a technical term, the correlated information would be the “signal” and the uncorrelated information would be the “noise.” The irony here is that this noise is actually beneficial to the decision process.

In big groups, the collected “noise” gets so noisy it becomes difficult to manage and so it tends to get ignored. It drowns itself out. The collective focuses instead on the correlated information. In engineering terms this higher signal-to-noise ratio would seem to be ideal, but in decision-making, it turns out a certain amount of noise is a good thing. By focusing just on the commonly known information, the bigger crowd over-simplifies the situation.

Smaller groups, in contrast, tend to be more random in their makeup. The differences in experiences, knowledge, beliefs and attitudes, even if not directly correlated to the question at hand, have a better chance of being preserved. They don’t get “averaged out” like they would in a bigger group. And this “noise” leads to better decisions if the situation involves imperfect information. Call it the averaging of intuition, or hunches. In a big group, the power of human intuition gets sacrificed in favor of the commonly knowable. But in a small group, it’s preserved.
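A toy majority-vote simulation (my own construction, not Couzin and Kao’s actual model) captures the flavor of this result. Every agent sees one shared, mediocre cue (the correlated information); each also has an independent, slightly better private hunch (the uncorrelated “noise”); and each agent flips a coin to decide which to follow.

```python
import numpy as np

rng = np.random.default_rng(0)

P_SHARED = 0.55    # the commonly known ("correlated") cue: barely better than chance
P_PRIVATE = 0.60   # each member's private ("uncorrelated") hunch: a bit better

def group_accuracy(n, trials=20_000):
    # One shared-cue realization per trial; everyone in the group sees the same one.
    shared_right = rng.random(trials) < P_SHARED
    # Each agent flips a coin: follow the shared cue, or follow a private hunch.
    use_shared = rng.random((trials, n)) < 0.5
    private_right = rng.random((trials, n)) < P_PRIVATE
    votes_right = np.where(use_shared, shared_right[:, None], private_right)
    # The group decides by simple majority (odd n avoids ties).
    return (votes_right.sum(axis=1) > n / 2).mean()

for n in (1, 5, 21, 101):
    print(n, group_accuracy(n))   # accuracy peaks for small groups, then declines
```

In this sketch, accuracy tops out in small groups and then drifts back down toward the reliability of the shared cue alone, because in a big group the correlated signal swamps the averaged-out hunches.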

In the world of corporate strategy, this has some interesting implications. Business decisions are almost always complex and involve imperfectly distributed information. This research seems to indicate that we should carefully consider our decision-making units. There is a wisdom of crowds benefit as long as the crowd doesn’t get too big. We need to find a balance where we have the advantage of different viewpoints and experiences, but this aggregate “noise” doesn’t become unmanageable.

The Power of Meta

First published April 24, 2014 in Mediapost’s Search Insider

To the best of our knowledge, humans are the only species capable of thinking about thinking, even though most of us don’t do it very often. We use the Greek word “meta” to talk about this ability. Basically, “meta” refers to a concept that is an abstraction of another concept – an instruction sheet for whatever the original thing is.

Because humans can grasp this concept, it can be a powerful way to overcome the limits of our genetic programming. Daniel Kahneman’s book, Thinking, Fast and Slow, is essentially a meta-guide to the act of thinking – an owner’s guide for our minds. In it, he catalogs evolution’s extensive list of cognitive “gotchas” that can waylay our rational reasoning.

In our digital world, we use the word “metadata” a lot. Essentially, metadata is a guide to the subject data. It sits above the data in question, providing essential information about it, such as sources, structure, indexing guides, etc. Increasingly, as we get data from more and more disparate sources, metadata will be required to use it. Ideally, it will provide a universally understood implementation guide. This, of course, requires a common schema for metadata, something that organizations like schema.org are currently working on.
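A tiny sketch of the idea, with made-up source and field names (not a real schema.org vocabulary): two sources report the same quantity under different names and units, and it’s the metadata layer that lets code treat them interchangeably.

```python
# Two hypothetical sources reporting the same quantity under different
# names and units; the metadata layer is what makes them interoperable.
source_a = {"temp_f": 68.0}
source_b = {"temperature": 20.0}

metadata = {
    "source_a": {"field": "temp_f", "unit": "fahrenheit"},
    "source_b": {"field": "temperature", "unit": "celsius"},
}

def to_celsius(value, unit):
    # Convert to a canonical unit, as declared by the metadata.
    return (value - 32) * 5 / 9 if unit == "fahrenheit" else value

def normalize(name, record):
    meta = metadata[name]          # consult the metadata layer, not the raw record
    return to_celsius(record[meta["field"]], meta["unit"])

print(normalize("source_a", source_a))   # both map to the same canonical value
print(normalize("source_b", source_b))
```

Without the metadata dictionary, the integrating code would have to hard-wire knowledge of every source; with it, adding a new source is just one more metadata entry.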

Meta is a relatively new concept that has exploded in the last few decades. It’s one of those words we throw around, but we probably don’t stop to think about it. Its power lies in its ability both to “mark up” the complexity of the real world, giving us another functional layer in which to operate, and to let us examine ourselves and overcome some of the mental foibles we’re subject to.

According to Wikipedia, there are over 160 cognitive biases that can impact our ability to rationally choose the optimal path. They include the Cheerleader Effect, where individuals appear more attractive in a group, the IKEA Effect, where we overvalue something we assemble ourselves, and the Google Effect, where we tend to forget information we know we can look up on Google. These are like little bugs in our operating software, and most of the time they impact our rational performance without our even being aware of them. But if we have a meta-awareness of them, we can mitigate them to a large degree. We can step back from our decision process and see where biases may be clouding our judgment.

Meta also allows us to model and categorize complexity. It allows us to append data to data, exponentially increasing the value of the aggregated data set. This becomes increasingly important in the new era of Big Data. The challenge with Big Data is that it’s not just more data; in this case, more is different. Big Data typically comes from multiple structured sources, and when it’s removed from the guidance of its native contextual schema, it becomes unwieldy. A metadata layer gives us a Rosetta Stone with which we can integrate these various data sources. And it’s in combining data in new ways that the value of Big Data can be found.

Perhaps the most interesting potential of meta is in how we might create a meta-model of ourselves. I’ve talked about this before in the context of social media. Increasingly, our interactions with technology will gain value from personalization, and each of us will be generating reams of personal data. There needs to be an efficient connection between the two. We can’t invest the time required to train all these platforms, tools and apps to know us better. It makes sense to consolidate the most universally applicable data about us into a meta-profile of our goals, preferences and requirements. In effect, it will be a technologically friendly abstraction of who we are. If we can agree on a common schema for these meta-profiles, the developers of technology can build their various tools to recognize them and reconfigure their functionality to fit us.

As our world becomes more complex, the power of meta will become more and more important.

Five Years Later – An Answer to Lance’s Question (kind of)

It never ceases to amaze me how writing can take you down the most unexpected paths, if you let it. Over five years ago now, I wrote a post called “Chasing Digital Fluff – Who Cares about What’s Hot?” It was a rant, aimed at marketers’ preoccupation with the latest bright, shiny object. At the time, it was social. My point was that true loyalty needs stable habits to emerge. If you’re constantly chasing the latest thing, your audience will be in a constant state of churn. You’d be practicing “drive-by” marketing. If you want to find stability, target what your audience finds useful.

This post caused my friend Lance Loveday to ask a very valid question: “What about entertainment?” Do we develop loyalty to things that are entertaining? So I started a series of posts on the Psychology of Entertainment. What types of things do we find entertaining? How do we react to stories, or humor, or violence? And how do audiences build around entertainment? As I explored the research on the topic, I came to the conclusion that entertainment is a by-product of several human needs: the need to bond socially, the need to be special, our appreciation for others whom we believe to be special, a quest for social status and artificially stimulated tweaks to our oldest instincts, to survive and to procreate. In other words, after a long and exhausting journey, I concluded that entertainment lives in our phenotype, not our genotype. Entertainment serves no direct evolutionary purpose, but it lives in the shadows of many things that do.

So, what does this mean for stability of an audience for entertainment? Here, there is good news, and bad news. The good news is that the raw elements of entertainment haven’t really changed that much in the last several thousand years. We can still be entertained by a story that the ancient Romans might have told. Shakespeare still plays well to a modern audience. Dickens is my favorite author and it’s been 144 years since his last novel was published. We haven’t lost our evolved tastes for the basic building blocks of entertainment. But, on the bad news side, we do have a pretty fickle history when it comes to the platforms we use to consume our entertainment.

This then introduces a conundrum for the marketer. Typically, our marketing channels are linked to platforms, not content, and technology has made this an increasingly difficult challenge. While we may connect to, and develop a loyalty for, specific entertainment content, it’s hard for marketers to know which platform we may consume that content on. Take Dickens, for example. Even if you, the marketer, know there’s a high likelihood that I may enjoy something by Dickens in the next year, you won’t know if I’ll read a book on my iPad, pick up an actual book or watch a movie on any one of several screens. I’m loyal to Dickens, but I’m agnostic as to which platform I use to connect with his work. As long as marketing is tied to entertainment channels, and not entertainment content, we are restricted to targeting our audience in an ad hoc and transitory manner. This is one reason why brands have rushed to use product placement and other types of embedded advertising, where the message is set free from the fickleness of platform delivery challenges. If you happen to be a fan of American Idol, you’re going to see the Coke and Ford brands displayed prominently whether you watch on TV, your laptop, your tablet or your smartphone.

It’s interesting to reflect on the evolution of electronic media advertising and how it’s come full circle in this one regard. In the beginning, brands sponsored specific shows. Advertising messages were embedded in the content. Soon, however, networks, which controlled the only consumption choice available, realized it was far more profitable to decouple advertising from the content and run it in freestanding blocks during breaks in their programming. This decoupling was fine as long as there was no fragmentation in the channels available to consume the content, but obviously this is no longer the case. We now watch TV on our schedule, at our convenience, through the device of our choice. Content has been decoupled from the platform, leaving the owners of those platforms scrambling to evolve their revenue models.

So – we’re back to the beginning. If we want to stabilize our audience to allow for longer-term relationship building, what are our options? Obviously, entertainment offers some significant challenges in this regard, due mainly to the fragmentation of the platforms we use to consume that content. If we use usefulness as a measure, the main factors in determining loyalty are frequency and stability. If you provide a platform that becomes a habit, as Google has, then you’ll have a fairly stable audience. It won’t destabilize until there is a significant enough resetting of user expectations, forcing the audience to abandon habits (always very tough to do) and start searching for another useful tool that is a better match for the reset expectations. If this happens, you’ll be continually following your audience through multiple technology adoption curves. Still, it seems that usefulness offers a better shot at a stable audience than entertainment.

But there’s still one factor we haven’t explored: what part does social connection play? Obviously, this is a huge question that the revenue models of Facebook, Twitter, Snapchat and others will depend on. So, with entertainment and usefulness explored ad nauseam in this series of posts, I’ll start tracking down the Psychology of Social Connection.

Letting the Foxes into Journalism’s Hen (Hedgehog) House

First published March 27, 2014 in Mediapost’s Search Insider

I am rooting for Nate Silver and fivethirtyeight.com, his latest attempt to introduce a little data-driven veracity into the murky and anecdotal world of journalism. But I may be one of the few, at least if we take the current backlash as a non-scientific, non-quantitative sample:

I have long been a fan of Nate Silver, but so far I don’t think this is working. – Tyler Cowen, Marginal Revolution

Nate Silver’s new venture may become yet another outlet for misinformation when it comes to the issue of human-caused climate change. – Michael Mann, director of the Earth System Science Center at Pennsylvania State University

Here’s hoping that Nate Silver and company up their game, soon. – Paul Krugman, NY Times

Krugman also states:

You can’t be an effective fox just by letting the data speak for itself — because it never does. You use data to inform your analysis, you let it tell you that your pet hypothesis is wrong, but data are never a substitute for hard thinking.

Now, Nate Silver doesn’t disagree with this. In fact, he says pretty much the same thing in his book, The Signal and the Noise:

The numbers have no way of speaking for themselves. We speak for them. We imbue them with meaning.

But he goes on,

Like Caesar, we may construe them in self-serving ways that are detached from their objective reality.

And it’s this construal that Silver is hoping to nip in the bud with FiveThirtyEight. In essence, he wants to do it by being a Fox, to borrow from Isaiah Berlin’s analogy.

‘The fox knows many things, but the hedgehog knows one big thing.’ We take a pluralistic approach and we hope to contribute to your understanding of the news in a variety of ways.

Silver thinks the media’s preoccupation with punditry is a dangerous thing. Pundits, whether they’re coming from the right or left, are Hedgehogs. They get paid for their expertise on “one big thing.” And the more controversial their stand, the more attention they get. This can lead to a dangerous spiral, as researcher Philip Tetlock found out:

What experts think matters far less than how they think. If we want realistic odds on what will happen next, coupled to a willingness to admit mistakes, we are better off turning to experts who embody the intellectual traits of Isaiah Berlin’s prototypical fox—those who “know many little things,” draw from an eclectic array of traditions, and accept ambiguity and contradiction as inevitable features of life.

Tetlock was researching how expertise correlated with the ability to make good predictions. What he found was actually an inverse relationship: the higher the degree of expertise, the more likely the person in question was a hedgehog. Media pundits are usually extreme versions of hedgehogs, who not only have one worldview, but also love to talk about it. Nate Silver believes that to get an objective view of world events, you need to be a fox first; but second, you should be a fox that’s good at sifting through data:

Conventional news organizations on the whole are lacking in data journalism skills, in my view. Some of this is a matter of self-selection. Students who enter college with the intent to major in journalism or communications have above-average test scores in reading and writing, but below-average scores in mathematics.

So, all this makes sense. The problem with Silver’s approach is that journalism is the way it is because that’s the way humans want it. While I applaud Silver’s determination to change it, he may be trying to push water uphill. Pundits exist not just because the media keeps pushing them in front of us; they exist because we keep listening. Humans like opinions and anecdotes. We’re not hardwired to process data and objectively rationalize. We connect with stories, and we’re drawn to decisive opinion leaders. Silver will have to find some middle ground here, and that seems to be where the problems arise. The minute writers add commentary to data, they have to impose an ideological viewpoint. It’s impossible not to. And when you do that, you introduce a degree of abstraction.

The backlash against Fivethirtyeight.com generally falls into two camps: Foxes like Silver, who have no problem with the approach but disagree with the specific data put forward, and Hedgehogs, who just don’t like the entire concept. The first camp may come onside as Silver and his team work out the inevitable hiccups in their approach. The second camp, which, it should be noted, has a large number of pundits in its midst, will never become fans of Silver and his foxlike approach.

In the end though, it really doesn’t matter what columnists and journalists think. It’s up to the consumers of news media. We’ll decide what we like better – hedgehogs or foxes.

The Bug in Google’s Flu Trend Data

First published March 20, 2014 in Mediapost’s Search Insider

Last year, Google Flu Trends blew it. Even Google admitted it. It overpredicted the occurrence of flu by a factor of almost 2:1. That was a good thing for the health care system, because if Google’s predictions had been right, we would have had the worst flu season in 10 years.

Here’s how Google Flu Trends works. It monitors a set of approximately 50 million flu-related terms for query volume. It then compares this against data collected from health care providers where Influenza-like Illnesses (ILI) are mentioned during a doctor’s visit. Since the tracking service was first introduced, there has been a remarkably close correlation between the two, with Google’s predictions typically coming within 1 to 2 percent of the number of doctor’s visits where the flu bug is actually mentioned. The advantage of Google Flu Trends is that it is available about two weeks prior to the ILI data, giving a much-needed head start for responsiveness during the height of flu season.

But last year, Google’s estimates overshot actual ILI data by a substantial margin, effectively doubling the size of the predicted flu season.

Correlation is not Causation

This highlights a typical trap with big data: we tend to start following the numbers without remembering what is generating them. Google measures what’s on people’s minds. ILI data measures what people are actually going to the doctor about. The two are highly correlated, but one doesn’t necessarily cause the other. In 2013, for instance, Google speculated that increased media coverage might be the cause of the overinflated predictions. More news coverage would have spiked interest, but not actual occurrences of the flu.
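A toy model with synthetic numbers (mine, not Google’s) shows how a query-based predictor goes wrong when the cue decouples from the cause: train a fit on seasons where searches track illness, then let a media frenzy double searching while actual illness stays flat.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical history: the ILI rate drives query volume, plus a little noise.
ili = rng.uniform(1.0, 6.0, size=50)                  # % of doctor visits
queries = 10_000 * ili + rng.normal(0, 1_000, 50)     # searches track illness

# Fit the correlation: predict ILI rate from query volume.
slope, intercept = np.polyfit(queries, ili, 1)

# A media frenzy doubles searching while actual illness sits at 3%.
actual_ili = 3.0
spiked_queries = 2 * 10_000 * actual_ili
predicted = slope * spiked_queries + intercept
print(predicted, actual_ili)   # the prediction roughly doubles the real rate
```

The regression is a perfectly good model of the correlation; it just has no way of knowing that this time the queries were caused by headlines rather than symptoms.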

Allowing for the Human Variable

In the case of Google Flu Trends, because it’s using a human behavior as a signal – in this case online searching for information – it’s particularly susceptible to network effects and information cascades. The problem with this is that these social signals are difficult to rope into an algorithm. Once they reach a tipping point, they can break out on their own with no sign of a rational foundation. Because Google tracks the human generated network effect data and not the underlying foundational data, it is vulnerable to these weird variables in human behavior.

Predicting the Unexpected

A recent article in Scientific American pointed out another issue with an overreliance on data models: Google Flu Trends completely missed the non-seasonal H1N1 pandemic in 2009. Why? Algorithmically, Google wasn’t expecting it. In trying to eliminate noise from the model, they actually eliminated signal that arrived at an unexpected time. Models don’t do very well at predicting the unexpected.

Big Data Hubris

The author of the Scientific American piece, associate editor Larry Greenemeier, nailed another common symptom of our emerging crush on data analytics – big data hubris. We somehow think the quantitative black box will eliminate the need for more mundane data collection – say – actually tracking doctor’s visits for the flu. As I mentioned before, the biggest problem with this is that the more we rely on data, which often takes the form of arm’s length correlated data, the further we get from exploring causality. We start focusing on “what” and forget to ask “why.”

We should absolutely use all the data we have available. The fact is, Google Flu Trends is a very valuable tool for health care management. It provides a lot of answers to very pertinent questions. We just have to remember that it’s not the only answer.

Can Facebook Maintain High Ground?

First published March 13, 2014 in Mediapost's Search Insider

As I said in my last column, Facebook's recent acquisition spree seems to indicate that they're trying to evolve from being our Social Landmark to being a virtual map that guides us through our social activity. But as Facebook rolls out new features or acquires one-time competitors in order to complete this map of the social landscape, will we use it? Snapchat CEO Evan Spiegel apparently doesn't think so. That's part of the reason he turned down $3 billion from Facebook.

At the end of 2012, Mark Zuckerberg paid Spiegel and his team a visit. The purpose of the visit was to scare the bejeezus out of Snapchat by threatening to crush them with the rollout of Poke. Of course, we now know that Poke was a monumental flop while Snapchat rolled along quite nicely, thank you. Several months later, Zuck flew out to meet with the Snapchat team again, taking a decidedly different tone this time. He also brought along a very big checkbook. Snapchat said thanks, but no thanks.

So, how can a brash startup like Snapchat beat the 800-lb. gorilla in its own backyard? Why was Poke DOA? Was it a one-of-a-kind miscue on Facebook's part – or part of a trend?

Part of the answer may lie in how we feel about novelty vs. familiarity in the things we deal with. As I said in the last column, we go through three stages when we explore new landscapes. We move from navigating by landmarks to memorizing routes and, finally, we create our own mental maps of the space, allowing us to plot our own routes as needed. If we apply this to navigating a virtual space like the online social sphere, we should move from relying on landmarks (like Facebook) to using routes (single-purpose apps like Snapchat) and, finally, to creating our own map that allows us to switch back and forth between apps as required. Facebook wants to jump from the first stage to the last in order to remain dominant in the social market, maintaining our map for us by becoming a hub for all required social functionality. But if the Poke story is any indication, we may not be willing to go along for the ride.

But there’s a subtle psychological point to how we learn to navigate new landscapes – we gain mastery over our environment. With this increased confidence comes a reluctance to feel we’re moving backward. We tend to discard the familiar and embrace novelty as we gain confidence. This squares with research done in the familiarity and novelty seeking in humans. We look for familiarity in things that have high degrees of risk, in the faces of others around us or when we’re operating on autopilot. But when we’re actively considering and judging options and looking for new opportunities, we are drawn to new things.

Humans are natural foragers. We have built-in rules of conduct when we go out seeking things that will improve our lot, whether it be food, shelter or tools. Ideally, we look for things that will offer us a distinct advantage over the status quo for a reasonable investment of effort. We balance the two – advantage against effort. If the new options come from an overly familiar place, we tend to mentally discount the potential advantage because we no longer feel we're moving forward. Over time, this builds into a general feeling of malaise toward the overly familiar.
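As a back-of-the-envelope sketch of that trade-off (the scoring function, the 0.5 discount weight and all the numbers are invented purely for illustration), you could score an option as its advantage over the status quo, discounted by familiarity, minus the effort to adopt it:

```python
# Hypothetical forager's score: perceived advantage is the raw advantage
# discounted by familiarity; adoption effort is then subtracted.
def forage_score(advantage: float, effort: float, familiarity: float,
                 discount: float = 0.5) -> float:
    perceived_advantage = advantage * (1 - discount * familiarity)
    return perceived_advantage - effort

# An incumbent (maximally familiar) vs. a newcomer offering the same raw
# advantage at slightly higher adoption effort:
incumbent = forage_score(advantage=3.0, effort=1.0, familiarity=1.0)
newcomer = forage_score(advantage=3.0, effort=1.5, familiarity=0.1)
print(incumbent, newcomer)  # the over-familiar incumbent scores lower
```

The numbers don't matter; the shape does. Once familiarity discounts the incumbent's advantage enough, even a costlier newcomer wins the comparison.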

Time will tell if Evan Spiegel was prescient or just plain stupid in turning down Facebook's offer. The question is not so much whether Facebook will prevail, but whether Snapchat will emerge as a lasting part of the social landscape. That particular landscape is notoriously unstable, and it's been known to swallow up many, many other companies with nary a burp. Perhaps Spiegel should have taken the money and run.

But then I wouldn’t be betting the farm on Facebook’s chances of permanence either.

Finding Our Way in the Social Landscape

First published March 6, 2014 in Mediapost’s Search Insider

Last month, Om Malik (of GigaOM fame) wrote an article in Fast Company about the user backlash against Facebook. To be fair, it seems that what's happening to Facebook is not so much a backlash as apathy. You have to care to lash back. This is more of a wholesale abandonment, as millions of users go elsewhere – using single-purpose apps to get their social media fix. According to the article,

“we cycle between periods in which we want all of our Internet activity consolidated and other times in which we want a bunch of elegant monotaskers. Clearly we have reentered a simplification phase.”

There’s a reason why Facebook has been desperately trying to acquire Snapchat for a reported $3 Billion. There’s also a reason why they picked up Instagram for a billion last year.  It’s because these simple little apps are leaving the home grown Facebook alternatives in the dust. Snapchat is killing Facebook’s Poke – as Mashable pointed out in this comparison.  Snapchat has consistently stayed near the top of App Annie’s most popular download chart for the past 18 months. This coincides exactly with Facebook’s release of Poke.


Download rates of Facebook Poke


Download rates of Snapchat

Malik indicates it's because we want a simpler, streamlined experience. A recent article in Business Insider goes one step further – Facebook is just not cool anymore. The mere name induces extended eye rolling in teenagers. It's like parking the family minivan in the high school parking lot. "I hate Facebook. It's just so boring," said one of the teens interviewed. Hate! That's a pretty strong word. What did the Zuck ever do to garner such contempt? Maybe it's because he's turning 30 in a few months. Maybe it's because he's an old married man.

Or maybe it’s just that we have a better alternative. Malik has a good point. He indicates that we tend to oscillate between consolidation and specialization. I take a bit different view. What’s happening in social media is that we’re getting to know the landscape better. We’re finding our way. This isn’t so much about changing tastes as it is about increased familiarity and a resetting of expectations.

If you look at how humans navigate new environments, you'll notice some striking similarities. When we encounter a new landscape, we go through three phases of wayfinding. We begin by relying on landmarks. These are the "highest ground" in a new, unfamiliar landscape, and we navigate relative to them. They become our reference points, and we don't stray far from them. Facebook is, you guessed it, a landmark.

The next phase is called "Route Knowledge." Here, we memorize the routes we use to get from landmark to landmark. We come to recognize the paths we take all the time. In the world of online landscapes, you could substitute the word "app" for "route." Instagram, Snapchat, Vine and the rest are routes we use to get where we need to go quickly and easily. They're our virtual shortcuts.

The last stage of wayfinding is "Survey Knowledge." Here, we are familiar enough with a landscape that we've acquired a mental "map" of it and can mentally calculate alternative routes to our destination. This is how you navigate in your hometown.

What’s happening to Facebook is not so much that our tastes are swinging. It’s just that we’re confident enough in our routes/apps that we’re no longer solely reliant on landmarks.  We know what we want to do and we know the right tool to use. The next stage of wayfinding, Survey Knowledge, will require some help, however. I’ve talked in the past about the eventual emergence of meta-apps. These will sit between us and the dynamic universe of tools available. They may be largely or even completely transparent to us. What they will do is learn about us and our requirements while maintaining an inventory of all the apps at our disposal. Then, as our needs arise, it will serve up the right app for the job. These meta-apps will maintain our survey knowledge for us, keeping a virtual map of the online landscape to allow us to navigate at will.

As Facebook tries to gobble up the Instagrams and Snapchats of the world, they’re trying to become both a landmark and a meta-app. Will they succeed? I have my thoughts, but those will have to wait until a future column.

How Can Humans Co-Exist with Data?

First published February 6, 2014 in Mediapost’s Search Insider

Last week, I talked about our ability to ignore data. I positioned this as a bad thing. But Pete Austin called me on it, with an excellent counterpoint:

"Ignoring Data is the most important thing we do. Only the people who could ignore the trees and see the tiger, in real-time, survived to become our ancestors."

Too true. We’re built to subconsciously filter and ignore vast amounts of input data in order to maintain focus on critical tasks, such as avoiding hungry tigers. If you really want to dive into this, I would highly recommend Daniel Simons and Christopher Chabris’s “The Invisible Gorilla.” But, as Simons and Chabris point out, with example after example of how our intuitions (which we use as filters) can mislead us, this “inattentional blindness” is not always a good thing. In the adaptive environment in which we evolved, it was pretty effective at keeping us alive.  But in a modern, rational environment, it can severely inhibit our ability to maintain an objective view of the world.

But Pete also had a second, even more valid point:

“What you need to concentrate on now is “curated data”, where the junk has already been ignored for you.”

And this brought to mind an excellent example from a recent interview I did as background for an upcoming book I’m working on.  This idea of pre-filtered, curated data becomes a key consideration in this new world of Big Data.

Nowhere are the stakes higher for the use of data than in healthcare. That's what led to the publication of a manifesto in 1992 calling for a revolution in how doctors make life-and-death decisions. One of the authors, Dr. Gordon Guyatt, coined the term "evidence-based medicine." The rationale is simple. By taking an empirical approach not just to diagnosis but also to the best prescriptive path, doctors can rise above the limitations of their own intuition and achieve higher accuracy. It's data-driven decision-making, applied to health care. Makes perfect sense, right? But even though evidence-based medicine is now over 20 years old, it's still difficult to apply consistently at the level of the individual doctor and patient.

I had the chance to ask Dr. Guyatt why this was:

“Essentially after medical school, learning the practice of medicine is an apprenticeship exercise and people adopt practice patterns according to the physicians who are teaching them and their role models and there is still a relatively small number of physicians who really do good evidence-based practice themselves in terms of knowing the evidence behind what they’re doing and being able to look at it critically.”

The fact is, a data-driven approach to any decision-making domain that previously relied on intuition just doesn't feel – well – very intuitive. It's hard work. It's time-consuming. And, to Mr. Austin's point, it runs directly counter to our tiger-avoidance instincts.

Dr. Guyatt confirms that physicians are not immune to this human reliance on instinct:

“Even the best folks are not going to do it – maybe the best folks – but most folks are not going to be able to do that very often.”

The answer in healthcare, and likely everywhere else where data should back up intuition, is the creation of solid, data-based resources that adhere to empirical best practices without requiring every single practitioner to do the heavy lifting. Dr. Guyatt has seen exactly this trend emerge over the last decade:

“What you need is preprocessed information. People have to be able to identify good preprocessed evidence-based resources where the people producing the resources have gone through that process well.”

The promise of curated, preprocessed data looms large in the world of marketing. The challenge is that, unlike medicine, where data is commonly shared and archived, in marketing much of the most important data stays proprietary. What we have to start thinking about is a truly empirical, scientific way to curate, analyze and filter our own data for internal consumption, so it can be readily applied in real-world situations without falling victim to human bias.