Can Stories Make Us Better?

In writing this column, I often put ideas on the shelf for a while. Sometimes, world events conspire to make one of these shelved ideas suddenly relevant. This happened this past weekend.

The idea that caught my eye some months ago was an article that explored whether robots could learn morality by reading stories. On the face of it, it was mildly intriguing. But early Sunday morning as the heartbreaking news filtered to me from Orlando, a deeper connection emerged.

When we speak of unintended consequences, as we have before, the media amplification of acts of terror is one of them. The staggeringly sad fact is that shocking casualty numbers have their own media value. And that, said one analyst commenting on ways to deal with terrorism, is a new reality we have to come to terms with. When we in the media business make stories newsworthy, we assign worth not just for news consumers but also for newsmakers – those troubled individuals who have the motivation and the means to blow apart the daily news cycle.

This same analyst, when asked how we deal with terrorism, made the point that you can't prevent lone acts of terrorism. The only answer is to use that same network of cultural connections we use to amplify catastrophic events to create an environment that dampens rather than intensifies violent impulses. We in the media and advertising industries have to use our considerable skills in setting cultural contexts to create an environment that reduces the odds of a violent outcome. And sadly, this is a game of odds. There are no absolute answers here – there is just a statistical lowering of the curve. Sometimes, despite your best efforts, the unimaginable still happens.

But how do we use the tools at our disposal to amplify morality? Here, perhaps the story I shelved some months ago can provide some clues.

In a study from Georgia Tech, Mark Riedl and Brent Harrison used stories as models of acceptable morality. For most of human history, popular culture included at least an element of moral code. We encoded the values we held most dear into our stories. They provided a baseline for acceptable behavior, either through positive reinforcement of commonly understood virtues (prudence, justice, temperance, courage, faith, hope and charity) or warnings about universal vices (lust, gluttony, greed, sloth, wrath, envy and pride). Sometimes these stories had religious foundations, sometimes they were secular morality fables, but they all served the same purpose. They taught us what was acceptable behavior.

Stories were not originally intended to entertain. They were created to pass along knowledge and cultural wisdom. Entertainment came later, when we discovered that the more entertaining the story, the more effective it was at its primary purpose: education. And this is how the researchers used stories. Robots can't be entertained, but they can be educated.

At some point in the last century, we began favoring the entertainment value of stories over their educational value and, in doing so, rotated our moral compass 180 degrees. If you look at what is most likely to titillate, sin almost always trumps sainthood. Review that list of virtues and vices and you'll see that the stories of our current popular culture focus on vice – that list could be the programming handbook for any Hollywood producer. I don't intend this as a sermon – I enjoy Game of Thrones as much as the next person. I simply state it as a fact. Our popular culture – and the amplification that comes from it – is focused almost exclusively on the worst aspects of human nature. If robots were receiving their behavioral instruction through these stories, they would be programmed to be psychopathic moral degenerates.

Most of us can absorb this continual stream of anti-social programming without being affected by it. We still know what is right and what is wrong. But in a world where it's the "black swan" outliers that grab the news headlines, we have to think about the consequences that reach beyond the mainstream. When we abandon the moral purpose of stories and focus on their entertainment value, are we also abandoning a commonly understood value landscape?

If you're looking for absolute answers here, you won't find them. That's just not the world we live in. And am I naïve when I say the stories we choose to tell may have an influence on isolated violent events such as the one in Orlando? Perhaps. Despite all our best intentions, Omar Mateen might still have gone horribly offside.

But all things and all people are, to some extent, products of their environment. And because we in media and advertising are storytellers, we set that cultural environment. That's our job. Because of this, I believe we have a moral obligation. We have to start paying more attention to the stories we tell.

Ex Machina’s Script for Our Future

One of the more interesting movies I've watched in the past year has been Ex Machina. Unlike the abysmally disappointing Transcendence (how can you screw up Kurzweil, for God's sake?), Ex Machina is a tightly directed, frighteningly claustrophobic sci-fi thriller that peels back the moral layers of artificial intelligence one by one.

If you haven't seen it, do so. But until you do, here's the basic setup. Caleb Smith (Domhnall Gleeson) is a programmer at a huge Internet search company called Blue Book (think Google). He wins a contest where the prize is a week spent with the CEO, Nathan Bateman (Oscar Isaac), at his private retreat. Bateman's character is best described as Larry Page meets Steve Jobs meets Larry Ellison meets Charlie Sheen – brilliant as hell but one messed-up dude. It soon becomes apparent that the contest is a ruse and Smith is there to play the human in an elaborate Turing Test to determine if the robot Ava (Alicia Vikander) is capable of consciousness.

About halfway through the movie, Bateman confesses to Smith the source of Ava's intelligence "software." It came from Blue Book's own search data:

"It was the weird thing about search engines. They were like striking oil in a world that hadn't invented internal combustion. They gave too much raw material. No one knew what to do with it. My competitors were fixated on sucking it up, and trying to monetize via shopping and social media. They thought engines were a map of what people were thinking. But actually, they were a map of how people were thinking. Impulse, response. Fluid, imperfect. Patterned, chaotic."

To a search behaviour guy like me, that sounded like more fact than fiction. I've always thought search data could reveal much about how we think. That's why John Motavalli's recent column, Google Looks Into Your Brain And Figures You Out, caught my eye. Here, it seemed, fiction was indeed becoming fact. And that fact is, when we use one source for a significant chunk of our online lives, we give that source the ability to capture a representative view of our related thinking. Google and our searching behaviors, or Facebook and our social behaviors, both come immediately to mind.

Motavalli's reference to Dan Ariely's post about micro-moments is just one example of how Google can peek under the hood of our noggins and start to suss out what's happening in there. What makes this either interesting or scary as hell, depending on your philosophic bent, is that Ariely's area of study is not our logical, carefully processed thoughts but our subconscious, irrational behaviors. And when we're talking artificial intelligence, it's that murky underbelly of cognition that is the toughest nut to crack.

I think Ex Machina's writer/director Alex Garland may have tapped something fundamental in the little bit of dialogue quoted above. If the data we willingly give up in return for online functionality provides a blueprint for understanding human thought, that's a big deal. A very big deal. Ariely's blog post talks about how a better understanding of micro-moments can lead to better ad targeting. To me, that's kind of like using your new Maserati to drive across the street and visit your neighbor – it seems a total waste of horsepower. I'm sure there are higher things we can aspire to than figuring out a better way to deliver a hotels.com ad. Both Google and Facebook are full of really smart people. I'm pretty sure someone there is capable of connecting the dots between true artificial intelligence and their own brand of world domination.

At the very least, they could probably whip up a really sexy robot.

Why Marketers Love Malcolm Gladwell … and Why They Shouldn’t

Marketers love Malcolm Gladwell. They love his pithy, reductionist approach to popular science – his tendency to sacrifice verity for the sake of a good “Just-so” story. And in doing this, what is Malcolm Gladwell but a marketer at heart? No wonder our industry is ga-ga over him. We love anyone who can oversimplify complexity down to the point where it can be appropriated as yet another marketing “angle”.

Take the entire influencer advertising business, for instance. Earlier this year, I saw an article saying more and more brands are expanding their influencer marketing programs. We are desperately searching for that holy nexus where social media and those super-connected "mavens" meet. While the idea of influencer marketing has been around for a while, it really gained steam with the release of Gladwell's "The Tipping Point" in 2000 – and that head of steam has been building ever since.

As others have pointed out, Gladwell has made a habit of taking one narrow perspective that promises to “play well” with the masses, supporting it with just enough science to make it seem plausible and then enshrining it as a “Law.”

Take “The Law of the Few”, for instance, from The Tipping Point: “The success of any kind of social epidemic is heavily dependent on the involvement of people with a particular and rare set of social gifts.” You could literally hear the millions of ears attached to marketing heads “perk up” when they heard this. “All we have to do,” the reasoning went, “is reach these people, plant a favorable opinion of our product and give them the tools to spread the word. Then we just sit back and wait for the inevitable epidemic to sweep us to new heights of profitability.”

Certainly commercial viral cascades do happen. They happen all the time. And, in hindsight, if you look long and hard enough, you’ll probably find what appears to be a “maven” near ground-zero. From this perspective, Gladwell’s “Law of the Few” seems to hold water. But that’s exactly the type of seductive reasoning that makes “Just So” stories so misleading. You mistakenly believe that because it happened once, you can predict when it’s going to happen again. Gladwell’s indiscriminate use of the term “Law” contributes to this common deceit. A law is something that is universally applicable and constant. When a law governs something, it plays out the same way, every time. And this is certainly not the case in social epidemics.

Duncan Watts

If Malcolm Gladwell’s books have become marketing and pop-culture bibles, the same, sadly, cannot be said for Duncan Watts’ books. I’m guessing almost everyone reading this column has heard of Malcolm Gladwell. I further guess that almost none of you have heard of Duncan Watts. And that’s a shame. But it’s completely understandable.

Duncan Watts describes his work as determining the “role that network structure plays in determining or constraining system behavior, focusing on a few broad problem areas in social science such as information contagion, financial risk management, and organizational design.”

You started nodding off halfway through that sentence, didn’t you?

As Watts shows in his books, "Firms spent great effort trying to find 'connectors' and 'mavens' and to buy the influence of the biggest influencers, even though there was never causal evidence that this would work." But the work required to get to this point is not trivial. While he certainly aims at a broad audience, Watts does not read like Gladwell. His answers are not self-evident. There is no pithy "bon mot" that causes our neural tumblers to satisfyingly click into place. Watts' explanations are complex, counter-intuitive, occasionally ambiguous and often non-conclusive – just like the world around us. As he explains in his book "Everything is Obvious: *Once You Know the Answer", it's easy to look backwards to find causality. But it's not always right.

Marketers love simplicity. We love laws. We love predictability. That’s why we love Gladwell. But in following this path of least resistance, we’re straying further and further from the real world.

How We Might Search (On the Go)

As I mentioned in last week's column, Mediative has just released a new eye-tracking study on mobile devices. And it appears that we're still conditioned to look for the number one organic result before clicking on our preferred destination.

But…

It appears that things might be in the process of changing. This makes sense. Searching on a mobile device is – and should be – significantly different from searching on a desktop. We have different intents. We are interacting with a different platform. Even the way we search is different.

Searching on a desktop is all about consideration. It's about filtering and shortlisting multiple options to find the best one. Our search strategies are still carrying a significant amount of baggage from what search was – an often imperfect way to find the best place to get more information about something. That's why we still look for the top organic listing. For some reason we still subconsciously consider this the gold standard of informational relevancy. We measure all other results against it. That's why we make sure we reserve one of the three to five slots available in our working memory for its evaluation (I have found that the average person considers about four results at a time).

But searching on a mobile device isn’t about filtering content. For one thing, it’s absolutely the wrong platform to do this with. The real estate is too limited. For another, it’s probably not what we want to spend our time doing. We’re on the go and trying to get stuff done. This is not the time for pausing and reflecting. This is the time to find what I’m looking for and use it to take action.

This all makes sense, but the fact remains that the way we search is a product of habit. It's a conditioned subconscious strategy that was largely formed on the desktop. Most of us haven't done enough searching on mobile devices yet to abandon our desktop-derived strategies and create new mobile-specific ones. So, our subconscious starts playing out the desktop script and only varies from it when it looks like it's not going to deliver acceptable results. That's why we're still looking for that number one organic listing to benchmark against.

There were a few findings in the Mediative study that indicate our desktop habits may be starting to slip on mobile devices. But before we review them, let's do a quick review of how habits play out. Habits are the brain's way of cutting down on thinking. If we do something over and over again and get acceptable results, we store that behavior as a habit. The brain goes on autopilot so we don't have to think our way through a task with predictable outcomes.

If, however, things change, either in the way the task plays out or in the outcomes we get, the brain reluctantly takes control again and starts thinking its way through the task. I believe this is exactly what’s happening with our mobile searches. The brain desperately wants to use its desktop habits, but the results are falling below our threshold of acceptability. That means we’re all somewhere in the process of rebuilding a search strategy more suitable for a mobile device.

Mediative's study shows me a brain that's caught between the desktop searches we've always done and the mobile searches we'd like to do. We still feel we should scroll to see at least the top organic result, but as mobile search results become more aligned with our intent, which is typically to take action right away, we are being sidetracked from our habitual behaviors and kicking our brains into gear to take control. The result is that when Google shows search elements that are probably more aligned with our intent – either local results, knowledge graphs or even highly relevant ads with logical ad extensions (such as a "call" link) – we lose confidence in our habits. We still scroll down to check out the organic result, but we probably scroll back up and click on the more relevant result.

All this switching back and forth between habitual and engaged interaction with the results ends up exacting a cost in terms of efficiency. We take longer to conduct searches on a mobile device, especially if that search shows other types of results near the top. In the study, participants spent an extra two seconds or so scrolling through the presented results (7.15 seconds for varied results vs. 4.95 seconds for organic-only results). And even though they spent more time scrolling, more participants ended up clicking on the mobile-relevant results they saw right at the top.

The trends I'm describing here are subtle – often playing out in a couple of seconds or less. And you might say that it's no big deal. But habits are always a big deal. The fact that we're still relying on desktop habits that were laid down over the past two decades shows how persistent they can be. If I'm right and we're finally building new habits specific to mobile devices, those habits could dictate our search behaviors for a long time to come.

In Search – Even in Mobile – Organic Still Matters

I told someone recently that I feel like Rick Astley. You know, the guy who had the monster hit "Never Gonna Give You Up" in 1987 and is still trading on it almost 30 years later? He even enjoyed a brief resurgence of viral fame in 2007 when the world discovered what it meant to be "Rickrolled."

For me, my "Never Gonna Give You Up" is the Golden Triangle eye-tracking study we released in 2005. It's my one hit wonder (to be fair to Astley, he did have a couple of other hits, but you get the idea). And yes, I'm still talking about it.

The Golden Triangle as we identified it existed because people were drawn to look at the number one organic listing. That's an important thing to keep in mind. In today's world of ad blockers and teeth-gnashing about the future of advertising, there is probably no purer or more controllable environment than the search results page. Creativity is stripped to the bare minimum. Ads have to be highly relevant and non-promotional in nature. Interaction is restricted to the few seconds required to scan and click. If there was anywhere where ads might be tolerated, it's on the search results page.

But…

If we fully trusted ads – especially those as benign as the ones that show up on search results – there would have been no Golden Triangle. It only existed because we needed to see that top organic result, and dragging our eyes down to it formed one side of the triangle.

Fast forward almost 10 years. Mediative, which is the current incarnation of my old company, released a follow-up two years ago. While the Golden Triangle had definitely morphed into a more linear scan, the motivation remained – people wanted to scan down to see at least one organic listing. They didn't trust ads then. They don't trust ads now.

Google has used this need to anchor our scanning with the top organic listing to introduce a greater variety of results into the top “hot zone” – where scanning is the greatest. Now, depending on the search, there is likely to be at least a full screen of various results – including ads, local listings, reviews or news items – before your eyes hit that top organic web result. Yet, we seem to be persistent in our need to see it. Most people still make the effort to scroll down, find it and assess its relevance.

It should be noted that all of the above refers to desktop search. But almost a year ago, Google announced that – for the first time ever – more searches happened on a mobile device than on a desktop.

Mediative just released a new eye-tracking study (Note: I was not involved at all with this one). This time, they dove into scan patterns on mobile devices. Given the limited real estate and the fact that for many popular searches, you would have to consciously scroll down at least a couple times to see the first organic result, did users become more accepting of ads?

Nope. They just scanned further down!

The study’s first finding was that the #1 organic listing still captures the most click activity, but it takes users almost twice as long to find it compared to a desktop.

The study’s second finding was that even though organic is still important, position matters more than ever. Users will make the effort to find the top organic result and, once they do, they’ll generally scan the top 4 results, but if they find nothing relevant, they probably won’t scan much further. In the study, 92.6% of the clicks happened above the 4th organic listing. On a desktop, 84% of the clicks happened above the number 4 listing.

The third finding shows an interesting paradox that's emerging on mobile devices: we're carrying our search habits over from the desktop – especially our need to see at least one organic listing. The average time to scan the top sponsored listing was only 0.36 seconds, meaning that people checked it out immediately after orienting themselves to the mobile results page, but for those who clicked the listing, the average time to click was 5.95 seconds. That's almost 50% longer than the average time to click on a desktop search. When organic results are pushed down the page because of other content, it takes us longer before we feel confident enough to make our choice. We still need to anchor our relevancy assessment with that top organic result, and that's causing us to be less efficient in our mobile searches than we are on the desktop.

The study also indicated that these behaviors could be in flux. We may be adapting our search strategies for mobile devices, but we're just not quite there yet. I'll touch on this in next week's column.

The World in Bite Sized Pieces

It’s hard to see the big picture when your perspective is limited to 160 characters.

Or when we keep getting distracted from said big picture by that other picture that always seems to be lurking over there on the right side of our screen – the one of Kate Upton tilting forward wearing a wet bikini.

Two things are at work here obscuring our view of the whole: Our preoccupation with the attention economy and a frantic scrambling for a new revenue model. The net result is that we’re being spoon-fed stuff that’s way too easy to digest. We’re being pandered to in the worst possible way. The world is becoming a staircase of really small steps, each of which has a bright shiny object on it urging us to scale just a little bit higher. And we, like idiots, stumble our way up the stairs.

This cannot be good for us. We become better people when we have to chew through some gristle. Or when we're forced to eat our broccoli. The world should not be the cognitive equivalent of Cap'n Crunch cereal.

It’s here where human nature gets the best of us. We’re wired to prefer scintillation to substance. Our intellectual laziness and willingness to follow whatever herd seems to be heading in our direction have conspired to create a world where Donald Trump can be a viable candidate for president of the United States – where our attention span is measured in fractions of a second – where the content we consume is dictated by a popularity contest.

Our news is increasingly coming to us in smaller and smaller chunks. The exploding complexity of our world, which begs to be understood in depth, is increasingly parceled out to us in pre-digested little tidbits, pushed to our smartphone. We spend scant seconds scanning headlines to stay “up to date.” And an algorithm that is trying to understand where our interests lie usually determines the stories we see.

This algorithmic curation creates both "Filter" and "Agreement" Bubbles. The homogeneity of our social networks leads to a homogeneity of content. And if we spend all our time with others who think like us, we end up with an intellectually polarized society in which the factions that sit at opposite ends of any given spectrum are openly hostile to each other. The gaps between our respective ideas of what is right are simply too big, and no one has any interest in building a bridge across them. We're losing our ideological interface areas – those opportunities to encounter ideas that force us to rethink and reframe, broadening our worldview in the process. We sacrifice empathy, and we look for news that "sounds right" to us, no matter what "right" might be.

This is a crying shame, because there is more thought-provoking, intellectually rich content being produced than ever before. But there is also more sugar-coated crap whose sole purpose is to get us to click.

I've often talked about the elimination of friction. Usually, I think this is a good thing. Bob Garfield, in a column a few months ago, called for a whoop-ass can of WD-40 to remove all transactional friction. But if we make things too easy to access, will we also remove those cognitive barriers that force us to slow down and think, giving our rationality a chance to catch up with impulse? And it's not just on the consumption side where a little bit of friction might bring benefits. The upside of production friction was that it slowed down streams of content just long enough to introduce an editorial voice. Someone somewhere had to give some thought as to what might actually be good for us.

In other words, it was someone’s job to make sure we ate our vegetables.

We’re Informed. But Are We Thoughtful?

I’m a bit of a jerk when I write. I lock myself behind closed doors in my home office. In the summer, I retreat to the most remote reaches of the back yard. The reason? I don’t want to be interrupted with human contact. If I am interrupted, I stare daggers through the interrupter and answer in short, clipped sentences. The house has to be silent. If conditions are less than ideal, my irritation is palpable. My family knows this. The warning signal is “Dad is writing.” This can be roughly translated as “Dad is currently an asshole.” The more I try to be thoughtful, the bigger the ass I am.

I suspect Henry David Thoreau was the same. He went even further than my own backyard exile, camping out alone for two years in a cabin on Ralph Waldo Emerson's land at Walden Pond. He said things like,

“I never found a companion that was so companionable as solitude.”

But Thoreau was also a pretty thoughtful guy, who advised us that,

“As a single footstep will not make a path on the earth, so a single thought will not make a pathway in the mind. To make a deep physical path, we walk again and again. To make a deep mental path, we must think over and over the kind of thoughts we wish to dominate our lives.”

But, I ask, how can we be thoughtful when we are constantly distracted by information? Our mental lives are full of single footsteps. Even if we intend to cover the same path more than once, there are a thousand beeps, alerts, messages, prompts, pokes and flags that are beckoning us to start down a new path, in a different direction. We probably cover more ground, but I suspect we barely disturb the fallen leaves on the paths we take.

I happen to do all my reading on a tablet. I do this for three reasons: first, I always have my entire library with me, and I usually have four books on the go at the same time (currently 1491, Reclaiming Conversation, Flash Boys and 50 Places to Bike Before You Die); secondly, I like to read before I go to sleep and I don't need to keep a light on that keeps my wife awake; and thirdly, I like to highlight passages and make notes. But there's a trade-off I've had to make. I don't read as thoughtfully as I used to. I can't "escape" with a book anymore. I am often tempted to check email, play a quick game of 2048 or search for something on Google. Maybe the fact that my attention is always divided amongst four books is part of the problem. Or maybe it's that I'm more attention-deficit than I used to be.

There is a big difference between being informed and being thoughtful. And our connected world definitely puts the bias on the importance of information. Being connected is all about being informed. But being thoughtful requires us to remove distraction. It's the deep paths that Thoreau was referring to. And it requires a very different mindset. Our brain is a single-purpose engine. We can either be informed or be thoughtful. We can't be both at the same time.


At the University of California, San Francisco, Mattias Karlsson and Loren Frank found that rats need two very different types of cognitive activity when mastering a maze. First, when they explore a maze, certain parts of their brain are active as they're being "informed" about their new environment. But they don't master the maze unless they're allowed downtime to consolidate the information into new persistent memories. Different parts of the brain are engaged, including the hippocampus. They need time to be thoughtful and create a "deep path."

In this instance, we’re not all that different than rats. In his research, MIT’s Alex “Sandy” Pentland found that effective teams tend to cycle through two very different phases: First, they explore, gathering new information. Then, just like the thoughtful rats, they engage as a group, taking that information, digesting it and synthesizing it for future execution. Pentland found that while both are necessary, they don’t exist at the same time,

“Exploration and engagement, while both good, don’t easily coexist, because they require that the energy of team members be put to two different uses. Energy is a finite resource.”

Ironically, research is increasingly showing that our previous definitions of cognitive activity may have been off the mark. We always assumed that "mind-wandering" or "day-dreaming" was a non-productive activity. But we're finding out that it's an essential part of being thoughtful. We're not actually "wandering." It's just the brain's way of synthesizing and consolidating information. We're wearing deeper paths in the byways of our mind. But a constant flow of new information, delivered through digital channels, keeps us from synthesizing the information we already have. Our brain is too busy being informed to be able to make the switch to thoughtfulness. We don't have enough cognitive energy to do both.

What price might we pay for being “informed” at the expense of being “thoughtful?” It appears that it might be significant. Technology distraction in the classroom could lower grades by close to 20 percent. And you don’t even have to be the one using the device. Just having an open screen in the vicinity might distract you enough to drop your report card from a “B” to a “C.”

Having read this, you now have two choices. You could click off to the next bit of information. Or, you could stare into space for a few minutes and be lost in your thoughts.

Choose wisely.

Are Atheists More Innovative?

A few columns back, I talked about the most innovative countries in the world, according to INSEAD, Johnson School of Management and WIPO. Switzerland, of all places, topped the list. At the time, I mentioned diversity as possibly being one of the factors. But for some reason, I just couldn’t let it lie there.

Last Friday afternoon, it being pretty miserable outside, I dusted off my Stats 101 prowess and decided to look for correlations. The next thing I knew, 3 hours had passed and I was earlobe deep in data tables and spreadsheets.

Yeah… that's how I roll. That's wassup.

But I digress. What initially sent me down this path was a new study out of the University of Kansas by Tien-Tsung Lee and co-authors Masahiro Yamamoto and Weina Ran. Working with data from Japan, they found that the amount of trust you have in media depends on the diversity of the community you live in. The more diverse the population, the lower the degree of trust in media.

This caught my attention – a negative correlation between trust and diversity. I wondered how those two things might triangulate with innovation. Was there a three-way link here?

So, I started compiling the data. First, I wanted to broaden the definition of innovation. Originally, I had cited the INSEAD Global Innovation Index. Bloomberg also has a ranking of innovation by country that uses a few different criteria. I decided to take an average, normalized score of the two together. In case you’re wondering, Switzerland scored much lower in the Bloomberg ranking, which had South Korea, Japan and Germany in the top three spots.

With my new innovation ranking, I then started to look for correlations. What part, for example, did trust play? According to Edelman, the global marketing giant that publishes an annual trust barometer, it plays a massive role: "Building trust is essential to successfully bringing new products and services to market." Their trust barometer measures trust in the infrastructural institutions of the respective countries. So I added Edelman's indexed trust scores to my spreadsheet and used a quick and dirty Pearson r-value test to look for significant correlations. For those as rusty as I am when it comes to stats, a perfect correlation would be 1.0. Strong relationships show up in the 0.6-and-above range. Moderate relationships are in the 0.3 to 0.6 range. Weak relationships are 0.3 and below. Zero values indicate no relationship. Inverse relationships follow the same scale but with negative values.
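
If you want to replicate this back-of-the-envelope exercise, here's a minimal sketch of the workflow – normalize the two innovation rankings, average them into a single combined score, then run a Pearson correlation against another indexed measure. The country scores below are placeholders, not the actual index values, and Python's scipy simply stands in for the spreadsheet.

```python
import numpy as np
from scipy.stats import pearsonr

def normalize(scores):
    """Rescale raw scores to a 0-1 index."""
    arr = np.asarray(scores, dtype=float)
    return (arr - arr.min()) / (arr.max() - arr.min())

# Hypothetical placeholder scores for the same five countries, in the same order.
gii_scores = [68.3, 62.4, 61.7, 59.9, 57.1]        # e.g., Global Innovation Index
bloomberg_scores = [85.4, 91.3, 86.9, 83.8, 81.4]  # e.g., Bloomberg innovation ranking
trust_scores = [48, 37, 42, 55, 60]                # e.g., an indexed trust score

# Combined innovation score: average of the two normalized rankings.
innovation = (normalize(gii_scores) + normalize(bloomberg_scores)) / 2

r, p = pearsonr(innovation, normalize(trust_scores))
print(f"Pearson r = {r:.4f} (p = {p:.3f})")

# Rough reading, per the ranges above: |r| >= 0.6 strong, 0.3-0.6 moderate,
# below 0.3 weak; negative values indicate an inverse relationship.
```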

The result? Not only was there no positive correlation, there was actually a moderately significant negative correlation! For those interested, the r-value was -0.4224. Based on this admittedly amateur analysis, trust in national institutions and innovation do not seem to go hand in hand. Some of the most innovative countries are the least trusting, and vice versa. It certainly wasn't the neat linear relationship that Edelman supposed in the press releases for their barometer.

Next, I turned to the obvious – the wealth of the respective nations. I added GDP per capita as a data point. Predictably, there was a strong positive correlation here – I came up with an r-value of 0.793. Rich countries are more innovative. Duh.

Now comes the really interesting part. What was the relationship between cultural diversity and innovation? If my original hypothesis was correct, there should be at least a moderate correlation here. The problem was trying to find an accurate measure of cultural diversity. I ended up using three measures from Alesina et al.: Ethnic Fractionalization, Linguistic Fractionalization and Religious Fractionalization. I averaged these out and indexed them to give me a single score of cultural diversity. To my surprise, my hypothesis appeared to be significantly flawed – the r-value was -0.2488.

But then I started analyzing the individual measures of diversity. Ethnic Diversity and Innovation showed a moderate negative correlation: -0.5738. Linguistic Diversity and Innovation showed a less significant negative correlation: -0.3886. But Religious Diversity and Innovation came up as a moderate positive correlation: 0.4129! Of the three, religion is the only measure of diversity that’s directly ideological, at least to some extent.

This seemed to be promising, so I pushed it to the extreme. If religious diversity is correlated with innovation, I wondered how the prevalence of atheists would relate. After all, this should be the ultimate measure of religious ideological freedom. So, using a combination of results from a worldwide Gallup survey and a study from Phil Zuckerman, I added an indexed "atheism" score. Sure enough, the r-value was 0.7461! This was almost as significant as the correlation between national wealth and innovation! Based on my combined innovation scores, some of the least religious countries in the world (Japan, Sweden and Switzerland) are the most innovative.

So – ignoring for a moment the barn-door sized holes in my impromptu methodology and a whack of confounding factors – what might this hypothetically mean? I’ll come back to this intriguing question in next week’s Online Spin.

Consumers in the Wild

Once a Forager, Always a Forager

Your world is a much different place than the African savanna. But the more than 100,000 generations of evolution that started on those plains still dictate a remarkable degree of our modern behavior.

Take foraging, for example. We evolved as hunters and gatherers. It was our primary survival instinct. And even though the first hominids are relatively recent additions to the biological family tree, strategies for foraging have been developing for millions and millions of years. It’s hardwired into the deepest and most inflexible parts of our brain. It makes sense, then, that foraging instincts that were once reserved for food gathering should be applied to a wide range of our activities.

That is, in fact, what Peter Pirolli and Stuart Card discovered two decades ago. When they looked at how we navigated online sources of information, they found that humans used the very same strategy we would have used for berry picking or gathering cassava roots. And one of the critical elements of this was something called Marginal Value.

Bounded Rationality & Foraging

It's hard work being a forager. Most of your day – and energy – is spent looking for something to eat. The sparser the food sources in your environment, the more time you spend looking for them. It's not surprising, therefore, that we should have some fairly well-honed calculations for assessing the quality of our food sources. This is what biologist Eric Charnov formalized as the Marginal Value Theorem in 1976. It's an instinctual (and therefore largely subconscious) evaluation of food "patches" by most types of foragers, humans included. It's how our brain decides whether we should stay where we are or find another patch. It would have been a very big deal 2 million – or even 100,000 – years ago.
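
For those who like to see the math, Charnov's result is usually stated like this (a standard textbook form of the marginal value theorem, paraphrased rather than quoted from his paper):

```latex
% Marginal Value Theorem (standard single-patch-type form):
%   g(t)  -- cumulative energy gained after spending time t in a patch,
%            increasing but with diminishing returns
%   \tau  -- average travel time between patches
% The optimal residence time t^* satisfies
\[
  g'(t^*) \;=\; \frac{g(t^*)}{\tau + t^*}
\]
% In words: leave the patch as soon as its instantaneous rate of gain drops
% to the best long-run average rate available in the environment as a whole.
```

In the information-foraging version explored by Pirolli and Card, the "energy" gained becomes information value and the "travel time" becomes the cost of switching to another source.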

Today, for most of us, food sources are decidedly less “patchy.” But old instincts die hard. So we did what humans do. We borrowed an old instinct and applied it to new situations. We exapted our foraging strategies and started using them for a wide range of activities where we had to have a rough and ready estimation of our return on our energy investment. Increasingly, more and more of these activities asked for an investment of cognitive processing power. And we did all this without knowing we were even doing it.

This brings us to Herbert Simon's concept of Bounded Rationality. I believe this is tied directly to Charnov's Marginal Value Theorem. When we calculate how much mental energy we're going to expend on an information-gathering task, we subconsciously assess the promise of the information "patches" available to us. Then we decide to invest accordingly, based on our own "bounded" rationality.

Brands as Proxies for Foraging

It’s just this subconscious calculation that has turned the world of consumerism on its ear in the last two decades. As Itamar Simonson and Emanuel Rosen explain in their book Absolute Value, the explosion of information available has meant that we are making different marginal value calculations than we would have thirty or forty years ago. We have much richer patches available, so we’re more likely to invest the time to explore them. And, once we do, the way we evaluate our consumer choices changes completely. Our modern concept of branding was a direct result of both bounded rationality and sparse information patches. If a patch of objective and reliable information wasn’t apparent, we would rely on brands as a cognitive shortcut, saving our bounded rationality for more promising tasks.

Google, The Ultimate “Patch”

In understanding modern consumer behavior, I think we have to pay much more attention to this idea of marginal value. What is the nature of the subconscious algorithm that decides whether we’re going to forage for more information or rely on our brand beliefs? We evolved foraging strategies that play a huge part in how we behave today.

For example, the way we navigate our physical environment appears to owe much to how we used to search for food. Women determine where they're going differently than men because women used to search for food differently. Men tend to navigate by orientation, maintaining a mental spatial grid against which they plot their own location. Women do it by remembering routes. In my own research, I found split-second differences in how men and women navigated websites that seem to go back to those same foundations.

Whether you're a man or a woman, however, you need some type of mental inventory of the information patches available to you in order to assess their marginal value. This is the mental landscape Google plays in. For more and more decisions, our marginal value calculation starts with a quick search on Google to see if any promising patches show up in the results. Our need to keep a mental inventory of patches can be offloaded to Google.

It seems ironic that in our current environment, more and more of our behavior can be traced back millions of years to behaviors that evolved in a world where high-tech meant a sharper rock.

Do We Really Want Virtual Reality?

Facebook bought Oculus. Their goal is to control the world you experience while wearing a pair of modified ski goggles. Mark Zuckerberg is stoked. Netflix is stoked. Marketers the world over are salivating. But, how should you feel about this?

Personally, I’m scared. I may even be terrified.

First of all, I don’t want anyone, especially not Mark Zuckerberg, controlling my sensory world.

Secondly, I’m pretty sure we’re not built to be virtually real.

I understand the human desire to control our environment. It’s part of the human hubris. We think we can do a better job than nature. We believe introducing control and predictability into our world is infinitely better than depending on the caprices of nature. We’ve thought so for many thousands of years. And – Oh Mighty Humans Who Dare to be Gods – just how is that working out for us?

Now that we've completely screwed up our physical world, we're building an artificial version. Actually, it's not really "we" – it's "they." And "they" are for-profit organizations that see an opportunity. "They" are only doing it so "they" can control our interface to consciousness.

Personally, I’m totally comfortable giving a profit driven corporation control over my senses. I mean, what could possibly happen? I’m sure anything they may introduce to my virtual world will be entirely for my benefit. I’m sure they would never take the opportunity to use this control to add to their bottom line. If you need proof, look how altruistically media – including the Internet – has evolved under the stewardship of corporations.

Now, their response would be that we can always decide to take the goggles off. We stay in control, because we have an on/off switch. What they don't talk about is the fact that they will do everything in their power to keep us from switching their VR world off. It's in their best interest to do so – and by best interest, I mean the more time we spend in their world, as opposed to the real one, the more profitable it is for them. They can hold our senses hostage and demand ransom in any form they choose.

How will they keep us in their world? By making it addictive. And this brings us to my second concern about Virtual Reality – we’re just not built for it.

We have billions of neurons that are dedicated to parsing and understanding a staggeringly complex and dynamic environment. Our brain is built to construct a reality from thousands and thousands of external cues. To manage this, it often takes cognitive shortcuts to bring the amount of processing required down to a manageable level. We prefer pleasant aspects of reality. We are alerted to threats. Things that could make us sick disgust us. The brain manages the balance by a judicious release of neurochemicals that make us happy, sad, disgusted or afraid. Emotions are the brain’s way of effectively guiding us through the real world.

A virtual world, by necessity, will have a tiny fraction of the inputs that we would find in the real world. Our brains will get an infinitesimal slice of the sensory bandwidth they're used to. Further, what inputs they do get will have the subtlety of a sledgehammer. Ham-fisted programmers will try to push our emotional hot buttons, all in the search for profit. This means a few sections of our brain will be cued far more frequently and violently than they were ever intended to be. Additionally, huge swaths of our environmental processing circuits will remain dormant for extended periods of time. I'm not a neurologist, but I can't believe that will be a good thing for our cognitive health.

We were built to experience the world fully, through all our senses. We have evolved to deal with a dynamic, complex and often unexpected environment. We are supposed to interact with the serendipity of nature. That is what it means to be human. I don't know about you, but I never, ever want to auction off this incredible gift to a profit-driven corporation in return for a plastic, programmed, three-dimensional interface.

I know this plea is too late. Pandora's box is open. The barn door is open. The horse is long gone. But like I said, I'm scared.

Make that terrified.