Decoupling Our Hunch-Making Mechanism

Humans are hunch-making machines. We’re gloriously good at it. In fact, no one and no thing is better at coming up with a hunch. It’s what sets us apart on our planet and, thus far, nothing we’ve invented has proven better suited to strike the spark of intuition.

We can seemingly draw speculative guesses out of thin air – literally. From all the noise that surrounds us, we recognize potential patterns and infer significance. Scientists call them hypotheses. Artists call them inspirations. Entrepreneurs call them innovations.

Whatever the label, we’re not exactly sure what happens. Mihaly Csikszentmihalyi (which, in case you’re wondering, is pronounced Me-high Cheek-sent-me-high) explored where these hunches come from in his fascinating book, Creativity: Flow and the Psychology of Discovery and Invention. But despite the collective curiosity about the source of human creativity – the jury remains out. The mechanism that turns these very human gears and sparks the required connections between our synapses remains a mystery.

We’re good at making hunches. But we suck at qualifying those hunches. The reason is that we rush a hunch straight into becoming a belief. And that’s where things go off the rails. A hunch is a guess about what might be true. A belief is what we deem to be true. We go straight from what is one of many possible scenarios to the only scenario we execute against. The entire scientific method was created to counteract this very human tendency – forcing rational analysis of the hunches we churn out.

Philip Tetlock’s work on expertise in prediction shows how fragile this tendency to go from hunch to belief can make us. After all, a prediction is nothing more than a hunch of what might be. He referred to Isaiah Berlin’s 1953 essay, “The Hedgehog and the Fox.” In the essay, Berlin quotes the ancient Greek poet Archilochus: “a fox knows many things, but a hedgehog one important thing.” Taking some poetic license, you could say that a hedgehog is more prone to moving straight from hunch to belief, while a fox tends to evaluate her hunches against multiple sources. Tetlock found that when it came to the accuracy of predictions, it was better to be a fox than a hedgehog. In some cases, much better.

But Tetlock also found that when it comes down to “crunching hunches,” machines tend to beat humans hands down. It’s because humans have been programmed over thousands of generations to trust our hunches, and no matter how much we fight it, we are born to treat our hunches as fact. Machines bear no such baggage.

This is an example of Moravec’s Paradox – the things that seem simple for humans are amazingly complex for machines. And vice versa. As artificial intelligence pioneer Marvin Minsky once recognized, it’s the things we do unconsciously that represent the biggest challenges for artificial intelligence, “In general, we’re least aware of what our minds do best.” Machines may never be as good as humans at creating a hunch – or, at least – we’re certainly not there yet. But machines have already outstripped humans in the ability to empirically analyze and validate multiple options.

Fellow Online Spin columnist Kaila Colbin posited this in her last column, “When Watson Comes for Your Job, Give It to Him.” As she points out, IBM’s Watson can kick any human ass when it comes to reviewing case law – or plowing through the details required for an accurate medical diagnosis – or helping students prepare for an upcoming exam. But Watson isn’t very good at coming up with hunches. It’s because hunches aren’t rational. They’re inspirational. And machines aren’t fluent in inspiration. Not yet, anyway.

Maybe that’s why – even in something as logical as chess – the current champion isn’t a machine, or a human. It’s a combination of both. As American economist and author (Average is Over) Tyler Cowen explained in a blog post, a “striking percentage of the best or most accurate chess games of all time have been played by man-machine pairs.” Cowen outlines four ways a man-machine team can outperform either partner alone, and they all have to do with leveraging the respective strengths of each. Humans use intuition to create hunches, and then harness the power of the machine to analyze the relevant options.

Hunches have served humans very well. They will continue to do so. The trick is to decouple those hunches from the belief-making mechanism that has historically accompanied them. That’s where we should let machines take over.


The World in Bite Sized Pieces

It’s hard to see the big picture when your perspective is limited to 160 characters.

Or when we keep getting distracted from said big picture by that other picture that always seems to be lurking over there on the right side of our screen – the one of Kate Upton tilting forward wearing a wet bikini.

Two things are at work here obscuring our view of the whole: Our preoccupation with the attention economy and a frantic scrambling for a new revenue model. The net result is that we’re being spoon-fed stuff that’s way too easy to digest. We’re being pandered to in the worst possible way. The world is becoming a staircase of really small steps, each of which has a bright shiny object on it urging us to scale just a little bit higher. And we, like idiots, stumble our way up the stairs.

This cannot be good for us. We become better people when we have to chew through some gristle. Or when we’re forced to eat our broccoli. The world should not be the cognitive equivalent of Cap’n Crunch cereal.

It’s here where human nature gets the best of us. We’re wired to prefer scintillation to substance. Our intellectual laziness and willingness to follow whatever herd seems to be heading in our direction have conspired to create a world where Donald Trump can be a viable candidate for president of the United States – where our attention span is measured in fractions of a second – where the content we consume is dictated by a popularity contest.

Our news is increasingly coming to us in smaller and smaller chunks. The exploding complexity of our world, which begs to be understood in depth, is increasingly parceled out to us in pre-digested little tidbits, pushed to our smartphone. We spend scant seconds scanning headlines to stay “up to date.” And an algorithm that is trying to understand where our interests lie usually determines the stories we see.

This algorithmic curation creates both “Filter” and “Agreement” Bubbles. The homogeneity of our social network leads to a homogeneity of content. But if we spend all our time with others who think like us, we end up with an intellectually polarized society in which the factions that sit at opposite ends of any given spectrum are openly hostile to each other. The gaps between our respective ideas of what is right are simply too big, and no one has any interest in building a bridge across them. We’re losing our ideological interface areas – those opportunities to encounter ideas that force us to rethink and reframe, broadening our worldview in the process. We sacrifice empathy, and we look for news that “sounds right” to us, no matter what “right” might be.

This is a crying shame, because there is more thought-provoking, intellectually rich content being produced than ever before. But there is also more sugar-coated crap whose sole purpose is to get us to click.

I’ve often talked about the elimination of friction. Usually, I think this is a good thing. Bob Garfield, in a column a few months ago, called for a whoop-ass can of WD-40 to remove all transactional friction. But if we make things too easy to access, will we also remove those cognitive barriers that force us to slow down and think, giving our rationality a chance to catch up with impulse? And it’s not just on the consumption side where a little bit of friction might bring benefits. The upside of production friction was that it slowed down streams of content just long enough to introduce an editorial voice. Someone somewhere had to give some thought to what might actually be good for us.

In other words, it was someone’s job to make sure we ate our vegetables.

Living in the Age of “Hyper”

Amazon is a disappointment.

In the fourth quarter of 2015, it made a measly $482 million profit on sales of $35.7 billion. That’s a 22% gain in revenue from a year ago, and over a 100% gain in profit. In that year, Amazon also doubled its market value to over $300 billion.

Bunch of deadbeats…

Last week, Amazon’s share price took a beating in after-hours trading, dropping 15%.

Serves you right, slackers…

And this all happened because, despite Amazon’s healthy performance, it “didn’t meet analysts’ expectations.”

Maybe it’s time to look at those expectations.

Amazon is what those analysts call a “growth” stock. If you compare it against the rest of the Fortune 500, it might even be called a “hyper-growth” stock. Its doubling of market value outperformed other growth stocks like Apple, which has had its own history of disappointment. We expect great things from anything prefaced with “hyper.”

You all know what hyper means. It means “above” – as in “above” normal. In terms of growth of revenue and market value, Amazon would certainly qualify. It’s in the top few percent of the Fortune 500 in both categories.

But we expect more. We expect “hyper” performance. And if you don’t measure up, you disappoint us. It’s like kicking your kid out of the house when they come home with a straight-A report card in grade 10 because they didn’t qualify for early admission to Harvard.

Here’s the thing about “hyper.” Not everything can be “hyper.” Something needs to be the opposite of hyper. Do you know what the opposite of “hyper” is? It’s “hypo.” Everyone knows what hyper means, but I bet it’s been a long time since you used “hypo” in a sentence.


That’s because we’re fixated on “hyper”. But the way we use “hyper” makes it an outlier. It’s a statistical anomaly on the far right of the normal distribution curve. It doesn’t represent reality. But we think it does. We expect everything to measure up to some unrealistic measure of performance. When we start a business, we expect to be as successful as Google. When we look at our bank account, we expect it to be as big as Kanye West’s. When we buy a stock, we want it to outperform every other stock in the market.

We have over-hyped “hyper.”

This tendency is starting to impact other aspects of our lives. As we quantify more of who we are, we tend to measure ourselves against the “hyper” end of the yardstick. It’s becoming a real problem. Even our friendships are now quantified, thanks to Facebook, Twitter and Instagram. The result is that it’s now almost impossible to measure up to expectations.

We, like Amazon, are disappointing. The difference is that Amazon disappoints analysts. We disappoint ourselves.

This can be a real bummer. Tom Magliozzi, co-host of NPR’s Car Talk show, summarized the problem in five words:

“Happiness Equals Reality Minus Expectations.”

If our expectations keep moving to the “hyper” end of the scale, they will never match up to reality. We’ll never be happy. According to this blog post by Tim Urban, it’s a big problem for Generation Y. And Tim should know. He’s a 31-year-old Harvard grad who owns a couple of tutoring businesses and has started a blog that grew virally to over 300,000 subscribers.

Slacker.


We’re Informed. But Are We Thoughtful?

I’m a bit of a jerk when I write. I lock myself behind closed doors in my home office. In the summer, I retreat to the most remote reaches of the back yard. The reason? I don’t want to be interrupted with human contact. If I am interrupted, I stare daggers through the interrupter and answer in short, clipped sentences. The house has to be silent. If conditions are less than ideal, my irritation is palpable. My family knows this. The warning signal is “Dad is writing.” This can be roughly translated as “Dad is currently an asshole.” The more I try to be thoughtful, the bigger the ass I am.

I suspect Henry David Thoreau was the same. He went even further than my own backyard exile, camping out alone for two years in a cabin he built on Ralph Waldo Emerson’s land at Walden Pond. He said things like,

“I never found a companion that was so companionable as solitude.”

But Thoreau was also a pretty thoughtful guy, who advised us that,

“As a single footstep will not make a path on the earth, so a single thought will not make a pathway in the mind. To make a deep physical path, we walk again and again. To make a deep mental path, we must think over and over the kind of thoughts we wish to dominate our lives.”

But, I ask, how can we be thoughtful when we are constantly distracted by information? Our mental lives are full of single footsteps. Even if we intend to cover the same path more than once, there are a thousand beeps, alerts, messages, prompts, pokes and flags that are beckoning us to start down a new path, in a different direction. We probably cover more ground, but I suspect we barely disturb the fallen leaves on the paths we take.

I happen to do all my reading on a tablet. I do this for three reasons: first, I always have my entire library with me, and I usually have four books on the go at the same time (currently 1491, Reclaiming Conversation, Flash Boys and 50 Places to Bike Before You Die) – second, I like to read before I go to sleep and I don’t need to keep a light on that keeps my wife awake – and third, I like to highlight passages and make notes. But there’s a trade-off I’ve had to make. I don’t read as thoughtfully as I used to. I can’t “escape” with a book anymore. I am often tempted to check email, play a quick game of 2048 or search for something on Google. Maybe the fact that my attention is always divided among four books is part of the problem. Or maybe it’s that I’m more attention-deficit than I used to be.

There is a big difference between being informed and being thoughtful. And our connected world definitely tilts the scale toward information. Being connected is all about being informed. But being thoughtful requires us to remove distraction. It’s the deep paths Thoreau was referring to. And it requires a very different mindset. Our brain is a single-purpose engine. We can either be informed or be thoughtful. We can’t be both at the same time.


At the University of California, San Francisco, Mattias Karlsson and Loren Frank found that rats need two very different types of cognitive activity to master a maze. First, when they explore a maze, certain parts of their brain are active as they’re being “informed” about their new environment. But they don’t master the maze unless they’re allowed downtime to consolidate the information into new persistent memories. Different parts of the brain are engaged, including the hippocampus. They need time to be thoughtful and create a “deep path.”

In this instance, we’re not all that different from rats. In his research, MIT’s Alex “Sandy” Pentland found that effective teams tend to cycle through two very different phases: First, they explore, gathering new information. Then, just like the thoughtful rats, they engage as a group, taking that information, digesting it and synthesizing it for future execution. Pentland found that while both are necessary, they don’t exist at the same time:

“Exploration and engagement, while both good, don’t easily coexist, because they require that the energy of team members be put to two different uses. Energy is a finite resource.”

Ironically, research is increasingly showing that our previous definitions of cognitive activity may have been off the mark. We always assumed that “mind-wandering” or “daydreaming” was a non-productive activity. But we’re finding out that it’s an essential part of being thoughtful. We’re actually not “wandering.” It’s just the brain’s way of synthesizing and consolidating information. We’re wearing deeper paths in the byways of our mind. But a constant flow of new information, delivered through digital channels, keeps us from synthesizing the information we already have. Our brain is too busy being informed to make the switch to thoughtfulness. We don’t have enough cognitive energy to do both.

What price might we pay for being “informed” at the expense of being “thoughtful?” It appears that it might be significant. Technology distraction in the classroom could lower grades by close to 20 percent. And you don’t even have to be the one using the device. Just having an open screen in the vicinity might distract you enough to drop your report card from a “B” to a “C.”

Having read this, you now have two choices. You could click off to the next bit of information. Or, you could stare into space for a few minutes and be lost in your thoughts.

Choose wisely.

A New Definition of Order

The first time you see the University of Texas at Austin’s AIM traffic management simulator in action, you can’t believe it would work. It shows the intersection of two 12-lane, heavily trafficked roads. There are no traffic lights, no stop signs, none of the traffic control systems we’re familiar with. Yet traffic zips through with an efficiency that’s astounding. It appears to be total chaos, but no car has to wait more than a few seconds to get through the intersection, and there’s nary a collision in sight. Not even a minor fender bender.

Oh, one more thing. The model depends on there being no humans to screw things up. All the vehicles are driverless. In fact, if just one of the vehicles had a human behind the wheel, the whole system would slow dramatically. The probability of an accident would also soar.

The thing about the simulation is that there is no order – or, at least, no order that is apparent to the human eye. The programmers at UT seem to recognize this with a tongue-in-cheek nod to our need for rationality: this particular video clip is called “insanity.” There are other simulation videos available at the project’s website, including ones where humans drive cars at intersections controlled by stoplights. These seem much saner and more controlled. They’re also much less efficient. And likely more dangerous. No simulation that includes a human factor comes even close to matching the efficiency of the 100% autonomous option.

The AIM simulation is complex, but it isn’t complicated. It’s actually quite simple. As cars approach the intersection, they signal to a central “manager” whether they want to turn or go straight ahead. The manager predicts whether the vehicle’s path will intersect another vehicle’s predicted path. If it does, it delays the vehicle slightly until the path is clear. That’s it.
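To make that concrete, here’s a minimal sketch of what a reservation-based “manager” like this might look like, written in Python. This is my own simplification for illustration – not the actual AIM code – treating the intersection as a grid of tiles that each car reserves, tile by tile and time step by time step, along its predicted path:

from typing import Dict, List, Tuple

Tile = Tuple[int, int]    # a (row, col) cell inside the intersection
Claim = Tuple[Tile, int]  # a tile occupied at a given time step

class IntersectionManager:
    def __init__(self) -> None:
        # Maps (tile, time step) -> the vehicle holding that reservation.
        self.reservations: Dict[Claim, str] = {}

    def request(self, vehicle: str, path: List[Claim]) -> int:
        """Reserve a path, returning the delay (in time steps) imposed."""
        delay = 0
        while True:
            shifted = [(tile, t + delay) for tile, t in path]
            if all(claim not in self.reservations for claim in shifted):
                for claim in shifted:
                    self.reservations[claim] = vehicle
                return delay
            delay += 1  # path conflicts with an earlier reservation; wait

manager = IntersectionManager()
# Two cars whose straight-through paths want the same middle tile at t=2.
print(manager.request("car_A", [((0, 1), 1), ((1, 1), 2), ((2, 1), 3)]))  # 0
print(manager.request("car_B", [((1, 0), 1), ((1, 1), 2), ((1, 2), 3)]))  # 1

Car B is held back exactly one time step – just long enough for its path to clear – which is all that “delays the vehicle slightly” amounts to.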

The complexity comes in trying to coordinate hundreds of these paths at any given moment. The advantage the automated solution has is that it is in communication with all the vehicles. What appears chaotic to us is actually highly connected and coordinated. It’s fluid and organic. It has a lot in common with things like beehives, ant colonies and even the rhythms of our own bodies. It may not be orderly in our rational sense, but it is natural.

Humans don’t deal very well with complexity. We can’t keep track of more than a dozen or so variables at any one time. We categorize and “chunk” data into easily managed sets that don’t overwhelm our working memory. We always try to simplify things down by imposing order. We use heuristics when things get too complex. We make gut calls and guesses. Most of the time, it works pretty well, but this system gets bogged down quickly. If we pulled the family SUV into the intersection shown in the AIM simulation, we’d probably jam on the brakes and have a minor mental meltdown as driverless cars zipped by us.

Artificial intelligence, on the other hand, loves complexity. It can juggle amounts of disparate data that humans could never dream of managing. This is not to say that computers are more powerful than humans. It’s just that they’re better at different things. It’s referred to as Moravec’s Paradox: It’s relatively easy to program a computer to do what a human finds hard, but it’s really difficult to get it to do what humans find easy. Tracking the trajectories and coordinating the flow of hundreds of autonomous cars would fall into the first category. Understanding emotions would fall into the second category.

This matters because, increasingly, technology is creating a world that is more dynamic, fluid and organic. Order, from our human perspective, will yield to efficiency. And the fact is that – in data-rich environments – machines will be much better at this than humans. Just like our perspectives on driving, our notions of order and efficiency will have to change.


Giving Thanks for The Law of Accelerating Returns

For the past few months, I’ve been diving into the world of show programming again, helping MediaPost put together the upcoming Email Insider Summit up in Park City. One of the keynotes for the Summit, delivered by Charles W. Swift, VP of Strategy and Marketing Operations for Hearst Magazines, is going to tackle a big question, “How do companies keep up with the ever accelerating rate of change of our culture?”

After an initial call with Swift, I did some homework and reacquainted myself with Ray Kurzweil’s Law of Accelerating Returns. Shortly after, I had to stop because my brain hurt. Now, I would like to pass that unique experience along to you.

In an interview that is now 12 years old, Kurzweil explained the concept, using biological evolution as an analogy. I’ll try to make this fast. Earth is about 4.6 billion years old. The very first life appeared about 3.8 billion years ago. It took another 1.7 billion years for multicellular life to appear. Then, about 1.2 billion years later, we had something called the Cambrian Explosion. This was really when the diversity of life we recognize today started. If you’ve been keeping track, you know that it took the earth 4.1 of its 4.6 billion years – about 90% of the time since the earth was formed – to produce complex life forms of any kind.

Things started to move much quicker at that point. Amphibians and reptiles appeared about 350 million years ago, dinosaurs appeared 225 million years ago, mammals 200 million years ago, dinosaurs disappeared about 70 million years ago, the first great apes appeared about 15 million years ago, and we Homo sapiens have only been around for 200,000 years or so. And, as a species, we really have only made much of a dent in the world in the last 10,000 years of our history. In the entire history of the world, that represents a very tiny 0.00022% slice. But consider how much the world has changed in those 10,000 years.
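Those last two figures are easy to verify. Here’s the back-of-the-envelope arithmetic in Python, using the numbers from the paragraphs above (with the start of complex life rounded to half a billion years ago):

EARTH_AGE = 4.6e9      # years since the earth formed
COMPLEX_LIFE = 0.5e9   # rough start of complex life, years ago
HUMAN_DENT = 10_000    # the years in which humans have made their dent

print(f"{(EARTH_AGE - COMPLEX_LIFE) / EARTH_AGE:.0%}")  # 89% -> "about 90%"
print(f"{HUMAN_DENT / EARTH_AGE:.5%}")                  # 0.00022%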


Kurzweil’s Law says that, like biology, technology also evolves exponentially. It took us a very long time to do much of anything at all. The wheel, stone tools and fire took us tens of thousands of years to figure out. But now, technological paradigm shifts happen in decades or less. And the pace keeps accelerating. The Law of Accelerating Returns states that in the first 20 years of the 21st century, we’ll have progressed as much as we did during the entire 20th century. Then we’ll double that progress again by 2034, and double it once more by 2041.

Let me put this in perspective. At this rate, if my youngest daughter – born in 1995 – lives to be 100 (not an unlikely forecast), she will see more technological change in her life than in the previous 20,000 years of human history!

This is one of those things we probably don’t think about because, frankly, it’s really hard to wrap your head around. The math shows why predictability is flying out the window and why we have to get comfortable reacting to the unexpected. It would also be easy to dismiss it, but Kurzweil’s concepts are sound. Evolution does accelerate exponentially, as has our rate of technological advancement. Unless the latter shows a dramatic reversal or slowdown, the future will move much, much faster than we can possibly imagine.

The reason change accelerates is that the technology we develop today builds the foundations required for the technological leaps that will happen tomorrow. Agriculture set the stage for industry. Industry enabled electricity. Electricity made digital technology possible. Digital technology enables nanotechnology. And so on. Each advancement sets the stage for the next, and we progress from stage to stage more rapidly each time.

So, for your extended long weekend, if you’re sitting in a turkey-induced tryptophan daze and there’s no game on, try wrapping your head around The Law of Accelerating Returns.

Happy Thanksgiving. You’re welcome.

Step One. React.

We’re learning what it means to be forced to be reactive. Sometimes, as on Friday, November 13 at 9:20 pm CET in Paris, France, we react with horror. But that’s the new reality. Plan as we might, we can’t predict everything. Sometimes we can’t predict anything. We just have to make ourselves – in the words of Nassim Taleb – antifragile.

Why is the world less predictable? One reason could be that it’s more connected. Things happen faster. Actions and reactions are connected by wired milliseconds. The world has become one huge Rube Goldberg machine and anyone can put it in motion at any time.

I suspect the world is also more organic. The temporary stasis of human effort that used to hold nature at bay for a bit is giving way to a more natural ebb and flow. Artificial barriers and constraints, like national borders, have little meaning any more. We flow back and forth across geography. Hierarchies have become networks. Centralized planning yields to spontaneous emerging events. We are afloat in an ocean of unpredictability. It’s hard to steer a straight path in such an environment.

Because of these two things, the world is definitely more amplified. Small things become big things much faster. Implications can grow thunderous in mere seconds. Ripple effects become tsunamis.

We want predictability. We want control. We hate that our world can be thrown into a tailspin by 8 people who hate us and what we stand for. We want intelligence to be foolproof. We want detection to be flawless. But while we wish for these things with all our hearts, the reality is that we will be forced to react. This is the world we have built. The technology that makes it wonderful is the same technology that, in a span of 40 minutes (the time it took for all the attacks in Paris), can make it heart-achingly painful.

In a world where structures give way to flow – where straight lines blur into oscillating waves – what can we do?

First of all, we can continually improve our ability to react. We have to make sense of new events as quickly as possible. We have to adapt more rapidly. Our world has to be more sensitive, more flexible, more nimble. Again, with a head nod to Nassim Taleb, we have to know how to minimize the downside and maximize the upside.

Secondly, we have to rethink how our institutions work. They have to evolve for a new world. And this evolution will happen faster in the areas of greatest unpredictability. For an enlightening read, try Team of Teams by General Stanley McChrystal. As leader of the Joint Special Operations Task Force in Afghanistan and Iraq, he was at the eye of the storm of unpredictability. His lessons have gained a terrifying new relevance after the events of last Friday.

Finally, we have to hold fast to our values. They are the things that cannot – must not – change. They are the one constant that helps us set our bearings when we react and adapt. While plans are a constant “work in progress,” values must be rock solid.

For most of our recorded history, we have tried to understand the world and gain some sense of control over it. We have tried to push back chaos with order – replace jagged or fluid curves of nature with artificially straight lines. Ironically, the more we have imposed our rational will, the more our environment has become dynamic, networked, organic, reactive and complex – all the things the world has always been. The harder we try to set our own beat, the more we find ourselves moving to the timeless rhythm of nature.

And in that world, adaptation is the whole ball of wax.


The Required Conditions for Innovation

Statistically speaking, it appears that there’s a correlation between atheism and innovation. But my point in last week’s column was not to show that atheists are more innovative. My goal was to try to hypothesize what the underlying causation might be. I don’t really care if atheists, or Buddhists – or Seventh Day Adventists for that matter – are more innovative. What does interest me, however, is what is unique about an environment in which both atheism and innovation can flourish.

Why am I so focused on innovation? Because innovation drives economic growth. It is the force that unleashes Schumpeterian Gales of Creative Destruction. In any formula measuring economic performance, Innovation always equals N. It’s a very big deal. The biggest deal.

That means the conditions that lead to innovation are worth noting. And I started on the national scale for a reason. Sometimes, it helps to change our perspective if we’re exploring the “why” of a question. Either pulling back to the macro or zooming in to the micro allows us to see things we may not see when we remain stuck in our current context. So, what can we learn about the conditions of innovation from the world’s most innovative countries?

Atheism = Innovation?

Let’s look at the atheist factor. How might a lack of religion lead to a surfeit of innovation? I think it may have something to do with belief – or rather – the lack of belief. When we believe something, we usually don’t go out of our way to prove it true. That also means we never find out it’s false. But nations that have a lot of atheists are not a very trusting lot, at least when it comes to things like government and other institutions. There was a moderately negative correlation (r = -0.4224, from last week’s numbers). They are skeptics. And when it comes to innovation, skepticism is very healthy.

If we look back to Alex Pentland’s ideas about Social Physics, we need two types of social interactions: exploration and engagement. The first of these is where innovation comes from. And skeptics are more exploratory than the trustful. They probe the unknown rather than rely on their beliefs. As Pentland says in his book Social Physics, “If you can find many such independent thinkers and discover that there is a consensus among a large subset of them, then a really, really good trading strategy is to follow the contrarian consensus.”

Ideological Diversity

It’s not just skepticism, however, that drives innovation. It’s also ideological diversity. You need a social network that encompasses a lot of different experiences and points of view. The broader the spectrum of ideas, the more likely it is that you’ve captured something approximating the truth somewhere in that spectrum. If, in your own organization, you can trade monolithic beliefs for a healthy respect for ideas that may not mirror your own, you’ve probably laid the groundwork for innovation.

Going Rogue

Not so many years ago, an organization I was part of called for a rather rushed retreat of all the executive management. We gathered in a posh ski resort for a brainstorming session. All the top managerial talent was present. The CEO took the stage and called on us to be innovative. But he, like most managers, did not encourage diversity. Rather, he believed unity was the way to innovate.

“No one can go rogue!” he preached from the corporate pulpit. In other words, “No one can disagree with me.”

If this CEO (who has since stepped down) had looked at countries like Sweden, or Japan, or South Korea, he might have realized that sometimes, “going rogue” is exactly what you need to come up with a new idea.

Are Atheists More Innovative?

A few columns back, I talked about the most innovative countries in the world, according to INSEAD, Johnson School of Management and WIPO. Switzerland, of all places, topped the list. At the time, I mentioned diversity as possibly being one of the factors. But for some reason, I just couldn’t let it lie there.

Last Friday afternoon, it being pretty miserable outside, I dusted off my Stats 101 prowess and decided to look for correlations. The next thing I knew, 3 hours had passed and I was earlobe deep in data tables and spreadsheets.

Yeah… that’s how I roll. That’s wassup.

But I digress. What initially sent me down this path was a new study out of the University of Kansas by Tien-Tsung Lee and co-authors Masahiro Yamamoto and Weina Ran. Working with data from Japan, they found that the amount of trust you have in media depends on the diversity of the community you live in. The more diverse the population, the lower the degree of trust in media.

This caught my attention – a negative correlation between trust and diversity. I wondered how those two things might triangulate with innovation. Was there a three-way link here?

So, I started compiling the data. First, I wanted to broaden the definition of innovation. Originally, I had cited the INSEAD Global Innovation Index. Bloomberg also has a ranking of innovation by country that uses a few different criteria. I decided to take an average, normalized score of the two together. In case you’re wondering, Switzerland scored much lower in the Bloomberg ranking, which had South Korea, Japan and Germany in the top three spots.

With my new innovation ranking, I then started to look for correlations. What part, for example, did trust play? According to Edelman, the global marketing giant that publishes an annual trust barometer, it plays a massive role: “Building trust is essential to successfully bringing new products and services to market.” Their trust barometer measures trust in the infrastructural institutions of the respective countries. So I added Edelman’s indexed trust scores to my spreadsheet and used a quick and dirty Pearson r-value test to look for significant correlations. For those as rusty as I am when it comes to stats: a perfect correlation would be 1.0. Strong relationships show up in the 0.6-and-above range. Moderate relationships are in the 0.3 to 0.6 range. Weak relationships are 0.3 and below. Zero values indicate no relationship. Inverse relationships follow the same scale, but with negative values.
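For anyone who wants to run the same quick-and-dirty check without a spreadsheet, here’s what it looks like in a few lines of Python. The scores below are made-up stand-ins, since the column doesn’t reproduce the underlying country data:

import numpy as np
from scipy import stats

# Hypothetical normalized scores for six countries (not the real data).
innovation = np.array([72.1, 68.4, 65.9, 60.2, 55.7, 49.3])
trust = np.array([48.0, 55.2, 60.1, 63.5, 70.4, 74.8])

r, p = stats.pearsonr(innovation, trust)
print(f"Pearson r = {r:.4f} (p = {p:.3f})")
# Read r against the bands above: 0.6+ is strong, 0.3 to 0.6 is moderate,
# below 0.3 is weak, and negative values indicate an inverse relationship.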

The result? Not only was there no positive correlation, there was actually a moderately significant negative correlation! For those interested, the r-value was -0.4224. Based on this admittedly amateur analysis, trust in national institutions and innovation do not seem to go hand in hand. Some of the most innovative countries are the least trusting, and vice versa. It certainly wasn’t the neat linear relationship Edelman implied in the press releases for its barometer.

Next, I turned to the obvious – the wealth of the respective nations. I added GDP per capita as a data point. Predictably, there was a strong positive correlation here – I came up with an r-value of .793. Rich countries are more innovative. Duh.

Now comes the really interesting part. What was the relationship between cultural diversity and innovation? If my original hypothesis was correct, there should be at least a moderate correlation here. The problem was trying to find an accurate measure of cultural diversity. I ended up using three measures from Alesina et al.: Ethnic Fractionalization, Linguistic Fractionalization and Religious Fractionalization. I averaged these out and indexed them to give me a single score of cultural diversity (see the sketch below). To my surprise, my hypothesis appeared to be significantly flawed: my r-value was -0.2488.
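The averaging-and-indexing step is just as easy to sketch. Again, these values are placeholders – the real Alesina et al. fractionalization scores run from 0 to 1, one per country:

import numpy as np

# One fractionalization score per country (placeholder numbers).
ethnic = np.array([0.57, 0.31, 0.09, 0.71])
linguistic = np.array([0.54, 0.29, 0.18, 0.64])
religious = np.array([0.82, 0.24, 0.41, 0.55])

# Average the three measures country by country...
diversity = (ethnic + linguistic + religious) / 3

# ...then index against the most diverse country (top score = 100).
indexed = 100 * diversity / diversity.max()
print(indexed.round(1))  # e.g. [100.   43.5   35.2   98.4]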

But then I started analyzing the individual measures of diversity. Ethnic Diversity and Innovation showed a moderate negative correlation: -0.5738. Linguistic Diversity and Innovation showed a less significant negative correlation: -0.3886. But Religious Diversity and Innovation came up as a moderate positive correlation: 0.4129! Of the three, religion is the only measure of diversity that’s directly ideological, at least to some extent.

This seemed promising, so I pushed it to the extreme. If religious diversity correlates with innovation, I wondered how the prevalence of atheists would relate. After all, this should be the ultimate measure of religious ideological freedom. So, using a combination of results from a worldwide Gallup survey and a study by Phil Zuckerman, I added an indexed “atheism” score. Sure enough, the r-value was 0.7461! That’s almost as significant as the correlation between national wealth and innovation! Based on my combined innovation scores, some of the least religious countries in the world (Japan, Sweden and Switzerland) are the most innovative.

So – ignoring for a moment the barn-door sized holes in my impromptu methodology and a whack of confounding factors – what might this hypothetically mean? I’ll come back to this intriguing question in next week’s Online Spin.

Consumers in the Wild

Once a Forager, Always a Forager

Your world is a much different place than the African savanna. But the evolution that began on those plains, over 100,000 generations ago, still dictates a remarkable degree of our modern behavior.

Take foraging, for example. We evolved as hunters and gatherers. It was our primary survival instinct. And even though the first hominids are relatively recent additions to the biological family tree, strategies for foraging have been developing for millions and millions of years. It’s hardwired into the deepest and most inflexible parts of our brain. It makes sense, then, that foraging instincts that were once reserved for food gathering should be applied to a wide range of our activities.

That is, in fact, what Peter Pirolli and Stuart Card discovered two decades ago. When they looked at how we navigated online sources of information, they found that humans used the very same strategy we would have used for berry picking or gathering cassava roots. And one of the critical elements of this was something called Marginal Value.

Bounded Rationality & Foraging

It’s hard work being a forager. Most of your day – and energy – is spent looking for something to eat. The sparser the food sources in your environment, the more time you spend looking for them. It’s not surprising, therefore, that we should have some fairly well-honed calculations for assessing the quality of our food sources. This is what biologist Eric Charnov called Marginal Value in 1976. It’s an instinctual (and therefore largely subconscious) evaluation of food “patches” by most types of foragers, humans included. It’s how our brain decides whether we should stay where we are or find another patch. It would have been a very big deal 2 million – or even 100,000 – years ago.
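Charnov’s rule has a surprisingly crisp form: stay in a patch until your instantaneous rate of gain falls to the average rate you could earn across the whole environment, travel time included. Here’s a toy version in Python – my own illustration of the theorem, not a model of real foragers:

import math

TRAVEL_TIME = 5.0  # time spent getting to the next patch

def gain(t: float) -> float:
    """Cumulative food gathered after t time units in a patch."""
    return 10 * (1 - math.exp(-0.5 * t))  # diminishing returns

def best_leave_time() -> float:
    """Scan leave times to maximize overall intake: gain(t) / (t + travel)."""
    candidates = (t / 100 for t in range(1, 2000))  # 0.01 .. 19.99
    return max(candidates, key=lambda t: gain(t) / (t + TRAVEL_TIME))

t_star = best_leave_time()
print(f"Leave after ~{t_star:.2f} time units "
      f"({gain(t_star):.1f} of a possible 10 units of food)")

With these made-up numbers, the optimal move is to leave after about 3.3 time units – long before the patch is picked clean, which is exactly the theorem’s point.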

Today, for most of us, food sources are decidedly less “patchy.” But old instincts die hard. So we did what humans do. We borrowed an old instinct and applied it to new situations. We exapted our foraging strategies and started using them for a wide range of activities where we had to have a rough and ready estimation of our return on our energy investment. Increasingly, more and more of these activities asked for an investment of cognitive processing power. And we did all this without knowing we were even doing it.

This brings us to Herbert Simon’s concept of Bounded Rationality, which I believe is tied directly to Charnov’s Marginal Value theorem. When we calculate how much mental energy we’re going to expend on an information-gathering task, we subconsciously size up the promise of the information “patches” available to us. Then we decide how to invest, based on our own “bounded” rationality.

Brands as Proxies for Foraging

It’s just this subconscious calculation that has turned the world of consumerism on its ear in the last two decades. As Itamar Simonson and Emanuel Rosen explain in their book Absolute Value, the explosion of information available has meant that we are making different marginal value calculations than we would have thirty or forty years ago. We have much richer patches available, so we’re more likely to invest the time to explore them. And, once we do, the way we evaluate our consumer choices changes completely. Our modern concept of branding was a direct result of both bounded rationality and sparse information patches. If a patch of objective and reliable information wasn’t apparent, we would rely on brands as a cognitive shortcut, saving our bounded rationality for more promising tasks.

Google, The Ultimate “Patch”

In understanding modern consumer behavior, I think we have to pay much more attention to this idea of marginal value. What is the nature of the subconscious algorithm that decides whether we’re going to forage for more information or rely on our brand beliefs? We evolved foraging strategies that play a huge part in how we behave today.

For example, the way we navigate our physical environment appears to owe much to how we used to search for food. Women determine where they’re going differently than men because women used to search for food differently. Men tend to do this by orientation, mentally maintaining a spatial grid in their minds against which they plot their own location. Women do it by remembering routes. In my own research, I found split-second differences in how men and women navigated websites that seem to go back to those same foundations.

Whether you’re a man or a woman, however, you need some type of mental inventory of the information patches available to you in order to assess their marginal value. This is the mental landscape Google plays in. For more and more decisions, our marginal value calculation starts with a quick search on Google to see if any promising patches show up in the results. Our need to keep a mental inventory of patches can now be offloaded to Google.

It seems ironic that in our current environment, more and more of our behavior can be traced back millions of years to behaviors that evolved in a world where high-tech meant a sharper rock.