The World in Bite Sized Pieces

It’s hard to see the big picture when your perspective is limited to 160 characters.

Or when we keep getting distracted from said big picture by that other picture that always seems to be lurking over there on the right side of our screen – the one of Kate Upton tilting forward wearing a wet bikini.

Two things are at work here obscuring our view of the whole: Our preoccupation with the attention economy and a frantic scrambling for a new revenue model. The net result is that we’re being spoon-fed stuff that’s way too easy to digest. We’re being pandered to in the worst possible way. The world is becoming a staircase of really small steps, each of which has a bright shiny object on it urging us to scale just a little bit higher. And we, like idiots, stumble our way up the stairs.

This cannot be good for us. We become better people when we have to chew through some gristle. Or when we’re forced to eat our broccoli. The world should not be the cognitive equivalent of Cap’n Crunch cereal.

It’s here where human nature gets the best of us. We’re wired to prefer scintillation to substance. Our intellectual laziness and willingness to follow whatever herd seems to be heading in our direction have conspired to create a world where Donald Trump can be a viable candidate for president of the United States – where our attention span is measured in fractions of a second – where the content we consume is dictated by a popularity contest.

Our news is increasingly coming to us in smaller and smaller chunks. The exploding complexity of our world, which cries out to be understood in depth, is increasingly parceled out to us in pre-digested little tidbits pushed to our smartphones. We spend scant seconds scanning headlines to stay “up to date.” And an algorithm trying to understand where our interests lie usually determines the stories we see.

This algorithmic curation creates both “Filter” and “Agreement” Bubbles. The homogeneity of our social network leads to a homogeneity of content. But if we spend all our time with others who think like us, we end up with an intellectually polarized society in which the factions that sit at opposite ends of any given spectrum are openly hostile to each other. The gaps between our respective ideas of what is right are simply too big, and no one has any interest in building a bridge across them. We’re losing our ideological interface areas, those opportunities to encounter ideas that force us to rethink and reframe, broadening our worldview in the process. We sacrifice empathy and we look for news that “sounds right” to us, no matter what “right” might be.

This is a crying shame, because there is more thought-provoking, intellectually rich content being produced than ever before. But there is also more sugar-coated crap whose sole purpose is to get us to click.

I’ve often talked about the elimination of friction. Usually, I think this is a good thing. Bob Garfield, in a column a few months ago, called for a whoop-ass can of WD-40 to remove all transactional friction. But if we make things too easy to access, will we also remove those cognitive barriers that force us to slow down and think, giving our rationality a chance to catch up with impulse? And it’s not just on the consumption side where a little bit of friction might bring benefits. The upside of production friction was that it slowed down streams of content just long enough to introduce an editorial voice. Someone somewhere had to give some thought to what might actually be good for us.

In other words, it was someone’s job to make sure we ate our vegetables.

The Face of Disruption

If you ask publishing giant Elsevier, Alexandra Elbakyan is a criminal – a pernicious pirate.

If you ask the Lifeboat Foundation, or blogger P.Z. Myers, or millions of students around the world, Alexandra Elbakyan is a hero.

Labels can be tricky things, especially in a world of disruption.

Ms. Elbakyan certainly doesn’t look like a criminal. You would walk right past her on a campus quad and think nothing of it. She looks pretty much like what you would expect a post-grad neuroscience student from Kazakhstan to look like.

But her face is the face of disruption. And she’s at the receiving end of a lawsuit launched by Elsevier that, if you were to take it seriously, would be worth several billion dollars.

Just over a year ago, I wrote a column about the academic journal racket. The work of thousands of researchers is published by Elsevier and others and remains locked behind hugely expensive paywalls. Elbakyan, as a post-grad research student at a university that couldn’t afford the licensing fees to access these journals, got frustrated. In a letter she wrote in response to the lawsuit, she elaborated on this frustration:

“When I was a student in Kazakhstan University, I did not have access to any research papers. These papers I needed for my research project. Payment of 32 dollars is just insane when you need to skim or read tens or hundreds of these papers to do research. I obtained these papers by pirating them.”

Elbakyan was not alone in this piracy.

“Later I found there are lots and lots of researchers (not even students, but university researchers) just like me, especially in developing countries. They created online communities (forums) to solve this problem.”

“…to solve this problem.” There, in a nutshell, is the source of disruption. Elbakyan thought there had to be a more efficient way to facilitate this communal piracy and turned to technology, launching the Sci-Hub search portal in 2011. Relying on access keys donated by academics at institutions with subscriptions to research publishers, Sci-Hub bypasses the paywall and locates the paper a researcher is looking for. It then delivers the paper and saves a copy to LibGen, a library of “pirated” papers that will continue to be freely available to future researchers. The LibGen database now has over 48 million papers available.

Is Elbakyan guilty of piracy? Absolutely – as it’s defined by the law. She makes no bones about the fact. She uses the term repeatedly in her own letter of defense.

But, in that letter, Alexandra Elbakyan also appeals to a higher law – the law of fairness. She is not stealing from the authors of that research, who receive no compensation for their work from the publisher. When Elsevier claims “irreparable harm,” the only harm that can be identified is to their own business model. There is no harm to academics, who are becoming increasingly hostile to the business practices of publishers like Elsevier. There is certainly no harm to fellow researchers, who now have open access to knowledge, helping them in their own work. And there is no harm to the public, who can only benefit from the more open sharing of knowledge amongst academics. The only one hurt here is Elsevier.

According to the 2014 annual report of RELX (Elsevier’s parent company), the company raked in £2,944 million ($4.23 billion US) from its various subscription businesses. The Scientific, Technical and Medical division (the same division that Elbakyan “irreparably harmed”) had revenues of £2,048 million ($2.94 billion US) and a tidy little operating profit of £787 million ($1.13 billion US).

Poor Elsevier.

The question that should be asked here is not whether Elsevier’s business model has been harmed, but rather, does it deserve to live? According to that same annual report, they “help scientists make new discoveries, lawyers win cases, doctors save lives and executives forge commercial relationships with their clients.”

Actually, no.

Elsevier does none of those things. The information they deal in does those things. And that same information is finding a way to be free, thanks to people like Alexandra Elbakyan. Elsevier is just the middleman being cut out of the supply chain through technology.

The American legal system will undoubtedly side with Elsevier. The law, as it is currently written, defends the right of a corporation to do business, whether or not people like you and me deem that business ethical. But ultimately, we rely on our laws to be fair, and what is fair depends on the context of our society. That context can be changed through the forces of disruption.

Sometimes, disruption comes in the guise of a young post-grad student from Kazakhstan.

We’re Informed. But Are We Thoughtful?

I’m a bit of a jerk when I write. I lock myself behind closed doors in my home office. In the summer, I retreat to the most remote reaches of the back yard. The reason? I don’t want to be interrupted with human contact. If I am interrupted, I stare daggers through the interrupter and answer in short, clipped sentences. The house has to be silent. If conditions are less than ideal, my irritation is palpable. My family knows this. The warning signal is “Dad is writing.” This can be roughly translated as “Dad is currently an asshole.” The more I try to be thoughtful, the bigger the ass I am.

I suspect Henry David Thoreau was the same. He went even further than my own backyard exile, camping out alone for two years in a cabin he built on Ralph Waldo Emerson’s land at Walden Pond. He said things like,

“I never found a companion that was so companionable as solitude.”

But Thoreau was also a pretty thoughtful guy, who advised us that,

“As a single footstep will not make a path on the earth, so a single thought will not make a pathway in the mind. To make a deep physical path, we walk again and again. To make a deep mental path, we must think over and over the kind of thoughts we wish to dominate our lives.”

But, I ask, how can we be thoughtful when we are constantly distracted by information? Our mental lives are full of single footsteps. Even if we intend to cover the same path more than once, there are a thousand beeps, alerts, messages, prompts, pokes and flags that are beckoning us to start down a new path, in a different direction. We probably cover more ground, but I suspect we barely disturb the fallen leaves on the paths we take.

I happen to do all my reading on a tablet. I do this for three reasons: first, I always have my entire library with me, and I usually have four books on the go at the same time (currently 1491, Reclaiming Conversation, Flash Boys and 50 Places to Bike Before You Die); second, I like to read before I go to sleep, and I don’t need to keep a light on that keeps my wife awake; and third, I like to highlight passages and make notes. But there’s a trade-off I’ve had to make. I don’t read as thoughtfully as I used to. I can’t “escape” with a book anymore. I am often tempted to check email, play a quick game of 2048 or search for something on Google. Maybe the fact that my attention is always divided amongst four books is part of the problem. Or maybe it’s just that my attention deficit is worse than it used to be.

There is a big difference between being informed and being thoughtful. And our connected world definitely puts the bias on information. Being connected is all about being informed. But being thoughtful requires us to remove distraction. It’s the deep paths Thoreau was referring to. And it requires a very different mindset. The brain is a single-purpose engine. We can either be informed or be thoughtful. We can’t be both at the same time.

At the University of California, San Francisco, Mattias Karlsson and Loren Frank found that rats need two very different types of cognitive activity when mastering a maze. First, when they explore a maze, certain parts of their brain are active as they’re being “informed” about their new environment. But they don’t master the maze unless they’re allowed downtime to consolidate the information into new persistent memories. Different parts of the brain are engaged, including the hippocampus. They need time to be thoughtful and create a “deep path.”

In this instance, we’re not all that different from rats. In his research, MIT’s Alex “Sandy” Pentland found that effective teams tend to cycle through two very different phases: First, they explore, gathering new information. Then, just like the thoughtful rats, they engage as a group, taking that information, digesting it and synthesizing it for future execution. Pentland found that while both are necessary, they don’t exist at the same time:

“Exploration and engagement, while both good, don’t easily coexist, because they require that the energy of team members be put to two different uses. Energy is a finite resource.”

Ironically, research is increasingly showing that our previous definitions of cognitive activity may have been off the mark. We always assumed that “mind-wandering” or “daydreaming” was a non-productive activity. But we’re finding out that it’s an essential part of being thoughtful. We’re not actually “wandering.” It’s just the brain’s way of synthesizing and consolidating information. We’re wearing deeper paths in the byways of our mind. But a constant flow of new information, delivered through digital channels, keeps us from synthesizing the information we already have. Our brain is too busy being informed to make the switch to thoughtfulness. We don’t have enough cognitive energy to do both.

What price might we pay for being “informed” at the expense of being “thoughtful?” It appears that it might be significant. Technology distraction in the classroom could lower grades by close to 20 percent. And you don’t even have to be the one using the device. Just having an open screen in the vicinity might distract you enough to drop your report card from a “B” to a “C.”

Having read this, you now have two choices. You could click off to the next bit of information. Or, you could stare into space for a few minutes and be lost in your thoughts.

Choose wisely.

Are Atheists More Innovative?

A few columns back, I talked about the most innovative countries in the world, according to INSEAD, Johnson School of Management and WIPO. Switzerland, of all places, topped the list. At the time, I mentioned diversity as possibly being one of the factors. But for some reason, I just couldn’t let it lie there.

Last Friday afternoon, it being pretty miserable outside, I dusted off my Stats 101 prowess and decided to look for correlations. The next thing I knew, 3 hours had passed and I was earlobe deep in data tables and spreadsheets.

Yeah… that’s how I roll. That’s wassup.

But I digress. What initially sent me down this path was a new study out of the University of Kansas by Tien-Tsung Lee and co-authors Masahiro Yamamoto and Weina Ran. Working with data from Japan, they found that the amount of trust you have in media depends on the diversity of the community you live in. The more diverse the population, the lower the degree of trust in media.

This caught my attention – a negative correlation between trust and diversity. I wondered how those two things might triangulate with innovation. Was there a three-way link here?

So, I started compiling the data. First, I wanted to broaden the definition of innovation. Originally, I had cited the INSEAD Global Innovation Index. Bloomberg also has a ranking of innovation by country that uses a few different criteria. I decided to take an average, normalized score of the two together. In case you’re wondering, Switzerland scored much lower in the Bloomberg ranking, which had South Korea, Japan and Germany in the top three spots.
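The combination step described above might look something like this in code. This is a sketch of the general technique (min-max normalization, then averaging); the scores below are placeholders, not the actual INSEAD or Bloomberg figures:

```python
# Combine two differently-scaled innovation rankings into one normalized
# average score per country. All numbers here are illustrative only.

def min_max_normalize(scores):
    """Rescale a dict of country -> score to the 0-1 range."""
    lo, hi = min(scores.values()), max(scores.values())
    return {c: (s - lo) / (hi - lo) for c, s in scores.items()}

insead = {"Switzerland": 68.3, "South Korea": 62.1, "Japan": 59.2}
bloomberg = {"Switzerland": 83.5, "South Korea": 96.3, "Japan": 90.6}

norm_a = min_max_normalize(insead)
norm_b = min_max_normalize(bloomberg)

# The combined score is the simple average of the two normalized scores.
combined = {c: (norm_a[c] + norm_b[c]) / 2 for c in insead}
```

Normalizing first matters: without it, whichever index uses the bigger numeric scale would quietly dominate the average.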

With my new innovation ranking, I then started to look for correlations. What part, for example, did trust play? According to Edelman, the global marketing giant that publishes an annual trust barometer, it plays a massive role: “Building trust is essential to successfully bringing new products and services to market.” Their trust barometer measures trust in the infrastructural institutions of the respective countries. So I added Edelman’s indexed trust scores to my spreadsheet and used a quick and dirty Pearson r-value test to look for significant correlations. For those as rusty as I am when it comes to stats: a perfect correlation would be 1.0, strong relationships show up in the 0.6-and-above range, moderate relationships fall between 0.3 and 0.6, and weak relationships are 0.3 and below. Zero values indicate no relationship. Inverse relationships follow the same scale but with negative values.
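For anyone who wants to follow along at home, a “quick and dirty” Pearson r test takes only a few lines of Python. The trust and innovation numbers below are made up for illustration; they are not the Edelman or innovation-index values:

```python
# Pearson correlation coefficient, computed from scratch.
# Returns a value between -1.0 (perfect inverse) and 1.0 (perfect positive).

def pearson_r(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

trust = [80, 75, 60, 55, 40]       # hypothetical trust-barometer scores
innovation = [30, 45, 55, 70, 85]  # hypothetical innovation scores

r = pearson_r(trust, innovation)   # negative here: more trust, less innovation
```

With real data you would feed in one (trust, innovation) pair per country; a spreadsheet’s built-in CORREL function does the same calculation.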

The result? Not only was there no positive correlation, there was actually a moderately significant negative correlation! For those interested, the r-value was -0.4224. Based on this admittedly amateur analysis, trust in national institutions and innovation do not seem to go hand in hand. Some of the most innovative countries are the least trusting, and vice versa. It certainly wasn’t the neat linear relationship that Edelman supposed in the press releases for their barometer.

Next, I turned to the obvious – the wealth of the respective nations. I added GDP per capita as a data point. Predictably, there was a strong positive correlation here – I came up with an r-value of 0.793. Rich countries are more innovative. Duh.

Now comes the really interesting part. What was the relationship between cultural diversity and innovation? If my original hypothesis was correct, there should be at least a moderate correlation here. The problem was trying to find an accurate measure of cultural diversity. I ended up using three measures from Alesina et al: Ethnic Fractionalization, Linguistic Fractionalization and Religious Fractionalization. I averaged these out and indexed them to give me a single score of cultural diversity. To my surprise, my hypothesis appeared to be significantly flawed – my r-value was -0.2488.

But then I started analyzing the individual measures of diversity. Ethnic Diversity and Innovation showed a moderate negative correlation: -0.5738. Linguistic Diversity and Innovation showed a less significant negative correlation: -0.3886. But Religious Diversity and Innovation came up as a moderate positive correlation: 0.4129! Of the three, religion is the only measure of diversity that’s directly ideological, at least to some extent.

This seemed promising, so I pushed it to the extreme. If religious diversity is correlated with innovation, I wondered how the prevalence of atheists would relate. After all, this should be the ultimate measure of religious ideological freedom. So, using a combination of results from a worldwide Gallup survey and a study from Phil Zuckerman, I added an indexed “atheism” score. Sure enough, the r-value was 0.7461! This was almost as significant as the correlation between national wealth and innovation! Based on my combined innovation scores, some of the least religious countries in the world (Japan, Sweden and Switzerland) are the most innovative.

So – ignoring for a moment the barn-door sized holes in my impromptu methodology and a whack of confounding factors – what might this hypothetically mean? I’ll come back to this intriguing question in next week’s Online Spin.

Consumers in the Wild

Once a Forager, Always a Forager

Your world is a much different place than the African savanna. But over 100,000 generations of evolution that started on those plains still dictate a remarkable degree of our modern behavior.

Take foraging, for example. We evolved as hunters and gatherers. It was our primary survival instinct. And even though the first hominids are relatively recent additions to the biological family tree, strategies for foraging have been developing for millions and millions of years. It’s hardwired into the deepest and most inflexible parts of our brain. It makes sense, then, that foraging instincts that were once reserved for food gathering should be applied to a wide range of our activities.

That is, in fact, what Peter Pirolli and Stuart Card discovered two decades ago. When they looked at how we navigated online sources of information, they found that humans used the very same strategy we would have used for berry picking or gathering cassava roots. And one of the critical elements of this was something called Marginal Value.

Bounded Rationality & Foraging

It’s hard work being a forager. Most of your day – and energy – is spent looking for something to eat. The sparser the food sources in your environment, the more time you spend looking for them. It’s not surprising, therefore, that we should have some fairly well-honed calculations for assessing the quality of our food sources. This is what biologist Eric Charnov called Marginal Value in 1976. It’s an instinctual (and therefore largely subconscious) evaluation of food “patches” by most types of foragers, humans included. It’s how our brain decides whether we should stay where we are or find another patch. It would have been a very big deal 2 million – or even 100,000 – years ago.
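Charnov’s stay-or-go logic can be sketched in a few lines of code. This is a toy model, not Charnov’s actual formulation: the diminishing-returns curve and all the numbers are invented for illustration. The rule it demonstrates is the real one, though: leave a patch when its instantaneous rate of return drops below the average rate available in the environment at large.

```python
# Toy sketch of Charnov's Marginal Value Theorem (all numbers invented).

def patch_gain(t, total=100.0, decay=0.5):
    """Cumulative food gained after t minutes in a depleting patch."""
    return total * (1 - (1 - decay) ** t)

def marginal_rate(t, dt=1e-6):
    """Instantaneous gain rate: the slope of the gain curve at time t."""
    return (patch_gain(t + dt) - patch_gain(t)) / dt

def time_to_leave(environment_rate, step=0.01):
    """Stay while the patch still beats the environment's average rate."""
    t = 0.0
    while marginal_rate(t) > environment_rate:
        t += step
    return t

# In a richer environment (a higher average rate elsewhere), the optimal
# forager abandons each patch sooner.
leave_rich = time_to_leave(environment_rate=20.0)
leave_poor = time_to_leave(environment_rate=5.0)
```

That asymmetry is the whole theorem in miniature: when good patches are everywhere, lingering in a depleting one is a losing bet.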

Today, for most of us, food sources are decidedly less “patchy.” But old instincts die hard. So we did what humans do. We borrowed an old instinct and applied it to new situations. We exapted our foraging strategies and started using them for a wide range of activities where we had to have a rough and ready estimation of our return on our energy investment. Increasingly, more and more of these activities asked for an investment of cognitive processing power. And we did all this without knowing we were even doing it.

This brings us to Herbert Simon’s concept of Bounded Rationality. I believe this is tied directly to Charnov’s theorem of Marginal Value. When we calculate how much mental energy we’re going to expend on an information-gathering task, we subconsciously determine the promise of the information “patches” available to us. Then we decide to invest accordingly, based on our own “bounded” rationality.

Brands as Proxies for Foraging

It’s just this subconscious calculation that has turned the world of consumerism on its ear in the last two decades. As Itamar Simonson and Emanuel Rosen explain in their book Absolute Value, the explosion of information available has meant that we are making different marginal value calculations than we would have thirty or forty years ago. We have much richer patches available, so we’re more likely to invest the time to explore them. And, once we do, the way we evaluate our consumer choices changes completely. Our modern concept of branding was a direct result of both bounded rationality and sparse information patches. If a patch of objective and reliable information wasn’t apparent, we would rely on brands as a cognitive shortcut, saving our bounded rationality for more promising tasks.

Google, The Ultimate “Patch”

In understanding modern consumer behavior, I think we have to pay much more attention to this idea of marginal value. What is the nature of the subconscious algorithm that decides whether we’re going to forage for more information or rely on our brand beliefs? We evolved foraging strategies that play a huge part in how we behave today.

For example, the way we navigate our physical environment appears to owe much to how we used to search for food. Women determine where they’re going differently than men because women used to search for food differently. Men tend to do this by orientation, mentally maintaining a spatial grid in their minds against which they plot their own location. Women do it by remembering routes. In my own research, I found split-second differences in how men and women navigated websites that seem to go back to those same foundations.

Whether you’re a man or a woman, however, you need some type of mental inventory of the information patches available to you in order to assess their marginal value. This is the mental landscape Google plays in. For more and more decisions, our marginal value calculation starts with a quick search on Google to see if any promising patches show up in the results. Our need to keep a mental inventory of patches can now be delegated to Google.

It seems ironic that in our current environment, more and more of our behavior can be traced back millions of years to behaviors that evolved in a world where high-tech meant a sharper rock.

Do We Really Want Virtual Reality?

Facebook bought Oculus. Their goal is to control the world you experience while wearing a pair of modified ski goggles. Mark Zuckerberg is stoked. Netflix is stoked. Marketers the world over are salivating. But, how should you feel about this?

Personally, I’m scared. I may even be terrified.

First of all, I don’t want anyone, especially not Mark Zuckerberg, controlling my sensory world.

Secondly, I’m pretty sure we’re not built to be virtually real.

I understand the human desire to control our environment. It’s part of the human hubris. We think we can do a better job than nature. We believe introducing control and predictability into our world is infinitely better than depending on the caprices of nature. We’ve thought so for many thousands of years. And – Oh Mighty Humans Who Dare to be Gods – just how is that working out for us?

Now that we’ve completely screwed up our physical world, we’re building an artificial version. Actually, it’s not really “we” – it’s “they.” And “they” are for profit organizations that see an opportunity. “They” are only doing it so “they” control our interface to consciousness.

Personally, I’m totally comfortable giving a profit driven corporation control over my senses. I mean, what could possibly happen? I’m sure anything they may introduce to my virtual world will be entirely for my benefit. I’m sure they would never take the opportunity to use this control to add to their bottom line. If you need proof, look how altruistically media – including the Internet – has evolved under the stewardship of corporations.

Now, their response would be that we can always decide to take the goggles off. We stay in control, because we have an on/off switch. What they don’t talk about is the fact that they will do everything in their power to keep us from switching their VR world off. It’s in their best interest to do so, and by best interest, I mean the more time we spend in their world, as opposed to the real one, the more profitable it is for them. They can hold our senses hostage and demand ransom in any form they choose.

How will they keep us in their world? By making it addictive. And this brings us to my second concern about Virtual Reality – we’re just not built for it.

We have billions of neurons that are dedicated to parsing and understanding a staggeringly complex and dynamic environment. Our brain is built to construct a reality from thousands and thousands of external cues. To manage this, it often takes cognitive shortcuts to bring the amount of processing required down to a manageable level. We prefer pleasant aspects of reality. We are alerted to threats. Things that could make us sick disgust us. The brain manages the balance by a judicious release of neurochemicals that make us happy, sad, disgusted or afraid. Emotions are the brain’s way of effectively guiding us through the real world.

A virtual world, by necessity, will have a tiny fraction of the inputs we would find in the real world. Our brains will get an infinitesimal slice of the sensory bandwidth they’re used to. Further, what inputs they do get will have the subtlety of a sledgehammer. Ham-fisted programmers will try to push our emotional hot buttons, all in the search for profit. This means a few sections of our brain will be cued far more frequently and violently than they were ever intended to be. Additionally, huge swaths of our environmental processing circuits will remain dormant for extended periods of time. I’m not a neurologist, but I can’t believe that will be a good thing for our cognitive health.

We were built to experience the world fully, through all our senses. We have evolved to deal with a dynamic, complex and often unexpected environment. We are supposed to interact with the serendipity of nature. It is what it means to be human. I don’t know about you, but I never, ever want to auction off this incredible gift to a profit-driven corporation in return for a plastic, programmed, three-dimensional interface.

I know this plea is too late. Pandora’s box has been opened. The barn door is open. The horse is long gone. But like I said, I’m scared.

Make that terrified.

How Our Brains Process Price Information

We have a complex psychological relationship with pricing. A new brain-scanning study out of Harvard and Stanford starts to pick apart the dynamics of that relationship.

Uma R. Karmarkar, Baba Shiv, and Brian Knutson wanted to see how we evaluate a potential purchase when the price is the first piece of information we get, as opposed to the last. They used both fMRI scanning and behavioral tracking to see how the study participants responded. Participants were given $40 to spend and then were presented with a number of sample offers. In all cases, the price represented an attractive bargain on the product featured. But one group was given the price first, and the second group was given the price last.

There was another critical difference in the evaluation process as well. In the first phase of the study, participants were shown products that they would like to buy, and in the second phase, they were shown products that they would have to buy. The difference between the two was how they activated the reward center of our brain – the nucleus accumbens. I’ve been talking for years about the importance of understanding the balance of risk and reward in our purchase decisions. This study provides a little more understanding about how our brain processes those two factors.

In the first phase, participants were shown a variety of products that they would consider rewarding. These would fall into the first quadrant of the risk/reward matrix I introduced in my column from five years ago. The researchers were paying particular attention to two different parts of the brain – the nucleus accumbens and the medial prefrontal cortex. For a layman’s analogy, think of you and a five-year-old walking down the toy aisle in a department store. The nucleus accumbens is the five-year-old who starts chanting, “I want it. I want it. I want it.” The medial prefrontal cortex is the adult who decides if they’re actually going to buy it. In the study, the researchers found that the sequence in which these two parts of the brain “lit up” depended on whether or not you saw the price first. If you saw the product first, the nucleus accumbens started its chant – “I want it.” If you saw the price first, the medial prefrontal cortex kicked into action and started evaluating whether the offer represented a good bargain. In the case of the reward products, although the sequence varied, the actual purchase process didn’t. In most cases, participants still ended up making the purchase, whether price was presented first or last.

But things changed when the researchers tried a variety of products that fell into the second quadrant of the risk/reward matrix – low risk and low reward. These are the everyday items we have to buy. In the study, they included things like a water filtration pitcher, a pack of AA batteries, a USB drive, and a flashlight. There was nothing here that was likely to get the nucleus accumbens chanting.

Now, it should be noted that this follow-up study did not include the fMRI scanning, but by tracking purchasing behaviors we can make some pretty educated guesses as to what’s happening in the respective brains of our participants. Here, presenting prices first resulted in a significant increase in actual purchases over instances when price was presented last. If price comes first, we can imagine that the prefrontal cortex is indicating that it’s a good bargain on a needed product. But if a relatively boring product is presented first for evaluation to the nucleus accumbens, there’s little to excite the reward center.

An important caveat to this part of the study is that the prices presented represented significant savings on the products. After the simulated purchases, participants were asked to indicate a price they would be willing to pay for the product. When price was the lead, the named prices tended to be a little lower, indicating that if you are going to lead with price, especially for quadrant-two products, you'd better make sure you're offering a true bargain.

If anything, this study provides further proof of the value of knowing a prospect's mental landscape. What are the risk and reward factors that will be motivating them? Will the medial prefrontal cortex or the nucleus accumbens be calling the shots? What priming effects might an early presentation of price have on the process?

When I wrote about the risk/reward matrix five years ago, one commenter said "a simple low-high risk/low-high reward graph is not very useful for driving just in time and location based offers, discounts, etc." I respectfully disagree. While more sophisticated models are certainly possible, I think even a simple 2×2 matrix that helps map out the decision factors in play with purchases would be a significant step forward. And this isn't about driving real-time variations on offers. It's about understanding the fundamentals of the buyer's decision process. There's nothing wrong with simplicity, especially if it drives greater usage.
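That simplicity is exactly why the matrix is usable in practice. As a sketch of the point, here is the 2×2 laid out in a few lines of Python. The quadrant labels and the "lead with price or product" heuristic are my own illustrative mapping drawn from the study discussion above, not a published model.

```python
# The 2x2 risk/reward matrix as a lookup table. Each quadrant maps to
# an illustrative label and a suggestion for what to present first,
# following the priming findings discussed above (hypothetical mapping).
QUADRANTS = {
    ("low", "high"):  ("impulse reward buy",  "product first"),
    ("low", "low"):   ("everyday staple",     "price first"),
    ("high", "high"): ("considered reward",   "product first"),
    ("high", "low"):  ("grudge purchase",     "price first"),
}

def suggest_lead(risk: str, reward: str) -> str:
    """Return the quadrant label and what to lead with in the offer."""
    label, lead = QUADRANTS[(risk, reward)]
    return f"{label}: show {lead}"
```

Even this toy version forces the useful question: which quadrant is this purchase actually in, and therefore which part of the brain are we talking to?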

The Persona is Dead, Long Live the Person

First, let me go on record as saying that, up to this point, I've been a fan of personas. In my past marketing and usability work, I used personas extensively as a tool. But I'm definitely aware that not everyone is equally enamored with personas. And I also understand why.

Personas, like any tool, can be used both correctly and incorrectly. When used correctly, they can help bridge the gap between the left brain and the right brain. They live in the middle ground between instinct and intellectualism. They provide a human face to raw data.

But it’s just this bridging quality that tends to lead to abuse. On the instinct side, personas are often used as a shortcut to avoid quantitative rigor. Data-driven people typically hate personas for this reason. Often, personas end up as fluffy documents and life-sized cardboard cutouts with no real purpose. It seems like a sloppy way to run things.

On the intellectual side, because quant people distrust personas, they leave themselves squarely on the data side of the marketing divide. They can understand numbers – people, not so much. This is where personas can shine. At their best, they give you a conceptual container with a human face to put data into, providing a richer but less precise context that allows you to identify, understand and play out potential behaviors that data alone may not pinpoint.

As I said, because personas are intended as a bridging tool, they often remain stranded in no man’s land. To use them effectively, the practitioner should feel comfortable living in this gap between quant and qual. Too far one way or the other and it’s a pretty safe bet that personas will either be used incorrectly or be discarded entirely.

Because of this potential for abuse, maybe it’s time we threw personas in the trash bin. I suspect they may be doing more harm than good to the practice of marketing. Even at their best, personas were meant as a more empathetic tool to allow you to think through interactions with a real live person in mind. But in order to make personas play nice with real data, you have to be very diligent about continually refining your personas based on that data. Personas were never intended to be placed on a shelf. But all too often, this is exactly what happens. Usually, personas are a poor and artificial proxy for real human behaviors. And this is why they typically do more harm than good.

The holy grail of marketing would be to somehow give real time data a human face. If we could find a way to bridge left brain logic and right brain empathy in real time to discover insights that were grounded in data but centered in the context of a real person’s behaviors, marketing would take a huge leap forward. The technology is getting tantalizingly close to this now. It’s certainly close enough that it’s preferable to the much abused persona. If – and this is a huge if – personas were used absolutely correctly they can still add value. But I suspect that too much effort is spent on personas that end up as documents on a shelf and pretty graphics. Perhaps that effort would be better spent trying to find the sweet spot between data and human insights.
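One way to picture that sweet spot between data and human insight is a persona that is a living data container rather than a laminated document: a named, human-shaped object whose attributes are recomputed every time new behavior is observed. This is purely an illustrative sketch; the field names and metrics are hypothetical.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Persona:
    """A persona kept alive by data: every observed session updates
    its summary attributes instead of letting them sit on a shelf."""
    name: str
    avg_session_seconds: float = 0.0
    preferred_channel: str = "unknown"
    observed_sessions: list = field(default_factory=list)

    def record_session(self, seconds: float, channel: str) -> None:
        self.observed_sessions.append((seconds, channel))
        # Recompute the summary view from all observed behavior.
        self.avg_session_seconds = mean(s for s, _ in self.observed_sessions)
        channels = [c for _, c in self.observed_sessions]
        self.preferred_channel = max(set(channels), key=channels.count)
```

The point isn't the code; it's the discipline it represents. If the persona's attributes are derived from behavior rather than asserted in a kickoff workshop, the "document on a shelf" failure mode becomes impossible by construction.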

The Secret of Successful Marketing Lies in Split Seconds

The other day, I was having lunch in a deli. I was also watching the front door, which you had to push to get in. Almost everyone who came to the door pulled, even though there was a fairly big sign over the handle which said “Push.” The problem? The door had the wrong kind of handle. It had a pull handle where a push plate should have been. The door had been mounted backwards. In usability terms, the door handle presented a misleading affordance.

I suspect the door had been there for many years. I was at the deli for about 30 minutes. In that time, about 70% of the people (out of probably close to 50) pulled rather than pushed. Extrapolating this over the years, that means thousands and thousands of people have had to try twice to enter this particular place of business. Yet, the only acknowledgement of this instance of customer pain was the sign that had been taped to the door – “Push” – and I suspect there was an implied “(You Idiot)” following that.

I suspect most marketing falls in the same category as that sign. It’s an attempt to fight the intuitive actions that customers take – those split-second actions that happen before our brain has a chance to kick in. And we have to counteract those split-second decisions because the path we have created for our customers was built without an understanding of those intuitive actions. When we realize that our path runs counter to our customers’ natural behaviors, do we rebuild the path? Does the deli owner pay a contractor to remount the door? No, we post a sign asking customers to push rather than pull. After all, all they have to do is think for a moment. It seems like a reasonable request.

But here’s the problem with that. You don’t want your customers to think. You want them to act. And you want them to act as quickly and naturally as possible. The battles of marketing are won in those split seconds before the brain kicks in.

Let me give you one example. A few years ago I did a study with Simon Fraser University in Canada. We wanted to know how the brain responded in those same split seconds to brands we like versus brands we have no particular affinity to. What we found was fascinating. In about 150 milliseconds (roughly a sixth of a second) our brain responds to a well-loved brand the same way we respond to a smiling face. This all happens before any rational part of the brain can kick in. This positive reaction sets the stage for a much different subsequent mental processing of the brand (which starts at about 450 milliseconds, just under half a second). And the power of this alignment can be startling. As Dr. Read Montague discovered, it can literally alter your perception of the world.

If you can rebuild your path to purchase to align with your customer’s intuitive behaviors, you don’t need to put up “push” signs when they stray off course. You don’t have to make your customers think. Here’s why that is important. As long as we operate at the intuitive level, humans are a fairly predictable lot. Evolution has wired in a number of behaviors that are universal across the population. You would not be risking your vacation fund if you placed a bet that the majority of people would try to pull a door with a handle that suggested you should pull it, even if there was a sign that said “push.” As long as we operate on auto-pilot, we can plot a predicted behavioral course with a fair degree of confidence (assuming, of course, we’ve taken the time to understand those behaviors).

But the minute we start to think, all bets are off. The miracle of the human brain is that it has two loops of activity – one fast and one slow. The fast loop relies on instinct and evolved behavioral habits. It’s incredibly efficient but stubbornly rigid. The slow loop brings the full power of human rationality to bear on the problem. It’s what happens when we think. And once the prefrontal cortex kicks in, we are amazingly flexible, but we pay the price in efficiency. It takes time to think. It also brings a massive amount of variability into the equation. If we start thinking, behaviors become much more difficult to predict.

The longer you can keep your customers on the fast path, the closer you’ll be to a successful outcome. Plan that path carefully and remove any signs telling them to “push.”

The Messy Part of Marketing

Marketing is hard. It’s hard because marketing reflects real life. And real life is hard. But here’s the thing – it’s just going to get harder. It’s messy and squishy and filled with nasty little organic things like emotions and human beings.

For the past several weeks, I’ve been filing things away as possible topics for this column. For instance, I’ve got a pretty big file of contradicting research on what works in B2B marketing. Videos work. They don’t work. Referrals are the bomb. No, it’s content. Okay, maybe it’s both. Hmmm… pretty sure it’s not Facebook though.

The integration of marketing technology was another promising avenue. Companies are struggling with data. They’re drowning in data. They have no idea what to do with all the data that’s pouring in from smart watches and smart phones and smart bracelets and smart bangles and smart suppositories and – okay, maybe not suppositories, but that’s just because no one thought of it till I just mentioned it.

Then there’s the new Google tool that predicts the path to purchase. That sounds pretty cool. Marketers love things that predict things. That would make life easier. But life isn’t easy. So marketing isn’t easy. Marketing is all about trying to decipher the mangled mess of living just long enough to shoehorn in a message that maybe, just maybe, will catch the right person at the right time. And that mangled mess is just getting messier.

Personally, the thing that attracted me to marketing was its messiness. I love organic, gritty problems with no clear-cut solutions. Scientists call these ill-defined problems. And that’s why marketing is hard. It’s an ill-defined problem. It defies programmatic solutions. You can’t write an algorithm that will spit out perfect marketing. You can attack little slivers of marketing that lend themselves to clearer solutions, which is why you have the current explosion of ad-tech tools. But the challenge is trying to bring all these solutions together into some type of cohesive package that actually helps you relate to a living, breathing human.

One of the things that has always amazed me is how blissfully ignorant most marketers are about concepts that I think should be fundamental to understanding customer behaviors: things like bounded rationality, cognitive biases, decision theory and sense-making. Mention any of these things in a conference room full of marketers and watch eyes glaze over as fingers nervously thumb through the conference program, looking for any session that has “Top Ten” or “Surefire” in its title.

Take Information Foraging Theory, for instance. Anytime I speak about a topic that touches on how humans find information (which is almost always), I ask my audience of marketers if they’ve ever heard of I.F.T. Generally, not one hand goes up. Sometimes I think Jakob Nielsen and I are the only two people in the world who recognize I.F.T. for what it is: “the most important concept to emerge from Human-Computer Interaction research since 1993” (Jakob’s words). If you take the time to understand this one concept, I promise it will fundamentally and forever change how you look at web design, search marketing, creative and ad placement. Web marketers should be building a shrine to Peter Pirolli and Stuart Card. Their names should be on the tips of every marketer’s tongue. But I venture to guess that most of you reading this column had never heard of them until today.

None of these fundamental concepts about human behavior are easy to grasp. Like all great ideas, they are simple to state but difficult to understand. They cover a lot of territory – much of it ill defined. I’ve spent most of my professional life trying to spread awareness of things like Information Foraging Theory. Can I always predict human behavior? Not by a long shot. But I hope that by taking the time to learn more about the classic theories of how we humans tick, I have also learned a little more about marketing. It’s not easy. It’s not perfect. It’s a lot like being human. But I’ve always believed that to be an effective marketer, you first need to understand humans.