Do We Really Want Virtual Reality?

Facebook bought Oculus. Their goal is to control the world you experience while wearing a pair of modified ski goggles. Mark Zuckerberg is stoked. Netflix is stoked. Marketers the world over are salivating. But, how should you feel about this?

Personally, I’m scared. I may even be terrified.

First of all, I don’t want anyone, especially not Mark Zuckerberg, controlling my sensory world.

Secondly, I’m pretty sure we’re not built to be virtually real.

I understand the human desire to control our environment. It’s part of the human hubris. We think we can do a better job than nature. We believe introducing control and predictability into our world is infinitely better than depending on the caprices of nature. We’ve thought so for many thousands of years. And – Oh Mighty Humans Who Dare to be Gods – just how is that working out for us?

Now that we’ve completely screwed up our physical world, we’re building an artificial version. Actually, it’s not really “we” – it’s “they.” And “they” are for-profit organizations that see an opportunity. “They” are only doing it so “they” can control our interface to consciousness.

Personally, I’m totally comfortable giving a profit driven corporation control over my senses. I mean, what could possibly happen? I’m sure anything they may introduce to my virtual world will be entirely for my benefit. I’m sure they would never take the opportunity to use this control to add to their bottom line. If you need proof, look how altruistically media – including the Internet – has evolved under the stewardship of corporations.

Now, their response would be that we can always decide to take the goggles off. We stay in control, because we have an on/off switch. What they don’t talk about is the fact that they will do everything in their power to keep us from switching their VR world off. It’s in their best interest to do so, and by best interest, I mean the more time we spend in their world, as opposed to the real one, the more profitable it is for them. They can hold our senses hostage and demand ransom in any form they choose.

How will they keep us in their world? By making it addictive. And this brings us to my second concern about Virtual Reality – we’re just not built for it.

We have billions of neurons that are dedicated to parsing and understanding a staggeringly complex and dynamic environment. Our brain is built to construct a reality from thousands and thousands of external cues. To manage this, it often takes cognitive shortcuts to bring the amount of processing required down to a manageable level. We prefer pleasant aspects of reality. We are alerted to threats. Things that could make us sick disgust us. The brain manages the balance by a judicious release of neurochemicals that make us happy, sad, disgusted or afraid. Emotions are the brain’s way of effectively guiding us through the real world.

A virtual world, by necessity, will have a tiny fraction of the inputs that we would find in the real world. Our brains will get an infinitesimal slice of the sensory bandwidth they’re used to. Further, the inputs they do get will have the subtlety of a sledgehammer. Ham-fisted programmers will try to push our emotional hot buttons, all in the search for profit. This means a few sections of our brain will be cued far more frequently and violently than they were ever intended to be. Additionally, huge swaths of our environmental processing circuits will remain dormant for extended periods of time. I’m not a neurologist, but I can’t believe that will be a good thing for our cognitive health.

We were built to experience the world fully through all our senses. We have evolved to deal with a dynamic, complex and often unexpected environment. We are supposed to interact with the serendipity of nature. It is what it means to be human. I don’t know about you, but I never, ever, want to auction off this incredible gift to a profit-driven corporation in return for a plastic, programmed, three-dimensional interface.

I know this plea is too late. Pandora’s Box is opened. The barn door is open. The horse is long gone. But like I said, I’m scared.

Make that terrified.

Talking Back to Technology

The tech world seems to be leaning heavily towards voice-activated devices. Siri – Amazon Echo – Facebook M – “OK Google” – as well as pretty much every vehicle in existence. It should make sense that we would want to speak to our digital assistants. After all, that’s how we communicate with each other. So why – then – do I feel like such a dork when I say “Siri, find me an Indian restaurant”?

I almost never use Siri as my interface to my iPhone. On the very rare occasions when I do, it’s when I’m driving. By myself. With no one to judge me. And even then, I feel unusually self-conscious.

I don’t think I’m alone. No one I know uses Siri, except on the same occasions and in the same way I do. This should be the most natural thing in the world. We’ve been talking to each other for several millennia. It’s so much more elegant than hammering away on a keyboard. But I keep seeing the same scenario play out over and over again. We give voice navigation a try. It sometimes works. When it does, it seems very cool. We try it again. And then, we don’t do it anymore. I base this on admittedly anecdotal evidence. I’m sure there are those that continually chat merrily away to the nearest device. But not me. And not anyone I know either. So, given that voice activation seems to be the way devices are going, I have to ask why we’re dragging our heels to adopt.

In trying to judge the adoption of voice-activated interfaces, we have to account for mismatches in our expected utility. Every time we ask for something – like, for instance, “Play Bruno Mars” – and we get the response, “I’m sorry, I can’t find Brutal Cars,” some frustration would be natural. This is certainly part of it. But that’s an adoption threshold that will eventually yield to sheer processing brute strength. I suspect our reluctance to talk to an object is found in the fact that we’re talking to an object. It doesn’t feel right. It makes us look addle-minded. We make fun of people who speak when there’s no one else in the room.

Our relationship with language is an intimately nuanced one. It’s a relatively newly acquired skill, in evolutionary terms, so it takes up a fair amount of cognitive processing. Granted, no matter what the interface, we currently have to translate desire into language, and speaking is certainly more efficient than typing, so it should be a natural step forward in our relationship with machines. But we also have to remember that verbal communication is the most social of things. In our minds, we have created a well-worn slot for speaking, and it’s something to be done when sitting across from another human.

Mental associations are critical for how we make sense of things. We are natural categorizers. And, if we haven’t found an appropriate category when we encounter something new, we adapt an existing one. I think voice activation may be creating cognitive dissonance in our mental categorization schema. Interaction with devices is a generally solitary endeavor. Talking is a group activity. Something here just doesn’t seem to fit. We’re finding it hard to reconcile our usage of language and our interaction with machines.

I have no idea if I’m right about this. Perhaps I’m just being a Luddite. But given that my entire family, and most of my friends, have had voice activation capable phones for several years now and none of them use that feature except on very rare occasions, I thought it was worth mentioning.

By the way, let’s just keep this between you and me. Don’t tell Siri.

Who’s Who on the Adoption Curve

For me, the Adoption Curve of the Internet of Things is fascinating to observe. Take the PoloTech shirt from Ralph Lauren, for example. It’s a “smart shirt”. The skintight shirt measures your heart rate, how deeply you’re breathing, how stable you are and a host of other key biometrics. All this is sent to your smart phone. One will set you back a cool 300 bucks. But it’s probably not the price that will separate the adopters from the laggards in this case. In the case of the PoloTech shirt, as with many of the new pieces of wearable tech, it’s likely to be your level of fitness that determines which slope of the adoption curve you’ll end up on.

If you look at the advertising of the PoloTech, it’s clear who the target is: dudes with 0.3% body fat and ridiculously sculpted torsos who live on protein drinks and 4 hour workouts. Me? Not so much. The same is true, I suspect, for the vast majority of us. Unless we’re looking for a high tech girdle to both hold back and monitor the rate of expansion of our guts, I don’t think this particular smart shirt is in the immediate future for me.

As I said, much of the current generation of wearable technology is designed to tell us just how fit we are. Logic predicts that these devices should offer the greatest benefits to those who are the least fit. They, after all, have the most to gain. But that’s not who’s jumping the adoption curve. In my world, which is recreational cycling, the ones who are religiously tracking a zillion metrics are the ones who are already on top of the statistical heap. The reason? Technology has created an open market of bragging rights. Humans are naturally competitive. We like to know how we stack up against others. But we don’t bother keeping track until we’re reasonably sure we’re well above average. So, if you log onto Strava, where many cyclists upload their tech-tracked rides, you can find out just who is the “King of the Mountain” at your local version of the Alpe d’Huez.

This brings about an interesting variation on Rogers’ Technology Adoption Curve. Wearable technology often means the generation of personal data. Therefore, an appetite for that data will accelerate the adoption of those respective technologies. We don’t mind being quantified, as long as that quantification paints us in a good light. We want to live in Lake Wobegon, where all the women are strong, all the men are good-looking and all the children are above average.

Adoption of new technologies, according to Rogers, depends on 5 factors: Relative Advantage, Compatibility, Complexity, Trialability and Observability. To this, Rogers added a sixth factor – the status-conferring potential of a new innovation. Physical fitness, by its nature, begs to be quantified. Athletic ability and rankings go hand in hand. Status is literally the name of the game. Therefore, there is a natural affinity between status-seeking and wearable technologies that track physical performance and fitness.

This introduces some interesting patterns of adoption for new additions to the Internet of Things. Adoption will rapidly saturate certain niches of the population, but may take much longer to cross the chasm to the general masses. And the defining characteristics of the early adopters could be completely different in each case. As more and more things become “smart,” the factors of adoption will become more fragmented and diverse. Early adopters of Coke’s Freestyle vending machine will have little in common with early adopters of the PoloTech shirt.

The absorption rate of technology into our lives has been increasing exponentially, seemingly in lock step with Moore’s Law. Every day, we are introduced to more and more things that have technology embedded in them. The advantages that this technology offers will depend on who is judging it. For some, a given technology will be a perfect fit. For others, it will be like trying to squeeze into a high tech shirt that makes us look like an overstuffed sausage.

Donald Trump, The Clickbait Candidate

Intellectually, I hate clickbait. But do I click on it? You bet. Usually before I stop to think. It hits me in the quick and dirty (in every sense of the word) part of my brain. Much as I know I should be better than this, I find myself clicking through more viscerally tantalizing slideshows than I would care to admit. Humans, and I count myself among them, are suckers for sensationalism.

So, I admit to human foibles. But in doing so, I stress that they’re something we should strive to overcome. Reason should rule the day. We should not embrace a future that’s built on the pushing of our collective hot buttons.

That’s why the current ascendency of one Mr. Trump is scaring the hell out of me.

Donald Trump is not stupid. He’s built his campaign to be one massive, ongoing A/B clickbait test. He floats Outrageous Remark A against Outrageous Remark B to see which generates the biggest response. He’s probing the collective psyche of America to see what goes viral. And he knows that virality cannot live in the middle of the road. It has to live in the extreme margins. In order to be sensational, you have to provoke senses. You have to push buttons. To get people to love you, you also have to get people to hate you. It was an inevitable evolution of politicking in the Age of the Internet.
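Stripped of the politics, the mechanic described above is just a split test: float two messages, measure which draws the bigger response, and amplify the winner. Here is a minimal sketch of that logic; the message names and all the numbers are invented for illustration.

```python
# Hypothetical sketch of an A/B message test: whichever message draws
# the higher response rate over the same number of impressions "wins"
# and gets amplified. All names and figures below are invented.

def ab_winner(message_a: str, message_b: str,
              responses_a: int, responses_b: int,
              impressions: int) -> str:
    """Return the message with the higher observed response rate."""
    rate_a = responses_a / impressions
    rate_b = responses_b / impressions
    return message_a if rate_a >= rate_b else message_b

winner = ab_winner("Outrageous Remark A", "Outrageous Remark B",
                   responses_a=3200, responses_b=1800,
                   impressions=100_000)
print(winner)  # prints "Outrageous Remark A"
```

The point is how little machinery this requires: run the comparison continuously and the middle of the road never wins, because moderate messages rarely produce the biggest response.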

To this point, Trump’s tactics appear to be working. He’s outdistancing his Republican opponents by increasing margins (the latest has him doubling Jeb Bush’s support, at 32% vs 16%). He’s even closing in on Hillary Clinton, trailing by just 6% in a recent poll. Trump’s sledgehammer-subtle attack on the quick and dirty shortcuts of our brains seems to be triumphing over any rational appeal to the slow and reasoned loops of logic.

But is this really how we want our leaders to be chosen?

In 1858, America was edging closer to the ideological precipice of the Civil War. It was a time when it was easy to ignite hair-triggered passions. And the country was captivated by one senatorial race in particular – in the state of Illinois. There, incumbent Stephen A. Douglas was running against a little known lawyer who had served one largely unremarkable term in Congress. His name was Abraham Lincoln. As part of the campaign, Douglas agreed to debate Lincoln on what was the only real issue of the election – the future of slavery. Prior to the debates, popular opinion had it that Douglas would eviscerate Lincoln.

The series of seven debates was spread around the state over a period of 56 days. The stakes were profound. Over 14% of the US population was black. Of them, almost 90% were slaves. The future of the union revolved around the thorny question of the legality of slavery. No matter what side of the issue you were on, whatever came out of your mouth was guaranteed to be provocative.

Each debate was 3 hours in length. The first speaker spoke for 60 minutes, the other candidate had 90 minutes to respond, and the first speaker had an additional 30 minutes as a rejoinder. In total, that was 21 hours of usually eloquent political debate. The full text of all speeches was published almost verbatim in the nation’s newspapers (papers usually fixed the grammatical errors of whichever candidate they were supporting, while leaving the opponent’s remarks in rough form.) Lincoln got off to a rough start, but hit his stride midway through the debates. By the final two debates, in Quincy and Alton, most everyone who was at all objective felt that Lincoln was the clear winner. He ended up losing the senatorial race to Douglas, but emerged as the national champion of abolitionists. The momentum from those debates eventually carried him into the presidency two years later.

In these debates, Lincoln managed to do something extraordinary. He reframed the slavery debate – moving it from a question of social equality to one of legal liberty. This sidestepped some of the fiercely held beliefs and allowed for a more rational examination of the question. Beliefs are the bedrock of the quick and dirty mechanisms of our mind. It’s relatively easy to connect with someone’s beliefs. You just have to know the right buttons to push. It’s much more difficult to encourage people to think, as Lincoln did, and push them to question their beliefs. Beliefs act as bulwarks against open and rational consideration.

By the way, if you’re not familiar with the term, a bulwark is a great wall built to keep things out. Like, for example, a great wall on the US/Mexican border.

How Our Brains Process Price Information

We have a complex psychological relationship with pricing. A new brain scanning study out of Harvard and Stanford starts to pick apart the dynamics of that relationship.

Uma R. Karmarkar, Baba Shiv, and Brian Knutson wanted to see how we evaluate a potential purchase when the price is the first piece of information we get as opposed to the last piece of information. They used both fMRI scanning and behavioral tracking to see how the study participants responded. Participants were given $40 to spend and then were presented with a number of sample offers. In all cases, the price represented an attractive bargain on the product featured. But one group was given the price first, and the second group was given the price last.

There was another critical difference in the evaluation process as well. In the first phase of the study, participants were shown products that they would like to buy, and in the second phase, they were shown products that they would have to buy. The difference between the two was how they activated the reward center of our brain – the nucleus accumbens. I’ve been talking for years about the importance of understanding the balance of risk and reward in our purchase decisions. This study provides a little more understanding about how our brain processes those two factors.

In the first phase, participants were shown a variety of products that they would consider rewarding. These would fall into the first quadrant of the risk/reward matrix I introduced in my column from 5 years ago. The researchers were paying particular attention to two different parts of the brain – the nucleus accumbens and the medial prefrontal cortex. For a layman’s analogy, think of you and a five-year-old walking down the toy aisle in a department store. The nucleus accumbens is the five-year-old who starts chanting, “I want it. I want it. I want it.” The medial prefrontal cortex is the adult who decides if they’re actually going to buy it. In the study, the researchers found that the sequence in which these two parts of the brain “lit up” depended on whether or not you saw the price first. If you saw the product first, the nucleus accumbens started its chant – “I want it.” If you saw the price first, the medial prefrontal cortex kicked into action and started evaluating whether the offer represented a good bargain. In the case of the reward products, although the sequence varied, the actual purchase process didn’t. In most cases, participants still ended up making the purchase, whether price was presented first or last.

But things changed when the researchers tried a variety of products that fell into the second quadrant of the risk/reward matrix – low risk and low reward. These are the everyday items we have to buy. In the study, they included things like a water filtration pitcher, a pack of AA batteries, a USB drive, and a flashlight. There was nothing here that was likely to get the nucleus accumbens chanting.

Now, it should be noted that this follow-up study did not include the fMRI scanning, but by tracking purchasing behaviors we can make some pretty educated guesses as to what’s happening in the respective brains of our participants. Here, presenting prices first resulted in a significant increase in actual purchases over instances when price was presented last. If price comes first, we can imagine that the prefrontal cortex is indicating that it’s a good bargain on a needed product. But if a relatively boring product is presented first for evaluation to the nucleus accumbens, there’s little to excite the reward center.

An important caveat to this part of the study comes with knowing that the prices presented represented significant savings on the products. After the simulated purchases, participants were asked to indicate a price they would be willing to pay for the product. When the price was the lead, the named prices tended to be a little lower, indicating that if you are going to lead with price, especially for quadrant two products, you’d better make sure you’re offering a true bargain.

If anything, this study provides further proof of the value of knowing a prospect’s mental landscape. What are the risk and reward factors that will be motivating them? Will the medial prefrontal cortex or the nucleus accumbens be calling the shots? What priming effects might an early introduction of price introduce into the process?

When I wrote about the risk/reward matrix five years ago, one commenter said “a simple low-high risk/low-high reward graph is not very useful for driving just in time and location based offers, discounts, etc.” I respectfully disagree. While more sophisticated models are certainly possible, I think even a simple 2X2 matrix that helps map out the decision factors that are in play with purchases would be a significant step forward. And this isn’t about driving real time variations on offers. It’s about understanding the fundamentals of the buyer’s decision process. There’s nothing wrong with simplicity, especially if it drives greater usage.
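To make the simplicity argument concrete, here is a minimal sketch of such a 2X2 matrix in code. Only two quadrant labels come from the studies discussed above (Q1: rewarding products; Q2: low-risk, low-reward necessities); the other two labels are my own placeholder assumptions.

```python
# A minimal sketch of a simple 2x2 risk/reward matrix for purchase
# decisions. Q1 and Q2 follow the column's usage; the other two
# quadrant labels are placeholder assumptions, not from the source.

def classify_purchase(risk: str, reward: str) -> str:
    """Map ('low'/'high', 'low'/'high') risk and reward to a quadrant."""
    matrix = {
        ("low", "high"): "Q1: rewarding 'I want it' purchases",
        ("low", "low"): "Q2: everyday 'have to buy' items",
        ("high", "high"): "high-stakes considered purchases",
        ("high", "low"): "grudge purchases",
    }
    return matrix[(risk, reward)]

print(classify_purchase("low", "low"))  # prints "Q2: everyday 'have to buy' items"
```

Even a lookup this crude forces the useful question: for this product, which quadrant is the buyer in, and therefore should price lead or follow?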

A Eulogy for “Kathy” – The First Persona

My column last week on the death of the persona seemed to find a generally agreeable audience. But prior to tossing our cardboard cutouts of “Sally the Soccer Mom” in the trash bin, let’s just take a few minutes to remind ourselves why personas were created in the first place.

Alan Cooper – the father of usability personas – had no particular methodology in mind when he created “Kathy,” his first persona. Kathy was based on a real person that Cooper had talked to during his research for a new project management program. Cooper found himself with a few hours on his hands every day while his early-’80s computer chugged away, compiling the latest version of his program. He would use the time to walk around a golf course close to his office and run through the design in his head. One day, he engaged himself in an imaginary dialogue with “Kathy,” a potential customer who was requesting features based on her needs. Soon, he was deep in his internal discussion with Kathy. His first persona was a way to get away from the computer and cubicle and get into the skin of a customer.

There are a few points here that are important to note. “Kathy” was based on input from a real person. The creation of “Kathy” had no particular goal, other than to give Cooper a way to imagine how a customer might use his program. It was a way to make the abstract real, and to imagine that reality through the eyes of another person. In the end, we realize that the biggest goal of a persona is just that – to imagine the world through someone else’s eyes.

As we transition from personas to data modeling, it’s essential to keep that aspect alive. We have to learn how to live in someone else’s skin. We have to somehow take on the context of their world and be aware of their beliefs, biases and emotions. Until we do this, the holy grail of the “Market of One” is just more marketing hyperbole.

I think the persona started its long decline towards death when it transitioned from a usability tool to a marketing one. Personas were never intended to be a slide deck or a segmentation tool. They were just supposed to be a little mental trick to allow designers to become more empathetic – to slip out of their own reality and into that of a customer. But when marketers got their hands on personas, they did what marketers tend to do. They added the gloss and gutted the authenticity. At that moment, personas started to die.

So, for all the reasons I stated last week, I think personas should be allowed to slip away into oblivion. But if we do so, we have to find a way to understand the reality of our customers on a one to one basis. We have to find a better way to accomplish what personas were originally intended to do. We have to be more empathetic.

Because humans are humans, and not spreadsheets, I’m not sure we can get all the way there with data alone. Data analysis forces us to put on another set of lenses – ones that analyze – not empathize. Those lenses help us to see the “what” but not the “why.” It’s the view of the world that Alan Cooper would have had if he never left his cubicle to walk around the Old Del Monte golf course, waving his arms and carrying on his internal dialogue with “Kathy.” The way to empathize is to make connections with our customers – in the real world – where they live and play.  It’s using qualitative methods like ethnographic research to gain insights that can then be verified with data. Personas may be dead, but qualitative research is more important than ever.

The Mother of All Disruption

Once again fellow Online Spin author Tom Goodwin has piqued my interest. He starts to unwrap a tremendously thorny problem in his column of last Thursday – Time to Think about Regulation for Disruption. Today, I’d like to take this question up one level – do we have to rethink government entirely?

Government is almost entirely a reactive business. Even far-sighted, historic documents such as the Constitution of the United States and the Magna Carta were reactions to the untenable circumstances that preceded them. And these are the exceptions. The vast majority of governing involves a highly bureaucratic and excruciatingly slow process that attempts to respond to emerging breaches in the unspoken code of fairness that our society tries to live by. Realistically, from the time the need for a new law is recognized to the time a bill is passed, months or even years can pass.

Months or years were, practically speaking, adequate in the world we once knew. But today, that is no longer the case. In that time, complex ecosystems can establish around the breach in question, and, as Tom points out, entire industries may have been decimated in the process. This is the reality of disruption.

In a world that seeks order and governance, this is a bad thing. But, now that we have unleashed the technological Kraken, is that a world we can reasonably expect? Slowly but surely we are dismantling every aspect of our hierarchical society and replacing it with a horizontal network. Hierarchies can’t work horizontally. Something has to give.

Disruptions are a characteristic of networked structures. In order for networks to work, each component of that network has to be given the freedom to act. If the action of an individual resonates with other parts of the network, the actions are picked up and amplified. Each individual act has the potential to become a disruption – with corresponding consequences. Everything becomes accelerated in a network.

Government is built on the ideological foundation of a hierarchy. The word “government” derives from a root meaning “to steer.” The assumption is that our society is capable of being steered. This, in turn, assumes that our society all wants to go in the same direction. But if we enforce these restrictions on a network, networks cease to work. Yes, we quell the negative disruptions, but we also eliminate the positive ones.

The United States of America is one of the least restrictive societies on the planet. The founding fathers drafted their articles to enshrine that freedom. You (as a Canadian, I have to say “you”) have managed to balance the practical necessities of government with the lack of restrictions typical of a market economy. Markets naturally emerge from networks. Because the U.S. treasures freedom and innovation, it was inevitable that it would emerge as the testing ground for the impacts of technological advances. You are the canary in the coalmine of massive disruption.

Tom urges lawmakers to become more proactive. But historically speaking, that’s just not the way government works. It’s like riding a cow in the Kentucky Derby and wondering why you can’t keep up. I just don’t think that our current hierarchical system of government is up to the job. It’s a great system, with a ton of democratic checks and balances, but it was built for a different era – one built along vertical lines.

The final issue is one of enforcement. Even if laws are passed to deal with emerging disruptions, it’s becoming almost impossible to enforce them. If lawmakers are scrambling to keep up with society, law enforcers have capitulated entirely. We just can’t afford to enforce the laws we already have on the books.

So, if this is the problem, what is the answer? I think, perhaps, it lies in the very same properties of networks. Government and laws became necessary to avoid abuses of power. Power comes from hierarchies. As societies level out, the old dictates of fairness become increasingly relevant. We all have universal concepts of fairness. Abuses of what we consider to be fair are generally dealt with quickly and effectively at the network level. Networks tend to police themselves, as long as there is a common understanding of what is acceptable and what is not. In short, we have to think of regulation in terms of market and network dynamics, not hierarchical governance.

I admit this is tough to wrap your head around. In a world of disruptions, this is the Mother of all Disruption. But symptomatically speaking, it appears that our historic notion of government is ailing. As frightening as it may be to contemplate, we should start thinking about what may replace it.

Some Second Thoughts on Mindless Media

When I read Tom Goodwin’s Online Spin last week, I immediately jumped on his bandwagon. How could I not? He played the evolutionary psychology card and then trumped that by applying it to the consumption of media. This was right up my ideological alley.

Here’s a quick recap: Humans evolved to crave high calorie foods because these were historically scarce. In the last century, however, processed food manufacturing has ensured that high calorie foods are abundantly available. The result? We got fat. Really fat. Tom worries that the same thing is happening to our consumption of media. As traditional publishing channels break down, will we become a society of information snackers?

“We’re rewarding pieces that are most-clickable or most easily digested, and our news diet shifts from good-for-us to snackable.”

Goodwin also mourns the death of serendipitous discovery – which was traditionally brought to us by our loyalty to a channel and the editorial control exercised by that channel. If we were loyal to the New York Times, then we were introduced to content they thought we should see. But in the age of “filter bubbles” our content becomes increasingly homogenized based on algorithms, which are drawing an ever-narrowing circle bounded by our explicit requests and our implicit behavior patterns. We become further insulated from quality by mindless social media sharing – which tends to favor content pandering to the lowest common denominator.

But the more I thought about it, the more I wondered if this wasn’t a little paradoxical. Tom’s very thoughtful column, which hardly qualifies as intellectual fast-food, didn’t come to us through traditional journalism. Tom, like me, is not a professional journalist. And while MediaPost does provide some editorial curation, its purpose is to provide a fairly transparent connection between industry experts like Tom and other experts like you. Tom’s piece came to us through a much more transparent information marketplace – the very same marketplace that Tom worries is turning us into an audience of mindless media junkies. And I should add that Tom’s piece was shared through social circles over 200 times.

So where is the disconnect here? The problem is that when it comes to human behaviors, there are no universal truths. How we act in almost any given situation will eventually distribute itself across a bell curve. Let’s take obesity, for instance. If we talk trends, Tom is absolutely correct. The introduction of fast food in North America coincided with an explosion of obesity, which as a percentage of the US population rose from about 10% in the 1950’s to almost 35% in 2013. But if we accept the premise that we all mindlessly crave calories, we should all be obese. Obesity rates should also continue going up until they reach 100% of the population. But those two things are just not true.

Obesity rates have plateaued in the last few years and there are indications that they are starting to decline amongst children. Also, although fast food is now available around the world, obesity rates vary greatly. Japan has one of the highest concentrations of McDonald’s outlets per capita (25 per million) in the world but has an obesity rate of 3.2%, the lowest in all OECD countries. The US has a higher concentration of McDonald’s (45 per million) but has an obesity rate 10 times that of Japan. And my own country, Canada, almost matches the US McDonald for McDonald (41 per million) but has an obesity rate half that of the US (14.3%).
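Putting the figures quoted above side by side makes the point explicit. The numbers come straight from the paragraph (the US obesity rate is taken as ten times Japan’s, i.e. 32%); the ratio is just a quick way of showing that outlet density alone doesn’t predict obesity.

```python
# The McDonald's-density and obesity figures quoted above, side by side.
# US obesity is taken as ten times Japan's (32%), per the text.

countries = {
    "Japan":  {"mcd_per_million": 25, "obesity_pct": 3.2},
    "US":     {"mcd_per_million": 45, "obesity_pct": 32.0},
    "Canada": {"mcd_per_million": 41, "obesity_pct": 14.3},
}

for name, d in countries.items():
    # Obesity percentage per McDonald's outlet per million people.
    ratio = d["obesity_pct"] / d["mcd_per_million"]
    print(f"{name:6s} {d['mcd_per_million']:2d} McD/million, "
          f"{d['obesity_pct']:4.1f}% obese (ratio {ratio:.2f})")
```

Japan and Canada have comparable outlet densities to the US but radically different outcomes, which is exactly the point: other factors, like education, income and culture, dominate.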

My point is not to debate whether we’re getting fatter. We are. But there’s more to it than just the prevalence of fast food, and those other factors apply to our consumption of media as well. For example, there is a strong negative correlation between obesity and education. There is also a strong negative correlation between obesity and income. Cultural norms have a huge impact on the prevalence of obesity. There are no universal truths here – just a lot of nebulous factors at play. So, if we want to be honest when we draw behavioral comparisons, we have to account for those factors.

Much as I believe evolution drives many of our behaviors, I also believe that more open markets are better than more restrictive ones. As the mentality of abundance takes hold, our behaviors take time to adjust. Yes, we do snack on crap. But we also have access to high quality choices we could have never dreamed of before. And the ratio of consumption between those two extremes will be different for all of us. Consider the explosion of TV programming that has happened over the last 3 decades. Yes, there is an over-abundance of mindless dreck, but there is also more quality programming than ever to choose from. The same is true of music and pretty much any other category where markets have opened up through technology.

The way to increase the quality of what we consume – whether food, information or entertainment – is not to limit the production and distribution of those consumables through more restrictive markets, but to improve education, broaden access and build a culture of considered consumption. Some of us will choose crap. Some of us will choose the cream that rises to the top. The choice will be ours. The answer is not to take those choices away, but to create a culture that encourages wiser choices.

The Secret of Successful Marketing Lies in Split Seconds

The other day, I was having lunch in a deli. I was also watching the front door, which you had to push to get in. Almost everyone who came to the door pulled, even though there was a fairly big sign over the handle that said “Push.” The problem? The door had the wrong kind of handle – a pull handle, not a push plate – because the door had been mounted backwards. In usability terms, the door handle presented a misleading affordance.

I suspect the door had been there for many years. I was at the deli for about 30 minutes. In that time, about 70% of the people (probably close to 50) pulled rather than pushed. Extrapolating this to the whole, that means over the years, thousands and thousands of people have had to try twice to enter this particular place of business. Yet, the only acknowledgement of this instance of customer pain was the sign that had been taped to the door – “Push” – and I suspect there was an implied “(You Idiot)” following that.

I suspect most marketing falls into the same category as that sign. It’s an attempt to fight the intuitive actions customers take – those split-second actions that happen before the brain has a chance to kick in. And we have to counteract those split-second decisions because the path we created for our customers was built without an understanding of those intuitive actions. Once we realize that our path runs counter to our customers’ natural behavior, do we rebuild the path? Does the deli owner pay a contractor to remount the door? No, we post a sign asking customers to push rather than pull. After all, all they have to do is think for a moment. It seems like a reasonable request.

But here’s the problem with that. You don’t want your customers to think. You want them to act. And you want them to act as quickly and naturally as possible. The battles of marketing are won in those split seconds before the brain kicks in.

Let me give you one example. A few years ago, I did a study with Simon Fraser University in Canada. We wanted to know how the brain responds in those same split seconds to brands we like versus brands we have no particular affinity for. What we found was fascinating. In about 150 milliseconds (roughly a sixth of a second), our brain responds to a well-loved brand the same way we respond to a smiling face. This all happens before any rational part of the brain can kick in. This positive reaction sets the stage for a much different subsequent mental processing of the brand, which starts at about 450 milliseconds – roughly half a second. And the power of this alignment can be startling. As Dr. Read Montague discovered, it can literally alter your perception of the world.

If you can rebuild your path to purchase to align with your customers’ intuitive behaviors, you don’t need to put up “push” signs when they stray off course. You don’t have to make your customers think. Here’s why that is important. As long as we operate at the intuitive level, humans are a fairly predictable lot. Evolution has wired in a number of behaviors that are universal across the population. You would not be risking your vacation fund if you placed a bet that the majority of people would try to pull a door with a handle that suggested they should pull it, even if there was a sign that said “Push.” As long as we operate on auto-pilot, we can plot a predicted behavioral course with a fair degree of confidence (assuming, of course, we’ve taken the time to understand those behaviors).

But the minute we start to think, all bets are off. The miracle of the human brain is that it has two loops of activity – one fast and one slow. The fast loop relies on instinct and evolved behavioral habits. It’s incredibly efficient but stubbornly rigid. The slow loop brings the full power of human rationality to bear on the problem. It’s what happens when we think. And once the prefrontal cortex kicks in, we are amazingly flexible, but we pay the price in efficiency. It takes time to think. It also brings a massive amount of variability into the equation. Once we start thinking, behaviors become much more difficult to predict.

The longer you can keep your customers on the fast path, the closer you’ll be to a successful outcome. Plan that path carefully and remove any signs telling them to “push.”

Mourning Becomes Electric

Last Friday was a sad day. A very dear, lifelong friend of mine, my Uncle Al, passed away. So I did what I’ve done before on these occasions: I expressed my feelings by writing about it. The post went live on my blog around 10:30 in the morning. By mid-afternoon, it had been shared and posted through Facebook, Twitter and many other online channels. Many were kind enough to send comments. The family, in the midst of their grief, forwarded my post to their family and friends. Soon there was an extended network of mourning seeking to heal each other, all through channels that didn’t exist just a few years ago. Mourning had moved online.

As you probably know, I’m fascinated by how we express our innate human needs through digital technologies. And death, together with birth, is the most universal of human experiences. It was inevitable that we would use online channels to grieve. So I, as I always do, asked the question – why?

First of all – why do we mourn? We mourn because we are social animals – probably the most social of animals – so we grieve to a corresponding degree. We miss the departed terribly, and it is natural to try to fill the hole a death tears inside us by reaching out to others who may share the same grief. James R. Averill believed we mourn communally because it cements the social bonds that make it more likely we will survive as a species. When it comes to dealing with death, misery loves company.

Secondly, why do we grieve online? Here, I think it has something to do with Granovetter’s weak ties. Death is one of those life events where we reach beyond the strong ties that define our day-to-day social existence. Certainly we seek comfort from those closest to us, but a death also calls into being a virtual community, defined and united by its grieving for the one who has passed away. Our digital networks let us eliminate the six degrees of separation in one fell swoop. We can share our grief almost instantaneously and simultaneously with family, friends, acquaintances and even people we have never met.

There are two other aspects of grief that I believe lend themselves well to online channels: the need to chronicle and the comfort of emotional distance.

Part of the healing process is sharing memories of the departed loved one. And for those like myself, just writing about our feelings helps overcome the pain. Online channels provide a perfect platform for chronicling. We can share our own thoughts and, in expressing them, start the healing process.

The comfort of emotional distance seems a contradictory idea, but almost everyone I know who has gone through a deep loss shares one dread – dealing with a never-ending stream of condolences over the coming weeks and months, triggered by each new physical encounter.

When you’ve been in the middle of the storm, you are typically a few days ahead of everyone else in dealing with your grief. Your mind has been occupied with nothing else as you have sat vigil by the hospital bed. While the condolences are given with the best of intentions, you feel compelled to give a response. The problem is, each new expression of grief forces you to replay your loop of very painful memories. The amplitude of this pain increases when it’s a face-to-face encounter. Condolences that reach you through a more detached channel, such as online, can be dealt with at your discretion. You can wait until you marshal the emotional reserves necessary to respond. You can also respond to several people at a time. How many times have you heard a grieving loved one say, “I just wish I could record my message and play it whenever I meet someone who wants to tell me how sorry they are for my loss”? It may seem callous, but no one wants to relive that pain over and over again. And let’s face it – almost no one knows the right thing to say at a moment like this.

By the end of last Friday, my online social connections had helped me ease a very deep pain. I hope I was able to return the favor for others who were dealing with their own grief. There are many things about technology that I treat with suspicion, but in this case, turning to the online world seemed like the most natural thing in the world.