The Psychology of Entertainment: Men, Women and How We Process Humor

Yesterday, I talked about context and its impact on comedy. What makes something funny in Scotland wouldn’t necessarily be as funny in Switzerland or South Africa. If different nationalities process jokes differently, there must be other dividing lines as well, right? Yes, and the biggest one is the line that divides the sexes. Men and women have significantly different humor processing hardware. Women tend to think before laughing, monitoring the social temperature before making a judgement about what’s funny. A man’s response tends to be less deliberate, a more direct connection to our primal “humor” centres.

And it’s this divide between the sexes that provides some clues about the mechanisms we use to process humor. Studies have found that unless both the right and left hemispheres of the brain are fully engaged in the task of processing humor, we won’t find a joke funny. This is why you never find a joke funny if it has to be explained to you. If we use the left hemisphere (the logical side) of our brain to analyze a joke too extensively, it ceases to be humorous. The suddenness of the gap closing, the elimination of incongruity and the feeling of mastery are no longer there. You’ve taken too long a road to the punchline and the humor got lost on the way.

In humans, humor seems to be a balancing act between the left and right hemispheres. The left gets the facts in order, and the right seems to provide the synthesis that produces the humor. Neurologists have found that patients with lesions to their right hemisphere can understand the “logic” of a joke but simply won’t find any humor in it. Knowing that an interplay between the hemispheres is required to produce humor explains the differing responses from men and women when it comes to what’s funny. Women have more robust wiring between the right and left hemispheres. The important thing, however, is that we process humor subconsciously. As I said yesterday, if we stop to think too long about a joke, it ceases to be funny.

The Difference between Slapstick and Wit

Yesterday, I talked about what makes a baby laugh. In effect, I stripped humor down to its essential building blocks. But, as we get older, we get more sophisticated. We move beyond the universal foundations of humor and start to develop tastes. Some of us love Oscar Wilde. Some of us love Tyler Perry. So, what is the difference between high brow and low brow humor?

Why do we laugh when other people hurt themselves? Why was it funny when Larry slapped Moe, or poked Curly in the eyes? What kind of sick, sadistic bastards are we? The Germans even coined a word for it: Schadenfreude – literally “harm-joy”, the pleasure taken in another’s misfortune.

There is a double punch-line to slapstick comedy. The first comes from the fact that laughter and danger live in the same parts of our brain, as I explained in yesterday’s post. We have an immediate and complex reaction to physical calamity. It surprises us, which triggers the appropriate part of the brain, which in turn responds with a double hit of fear and laughter. Which side of the dividing line we end up on depends on the seriousness of the calamity. Minor bumps on the head (when they happen to others), slips, falls, knocks and bumps can all trigger laughter as an immediate response. If the damage is more serious, our laughter quickly turns to concern. Remember yesterday when we looked at how a 5 month old’s laughter is triggered by conquerable danger, in a playful setting? These same mechanisms stay in place throughout our lives and partially explain our response to others’ physical misfortunes. In comedy, slapstick is stylized so that we can be certain nobody is getting hurt too badly. Facial expressions, sound effects and mock moans all signal that this is just good fun. Look at the picture of the Three Stooges I included with this post. No one could look at the expressions on those faces and make the mistake of thinking that there’s anything remotely serious about the ear twisting that’s going on. We distance the physical violence from the result of that violence. It’s the entire premise of the game show Wipeout, as well as 85% of the clips on America’s Funniest Home Videos.

The Social Side of Humor

But there’s more to it than just a mixed up fear/laughter response. Humor depends on our social radar. It depends on how we position ourselves in our social network. This is where the Schadenfreude part of the equation plays out. We find it funny when Wile E. Coyote falls off a cliff but we don’t when the same fate befalls the Road Runner. Why? Because Wile E. is the bad guy and the Road Runner is the good guy. Archetypes are important in comedy. This goes back to Aristotle’s rules for drama: bad things can happen to bad people, good things are supposed to happen to good people, but when those two get mixed up, it’s a lot less satisfying to us. Schadenfreude works best when the good/bad roles are clearly defined.

So, how does Schadenfreude differ for men and women? This is another place where males and females diverge in what they find funny. In men, it typically plays out in terms of physical violence. We men laugh when others get hurt. With women, it’s more often defined as a social comeuppance. Women laugh at social ostracism.

Tom Green vs Kate Hudson: Guy’s Movies & Chick Flicks

Let’s visit the 6th grade school yard at lunch time. Over in this corner we have a group of guys laughing. What are they laughing about? Chances are, it’s something to do with some type of bodily emission or various parts of the male and female anatomy and how they might interact. Guys are, on average, predictably base about what we find funny. And much as I wish we outgrew this, a quick glance down what’s currently playing at the local Cineplex will probably prove me right.

But there, over in the other corner, is a group of girls laughing. What are they laughing about? Chances are it’s not about farting or doody. It’s more likely laughter at the expense of some poor unfortunate distant member of their social circle. Social status is a key ingredient in comedies aimed at women, usually with a romantic twist thrown in.

High Brow Humor

Do we ever rise above the limitations of our base instincts when it comes to humor? Thankfully, yes. Many of us appreciate wit for its own sake. So, what is it about the witty remark that we find so appealing?

Perhaps the answer can be found in how we respond to wit. A witty remark almost never elicits a belly laugh. Witty remarks cause us to smile. A chuckle is usually the most we can hope for. Belly laughs are usually reserved for more physical types of comedy. Why the difference? Let’s return to our 5 month old. Babies both smile and laugh. They laugh during rough housing and more robust play sessions. They smile when they recognize the face of their mother or a grandparent. Laughter seems to come from our danger/humor circuit. Smiling comes from a more social place in our brain. In chimpanzees, a smile signals social submission. So, what does this have to do with wit?

We admire wit. We aspire to be witty. We identify with the mental acuity that typifies a witty person. We all want to be Chandler Bing, Conan O’Brien or, in an earlier age, Dorothy Parker. Wit is a signal of social station. Again, we find that what we find funny and what we find socially desirable are inextricably linked.

Wit has truth in it; wisecracking is simply calisthenics with words. – Dorothy Parker

Now that we’ve looked at what we find funny, in the next post I’ll return to a question I started to ask: what separates a TV hit from a miss?

The Psychology of Entertainment: What We Find Funny

Did you hear the one about….

A rabbi, a priest and a prostitute walk into a bar….

Knock Knock….

A lot of decidedly unfunny academic papers have been written about what makes us laugh (the one I referred to for this post was Robert Storey, “Comedy, Its Theorists and the Evolutionary Perspective,” Criticism 38.3 (1996), Questia – what a hoot!). Freud had his own ideas that involved a sudden release of psychic energy, sort of like a mental steam release valve. It’s a sign of the dryness of the academic world to note that there is vigorous academic debate about what we find funny.

At the risk of examining an inherent human trait that’s probably better left alone, if we’re going to look at the psychology of entertainment, we have to look at what we find funny. And to begin, let’s look at what makes a baby laugh.

Getting a Baby to Laugh

Babies get humor at a pretty early age. Most babies start laughing in their first half year of life. So, obviously, there must be some fundamental qualities of humor. In understanding what we find funny, it’s helpful to look at what makes a 5 month old baby laugh.

Think about how you get a baby to laugh. A game of Peekaboo is usually effective. Tickling and gentle rough housing can usually elicit a chuckle. An adult face zooming into close proximity while babbling verbal nonsense also seems to do the trick.

Now, if we look closely at each of these activities, we start to realize there’s a macabre and twisted underbelly to humor.

Peekaboo generally works best with the primary care givers, the parents. The closer the adult is to the baby, the more likely you’ll get a smile or laugh. But the game basically mimics the disappearance of the person closest to the baby and then brings them back. Now you see me, now you don’t, and now you see me again.

Tickling and rough housing are toned-down mock attacks. The same is true when we jam our faces into that of an infant and spout baby talk. We get them to laugh by scaring the bejesus out of them. Is it any wonder that babies seem to be balanced on the fine line between laughing and crying during most of these activities? It doesn’t take much to slip from humor to fear. As the baby gets tired, or if a stranger tries the same game as the parent, you’re more likely to get tears than laughter.

The Primal Building Blocks of Humor

This starts to tell us what the primal elements of humor might be. For a baby, we take a threatening situation and downplay it dramatically, letting the baby feel that it’s just play. The baby picks up signals from us that there is no real threat, leaving them free to enjoy the game. In this benign version of toned down danger, the baby builds coping skills for the world around them. This mastery of our environments, our ability to align things with a sense of what’s right and achieve congruity, continues to play a critical role in what we find humorous as we get older.

By the way, humans aren’t the only animals that laugh. Other primates, such as chimpanzees, also laugh, and there the dividing line between hostility and humor is almost nonexistent. The toothy grin in a primate is not too many steps removed from baring teeth in preparation for battle. And a smile is the primate’s sign of submission to a superior.

This line between fear and laughter is one that humans continue to ride throughout our lives, and some enjoy the journey more than others. Some smile and laugh like idiots on a roller coaster (myself included), others are paralyzed in fear. But the difference between the two extremes is not as great as you might think. Research seems to show that both feelings originate from the same centres of the brain, and it’s our threshold for stimulation that separates laughter from screaming.

The Psychology of a Joke

The jokes we find funny can tell us much about ourselves as individuals. Again, jokes rely on closing gaps of incongruity, a sudden revelation that allows a situation highlighting a discrepancy to make sense. We master the situation when we “get” the punchline, the source of the humor.

But the funniness of a joke depends on our frame of mind. What we find incongruous and the things that offer a pleasing solution to that incongruity differ from person to person. A highly religious person may be offended by a dirty joke that would be gangbusters amongst a bunch of guys having a drink after work. The different view of context, and competing emotions of disgust, render the joke unfunny to more “upright” recipients.

This dependency on cultural context can help explain why jokes seldom translate well from culture to culture. The more the joke relies on a frame of reference steeped in the uniqueness of a culture, the less likely it will be to successfully cross borders. In 2002 a study was done to find the funniest joke in the world. The winner was:

A couple of New Jersey hunters are out in the woods when one of them falls to the ground. He doesn’t seem to be breathing, his eyes are rolled back in his head. The other guy whips out his cell phone and calls the emergency services. He gasps to the operator: “My friend is dead! What can I do?” The operator, in a calm, soothing voice, says: “Just take it easy. I can help. First, let’s make sure he’s dead.” There is a silence, then a shot is heard. The guy’s voice comes back on the line. He says: “Okay, now what?”

The classic elements of humor are all here. The initial situation, the set up, the twist and the sudden understanding of the twist, resulting in, apparently, universal laughter. Notice that the setting is so broad and independent of cultural context that anyone, anywhere, should “get it”. There is nothing culturally specific about this joke.

But now let’s look at what the winner in the US was:

A man and a friend are playing golf one day at their local golf course. One of the guys is about to chip onto the green when he sees a long funeral procession on the road next to the course. He stops in mid-swing, takes off his golf cap, closes his eyes, and bows down in prayer. His friend says: “Wow, that is the most thoughtful and touching thing I have ever seen. You truly are a kind man.” The man then replies: “Yeah, well we were married 35 years.”

The humor in this joke depends on understanding how fanatical some males are about golf, a context familiar in the US, not as familiar in Sri Lanka or Zimbabwe.

The funniest joke in Canada revealed a nastier side of our culture:

When NASA first started sending up astronauts, they quickly discovered that ballpoint pens would not work in zero gravity. To combat the problem, NASA scientists spent a decade and $12 billion to develop a pen that writes in zero gravity, upside down, underwater, on almost any surface including glass and at temperatures ranging from below freezing to 300 C. The Russians used a pencil.

Much as we Canadians love our neighbors to the south, we also love to see the U.S. get its comeuppance. The humor of this joke depends on a shared cultural perception of Americans “overdoing” it on the world stage. Canada’s reputation as a source of world class comedians and satirists has been honed by this love/hate relationship with the U.S. Perhaps it’s not coincidental that this same tendency has produced some of the world’s best known observers of human behavior and social peculiarities, including Malcolm Gladwell, Steven Pinker and Marshall McLuhan.

In tomorrow’s post, we’ll talk about how we process humor and why we can laugh at both Oscar Wilde and Three’s Company.

The Decline and Fall of Our Mythologies

What happens when information swamps our common myths? What happens to humans when facts overtake commonly shared fantasies?

In yesterday’s post, I started by looking at how our culture might be moving too quickly for myths to keep up. This is important because humans have historically used myths to create a “oneness” of mind. Myths often come bundled with behavioral codes and societal rules. Myths have dictated how we should think and act. Myths rule the mob.

But in the last century, one sweeping technical advance had two very different impacts on two different parts of our world. Today, I want to examine the impact of TV in North America and Communist Russia.

The Death of Mythology in America

In his book Bowling Alone, Robert Putnam noticed that American values did an abrupt u-turn somewhere in the middle of the 60’s. After a decades long trend of increasing participation in community activities, Americans stopped spending time together. They went to church less often, belonged to fewer service organizations, attended fewer PTA meetings, stopped having dinner parties, stopped playing Bridge with the neighbors and quit their bowling leagues. Not coincidentally, voter turnout in elections also started to drop. Americans, once the most intensely community minded people on earth, stopped spending time with each other.

This trend didn’t make Americans bad people, however. At the same time that Americans became less concerned about the well being of their immediate community, they became more concerned about universal issues such as civil rights, equality of women, international peace, religious persecution, sexual intolerance, freedom of speech and nuclear disarmament. At the same time we were becoming less engaged with our communities, we were becoming more open minded and tolerant in our ideologies.

The chart shown, from the BuyerSphere Project, provides one hint about why this mental about-face may have happened in the middle of the 60’s:

As you can see, the 50’s and 60’s were also the decades where most of us brought TV into our homes. In 1950, only about 12% of American homes had TV. By 1960, that number had exploded to 78%. This meant we spent more time in our homes, which naturally meant we spent less time outside the home, interacting with others. That alone might explain our withdrawal from our communities. But a simple reckoning of where we spend our time wouldn’t explain the ideological blossoming of America. I believe it was more than just where we were spending our time. I believe it was what we were spending our time doing. As we viewed the world through a flickering blue screen, our common myths were being slowly but surely destroyed.

Myths rely on an absence of information. Myths depend on a singular point of view, supported by carefully chosen and disseminated information, in the guise of facts. The more singular the culture, the more important it is to carefully restrict the flow of information. Societies where there are strict codes of behavior and adherence to one ideology have the tightest censorship rules and the most virulent propaganda.

The Myth of the American Dream

While America in the first half of the 20th century was philosophically a democratic, pluralistic society, it was, in practice, a culture heavily bound by commonly held myths. In 50 years, America was rocked by two world wars and a decade long economic crisis. Well over half of those 50 years were spent united against a common enemy and sharing in common hardships. We were sustained by our mythologies – the importance of hard work, the ultimate rightness of democracy, the ultimate wrongness of tyranny, the ideal of the American dream. Our channels of information carefully supported these myths and filtered out dissenting facts. Even in the 50’s, the imagined spectre of Communism helped us maintain a common mythology, leading to McCarthyism and other irrational behaviors.

But in the 60’s, the electronic window of television provided a new channel of information. The history of television typically runs a similar path wherever it plays out. In the beginning, it is a tightly restricted channel that offers governments and other power structures an unprecedented opportunity to build and strengthen common mythologies through controlled programming and propaganda. But, over time, the leash on TV programming inevitably gets loosened. It’s difficult to keep too tight a rein on a communication medium that travels freely over the airwaves. The common mythological view gives way to a pluralistic, fragmented pipeline of information. We see other realities, other ideologies, other cultures. As awareness seeps into our collective consciousness, our myths start to die. Our “oneness” gets fragmented across multiple ideological and sociological lines.

This, I believe, is what happened to us, starting in the 60’s. Television forever changed how we looked at the world. TV provided the lens through which we lost our innocence, discovering other truths beyond the American mythology. Putnam also cites TV as one of the factors that eroded our social capital. I suspect it played a bigger part than even he imagined.

The Death of Mythology in the USSR

If the effect of TV was earth shaking in a democratic America, at least it appears that most of our institutions will survive the transition. Our governments are essentially built on the same foundations they were a century ago. The same was not true for Communist Russia. There, the very structures of government crumbled along with their myths.

In the analysis of the decline and fall of Communism in the former Soviet Union, the role of television has only been mentioned in passing, but the timeline of the introduction of TV and the decline of the Soviet Communist government are suspiciously aligned. State controlled TV was introduced in the Soviet Union at roughly the same time as in North America (just before World War II) but its spread was delayed by the war. Also, the saturation rate of TV in the Soviet Union lagged far behind America. In 1960, when 78% of Americans had a set in their homes, only 5% of the Soviet population could watch TV. It wasn’t until the mid 80’s that over 90% of Soviets could watch TV. This coincided almost exactly with the introduction of glasnost (transparency, openness and freedom of information) and perestroika (a restructuring of government) by Mikhail Gorbachev. Demands for more openness and freedom moved in lock step with the adoption of TV and the lessening of restrictions on programming.

If the pervasiveness of myths was an important factor in the history of America, the very mythology of Communism was the foundation of Soviet history. History was literally rewritten to make sure that available information aligned with the mythology currently in vogue. And this mythology, the utopia of Communist ideology and the depravity of capitalism (myths that run directly counter to our western ones) kept the emotions of Soviets aligned for the first 60 years. But just like their American counterparts, TV provided Soviets with a glimpse of reality beyond the mythology. There were other channels of information that began to erode faith in the myths. The speed of TV surpassed the durability of the myths. The rest, as they say, is history.

The Accelerated Demise of Our Myths

The decline of our myths started with the introduction of TV, but the fragmentation of our ideologies and realities has been accelerated dramatically by the Internet. We are bombarded by information, much of which comes to us through unedited, unrestricted channels. The Internet is a massive organic hotbed of differing opinions from millions of different voices. Myths can hardly hope to survive in such an environment.

My original question was: what happens when information strips away our myths, along with the social codes embedded in them? What happens when our common views are shattered into billions of different fragments? If the introduction of TV caused the social fabric of America to unravel and the Soviet empire to crumble, what will the digital onslaught of information do?

What indeed?

Living Between the Disconnected Dots

We’ve been in transition for a long time. And it’s starting to wear us down.

Cognitive anthropologist Bob Deutsch had a column this morning that talked about the crisis of time we’re all experiencing in our lives. It seems we’re always rushing to do something. In the column, Bob had a paragraph that jumped out at me:

The consumer finds himself at a cognitive impasse, where America is presently “between mythologies.” We are not what we once were, and we do not yet know what we will become. This is a hard place for a culture. Worse, because of the speed of the culture, and the perceived complexity and unpredictability of things, people experience the world as a series of unconnected dots.

Myth-Begotten

His line – “between mythologies” – was particularly interesting. Humans are animals that need to share a lot of things. We are herding animals, and this need to herd drives much of our behavior. We look for commonalities and feel more comfortable when we find them. It gives us a sense of belonging that is very important. And myths are an essential part of that formula.

For our entire history, our shared acceptance of myths has united us. Myths govern our view of the world. They are the tools we have invented to explain the unexplainable. But, one by one, science and technology have stripped down our myths and thrown them into question. Myths come from the deeper, darker recesses of our brain, down in the sub cortical regions of our neural basement. They don’t stand up very well to the cold hard light of rational reasoning. And increasingly, we are forced to be reasonable about the things in our life. Information drives us towards reason, and we have more information thrown at us than ever before.

Moral Reinforcement

Myths also served another purpose. They gave us rules to govern our behavior. Most of our myths were religious in nature and came with a corresponding code of social behavior. The basic rules of herd survival, including fairness and reciprocal altruism, were baked into the package. That’s why a variation of the Golden Rule is found in every single religion in the world.

But, when the myths start to break down, what happens to the rules of behavior that came bundled with them? We start to get confused. Things start to become disconnected.

The Atheist Next Door

There’s a mix up of cause and effect that we struggle with when we talk about things like religion. Even if we renounce our religion, we don’t suddenly become evil people. Just because atheists don’t believe in God doesn’t mean they’ve freed themselves from the obligation to do right by their fellow man. In fact, if you had to pick someone to be your neighbor, an atheist wouldn’t be a bad choice. Statistically speaking, the percentage of atheists in prison is far less than the percentage of atheists in the general population. Atheists are also less likely to get divorced. When you look at the types of behavior that govern the continuance of social harmony, atheists have a far better track record than most segments of the population. Religion doesn’t cause morality. Morality preceded religion. You could say morality begat religion. Unfortunately, a lot of the less noble instincts of our species also got tied up in the whole religious bundle – including the tendency of humans belonging to different herds to try to kill each other.

But when our myths, including religion, start to slip away under the scrutiny of rationalization, we start to feel cut off from the herd. We start to become disconnected from our sense of “oneness”. We still try to do the right thing, but the reason why isn’t as clear as it once was. If we stop to think about it, we can come up with a Dawkins-esque rationalization using things like game theory and “tit for tat” reciprocal strategies, but it was a whole lot easier just to believe that God would smite us if we weren’t nice. The fact is, we don’t take much time in our lives to “stop and think.” We cruise through life 95% of the time on emotional autopilot, and myths are great guidance systems for emotions.

Myth-drift

So, back to Deutsch’s point. What happens as we drift between mythologies? The Pew Forum on Religion & Public Life has shown that the percentage of “non religious” people in America grew from just over 7% in 1990 to over 16% in 2007. What is perhaps even more telling is to see how that group breaks down. Only 1.6% were atheists and 2.4% agnostics. These are the ones who were, to some degree, proactive about severing their ties with an accepted mythology. The remaining 12.1% were simply drifting away from their mythologies. They were wandering out there, beyond the ideological boundaries of the herd.

Deutsch talks about the increasing pace of our lives being the culprit in our sense of disconnection. And, in that drive to do more in less time, we tend to sample life in little commoditized chunks. Ironically, in the same email that contained the link to Deutsch’s article was a sidebar with the top 10 franchises of 2009, courtesy of Entrepreneur magazine:

Top 10 Franchises Of 2009
1.    Subway
2.    McDonald’s
3.    7-Eleven
4.    Hampton Inn/ Hampton Inn & Suites
5.    Supercuts
6.    H & R Block
7.    Dunkin’ Donuts
8.    Jani-King
9.    Servpro
10.    am/pm Mini Market

It was a fitting echo to Deutsch’s words. The most successful businesses are the ones that slice off some aspect of our lives and serve it up to us fast and shrink wrapped, preferably at a cheap price.

I’m not so sure we are simply “between mythologies” as Bob Deutsch suggests. I suspect we’re moving too fast for myths to keep up. Myths, by their very nature, have to grow to critical mass to be effective. Historically, myths were the foundation for global religions. Today, myths are email strings that quickly get exposed on snopes.com. We deconstruct myths before they get a chance to gain enough traction to serve their purpose: uniting us in a common view. We have access to too much information for myths to stand much of a chance of survival. That’s where I’ll pick up in tomorrow’s post.

Socially, We’re Suckers for a Deal

Razorfish’s new FEED 2009 report found that consumers like to spread the word digitally about great deals on brands. In fact, this far surpassed their desire to just talk about brands.

Humans are still Humans, even Online

Here’s the thing that gets me. When we talk digital channels, we seem to forget that humans are humans. We’ll still be the way we’ve always been, we’ll just do it on a new canvas. The “finding” of FEED 2009 is that we like to talk about deals. This has been hardwired into humans since we crawled out of caves. In a bit, I’ll share the findings of an interesting study that looked at how this social news spreads through our networks.

The Results of FEED

But first, let’s look at the other results of the study. Despite my morning grumpiness, this is a report worth downloading:

65% of consumers have had a digital experience that either positively or negatively changed their opinion about a brand. Again, this is common behavior; we all have perception altering brand experiences. As we spend more time online, it’s natural that this will happen here too.

Branding is now a participatory experience. We’re no longer passive consumers of brand messaging. We now expect to roll up our sleeves, get in and muck around with the building of brands. We want to do things with the brand. We will now participate in building the aggregate story of a brand. 73% of study participants had posted a product or brand review on web sites like Amazon, Yelp, Facebook or Twitter. We now have a voice and we’re using it.

We’re becoming Brand Fans. 40% of consumers have “friended” a brand on Facebook and/or MySpace and 26% have followed a brand on Twitter. Again, this isn’t new, it’s just going digital. There are certain brands that inspire fierce loyalty: Apple, Harley Davidson, Nike. It’s natural that these Brand Fans would now be expressing themselves online. One word of caution for Brand Marketers here. People won’t suddenly become fans just because you’re on Facebook. You have to be a brand that people care about.

Here’s the study tidbit that was “surprising”. Of those that follow brands on Twitter, 44% said access to exclusive deals is the main reason. The same is true for those that “friended” a brand on Facebook or MySpace, accounting for 37% of participants. The next highest reason for following a brand on Twitter? Being a current customer, at 23.5%. And again, this would be for those brands that inspire an unusually high degree of loyalty.

Strength of Weak Ties

Some time ago, I talked about a fascinating study by Frenzen and Nakamoto that looked at how rumors, or in this case, news of a bargain, spread through social networks. It explored the role of Mark Granovetter’s famous “Weak Ties” in social networks. Social networks tend to be “clumpy” rather than uniformly dense. There are dense clumps representing our families, closest friends and the co-workers we see every day. You’re connected to these people with “Strong Ties”. But the clumps are also connected by “Weak Ties” that span the gaps: ties to more distant family, casual friends and acquaintances. As Granovetter discovered, news spreads quickly through the strong ties within a clump, but it’s the ability to jump the weak ties that really causes word to spread throughout the network. We rely on the “connectedness” of these weak ties for things like news of potential jobs, social tidbits and, yes, the scoop on a great bargain. If you look at the nature of these weak ties, you’ll realize they’re exactly the types of ties we tend to maintain on Twitter and Facebook.

In 1993, Jonathan Frenzen and Kent Nakamoto decided to explore the conditions that had to exist for news to jump from cluster to cluster across those weak ties. They tested the nature of the message itself and also how spreading the news would impact the person delivering it, a condition called moral hazard. In other words, would the messenger lose something by spreading the word? The scenario they used to test the conditions for this social “viralness” was news of a sale. There were three variables built into the study: the structure of the network itself (strongly connected vs. weakly connected), the attractiveness of the sale (20% off vs. 50 to 70% off) and the availability of the sale item (unlimited vs. very limited quantities, introducing the aspect of moral hazard).

Frenzen and Nakamoto found that in all cases, news of the sale spread quickly through the strong clusters. But when the message wasn’t that remarkable (the 20% off example), word of mouth had difficulty jumping across weak ties. And when moral hazard was high (quantities were limited), the message again tended to get stuck within a cluster and not be transmitted across the weak ties. If you look back at the original post, I go into more depth about how this impacts our inclination to spread news through our networks.
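The interplay of message value and moral hazard can be sketched as a toy simulation. To be clear, this is an illustrative model of my own, not Frenzen and Nakamoto’s actual methodology; the network shape and the jump probability formula are assumptions made purely to show the dynamic:

```python
import random

def spread_reach(value, moral_hazard, n_clusters=10, trials=500, seed=1):
    """Toy sketch of the weak-tie dynamic (illustrative only): a chain of
    dense clusters, each joined to the next by one weak tie. News always
    saturates a cluster via its strong ties; it jumps each weak tie with
    probability value * (1 - moral_hazard). Returns the mean number of
    clusters the news reaches."""
    rng = random.Random(seed)
    jump_p = value * (1 - moral_hazard)
    total = 0
    for _ in range(trials):
        reached = 1                      # news starts in cluster 0
        while reached < n_clusters and rng.random() < jump_p:
            reached += 1                 # crossed a weak tie to the next clump
        total += reached
    return total / trials

# Unremarkable deal (20% off): word stays stuck near its home cluster
print(spread_reach(value=0.2, moral_hazard=0.1))
# Great deal, unlimited stock: word cascades across many weak ties
print(spread_reach(value=0.9, moral_hazard=0.1))
# Great deal but very limited quantities (high moral hazard): stuck again
print(spread_reach(value=0.9, moral_hazard=0.8))
```

Even in a sketch this crude, the pattern from the study shows up: either a weak message or a high moral hazard is enough to keep word of mouth trapped in its home clump.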

Twitter: The Weak Tie Pipeline

So, let’s take this back to the Razorfish study. A few conditions need to be present for news to spread along weak ties: the information has to be valuable (50 to 70% off) and it can’t put the person holding the information in moral hazard (if I share this information amongst too many people, there will be nothing left for me or my family). The example given in the study, following a brand on Twitter to get news of exclusive offers, is our “weak tie” to the brand, so we can be first to benefit. And, if the discount is substantial and there is low moral hazard, we will in turn tweet about it ourselves.

The Razorfish study indicated surprise that more people were engaging in social networks to learn about discounts and not to evangelize brands. Again, if we look at human behavior, there is no surprise here. Brand evangelization engages a completely different part of our brain, the same part, incidentally, that gets triggered when we talk about religion and other unusually strong beliefs. These are things most of us hold closer to our chest. We share them with our strong ties, but we don’t usually spread that across weak ties. There are exceptions, of course, but I think most marketers assume all of us are willing to build public shrines to their products. That’s just not how humans tick.

But humans can’t resist spreading the word if that word has social value (a great bargain) and we don’t miss out ourselves by spreading it. Those are the messages built to set Granovetter’s weak ties singing in a social network. We’ve been this way for a long, long time. And now that Twitter and Facebook are here, we’ll still be that way.

The Common Denominator between Brains, Cities and the Internet (..oh..and ants too)

If you took the time to look at an ant colony… really look at it… you’d be amazed. In his book Emergence, Steven Johnson did just that. And here’s what he found: ant colonies are perfectly designed. The food supply of the colony is the perfect distance away from the trash pile, and both are strategically placed to be the greatest possible distance from the ants’ graveyard. It’s as if some ant mastermind somewhere took the time to plot out the colony design on an ant-sized drafting board. Of course, that didn’t happen. What did happen is that even ant-sized brains can remember a set of simple rules, and over time, even with the complexity of thousands of ants doing their thing, a sort of order emerges. Patterns that look to be deliberately planned emerge out of complex and seemingly chaotic activity.

The Organized Cesspool: Manchester

In the 1800s, the Industrial Revolution caused the city of Manchester, England to explode in size, from 24,000 in 1773 to 250,000 by 1850. The growth was not steered by any form of urban planning. Factories sprang up anywhere. Factories needed workers, so new neighborhoods, many of them shantytowns housing the poorest of the poor seeking work, suddenly sprouted up. People needed some basic form of support, so new shops and services suddenly appeared. All this happened without a plan in place, a seemingly hopeless mishmash of urban development. Alexis de Tocqueville described it like this: “From this foul drain the greatest stream of human industry flows out to fertilize the whole world. From this filthy sewer pure gold flows. Here humanity attains its most complete development and its most brutish; here civilization works its miracles, and civilized man is turned back almost into a savage.” Dickens was even less kind: “What I have seen has disgusted and astonished me beyond all measure.”

One of the visitors to Manchester saw something different, however. Friedrich Engels, who would become co-author of the Communist Manifesto with Karl Marx, came to Manchester to see firsthand the horrific struggles of the Industrial-era working class. Certainly he found what he came looking for, but he also saw something that surprised him. There, in the squalid chaos that was Manchester, a strange sort of order had emerged. Manchester had developed so that the factory owners who lived in the upper-class neighborhoods could live for years in the city without ever seeing a working-class neighborhood. Thoroughfares, businesses and social institutions emerged so that the city just “worked” for its inhabitants. Just like the ants, the citizens of Manchester had some social rules that dictated the pattern of the city that emerged.

Brains and Cities: Evolved Functionality

This natural evolution of cities is the subject of a recent study from Rensselaer Polytechnic Institute. The finding? Cities are organized like human brains. As cities grow, they not only increase in physical size, they also become more densely interconnected. As brains increase in complexity from species to species, you don’t just get more neurons, you also get more efficient neurons. Both can handle more traffic.

The study used Seattle and Chicago as examples. You couldn’t just take Seattle and triple it to become Chicago. The traffic corridors wouldn’t be able to handle the increased flow. There wouldn’t be enough on-ramps and off-ramps, and the ones that did exist would be too small. The services and support needed to accommodate the population wouldn’t be efficiently planned. As cities grow, they evolve to meet the needs of their citizens. Every time I visit New York, it amazes me that Manhattan can work at all. It seems to be an impossibly delicate act of magic, keeping that many people on an island fed and functioning. This is one of the reasons high-growth cities struggle to keep up with infrastructure such as freeways and public transit: the cities are growing faster than the infrastructure, handcuffed by the need for administrative approval, can change to support them.
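The intuition that a bigger city needs disproportionately more infrastructure can be put in back-of-the-envelope terms. The scaling exponent and city populations below are purely illustrative assumptions, not figures from the Rensselaer study:

```python
def required_capacity(population, exponent=1.2):
    """Toy superlinear scaling: infrastructure demand grows faster than
    population. The exponent is illustrative, not a measured value."""
    return population ** exponent

# Rough, hypothetical city-proper populations for illustration
seattle, chicago = 750_000, 2_700_000
ratio_pop = chicago / seattle
ratio_cap = required_capacity(chicago) / required_capacity(seattle)
print(round(ratio_pop, 1))   # population is ~3.6x larger
print(round(ratio_cap, 1))   # capacity needed is ~4.7x larger under this toy exponent
```

In other words, if interconnection demand grows superlinearly, “triple Seattle” leaves you well short of what Chicago actually requires, which is why growth outruns infrastructure.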

And if I think Manhattan is a miracle, the complexity the human brain deals with daily is a feat several orders of magnitude more impressive. Indeed, the functioning of the human brain is so complex that all the combined efforts of science have barely scratched the surface of how the damned thing actually works.

The Emergence of the Internet

This common theme of functional evolution and patterns emerging from complexity is currently playing out on the Internet as well. Much like Industrial-age Manchester, the Internet is growing exponentially without any master plan. And yet, it seems to work. And as the Internet evolves, just like brains and cities, it becomes more interconnected. Functionality is increased through APIs and mash-ups. The Internet is evolving into an incredibly complex ecosystem that is remarkably workable. And, like all complex systems, the emergence of workable patterns will depend on a handful of universal rules: the ability to find information, the ability to do things, the ability to talk to people, the ability to have fun and the ability to buy stuff. That’s all we really want, and the Internet will naturally emerge in the way best suited to accomplish those simple goals.

Two Different Views of Tweeting

Last week in San Jose, I was talking to a group of marketers about how digital natives and digital immigrants use social networking. Inevitably, the subject of Twitter came up. In our recent BuyerSphere research, we found that Digital Natives (the younger generation that grew up with technology) use Twitter or microblogging platforms more than Digital Immigrants (the older generation that adopted technology as adults). Someone in the audience said that he thought it was common knowledge that younger people don’t “tweet” but older people do, running counter to our research. The following chart shows the percentage difference in time spent each week between the Digital Natives and Digital Immigrants in our sample:

As you can see, Digital Natives spend significantly more time on social networks and Twitter, almost 50% more than Digital Immigrants. Yet Twitter is labeled as an older person’s platform. Today, new numbers on Twitter usage came out from the Pew Internet group that seem aligned with our findings:

The core audience for Twitter is squarely in the Digital Native age group. I think the answer lies in how the respective groups use Twitter. And this difference in usage and attitude extends beyond Twitter to almost any social networking platform.

The Digital Immigrant views Twitter as a tool. It’s a way to get information out, build traffic to a blog, connect with someone. We treat it as technology that offers us another way to get things done.

But for the Digital Native, Twitter is just part of the world they live in, like air or water. They don’t treat it as technology. It simply is. This attitude toward technology as not being technology is common amongst Natives. They don’t have the same “gee whiz” awe of technology. They’re not constantly comparing Twitter or Facebook against the good old days. Why should they? For them, this is just part of the world they live in; there is no reference point in the past. That’s why Natives spend substantially more of each week interacting with technology that connects them with their lives and social circles. For myself, Facebook is a destination, as is LinkedIn or Twitter. I only go there when I need to do something. But for the Native, it’s just part of their environment.

Finally, I want to share the view of another Digital Immigrant, comedian Louis C.K., who ranted about the Native lack of appreciation for technology.

The New Speed of Information

First published August 27, 2009 in Mediapost’s Search Insider

This summer, we had fires in the town I live in. From the back deck of my house, I could see the smoke and, as darkness descended, the flames that were threatening the homes in the hills above Kelowna. I had friends and co-workers that lived in the neighborhoods that were being evacuated, so I wanted to know what was happening as soon as possible.

I was sitting on the back deck, watching the progress of the fire through binoculars and monitoring Twitter on my laptop. My wife was inside the house, listening on the radio and watching on TV. Because I had an eyewitness perspective, I was able to judge the timeliness of our news channels and gained a new appreciation for the speed of social networks.

News That’s Not So New

If you had tuned in to our local TV station even hours after the fires began, you wouldn’t have known that anything out of the ordinary was happening. There was no mention of the fire for hours after it started. The TV station in Vancouver was better, with real-time coverage a few hours after the fire first started. But their “coverage” consisted of newscasters repeating the same limited information, which was at least 2 hours out of date, and playing the same 30-second video loop over and over. If you needed information, you would not have found it there.

The local news radio station fared a little better, reporting new evacuation areas as soon as they came through the official communication channels. But the real test came at about 8:45 p.m. that night. The original fire started near a sawmill on the west side of Okanagan Lake. Around the aforementioned time, I noticed a wisp of smoke far removed from the main fire. It seemed to me that a new fire had started, and this one was in the hills directly above the subdivision that my business partner lived in. Was this a new fire? Were the homes threatened? I ran in and asked my wife if she had heard anything about a second fire. Nothing was being reported on TV or radio. I checked the local news Web sites. Again, no report.

Turning to Twitter

So I tweeted about it. Within 15 minutes, someone replied that there did seem to be a second fire and fire crews had just gone by their house, on the way up to the location. Soon, there were more tweets with eyewitness accounts and reports of more fire crews. In 30 minutes, the Kelowna Twitter community had communicated the approximate location of the new fire, the official response, potential neighborhoods that might be evacuated and even the possible cause of the fire.

Yes, it was all unvetted and unauthorized, but it was in time to make a difference. It would take TV two more hours to report a possible new fire, and even then, they got most of the details wrong. The local radio station again beat TV to the punch, but (as I found out afterwards) only because a reporter was also monitoring Twitter.

We’ve all heard about the new power of social media, whether it be breaking the news of Michael Jackson’s death or the elections in Iran, but for me, it took an event a little closer to home to help me realize the magnitude of this communication shift. Official channels are being hopelessly outstripped by the efficiency of technology-enabled communications. Communication flows freely, unrestricted by bottlenecks. One might argue that with that freedom from restrictions, one sacrifices veracity. There is no editor to double-check facts. But in the case of the Kelowna fires of 2009, at least, official channels proved to be even more inaccurate. Not everything I read on Twitter was true, but the corrections happened much faster than they did through the supposedly “authorized” channels. Twitter broke the news of Jackson’s death while the official news sources still had him in the hospital with an undisclosed condition. When it came to timely, accurate information, social media beat the massive news machine hands down.

Do we need a two-hour jump on the news we hear? Is it really that important that we know about events as soon as they happen? When a fire is bearing down on your home and every minute gained means you might lose one less precious keepsake or treasured photo, you bet it’s important.

Tweets from the Edge

First published February 5, 2009 in Mediapost’s Search Insider

I’m now on Twitter (@outofmygord if you’re interested), which, to use the emerging verb of consensus, means that I tweet. I’m not sure I’m a Twaddict (a la Todd Friesen), but I am moving through Rohit Bhargava’s 5 Stages of Twitter Acceptance:

1. Denial — “I think Twitter sounds stupid. Why would anyone care what other people are doing right now?”

2. Presence — “Ok, I don’t really get why people love it, but I guess I should at least create an account.”

3. Dumping — “I’m on Twitter and use it for pasting links to my blog posts and pointing people to my press releases.”

4. Conversing — “I don’t always post useful stuff, but I do use Twitter to have authentic 1x1 conversations.”

5. Microblogging — “I’m using Twitter to publish useful information that people read AND converse 1x1 authentically.”

My self-assessment has me currently lodged between steps 3 and 4, but with signs of promise. And so, through the phenomenon of synchronicity, it now seems that everywhere I turn I see signs of Twitter. One of the recent ones was Kaila Colbin’s Search Insider column about Twitter’s monetization strategy, or lack of same. Twitter is not unique; virtually every social network struggles with this issue. I would like to add two observations from my perspective.

The Curse of the Early Adopter

Social networks seem to be perennially stuck on the wrong side of Geoffrey Moore’s Chasm. They flourish with early adopters, who are by nature fickle when it comes to technology and any bright shiny object, but social networks have difficulty embedding themselves in the mainstream. I’m seeing signs that Facebook might successfully make the leap across the Chasm, based on my “Jill” litmus test: when my wife is familiar with a technology, it usually means it’s crossed the Chasm. Jill doesn’t have a Facebook page, but she has visited it (due largely to the fact that we have teenage daughters — ’nuff said).

The problem in trying to track these things is that whatever the blogosphere is buzzing about bears little resemblance to what will actually gain traction with a mainstream market. We (and yes, I include myself) are exactly the wrong people to prognosticate about what may be the next killer app for the average Joe. We are all technology nerds. Everyone I know in this industry is a technology nerd. The ones who actually blog and emerge as thought leaders are the most hopeless of the lot. We exist in a rarified technological atmosphere and have largely lost touch with the real world. That doesn’t mean we’re inherently wrong about the marketability of new technology, but it doesn’t mean we’re inherently right, either. We’re guessing, and all too often we let our personal enthusiasm bias our forecasts.

Social networks are always held up to Google as the monetization baseline, and it’s an unfair and misleading comparison. There were a number of circumstances unique to Google that won’t be replicated with a social network. They include user intent, the nascent stage of the Internet during Google’s introduction, lack of visionary competition and the luxury of developing a critical mass of usage on its own real estate.  The problem with monetizing Twitter is that much of the interaction with it happens on a third-party app.

Social and Market Norms

Perhaps the biggest reason why it’s difficult to monetize social results has to do with how our online experiences are framed, and the concept of social vs. market norms.  Here’s an example. You take your family out for an Italian dinner. The meal is fabulous. The portions are huge. After one of the best meals you’ve ever had, you hand $180 to the hostess. She throws it back in your face, storms into the kitchen and you’re abruptly escorted to the door. If we were at a restaurant, this reaction would be rather surprising. But if we’re at my mother-in-law’s for Sunday dinner, it suddenly makes sense. The difference is the frame in which we view the scenario. If we look at it through a market norm, the rules that govern commerce and fair trade, it’s entirely appropriate to offer fair compensation for a meal. If we look at it through a social norm, the rules that govern our family and friend relationships, it’s an unforgivable insult.

This slippery slope between market and social norms is the treacherous one that a social network must tread. Here’s another example. You’re at a party and you’ve asked two friends about their opinions on the best car for you to buy. Another person at the party overhears this — someone who just happens to be a salesperson at the local Ford dealership. Sensing opportunity, the salesperson whips around and immediately starts telling you why the Ford Mustang is the perfect car for you. How would you feel? How would you respond to the information? How uncomfortable would the discussion become?

The challenge is that you moved from a social norm to a market norm and you weren’t in control of the transition. The same is true when you use a social network to ask for information and suddenly the network uses that to present targeted ads to you.  Kaila was right to point to Twitter’s search functionality as its only monetization opportunity. Google has conditioned us to accept a search results page as a place we can look at through market norm eyes. Also, we’re searching all Tweets for mention of a product, not specifically asking our friends. The difference is crucial in how we accept the advertising message.

The confluence of social networking and search is exciting to contemplate, but expect a lot of trial and error in the quest to find the right business model. Personally, I don’t expect to find it any time soon, and I also expect a lot of miffed users as part of the collateral damage.

Questioning the Power of the Influencer

First published October 2, 2008 in Mediapost’s Search Insider

Word of mouth is powerful in marketing. In the last two weeks, we’ve seen how the opinions of others can cause us to change our own beliefs to match. We’ve also seen how the speed at which the word spreads is a function not only of the structure of the network itself, but also the value of the message and its impact on the people in the network, as well as how much they stand to gain (or lose) by spreading the word.

Influencers: Our Connection to Opinion?

In the world of marketing, one of the most cherished concepts has been the idea of an influencer or opinion leader, the super-connected individual who acts as a hub in an information cascade, rapidly disseminating the idea to many. According to this theory, most of us (90%) play relatively passive roles in information cascades, meekly accepting the opinions of these influencers and following the herd. Katz and Lazarsfeld introduced the two-step influencer model in the middle of the last century, showing how media first influences these influencers, or opinion leaders, who then act as a conduit and “infection agent” for the greater population.

It’s Not the Influencer, It’s Our Willingness to be Influenced

For the past six decades, marketers have devoted a lot of effort to reaching these influencers, assuming that once you capture the influencers, you capture the entire market. The assumption was that information cascades depended on these influential hubs. Malcolm Gladwell’s “The Tipping Point” brought this phenomenon to popular attention.

In the past few years, a number of researchers, including Duncan Watts from Columbia University, have questioned the impact of influencers on information cascades. They’ve created several network models which have shown that in most cases, ordinary individuals are all that’s required to trigger a word-of-mouth cascade. We are not merely sheep following the herd. We are all influencers in our own right, but only when we feel strongly about something. The necessary ingredient is not a hyper-connected influencer or super trend-setter, but rather a group of people willing to be influenced.
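The spirit of Watts’ network models can be sketched in a few lines of Python. To be clear, this is a toy in the style of his threshold cascades, not his actual simulation; the graph size, degree and threshold values are illustrative assumptions:

```python
import random

def mean_cascade_size(threshold, n=400, avg_degree=6, trials=20, seed=2):
    """Toy sketch of a Watts-style threshold cascade (illustrative
    parameters, not Watts' actual setup). Nodes sit on a random graph
    and adopt an idea once the adopting fraction of their neighbors
    reaches their threshold. Each trial seeds ONE ordinary, randomly
    chosen individual; returns the mean final adopter fraction."""
    rng = random.Random(seed)
    sizes = []
    for _ in range(trials):
        # Wire a random graph with roughly avg_degree links per node
        nbrs = {i: set() for i in range(n)}
        for _ in range(n * avg_degree // 2):
            a, b = rng.randrange(n), rng.randrange(n)
            if a != b:
                nbrs[a].add(b)
                nbrs[b].add(a)
        active = {rng.randrange(n)}          # one ordinary seed, no "hub"
        changed = True
        while changed:
            changed = False
            for node in range(n):
                if node in active or not nbrs[node]:
                    continue
                if len(nbrs[node] & active) / len(nbrs[node]) >= threshold:
                    active.add(node)         # a "combustible" node catches fire
                    changed = True
        sizes.append(len(active) / n)
    return sum(sizes) / trials

# Easily influenced population (low thresholds): ordinary seeds ignite big cascades
print(mean_cascade_size(threshold=0.15))
# Resistant population (high thresholds): the same ordinary seeds fizzle out
print(mean_cascade_size(threshold=0.5))
```

Notice that nothing in the sketch gives any node special influence; the only thing that changes between the two runs is the audience’s willingness to be influenced, and that alone decides whether the cascade happens.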

Passion by Word of Mouth

Which brings us to Mel Gibson’s “The Passion of the Christ.” When promoting the film, Gibson knew the most receptive audience would be church-goers. So he arranged for private screenings and the distribution of free tickets in churches throughout North America. We had Watts’ ideal model, a low variance network (similar levels of influence) that shared a vulnerability to influence, given the nature of the message. Word spread quickly before the launch of the movie (which also resulted in a firestorm of controversy), making “The Passion of the Christ” one of the most successful movies of 2004.

This example also leads us to a possible error in analysis of information cascades that has perpetuated the “influencer” theory. It’s relatively easy, when looking in hindsight, to make the assumption that if a cascade happened, the individuals at the beginning of the cascade had to be unique in their ability to influence others. A proponent of the Influentials Theory could look at the example of “The Passion of the Christ” and say that it was the pastors and ministers of the selected screening churches that acted as the influencers, spreading the word to their congregations.

But Watts’ theory offers an alternate explanation. The everyday, commonly connected members of the audience were willing to be influenced, and once captured by the message, went and spread it within their other social groups. It was the willingness to be influenced that was the critical factor. To use the analogy provided by Watts in his paper, assuming some unique level of influence by the catalysts of a cascade is like assuming that the first trees to burn in a forest fire are somehow able to spread flames farther than other trees. Often, the fact that the tree was combustible in the first place is overlooked.

Starting a Brand Fire

So, when we talk about brand, what makes a tree ready to catch on fire? Here we have another important insight from Watts’ work. Too many marketers make the assumption that influencers are the critical component of success. Procter & Gamble has made influencer marketing a cornerstone of its strategy. But the fact is, if “The Passion of the Christ” had been an unremarkable movie that audiences couldn’t connect with, all the influencers in the world wouldn’t have caused the word to spread. It was a powerful message connecting with an audience primed to accept it.

Watts’ models show that the success of a cascade depends on the vulnerability to influence. If that is present, ordinary individuals can cause the word to spread as far and just as quickly as hyper-connected influencers. And the vulnerability to be influenced, the “combustibility” of the audience, depends on many factors, perhaps the most important of which is the back story of the brand.

The Combustible iPhone

Look at what has been one of the most successful cascades of recent times: the Apple iPhone. The iPhone is a tremendously combustible product. It’s not technology mavens causing the word to spread (although they do have influence; Watts is quick to point out that they have impact, but it may not be as disproportionately large as everyone believes). It’s the person sitting next to you on the plane who says she loves it. And we’re receptive to that message because we have that magic connection of brand (Apple makes cool products) and a remarkable product. We’re ready to be set on fire.

I’ve spent the last few columns detailing the aspects of word of mouth because they have a tremendous impact on brand and how we create our own brand beliefs. And it’s these brand beliefs that are triggered when we interact with search results. Next week, we return to more familiar territory and see how this interaction plays out.