How Activation Works in an Absolute Value Market

As I covered last week, if I mention a brand to you – like Nike, for instance – your brain immediately pulls back your own interpretation of the brand. What has happened, in a split second, is that the activation of that one node – let’s call it the Nike node – triggers the activation of several related nodes in your brain, which are quickly assembled into a representation of the brand Nike. This is called Spreading Activation.
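Spreading Activation is often modeled computationally as a signal fanning out across a network of associations, losing strength with each hop. Here’s a minimal sketch in Python – the brand network, decay rate and threshold are invented purely for illustration:

```python
from collections import defaultdict

def spread_activation(graph, start, initial=1.0, decay=0.5, threshold=0.1):
    """Breadth-first spreading activation: each node passes a decayed
    share of its activation to its neighbours until the signal fades
    below a threshold."""
    activation = defaultdict(float)
    activation[start] = initial
    frontier = [start]
    visited = {start}
    while frontier:
        next_frontier = []
        for node in frontier:
            signal = activation[node] * decay
            if signal < threshold:
                continue  # the signal has faded too far to keep spreading
            for neighbour in graph.get(node, []):
                activation[neighbour] += signal
                if neighbour not in visited:
                    visited.add(neighbour)
                    next_frontier.append(neighbour)
        frontier = next_frontier
    return dict(activation)

# A toy associative network for the "Nike" node (the associations are
# invented for illustration, not taken from any real study).
brand_graph = {
    "Nike": ["swoosh", "running", "Just Do It"],
    "running": ["marathon", "shoes"],
}
print(spread_activation(brand_graph, "Nike"))
```

Run against the toy network, directly linked associations come back with half the original activation and second-hop associations with a quarter – a rough analogue of why “swoosh” comes to mind faster than “marathon.”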

This activation is all internal. It’s where most of the efforts of advertising have been focused over the past several decades. Advertising’s job has been to build a positive network of associations so when that prime happens, you have a positive feeling towards the brand. Advertising has been focused on winning territory in this mental landscape.

Up to now, when making consumer decisions, we have been restricted to this internal landscape by the boundaries of our own rationality. Access to reliable and objective information about possible purchases was limited. It required more effort than we were willing to expend. So, for the vast majority of purchases, these internal representations were enough for us. They acted as a proxy for information that lay beyond our grasp.

But the world has changed. For almost any purchase category you can think of, there exists reliable, objective information that is easy to access and filter. We are no longer restricted to internal brand activations (relative values based on our own past experiences and beliefs). Now, with a few quick searches, we can access objective information, often based on the experiences of others. In their book of the same name, Itamar Simonson and Emanuel Rosen call these sources “Absolute Value.” For more and more purchases, we turn to external sources because we can. The effort invested is more than compensated for by the value returned. In the process, the value of traditional branding is being eroded. This is truer for some product categories than others. The higher the risk or the level of interest, the more the prospect will engage in an external activation. But across all product categories, there has been a significant shift from the internal to the external.

What this means for advertising is that we have to shift our focus from internal spreading activations to external spreading activations. Now, when we retrieve an internal representation of a product or brand, it typically acts as a starting point, not the end point. That starting point is then to be modified or discarded completely depending on the external information we access. The first activated node is our own initial concept of the product, but the subsequent nodes are spread throughout the digitized information landscape.

In an internal spreading activation, the activation of nodes and the connections between those nodes all happen at a subconscious level. It’s beyond our control. But an external spreading activation is a different beast. It’s a deliberate information search conducted by the prospect. That means that the nodes accessed and the connections between those nodes become of critical importance. Advertisers have to understand what those external activation maps look like. They have to be intimately aware of the information nodes accessed and the connections used to get to those nodes. They also have to be familiar with the prospect’s information consumption preferences. At first glance, this seems to be an impossibly complex landscape to navigate. But in practice, we all tend to follow remarkably similar paths when establishing our external activation networks. Search is often the first connector we use. The nodes accessed and the information within those nodes follow predictable patterns for most product categories.

For the advertiser, it comes down to a question of where to most profitably invest your efforts. Traditional advertising was built on the foundation of controlling the internal activation. This was the psychology behind classic treatises such as Ries and Trout’s “Positioning: The Battle for Your Mind.” And, in most cases, that battle was won by whoever could assemble the best collection of smoke and mirrors. Advertising messaging had very little to do with facts and everything to do with persuasion.

But as Simonson and Rosen point out, the relative position of a brand in a prospect’s mind is becoming less and less relevant to the eventual purchase decision. Many purchases are now determined by what happens in the external activation. Factual, reliable information and easy access to that information becomes critical. Smoke and mirrors are relegated to advertising “noise” in this scenario. The marketer with a deep understanding of how the prospect searches for and determines what the “truth” is about a potential product will be the one who wins. And traditional marketing is becoming less and less important to that prospect.


Consuming in Context

It was interesting watching my family watch the Oscars Sunday night. Given that I’m the father of two millennials, who have paired with their own respective millennials, you can bet that it was a multi-screen affair. But to be fair, they weren’t the only ones splitting their attention among the TV and various mobile devices. I was also screen hopping.

As Dave Morgan pointed out last week, media usage no longer equates to media opportunity. And it’s because the nature of our engagement has changed significantly in the last decade. Unfortunately, our ad models have been unable to keep up. What is interesting is the way our consumption has evolved. Not surprisingly, technology is allowing our entertainment consumption to evolve back to its roots. We are watching our various content streams in much the same way that we interact with our world. We are consuming in context.

The old way of watching TV was very linear in nature. It was also divorced from context. We suspended engagement with our worlds so that we could focus on the flickering screen in front of us. This, of course, allowed advertisers to buy our attention in little 30-second blocks. It was the classic bait and switch technique. Get our attention with something we care about, and then slip in something the advertiser cares about.

The reason we were willing to suspend engagement with the world was that there was nothing in that world that was relevant to our current task at hand. If we were watching Three’s Company, or the Moon Landing, or a streaker running behind David Niven at the 1974 Oscar ceremony, there was nothing in our everyday world that related to any of those TV events. Nothing competed for the spotlight of our attention. We had no choice but to keep watching the TV to see what happened next.

But imagine if a nude man suddenly appeared behind Matthew McConaughey at the 2015 Oscars. We would immediately want to know more about the context of what just happened. Who was it? Why did it happen? What’s the backstory? The difference is now, we have channels at our disposal to try to find answers to those questions. Our world now includes an extended digital nervous system that allows us to gain context for the things that happen on our TV screens. And because TV no longer has exclusive control of our attention, we switch to the channel that is the best bet to find the answers we seek.

That’s how humans operate. Our lives are a constant quest to fill gaps in our knowledge and, by doing so, make sense of the world around us. When we become aware of one of these gaps, we immediately scan our environment to find cues of where we might find answers. Then, our senses are focused on the most promising cues. We forage for information to satiate our curiosity. A single-minded focus on one particular cue, especially one over which we have no control, is not something we evolved to do. The way we watched TV in the 60s and 70s was not natural. It was something we did because we had no option.

Our current mode of splitting attention across several screens is much closer to how humans naturally operate. We continually scan our environment, which, in this case, includes various electronic interfaces to the extended virtual world, for things of interest to us. When we find one, our natural need to make sense sends us on a quest for context. As we consume, we look for this context. The diligence of our quest for that context will depend on the degree of our engagement with the task at hand. If it is slight, we’ll soon move on to the next thing. If it’s deep, we’ll dig further.

On Sunday night, the Hotchkiss family quest for context continually skipped around, looking for what other movies J.K. Simmons had acted in, watching the trailer for Whiplash, reliving the infamous Adele Dazeem moment from last year and seeing just how old Benedict Cumberbatch is (I have two daughters who are hopelessly in love, much to the chagrin of their boyfriends). As much as the advertisers on the 87th Oscars might wish otherwise, all of this was perfectly natural. Technology has finally evolved to give our brain choices in our consumption.


Why More Connectivity is Not Just More – Why More is Different

Eric Schmidt is predicting from Davos that the Internet will disappear. I agree. I’ve always said that Search will go under the hood, changing from a destination to a utility. Not that Mr. Schmidt or the Davos crew needs my validation. My invitation seems to have got lost in the mail.

Laurie Sullivan’s recent post goes into some of the specifics of how search will become an implicit rather than an explicit utility. Underlying this is a pretty big implication that we should be aware of – the very nature of connectivity will change. Right now, the Internet is a tool, or a resource. We access it through conscious effort. It’s a “task at hand.” Our attention is focused on the Internet when we engage with it. The world described by Eric Schmidt and the rest of the panel is much, much different. In this world, the “Internet of Things” creates a connected environment that we exist in. And this has some pretty important implications for us.

First of all, when something becomes an environment, it surrounds us. It becomes our world as we interpret it through our assorted sensory inputs. These inputs have evolved to interpret a physical world – an environment of things. We will need help interpreting a digital world – an environment of data. Our reality, or what we perceive our reality to be, will change significantly as we introduce technologically mediated inputs into it.

Our brains were built to parse information from a physical world. We have cognitive mechanisms that evolved to do things like keep us away from physical harm. Our brains were never intended to crunch endless reams of digital data. So, we will have to rely on technology to do that for us. Right now we have an uneasy alliance between our instincts and the capabilities of machines. We are highly suspicious of technology. There is every rational reason in the world to believe that a self-driving Google car will be far safer than a two-ton chunk of accelerating metal under the control of a fundamentally flawed human, but which of us is willing to give up the wheel? The fact is, however, that if we want to function in the world Schmidt hints at, we’re going to have to learn not only to trust machines, but also to rely totally on them.

The other implication is one of bandwidth. Our brains have bottlenecks. Right now, our brain, together with our senses, subconsciously monitors our environment and, if the situation warrants, wakes up our conscious mind for some focused and deliberate processing. The busier our environment gets, the bigger this challenge becomes. A digitally connected environment will soon exceed our brain’s ability to comprehend and process information. We will have to determine some pretty stringent filtering thresholds. And we will rely on technology to do the filtering. As I said, our physical senses were not built to filter a digital world.

It’s an odd relationship with technology that will have to develop. Even if we lower our guard on letting machines do much of our “thinking” (in terms of processing environmental inputs for us), we still have to learn how to give machines guidelines so they know what our intentions are. This raises the question, “How smart do we want machines to become?” Do we want machines that can learn about us over time, without explicit guidance from us? Are we ready for technology that guesses what we want?

One of the comments on Laurie’s post was from Jay Fredrickson: “Sign me up for this world, please. When will this happen and be fully rolled out? Ten years? 20 years?” Perhaps we should be careful what we wish for. While this world may seem to be a step forward, we will actually be stepping over a threshold into a significantly different reality. As we step over that threshold, we will change what it means to be human. And there will be no stepping back.

#AlexfromTarget – An Unexpected Consequence of Technology

Yes, I’m belatedly jumping on the #AlexfromTarget bandwagon, but it’s in service of a greater truth that I’m trying to illustrate. Last column, I spoke about the Unintended Consequences of Technology. I think this qualifies. And furthermore, this brings us full circle to Kaila Colbin’s original point, which started this whole prolonged discussion.

It is up to us to decide what is important, to create meaning and purpose. And, personally, I think we could do a better job than we’re doing now.

So, why did the entire world go ga-ga over a grocery bagger from Texas? What could possibly be important about this?

Well – nothing – and that’s the point. Thinking about important things is hard work. Damned hard work – if it’s really important. Important things are complex. They make our brains hurt. It’s difficult to pin them down long enough to plant some hooks of understanding in them. They’re like eating broccoli, or doing push-ups. They may be good for us, but that doesn’t make them any more fun.

Remember the Yir Yoront from my last column – the tribal society that was thrown into a tailspin by the introduction of steel axes? The intended consequence of that introduction was to make the Yir Yoront more productive. The axes did make the tribe more productive, in that they were able to do the essential tasks more quickly, but the result was that the Yir Yoront spent more time sleeping.

Here’s the thing about technology. It allows us to be more human – and by that I mean the mixed bag of good and bad that defines humanity. It extends our natural instincts. It’s natural to sleep if you don’t have to worry about survival. And it’s also natural for young girls to gossip about adorable young boys. These are hard-wired traits. Deep philosophical thought is not a hard-wired trait. Humans can do it, but it takes conscious effort.

Here’s where the normal distribution curve comes in. Any genetically determined trait will have a normal distribution over the population. How we apply new technologies will be no different. The vast majority of the population will cluster around the mean. But here’s the other thing – that “mean” is a moving target. As our brains “re-wire” and adapt to new technologies, the mean that defines typical behavior will move over time. We adapt strategies to incorporate our new technology-aided abilities. This creates a new societal standard, and it is also human to follow the unwritten rules of society. This creates a cause-and-effect cycle. Technologies enable new behaviors that are built on top of the foundations of human instinct – society determines whether these new behaviors are acceptable – and if they are acceptable, they become the new “mean” of our behavioral bell curve. We bounce new behaviors off the backboard of society. So, much as we may scoff at the fan-girls who gave “Alex” insta-fame – ultimately it’s not the girls’ fault, or technology’s. The blame lies with us. It also lies with Ellen DeGeneres, the New York Times, and the other barometers of societal acceptance that offered endorsement of the phenomenon.
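That cause-and-effect cycle – behavior drifting toward whatever society endorses, which in turn resets the norm – can be caricatured in a few lines of Python. The starting values and adoption rate here are invented for illustration only:

```python
def shift_norm(norm, endorsed, adoption_rate=0.2, generations=5):
    """Toy model of the 'moving mean': each generation, the behavioural
    mean moves a fixed fraction of the way toward the socially endorsed
    behaviour, so the centre of the bell curve is itself a moving target."""
    history = [norm]
    for _ in range(generations):
        norm += adoption_rate * (endorsed - norm)
        history.append(norm)
    return history

# Old norm at 0, endorsed behaviour at 10: the mean creeps steadily
# toward 10, one generation at a time.
print(shift_norm(0.0, 10.0))
```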

It’s human to be distracted by the titillating and trivial. It’s also human to gossip about it. There’s nothing new here. It’s just that these behaviors used to remain trapped within the limited confines of our own social networks. Now, however, they’re amplified through technology. It’s difficult to determine what the long-term consequences of this might be. Is Nicholas Carr right? Is technology leading us down the garden path to imbecility, forever distracted by bright, shiny objects? Or is our finest moment yet to come?

The Unintended Consequences of Technology

In last Friday’s Online Spin column, Kaila Colbin asks a common question when it comes to the noise surrounding the latest digital technologies: Who Cares? Colbin rightly points out that we tend to ascribe unearned importance to whatever digital technology we seem to be focused on at a given time. This is called, aptly enough, the focusing illusion, and in the words of Daniel Kahneman, who coined the term, “Nothing in life is as important as you think it is, while you are thinking about it.”

But there’s another side to this. How important are the things we aren’t thinking about? For example, because it’s difficult to wrap our minds around big picture consequences in the future, we tend not to think as much as we should about them. In the case of digital technology shifts such as the ones Kaila mentioned, what we should care about is the overall shift caused by the cumulative impact of these technologies, not the individual components that make up the wave.

When we introduce a new technology, we usually have some idea of the impact it will have. These are the intended consequences. And we focus on these, which makes them more important in our minds. But some things will catch us totally by surprise. These are called unintended consequences. We won’t know them until they happen, but when they do, we will very much care about them. To illustrate that point, I’d like to tell the story of the introduction of one technology that dramatically changed one particular society.

The Yir Yoront were a nomadic tribe in Australia that somehow managed to avoid significant contact with the western world until well into the 20th century. In Yir Yoront society, one of the most valuable things you could possess was a stone axe. The making of these axes took time and skill and was typically done by elder males. In return, these “axe-makers” were conferred special status in aboriginal society. Only a man could own an axe and if a woman or child needed one, they had to borrow it. A complex social network evolved around the ownership of axes.

In 1915 the Anglican Church established a mission in Yir Yoront territory. The missionaries brought with them a large supply of steel hatchets. They distributed these freely to any Yir Yoront that asked for them. The intended consequence was to make life easier for the tribe and trigger an improvement in living conditions.

As anthropologist Lauriston Sharp chronicled, steel axes spread rapidly through the Yir Yoront. But they didn’t spread evenly. Elder males held on to their stone axes, both as a symbol of their status and because of their distrust of the missionaries. It was the younger men, women and children who previously had to borrow stone axes that eagerly adopted the new steel axes. The steel axes were more efficient, and so jobs were done in much less time. But, to the missionaries’ horror, the Yir Yoront spent most of their extra leisure time sleeping.

Sleeping, however, was the least of the unintended consequences. Social structures, which had evolved over thousands of years, were dismantled overnight. Elders were forced to borrow steel axes from what would have been their social inferiors. People no longer attended important intertribal gatherings, which were once the exchange venues for stone axes. Traditional trading channels and relationships disappeared. Men began prostituting their daughters and wives in exchange for someone else’s steel axe. The very fabric of Yir Yoront society began unraveling as a consequence of the introduction of steel axes by the Anglican missionaries.

Now, one may argue that there were aspects of this culture that were overdue for change. Traditional Yir Yoront society was undeniably chauvinistic. But the point of this story is not to pass judgment. My only purpose here is to show how new technologies can bring massive and unanticipated disruption to a society.

Everett Rogers used the Yir Yoront example in his seminal book Diffusion of Innovations. In it, he said that introductions of new technologies typically have three components: Form, Function and Meaning. The first two of these tend to be understood and intended during the introduction. Both the Yir Yoront and the Anglican missionaries understood the form and function of the steel axe. But neither understood the meaning, because meaning was determined over time through the absorption of the technology into the receiving culture. This is where unintended consequences come from.

When it comes to digital technologies, we usually talk about form and function. We focus on what a technology is and what it will do. We seldom talk about what the meaning of a new technology might be. This is because form and function can be intentionally designed and defined. Meaning has to evolve. You can’t see it until it happens.

So, to return to Kaila’s question. Who cares? Specifically, who cares about the meaning of the new technologies we’re all voraciously adopting? If the story of the Yir Yoront is any lesson, we all should.

Want to Be More Strategic? Stand Up!

One of the things that always frustrated me in my professional experience was my difficulty in switching from tactical to strategic thinking. For many years, I served on a board that was responsible for the strategic direction of an organization. A friend of mine, Andy Freed, served as an advisor to the board. He constantly lectured us on the difference between strategy and tactics:

“Strategy is your job. Tactics are mine. Stick to your job and I’ll stick to mine.”

Despite this constant reminder, our discussions always seemed to quickly spiral down to the tactical level. We all caught ourselves doing it. It seemed that as soon as we started thinking about what needed to be done and why, we automatically shifted gears and thought about how it should be done.

A recent study may have found the problem. We were sitting down. We should have stood up. Better yet, we should have taken the elevator to the top of the building (we actually did do this at one board retreat in Scottsdale, Arizona). Two researchers at the University of Toronto (home, I should point out, of what was for many years the tallest free-standing structure in the world – the CN Tower), Pankaj Aggarwal and Min Zhao, found that a subject’s physical situation impacted how strategic they were. When subjects were physically higher up, say standing on a tall stool, they were more likely to look at the “big picture.”

Our physical context has more than a little impact on how we think. It’s a phenomenon called Mental Construal. And it’s not just restricted to how strategic our thinking is. It can impact things like social judgment as well. In a 2006 paper, University of Michigan professor Norbert Schwarz gave some examples that fall under the category called “situated concepts.” For example, the mental images you retrieve when I say “chair” might be different if we’re standing in a living room rather than an airplane or movie theatre. Another example, which unfortunately speaks to a darker side of human nature, is how you would respond to the face of a young African American when shown in the context of a church scene versus the context of a street corner scene.

Schwarz also talks about levels of construal. We’re more successful staying at strategic levels when our planning is trouble free. The minute we hit a problem, we tend to revert to finer grained tactical thinking. Again, in my board experience, the minute we started hitting problems we immediately tried to solve them, which effectively derailed any strategic discussion.

In his book, Creativity: Flow and the Psychology of Discovery and Invention, Mihaly Csikszentmihalyi found that physical contexts can also impact creativity. Physicist Freeman Dyson found that walking was essential to drive the creative process:

“Again, I never went to a class that (Richard) Feynman taught. I never had any official connection with him at all, in fact. But we went for walks. Most of the time that I spent with him was actually walking, like the old style of philosophers who used to walk around under the cloisters.”

In a study where subjects were given pagers and were signaled at random times of the day, they were asked to rate how creative they felt. It turned out the highest level of creativity came while they were walking, driving or swimming. Perhaps it was the physical stimulation, but it may have also been mental construal at work. Perhaps physical movement primed the brain for mental movement.

So, if you need to be strategic, find the highest vantage point possible, with room to walk around, preferably with the smartest person you know.

Two Views of the Promise of Technology

In the last two columns, I’ve looked at how technology may be making us intellectually lazy. The human brain tends to follow the path of least resistance and technology’s goal is to eliminate resistance. Last week, I cautioned that this may end up making us both more shallow in our thinking and more fickle in our social ties. We may become an attention deficit society, skipping across the surface of the world. But, this doesn’t necessarily have to be the case.

The debate is not a new one. Momentous technologies generally come complete with their own chorus of naysayers. Whether it’s the invention of writing, the printing press, electronic communication or digital media, the refrain is the same – this will be the end of the world as we know it. But if history has taught us anything, it’s that new technologies are seldom completely beneficial or harmful. Their lasting impact lies somewhere in the middle. With the good comes some bad.

The same will be true for the current digital technologies. The world will change, both for the positive and for the negative. The difference will come in how individuals use the technology. This will spread out along the inevitable bell curve.

Look at television, for instance. A sociologist could make a pretty convincing case for the benefits of TV. A better understanding of the global community helped ease our xenophobic biases. Public demand led to increased international pressure on repressive regimes. There was a sociological leveling that is still happening across cultures. Civil rights and sexual equality were propelled by the coverage they received. Atrocities still happen with far too much regularity, but I personally believe the world is a less savage and brutal place than it was 100 years ago, partially due to the spread of TV.

On the flip side, we have developed a certain laziness of spirit that is fed by TV’s never-ending parade of entertainment to be passively consumed. We spend less time visiting our neighbors. We volunteer less. We’re less involved in our communities. Ironically, we’re a more idealistic society, but we make poorer neighbors.

The type of programming to be found on TV also shows that despite the passive nature of the medium, we didn’t become stupider en masse. Some of us use TV for enlightenment, and some of us use it to induce ourselves into a coma. At the end of the day, I think the positives and negatives of TV as a technology probably net out a little better than neutral.

I suspect the same thing is happening with digital media. Some of us are diving deeper and learning more than ever. Others are clicking their way through site after site of brain-porn. Perhaps there are universal effects that will show up over generations that will tip the scale one way or the other, but we’re too early in the trend to see those yet. The fact is, digital technologies are not changing our brains in a vacuum. Our environment is also changing and perhaps our brains are just keeping up. The 13-year-old who is frustrating the hell out of us today may be a much better match for the world 20 years from now.

I’ll wrap up by leaving three pieces of advice that seem to provide useful guides for getting the best out of new technologies.

First: A healthy curiosity is something we should never stop nurturing. In particular, I find it helpful to constantly ask “how” and “why.”

Second: Practice mindfulness. Be aware of your emotions and cognitive biases and recognize them for what they are. This will help you steer things back on track when they’re leading down an unhealthy path.

Third: Move from consuming content to contributing something meaningful. The discipline of publishing tends to push you beyond the shallows.

If you embrace the potential of technology, you may still find yourself as an outlier, but technology has done much to allow a few outliers to make a huge difference.

Are Our Brains Trading Breadth for Depth?

In last week’s column, I looked at how efficient our brains are. Essentially, if there’s a shortcut to an end goal identified by the brain, it will find it. I explained how Google is eliminating the need for us to remember easily retrievable information. I also speculated about how our brains may be defaulting to an easier form of communication, such as texting rather than face-to-face communication.

Personally, I am not entirely pessimistic about the “Google Effect,” where we put less effort into memorizing information that can be easily retrieved on demand. This is an extension of Daniel Wegner’s “transactive memory”, and I would put it in the category of coping mechanisms. It makes no sense to expend brainpower on something that technology can do easier, faster and more reliably. As John Mallin commented, this is like using a calculator rather than memorizing times tables.

Reams of research have shown that our memories can be notoriously inaccurate. In this case, I partially disagree with Nicholas Carr. I don’t think Google is necessarily making us stupid. It may be freeing up the incredibly flexible power of our minds, giving us the opportunity to redefine what it means to be knowledgeable. Rather than a storehouse of random information, our minds may have the opportunity to become more creative integrators of available information. We may be able to expand our “meta-memory”, Wegner’s term for the layer of memory that keeps track of where to turn for certain kinds of knowledge. Our memory could become an index of interesting concepts and useful resources, rather than ad-hoc scraps of knowledge.

Of course, this positive evolution of our brains is far from a given. And here Carr may have a point. There is a difference between “lazy” and “efficient.” Technology’s freeing up of the processing power of our brain is only a good thing if that power is then put to a higher purpose. Carr’s title, “The Shallows,” is a warning that rather than freeing up our brains to dive deeper into new territory, technology may just give us the ability to skip across the surface of the titillating. Will we waste our extra time and cognitive power going from one piece of brain candy to another, or will we invest it by sinking our teeth into something important and meaningful?

A historical perspective gives us little reason to be optimistic. We evolved to balance the efforts required to find food with the nutritional value we got from that food. It used to be damned hard to feed ourselves, so we developed preferences for high calorie, high fat foods that would go a long way once we found them. Thanks to technology, the only effort required today to get these foods is to pick them off the shelf and pay for them. We could have used technology to produce healthier and more nutritious foods, but market demands determined that we’d become an obese nation of junk food eaters. Will the same thing happen to our brains?

I am even more concerned with the shortcuts that seem to be developing in our social networking activities. Typically, our social networks are built from both strong ties and weak ties, the two types of social connection Mark Granovetter identified in the 1970s. Strong ties bind us to family and close friends. Weak ties connect us with acquaintances. When we hit rough patches, as we inevitably do, we treat those ties very differently. Strong ties are typically much more resilient to adversity. When we hit the lowest points in our lives, it’s the strong ties we depend on to pull us through. Our lifelines are made up of strong ties. If we have a disagreement with someone with whom we have a strong tie, we work harder to resolve it. We have made large investments in these relationships, so we are reluctant to let them go. When there are disruptions in our strong tie network, there is a strong motivation to eliminate the disruption rather than sacrifice the network.

Weak ties are a whole different matter. We have minimal emotional investments in these relationships. Typically, we connect with these either through serendipity or when we need something that only they can offer. For example, we typically reinstate our weak tie network when we’re on the hunt for a job. LinkedIn is the virtual embodiment of a weak tie network. And if we have a difference of opinion with someone to whom we’re weakly tied, we just shut down the connection. We have plenty of them so one more or less won’t make that much of a difference. When there are disruptions in our weak tie network, we just change the network, deactivating parts of it and reactivating others.

Weak ties are easily built. All we need is just one thing in common at one point in our lives. It could be working in the same company, serving on the same committee, living in the same neighborhood or attending the same convention. Then, we just need some way to remember them in the future. Strong ties are different. Strong ties develop over time, which means they evolve through shared experiences, both positive and negative. They also demand consistent communication, including painful communication that sometimes requires us to say we were wrong and we’re sorry. The conversations that leave you either emotionally drained or supercharged are the stuff of strong ties. And a healthy percentage of these conversations should happen face-to-face. Could you build a strong tie relationship without ever meeting face-to-face? We’ve all heard examples, but I’d place my bets on face-to-face – every time.

It’s the hard work of building strong ties that I fear we may skip as we build our relationships through online channels. I worry that the brain, given an easy choice and a hard choice, will naturally opt for the easy one. Online, our network of weak ties can grow beyond the inherent limits of our social inventory, known as Dunbar’s Number (which is 150, by the way). We can always find someone with whom to spend a few minutes texting or chatting online, then run off to the next one. We will skip across the surface of our social network, rather than invest the effort and time required to build strong ties. Just like our brains, our social connections may trade depth for breadth.

When Are Crowds Not So Wise?

Since James Surowiecki published his book “The Wisdom of Crowds”, the common wisdom is – well – that we are commonly wise. In other words, if we average the knowledge of many people, we’ll be smarter than any of us would be individually. And that is true – to an extent. But new research suggests that there are group decision dynamics at play where bigger (crowds) may not always be better.

A recent study by Iain Couzin and Albert Kao at Princeton suggests that in real-world situations, where information is more complex and spotty, the benefits of crowd wisdom peak in groups of 5 to 20 participants and decrease after that. The difference comes in how the group processes the information available to it.

In Surowiecki’s book, he uses the famous example of Sir Francis Galton’s 1907 observation of a contest where villagers were asked to guess the weight of an ox. While no individual correctly guessed the weight, the average of all the guesses came in just one pound short of the correct number. But this example has one unique characteristic that would be rare in the real world – every guesser had access to the same information. They could all see the ox and make their guess. Unless you’re guessing the number of jellybeans in a jar, this is almost never the case in actual decision scenarios.

Couzin and Kao say this information “patchiness” is the reason why accuracy tends to diminish as the crowd gets bigger. In most situations, there is commonly understood and known information, which the researchers refer to as “correlated information.” But there is also information that only some of the members of the group have, which is “uncorrelated information.” To make matters even more complex, the nature of uncorrelated information will be unique to each individual member. In real life, this would be our own experience, expertise and beliefs.  To use a technical term, the correlated information would be the “signal” and the uncorrelated information would be the “noise.” The irony here is that this noise is actually beneficial to the decision process.

In big groups, the collected “noise” gets so noisy it becomes difficult to manage, and so it tends to get ignored. It drowns itself out. The collective focuses instead on the correlated information. In engineering terms, this higher signal-to-noise ratio would seem ideal, but in decision-making, it turns out a certain amount of noise is a good thing. By focusing just on the commonly known information, the bigger crowd over-simplifies the situation.

Smaller groups, in contrast, tend to be more random in their makeup. The differences in experiences, knowledge, beliefs and attitudes, even if not directly correlated to the question at hand, have a better chance of being preserved. They don’t get “averaged out” like they would in a bigger group. And this “noise” leads to better decisions if the situation involves imperfect information. Call it the averaging of intuition, or hunches. In a big group, the power of human intuition gets sacrificed in favor of the commonly knowable. But in a small group, it’s preserved.
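The dynamic can be sketched with a toy simulation (my own construction, not Kao and Couzin’s actual model, and the probabilities are illustrative assumptions): every member of a group shares one mediocre public cue (the correlated information) and also holds an independent private cue (the uncorrelated “noise”), and the group decides by majority vote. Because the public cue’s error is shared by everyone, a very large crowd can do no better than that cue, while a small group still benefits from averaging its private hunches.

```python
import random

def group_vote(n, p_public=0.55, p_private=0.65, w=0.4, rng=random):
    """One group decision. The whole group shares a single public cue
    (correlated information) that is correct with probability p_public.
    Each member also holds a private cue (uncorrelated information),
    independently correct with probability p_private. With probability
    w a member votes with the public cue; otherwise with their private
    cue. Returns True if the majority vote is correct."""
    public_correct = rng.random() < p_public
    correct_votes = 0
    for _ in range(n):
        if rng.random() < w:
            correct_votes += public_correct            # shared cue: right or wrong for everyone at once
        else:
            correct_votes += rng.random() < p_private  # private cue: independent per member
    return correct_votes * 2 > n  # strict majority (use odd n to avoid ties)

def accuracy(n, trials=20000, seed=1):
    """Monte Carlo estimate of how often a group of size n is right."""
    rng = random.Random(seed)
    return sum(group_vote(n, rng=rng) for _ in range(trials)) / trials

for n in (1, 9, 101):  # lone individual, small group, large crowd
    print(f"group of {n:>3}: accuracy = {accuracy(n):.3f}")
```

Under these assumed numbers, the small group outperforms both the lone individual and the large crowd, whose accuracy collapses toward the quality of the shared cue alone.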

In the world of corporate strategy, this has some interesting implications. Business decisions are almost always complex and involve imperfectly distributed information. This research seems to indicate that we should carefully consider our decision-making units. There is a wisdom of crowds benefit as long as the crowd doesn’t get too big. We need to find a balance where we have the advantage of different viewpoints and experiences, but this aggregate “noise” doesn’t become unmanageable.

The Era of Amplification

First published in Mediapost’s Search Insider, May 1, 2014

Mediapost columnist Joseph Jaffe wrote a great piece Tuesday on the Death of Anonymity. He shows how anonymity in the era of digital has become both a blessing and a curse, leading to an explosion of cowardly, bone-headed comments and cyber-bullying. This reinforces something I’ve said repeatedly: technology doesn’t change human behavior; it just enables it in new ways. Heroes will find new ways to be heroes, and idiots will find new ways to be idiots.

But there is something important happening here. It’s not that technology is making us meaner, more cowardly or more stupid. I grew up with bullies, my father grew up with bullies and his father grew up with bullies. You could trace a direct line of bullies going back to the first time our ancestors walked erect, and probably further than that. So what’s different today? Why do we now need laws against cyber-bullying?

It’s because we now live in a time of increased amplification. The waves that spread from an individual’s actions go farther than ever before. Technology increases the consequences of those actions. A heroic act can spread through a network and activate other heroes, creating a groundswell of heroism. Unfortunately, the flip side is also true – bullying can beget more bullying. The viral spread of bullying that technology enables can make the situation hopeless for the victim.

Consider the case of Amanda Todd, a grade 10 student from Port Coquitlam, BC, Canada. Todd had been bullied for over a year by a guy who wanted “a show”. She finally relented and flashed her breasts. While not advisable, Todd’s actions were not that unusual. She wasn’t the first 15-year-old to experiment with a little sexual promiscuity after prolonged male pleading. It certainly shouldn’t have turned into a death sentence for Todd. But it did – because of amplification.

First of all, Todd’s tormentor was a man who lived thousands of miles away, in Holland. They never met. Secondly, Todd’s indiscretion was captured in a digital picture that was soon circulated worldwide. As teenagers have been since time began, Todd was mercilessly teased. But it wasn’t just at the hands of a small circle of bullies at her high school. Taunts came from jerks around the world who jumped on the bandwagon. A teenager’s psyche is typically a fragile thing, and the amplitude of that teasing was psychologically crushing for Todd. Desperate for escape, she first recorded a plea for understanding that she posted online, and then took her own life. The act that started all this should have been added to that pile of minor regrets we all assemble in our adolescence. It should not have ended the way it did. Unfortunately, Todd was a victim of amplification.

My wife and I have two daughters, one of whom is about the same age as Todd. Because they grew up in the era of Amplification, we pounded home the fact that anything captured online can end up anywhere. You just can’t be careless, not even for the briefest of moments. But, of course, teenagers are occasionally careless. It’s part of the job description. They’re testing the world as a place to live in – experimenting with what it means to be an adult – and mistakes are inevitable. Unfortunately, the potential price to be paid for those mistakes has been raised astronomically.

Here’s perhaps the most frightening thing about this. Todd’s YouTube video has been seen over 17 million times, so it too has been amplified by technology. Amanda’s story has spread through the world online. The vast majority of comments are those you would hope to see – expressions of sympathy, support, understanding and caring. But there are a handful of hateful comments of the sort that drove Todd to suicide. Technology allows us to sort and filter for negativity. In other words, technology allows bullies to connect to bullies.

In social networks, there is something called “threshold-limited spreading.” Essentially, it means that for something to spread through a network, the number of incidences needs to reach a certain threshold. In the case of bullying, as in the case of rioting or social movements, the threshold depends on the connections between like-minded individuals. If bullies can connect in a cluster, they draw courage from each other. This can then trigger a cascade effect, encouraging those “on the margin” to also engage in bullying. Technology, because of its unique ability to enable connections between those who think alike, can trigger these cascades of bullying. It doesn’t matter if the ratio of positive to negative is ten to one or even one hundred to one. All that matters is there are a sufficient number of negative comments for the would-be bully to feel that he or she has support.
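The threshold idea can be made concrete with Mark Granovetter’s classic threshold model of collective behavior (this is my sketch, with illustrative numbers, not a model of any specific incident): each person acts once the number of people already acting reaches their personal threshold, and shifting a single low-threshold individual can be the difference between a complete cascade and almost none at all.

```python
def cascade_size(thresholds):
    """Granovetter-style threshold model. thresholds[i] is the number
    of people who must already be acting before person i joins in
    (a threshold of 0 marks an instigator who acts unprompted).
    Repeatedly activate everyone whose threshold is met until the
    count stops growing, then return the final number of actors."""
    active = 0
    while True:
        now_active = sum(1 for t in thresholds if t <= active)
        if now_active == active:
            return active
        active = now_active

# One person at every threshold from 0 to 99: each new actor tips
# the next one over, and the cascade sweeps up the whole population.
print(cascade_size(list(range(100))))   # prints 100

# Nudge a single threshold from 1 up to 2 and the chain breaks right
# after the lone instigator: those "on the margin" never get triggered.
broken = list(range(100))
broken[1] = 2
print(cascade_size(broken))             # prints 1
```

The second case is the point of the model: whether a cascade takes off depends less on the overall ratio of willing to unwilling participants than on whether enough early actors are connected to tip the marginal ones, which is exactly what technology makes easier.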

I don’t know what the lasting impact of the Era of Amplification will be. I do know that technology has made the world a much more promising place than it was when I was born. I also know it’s made it much crueler and more frightening. And it’s not because of any changes in who we are. It’s because the ripples of our actions can now spread further than we can even imagine.