Nobel Intentions, Ignoble Consequences

It was 20 years ago that I discovered the Internet. According to the International Telecommunication Union, that put me in select company. There were only 77 million users of the Internet by the end of 1996. That represented a little more than 1% of the world’s population. 66% of those were in the US, likely due to limited access in other regions. I know I logged on to the Web as soon as I could. I had actually been online with CompuServe for a few years prior to that, but it was in 1996 that the first ISP opened in the Canadian city I live in. I was one of the first to set up an account.

Three years later I changed my business to focus exclusively on online marketing. We became one of the fastest growing companies in Canada. Eleven years after start-up (or, more accurately, realignment), we sold that company.

Things moved rather quickly after I first went online. At least, I thought they did. But compared to the growth of other start-ups – Google, for instance – I was a very little fish in a very big pond.

The Nobel Survey

In 2001, Cisco conducted a survey of past Nobel Prize winners. By then, Internet usage had mushroomed. Half a billion people – almost 9% of the world’s population – were online. The Internet appeared to be a real thing. The question asked was, “Where will the Internet take us over the next 20 years?”

The Laureates were mostly optimistic in their replies. Here’s a quick summary:

  • 87% said the Internet would improve education.
  • 93% felt it would provide greater access to libraries, information and teachers.
  • 74% saw the coming of virtual classrooms by 2020.
  • 82% said it would accelerate innovation.
  • 83% felt it would improve productivity.
  • 72% believed it would improve quality of life and provide more economic opportunity to people in less developed countries.
  • 93% saw it improving communications with people in other countries.
  • 76% predicted a breaking down of borders.

On the negative side, 65% feared it would violate personal privacy, 51% saw it increasing alienation and 44% felt it would lead to greater political or economic inequity.

15 Years Later…

I think you could safely put a check beside every single box on the Nobel Laureate wish list. In fact, as optimistic as these predictions seemed just 15 years ago, they seem conservative in hindsight. Online classrooms have been a reality for a few years and education is undergoing a massive reformation. In 2011, 10 years after the survey was conducted, McKinsey estimated that 10% of GDP growth in developed countries was directly attributable to the Internet. And the fact that almost half the world now has Internet access speaks to the role it plays in communication across cultures.

But none of the laureates predicted a gut punch to the cab drivers of the world. No one foresaw the short-sheeting of the traditional hospitality industry. And there was not a peep about new forms of investment predation that would be measured in microseconds.

The Biggest Can of WD-40 Ever

All the benefits of the Internet – and all the negative consequences – come from the same common factor: the elimination of friction. Economist Ronald Coase rightly identified friction – or, in his terminology, “transaction costs” – as the reason corporations exist. Until very recently, geographic distance introduced friction into pretty much every aspect of our society. It took physical resources to overcome friction. Physical resources required capital. Capital could most efficiently be raised and controlled by corporations.

The Internet enabled a new type of connection. It was agnostic to physical distance. But, more importantly, it was a peer-to-peer connection. There was no hierarchy to the Internet. Hierarchies depend on friction. As soon as that friction is removed, the hierarchies begin to fall apart. They are no longer required.

All the good things that were predicted in 2001 came from a removal of friction. But so did all the bad. In this case, the word “regulation” can often be substituted for “friction.” Regulation is just another form of hierarchical control.

I’ve been “online” for 20 years now. It has certainly accelerated every aspect of my life; mostly positively, some negatively. But one thing’s for certain. Going backwards is not an option.

Luddites Unite…

Throw off the shackles of technology. Rediscover the true zen of analog pleasures!

The Hotchkisses had a tech-free Christmas holiday – mostly. The most popular activity around our home this year was adult coloring. Whodathunkit?

There were no electronic gadgets, wired home entertainment devices or addictive apps exchanged. No personal tech, no connected platforms, no internet of things (with one exception). There were small appliances, real books printed on real paper, various articles of clothing – including designer socks – and board games.

As I mentioned, I did give one techie gift, but with a totally practical intention. I gave everyone Tiles to keep track of the crap we keep losing with irritating regularity. Other than that, we were surprisingly low tech this year.

Look, I’m the last person in the world who could be considered a digital counter-revolutionary. I love tech. I eat, breathe and revel in stuff that causes my wife’s eyes to repeatedly roll. But this year – nada. Not once did I sit down with a Chinglish manual that told me, “When the unit not work, press ‘C’ and hold on until you hear (you should loose your hands after you hear each sound).”

This wasn’t part of any pre-ordained plan. We didn’t get together and decide to boycott tech this holiday. We were just technology fatigued.

Maybe it’s because technology is ceasing to be fun. Sometimes, it’s a real pain in the ass. It nags us. It causes us to fixate on stupid things. It beeps and blinks and points out our shortcomings. It can lull us into catatonic states for hours on end. And this year, we just said “Enough!” If I’m going to be catatonic, it’s going to be at the working end of a pencil crayon, trying to stay within the lines.

Even our holiday movie choice was anti-tech, in a weird kind of way. We, along with the rest of the world, went to see Star Wars: The Force Awakens. Yes, it’s a sci-fi movie, but no one is going to see this movie for its special effects or CGI gimcrackery. Like the best space opera entries, we want to get reacquainted with the people in the story. The Force’s appeal is that it is a long-awaited (32 years!) family reunion. We want to see if Luke Skywalker got bald and fat, despite the Force stirring within him.

I doubt that this is part of any sustained move away from tech. We are tech-dependent. But maybe that’s the point. It used to be that tech gadgets separated us from the herd. They made us look coolly nerdish and cutting edge. But when the whole world is wearing an Apple Watch, the way to assert your independence is to use a pocket watch. Or maybe a sundial.

And you know what else we discovered? Turning away from tech usually means you turn towards people. We played board games together – actual board games, with cards and dice and boards that were made of pasteboard, not integrated circuits. We were in the same room together. We actually talked to each other. It was a form of communication that – for once – didn’t involve keyboards, emojis or hashtags.

I know this was a fleeting anomaly. We’re already back to our regular tech-dependent habits, our hands nervously seeking the nearest connected device whenever we have a millisecond to spare.

But for a brief, disconnected moment, it was nice.

Giving Thanks for The Law of Accelerating Returns

For the past few months, I’ve been diving into the world of show programming again, helping MediaPost put together the upcoming Email Insider Summit up in Park City. One of the keynotes for the Summit, delivered by Charles W. Swift, VP of Strategy and Marketing Operations for Hearst Magazines, is going to tackle a big question, “How do companies keep up with the ever accelerating rate of change of our culture?”

After an initial call with Swift, I did some homework and reacquainted myself with Ray Kurzweil’s Law of Accelerating Returns. Shortly after, I had to stop because my brain hurt. Now, I would like to pass that unique experience along to you.

In an interview that is now 12 years old, Kurzweil explained the concept, using biological evolution as an analogy. I’ll try to make this fast. Earth is about 4.6 billion years old. The very first life appeared about 3.8 billion years ago. It took another 1.7 billion years for multicellular life to appear. Then, about 1.6 billion years later, we had something called the Cambrian Explosion. This was really when the diversity of life we recognize today started. If you’ve been keeping track, you know that it took the earth 4.1 of its 4.6 billion year history, or about 90% of the time since the earth was formed, to produce complex life forms of any kind.

Things started to move much quicker at that point. Amphibians and reptiles appeared about 350 million years ago, dinosaurs appeared 225 million years ago, mammals 200 million years ago, dinosaurs disappeared about 70 million years ago, the first great apes appeared about 15 million years ago and we homo sapiens have only been around for 200,000 years or so. And, as a species, we really have only made much of a dent in the world in the last 10,000 years of our history. In the entire history of the world, that represents a very tiny 0.00022% slice. But consider how much the world has changed in that 10,000 years.
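If you want to check the arithmetic above, the fractions work out as claimed. Here’s a quick sketch (figures taken from the timeline in this piece; the variable names are mine):

```python
# Back-of-envelope check of the timeline fractions quoted above.
EARTH_AGE = 4.6e9          # years since the earth formed
COMPLEX_LIFE = 0.5e9       # Cambrian Explosion, roughly 500 million years ago
CIVILIZATION = 10_000      # years in which humans have "made a dent"

# Share of earth's history spent before complex life: ~90%
pre_complex_fraction = (EARTH_AGE - COMPLEX_LIFE) / EARTH_AGE

# Civilization's slice of earth's history: ~0.00022%
civilization_slice_pct = CIVILIZATION / EARTH_AGE * 100
```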

Accelerating Returns

Kurzweil’s Law says that, like biology, technology also evolves exponentially. It took us a very long time to do much of anything at all. The wheel, stone tools and fire took us tens of thousands of years to figure out. But now, technological paradigm shifts happen in decades or less. And the pace keeps accelerating. The Law of Accelerating Returns states that in the first 20 years of the 21st century, we’ll have progressed as much as we did during the entire 20th century. Then we’ll double that progress again by 2034, and double it once more by 2041.

Let me put this in perspective. At this rate, if my youngest daughter – born in 1995 – lives to be 100 (not an unlikely forecast), she will see more technological change in her life than in the previous 20,000 years of human history!

This is one of those things we probably don’t think about because, frankly, it’s really hard to wrap your head around. The math shows why predictability is flying out the window and why we have to get comfortable reacting to the unexpected. It would also be easy to dismiss it, but Kurzweil’s concepts are sound. Evolution does accelerate exponentially, as has our rate of technological advancement. Unless the latter shows a dramatic reversal or slowdown, the future will move much, much faster than we can possibly imagine.

The reason change accelerates is that the technology we develop today builds the foundations required for the technological leaps that will happen tomorrow. Agriculture set the stage for industry. Industry enabled electricity. Electricity made digital technology possible. Digital technology enables nanotechnology. And so on. Each advancement sets the stage for the next, and we progress from stage to stage more rapidly each time.
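The doubling schedule described above can be sketched as a toy model. The 20 → 14 → 7-year intervals come from the article itself; the assumption that each later interval keeps halving is my extrapolation, not Kurzweil’s exact formulation:

```python
def progress_timeline(start=2020, first_interval=14.0, max_doublings=8):
    """Cumulative progress in '20th-century equivalents': by 2020 we have
    1 century's worth, and each subsequent doubling takes half as long."""
    year, progress, interval = start, 1.0, first_interval
    timeline = [(year, progress)]
    for _ in range(max_doublings):
        year += interval
        progress *= 2        # progress doubles each interval...
        interval /= 2        # ...and the next interval is half as long
        timeline.append((year, progress))
    return timeline

timeline = progress_timeline()

# First point where cumulative progress passes 200 century-equivalents
# (a deliberately generous stand-in for "20,000 years of human history" --
# generous because earlier centuries progressed far more slowly than the 20th).
year, progress = next((y, p) for y, p in timeline if p >= 200)
```

Under this (admittedly crude) model, the 200-century mark falls before 2050, well inside a 1995–2095 lifetime.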

So, for your extended long weekend, if you’re sitting in a turkey-induced tryptophan daze and there’s no game on, try wrapping your head around The Law of Accelerating Returns.

Happy Thanksgiving. You’re welcome.

Basic Instincts and Attention Economics

We’ve been here before. Something becomes valuable because it’s scarce. The minute society agrees on the newly assigned value, wars begin because of it. Typically these things have been physical. And the battle lines have been drawn geographically. But this time is different. This time, we’re fighting over attention – specifically, our attention – and the battle is between individuals and corporations. Do we, as individuals, have the right to choose what we pay attention to? Or do the creators of content own our attention and can they harvest it at their will? This is the question that is rapidly dismantling the entire advertising industry. It has been debated at length here at MediaPost and pretty much every other publication everywhere.

I won’t join in the debate at this time. The reality here is that we do control our attention and the advertising industry was built on a different premise of scarcity from a different time. It was built on a foundation of access and creation, when both those things were in short supply. By creating content and solving the physical problem of giving us access to that content, the industry gained the right to ask us to watch an ad. No ads, no content. It was a bargain we agreed to because we had no other choice.

The Internet then proceeded to blow that foundation to smithereens.

By removing the physical constraints that restricted both the creation and distribution of content, technology has also erased the scarcity. In fact, the balance has been forever tipped the other way. We now have access to so much content that we don’t have enough attention to digest it all. Viewed in this light, it makes the debate around ad blockers seem hopelessly out of touch. Accusing someone of stealing content is like accusing someone of stealing air. The anti-blocking side is trying to apply the economic rationale of a market that no longer exists.

So let us accept the fact that we are the owners of our own attention, and that it is a scarce commodity. That makes it valuable. My point is that we should pay more attention to how we pay attention. If the new economy is going to be built on attention, we should treat it with more respect.

The problem here is that we have two types of attention, the same as we have two types of thinking: Fast and Slow. Our slow attention is our focused, conscious attention. It is the attention we pay when we’re reading a book, watching a video or talking to someone. We consciously make a choice when we pay this type of attention. Think of it like a spotlight we shine on something for an extended period of time.

It’s the second type of attention, fast attention, which is typically the target of advertising. It plays on the edge of our spotlight, quickly and subconsciously monitoring the environment so it can swing the spotlight of conscious attention if required. Because this type of attention operates below the level of rational thought, it is controlled by base instincts. It’s why sex works in advertising. It’s why Kim Kardashian can repeatedly break the Internet. It’s why Donald Trump is leading the Republican race. And it’s why adorable Asian babies wearing watermelons can go viral.

It’s this type of attention that really determines the value of the attention economy. It’s the gatekeeper that determines how slow attention is focused. And it’s here where we may need some help. I don’t think instincts developed 200,000 years ago are necessarily the best guide for how we should invest something that has become so valuable. We need a better yardstick than simple titillation for determining where our attention should be spent.

I expect the death throes of the previous access economy to go on for some time. The teeth gnashing of the advertising industry will capture a lot of attention. But the end is inevitable. The economic underpinnings are gone, so it’s just a matter of time before the superstructures built on top of them will collapse. In my opinion, we should just move on and think about what the new world will look like. If attention is the new currency, what is the smartest way to spend it?

Do We Really Want Virtual Reality?

Facebook bought Oculus. Its goal is to control the world you experience while wearing a pair of modified ski goggles. Mark Zuckerberg is stoked. Netflix is stoked. Marketers the world over are salivating. But how should you feel about this?

Personally, I’m scared. I may even be terrified.

First of all, I don’t want anyone, especially not Mark Zuckerberg, controlling my sensory world.

Secondly, I’m pretty sure we’re not built to be virtually real.

I understand the human desire to control our environment. It’s part of the human hubris. We think we can do a better job than nature. We believe introducing control and predictability into our world is infinitely better than depending on the caprices of nature. We’ve thought so for many thousands of years. And – Oh Mighty Humans Who Dare to be Gods – just how is that working out for us?

Now that we’ve completely screwed up our physical world, we’re building an artificial version. Actually, it’s not really “we” – it’s “they.” And “they” are for profit organizations that see an opportunity. “They” are only doing it so “they” control our interface to consciousness.

Personally, I’m totally comfortable giving a profit driven corporation control over my senses. I mean, what could possibly happen? I’m sure anything they may introduce to my virtual world will be entirely for my benefit. I’m sure they would never take the opportunity to use this control to add to their bottom line. If you need proof, look how altruistically media – including the Internet – has evolved under the stewardship of corporations.

Now, their response would be that we can always decide to take the goggles off. We stay in control, because we have an on/off switch. What they don’t talk about is the fact that they will do everything in their power to keep us from switching their VR world off. It’s in their best interest to do so – and by best interest, I mean the more time we spend in their world, as opposed to the real one, the more profitable it is for them. They can hold our senses hostage and demand ransom in any form they choose.

How will they keep us in their world? By making it addictive. And this brings us to my second concern about Virtual Reality – we’re just not built for it.

We have billions of neurons that are dedicated to parsing and understanding a staggeringly complex and dynamic environment. Our brain is built to construct a reality from thousands and thousands of external cues. To manage this, it often takes cognitive shortcuts to bring the amount of processing required down to a manageable level. We prefer pleasant aspects of reality. We are alerted to threats. Things that could make us sick disgust us. The brain manages the balance by a judicious release of neurochemicals that make us happy, sad, disgusted or afraid. Emotions are the brain’s way of effectively guiding us through the real world.

A virtual world, by necessity, will have a tiny fraction of the inputs that we would find in the real world. Our brains will get an infinitesimal slice of the sensory bandwidth they’re used to. Further, the inputs they do get will have the subtlety of a sledgehammer. Ham-fisted programmers will try to push our emotional hot buttons, all in the search for profit. This means a few sections of our brain will be cued far more frequently and violently than they were ever intended to be. Additionally, huge swaths of our environmental processing circuits will remain dormant for extended periods of time. I’m not a neurologist, but I can’t believe that will be a good thing for our cognitive health.

We were built to experience the world fully through all our senses. We have evolved to deal with a dynamic, complex and often unexpected environment. We are supposed to interact with the serendipity of nature. It is what it means to be human. I don’t know about you, but I never, ever, want to auction off this incredible gift to a profit-driven corporation in return for a plastic, programmed, three-dimensional interface.

I know this plea is too late. Pandora’s Box is opened. The barn door is open. The horse is long gone. But like I said, I’m scared.

Make that terrified.

Talking Back to Technology

The tech world seems to be leaning heavily towards voice-activated devices. Siri – Amazon Echo – Facebook M – “OK Google” – as well as pretty much every vehicle in existence. It should make sense that we would want to speak to our digital assistants. After all, that’s how we communicate with each other. So why – then – do I feel like such a dork when I say, “Siri, find me an Indian restaurant”?

I almost never use Siri as my interface to my iPhone. On the very rare occasions when I do, it’s when I’m driving. By myself. With no one to judge me. And even then, I feel unusually self-conscious.

I don’t think I’m alone. No one I know uses Siri, except on the same occasions and in the same way I do. This should be the most natural thing in the world. We’ve been talking to each other for several millennia. It’s so much more elegant than hammering away on a keyboard. But I keep seeing the same scenario play out over and over again. We give voice navigation a try. It sometimes works. When it does, it seems very cool. We try it again. And then, we don’t do it any more. I base this on admittedly anecdotal evidence. I’m sure there are those who continually chat merrily away to the nearest device. But not me. And not anyone I know either. So, given that voice activation seems to be the way devices are going, I have to ask why we’re dragging our heels on adoption.

In trying to judge the adoption of voice-activated interfaces, we have to account for mismatches in our expected utility. Every time we ask for something – “Play Bruno Mars,” for instance – and get the response, “I’m sorry, I can’t find Brutal Cars,” some frustration is natural. This is certainly part of it. But that’s an adoption threshold that will eventually yield to sheer processing brute strength. I suspect our reluctance to talk to an object is found in the fact that we’re talking to an object. It doesn’t feel right. It makes us look addle-minded. We make fun of people who speak when there’s no one else in the room.

Our relationship with language is an intimately nuanced one. It’s a relatively newly acquired skill, in evolutionary terms, so it takes up a fair amount of cognitive processing. Granted, no matter what the interface, we currently have to translate desire into language, and speaking is certainly more efficient than typing, so it should be a natural step forward in our relationship with machines. But we also have to remember that verbal communication is the most social of things. In our minds, we have created a well-worn slot for speaking, and it’s something to be done when sitting across from another human.

Mental associations are critical for how we make sense of things. We are natural categorizers. And, if we haven’t found an appropriate category when we encounter something new, we adapt an existing one. I think vocal activation may be creating cognitive dissonance in our mental categorization schema. Interaction with devices is a generally solitary endeavor. Talking is a group activity. Something here just doesn’t seem to fit. We’re finding it hard to reconcile our usage of language and our interaction with machines.

I have no idea if I’m right about this. Perhaps I’m just being a Luddite. But given that my entire family, and most of my friends, have had voice activation capable phones for several years now and none of them use that feature except on very rare occasions, I thought it was worth mentioning.

By the way, let’s just keep this between you and me. Don’t tell Siri.

Ode to a Grecian Eurozone

I’d like to comment on the Greek debt crisis. But I don’t know anything about it. Zip. Or, as they say in Athens – μηδέν. I do, however, know how to say zero in Greek, thanks to Google Translate. At least for the next few minutes. I also happen to know rather a lot right now about the Tour de France, how to wire RV batteries, how to balance pool chemicals, how to write obituaries and most of the plotlines for the Showtime series Homeland. I certainly know more about all those things than the average person. Tomorrow, I’ll probably know different stuff. And I will retain almost nothing. But if you ask me what in the world is happening right now, I’ll likely draw a blank. I’d say it’s all Greek to me, but a certain MediaPost columnist already stole that line. Damn you, Bob Garfield!

I’m not really sure if I’m concerned about this. After all, I’m the one who has chosen not to watch the news for a long time. My various information sources feed me a steady diet of information, but it’s all been predetermined based on my interests. I’m in what they call a “filter bubble.” I’ve become my own news curator and somewhere along the line, I’ve completely filtered out anything to do with the Greek economy. It’s because I’m not really interested in the Greek economy, but I’m thinking maybe I should be.

(Incidentally, am I the only one who finds it a bit ironic that the word “economy” comes from – you guessed it – the Greek words for “house” and “management”?)

The problem is that I have a limited attention span. My memory capacity is a little more voluminous, but there are definite limits to that as well. To make matters worse, Google is making me intellectually lethargic. I don’t try as hard to remember stuff because I don’t have to. Why learn how to count to 10 in Greek when I can just look it up when I need to? I’m not alone in this. We’re all going down the same blind-cornered path together. Sooner or later, we’ll all run into a major crisis we never saw coming. And it’s because we’ve all been looking in different places.

Forty years ago, to be well informed, you had to pay attention to mainstream news sources. It was the only option we had. We all got fed the same diet of information. Some of us retained more than others, but we all dined at the same table. Our knowledge capacity was first filled from these common news sources. Then, after that, we’d fill whatever nooks and crannies were left with whatever our unique interests might be. But we all, to some extent, shared a common context. Knowledge may not have been deep, but it was definitely broad.

Now, if I choose to learn more about the Greek economy, I certainly have plenty of opportunities to do so. But I’d be starting with a blank slate. It would take some work to get up to speed. So I have to decide whether it’s worth the effort to inform myself. Is the return worth the investment? Something has to tip the balance to make it important enough to learn more about whatever it is the Greeks are referendumming (referendering?) about. And in the meantime, there will be a lot of other things competing for that same limited supply of information-gathering attention. Tomorrow, for instance, it might become really important for me to find out how close BC is to legalizing pot, or what the wildfire hazard is in Northern Saskatchewan, or what July’s weather is like in Chiang Mai. All of these things are relatively easy to find, but I have to reserve enough retention capacity to use the information once I find it. Information may want to be free, but the resources required to utilize it deplete our limited stores of cognitive ability.

Perhaps we’re saving more of our attention for on-demand information requirements. Or maybe we’re just filtering out more of what we used to call news. Whatever the cause, I think we’re losing our common cultural context, bit by byte. A community is defined by what it has in common, and the more technology allows us to pursue our individual interests, the more we surrender the common narratives that used to bind us.

Justine Sacco, Twitter and the End of Irony

Justine Sacco is in the news again. Not that she wants to be. She’d like nothing more than to fade from the spotlight. As she recently said in an interview, “Someday you’ll Google me and my LinkedIn will be the first thing that pops up.” But today, over 15 months after she launched the tweet that just won’t go away, she’s still the poster child for career ruination via social media. The recent revival of Justine’s story comes ahead of the release of a new book by Jon Ronson, “So You’ve Been Publicly Shamed.”

If you’ve never heard of Justine Sacco, I’ll recap quickly. Just before boarding an 11-hour flight to South Africa, in what can only be called a monumental meltdown of discretion, she tweeted this: “Going to Africa. Hope I don’t get AIDS. Just kidding. I’m white!” This touched off a social media feeding frenzy looking for Sacco’s blood. The world waited for her to land (#HasJustineLandedYet became the top trending topic) and meet her righteous retribution.

Oh, did I mention that Justine was IAC’s Corporate Head of Communications? Yeah, I know. WTF – right?

But the point here is not whether Justine Sacco was wrong. I think even she’ll admit it was a momentarily brain-dead blurb of 64-character stupidity. The point is whether Sacco was a racist, cold-hearted bitch. And to that, the answer is no. Justine meant the comment to be ironic – a satirical poke at white privilege and comfort. She never intended it to be taken seriously. And that was where the wheels came off.

Satire has been around for a long time. The Greeks and Romans invented it, but it was the British who perfected it. The satirical essay became an art form in the hands of Alexander Pope, John Gay and the greatest of the satirists, Jonathan Swift. Through them, irony was honed into a razor-sharp scythe for social change. Swift’s A Modest Proposal is perhaps the greatest satirical piece ever written. In it, he proposed a solution for the starving beggars of Ireland – they should sell their children, of which there was an abundant supply, to the upper classes as a food source.

Now, did the pamphlet-reading public of 1729 call for Swift’s head? Did they think he was serious when he wrote:

“A young healthy child well nursed, is, at a year old, a most delicious nourishing and wholesome food, whether stewed, roasted, baked, or boiled; and I make no doubt that it will equally serve in a fricassee, or a ragout.”

Well, perhaps a few missed the irony, but for the vast majority of Swift’s audience, the pamphlet helped make his reputation, rather than ruin it. There was no “#HasSwiftReturnedFromLilliputYet” trend on Twitter. People got it.

There is no way Sacco’s work should be compared to Swift’s in terms of literary merit, but there are some other fundamental differences we should pay attention to.

First of all, Swift was known as a satirist. Satire was an established literary form in the Age of Enlightenment. The context was in place for the audience. They were able to manage the flip of perspective required to understand the irony. But before December 20, 2013, we had never heard of Justine Sacco. The tweet was stripped of any context. There was nothing to tell us that she wasn’t being serious. Twitter fragments our view of the world into tiny missives that float unconnected and unsupported.  Twitter, by its very nature, forces us to take its messages out of context. This is not the place to hope for a nuanced understanding.

Also, Sacco’s entire tweet totaled 64 characters. Swift’s essay comes in at 3405 words, or 19,373 characters. That’s about 300 times the literary volume of Sacco’s tweet. Swift had ample opportunity to expound on his irony and make sure readers got his point.  Even Swift’s title, at a hefty 169 characters, couldn’t have squeezed into the limits of a tweet.  Tweets beg to be taken at face value, because there’s no room to aim for anything other than that.

And that brings us to the biggest difference here – the death of thoughtfulness. You can’t get irony or satire unless you’re thoughtful. You have to spend some time thinking about what you’ve read. To use Daniel Kahneman’s terminology, you have to use System 2, which specializes in slow thinking. Sacco’s tweet takes about 2 seconds to read, from beginning to end. There is no time for thought there. But there is time for visceral reaction. That’s all System 1, and System 1 doesn’t understand irony.

At the average reading speed of 300 words a minute, you’d have to invest 11.3 minutes to get through Swift’s essay. That’s plenty of time for System 2 to digest what it’s read and to look for meaning beyond face value. You have to read it in a thoughtful manner. But reading isn’t the only place we abandon thoughtfulness; we abandon it in our responses as well. We can retweet in a matter of seconds and add our own invective. This starts a chain reaction of indignation that ignites a social media brush fire. Careful consideration is not part of the equation.
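The arithmetic behind these figures is easy to verify. A minimal Python check, using only the word and character counts quoted in the text:

```python
# Verify the length comparison and reading-time figures cited above.
swift_chars = 19373   # "A Modest Proposal", per the text
sacco_chars = 64      # Sacco's tweet, per the text

ratio = swift_chars / sacco_chars
print(round(ratio))   # ~303, i.e. roughly 300 times the length

swift_words = 3405
words_per_minute = 300          # average reading speed cited above
minutes = swift_words / words_per_minute
print(minutes)                  # 11.35 minutes, the "11.3" in the text
```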

Sacco’s sin wasn’t that she was being racist. Her sin was trying to be ironic in a medium that couldn’t support it. By her own admission, she had been experimenting with Twitter to see if edgy tweets got retweeted more often. The answer, as it turned out, was yes, but the experiment damn near killed her. As a communications expert, she should have known better. Justine Sacco painfully discovered that in the split-second, sound-bite world of social media, thoughtful reading is extinct. And with it, irony and satire have died as well.

Mourning Becomes Electric

Last Friday was a sad day. A very dear and lifelong friend of mine, my Uncle Al, passed away. And so I did what I’ve done before on these occasions. I expressed my feelings by writing about it. The post went live on my blog around 10:30 in the morning. By mid-afternoon, it had been shared and posted through Facebook, Twitter and many other online channels. Many were kind enough to send comments. The family, in the midst of their grief, forwarded my post to their family and friends. Soon, there was an extended network of mourning that sought to heal each other, all through channels that didn’t exist just a few years ago. Mourning had moved online.

As you probably know, I’m fascinated by how we express our innate human needs through digital technologies. And death, together with birth, is the most universal of human experiences. It was inevitable that we would use online channels to grieve. So I, as I always do, asked the question – why?

First of all – why do we mourn? Well, we mourn because we are social animals – probably the most social of all animals – and we grieve accordingly. We miss the departed terribly. It is natural to try to fill the hole a death tears inside us by reaching out to others who may share the same grief. James R. Averill believed we mourn communally because it cements the social bonds that make it more likely we will survive as a species. When it comes to dealing with death, misery loves company.

Secondly, why do we grieve online? Here, I think it has something to do with Granovetter’s weak ties. Death is one of those life events where we reach beyond the strong ties that define our day-to-day social existence. Certainly we seek comfort from those closest to us, but a death also brings into being a virtual community – defined and united by grief for the one who has passed away. Our digital networks allow us to collapse the six degrees of separation in one fell swoop. We can share our grief almost instantaneously and simultaneously with family, friends, acquaintances and even people we have never met.

There are two other aspects of grief that I believe lend themselves well to online channels: the need to chronicle and the comfort of emotional distance.

Part of the healing process is sharing memories of the departed loved one. And for those like myself, just writing about our feelings helps overcome the pain. The online world provides a perfect platform for chronicling. We can share our own thoughts and, in expressing them, start the healing process.

The comfort of emotional distance seems a contradictory idea, but almost everyone I know who has gone through a deep loss shares one common dread – dealing with a never-ending stream of condolences over the coming weeks and months, triggered by each new physical encounter.

When you’ve been in the middle of the storm, you are typically a few days ahead of everyone else in dealing with your grief. Your mind has been occupied with nothing else as you sat vigil by the hospital bed. While the condolences are given with the best of intentions, you feel compelled to give a response. The problem is, each new expression of grief forces you to replay your loop of very painful memories. The amplitude of this pain increases when it’s a face-to-face encounter. Condolences that reach you through a more detached channel, such as online, can be dealt with at your discretion. You can wait until you marshal the emotional reserves necessary to respond. You can also respond to several people at a time. How many times have you heard this from a grieving loved one: “I just wish I could record my message and play it whenever I meet someone who wants to tell me how sorry they are for my loss”? It may seem callous, but no one wants to relive that pain over and over again. And let’s face it – almost no one knows the right thing to say at a moment like this.

By the end of last Friday, my online social connections had helped me ease a very deep pain. I hope I was able to return the favor for others who were dealing with their own grief. There are many things about technology that I treat with suspicion, but in this case, turning online seemed like the most natural thing in the world.

How Activation Works in an Absolute Value Market

As I covered last week, if I mention a brand to you – Nike, for instance – your brain immediately retrieves your own interpretation of that brand. What has happened, in a split second, is that the activation of that one node – let’s call it the Nike node – has triggered the activation of several related nodes in your brain, which are quickly assembled into a representation of the brand Nike. This is called spreading activation.
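As a rough illustration, spreading activation can be sketched as activation flowing outward through a weighted association network, decaying as it spreads. The node names and weights below are invented for the example, not taken from the article:

```python
# Toy semantic network: each node maps to associated nodes with
# association strengths between 0 and 1 (all values illustrative).
ASSOCIATIONS = {
    "Nike": {"swoosh": 0.9, "running": 0.8, "Just Do It": 0.7},
    "running": {"fitness": 0.6, "marathon": 0.5},
    "swoosh": {"logo": 0.4},
}

def spread(source, threshold=0.2):
    """Fully activate `source`, then propagate decaying activation
    to neighbors, stopping once activation falls below `threshold`."""
    activation = {source: 1.0}
    frontier = [source]
    while frontier:
        node = frontier.pop()
        for neighbor, weight in ASSOCIATIONS.get(node, {}).items():
            level = activation[node] * weight
            if level >= threshold and level > activation.get(neighbor, 0.0):
                activation[neighbor] = level
                frontier.append(neighbor)
    return activation

print(spread("Nike"))
```

Priming “Nike” lights up the directly linked nodes strongly and more distant ones (“fitness”, “logo”) weakly – a crude stand-in for how one mention assembles a whole brand impression in a split second.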

This activation is all internal. It’s where most of the efforts of advertising have been focused over the past several decades. Advertising’s job has been to build a positive network of associations so when that prime happens, you have a positive feeling towards the brand. Advertising has been focused on winning territory in this mental landscape.

Up to now, when making consumer decisions, we have been restricted to this internal landscape by the boundaries of our own rationality. Access to reliable, objective information about potential purchases was limited; getting it required more effort than we were willing to expend. So, for the vast majority of purchases, these internal representations were enough for us. They acted as a proxy for information that lay beyond our grasp.

But the world has changed. For almost any purchase category you can think of, there now exists reliable, objective information that is easy to access and filter. We are no longer restricted to internal brand activations (relative values based on our own past experiences and beliefs). With a few quick searches, we can access objective information, often based on the experiences of others. Itamar Simonson and Emanuel Rosen call these sources “absolute value” in their book of the same name. For more and more purchases, we turn to external sources because we can – the effort invested is more than compensated for by the value returned. In the process, the value of traditional branding is being eroded. This is truer for some product categories than others: the higher the risk or the level of interest, the more likely the prospect is to engage in an external activation. But across all product categories, there has been a significant shift from the internal to the external.

What this means for advertising is that we have to shift our focus from internal spreading activations to external ones. Now, when we retrieve an internal representation of a product or brand, it typically acts as a starting point, not the end point. That starting point is then modified – or discarded completely – depending on the external information we access. The first activated node is our own initial concept of the product, but the subsequent nodes are spread throughout the digitized information landscape.

In an internal spreading activation, the nodes activated and the connections between those nodes all form at a subconscious level. It’s beyond our control. But an external spreading activation is a different beast. It’s a deliberate information search conducted by the prospect. That means the nodes accessed and the connections between those nodes become critically important. Advertisers have to understand what those external activation maps look like. They have to be intimately aware of the information nodes accessed and the connections used to get to those nodes. They also have to be familiar with the prospect’s information consumption preferences. At first glance, this seems an impossibly complex landscape to navigate. But in practice, we all tend to follow remarkably similar paths when establishing our external activation networks. Search is often the first connector we use. The nodes accessed and the information within those nodes follow predictable patterns for most product categories.

For the advertiser, it comes down to a question of where to most profitably invest your efforts. Traditional advertising was built on the foundation of controlling the internal activation. This was the psychology behind classic treatises such as Ries and Trout’s “Positioning: The Battle for Your Mind.” And, in most cases, that battle was won by whoever could assemble the best collection of smoke and mirrors. Advertising messaging had very little to do with facts and everything to do with persuasion.

But as Simonson and Rosen point out, the relative position of a brand in a prospect’s mind is becoming less and less relevant to the eventual purchase decision. Many purchases are now determined by what happens in the external activation. Factual, reliable information and easy access to that information become critical. Smoke and mirrors are relegated to advertising “noise” in this scenario. The marketer with a deep understanding of how the prospect searches for and determines the “truth” about a potential product will be the one who wins. And traditional marketing is becoming less and less important to that prospect.