Chatting Up a Storm

I’ve been talking about a “meta-app” for ages. It looks like China may have found it in WeChat. We in the Western world have been monitoring the success of Tencent’s WeChat with growing interest. Who would have thought that a simple chat interface could be the killer app of the future?

Chat interfaces seem so old school. They appear to be clunky and inefficient. But the beauty of chat is that it’s completely flexible. As Wired.com’s David Pierce said, “You can, for all intents and purposes, live your entire life within WeChat.” That’s exactly the type of universal functionality you need to become a meta-app.

We’ve always envisioned having conversations with our computers, going all the way back to Star Trek and 2001: A Space Odyssey. But we didn’t think our conversations would be carried out in text bubbles on a handheld device. A Pew study found that texting is the single most common activity on a smartphone – 97% of us do it. So if messaging is the new UI, none of us have to learn anything new.

Graphic interfaces are necessarily tied to a particular task. The interface is designed for a specific intent. But messaging interfaces can adapt as intents change. They can quickly switch from social messaging to purchasing online to searching for an address to – well – you get the idea.

But where texting really shines is when it’s combined with artificially intelligent chatbots. A simple, universally understood interface that’s merged with powerful intelligent agents – either human or machine – allows the user to quickly request or do anything they wish. The functionality of intent-specific apps can be called on as required and easily introduced into the chat interface.

In effect, text messaging is doing exactly what Apple hoped Siri would do – become the universal interface to the digital world. Given that speaking would appear to be easier than texting, one has to wonder why Siri never really gained the traction that Apple hoped it would. I think this can be attributed to three reasons:

  • The difficulties of spoken interpretation still restrict Siri’s functionality. The success rate isn’t high enough to completely gain our confidence.
  • Siri’s primary use case is still when we need to keep our hands free. It’s not easy to switch to interactions where tactile input is required.
  • We look like idiots speaking to a machine.

All of these problems are avoided in a chat-based interface. We keep the flexibility of a conversational interface, but we also have all the power of our device at our fingertips. Plus, we don’t infringe on any social taboos.

Given the advantages, it’s small wonder that a number of players – primarily Facebook – are seriously plotting the commercialization of chat-based messaging services. There’s one other massive advantage that a stand-alone messaging interface has: the more activities we conduct through any particular interface, the greater the opportunity for personalization. I’ve always maintained that a truly useful “meta-app” should be able to anticipate our intent. That requires interactions across the broad spectrum of our activities. Previously, only operating systems offered this type of breadth, and because operating systems work “under the hood,” there were limits on the degree of personalization – and, through that, commercialization – that was possible. But an app we explicitly choose to use seems to be fair game for commercialization. It’s one of those unwritten social modality rules that advertisers are well advised to be aware of.

Between Messenger and WhatsApp, Facebook has a huge slice of the chat market – it just passed the 900 million user mark for Messenger alone. According to a recent study from GlobalWebIndex, over 36% of users have used Messenger in the past month, followed closely by WhatsApp at 34%, then Skype at 19%, Line at 10%, and Viber and Snapchat at 7% each. These numbers exclude the Chinese market, which WeChat dominates, but it remains to be seen whether WeChat can expand its base beyond Asia.

And leaked documents from earlier this year indicated that Messenger may soon introduce targeted ads. This hardly qualifies as a security breach; it’s more of a “Duh – ya think?” moment. The rumor mill around the commercialization of Messenger has been running full steam in 2016. If chatting is the UI juggernaut it seems to be, of course we will soon see ads there. WeChat is well down this road, and it seems to be working like a charm, if the recent Smart car promotion is any example.


A Possibly Premature Post-Mortem on Yahoo

Last Thursday, Yahoo held its annual shareholder meeting. At that meeting, CEO Marissa Mayer dealt the company a doubled-down kiss of death. She stated that the goals of the board are fully aligned with one clear priority: “delivering shareholder value to all of you.” She further mentioned, in discussing the divestiture of all that once was Yahoo, that she’s “been very heartened by the level of interest in Yahoo. It validates our business processes as well as our achievements to date.”

It’s fancier language, but it’s basically the same as the butcher saying, “This cow is no longer viable as a cow, so I’m looking at it as a collection of rump roasts, T-bones and hamburger. I’m hoping we have more of the former and less of the latter.”

I first encountered Yahoo in 1995, shortly after its brief life as Jerry and David’s Guide to the World Wide Web. I think it was probably still parked on Stanford’s servers at the time. Back then, the Internet was like the world’s biggest second-hand store – a huge collection that was 95% junk and 5% useful stuff, with no overarching order or organization. David Filo and Jerry Yang’s site was one of the very first to try to provide that order.

As an early search marketer in the run up to the dot-com bubble, you couldn’t ignore the Yahoo directory. The Yahooligans walked with typical Valley swagger. Hubris was never in short supply. They were the cocks of the walk and they knew it.

It was a much-humbled post-bubble Yahoo that I visited in 2004. They had gotten their search asses soundly kicked by Google, which was now powering their non-directory results. The age of the curated directory was gone, replaced by the scalability of algorithmic search.

As a culture, the Yahooligans were struggling with the mixed management signals coming from then-CEO Terry Semel and his team. Sunnyvale was clouded in a purple haze. The Yahooligans didn’t know who the hell they were or what they were supposed to do. Were they a tech company or an entertainment company? The answer, as it turned out, was neither.

I met with the remnants of the once-mighty search team to talk about user behaviors. I didn’t know it at the time, but Yahoo was gearing up to relaunch its search service. A much-vilified paid inclusion program would also debut. It was one of many ill-fated attempts to find the next “Big Thing.”

Marissa Mayer continues to put a brave face on it, but the Yahoo engine ran out of steam at least a decade and a half ago. What amazes me is how long the ride has been. There is a message here for tech-based companies.

If you dig down to the critical incubation period of any tech company, you find a recurring pattern. Some technologically mediated connection allows people to do something they were previously unable to do. This releases pent-up market demand. It’s like a thin sliver trying to poke through a water balloon. If successful, this released market demand creates an immediate and sizable audience for whoever introduced the innovation. Yahoo’s directory, Google’s PageRank, Facebook’s “Facemash,” Airbnb’s accommodation directory, Uber’s ridesharing app – they all share the same modus operandi: a technological step forward creates a new audience and market opportunity.

In hindsight, once you strip away all the hype, it’s amazing how tenuous and unimpressive these technological advances are. Luck and timing typically play a huge part. If the conditions are right, the sliver eases through the balloon’s membrane and for a time, there is a steady stream of opportunity.

The problem is that as easily as these markets form, they can just as easily evaporate. When the technological advantage passes to the next competitor, as it did when Yahoo gave way to Google, all that’s left is the audience. When you consider that Yahoo has been coasting on that audience for close to two decades, it’s rather amazing that Mayer still has any assets at all to sell.


What Would a “Time Well Spent” World Look Like?

I’m worried about us. And it’s not just because we seem bent on death by ultra-conservative parochialism and xenophobia. I’m worried because I believe we’re spending all our time doing the wrong things. We’re fiddling while Rome burns.

Technology is our new drug of choice and we’re hooked. We’re fascinated by the trivial. We’re dumping huge gobs of time down the drain playing virtual games, updating social statuses, clicking on clickbait and watching videos of epic wardrobe malfunctions. Humans should be better than this.

It’s okay to spend some time doing nothing. The brain needs some downtime. But something, somewhere has gone seriously wrong. We are now spending the majority of our lives doing useless things. TV used to be the biggest time suck, but in 2015, for the first time ever, the boob tube was overtaken by time spent with mobile apps. According to a survey conducted by Flurry, in the second quarter of 2015 we spent about 2.8 hours per day watching TV – and 3.3 hours on mobile apps. That’s a grand total of 6.1 hours per day, or one third of the time we spend awake. Yes, both things can happen at the same time, so there is undoubtedly overlap, but still – that’s a scary-assed statistic!

And it’s getting worse. In a previous Flurry poll, conducted in 2013, we spent a total of 298 minutes per day between TV and mobile apps, versus 366 minutes in 2015. That’s a 22.8% increase in just two years. We’re spending way more time doing nothing. And those totals don’t even include things like time spent in front of a gaming console. For kids, tack on an average of another 10 hours per week – and you can double that for hard-core male gamers. Our addiction to gaming has even led to death in extreme cases.
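
If you want to double-check that math, here’s the back-of-the-envelope arithmetic. A quick sketch: the hours-per-day figures are Flurry’s; the conversion to minutes and the percentage calculation are mine.

```python
# Sanity check on the Flurry screen-time numbers cited above.
tv_2015, apps_2015 = 2.8, 3.3            # hours per day, Q2 2015 (Flurry)
total_2015 = (tv_2015 + apps_2015) * 60  # 6.1 hours -> 366 minutes per day
total_2013 = 298                         # minutes per day, 2013 (Flurry)

increase = (total_2015 - total_2013) / total_2013
print(f"{total_2015:.0f} min/day in 2015, up {increase:.1%} from 2013")
# -> 366 min/day in 2015, up 22.8% from 2013
```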

Even in the wildest stretches of imagination, this can’t qualify as “time well spent.”

We’re treading on very dangerous and very thin ice here. And we no longer have history to learn from – this is the first time we’ve ever encountered this. Technology is now only one small degree of separation away from plugging directly into the pleasure center of our brains. And science has proven that a good shot of self-administered dopamine can supersede everything – water, food, sex. True, those experiments were conducted on rats – primarily because it’s considered unethical to replicate them fully with humans – but are you willing to risk the entire future of mankind on the bet that we’re really that much smarter than rats?

My fear is that technology is becoming a slightly more sophisticated lever we push to get that dopamine rush. And developers know exactly what they’re doing. They are making that lever as addictive as possible. They are pushing us towards the brink of death by technological lobotomization. They’re lulling us into a false sense of security by offering us the distraction of viral videos, infinitely scrolling social notification feeds and mobile game apps. It’s the intellectual equivalent of fast food – quite literally “brain candy.”

Here the hypocrisy of for-profit interests becomes evident. The corporate response typically rests on individual freedom of choice and the consumer’s ability to exercise willpower. “We are just giving them what they’re asking for,” touts the stereotypical PR flack. But if an entire industry, with reams of developers and researchers, is aiming to hook you on its addictive product, and your only defense is the same faulty neurological defense system that has already fallen victim to fast food, porn, big tobacco, the alcohol industry and the $350 billion illegal drug trade, where would you place your bets?

Technology should be our greatest achievement. It should make us better, not turn us into a bunch of lazy screen-addicted louts. And it certainly could be this way. What would it mean if technology helped us spend our time well? This is the hope behind the Time Well Spent Manifesto. Tristan Harris, a design ethicist and product philosopher at Google, is one of its co-directors. Here is an excerpt from the manifesto:

We believe in a new kind of design, that lets us connect without getting sucked in. And disconnect, without missing something important.

And we believe in a new kind of economy that’s built to help us spend time well, where products compete to help us live by our values.

I believe in the Manifesto. I believe we’re being willingly led down a scary and potentially ruinous path. Worst of all, I believe there is nothing we can – or will – do about it. Problems like this are seldom solved by foresight and good intentions. Things only change after we drive off the cliff.

The problem is that most of us never see it coming. And we never see it coming because we’re too busy watching a video of masturbating monkeys on YouTube.

Can Stories Make Us Better?

In writing this column, I often put ideas on the shelf for a while. Sometimes, world events conspire to make one of these shelved ideas suddenly relevant. This happened this past weekend.

The idea that caught my eye some months ago was an article that explored whether robots could learn morality by reading stories. On the face of it, it was mildly intriguing. But early Sunday morning as the heartbreaking news filtered to me from Orlando, a deeper connection emerged.

When we speak of unintended consequences – as we have before – the media amplification of acts of terror is one of them. The staggeringly sad fact is that shocking casualty numbers have their own media value. And that, said one analyst commenting on ways to deal with terrorism, is a new reality we have to come to terms with. When we in the media business make stories newsworthy, we assign worth not just for news consumers but also for newsmakers – those troubled individuals who have the motivation and the means to blow apart the daily news cycle.

This same analyst, when asked how we deal with terrorism, made the point that you can’t prevent lone acts of terror. The only answer is to use the same network of cultural connections we use to amplify catastrophic events to create an environment that dampens, rather than intensifies, violent impulses. We in the media and advertising industries have to use our considerable skill in setting cultural contexts to create an environment that reduces the odds of a violent outcome. And sadly, this is a game of odds. There are no absolute answers here – just a statistical lowering of the curve. Sometimes, despite your best efforts, the unimaginable still happens.

But how do we use the tools at our disposal to amplify morality? Here, perhaps the story I shelved some months ago can provide some clues.

In the study from Georgia Tech, Mark Riedl and Brent Harrison used stories as models of acceptable morality. For most of human history, popular culture included at least an element of moral code. We encoded the values we held most dear into our stories. It provided a base for acceptable behavior, either through positive reinforcement of commonly understood virtues (prudence, justice, temperance, courage, faith, hope and charity) or warnings about universal vices (lust, gluttony, greed, sloth, wrath, envy and pride). Sometimes these stories had religious foundations, sometimes they were secular morality fables but they all served the same purpose. They taught us what was acceptable behavior.

Stories were never originally intended to entertain. They were created to pass along knowledge and cultural wisdom. Entertainment came later, when we discovered that the more entertaining the story, the more effective it was at its primary purpose: education. And this is how the researchers used stories. Robots can’t be entertained, but they can be educated.
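
To make that concrete, here’s a toy sketch of what “educating” an agent with stories might look like. To be clear, this is my own loose illustration of the general idea, not Riedl and Harrison’s actual system – the event names and the scoring scheme are invented for the example.

```python
# Toy illustration: score an agent's plan by how closely it tracks event
# sequences distilled from stories. (Invented example, not the Georgia
# Tech system itself.)
from difflib import SequenceMatcher

# Hypothetical "plots" distilled from many stories about buying medicine
story_plots = [
    ["enter_store", "wait_in_line", "pay_cashier", "take_medicine", "leave"],
    ["enter_store", "greet_clerk", "pay_cashier", "take_medicine", "leave"],
]

def social_reward(plan):
    """Reward a plan by its best alignment with any story-derived sequence."""
    return max(SequenceMatcher(None, plan, plot).ratio() for plot in story_plots)

polite_plan = ["enter_store", "wait_in_line", "pay_cashier", "take_medicine", "leave"]
rude_plan = ["enter_store", "take_medicine", "leave"]  # skips paying

print(social_reward(polite_plan))  # 1.0  - matches a story exactly
print(social_reward(rude_plan))    # 0.75 - penalized for deviating from the norm
```

An agent trained to prefer high-scoring plans would, in effect, absorb whatever moral code is embedded in the stories it’s fed – which is exactly why the stories we choose matter.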

At some point in the last century, we began to prize the entertainment value of stories over their educational value and, in doing so, rotated our moral compass 180 degrees. If you look at what is most likely to titillate, sin almost always trumps sainthood. Review that list of virtues and vices and you’ll see that the stories of our current popular culture focus on vice – that list could be the programming handbook for any Hollywood producer. I don’t intend this as a sermon – I enjoy Game of Thrones as much as the next person. I simply state it as a fact. Our popular culture – and the amplification that comes from it – is focused almost exclusively on the worst aspects of human nature. If robots were receiving their behavioral instruction through these stories, they would be programmed to be psychopathic moral degenerates.

Most of us can absorb this continual stream of anti-social programming without being affected by it. We still know what is right and what is wrong. But in a world where it’s the “black swan” outliers that grab the news headlines, we have to think about the consequences that reach beyond the mainstream. When we abandon the moral purpose of stories and focus on their entertainment value, are we also abandoning a commonly understood value landscape?

If you’re looking for absolute answers here, you won’t find them. That’s just not the world we live in. And am I naïve to say the stories we choose to tell may have an influence on isolated violent acts such as the one in Orlando? Perhaps. Despite all our best intentions, Omar Mateen might still have gone horribly offside.

But all things and all people are, to some extent, products of their environment. And because we in media and advertising are storytellers, we set that cultural environment. That’s our job. Because of this, I believe we have a moral obligation. We have to start paying more attention to the stories we tell.


Why Marketers Love Malcolm Gladwell … and Why They Shouldn’t

Marketers love Malcolm Gladwell. They love his pithy, reductionist approach to popular science – his tendency to sacrifice verity for the sake of a good “Just-so” story. And in doing this, what is Malcolm Gladwell but a marketer at heart? No wonder our industry is ga-ga over him. We love anyone who can oversimplify complexity down to the point where it can be appropriated as yet another marketing “angle”.

Take the entire influencer advertising business, for instance. Earlier this year, I saw an article saying more and more brands are expanding their influencer marketing programs. We are desperately searching for that holy nexus where social media and those super-connected “mavens” meet. The idea of influencer marketing has been around for a while, but it really gained steam with the release of Gladwell’s “The Tipping Point” in 2000 – and that head of steam has been building ever since.

As others have pointed out, Gladwell has made a habit of taking one narrow perspective that promises to “play well” with the masses, supporting it with just enough science to make it seem plausible and then enshrining it as a “Law.”

Take “The Law of the Few”, for instance, from The Tipping Point: “The success of any kind of social epidemic is heavily dependent on the involvement of people with a particular and rare set of social gifts.” You could literally hear the millions of ears attached to marketing heads “perk up” when they heard this. “All we have to do,” the reasoning went, “is reach these people, plant a favorable opinion of our product and give them the tools to spread the word. Then we just sit back and wait for the inevitable epidemic to sweep us to new heights of profitability.”

Certainly commercial viral cascades do happen. They happen all the time. And, in hindsight, if you look long and hard enough, you’ll probably find what appears to be a “maven” near ground-zero. From this perspective, Gladwell’s “Law of the Few” seems to hold water. But that’s exactly the type of seductive reasoning that makes “Just So” stories so misleading. You mistakenly believe that because it happened once, you can predict when it’s going to happen again. Gladwell’s indiscriminate use of the term “Law” contributes to this common deceit. A law is something that is universally applicable and constant. When a law governs something, it plays out the same way, every time. And this is certainly not the case in social epidemics.


If Malcolm Gladwell’s books have become marketing and pop-culture bibles, the same, sadly, cannot be said for Duncan Watts’ books. I’m guessing almost everyone reading this column has heard of Malcolm Gladwell. I further guess that almost none of you have heard of Duncan Watts. And that’s a shame. But it’s completely understandable.

Duncan Watts describes his work as determining the “role that network structure plays in determining or constraining system behavior, focusing on a few broad problem areas in social science such as information contagion, financial risk management, and organizational design.”

You started nodding off halfway through that sentence, didn’t you?

As Watts shows in his books, “Firms spent great effort trying to find ‘connectors’ and ‘mavens’ and to buy the influence of the biggest influencers, even though there was never causal evidence that this would work.” But the work required to get to this point is not trivial. While he certainly aims at a broad audience, Watts does not read like Gladwell. His answers are not self-evident. There is no pithy bon mot that causes our neural tumblers to click satisfyingly into place. Watts’ explanations are complex, counter-intuitive, occasionally ambiguous and often non-conclusive – just like the world around us. As he explains in his book “Everything Is Obvious: *Once You Know the Answer,” it’s easy to look backwards and find causality. But it’s not always right.

Marketers love simplicity. We love laws. We love predictability. That’s why we love Gladwell. But in following this path of least resistance, we’re straying further and further from the real world.

The World in Bite-Sized Pieces

It’s hard to see the big picture when your perspective is limited to 160 characters.

Or when we keep getting distracted from said big picture by that other picture that always seems to be lurking over there on the right side of our screen – the one of Kate Upton tilting forward wearing a wet bikini.

Two things are at work here obscuring our view of the whole: our preoccupation with the attention economy, and a frantic scramble for a new revenue model. The net result is that we’re being spoon-fed stuff that’s way too easy to digest. We’re being pandered to in the worst possible way. The world is becoming a staircase of really small steps, each of which has a bright shiny object on it urging us to scale just a little bit higher. And we, like idiots, stumble our way up the stairs.

This cannot be good for us. We become better people when we have to chew through some gristle. Or when we’re forced to eat our broccoli. The world should not be the cognitive equivalent of Cap’n Crunch cereal.

It’s here where human nature gets the best of us. We’re wired to prefer scintillation to substance. Our intellectual laziness and willingness to follow whatever herd seems to be heading in our direction have conspired to create a world where Donald Trump can be a viable candidate for president of the United States – where our attention span is measured in fractions of a second – where the content we consume is dictated by a popularity contest.

Our news is increasingly coming to us in smaller and smaller chunks. The exploding complexity of our world, which begs to be understood in depth, is increasingly parceled out in pre-digested little tidbits pushed to our smartphones. We spend scant seconds scanning headlines to stay “up to date.” And the stories we see are usually determined by an algorithm trying to understand where our interests lie.

This algorithmic curation creates both “filter” and “agreement” bubbles. The homogeneity of our social networks leads to a homogeneity of content. If we spend all our time with others who think like us, we end up with an intellectually polarized society in which the factions sitting at opposite ends of any given spectrum are openly hostile to each other. The gaps between our respective ideas of what is right are simply too big, and no one has any interest in building a bridge across them. We’re losing our ideological interface areas – those opportunities to encounter ideas that force us to rethink and reframe, broadening our worldview in the process. We sacrifice empathy and look for news that “sounds right” to us, no matter what “right” might be.

This is a crying shame, because there is more thought-provoking, intellectually rich content being produced than ever before. But there is also more sugar-coated crap whose sole purpose is to get us to click.

I’ve often talked about the elimination of friction. Usually, I think this is a good thing. Bob Garfield, in a column a few months ago, called for a whoop-ass can of WD-40 to remove all transactional friction. But if we make things too easy to access, will we also remove those cognitive barriers that force us to slow down and think, giving our rationality a chance to catch up with impulse? And it’s not just on the consumption side where a little friction might bring benefits. The upside of production friction was that it slowed down the stream of content just long enough to introduce an editorial voice. Someone, somewhere had to give some thought to what might actually be good for us.

In other words, it was someone’s job to make sure we ate our vegetables.

The “Get It” Gap

Netflix and Chill…

It sounds innocent enough – a night of curling up on the couch in a Snuggie with no plan other than some Orange is the New Black and Häagen-Dazs binging. And that’s how it started out. Innocent. Then those damned kids got hold of it, and its present meaning ended up in a place quite removed from its origin.

Unfortunately, my wife didn’t know that when she used the phrase in a Facebook post for her gift basket company. That is, until one of our daughters walked in the door and, before her bags hit the floor, yelled from the entryway, “Mom, you have to change your post – right now!”

“What post?”

“The Netflix and Chill one…”

“Why?”

“Unless your basket contains lubricants and condoms, I don’t think your post means what you think it means.”

But how is a middle-aged parent to know? The subversion of this particular phrase has just happened in the last year. It takes me the better part of a year to remember that it’s no longer 2015 when I sign a check. There’s no way a middle-aged brain could possibly keep up with the ongoing bastardization of the English language. The threshold for “getting it” keeps getting higher, driven by the acceleration of memes through social media.

Parents were never intended to “get it.” That’s the whole point. Kids want to speak their own language and have their own cultural reference points. We were no different when we were kids. Neither were our parents.

And kids always “get it.” It’s like a rite of passage. Memes propagate through social networks and when you’re 18, your social network is the most important thing in your life. New memes spread like wildfire and part of belonging to this culture depends on “getting it.” The faster things spread, the more likely it is that you can increase the “Get It” gap between you and your parents. It’s a control thing. If the parents call all the shots about everything in your life, at least you can have this one thing to call your own.

As you start to gain control, the Gap becomes less important. Our daughters are becoming adults, so they now act as “Get It” translators and, in cases like the one above, Urban Slang enforcement officers. When we transgress, they attempt to bridge the gap.

As you get older, the “stuff” of life gets in the way of continuing to “get it.” Buying a house, getting a job and changing diapers leave little time to Snapchat about Scumbag Steve or tweet “Hell yea finna get crunk!” to your homie gee funk-a-nator on a Friday night.

The danger comes when parents unilaterally try to cross over the gap and attempt to tap into the zeitgeist of urban slang. This is always doomed to failure. There are no exceptions. It’s like tiptoeing through a minefield with snowshoes on.

At the very least, run it past your kids before you post anything. Better yet – look it up in Urban Dictionary. Kids can’t be trusted.

“Hotchkiss – Ouuuttt!” (Mic drop here)

Justine Sacco, Twitter and the End of Irony

Justine Sacco is in the news again. Not that she wants to be. She’d like nothing more than to fade from the spotlight. As she recently said in an interview, “Someday you’ll Google me and my LinkedIn will be the first thing that pops up.” But today, more than 15 months after she launched the tweet that just won’t go away, she’s still the poster child for career ruination via social media. The recent revival of Justine’s story comes ahead of the release of Jon Ronson’s new book, “So You’ve Been Publicly Shamed.”

If you’ve never heard of Justine Sacco, I’ll recap quickly. Just before boarding an 11-hour flight to South Africa, in what can only be called a monumental meltdown of discretion, she tweeted this: “Going to Africa. Hope I don’t get AIDS. Just kidding. I’m white!” This touched off a social media feeding frenzy baying for Sacco’s blood. The world waited for her to land (#HasJustineLandedYet became the top trending hashtag) and meet her righteous retribution.

Oh, did I mention that Justine was IAC’s Corporate Head of Communications? Yeah, I know. WTF – right?

But the point here is not whether Justine Sacco was wrong. I think even she’ll admit that it was a momentarily brain-dead blurb of 64-character stupidity. The point here is whether Sacco is a racist, cold-hearted bitch. And to that, the answer is no. Justine meant the comment to be ironic – a satirical poke at white privilege and comfort. She never intended for it to be taken seriously. And that was where the wheels came off.

Satire has been around for a long time. The Greeks and Romans invented it, but it was the British who perfected it. The satirical essay became an art form in the hands of Alexander Pope, John Gay and the greatest of the satirists, Jonathan Swift. Through them, irony was honed into a razor-sharp scythe for social change. Swift’s A Modest Proposal is perhaps the greatest satirical piece ever written. In it, he proposed a solution for the starving beggars of Ireland: they should sell their children, of which there was an abundant supply, to the upper classes as a food source.

Now, did the pamphlet-reading public of 1729 call for Swift’s head? Did they think he was serious when he wrote:

“A young healthy child well nursed, is, at a year old, a most delicious nourishing and wholesome food, whether stewed, roasted, baked, or boiled; and I make no doubt that it will equally serve in a fricassee, or a ragout.”

Well, perhaps a few missed the irony, but for the vast majority of Swift’s audience, the pamphlet helped make his reputation rather than ruin it. There was no #HasSwiftReturnedFromLilliputYet trend on Twitter. People got it.

There is no way Sacco’s work should be compared to Swift’s in terms of literary merit, but there are some other fundamental differences we should pay attention to.

First of all, Swift was known as a satirist. Satire was an established literary form in the Age of Enlightenment. The context was in place for the audience. They were able to manage the flip of perspective required to understand the irony. But before December 20, 2013, we had never heard of Justine Sacco. The tweet was stripped of any context. There was nothing to tell us that she wasn’t being serious. Twitter fragments our view of the world into tiny missives that float unconnected and unsupported. Twitter, by its very nature, forces us to take its messages out of context. This is not the place to hope for a nuanced understanding.

Also, Sacco’s entire tweet totaled 64 characters. Swift’s essay comes in at 3,405 words, or 19,373 characters – about 300 times the literary volume of Sacco’s tweet. Swift had ample opportunity to expound on his irony and make sure readers got his point. Even Swift’s title, at a hefty 169 characters, couldn’t have squeezed into the limits of a tweet. Tweets beg to be taken at face value, because there’s no room to aim for anything else.

And that brings us to the biggest difference here – the death of thoughtfulness. You can’t get irony or satire unless you’re thoughtful. You have to spend some time thinking about what you’ve read. To use Daniel Kahneman’s terminology, you have to use System 2, which specializes in slow thinking. Sacco’s tweet takes about two seconds to read, from beginning to end. There is no time for thought there. But there is time for a visceral reaction. That’s all System 1, and System 1 doesn’t understand irony.

At the average reading speed of 300 words a minute, you’d have to invest 11.3 minutes to get through Swift’s essay. That’s plenty of time for System 2 to digest what it has read and look for meaning beyond face value. You have to read it in a thoughtful manner. But it’s not only in our reading that we don’t have to be thoughtful. We can also abandon thoughtfulness in our response. We can retweet in a matter of seconds and add our own invectives. This starts a chain reaction of indignation that ignites a social media brush fire. Careful consideration is not part of the equation.
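
For the record, here’s the arithmetic behind that comparison. The word and character counts are as cited above; the 300-words-per-minute figure is the assumed reading speed.

```python
# Sanity check on the Swift-vs-Sacco numbers cited above.
swift_words, swift_chars = 3405, 19373
sacco_chars = 64
reading_speed_wpm = 300  # assumed average reading speed

print(swift_chars / sacco_chars)        # ~302.7 - roughly 300x the volume
print(swift_words / reading_speed_wpm)  # ~11.35 - minutes to read the essay
```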

Sacco’s sin wasn’t that she was being racist. Her sin was trying to be ironic in a medium that couldn’t support it. By her own admission, she had been experimenting with Twitter to see if edgy tweets got retweeted more often. The answer, as it turned out, was yes – but the experiment damned near killed her. As a communications expert, she should have known better. Justine Sacco painfully discovered that in the split-second, sound-bite world of social media, thoughtful reading is extinct. And with it, irony and satire have died as well.

Mourning Becomes Electric

Last Friday was a sad day. A very dear and lifelong friend of mine, my Uncle Al, passed away. And so I did what I’ve done before on these occasions: I expressed my feelings by writing about it. The post went live on my blog around 10:30 in the morning. By mid-afternoon, it had been shared and posted through Facebook, Twitter and many other online channels. Many were kind enough to send comments. The family, in the midst of their grief, forwarded my post to their family and friends. Soon, there was an extended network of mourning seeking to heal each other, all through channels that didn’t exist just a few years ago. Mourning had moved online.

As you probably know, I’m fascinated by how we express our innate human needs through digital technologies. And death, together with birth, is the most universal of human experiences. It was inevitable that we would use online channels to grieve. So I, as I always do, asked the question – why?

First of all – why do we mourn? Well, we mourn because we are social animals. We are probably the most social of animals, so we grieve to a corresponding degree. We miss the departed terribly. It is natural to try to fill the hole a death tears inside us by reaching out to others who may share the same grief. James R. Averill believed we mourn communally because it cements the social bonds that make it more likely we will survive as a species. When it comes to dealing with death, misery loves company.

Secondly, why do we grieve online? Here, I think it has something to do with Granovetter’s weak ties. Death is one of those life events where we reach beyond the strong ties that define our day-to-day social existence. Certainly we seek comfort from those closest to us, but a death also triggers the formation of a virtual community – defined and united by its grieving for the one who has passed away. Our digital networks allow us to eliminate the six degrees of separation in one fell swoop. We can share our grief almost instantaneously and simultaneously with family, friends, acquaintances and even people we have never met.

There are two other aspects of grief that I believe lend themselves well to online channels: the need to chronicle and the comfort of emotional distance.

Part of the healing process is sharing memories of the departed loved one. And for those like myself, just writing about our feelings helps overcome the pain. Online channels provide a perfect platform for chronicling. We can share our own thoughts and, in expressing them, start the healing process.

The comfort of emotional distance seems a contradictory idea, but almost everyone I know who has gone through a deep loss shares one common dread: dealing with a never-ending stream of condolences over the coming weeks and months, triggered by each new physical encounter.

When you’ve been in the middle of the storm, you are typically a few days ahead of everyone else in dealing with your grief. Your mind has been occupied with nothing else as you sat vigil by the hospital bed. While the condolences are given with the best of intentions, you feel compelled to respond, and each new expression of grief forces you to replay a loop of very painful memories. The amplitude of this pain increases when it’s a face-to-face encounter. Condolences that reach you through a more detached channel, such as online, can be dealt with at your discretion. You can wait until you marshal the emotional reserves necessary to respond. You can also respond to several people at a time. How many times have you heard this from a grieving loved one: “I just wish I could record my message and play it whenever I meet someone who wants to tell me how sorry they are for my loss”? It may seem callous, but no one wants to relive that pain over and over again. And let’s face it – almost no one knows the right thing to say at a moment like this.

By the end of last Friday, my online social connections had helped me ease a very deep pain. I hope I was able to return the favor for others who were dealing with their own grief. There are many things about technology that I treat with suspicion, but in this case, turning online seemed like the most natural thing in the world.

Facebook at Work – Stroke of Genius or Act of Desperation?

So, with the launch of Facebook at Work, Facebook wants to become your professional networking platform of choice, does it? Well, speaking as a sample of one, I don’t think so. And it all comes down to one key concept that I’ve talked about in the past but that, for some reason, Facebook doesn’t seem to get: social modality.

Social modality is not a tough concept to understand. I’m one person in my office, another on the couch. The things that interest me in the office have little overlap with the things that interest me when I’m “sofa-tose” (nodding into a state of minimal consciousness on overstuffed furniture). But it’s not just about interests. It’s about context. I think differently. I act differently. I react differently. And I want to keep those two states as separate as possible.

Facebook seems to understand the need for separation. They’re building out Facebook at Work as a separate entity. But it’s still Facebook, and when I’ve got my business persona on, I don’t even think of Facebook. Neither, apparently, does anyone else. In 2010, BranchOut tried to build a professional network layer on top of Facebook. Last summer, it changed its business model. The reason? A lack of users. When you think of work, you just don’t think of Facebook. In fact, there’s almost an instinctual revulsion to the idea. Mixing Facebook and work is a cultural taboo.

When we look at the technologies we use to mediate our social activities, different rules apply. It’s not just about features or functionality – it’s about what instinctively feels right. Facebook is trying to create a monolithic platform for social connecting, and that doesn’t seem to be where we’re heading. Rather than consolidating, our social activity is splintering across different tools and platforms. One reason is functionality. The other is that, socially, we’re much too complex to fit into any one particular technological mold. I wrote a few months ago about the maturity continuum of social media. The final stage was to become a platform, which is exactly what Facebook is trying to do. But perhaps becoming a social media platform – at least in the sense that Facebook is attempting – isn’t possible. It could be that our social media personalities are too fractured to fit comfortably in any single destination.

Facebook’s revenue model depends on advertising, which depends on eyeballs. It’s a real estate play. Maybe to be successful, social has to be less about location and more about functionality. In other words, to become a social media platform, you have to be a utility, not a destination. Facebook seems to be trying to do both. According to an article in the Financial Times (registration required), Facebook at Work will offer functionality through chat, contact management and document collaboration, but it will do so on a site that “looks very much like Facebook,” including, one assumes, ads served by Facebook. By trying to attract eyeballs to drive revenue, Facebook won’t be able to avoid mixing modalities, and therein lies the problem. I suspect Facebook at Work will join an ever-growing string of Facebook failures.

LinkedIn isn’t perfect, but it has definitely established itself as the B-to-B platform of choice. It fits our sensibilities of what a professional social networking tool should be. And it doesn’t suffer from Facebook’s overly ambitious hubris. It hasn’t launched “LinkedIn at Home” in an attempt to become the social network for our non-work lives. It knows what it is. We know what it is. Our social modality isn’t conflicted. Facebook is another matter. It wants to be all things social to all people. I suppose from a revenue point of view you can’t blame them, but there’s a reason I don’t invite my co-workers to my family reunion – or vice versa.

Someday Facebook will learn that lesson. I suspect it will probably be the hard way.